{"training_set": [["The investigated new microwave plasma torch is based on an axially symmetric resonator. Microwaves of a frequency of 2.45 GHz are resonantly fed into this cavity resulting in a sufficiently high electric field to ignite plasma without any additional igniters as well as to maintain stable plasma operation. Optical emission spectroscopy was carried out to characterize a humid air plasma. OH\u2010bands were used to determine the gas rotational temperature Trot while the electron temperature was estimated by a Boltzmann plot of oxygen lines. Maximum temperatures of Trot of about 3600 K and electron temperatures of 5800 K could be measured. The electron density ne was estimated to ne \u2248 3 \u00b7 1020m\u20133 by using Saha's equation. Parametric studies in dependence of the gas flow and the supplied microwave power revealed that the maximum temperatures are independent of these parameters. However, the volume of the plasma increases with increasing microwave power and with a decrease of the gas flow. Considerations using collision frequencies, energy transfer times and power coupling provide an explanation of the observed phenomena: The optimal microwave heating is reached for electron\u2010neutral collision frequencies \u03bden being near to the angular frequency of the wave \u03c9 (\u00a9 2012 WILEY\u2010VCH Verlag GmbH & Co. KGaA, Weinheim)", "which Excitation_type ?", "GHz", 122.0, 125.0], ["A two-dimensional model of microwave-induced plasma (field frequency 2.45 GHz) in argon at atmospheric pressure is presented. The model describes in a self-consistent manner the gas flow and heat transfer, the in-coupling of the microwave energy into the plasma, and the reaction kinetics relevant to high-pressure argon plasma including the contribution of molecular ion species. The model provides the gas and electron temperature distributions, the electron, ion, and excited state number densities, and the power deposited into the plasma for given gas flow rate and temperature at the inlet, and input power of the incoming TEM microwave. For flow rate and absorbed microwave power typical for analytical applications (200-400 ml/min and 20 W), the plasma is far from thermodynamic equilibrium. The gas temperature reaches values above 2000 K in the plasma region, while the electron temperature is about 1 eV. The electron density reaches a maximum value of about 4 \u00d7 10(21) m(-3). The balance of the charged particles is essentially controlled by the kinetics of the molecular ions. For temperatures above 1200 K, quasineutrality of the plasma is provided by the atomic ions, and below 1200 K the molecular ion density exceeds the atomic ion density and a contraction of the discharge is observed. Comparison with experimental data is presented which demonstrates good quantitative and qualitative agreement.", "which Excitation_type ?", "GHz", 74.0, 77.0], ["The Integrated Microwave Atmospheric Plasma Source (IMAPlaS) operating with a microwave resonator at 2.45 GHz driven by a solid-state transistor oscillator generates a core plasma of high temperature (T > 1000 K), therefore producing reactive species such as NO very effectively. The effluent of the plasma source is much colder, which enables direct treatment of thermolabile materials or even living tissue. In this study the source was operated with argon, helium and nitrogen with gas flow rates between 0.3 and 1.0 slm. 
Depending on working gas and distance, axial gas temperatures between 30 and 250 \u00b0C were determined in front of the nozzle. Reactive species were identified by emission spectroscopy in the spectral range from vacuum ultraviolet to near infrared. The irradiance in the ultraviolet range was also measured. Using B. atrophaeus spores to test antimicrobial efficiency, we determined log10-reduction rates of up to a factor of 4.", "which Excitation_type ?", "GHz", 106.0, 109.0], ["An extensive electrical study was performed on a coaxial geometry atmospheric pressure plasma jet source in helium, driven by 30 kHz sine voltage. Two modes of operation were observed, a highly reproducible low-power mode that features the emission of one plasma bullet per voltage period and an erratic high-power mode in which micro-discharges appear around the grounded electrode. The minimum of power transfer efficiency corresponds to the transition between the two modes. Effective capacitance was identified as a varying property influenced by the discharge and the dissipated power. The charge carried by plasma bullets was found to be a small fraction of charge produced in the source irrespective of input power and configuration of the grounded electrode. The biggest part of the produced charge stays localized in the plasma source and below the grounded electrode, in the range 1.2\u20133.3 nC for ground length of 3\u20138 mm.", "which Excitation_type ?", "kHz", 129.0, 132.0], ["Providing easy to use methods for visual analysis of Linked Data is often hindered by the complexity of semantic technologies. On the other hand, semantic information inherent to Linked Data provides opportunities to support the user in interactively analysing the data. This paper provides a demonstration of an interactive, Web-based visualisation tool, the \"Vis Wizard\", which makes use of semantics to simplify the process of setting up visualisations, transforming the data and, most importantly, interactively analysing multiple datasets using brushing and linking methods.", "which implementation ?", "Vis Wizard", 361.0, 371.0], ["This paper describes the multi-document text summarization system NeATS. Using a simple algorithm, NeATS was among the top two performers of the DUC-01 evaluation.", "which implementation ?", "NeATS", 66.0, 71.0], ["In the past decade, much effort has been put into the visual representation of ontologies. However, present visualization strategies are not equipped to handle complex ontologies with many relations, leading to visual clutter and inefficient use of space. In this paper, we propose GLOW, a method for ontology visualization based on Hierarchical Edge Bundles. Hierarchical Edge Bundles is a new visually attractive technique for displaying relations in hierarchical data, such as concept structures formed by 'subclass-of' and 'type-of' relations. We have developed a visualization library based on OWL API, as well as a plug-in for Prot\u00e9g\u00e9, a well-known ontology editor. The displayed adjacency relations can be selected from an ontology using a set of common configurations, allowing for intuitive discovery of information. Our evaluation demonstrates that the GLOW visualization provides better visual clarity, and displays relations and complex ontologies better than the existing Prot\u00e9g\u00e9 visualization plug-in Jambalaya.", "which implementation ?", "GLOW", 282.0, 286.0], ["We present ERSS 2005, our entry to this year\u2019s DUC competition. 
With only slight modifications from last year\u2019s version to accommodate the more complex context information present in DUC 2005, we achieved a similar performance to last year\u2019s entry, ranking roughly in the upper third when examining the ROUGE-1 and Basic Element score. We also participated in the additional manual evaluation based on the new Pyramid method and performed further evaluations based on the Basic Elements method and the automatic generation of Pyramids. Interestingly, the ranking of our system differs greatly between the different measures; we attempt to analyse this effect based on correlations between the different results using the Spearman coefficient.", "which implementation ?", "ERSS 2005", 11.0, 20.0], ["In this paper, we present a novel exploratory visual analytic system called TIARA (Text Insight via Automated Responsive Analytics), which combines text analytics and interactive visualization to help users explore and analyze large collections of text. Given a collection of documents, TIARA first uses topic analysis techniques to summarize the documents into a set of topics, each of which is represented by a set of keywords. In addition to extracting topics, TIARA derives time-sensitive keywords to depict the content evolution of each topic over time. To help users understand the topic-based summarization results, TIARA employs several interactive text visualization techniques to explain the summarization results and seamlessly link such results to the original text. We have applied TIARA to several real-world applications, including email summarization and patient record analysis. To measure the effectiveness of TIARA, we have conducted several experiments. Our experimental results and initial user feedback suggest that TIARA is effective in aiding users in their exploratory text analytic tasks.", "which implementation ?", "TIARA", 76.0, 81.0], ["Gephi is an open source software for graph and network analysis. It uses a 3D render engine to display large networks in real-time and to speed up the exploration. A flexible and multi-task architecture brings new possibilities to work with complex data sets and produce valuable visual results. We present several key features of Gephi in the context of interactive exploration and interpretation of networks. It provides easy and broad access to network data and allows for spatializing, filtering, navigating, manipulating and clustering. Finally, by presenting dynamic features of Gephi, we highlight key aspects of dynamic network visualization.", "which implementation ?", "Gephi", 0.0, 5.0], ["This paper presents a method for the reuse of existing knowledge in UML software models. Our purpose is being able to adapt fragments of existing UML class diagrams in order to build domain ontologies, represented in OWL-DL, reducing the required amount of time and resources to create one from scratch. Our method is supported by a CASE tool, VisualWADE, and a developed plug-in, used for the management of ontologies and the generation of semantically tagged Web applications. In order to analyse the designed transformations between knowledge representation formalisms, UML and OWL, we have chosen a use case in the pharmacotherapeutic domain. 
Then, we discuss some of the most relevant aspects of the proposal and, finally, conclusions are obtained and future work briefly described.", "which implementation ?", "Visualwade", 344.0, 354.0], ["In this paper, we present the POMELO system developed for participating in the task 2 of the QALD-4 challenge. For translating natural language questions in SPARQL queries we exploit Natural Language Processing methods, semantic resources and RDF triples description. We designed a four-step method which pre-processes the question, performs an abstraction of the question, then builds a representation of the SPARQL query and finally generates the query. The system was ranked second out of three participating systems. It achieves good performance with 0.85 F-measure on the set of 25 test questions.", "which implementation ?", "POMELO", 30.0, 36.0], ["The success of Open Data initiatives has increased the amount of data available on the Web. Unfortunately, most of this data is only available in raw tabular form, what makes analysis and reuse quite difficult for non-experts. Linked Data principles allow for a more sophisticated approach by making explicit both the structure and semantics of the data. However, from the end-user viewpoint, they continue to be monolithic files completely opaque or difficult to explore by making tedious semantic queries. Our objective is to facilitate the user to grasp what kind of entities are in the dataset, how they are interrelated, which are their main properties and values, etc. Rhizomer is a tool for data publishing whose interface provides a set of components borrowed from Information Architecture (IA) that facilitate awareness of the dataset at hand. It automatically generates navigation menus and facets based on the kinds of things in the dataset and how they are described through metadata properties and values. Moreover, motivated by recent tests with end-users, it also provides the possibility to pivot among the faceted views created for each class of resources in the dataset.", "which implementation ?", "Rhizomer", 675.0, 683.0], ["We present and evaluate SumUM, a text summarization system that takes a raw technical text as input and produces an indicative informative summary. The indicative part of the summary identifies the topics of the document, and the informative part elaborates on some of these topics according to the reader's interest. SumUM motivates the topics, describes entities, and defines concepts. It is a first step for exploring the issue of dynamic summarization. This is accomplished through a process of shallow syntactic and semantic analysis, concept identification, and text regeneration. Our method was developed through the study of a corpus of abstracts written by professional abstractors. Relying on human judgment, we have evaluated indicativeness, informativeness, and text acceptability of the automatic summaries. The results thus far indicate good performance when compared with other summarization technologies.", "which implementation ?", "SumUM", 24.0, 29.0], ["This article presents GOFAISUM, a topic-answering and summarizing system developed for the main task of DUC 2007 by the Universit\u00e9 de Montr\u00e9al and the Universit\u00e9 de Gen\u00e8ve. We chose to use an all-symbolic approach, the only source of linguistic knowledge being FIPS, a multilingual syntactic parser. We further attempted to innovate by using XML and XSLT to both represent FIPS\u2019s analysis trees and to manipulate them to create summaries. 
We relied on tf\u00b7idf-like scores to ensure relevance to the topic and on syntactic pruning to achieve conciseness and grammaticality. NIST evaluation metrics show that our system performs well with respect to summary responsiveness and linguistic quality, but could be improved to reduce redundancy.", "which implementation ?", "GOFAIsum", 22.0, 30.0], ["We present the results of Michigan\u2019s participation in DUC 2004. Our system, MEAD, ranked as one of the top systems in four of the five tasks. We introduce our new feature, LexPageRank, a new measure of sentence centrality inspired by the prestige concept in social networks. LexPageRank gave promising results in multi-document summarization. Our approach for Task 5, biographical summarization, was simplistic, yet successful. We used regular expression matching to boost up the scores of the sentences that are likely to contain biographical information patterns.", "which implementation ?", "MEAD", 76.0, 80.0], ["This paper presents a novel multi-document summarization approach based on personalized pagerank (PPRSum). In this algorithm, we uniformly integrate various kinds of information in the corpus. At first, we train a salience model of sentence global features based on Naive Bayes Model. Secondly, we generate a relevance model for each corpus utilizing the query of it. Then, we compute the personalized prior probability for each sentence in the corpus utilizing the salience model and the relevance model both. With the help of personalized prior probability, a Personalized PageRank ranking process is performed depending on the relationships among all sentences in the corpus. Additionally, the redundancy penalty is imposed on each sentence. The summary is produced by choosing the sentences with both high query-focused information richness and high information novelty. Experiments on DUC2007 are performed and the ROUGE evaluation results show that PPRSum ranks between the 1st and the 2nd systems on DUC2007 main task.", "which implementation ?", "PPRSum", 98.0, 104.0], ["Intui3 is one of the participating systems at the fourth evaluation campaign on multilingual question answering over linked data, QALD4. The system accepts as input a question formulated in natural language (in English), and uses syntactic and semantic information to construct its interpretation with respect to a given database of RDF triples (in this case DBpedia 3.9). The interpretation is mapped to the corresponding SPARQL query, which is then run against a SPARQL endpoint to retrieve the answers to the initial question. Intui3 competed in the challenge called Task 1: Multilingual question answering over linked data, which offered 200 training questions and 50 test questions in 7 different languages. It obtained an F-measure of 0.24 by providing a correct answer to 10 of the test questions and a partial answer to 4 of them.", "which implementation ?", "Intui3", 0.0, 6.0], ["This paper describes and analyzes how the FEMsum system deals with DUC 2007 tasks of providing summary-length answers to complex questions, both background and just-the-news summaries. We participated in producing background summaries for the main task with the FEMsum approach that obtained better results in our last year participation. 
The FEMsum semantic based approach was adapted to deal with the update pilot task with the aim of producing just-the-news summaries.", "which implementation ?", "FEMsum", 42.0, 48.0], ["Visualizing Resource Description Framework (RDF) data to support decision-making processes is an important and challenging aspect of consuming Linked Data. With the recent development of JavaScript libraries for data visualization, new opportunities for Web-based visualization of Linked Data arise. This paper presents an extensive evaluation of JavaScript-based libraries for visualizing RDF data. A set of criteria has been devised for the evaluation and 15 major JavaScript libraries have been analyzed against the criteria. The two JavaScript libraries with the highest score in the evaluation acted as the basis for developing LODWheel (Linked Open Data Wheel) - a prototype for visualizing Linked Open Data in graphs and charts - introduced in this paper. This way of visualizing RDF data leads to a great deal of challenges related to data-categorization and connecting data resources together in new ways, which are discussed in this paper.", "which implementation ?", "LODWheel", 633.0, 641.0], ["LodLive project, http://en.lodlive.it/, provides a demonstration of the use of Linked Data standard (RDF, SPARQL) to browse RDF resources. The application aims to spread linked data principles with a simple and friendly interface and reusable techniques. In this report we present an overview of the potential of LodLive, mentioning tools and methodologies that were used to create it.", "which implementation ?", "Lodlive", 0.0, 7.0], ["Recently, the amount of semantic data available in the Web has increased dramatically. The potential of this vast amount of data is enormous but in most cases it is difficult for users to explore and use this data, especially for those without experience with Semantic Web technologies. Applying information visualization techniques to the Semantic Web helps users to easily explore large amounts of data and interact with them. In this article we devise a formal Linked Data Visualization Model (LDVM), which allows to dynamically connect data with visualizations. We report about our implementation of the LDVM comprising a library of generic visualizations that enable both users and data analysts to get an overview on, visualize and explore the Data Web and perform detailed analyzes on Linked Data.", "which implementation ?", "LDVM", 497.0, 501.0], ["NEWSINESSENCE is a system for finding, visualizing and summarizing a topic-based cluster of news stories. In the generic scenario for NEWSINESSENCE, a user selects a single news story from a news Web site. Our system then searches other live sources of news for other stories related to the same event and produces summaries of a subset of the stories that it finds, according to parameters specified by the user.", "which implementation ?", "NewsInEssence", 0.0, 13.0], ["We present a question answering system (CASIA@V2) over Linked Data (DBpedia), which translates natural language questions into structured queries automatically. Existing systems usually adopt a pipeline framework, which contains four major steps: 1) Decomposing the question and detecting candidate phrases; 2) mapping the detected phrases into semantic items of Linked Data; 3) grouping the mapped semantic items into semantic triples; and 4) generating the rightful SPARQL query. 
We present a jointly learning framework using Markov Logic Network (MLN) for phrase detection, phrases mapping to semantic items and semantic items grouping. We formulate the knowledge for resolving the ambiguities in three steps of QALD as first-order logic clauses in a MLN. We evaluate our approach on QALD-4 test dataset and achieve an F-measure score of 0.36, an average precision of 0.32 and an average recall of 0.40 over 50 questions.", "which implementation ?", "CASIA", 40.0, 45.0], ["Datasets published in the LOD cloud are recommended to follow some best practice in order to be 4-5 stars Linked Data compliant. They can often be consumed and accessed by different means such as API access, bulk download or as linked data fragments, but most of the time, a SPARQL endpoint is also provided. While the LOD cloud keeps growing, having a quick glimpse of those datasets is getting harder and there is a need to develop new methods enabling to detect automatically what an arbitrary dataset is about and to recommend visualizations for data samples. We consider that \"a visualization is worth a million triples\", and in this paper, we propose a novel approach that mines the content of datasets and automatically generates visualizations. Our approach is directly based on the usage of SPARQL queries that will detect the important categories of a dataset and that will specifically consider the properties used by the objects which have been interlinked via owl:sameAs links. We then propose to associate type of visualization for those categories. We have implemented this approach into a so-called Linked Data Vizualization Wizard (LDVizWiz).", "which implementation ?", "LDVizWiz", 1149.0, 1157.0], ["We present Paged Graph Visualization (PGV), a new semi-autonomous tool for RDF data exploration and visualization. PGV consists of two main components: a) the \"PGV explorer\" and b) the \"RDF pager\" module utilizing BRAHMS, our high performance main-memory RDF storage system. Unlike existing graph visualization techniques which attempt to display the entire graph and then filter out irrelevant data, PGV begins with a small graph and provides the tools to incrementally explore and visualize relevant data of very large RDF ontologies. We implemented several techniques to visualize and explore hot spots in the graph, i.e. nodes with large numbers of immediate neighbors. In response to the user-controlled, semantics-driven direction of the exploration, the PGV explorer obtains the necessary sub-graphs from the RDF pager and enables their incremental visualization leaving the previously laid out sub-graphs intact. We outline the problem of visualizing large RDF data sets, discuss our interface and its implementation, and through a controlled experiment we show the benefits of PGV.", "which implementation ?", "PGV", 38.0, 41.0], ["We present QAKiS, a system for open domain Question Answering over linked data. It addresses the problem of question interpretation as a relation-based match, where fragments of the question are matched to binary relations of the triple store, using relational textual patterns automatically collected. 
For the demo, the relational patterns are automatically extracted from Wikipedia, while DBpedia is the RDF data set to be queried using a natural language interface.", "which implementation ?", "QAKiS", 11.0, 16.0], ["A wealth of information has recently become available as browsable RDF data on the Web, but the selection of client applications to interact with this Linked Data remains limited. We show how to browse Linked Data with Fenfire, a Free and Open Source Software RDF browser and editor that employs a graph view and focuses on an engaging and interactive browsing experience. This sets Fenfire apart from previous table- and outline-based Linked Data browsers.", "which implementation ?", "Fenfire", 219.0, 226.0], ["Topic representation mismatch is a key problem in topic-oriented summarization for the specified topic is usually too short to understand/interpret. This paper proposes a novel adaptive model for summarization, AdaSum, under the assumption that the summary and the topic representation can be mutually boosted. AdaSum aims to simultaneously optimize the topic representation and extract effective summaries. This model employs a mutual boosting process to minimize the topic representation mismatch for base summarizers. Furthermore, a linear combination of base summarizers is proposed to further reduce the topic representation mismatch from the diversity of base summarizers with a general learning framework. We prove that the training process of AdaSum can enhance the performance measure used. Experimental results on DUC 2007 dataset show that AdaSum significantly outperforms the baseline methods for summarization (e.g. MRP, LexRank, and GSPS).", "which implementation ?", "AdaSum", 211.0, 217.0], ["We present a methodology for summarization of news about current events in the form of briefings that include appropriate background (historical) information. The system that we developed, SUMMONS, uses the output of systems developed for the DARPA Message Understanding Conferences to generate summaries of multiple documents on the same or related events, presenting similarities and differences, contradictions, and generalizations among sources of information. We describe the various components of the system, showing how information from multiple articles is combined, organized into a paragraph, and finally, realized as English sentences. A feature of our work is the extraction of descriptions of entities such as people and places for reuse to enhance a briefing.", "which implementation ?", "SUMMONS", 189.0, 196.0], ["With the continued growth of online semantic information, the processes of searching and managing this massive scale and heterogeneous content have become increasingly challenging. In this work, we present PowerAqua, an ontologybased Question Answering system that is able to answer queries by locating and integrating information, which can be distributed across heterogeneous semantic resources. We provide a complete overview of the system including: the research challenges that it addresses, its architecture, the evaluations that have been conducted to test it, and an in-depth discussion showing how PowerAqua effectively supports users in querying and exploring Semantic Web content.", "which implementation ?", "PowerAqua", 206.0, 215.0], ["Automatic Document Summarization is a highly interdisciplinary research area related with computer science as well as cognitive psychology. 
This Summarization is to compress an original document into a summarized version by extracting almost all of the essential concepts with text mining techniques. This research focuses on developing a statistical automatic text summarization approach, Kmixture probabilistic model, to enhancing the quality of summaries. KSRS employs the K-mixture probabilistic model to establish term weights in a statistical sense, and further identifies the term relationships to derive the semantic relationship significance (SRS) of nouns. Sentences are ranked and extracted based on their semantic relationship significance values. The objective of this research is thus to propose a statistical approach to text summarization. We propose a K-mixture semantic relationship significance (KSRS) approach to enhancing the quality of document summary results. The K-mixture probabilistic model is used to determine the term weights. Term relationships are then investigated to develop the semantic relationship of nouns that manifests sentence semantics. Sentences with significant semantic relationship, nouns are extracted to form the summary accordingly.", "which implementation ?", "KSRS", 459.0, 463.0], ["The need to visualize large social networks is growing as hardware capabilities make analyzing large networks feasible and many new data sets become available. Unfortunately, the visualizations in existing systems do not satisfactorily resolve the basic dilemma of being readable both for the global structure of the network and also for detailed analysis of local communities. To address this problem, we present NodeTrix, a hybrid representation for networks that combines the advantages of two traditional representations: node-link diagrams are used to show the global structure of a network, while arbitrary portions of the network can be shown as adjacency matrices to better support the analysis of communities. A key contribution is a set of interaction techniques. These allow analysts to create a NodeTrix visualization by dragging selections to and from node-link and matrix forms, and to flexibly manipulate the NodeTrix representation to explore the dataset and create meaningful summary visualizations of their findings. Finally, we present a case study applying NodeTrix to the analysis of the InfoVis 2004 coauthorship dataset to illustrate the capabilities of NodeTrix as both an exploration tool and an effective means of communicating results.", "which implementation ?", "NodeTrix", 414.0, 422.0], ["We present UWN, a large multilingual lexical knowledge base that describes the meanings and relationships of words in over 200 languages. This paper explains how link prediction, information integration and taxonomy induction methods have been used to build UWN based on WordNet and extend it with millions of named entities from Wikipedia. We additionally introduce extensions to cover lexical relationships, frame-semantic knowledge, and language data. An online interface provides human access to the data, while a software API enables applications to look up over 16 million words and names.", "which implementation ?", "UWN", 11.0, 14.0], ["Online news recommender systems aim to address the information explosion of news and make personalized recommendation for users. In general, news language is highly condensed, full of knowledge entities and common sense. However, existing methods are unaware of such external knowledge and cannot fully discover latent knowledge-level connections among news. 
The recommended results for a user are consequently limited to simple patterns and cannot be extended reasonably. To solve the above problem, in this paper, we propose a deep knowledge-aware network (DKN) that incorporates knowledge graph representation into news recommendation. DKN is a content-based deep recommendation framework for click-through rate prediction. The key component of DKN is a multi-channel and word-entity-aligned knowledge-aware convolutional neural network (KCNN) that fuses semantic-level and knowledge-level representations of news. KCNN treats words and entities as multiple channels, and explicitly keeps their alignment relationship during convolution. In addition, to address users\u2019 diverse interests, we also design an attention module in DKN to dynamically aggregate a user\u2019s history with respect to current candidate news. Through extensive experiments on a real online news platform, we demonstrate that DKN achieves substantial gains over state-of-the-art deep recommendation models. We also validate the efficacy of the usage of knowledge in DKN.", "which Machine Learning Method ?", "CNN", NaN, NaN], ["With the revival of neural networks, many studies try to adapt powerful sequential neural models, ie Recurrent Neural Networks (RNN), to sequential recommendation. RNN-based networks encode historical interaction records into a hidden state vector. Although the state vector is able to encode sequential dependency, it still has limited representation power in capturing complicated user preference. It is difficult to capture fine-grained user preference from the interaction sequence. Furthermore, the latent vector representation is usually hard to understand and explain. To address these issues, in this paper, we propose a novel knowledge enhanced sequential recommender. Our model integrates the RNN-based networks with Key-Value Memory Network (KV-MN). We further incorporate knowledge base (KB) information to enhance the semantic representation of KV-MN. RNN-based models are good at capturing sequential user preference, while knowledge-enhanced KV-MNs are good at capturing attribute-level user preference. By using a hybrid of RNNs and KV-MNs, it is expected to be endowed with both benefits from these two components. The sequential preference representation together with the attribute-level preference representation are combined as the final representation of user preference. With the incorporation of KB information, our model is also highly interpretable. To our knowledge, it is the first time that sequential recommender is integrated with external memories by leveraging large-scale KB information.", "which Machine Learning Method ?", "RNN", 128.0, 131.0], ["Providing model-generated explanations in recommender systems is important to user experience. State-of-the-art recommendation algorithms\u2014especially the collaborative filtering (CF)-based approaches with shallow or deep models\u2014usually work with various unstructured information sources for recommendation, such as textual reviews, visual images, and various implicit or explicit feedbacks. Though structured knowledge bases were considered in content-based approaches, they have been largely ignored recently due to the availability of vast amounts of data and the learning power of many complex models. However, structured knowledge bases exhibit unique advantages in personalized recommendation systems. 
When the explicit knowledge about users and items is considered for recommendation, the system could provide highly customized recommendations based on users\u2019 historical behaviors and the knowledge is helpful for providing informed explanations regarding the recommended items. A great challenge for using knowledge bases for recommendation is how to integrate large-scale structured and unstructured data, while taking advantage of collaborative filtering for highly accurate performance. Recent achievements in knowledge-base embedding (KBE) sheds light on this problem, which makes it possible to learn user and item representations while preserving the structure of their relationship with external knowledge for explanation. In this work, we propose to explain knowledge-base embeddings for explainable recommendation. Specifically, we propose a knowledge-base representation learning framework to embed heterogeneous entities for recommendation, and based on the embedded knowledge base, a soft matching algorithm is proposed to generate personalized explanations for the recommended items. Experimental results on real-world e-commerce datasets verified the superior recommendation performance and the explainability power of our approach compared with state-of-the-art baselines.", "which Machine Learning Method ?", "Collaborative Filtering", 153.0, 176.0], ["Knowledge graph embedding aims to learn distributed representations for entities and relations, and is proven to be effective in many applications. Crossover interactions -- bi-directional effects between entities and relations --- help select related information when predicting a new triple, but haven't been formally discussed before. In this paper, we propose CrossE, a novel knowledge graph embedding which explicitly simulates crossover interactions. It not only learns one general embedding for each entity and relation as most previous methods do, but also generates multiple triple specific embeddings for both of them, named interaction embeddings. We evaluate embeddings on typical link prediction tasks and find that CrossE achieves state-of-the-art results on complex and more challenging datasets. Furthermore, we evaluate embeddings from a new perspective -- giving explanations for predicted triples, which is important for real applications. In this work, an explanation for a triple is regarded as a reliable closed-path between the head and the tail entity. Compared to other baselines, we show experimentally that CrossE, benefiting from interaction embeddings, is more capable of generating reliable explanations to support its predictions.", "which Machine Learning Method ?", "Knowledge Graph Embedding", 0.0, 25.0], ["Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meal. However, some of them need human help for calorie estimation, and even if it is automatic, food categories are often limited or images from multiple viewpoints are required. Then, it is not yet achieved to estimate food calorie with practical accuracy and estimating food calories from a food photo is an unsolved problem. Therefore, in this paper, we propose estimating food calorie from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning. 
Since there exists a strong correlation between food calories and food categories, ingredients and cooking directions information in general, we expect that simultaneous training of them brings performance boosting compared to independent single training. To this end, we use a multi-task CNN [1]. In addition, in this research, we construct two kinds of datasets that is a dataset of calorie-annotated recipe collected from Japanese recipe sites on the Web and a dataset collected from an American recipe site. In this experiment, we trained multi-task and single-task CNNs. As a result, the multi-task CNN achieved the better performance on both food category estimation and food calorie estimation than single-task CNNs. For the Japanese recipe dataset, by introducing a multi-task CNN, 0.039 were improved on the correlation coefficient, while for the American recipe dataset, 0.090 were raised compared to the result by the single-task CNN.", "which Machine Learning Method ?", "multi-task CNN", 898.0, 912.0], ["Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meal. However, some of them need human help for calorie estimation, and even if it is automatic, food categories are often limited or images from multiple viewpoints are required. Then, it is not yet achieved to estimate food calorie with practical accuracy and estimating food calories from a food photo is an unsolved problem. Therefore, in this paper, we propose estimating food calorie from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning. Since there exists a strong correlation between food calories and food categories, ingredients and cooking directions information in general, we expect that simultaneous training of them brings performance boosting compared to independent single training. To this end, we use a multi-task CNN [1]. In addition, in this research, we construct two kinds of datasets that is a dataset of calorie-annotated recipe collected from Japanese recipe sites on the Web and a dataset collected from an American recipe site. In this experiment, we trained multi-task and single-task CNNs. As a result, the multi-task CNN achieved the better performance on both food category estimation and food calorie estimation than single-task CNNs. For the Japanese recipe dataset, by introducing a multi-task CNN, 0.039 were improved on the correlation coefficient, while for the American recipe dataset, 0.090 were raised compared to the result by the single-task CNN.", "which Machine Learning Method ?", "Multi-task CNN", 898.0, 912.0], ["A rapidly growing amount of content posted online, such as food recipes, opens doors to new exciting applications at the intersection of vision and language. In this work, we aim to estimate the calorie amount of a meal directly from an image by learning from recipes people have published on the Internet, thus skipping time-consuming manual data annotation. Since there are few large-scale publicly available datasets captured in unconstrained environments, we propose the pic2kcal benchmark comprising 308 000 images from over 70 000 recipes including photographs, ingredients, and instructions. To obtain nutritional information of the ingredients and automatically determine the ground-truth calorie value, we match the items in the recipes with structured information from a food item database. 
We evaluate various neural networks for regression of the calorie quantity and extend them with the multi-task paradigm. Our learning procedure combines the calorie estimation with prediction of proteins, carbohydrates, and fat amounts as well as a multi-label ingredient classification. Our experiments demonstrate clear benefits of multi-task learning for calorie estimation, surpassing the single-task calorie regression by 9.9%. To encourage further research on this task, we make the code for generating the dataset and the models publicly available.", "which Machine Learning Method ?", "Multi-Task Learning", 1135.0, 1154.0], ["Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meal. However, some of them need human help for calorie estimation, and even if it is automatic, food categories are often limited or images from multiple viewpoints are required. Then, it is not yet achieved to estimate food calorie with practical accuracy and estimating food calories from a food photo is an unsolved problem. Therefore, in this paper, we propose estimating food calorie from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning. Since there exists a strong correlation between food calories and food categories, ingredients and cooking directions information in general, we expect that simultaneous training of them brings performance boosting compared to independent single training. To this end, we use a multi-task CNN [1]. In addition, in this research, we construct two kinds of datasets that is a dataset of calorie-annotated recipe collected from Japanese recipe sites on the Web and a dataset collected from an American recipe site. In this experiment, we trained multi-task and single-task CNNs. As a result, the multi-task CNN achieved the better performance on both food category estimation and food calorie estimation than single-task CNNs. For the Japanese recipe dataset, by introducing a multi-task CNN, 0.039 were improved on the correlation coefficient, while for the American recipe dataset, 0.090 were raised compared to the result by the single-task CNN.", "which Machine Learning Method ?", "Single-task CNN", 1549.0, 1564.0], ["DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. 
Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.", "which Class hierarchy extraction/learning ?", "DTD", 0.0, 3.0], ["DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.", "which Concepts extraction/learning ?", "DTD", 0.0, 3.0], ["DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.", "which Input format ?", "DTD", 0.0, 3.0], ["Metabolic pathways are an important part of systems biology research since they illustrate complex interactions between metabolites, enzymes, and regulators. 
Pathway maps are drawn to elucidate metabolism or to set data in a metabolic context. We present MetaboMAPS, a web-based platform to visualize numerical data on individual metabolic pathway maps. Metabolic maps can be stored, distributed and downloaded in SVG-format. MetaboMAPS was designed for users without computational background and supports pathway sharing without strict conventions. In addition to existing applications that established standards for well-studied pathways, MetaboMAPS offers a niche for individual, customized pathways beyond common knowledge, supporting ongoing research by creating publication-ready visualizations of experimental data.", "which Input format ?", "SVG", 414.0, 417.0], ["Significant amounts of knowledge in science and technology have so far not been published as Linked Open Data but are contained in the text and tables of legacy PDF publications. Making such information available as RDF would, for example, provide direct access to claims and facilitate surveys of related work. A lot of valuable tabular information that till now only existed in PDF documents would also finally become machine understandable. Instead of studying scientific literature or engineering patents for months, it would be possible to collect such input by simple SPARQL queries. The SemAnn approach enables collaborative annotation of text and tables in PDF documents, a format that is still the common denominator of publishing, thus maximising the potential user base. The resulting annotations in RDF format are available for querying through a SPARQL endpoint. To incentivise users with an immediate benefit for making the effort of annotation, SemAnn recommends related papers, taking into account the hierarchical context of annotations in a novel way. We evaluated the usability of SemAnn and the usefulness of its recommendations by analysing annotations resulting from tasks assigned to test users and by interviewing them. While the evaluation shows that even few annotations lead to a good recall, we also observed unexpected, serendipitous recommendations, which confirms the merit of our low-threshold annotation support for the crowd.", "which Input format ?", "PDF", 161.0, 164.0], ["In this paper, we present a tool called X2OWL that aims at building an OWL ontology from an XML datasource. This method is based on XML schema to automatically generate the ontology structure, as well as, a set of mapping bridges. The presented method also includes a refinement step that allows to clean the mapping bridges and possibly to restructure the generated ontology.", "which Input format ?", "XML schema", 132.0, 142.0], ["Tabular data is an abundant source of information on the Web, but remains mostly isolated from the latter's interconnections since tables lack links and computer-accessible descriptions of their structure. In other words, the schemas of these tables -- attribute names, values, data types, etc. -- are not explicitly stored as table metadata. Consequently, the structure that these tables contain is not accessible to the crawlers that power search engines and thus not accessible to user search queries. We address this lack of structure with a new method for leveraging the principles of table construction in order to extract table schemas. Discovering the schema by which a table is constructed is achieved by harnessing the similarities and differences of nearby table rows through the use of a novel set of features and a feature processing scheme. 
The schemas of these data tables are determined using a classification technique based on conditional random fields in combination with a novel feature encoding method called logarithmic binning, which is specifically designed for the data table extraction task. Our method provides considerable improvement over the well-known WebTables schema extraction method. In contrast with previous work that focuses on extracting individual relations, our method excels at correctly interpreting full tables, thereby being capable of handling general tables such as those found in spreadsheets, instead of being restricted to HTML tables as is the case with the WebTables method. We also extract additional schema characteristics, such as row groupings, which are important for supporting information retrieval tasks on tabular data.", "which Input format ?", "HTML", 1473.0, 1477.0], ["Ontology provides a shared and reusable piece of knowledge about a specific domain, and has been applied in many fields, such as semantic Web, e-commerce and information retrieval, etc. However, building ontology by hand is a very hard and error-prone task. Learning ontology from existing resources is a good solution. Because relational database is widely used for storing data and OWL is the latest standard recommended by W3C, this paper proposes an approach of learning OWL ontology from data in relational database. Compared with existing methods, the approach can acquire ontology from relational database automatically by using a group of learning rules instead of using a middle model. In addition, it can obtain OWL ontology, including the classes, properties, properties characteristics, cardinality and instances, while none of existing methods can acquire all of them. The proposed learning rules have been proven to be correct by practice.", "which Input format ?", "Relational data", NaN, NaN], ["The Web contains vast amounts of HTML tables. Most of these tables are used for layout purposes, but a small subset of the tables is relational, meaning that they contain structured data describing a set of entities [2]. As these relational Web tables cover a very wide range of different topics, there is a growing body of research investigating the utility of Web table data for completing cross-domain knowledge bases [6], for extending arbitrary tables with additional attributes [7, 4], as well as for translating data values [5]. The existing research shows the potentials of Web tables. However, comparing the performance of the different systems is difficult as up till now each system is evaluated using a different corpus of Web tables and as most of the corpora are owned by large search engine companies and are thus not accessible to the public. In this poster, we present a large public corpus of Web tables which contains over 233 million tables and has been extracted from the July 2015 version of the CommonCrawl. By publishing the corpus as well as all tools that we used to extract it from the crawled data, we intend to provide a common ground for evaluating Web table systems. The main difference of the corpus compared to an earlier corpus that we extracted from the 2012 version of the CommonCrawl as well as the corpus extracted by Eberius et al. [3] from the 2014 version of the CommonCrawl is that the current corpus contains a richer set of metadata for each table. 
This metadata includes table-specific information such as table orientation, table caption, header row, and key column, but also context information such as the text before and after the table, the title of the HTML page, as well as timestamp information that was found before and after the table. The context information can be useful for recovering the semantics of a table [7]. The timestamp information is crucial for fusing time-depended data, such as alternative population numbers for a city [8].", "which Input format ?", "HTML", 33.0, 37.0], ["In this work, we offer an approach to combine standard multimedia analysis techniques with knowledge drawn from conceptual metadata provided by domain experts of a specialized scholarly domain, to learn a domain-specific multimedia ontology from a set of annotated examples. A standard Bayesian network learning algorithm that learns structure and parameters of a Bayesian network is extended to include media observables in the learning. An expert group provides domain knowledge to construct a basic ontology of the domain as well as to annotate a set of training videos. These annotations help derive the associations between high-level semantic concepts of the domain and low-level MPEG-7 based features representing audio-visual content of the videos. We construct a more robust and refined version of this ontology by learning from this set of conceptually annotated videos. To encode this knowledge, we use MOWL, a multimedia extension of Web Ontology Language (OWL) which is capable of describing domain concepts in terms of their media properties and of capturing the inherent uncertainties involved. We use the ontology specified knowledge for recognizing concepts relevant to a video to annotate fresh addition to the video database with relevant concepts in the ontology. These conceptual annotations are used to create hyperlinks in the video collection, to provide an effective video browsing interface to the user.", "which Input format ?", "Video", 1189.0, 1194.0], ["XML has become the de-facto standard of data exchange format in E-businesses. Although XML can support syntactic inter-operability, problems arise when data sources represented as XML documents are needed to be integrated. The reason is that XML lacks support for efficient sharing of conceptualization. The Web Ontology Language (OWL) can play an important role here as it can enable semantic inter-operability, and it supports the representation of domain knowledge using classes, properties and instances for applications. In many applications it is required to convert huge XML documents automatically to OWL ontologies, which is receiving a lot of attention. There are some existing converters for this job. Unfortunately they have serious shortcomings, e. g., they do not address the handling of characteristics like internal references, (transitive) import(s), include etc. which are commonly used in XML Schemas. To alleviate these drawbacks, we propose a new framework for mapping XML to OWL automatically. We illustrate our technique on examples to show the efficacy of our approach. We also provide the performance measures of our approach on some standard datasets. We also check the correctness of the conversion process.", "which Input format ?", "XML schema", NaN, NaN], ["The aims of XML data conversion to ontologies are the indexing, integration and enrichment of existing ontologies with knowledge acquired from these sources. 
The contribution of this paper consists in providing a classification of the approaches used for the conversion of XML documents into OWL ontologies. This classification underlines the usage profile of each conversion method, providing a clear description of the advantages and drawbacks belonging to each method. Hence, this paper focuses on two main processes, which are ontology enrichment and ontology population using XML data. Ontology enrichment is related to the schema of the ontology (TBox), and ontology population is related to an individual (Abox). In addition, the ontologies described in these methods are based on formal languages of the Semantic Web such as OWL (Ontology Web Language) or RDF (Resource Description Framework). These languages are formal because the semantics are formally defined and take advantage of the Description Logics. In contrast, XML data sources are without formal semantics. The XML language is used to store, export and share data between processes able to process the specific data structure. However, even if the semantics is not explicitly expressed, data structure contains the universe of discourse by using a qualified vocabulary regarding a consensual agreement. In order to formalize this semantics, the OWL language provides rich logical constraints. Therefore, these logical constraints are evolved in the transformation of XML documents into OWL documents, allowing the enrichment and the population of the target ontology. To design such a transformation, the current research field establishes connections between OWL constructs (classes, predicates, simple or complex data types, etc.) and XML constructs (elements, attributes, element lists, etc.). Two different approaches for the transformation process are exposed. The instance approaches are based on XML documents without any schema associated. The validation approaches are based on the XML schema and document validated by the associated schema. The second approaches benefit from the schema definition to provide automated transformations with logic constraints. Both approaches are discussed in the text.", "which Input format ?", "XML document", NaN, NaN], ["The objective of this paper is to present the role of Ontology Learning Process in supporting an ontology engineer for creating and maintaining ontologies from textual resources. The knowledge structures that interest us are legal domain-specific ontologies. We will use these ontologies to build legal domain ontology for a Lebanese legal knowledge based system. The domain application of this work is the Lebanese criminal system. Ontologies can be learnt from various sources, such as databases, structured and unstructured documents. Here, the focus is on the acquisition of ontologies from unstructured text, provided as input. In this work, the Ontology Learning Process represents a knowledge extraction phase using Natural Language Processing techniques. The resulted ontology is considered as inexpressive ontology. There is a need to reengineer it in order to build a complete, correct and more expressive domain-specific ontology.", "which Input format ?", "Text", 608.0, 612.0], ["Ontologies have proven beneficial in different settings that make use of textual reviews. However, manually constructing ontologies is a laborious and time-consuming process in need of automation. We propose a novel methodology for automatically extracting ontologies, in the form of meronomies, from product reviews, using a very limited amount of hand-annotated training data. 
We show that the ontologies generated by our method outperform hand-crafted ontologies (WordNet) and ontologies extracted by existing methods (Text2Onto and COMET) in several, diverse settings. Specifically, our generated ontologies outperform the others when evaluated by human annotators as well as on an existing Q&A dataset from Amazon. Moreover, our method is better able to generalise, in capturing knowledge about unseen products. Finally, we consider a real-world setting, showing that our method is better able to determine recommended products based on their reviews, in alternative to using Amazon\u2019s standard score aggregations.", "which Input format ?", "Text", NaN, NaN], ["DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.", "which Input format ?", "XML document", 908.0, 920.0], ["Today most of the data exchanged between information systems is done with the help of the XML syntax. Unfortunately when these data have to be integrated, the integration becomes difficult because of the semantics' heterogeneity. Consequently, leading researches in the domain of database systems are moving to semantic model in order to store data and its semantics definition. To benefit from these new systems and technologies, and to integrate different data sources, a flexible method consists in populating an existing OWL ontology from XML data. In paper we present such a method based on the definition of a graph which represents rules that drive the populating process. The graph of rules facilitates the mapping definition that consists in mapping elements from an XSD schema to the elements of the OWL schema.", "which Input format ?", "XSD", 776.0, 779.0], ["DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. 
Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.", "which Properties hierarchy extraction/learning ?", "DTD", 0.0, 3.0], ["This paper describes the Duluth system that participated in SemEval-2021 Task 11, NLP Contribution Graph. It details the extraction of contribution sentences and scientific entities and their relations from scholarly articles in the domain of Natural Language Processing. Our solution uses deBERTa for multi-class sentence classification to extract the contributing sentences and their type, and dependency parsing to outline each sentence and extract subject-predicate-object triples. Our system ranked fifth of seven for Phase 1: end-to-end pipeline, sixth of eight for Phase 2 Part 1: phrases and triples, and fifth of eight for Phase 2 Part 2: triples extraction.", "which Team Name ?", "DULUTH", 25.0, 31.0], ["This paper describes the system we built as the YNU-HPCC team in the SemEval-2021 Task 11: NLPContributionGraph. This task involves first identifying sentences in the given natural language processing (NLP) scholarly articles that reflect research contributions through binary classification; then identifying the core scientific terms and their relation phrases from these contribution sentences by sequence labeling; and finally, these scientific terms and relation phrases are categorized, identified, and organized into subject-predicate-object triples to form a knowledge graph with the help of multiclass classification and multi-label classification. We developed a system for this task using a pre-trained language representation model called BERT that stands for Bidirectional Encoder Representations from Transformers, and achieved good results. The average F1-score for Evaluation Phase 2, Part 1 was 0.4562 and ranked 7th, and the average F1-score for Evaluation Phase 2, Part 2 was 0.6541, and also ranked 7th.", "which Team Name ?", "YNU-HPCC", 48.0, 56.0], ["Online Community Question Answering Forums (cQA) have gained massive popularity within recent years. The rise in users for such forums have led to the increase in the need for automated evaluation for question comprehension and fact evaluation of the answers provided by various participants in the forum. Our team, Fermi, participated in sub-task A of Task 8 at SemEval 2019 - which tackles the first problem in the pipeline of factual evaluation in cQA forums, i.e., deciding whether a posed question asks for a factual information, an opinion/advice or is just socializing. 
This information is highly useful in segregating factual questions from non-factual ones which highly helps in organizing the questions into useful categories and trims down the problem space for the next task in the pipeline for fact evaluation among the available answers. Our system uses the embeddings obtained from Universal Sentence Encoder combined with XGBoost for the classification sub-task A. We also evaluate other combinations of embeddings and off-the-shelf machine learning algorithms to demonstrate the efficacy of the various representations and their combinations. Our results across the evaluation test set gave an accuracy of 84% and received the first position in the final standings judged by the organizers.", "which Team Name ?", "Fermi", 316.0, 321.0], ["Aligning two representations of the same domain with different expressiveness is a crucial topic in nowadays semantic web and big data research. OWL ontologies and Entity Relation Diagrams are the most widespread representations whose alignment allows for semantic data access via ontology interface, and ontology storing techniques. The term "alignment" encompasses three different processes: OWL-to-ERD and ERD-to-OWL transformation, and OWL-ERD mapping. In this paper an innovative statistical tool is presented to accomplish all the three aspects of the alignment. The main idea relies on the use of a HMM to estimate the most likely ERD sentence that is stated in a suitable grammar, and corresponds to the observed OWL axiom. The system and its theoretical background are presented, and some experiments are reported.", "which Output format ?", "OWL", 145.0, 148.0], ["Metabolic pathways are an important part of systems biology research since they illustrate complex interactions between metabolites, enzymes, and regulators. Pathway maps are drawn to elucidate metabolism or to set data in a metabolic context. We present MetaboMAPS, a web-based platform to visualize numerical data on individual metabolic pathway maps. Metabolic maps can be stored, distributed and downloaded in SVG-format. MetaboMAPS was designed for users without computational background and supports pathway sharing without strict conventions. In addition to existing applications that established standards for well-studied pathways, MetaboMAPS offers a niche for individual, customized pathways beyond common knowledge, supporting ongoing research by creating publication-ready visualizations of experimental data.", "which Output format ?", "SVG", 414.0, 417.0], ["In this work, we offer an approach to combine standard multimedia analysis techniques with knowledge drawn from conceptual metadata provided by domain experts of a specialized scholarly domain, to learn a domain-specific multimedia ontology from a set of annotated examples. A standard Bayesian network learning algorithm that learns structure and parameters of a Bayesian network is extended to include media observables in the learning. An expert group provides domain knowledge to construct a basic ontology of the domain as well as to annotate a set of training videos. These annotations help derive the associations between high-level semantic concepts of the domain and low-level MPEG-7 based features representing audio-visual content of the videos. We construct a more robust and refined version of this ontology by learning from this set of conceptually annotated videos.
To encode this knowledge, we use MOWL, a multimedia extension of Web Ontology Language (OWL) which is capable of describing domain concepts in terms of their media properties and of capturing the inherent uncertainties involved. We use the ontology specified knowledge for recognizing concepts relevant to a video to annotate fresh addition to the video database with relevant concepts in the ontology. These conceptual annotations are used to create hyperlinks in the video collection, to provide an effective video browsing interface to the user.", "which Output format ?", "MOWL", 914.0, 918.0], ["By now, XML has reached a wide acceptance as data exchange format in E-Business. An efficient collaboration between different participants in E-Business thus, is only possible, when business partners agree on a common syntax and have a common understanding of the basic concepts in the domain. XML covers the syntactic level, but lacks support for efficient sharing of conceptualizations. The Web Ontology Language (OWL [Bec04]) in turn supports the representation of domain knowledge using classes, properties and instances for the use in a distributed environment as the WorldWideWeb. We present in this paper a mapping between the data model elements of XML and OWL. We give account about its implementation within a ready-to-use XSLT framework, as well as its evaluation for common use cases.", "which Output format ?", "OWL", 416.0, 419.0], ["The aims of XML data conversion to ontologies are the indexing, integration and enrichment of existing ontologies with knowledge acquired from these sources. The contribution of this paper consists in providing a classification of the approaches used for the conversion of XML documents into OWL ontologies. This classification underlines the usage profile of each conversion method, providing a clear description of the advantages and drawbacks belonging to each method. Hence, this paper focuses on two main processes, which are ontology enrichment and ontology population using XML data. Ontology enrichment is related to the schema of the ontology (TBox), and ontology population is related to an individual (Abox). In addition, the ontologies described in these methods are based on formal languages of the Semantic Web such as OWL (Ontology Web Language) or RDF (Resource Description Framework). These languages are formal because the semantics are formally defined and take advantage of the Description Logics. In contrast, XML data sources are without formal semantics. The XML language is used to store, export and share data between processes able to process the specific data structure. However, even if the semantics is not explicitly expressed, data structure contains the universe of discourse by using a qualified vocabulary regarding a consensual agreement. In order to formalize this semantics, the OWL language provides rich logical constraints. Therefore, these logical constraints are evolved in the transformation of XML documents into OWL documents, allowing the enrichment and the population of the target ontology. To design such a transformation, the current research field establishes connections between OWL constructs (classes, predicates, simple or complex data types, etc.) and XML constructs (elements, attributes, element lists, etc.). Two different approaches for the transformation process are exposed. The instance approaches are based on XML documents without any schema associated. 
The validation approaches are based on the XML schema and document validated by the associated schema. The second approaches benefit from the schema definition to provide automated transformations with logic constraints. Both approaches are discussed in the text.", "which Output format ?", "RDF", 864.0, 867.0], ["In this paper we present a new tool, called DB_DOOWL, for creating domain ontology from relational database schema (RDBS). In contrast with existing transformation approaches, we propose a generic solution based on automatic instantiation of a specified meta-ontology. This later is an owl ontology which describes any database structure. A prototype of our proposed tool is implemented based on Jena in Java in order to demonstrate its feasibility.", "which Output format ?", "OWL", 286.0, 289.0], ["Today most of the data exchanged between information systems is done with the help of the XML syntax. Unfortunately when these data have to be integrated, the integration becomes difficult because of the semantics' heterogeneity. Consequently, leading researches in the domain of database systems are moving to semantic model in order to store data and its semantics definition. To benefit from these new systems and technologies, and to integrate different data sources, a flexible method consists in populating an existing OWL ontology from XML data. In paper we present such a method based on the definition of a graph which represents rules that drive the populating process. The graph of rules facilitates the mapping definition that consists in mapping elements from an XSD schema to the elements of the OWL schema.", "which Output format ?", "OWL", 525.0, 528.0], ["Ontology provides a shared and reusable piece of knowledge about a specific domain, and has been applied in many fields, such as semantic Web, e-commerce and information retrieval, etc. However, building ontology by hand is a very hard and error-prone task. Learning ontology from existing resources is a good solution. Because relational database is widely used for storing data and OWL is the latest standard recommended by W3C, this paper proposes an approach of learning OWL ontology from data in relational database. Compared with existing methods, the approach can acquire ontology from relational database automatically by using a group of learning rules instead of using a middle model. In addition, it can obtain OWL ontology, including the classes, properties, properties characteristics, cardinality and instances, while none of existing methods can acquire all of them. The proposed learning rules have been proven to be correct by practice.", "which Output format ?", "OWL", 384.0, 387.0], ["DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. 
In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.", "which Output format ?", "OWL", 590.0, 593.0], ["DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.", "which Output format ?", "OWL individual", NaN, NaN], ["In this paper, we present a tool called X2OWL that aims at building an OWL ontology from an XML datasource. This method is based on XML schema to automatically generate the ontology structure, as well as, a set of mapping bridges. The presented method also includes a refinement step that allows to clean the mapping bridges and possibly to restructure the generated ontology.", "which Approaches ?", "X2OWL", 40.0, 45.0], ["In this paper we present a new tool, called DB_DOOWL, for creating domain ontology from relational database schema (RDBS). In contrast with existing transformation approaches, we propose a generic solution based on automatic instantiation of a specified meta-ontology. This later is an owl ontology which describes any database structure. A prototype of our proposed tool is implemented based on Jena in Java in order to demonstrate its feasibility.", "which Learning tool ?", "DB_DOOWL", 44.0, 52.0], ["One of the main holdbacks towards a wide use of ontologies is the high building cost. In order to reduce this effort, reuse of existing Knowledge Organization Systems (KOSs), and in particular thesauri, is a valuable and much cheaper alternative to build ontologies from scratch. In the literature tools to support such reuse and conversion of thesauri as well as re-engineering patterns already exist. 
However, few of these tools rely on a sort of semi-automatic reasoning on the structure of the thesaurus being converted. Furthermore, patterns proposed in the literature are not updated considering the new ISO 25964 standard on thesauri. This paper introduces a new application framework aimed to convert thesauri into OWL ontologies, differing from the existing approaches for taking into consideration ISO 25964 compliant thesauri and for applying completely automatic conversion rules.", "which dataset ?", "iso 25964", 610.0, 619.0], ["In this study, we examine the abuse of online social networks at the hands of spammers through the lens of the tools, techniques, and support infrastructure they rely upon. To perform our analysis, we identify over 1.1 million accounts suspended by Twitter for disruptive activities over the course of seven months. In the process, we collect a dataset of 1.8 billion tweets, 80 million of which belong to spam accounts. We use our dataset to characterize the behavior and lifetime of spam accounts, the campaigns they execute, and the wide-spread abuse of legitimate web services such as URL shorteners and free web hosting. We also identify an emerging marketplace of illegitimate programs operated by spammers that include Twitter account sellers, ad-based URL shorteners, and spam affiliate programs that help enable underground market diversification. Our results show that 77% of spam accounts identified by Twitter are suspended within one day of their first tweet. Because of these pressures, less than 9% of accounts form social relationships with regular Twitter users. Instead, 17% of accounts rely on hijacking trends, while 52% of accounts use unsolicited mentions to reach an audience. In spite of daily account attrition, we show how five spam campaigns controlling 145 thousand accounts combined are able to persist for months at a time, with each campaign enacting a unique spamming strategy. Surprisingly, three of these campaigns send spam directing visitors to reputable store fronts, blurring the line regarding what constitutes spam on social networks.", "which dataset ?", "1.8 billion tweets", 356.0, 374.0], ["Extraction of relevant features from high-dimensional multi-way functional MRI (fMRI) data is essential for the classification of a cognitive task. In general, fMRI records a combination of neural activation signals and several other noisy components. Alternatively, fMRI data is represented as a high dimensional array using a number of voxels, time instants, and snapshots. The organisation of fMRI data includes a number of Region Of Interests (ROI), snapshots, and thousand of voxels. The crucial step in cognitive task classification is a reduction of feature size through feature selection. Extraction of a specific pattern of interest within the noisy components is a challenging task. Tensor decomposition techniques have found several applications in the scientific fields. In this paper, a novel tensor gradient-based feature extraction technique for cognitive task classification is proposed. The technique has efficiently been applied on StarPlus fMRI data. Also, the technique has been used to discriminate the ROIs in fMRI data in terms of cognitive state classification.
The method has achieved a better average accuracy when compared to other existing feature extraction methods.", "which dataset ?", "StarPlus fMRI data", 950.0, 968.0], ["Given a task T, a pool of individuals X with different skills, and a social network G that captures the compatibility among these individuals, we study the problem of finding X, a subset of X, to perform the task. We call this the TEAM FORMATION problem. We require that members of X' not only meet the skill requirements of the task, but can also work effectively together as a team. We measure effectiveness using the communication cost incurred by the subgraph in G that only involves X'. We study two variants of the problem for two different communication-cost functions, and show that both variants are NP-hard. We explore their connections with existing combinatorial problems and give novel algorithms for their solution. To the best of our knowledge, this is the first work to consider the TEAM FORMATION problem in the presence of a social network of individuals. Experiments on the DBLP dataset show that our framework works well in practice and gives useful and intuitive results.", "which dataset ?", "DBLP", 893.0, 897.0], ["Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.", "which dataset ?", "C-NMC-2019", 835.0, 845.0], ["A rapidly growing amount of content posted online, such as food recipes, opens doors to new exciting applications at the intersection of vision and language. In this work, we aim to estimate the calorie amount of a meal directly from an image by learning from recipes people have published on the Internet, thus skipping time-consuming manual data annotation. Since there are few large-scale publicly available datasets captured in unconstrained environments, we propose the pic2kcal benchmark comprising 308 000 images from over 70 000 recipes including photographs, ingredients, and instructions.
To obtain nutritional information of the ingredients and automatically determine the ground-truth calorie value, we match the items in the recipes with structured information from a food item database. We evaluate various neural networks for regression of the calorie quantity and extend them with the multi-task paradigm. Our learning procedure combines the calorie estimation with prediction of proteins, carbohydrates, and fat amounts as well as a multi-label ingredient classification. Our experiments demonstrate clear benefits of multi-task learning for calorie estimation, surpassing the single-task calorie regression by 9.9%. To encourage further research on this task, we make the code for generating the dataset and the models publicly available.", "which dataset ?", "pic2kcal", 475.0, 483.0], ["Name ambiguity in the context of bibliographic citation affects the quality of services in digital libraries. Previous methods are not widely applied in practice because of their high computational complexity and their strong dependency on excessive attributes, such as institutional affiliation, research area, address, etc., which are difficult to obtain in practice. To solve this problem, we propose a novel coarse\u2010to\u2010fine framework for name disambiguation which sequentially employs 3 common and easily accessible attributes (i.e., coauthor name, article title, and publication venue). Our proposed framework is based on multiple clustering and consists of 3 steps: (a) clustering articles by coauthorship and obtaining rough clusters, that is fragments; (b) clustering fragments obtained in step 1 by title information and getting bigger fragments; (c) and clustering fragments obtained in step 2 by the latent relations among venues. Experimental results on a Digital Bibliography and Library Project (DBLP) data set show that our method outperforms the existing state\u2010of\u2010the\u2010art methods by 2.4% to 22.7% on the average pairwise F1 score and is 10 to 100 times faster in terms of execution time.", "which dataset ?", "DBLP", 1009.0, 1013.0], ["Although research and practice has attributed considerable attention to Enterprise Resource Planning (ERP) projects their failure rate is still high. There are two main fields of research, which aim at increasing the success rate of ERP projects: Research on risk factors and research on success factors. Despite their topical relatedness, efforts to integrate these two fields have been rare. Against this background, this paper analyzes 68 articles dealing with risk and success factors and categorizes all identified factors into twelve categories. Though some topics are equally important in risk and success factor research, the literature on risk factors emphasizes topics which ensure achieving budget, schedule and functionality targets. In contrast, the literature on success factors concentrates more on strategic and organizational topics. We argue that both fields of research cover important aspects of project success. The paper concludes with the presentation of a possible holistic consideration to integrate both, the understanding of risk and success factors.", "which has research problem ?", "Enterprise resource planning", 72.0, 100.0], ["The Web is the most used Internet's service to create and share information. In large information collections, Knowledge Organization plays a key role in order to classify and to find valuable information. Likewise, Linked Open Data is a powerful approach for linking different Web datasets. 
Today, several Knowledge Organization Systems are published by using the design criteria of linked data, it facilitates the automatic processing of them. In this paper, we address the issue of traversing open Knowledge Organization Systems, considering difficulties associated with their dynamics and size. To fill this issue, we propose a method to identify irrelevant nodes on an open graph, thus reducing the time and the scope of the graph path and maximizing the possibilities of finding more relevant results. The approach for graph reduction is independent of the domain or task for which the open system will be used. The preliminary results of the proof of concept lead us to think that the method can be effective when the coverage of the concept of interest increases.", "which has research problem ?", "Knowledge Organization", 111.0, 133.0], ["Increasing global cooperation, vertical disintegration and a focus on core activities have led to the notion that firms are links in a networked supply chain. This strategic viewpoint has created the challenge of coordinating effectively the entire supply chain, from upstream to downstream activities. While supply chains have existed ever since businesses have been organized to bring products and services to customers, the notion of their competitive advantage, and consequently supply chain management (SCM), is a relatively recent thinking in management literature. Although research interests in and the importance of SCM are growing, scholarly materials remain scattered and disjointed, and no research has been directed towards a systematic identification of the core initiatives and constructs involved in SCM. Thus, the purpose of this study is to develop a research framework that improves understanding of SCM and stimulates and facilitates researchers to undertake both theoretical and empirical investigation on the critical constructs of SCM, and the exploration of their impacts on supply chain performance. To this end, we analyse over 400 articles and synthesize the large, fragmented body of work dispersed across many disciplines such as purchasing and supply, logistics and transportation, marketing, organizational dynamics, information management, strategic management, and operations management literature.", "which has research problem ?", "Supply chain management", 483.0, 506.0], ["E-learning recommender systems are gaining significance nowadays due to its ability to enhance the learning experience by providing tailor-made services based on learner preferences. A Personalized Learning Environment (PLE) that automatically adapts to learner characteristics such as learning styles and knowledge level can recommend appropriate learning resources that would favor the learning process and improve learning outcomes. The pure cold-start problem is a relevant issue in PLEs, which arises due to the lack of prior information about the new learner in the PLE to create appropriate recommendations. This article introduces a semantic framework based on ontology to address the pure cold-start problem in content recommenders. The ontology encapsulates the domain knowledge about the learners as well as Learning Objects (LOs). The semantic model that we built has been experimented with different combinations of the key learner parameters such as learning style, knowledge level, and background knowledge. The proposed framework utilizes these parameters to build natural learner groups from the learner ontology using SPARQL queries. 
The ontology holds 480 learners\u2019 data, 468 annotated learning objects with 5,600 learner ratings. A multivariate k-means clustering algorithm, an unsupervised machine learning technique for grouping similar data, is used to evaluate the learner similarity computation accuracy. The learner satisfaction achieved with the proposed model is measured based on the ratings given by the 40 participants of the experiments. From the evaluation perspective, it is evident that 79% of the learners are satisfied with the recommendations generated by the proposed model in pure cold-start condition.", "which has research problem ?", "Cold-start Problem", 445.0, 463.0], ["Extraction of relevant features from high-dimensional multi-way functional MRI (fMRI) data is essential for the classification of a cognitive task. In general, fMRI records a combination of neural activation signals and several other noisy components. Alternatively, fMRI data is represented as a high dimensional array using a number of voxels, time instants, and snapshots. The organisation of fMRI data includes a number of Region Of Interests (ROI), snapshots, and thousand of voxels. The crucial step in cognitive task classification is a reduction of feature size through feature selection. Extraction of a specific pattern of interest within the noisy components is a challenging task. Tensor decomposition techniques have found several applications in the scientific fields. In this paper, a novel tensor gradient-based feature extraction technique for cognitive task classification is proposed. The technique has efficiently been applied on StarPlus fMRI data. Also, the technique has been used to discriminate the ROIs in fMRI data in terms of cognitive state classification. The method has achieved a better average accuracy when compared to other existing feature extraction methods.", "which has research problem ?", "Cognitive state classification", 1054.0, 1084.0], ["This paper presents the IJCNLP 2017 shared task on Dimensional Sentiment Analysis for Chinese Phrases (DSAP) which seeks to identify a real-value sentiment score of Chinese single words and multi-word phrases in the both valence and arousal dimensions. Valence represents the degree of pleasant and unpleasant (or positive and negative) feelings, and arousal represents the degree of excitement and calm. Of the 19 teams registered for this shared task for two-dimensional sentiment analysis, 13 submitted results. We expected that this evaluation campaign could produce more advanced dimensional sentiment analysis techniques, especially for Chinese affective computing. All data sets with gold standards and scoring script are made publicly available to researchers.", "which has research problem ?", "Dimensional Sentiment Analysis", 51.0, 81.0], ["We present a system for named entity recognition (ner) in astronomy journal articles. We have developed this system on a ne corpus comprising approximately 200,000 words of text from astronomy articles. These have been manually annotated with \u223c40 entity types of interest to astronomers. We report on the challenges involved in extracting the corpus, defining entity classes and annotating scientific text. We investigate which features of an existing state-of-the-art Maximum Entropy approach perform well on astronomy text. Our system achieves an F-score of 87.8%.", "which has research problem ?", "Named Entity Recognition", 24.0, 48.0], ["Eye localization is necessary for face recognition and related application areas.
Most of eye localization algorithms reported thus far still need to be improved about precision and computational time for successful applications. In this paper, we propose an improved eye localization method based on multi-scale Gabor feature vector models. The proposed method first tries to locate eyes in the downscaled face image by utilizing Gabor Jet similarity between Gabor feature vector at an initial eye coordinates and the eye model bunch of the corresponding scale. The proposed method finally locates eyes in the original input face image after it processes in the same way recursively in each scaled face image by using the eye coordinates localized in the downscaled image as initial eye coordinates. Experiments verify that our proposed method improves the precision rate without causing much computational overhead compared with other eye localization methods reported in the previous researches.", "which has research problem ?", "Eye localization", 0.0, 16.0], ["PurposeTo develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel.MethodsChitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231).ResultsThese studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX.ConclusionThese advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.", "which has research problem ?", "Breast cancer", 661.0, 674.0], ["Transformers have been recently adapted for large scale image classification, achieving high scores shaking up the long supremacy of convolutional neural networks. However the optimization of vision transformers has been little studied so far. In this work, we build and optimize deeper transformer networks for image classification. In particular, we investigate the interplay of architecture and optimization of such dedicated transformers. We make two architecture changes that significantly improve the accuracy of deep transformers. This leads us to produce models whose performance does not saturate early with more depth, for instance we obtain 86.5% top-1 accuracy on Imagenet when training with no external data, we thus attain the current state of the art with less floating-point operations and parameters. Our best model establishes the new state of the art on Imagenet with Reassessed labels and Imagenet-V2 / match frequency, in the setting with no additional training data.
We share our code and models.", "which has research problem ?", "Image Classification", 56.0, 76.0], ["Abstract We introduce an architecture to learn joint multilingual sentence representations for 93 languages, belonging to more than 30 different families and written in 28 different scripts. Our system uses a single BiLSTM encoder with a shared byte-pair encoding vocabulary for all languages, which is coupled with an auxiliary decoder and trained on publicly available parallel corpora. This enables us to learn a classifier on top of the resulting embeddings using English annotated data only, and transfer it to any of the 93 languages without any modification. Our experiments in cross-lingual natural language inference (XNLI data set), cross-lingual document classification (MLDoc data set), and parallel corpus mining (BUCC data set) show the effectiveness of our approach. We also introduce a new test set of aligned sentences in 112 languages, and show that our sentence embeddings obtain strong results in multilingual similarity search even for low-resource languages. Our implementation, the pre-trained encoder, and the multilingual test set are available at https://github.com/facebookresearch/LASER.", "which has research problem ?", "Cross-Lingual Document Classification", 643.0, 680.0], ["Achieving efficient and scalable exploration in complex domains poses a major challenge in reinforcement learning. While Bayesian and PAC-MDP approaches to the exploration problem offer strong formal guarantees, they are often impractical in higher dimensions due to their reliance on enumerating the state-action space. Hence, exploration in complex domains is often performed with simple epsilon-greedy methods. In this paper, we consider the challenging Atari games domain, which requires processing raw pixel inputs and delayed rewards. We evaluate several more sophisticated exploration strategies, including Thompson sampling and Boltzmann exploration, and propose a new exploration method based on assigning exploration bonuses from a concurrently learned model of the system dynamics. By parameterizing our learned model with a neural network, we are able to develop a scalable and efficient approach to exploration bonuses that can be applied to tasks with complex, high-dimensional state spaces. In the Atari domain, our method provides the most consistent improvement across a range of games that pose a major challenge for prior methods. In addition to raw game-scores, we also develop an AUC-100 metric for the Atari Learning domain to evaluate the impact of exploration on this benchmark.", "which has research problem ?", "Atari Games", 457.0, 468.0], ["A key challenge for manufacturers today is efficiently producing and delivering products on time. Issues include demand for customized products, changes in orders, and equipment status change, complicating the decision-making process. A real-time digital representation of the manufacturing operation would help address these challenges. Recent technology advancements of smart sensors, IoT, and cloud computing make it possible to realize a "digital twin" of a manufacturing system or process. Digital twins or surrogates are data-driven virtual representations that replicate, connect, and synchronize the operation of a manufacturing system or process. They utilize dynamically collected data to track system behaviors, analyze performance, and help make decisions without interrupting production.
In this paper, we define digital surrogate, explore their relationships to simulation, digital thread, artificial intelligence, and IoT. We identify the technology and standard requirements and challenges for implementing digital surrogates. A production planning case is used to exemplify the digital surrogate concept.", "which has research problem ?", "digital twin", 443.0, 455.0], ["Balancing assembly lines, a family of optimization problems commonly known as Assembly Line Balancing Problem, is notoriously NP-Hard. They comprise a set of problems of enormous practical interest to manufacturing industry due to the relevant frequency of this type of production paradigm. For this reason, many researchers on Computational Intelligence and Industrial Engineering have been conceiving algorithms for tackling different versions of assembly line balancing problems utilizing different methodologies. In this article, it was proposed a problem version referred as Mixed Model Workplace Time-dependent Assembly Line Balancing Problem with the intention of including pressing issues of real assembly lines in the optimization problem, to which four versions were conceived. Heuristic search procedures were used, namely two Swarm Intelligence algorithms from the Fish School Search family: the original version, named \"vanilla\", and a special variation including a stagnation avoidance routine. Either approaches solved the newly posed problem achieving good results when compared to Particle Swarm Optimization algorithm.", "which has research problem ?", "Optimization problem", 727.0, 747.0], ["The term \"middle-income trap\" has entered common parlance in the development policy community, despite the lack of a precise definition. This paper discusses in more detail the definitional issues associated with the term. It also provides evidence on whether the growth performance of middle-income countries (MICs) has been different from other income categories, including historical transition phases in the inter-country distribution of income. A transition matrix analysis and an exploration of cross-country growth patterns provide little support for the existence of a middle-income trap.", "which has research problem ?", "Middle-Income Trap", 10.0, 28.0], ["CONTEXT Both antidepressant medication and structured psychotherapy have been proven efficacious, but less than one third of people with depressive disorders receive effective levels of either treatment. OBJECTIVE To compare usual primary care for depression with 2 intervention programs: telephone care management and telephone care management plus telephone psychotherapy. DESIGN Three-group randomized controlled trial with allocation concealment and blinded outcome assessment conducted between November 2000 and May 2002. SETTING AND PARTICIPANTS A total of 600 patients beginning antidepressant treatment for depression were systematically sampled from 7 group-model primary care clinics; patients already receiving psychotherapy were excluded. INTERVENTIONS Usual primary care; usual care plus a telephone care management program including at least 3 outreach calls, feedback to the treating physician, and care coordination; usual care plus care management integrated with a structured 8-session cognitive-behavioral psychotherapy program delivered by telephone. 
MAIN OUTCOME MEASURES Blinded telephone interviews at 6 weeks, 3 months, and 6 months assessed depression severity (Hopkins Symptom Checklist Depression Scale and the Patient Health Questionnaire), patient-rated improvement, and satisfaction with treatment. Computerized administrative data examined use of antidepressant medication and outpatient visits. RESULTS Treatment participation rates were 97% for telephone care management and 93% for telephone care management plus psychotherapy. Compared with usual care, the telephone psychotherapy intervention led to lower mean Hopkins Symptom Checklist Depression Scale depression scores (P =.02), a higher proportion of patients reporting that depression was "much improved" (80% vs 55%, P<.001), and a higher proportion of patients "very satisfied" with depression treatment (59% vs 29%, P<.001). The telephone care management program had smaller effects on patient-rated improvement (66% vs 55%, P =.04) and satisfaction (47% vs 29%, P =.001); effects on mean depression scores were not statistically significant. CONCLUSIONS For primary care patients beginning antidepressant treatment, a telephone program integrating care management and structured cognitive-behavioral psychotherapy can significantly improve satisfaction and clinical outcomes. These findings suggest a new public health model of psychotherapy for depression including active outreach and vigorous efforts to improve access to and motivation for treatment.", "which has research problem ?", "Psychotherapy for Depression", 2423.0, 2451.0], ["We introduce a solution-processed copper tin sulfide (CTS) thin film to realize high-performance of thin-film transistors (TFT) by optimizing the CTS precursor solution concentration.", "which has research problem ?", "Performance of thin-film transistors", 85.0, 121.0], ["We present the results of the Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge, aiming to bring together researchers in educational NLP technology and textual entailment. The task of giving feedback on student answers requires semantic inference and therefore is related to recognizing textual entailment. Thus, we offered to the community a 5-way student response labeling task, as well as 3-way and 2-way RTE-style tasks on educational data. In addition, a partial entailment task was piloted. We present and compare results from 9 participating teams, and discuss future directions.", "which has research problem ?", " Joint Student Response Analysis", 29.0, 61.0], ["Breast cancer is a major form of cancer, with a high mortality rate in women. It is crucial to achieve more efficient and safe anticancer drugs. Recent developments in medical nanotechnology have resulted in novel advances in cancer drug delivery. Cisplatin, doxorubicin, and 5-fluorouracil are three important anti-cancer drugs which have poor water-solubility. In this study, we used cisplatin, doxorubicin, and 5-fluorouracil-loaded polycaprolactone-polyethylene glycol (PCL-PEG) nanoparticles to improve the stability and solubility of molecules in drug delivery systems. The nanoparticles were prepared by a double emulsion method and characterized with Fourier Transform Infrared (FTIR) spectroscopy and Hydrogen-1 nuclear magnetic resonance (1HNMR). Cells were treated with equal concentrations of cisplatin, doxorubicin and 5-fluorouracil-loaded PCL-PEG nanoparticles, and free cisplatin, doxorubicin and 5-fluorouracil. The 3-[4,5-dimethylthiazol-2yl]-2,5-diphenyl tetrazolium bromide (MTT) assay confirmed that cisplatin, doxorubicin, and 5-fluorouracil-loaded PCL-PEG nanoparticles enhanced cytotoxicity and drug delivery in T47D and MCF7 breast cancer cells. However, the IC50 value of doxorubicin was lower than the IC50 values of both cisplatin and 5-fluorouracil, where the difference was statistically considered significant (p<0.05). However, the IC50 value of all drugs on T47D were lower than those on MCF7.", "which has research problem ?", "Breast cancer", 0.0, 13.0], ["Properly generated test suites may not only locate the defects in software systems, but also help in reducing the high cost associated with software testing. It is often desired that test sequences in a test suite can be automatically generated to achieve required test coverage. However, automatic test sequence generation remains a major problem in software testing. This paper proposes an ant colony optimization approach to automatic test sequence generation for state-based software testing. The proposed approach can directly use UML artifacts to automatically generate test sequences to achieve required test coverage.", "which has research problem ?", "Ant Colony Optimization", 392.0, 415.0], ["The paper examines the impact of exchange rate volatility on the exports of five Asian countries. The countries are Turkey, South Korea, Malaysia, Indonesia and Pakistan. The impact of a volatility term on exports is examined by using an Engle-Granger residual-based cointegrating technique. The results indicate that the exchange rate volatility reduced real exports for these countries. This might mean that producers in these countries are risk-averse.
The producers will prefer to sell in domestic markets rather than foreign markets if the exchange rate volatility increases.", "which has research problem ?", "Exchange rate volatility", 33.0, 57.0], ["Abstract As the term \u201csmart city\u201d gains wider and wider currency, there is still confusion about what a smart city is, especially since several similar terms are often used interchangeably. This paper aims to clarify the meaning of the word \u201csmart\u201d in the context of cities through an approach based on an in-depth literature review of relevant studies as well as official documents of international institutions. It also identifies the main dimensions and elements characterizing a smart city. The different metrics of urban smartness are reviewed to show the need for a shared definition of what constitutes a smart city, what are its features, and how it performs in comparison to traditional cities. Furthermore, performance measures and initiatives in a few smart cities are identified.", "which has research problem ?", "Smart cities", 763.0, 775.0], ["Purpose \u2013 The purpose of this paper is first, to develop a methodological framework for conducting a comprehensive literature review on an empirical phenomenon based on a vast amount of papers published. Second, to use this framework to gain an understanding of the current state of the enterprise resource planning (ERP) research field, and third, based on the literature review, to develop a conceptual framework identifying areas of concern with regard to ERP systems.Design/methodology/approach \u2013 Abstracts from 885 peer\u2010reviewed journal publications from 2000 to 2009 have been analysed according to journal, authors and year of publication, and further categorised into research discipline, research topic and methods used, using the structured methodological framework.Findings \u2013 The body of academic knowledge about ERP systems has reached a certain maturity and several different research disciplines have contributed to the field from different points of view using different methods, showing that the ERP rese...", "which has research problem ?", "Enterprise resource planning", 287.0, 315.0], ["The paper describes a probabilistic active learning strategy for support vector machine (SVM) design in large data applications. The learning strategy is motivated by the statistical query model. While most existing methods of active SVM learning query for points based on their proximity to the current separating hyperplane, the proposed method queries for a set of points according to a distribution as determined by the current separating hyperplane and a newly defined concept of an adaptive confidence factor. This enables the algorithm to have more robust and efficient learning capabilities. The confidence factor is estimated from local information using the k nearest neighbor principle. The effectiveness of the method is demonstrated on real-life data sets both in terms of generalization performance, query complexity, and training time.", "which has research problem ?", "Active learning", 36.0, 51.0], ["Introduction As a way to improve student academic performance, educators have begun paying special attention to computer games (Gee, 2005; Oblinger, 2006). Reflecting the interests of the educators, studies have been conducted to explore the effects of computer games on student achievement. 
However, there has been no consensus on the effects of computer games: Some studies support computer games as educational resources to promote students' learning (Annetta, Mangrum, Holmes, Collazo, & Cheng, 2009; Vogel et al., 2006). Other studies have found no significant effects on the students' performance in school, especially in math achievement of elementary school students (Ke, 2008). Researchers have also been interested in the differential effects of computer games between gender groups. While several studies have reported various gender differences in the preferences of computer games (Agosto, 2004; Kinzie & Joseph, 2008), a few studies have indicated no significant differential effect of computer games between genders and asserted generic benefits for both genders (Vogel et al., 2006). To date, the studies examining computer games and gender interaction are far from conclusive. Moreover, there is a lack of empirical studies examining the differential effects of computer games on the academic performance of diverse learners. These learners included linguistic minority students who speak languages other than English. Recent trends in the K-12 population feature the increasing enrollment of linguistic minority students, whose population reached almost four million (NCES, 2004). These students have been a grave concern for American educators because of their reported low performance. In response, this study empirically examined the effects of math computer games on the math performance of 4th-graders with focused attention on differential effects for gender and linguistic groups. To achieve greater generalizability of the study findings, the study utilized a US nationally representative database--the 2005 National Assessment of Educational Progress (NAEP). The following research questions guided the current study: 1. Are computer games in math classes associated with the 4th-grade students' math performance? 2. How does the relationship differ by linguistic group? 3. How does the association vary by gender? 4. Is there an interaction effect of computer games on linguistic and gender groups? In other words, how does the effect of computer games on linguistic groups vary by gender group? Literature review Academic performance and computer games According to DeBell and Chapman (2004), of 58,273,000 students of nursery and K-12 school age in the USA, 56% of students played computer games. Along with the popularity among students, computer games have received a lot of attention from educators as a potential way to provide learners with effective and fun learning environments (Oblinger, 2006). Gee (2005) agreed that a game would turn out to be good for learning when the game is built to incorporate learning principles. Some researchers have also supported the potential of games for affective domains of learning and fostering a positive attitude towards learning (Ke, 2008; Ke & Grabowski, 2007; Vogel et al., 2006). For example, based on the study conducted on 1,274 1st- and 2nd-graders, Rosas et al. (2003) found a positive effect of educational games on the motivation of students. Although there is overall support for the idea that games have a positive effect on affective aspects of learning, there have been mixed research results regarding the role of games in promoting cognitive gains and academic achievement. In the meta-analysis, Vogel et al. 
(2006) examined 32 empirical studies and concluded that the inclusion of games for students' learning resulted in significantly higher cognitive gains compared with traditional teaching methods without games. \u2026", "which has research problem ?", "Educational Games", 3379.0, 3396.0], ["Software design is a process of trading off competing objectives. If the user objective space is rich, then we should use optimizers that can fully exploit that richness. For example, this study configures software product lines (expressed as feature maps) using various search-based software engineering methods. As we increase the number of optimization objectives, we find that methods in widespread use (e.g. NSGA-II, SPEA2) perform much worse than IBEA (Indicator-Based Evolutionary Algorithm). IBEA works best since it makes most use of user preference knowledge. Hence it does better on the standard measures (hypervolume and spread) but it also generates far more products with 0% violations of domain constraints. Our conclusion is that we need to change our methods for search-based software engineering, particularly when studying complex decision spaces.", "which has research problem ?", "Search-Based Software Engineering", 271.0, 304.0], ["Nowadays, the enormous volume of health and fitness data gathered from IoT wearable devices offers favourable opportunities to the research community. For instance, it can be exploited using sophisticated data analysis techniques, such as automatic reasoning, to find patterns and, extract information and new knowledge in order to enhance decision-making and deliver better healthcare. However, due to the high heterogeneity of data representation formats, the IoT healthcare landscape is characterised by an ubiquitous presence of data silos which prevents users and clinicians from obtaining a consistent representation of the whole knowledge. Semantic web technologies, such as ontologies and inference rules, have been shown as a promising way for the integration and exploitation of data from heterogeneous sources. In this paper, we present a semantic data model useful to: (1) consistently represent health and fitness data from heterogeneous IoT sources; (2) integrate and exchange them; and (3) enable automatic reasoning by inference engines.", "which has research problem ?", "consistently represent health and fitness data from heterogeneous IoT sources", 885.0, 962.0], ["Common-sense or background knowledge is required to understand natural language, but in most neural natural language understanding (NLU) systems, the requisite background knowledge is indirectly acquired from static corpora. We develop a new reading architecture for the dynamic integration of explicit background knowledge in NLU models. A new task-agnostic reading module provides refined word representations to a task-specific NLU architecture by processing background knowledge in the form of free-text statements, together with the task-specific inputs. Strong performance on the tasks of document question answering (DQA) and recognizing textual entailment (RTE) demonstrate the effectiveness and flexibility of our approach. Analysis shows that our models learn to exploit knowledge selectively and in a semantically appropriate way.", "which has research problem ?", "Question Answering", 604.0, 622.0], ["The effect of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids) on the microbial mineralisation of phenanthrene in soil slurry by the indigenous microbial community has been investigated. 
The rate and extent of 14C\u2013phenanthrene mineralisation in artificially spiked soils were monitored in the absence of hydroxycinnamic acids and presence of hydroxycinnamic acids applied at three different concentrations (50, 100 and 200 \u00b5g kg-1) either as single compounds or as a mixture of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids at a 1:1:1 ratio). The highest extent of 14C\u2013phenanthrene mineralisation (P 200 \u00b5g kg-1. Depending on its concentration in soil, hydroxycinnamic acids can either stimulate or inhibit mineralisation of phenanthrene by indigenous soil microbial community. Therefore, effective understanding of phytochemical\u2013microbe\u2013organic contaminant interactions is essential for further development of phytotechnologies for remediation of PAH\u2013contaminated soils.", "which has research problem ?", "The effect of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids) on the microbial mineralisation of phenanthrene in soil slurry by the indigenous microbial community has been investigated.", NaN, NaN], ["The body of research relating to the implementation of enterprise resource planning (ERP) systems in small- and medium-sized enterprises (SMEs) has been increasing rapidly over the last few years. It is important, particularly for SMEs, to recognize the elements for a successful ERP implementation in their environments. This research aims to examine the critical elements that constitute a successful ERP implementation in SMEs. The objective is to identify the constituents within the critical elements. A comprehensive literature review and interviews with eight SMEs in the UK were carried out. The results serve as the basic input into the formation of the critical elements and their constituents. Three main critical elements are formed: critical success factors, critical people and critical uncertainties. Within each critical element, the related constituents are identified. Using the process theory approach, the constituents within each critical element are linked to their specific phase(s) of ERP implementation. Ten constituents for critical success factors were found, nine constituents for critical people and 21 constituents for critical uncertainties. The research suggests that a successful ERP implementation often requires the identification and management of the critical elements and their constituents at each phase of implementation. The results are constructed as a reference framework that aims to provide researchers and practitioners with indicators and guidelines to improve the success rate of ERP implementation in SMEs.", "which has research problem ?", "Enterprise resource planning", 55.0, 83.0], ["Abstract: We investigate multiple techniques to improve upon the current state of the art deep convolutional neural network based image classification pipeline. The techniques include adding more image transformations to training data, adding more transformations to generate additional predictions at test time and using complementary models applied to higher resolution images. This paper summarizes our entry in the Imagenet Large Scale Visual Recognition Challenge 2013. Our system achieved a top 5 classification error rate of 13.55% using no external data which is over a 20% relative improvement on the previous year's winner.", "which has research problem ?", "Image Classification", 130.0, 150.0], ["This study represents two critical steps forward in the area of smart city research and practice. 
The first is in the form of the development of a comprehensive conceptualization of smart city as a resource for researchers and government practitioners; the second is in the form of the creation of a bridge between smart cities research and practice expertise. City governments increasingly need innovative arrangements to solve a variety of technical, physical, and social problems. \"Smart city\" could be used to represent efforts that in many ways describe a vision of a city, but there is little clarity about this new concept. This paper proposes a comprehensive conceptualization of smart city, including its main components and several specific elements. Academic literature is used to create a robust framework, while a review of practical tools is used to identify specific elements or aspects not treated in the academic studies, but essential to create an integrative and comprehensive conceptualization of smart city. The paper also provides policy implications and suggests areas for future research in this topic.", "which has research problem ?", "Smart cities", 317.0, 329.0], ["The MEDIQA 2021 shared tasks at the BioNLP 2021 workshop addressed three tasks on summarization for medical text: (i) a question summarization task aimed at exploring new approaches to understanding complex real-world consumer health queries, (ii) a multi-answer summarization task that targeted aggregation of multiple relevant answers to a biomedical question into one concise and relevant answer, and (iii) a radiology report summarization task addressing the development of clinically relevant impressions from radiology report findings. Thirty-five teams participated in these shared tasks with sixteen working notes submitted (fifteen accepted) describing a wide variety of models developed and tested on the shared and external datasets. In this paper, we describe the tasks, the datasets, the models and techniques developed by various teams, the results of the evaluation, and a study of correlations among various summarization evaluation measures. We hope that these shared tasks will bring new research and insights in biomedical text summarization and evaluation.", "which has research problem ?", "Summarization", 82.0, 95.0], ["Research on definition extraction has been conducted for well over a decade, largely with significant constraints on the type of definitions considered. In this work, we present DeftEval, a SemEval shared task in which participants must extract definitions from free text using a term-definition pair corpus that reflects the complex reality of definitions in natural language. Definitions and glosses in free text often appear without explicit indicators, across sentence boundaries, or in an otherwise complex linguistic manner. DeftEval involved 3 distinct subtasks: 1) Sentence classification, 2) sequence labeling, and 3) relation extraction.", "which has research problem ?", "Relation extraction", 628.0, 647.0], ["Rapid industrial modernisation and economic reform have been features of the Korean economy since the 1990s, and have brought with it substantial environmental problems. In response to these problems, the Korean government has been developing approaches to promote cleaner production technologies. Green supply chain management (GSCM) is emerging to be an important approach for Korean enterprises to improve performance. 
The purpose of this study is to examine the impact of GSCM CSFs (critical success factors) on the BSC (balanced scorecard) performance by the structural equation modelling, using empirical results from 249 enterprise respondents involved in national GSCM business in Korea. Planning and implementation was a dominant antecedent factor in this study, followed by collaboration with partners and integration of infrastructure. However, activation of support was a negative impact to the finance performance, raising the costs and burdens. It was found out that there were important implications in the implementation of GSCM.", "which has research problem ?", "Supply chain management", 304.0, 327.0], ["Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.", "which has research problem ?", "Atari Games", 641.0, 652.0], ["Convolutional neural networks (CNNs) have recently emerged as a popular building block for natural language processing (NLP). Despite their success, most existing CNN models employed in NLP share the same learned (and static) set of filters for all input sentences. In this paper, we consider an approach of using a small meta network to learn context-sensitive convolutional filters for text processing. The role of meta network is to abstract the contextual information of a sentence or document into a set of input-sensitive filters. We further generalize this framework to model sentence pairs, where a bidirectional filter generation mechanism is introduced to encapsulate co-dependent sentence representations. In our benchmarks on four different tasks, including ontology classification, sentiment analysis, answer sentence selection, and paraphrase identification, our proposed model, a modified CNN with context-sensitive filters, consistently outperforms the standard CNN and attention-based CNN baselines. By visualizing the learned context-sensitive filters, we further validate and rationalize the effectiveness of proposed framework.", "which has research problem ?", "Text Processing", 388.0, 403.0], ["We describe the shared task for the CLPsych 2018 workshop, which focused on predicting current and future psychological health from an essay authored in childhood. Language-based predictions of a person\u2019s current health have the potential to supplement traditional psychological assessment such as questionnaires, improving intake risk measurement and monitoring. Predictions of future psychological health can aid with both early detection and the development of preventative care. Research into the mental health trajectory of people, beginning from their childhood, has thus far been an area of little work within the NLP community. 
This shared task represents one of the first attempts to evaluate the use of early language to predict future health; this has the potential to support a wide variety of clinical health care tasks, from early assessment of lifetime risk for mental health problems, to optimal timing for targeted interventions aimed at both prevention and treatment.", "which has research problem ?", "Predicting Current and Future Psychological Health", 76.0, 126.0], ["Internet of Things (IoT) covers a variety of applications including the Healthcare field. Consequently, medical objects become connected to each other with the purpose to share and exchange health data. These medical connected objects raise issues on how to ensure the analysis, interpretation and semantic interoperability of the extensive obtained health data with the purpose to make an appropriate decision. This paper proposes a HealthIoT ontology for representing the semantic interoperability of the medical connected objects and their data; while an algorithm alleviates the analysis of the detected vital signs and the decision-making of the doctor. The execution of this algorithm needs the definition of several SWRL rules (Semantic Web Rule Language).", "which has research problem ?", "semantic interoperability of the medical connected objects and their data", 474.0, 547.0], ["The Implementation of Enterprise Resource Planning ERP systems require huge investments while ineffective implementations of such projects are commonly observed. A considerable number of these projects have been reported to fail or take longer than it was initially planned, while previous studies show that the aim of rapid implementation of such projects has not been successful and the failure of the fundamental goals in these projects have imposed huge amounts of costs on investors. Some of the major consequences are the reduction in demand for such products and the introduction of further skepticism to the managers and investors of ERP systems. In this regard, it is important to understand the factors determining success or failure of ERP implementation. The aim of this paper is to study the critical success factors CSFs in implementing ERP systems and to develop a conceptual model which can serve as a basis for ERP project managers. These critical success factors that are called \"core critical success factors\" are extracted from 62 published papers using the content analysis and the entropy method. The proposed conceptual model has been verified in the context of five multinational companies.", "which has research problem ?", "Enterprise resource planning", 22.0, 50.0], ["This paper grounds the critique of the \u2018smart city\u2019 in its historical and geographical context. Adapting Brenner and Theodore\u2019s notion of \u2018actually existing neoliberalism\u2019, we suggest a greater attention be paid to the \u2018actually existing smart city\u2019, rather than the exceptional or paradigmatic smart cities of Songdo, Masdar and Living PlanIT Valley. 
Through a closer analysis of cases in Louisville and Philadelphia, we demonstrate the utility of understanding the material effects of these policies in actual cities around the world, with a particular focus on how and from where these policies have arisen, and how they have unevenly impacted the places that have adopted them.", "which has research problem ?", "Smart cities", 295.0, 307.0], ["Recommender Systems have been widely used to help users in finding what they are looking for thus tackling the information overload problem. After several years of research and industrial findings looking after better algorithms to improve accuracy and diversity metrics, explanation services for recommendation are gaining momentum as a tool to provide a human-understandable feedback to results computed, in most of the cases, by black-box machine learning techniques. As a matter of fact, explanations may guarantee users satisfaction, trust, and loyalty in a system. In this paper, we evaluate how different information encoded in a Knowledge Graph are perceived by users when they are adopted to show them an explanation. More precisely, we compare how the use of categorical information, factual one or a mixture of them both in building explanations, affect explanatory criteria for a recommender system. Experimental results are validated through an A/B testing platform which uses a recommendation engine based on a Semantics-Aware Autoencoder to build users profiles which are in turn exploited to compute recommendation lists and to provide an explanation.", "which has research problem ?", "Recommender Systems", 0.0, 19.0], ["We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Tranformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and other recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 3.5 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also replicate other pretraining schemes within the BART framework, to understand their effect on end-task performance.", "which has research problem ?", "Question Answering", 938.0, 956.0], ["Many digital libraries recommend literature to their users considering the similarity between a query document and their repository. However, they often fail to distinguish what is the relationship that makes two documents alike. In this paper, we model the problem of finding the relationship between two documents as a pairwise document classification task. 
To find the semantic relation between documents, we apply a series of techniques, such as GloVe, Paragraph Vectors, BERT, and XLNet under different configurations (e.g., sequence length, vector concatenation scheme), including a Siamese architecture for the Transformer-based systems. We perform our experiments on a newly proposed dataset of 32,168 Wikipedia article pairs and Wikidata properties that define the semantic document relations. Our results show vanilla BERT as the best performing system with an F1-score of 0.93, which we manually examine to better understand its applicability to other domains. Our findings suggest that classifying semantic relations between documents is a solvable task and motivates the development of a recommender system based on the evaluated techniques. The discussions in this paper serve as first steps in the exploration of documents through SPARQL-like queries such that one could find documents that are similar in one aspect but dissimilar in another.", "which has research problem ?", "Document classification", 330.0, 353.0], ["This paper presents the task definition, resources, and the single participant system for Task 12: Turkish Lexical Sample Task (TLST), which was organized in the SemEval-2007 evaluation exercise. The methodology followed for developing the specific linguistic resources necessary for the task has been described in this context. A language-specific feature set was defined for Turkish. TLST consists of three pieces of data: The dictionary, the training data, and the evaluation data. Finally, a single system that utilizes a simple statistical method was submitted for the task and evaluated.", "which has research problem ?", "Turkish Lexical Sample Task", 99.0, 126.0], ["Current approaches to building knowledge-based systems propose the development of an ontology as a precursor to building the problem-solver. This paper outlines an attempt to do the reverse and discover interesting ontologies from systems built without the ontology being explicit. In particular the paper considers large classification knowledge bases used for the interpretation of medical chemical pathology results and built using Ripple-Down Rules (RDR). The rule conclusions in these knowledge bases provide free-text interpretations of the results rather than explicit classes. The goal is to discover implicit ontological relationships between these interpretations as the system evolves. RDR allows for incremental development and the goal is that the ontology emerges as the system evolves. The results suggest that approach has potential, but further investigation is required before strong claims can be made.", "which has research problem ?", " discover implicit ontological relationships", 599.0, 643.0], ["The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents\u2019 circumstances, such as a spouse\u2019s employment, the number and ages of children, and time use. 
We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.", "which has research problem ?", "housework", 920.0, 929.0], ["In this paper, I examine the convergence of big data and urban governance beyond the discursive and material contexts of the smart city. I argue that in addition to understanding the intensifying relationship between data, cities, and governance in terms of regimes of automated management and coordination in \u2018actually existing\u2019 smart cities, we should further engage with urban algorithmic governance and governmentality as material-discursive projects of future-ing, i.e., of anticipating particular kinds of cities-to-come. As urban big data looks to the future, it does so through the lens of an anticipatory security calculus fixated on identifying and diverting risks of urban anarchy and personal harm against which life in cities must be securitized. I suggest that such modes of algorithmic speculation are discernible at two scales of urban big data praxis: the scale of the body, and that of the city itself. At the level of the urbanite body, I use the selective example of mobile neighborhood safety apps to demonstrate how algorithmic governmentality enacts digital mediations of individual mobilities by routing individuals around \u2018unsafe\u2019 parts of the city in the interests of technologically ameliorating the risks of urban encounter. At the scale of the city, amongst other empirical examples, sentiment analytics approaches prefigure ephemeral spatialities of civic strife by aggregating and mapping individual emotions distilled from unstructured real-time content flows (such as Tweets). In both of these instances, the urban futures anticipated by the urban \u2018big data security assemblage\u2019 are highly uneven, as data and algorithms cannot divest themselves of urban inequalities and the persistence of their geographies.", "which has research problem ?", "Smart cities", 330.0, 342.0], ["Most learning algorithms are not invariant to the scale of the function that is being approximated. We propose to adaptively normalize the targets used in learning. This is useful in value-based reinforcement learning, where the magnitude of appropriate value approximations can change over time when we update the policy of behavior. Our main motivation is prior work on learning to play Atari games, where the rewards were all clipped to a predetermined range. This clipping facilitates learning across many different games with a single learning algorithm, but a clipped reward function can result in qualitatively different behavior. Using the adaptive normalization we can remove this domain-specific heuristic without diminishing overall performance.", "which has research problem ?", "Atari Games", 389.0, 400.0], ["Both, MOOCs and learning analytics, are two emergent topics in the field of educational technology. This paper shows the main contributions of the eMadrid network in these two topics during the last years (2014-2016), as well as the planned future works in the network. 
The contributions in the field of the MOOCs include the design and authoring of materials, the improvement of the peer review process or experiences about teaching these courses and institutional adoption. The contributions in the field of learning analytics include the inference of higher level information, the development of dashboards, the evaluation of the learning process, or the prediction and clustering.", "which has research problem ?", "Learning Analytics", 16.0, 34.0], ["This paper presents work on a method to detect names of proteins in running text. Our system - Yapex - uses a combination of lexical and syntactic knowledge, heuristic filters and a local dynamic dictionary. The syntactic information given by a general-purpose off-the-shelf parser supports the correct identification of the boundaries of protein names, and the local dynamic dictionary finds protein names in positions incompletely analysed by the parser. We present the different steps involved in our approach to protein tagging, and show how combinations of them influence recall and precision. We evaluate the system on a corpus of MEDLINE abstracts and compare it with the KeX system (Fukuda et al., 1998) along four different notions of correctness.", "which has research problem ?", "Protein tagging", 516.0, 531.0], ["The attention for Smart governance, a key aspect of Smart cities, is growing, but our conceptual understanding of it is still limited. This article fills this gap in our understanding by exploring the concept of Smart governance both theoretically and empirically and developing a research model of Smart governance. On the basis of a systematic review of the literature defining elements, aspired outcomes and implementation strategies are identified as key dimensions of Smart governance. Inductively, we identify various categories within these variables. The key dimensions were presented to a sample of representatives of European local governments to investigate the dominant perceptions of practitioners and to refine the categories. Our study results in a model for research into the implementation strategies, Smart governance arrangements, and outcomes of Smart governance.", "which has research problem ?", "Smart cities", 52.0, 64.0], ["Efficient exploration in complex environments remains a major challenge for reinforcement learning. We propose bootstrapped DQN, a simple algorithm that explores in a computationally and statistically efficient manner through use of randomized value functions. Unlike dithering strategies such as epsilon-greedy exploration, bootstrapped DQN carries out temporally-extended (or deep) exploration; this can lead to exponentially faster learning. We demonstrate these benefits in complex stochastic MDPs and in the large-scale Arcade Learning Environment. Bootstrapped DQN substantially improves learning times and performance across most Atari games.", "which has research problem ?", "Atari Games", 637.0, 648.0], ["In our age cities are complex systems and we can say systems of systems. Today locality is the result of using information and communication technologies in all departments of our life, but in future all cities must to use smart systems for improve quality of life and on the other hand for sustainable development. The smart systems make daily activities more easily, efficiently and represent a real support for sustainable city development. 
This paper analyses the sustainable development and identifies the key elements of future smart cities.", "which has research problem ?", "Smart cities", 535.0, 547.0], ["Ladder-type dithienocyclopentacarbazole (DTCC) cores, which possess highly extended \u03c0-conjugated backbones and versatile modular structures for derivatization, were widely used to develop high-performance p-type polymeric semiconductors. However, an n-type DTCC-based organic semiconductor has not been reported to date. In this study, the first DTCC-based n-type organic semiconductor (DTCC\u2013IC) with a well-defined A\u2013D\u2013A backbone was designed, synthesized, and characterized, in which a DTCC derivative substituted by four p-octyloxyphenyl groups was used as the electron-donating core and two strongly electron-withdrawing 3-(dicyanomethylene)indan-1-one moieties were used as the terminal acceptors. It was found that DTCC\u2013IC has strong light-capturing ability in the range of 500\u2013720 nm and exhibits an impressively high molar absorption coefficient of 2.24 \u00d7 105 M\u22121 cm\u22121 at 669 nm owing to effective intramolecular charge transfer and a strong D\u2013A effect. Cyclic voltammetry measurements indicated that the HOMO and LUMO energy levels of DTCC\u2013IC are \u22125.50 and \u22123.87 eV, respectively. More importantly, a high electron mobility of 2.17 \u00d7 10\u22123 cm2 V\u22121 s\u22121 was determined by the space-charge-limited current method; this electron mobility can be comparable to that of fullerene derivative acceptors (\u03bce \u223c 10\u22123 cm2 V\u22121 s\u22121). To investigate its application potential in non-fullerene solar cells, we fabricated organic solar cells (OSCs) by blending a DTCC\u2013IC acceptor with a PTB7-Th donor under various conditions. The results suggest that the optimized device exhibits a maximum power conversion efficiency (PCE) of up to 6% and a rational high VOC of 0.95 V. These findings demonstrate that the ladder-type DTCC core is a promising building block for the development of high-mobility n-type organic semiconductors for OSCs.", "which has research problem ?", "Organic solar cells", 1412.0, 1431.0], ["The fifth phase of the Coupled Model Intercomparison Project (CMIP5) will produce a state-of-the-art multimodel dataset designed to advance our knowledge of climate variability and climate change. Researchers worldwide are analyzing the model output and will produce results likely to underlie the forthcoming Fifth Assessment Report by the Intergovernmental Panel on Climate Change. Unprecedented in scale and attracting interest from all major climate modeling groups, CMIP5 includes \u201clong term\u201d simulations of twentieth-century climate and projections for the twenty-first century and beyond. Conventional atmosphere\u2013ocean global climate models and Earth system models of intermediate complexity are for the first time being joined by more recently developed Earth system models under an experiment design that allows both types of models to be compared to observations on an equal footing. Besides the long-term experiments, CMIP5 calls for an entirely new suite of \u201cnear term\u201d simulations focusing on recent decades...", "which has research problem ?", "experiment design", 792.0, 809.0], ["BioNLP Open Shared Tasks (BioNLP-OST) is an international competition organized to facilitate development and sharing of computational tasks of biomedical text mining and solutions to them. 
For BioNLP-OST 2019, we introduced a new mental health informatics task called \u201cRDoC Task\u201d, which is composed of two subtasks: information retrieval and sentence extraction through National Institutes of Mental Health\u2019s Research Domain Criteria framework. Five and four teams around the world participated in the two tasks, respectively. According to the performance on the two tasks, we observe that there is room for improvement for text mining on brain research and mental illness.", "which has research problem ?", "Information Retrieval", 317.0, 338.0], ["Gene Ontology (GO) annotation is a common task among model organism databases (MODs) for capturing gene function data from journal articles. It is a time-consuming and labor-intensive task, and is thus often considered as one of the bottlenecks in literature curation. There is a growing need for semiautomated or fully automated GO curation techniques that will help database curators to rapidly and accurately identify gene function information in full-length articles. Despite multiple attempts in the past, few studies have proven to be useful with regard to assisting real-world GO curation. The shortage of sentence-level training data and opportunities for interaction between text-mining developers and GO curators has limited the advances in algorithm development and corresponding use in practical circumstances. To this end, we organized a text-mining challenge task for literature-based GO annotation in BioCreative IV. More specifically, we developed two subtasks: (i) to automatically locate text passages that contain GO-relevant information (a text retrieval task) and (ii) to automatically identify relevant GO terms for the genes in a given article (a concept-recognition task). With the support from five MODs, we provided teams with >4000 unique text passages that served as the basis for each GO annotation in our task data. Such evidence text information has long been recognized as critical for text-mining algorithm development but was never made available because of the high cost of curation. In total, seven teams participated in the challenge task. From the team results, we conclude that the state of the art in automatically mining GO terms from literature has improved over the past decade while much progress is still needed for computer-assisted GO curation. Future work should focus on addressing remaining technical challenges for improved performance of automatic GO concept recognition and incorporating practical benefits of text-mining tools into real-world GO annotation. Database URL: http://www.biocreative.org/tasks/biocreative-iv/track-4-GO/.", "which has research problem ?", "text retrieval", 1060.0, 1074.0], ["This paper presents the preparation, resources, results and analysis of the Infectious Diseases (ID) information extraction task, a main task of the BioNLP Shared Task 2011. The ID task represents an application and extension of the BioNLP'09 shared task event extraction approach to full papers on infectious diseases. Seven teams submitted final results to the task, with the highest-performing system achieving 56% F-score in the full task, comparable to state-of-the-art performance in the established BioNLP'09 task. 
The results indicate that event extraction methods generalize well to new domains and full-text publications and are applicable to the extraction of events relevant to the molecular mechanisms of infectious diseases.", "which has research problem ?", "Infectious Diseases (ID) information extraction task", NaN, NaN], ["We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. Our approach extends BERT by (1) masking contiguous random spans, rather than random tokens, and (2) training the span boundary representations to predict the entire content of the masked span, without relying on the individual token representations within it. SpanBERT consistently outperforms BERT and our better-tuned baselines, with substantial gains on span selection tasks such as question answering and coreference resolution. In particular, with the same training data and model size as BERT-large, our single model obtains 94.6% and 88.7% F1 on SQuAD 1.1 and 2.0 respectively. We also achieve a new state of the art on the OntoNotes coreference resolution task (79.6% F1), strong performance on the TACRED relation extraction benchmark, and even gains on GLUE.", "which has research problem ?", "pre-training method that is designed to better represent and predict spans of text", 23.0, 105.0], ["Multi-agent systems (MASs) have received tremendous attention from scholars in different disciplines, including computer science and civil engineering, as a means to solve complex problems by subdividing them into smaller tasks. The individual tasks are allocated to autonomous entities, known as agents. Each agent decides on a proper action to solve the task using multiple inputs, e.g., history of actions, interactions with its neighboring agents, and its goal. The MAS has found multiple applications, including modeling complex systems, smart grids, and computer networks. Despite their wide applicability, there are still a number of challenges faced by MAS, including coordination between agents, security, and task allocation. This survey provides a comprehensive discussion of all aspects of MAS, starting from definitions, features, applications, challenges, and communications to evaluation. A classification on MAS applications and challenges is provided along with references for further studies. We expect this paper to serve as an insightful and comprehensive resource on the MAS for researchers and practitioners in the area.", "which has research problem ?", "Multi-agent systems", 0.0, 19.0], ["Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. However, these visual transformers are pre-trained with hundreds of millions of images using an expensive infrastructure, thereby limiting their adoption by the larger community. In this work, with an adequate training scheme, we produce a competitive convolution-free transformer by training on Imagenet only. We train it on a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop evaluation) on ImageNet with no external data. We share our code and models to accelerate community advances on this line of research. Additionally, we introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention. 
We show the interest of this token-based distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets for both Imagenet (where we obtain up to 84.4% accuracy) and when transferring to other tasks.", "which has research problem ?", "Image Classification", 108.0, 128.0], ["NLTK, the Natural Language Toolkit, is a suite of open source program modules, tutorials and problem sets, providing ready-to-use computational linguistics courseware. NLTK covers symbolic and statistical natural language processing, and is interfaced to annotated corpora. Students augment and replace existing components, learn structured programming by example, and manipulate sophisticated models from the outset.", "which has research problem ?", "ready-to-use computational linguistics courseware", 117.0, 166.0], ["With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking.", "which has research problem ?", "Natural Language Inference", 942.0, 968.0], ["We present a method for precise eye localization that uses two Support Vector Machines trained on properly selected Haar wavelet coefficients. The evaluation of our technique on many standard databases exhibits very good performance. Furthermore, we study the strong correlation between the eye localization error and the face recognition rate.", "which has research problem ?", "Eye localization", 32.0, 48.0], ["Active machine learning algorithms are used when large numbers of unlabeled examples are available and getting labels for them is costly (e.g. requiring consulting a human expert). Many conventional active learning algorithms focus on refining the decision boundary, at the expense of exploring new regions that the current hypothesis misclassifies. We propose a new active learning algorithm that balances such exploration with refining of the decision boundary by dynamically adjusting the probability to explore at each step. Our experimental results demonstrate improved performance on data sets that require extensive exploration while remaining competitive on data sets that do not. Our algorithm also shows significant tolerance of noise.", "which has research problem ?", "Active learning", 199.0, 214.0], ["The continuing development of enterprise resource planning (ERP) systems has been considered by many researchers and practitioners as one of the major IT innovations in this decade. ERP solutions seek to integrate and streamline business processes and their associated information and work flows. 
What makes this technology more appealing to organizations is increasing capability to integrate with the most advanced electronic and mobile commerce technologies. However, as is the case with any new IT field, research in the ERP area is still lacking and the gap in the ERP literature is huge. Attempts to fill this gap by proposing a novel taxonomy for ERP research. Also presents the current status with some major themes of ERP research relating to ERP adoption, technical aspects of ERP and ERP in IS curricula. The discussion presented on these issues should be of value to researchers and practitioners. Future research work will continue to survey other major areas presented in the taxonomy framework.", "which has research problem ?", "Enterprise resource planning", 30.0, 58.0], ["Bidirectional Encoder Representations from Transformers (BERT) represents the latest incarnation of pretrained language models which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-staged fine-tuning approach can further boost the quality of the generated summaries. Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings.", "which has research problem ?", "Text Summarization", 270.0, 288.0], ["Nanoscale biocompatible photoluminescence (PL) thermometers that can be used to accurately and reliably monitor intracellular temperatures have many potential applications in biology and medicine. Ideally, such nanothermometers should be functional at physiological pH across a wide range of ionic strengths, probe concentrations, and local environments. Here, we show that water-soluble N,S-co-doped carbon dots (CDs) exhibit temperature-dependent photoluminescence lifetimes and can serve as highly sensitive and reliable intracellular nanothermometers. PL intensity measurements indicate that these CDs have many advantages over alternative semiconductor- and CD-based nanoscale temperature sensors. Importantly, their PL lifetimes remain constant over wide ranges of pH values (5-12), CD concentrations (1.5 \u00d7 10-5 to 0.5 mg/mL), and environmental ionic strengths (up to 0.7 mol\u00b7L-1 NaCl). Moreover, they are biocompatible and nontoxic, as demonstrated by cell viability and flow cytometry analyses using NIH/3T3 and HeLa cell lines. N,S-CD thermal sensors also exhibit good water dispersibility, superior photo- and thermostability, extraordinary environment and concentration independence, high storage stability, and reusability-their PL decay curves at temperatures between 15 and 45 \u00b0C remained unchanged over seven sequential experiments. In vitro PL lifetime-based temperature sensing performed with human cervical cancer HeLa cells demonstrated the great potential of these nanosensors in biomedicine. 
Overall, N,S-doped CDs exhibit excitation-independent emission with strongly temperature-dependent monoexponential decay, making them suitable for both in vitro and in vivo luminescence lifetime thermometry.", "which has research problem ?", "Nanothermometer", NaN, NaN], ["We propose a simple yet robust stochastic answer network (SAN) that simulates multi-step reasoning in machine reading comprehension. Compared to previous work such as ReasoNet which used reinforcement learning to determine the number of steps, the unique feature is the use of a kind of stochastic prediction dropout on the answer module (final layer) of the neural network during the training. We show that this simple trick improves robustness and achieves results competitive to the state-of-the-art on the Stanford Question Answering Dataset (SQuAD), the Adversarial SQuAD, and the Microsoft MAchine Reading COmprehension Dataset (MS MARCO).", "which has research problem ?", "Question Answering", 519.0, 537.0], ["This paper presents our recent work on the design and development of a new, large scale dataset, which we name MS MARCO, for MAchine Reading COmprehension. This new dataset is aimed to overcome a number of well-known weaknesses of previous publicly available datasets for the same task of reading comprehension and question answering. In MS MARCO, all questions are sampled from real anonymized user queries. The context passages, from which answers in the dataset are derived, are extracted from real web documents using the most advanced version of the Bing search engine. The answers to the queries are human generated. Finally, a subset of these queries has multiple answers. We aim to release one million queries and the corresponding answers in the dataset, which, to the best of our knowledge, is the most comprehensive real-world dataset of its kind in both quantity and quality. We are currently releasing 100,000 queries with their corresponding answers to inspire work in reading comprehension and question answering along with gathering feedback from the research community.", "which has research problem ?", "Question Answering ", 1009.0, 1028.0], ["It is consensual that Enterprise Resource Planning (ERP) after a successful implementation has significant effects on the productivity of firm as well small and medium-sized enterprises (SMEs) recognized as fundamentally different environments compared to large enterprises. There are few reviews in the literature about the post-adoption phase and even fewer at SME level. Furthermore, to the best of our knowledge there is none with focus in ERP value stage. This review will fill this gap. It provides an updated bibliography of ERP publications published in the IS journal and conferences during the period of 2000 and 2012. A total of 33 articles from 21 journals and 12 conferences are reviewed. The main focus of this paper is to shed the light on the areas that lack sufficient research within the ERP in SME domain, in particular in ERP business value stage, suggest future research avenues, as well as, present the current research findings that could support researchers and practitioners when embarking on ERP projects.", "which has research problem ?", "Enterprise resource planning", 22.0, 50.0], ["Metabolic pathways are an important part of systems biology research since they illustrate complex interactions between metabolites, enzymes, and regulators. Pathway maps are drawn to elucidate metabolism or to set data in a metabolic context. 
We present MetaboMAPS, a web-based platform to visualize numerical data on individual metabolic pathway maps. Metabolic maps can be stored, distributed and downloaded in SVG-format. MetaboMAPS was designed for users without computational background and supports pathway sharing without strict conventions. In addition to existing applications that established standards for well-studied pathways, MetaboMAPS offers a niche for individual, customized pathways beyond common knowledge, supporting ongoing research by creating publication-ready visualizations of experimental data.", "which has research problem ?", "Visualization", NaN, NaN], ["This paper presents the main results achieved in the eMadrid Program in Open Educational Resources, Free Software, Open Data, and formats and standardization of content and services.", "which has research problem ?", "Open Education", NaN, NaN], ["This paper presents the task definition, resources, participation, and comparative results for the Web People Search task, which was organized as part of the SemEval-2007 evaluation exercise. This task consists of clustering a set of documents that mention an ambiguous person name according to the actual entities referred to using that name.", "which has research problem ?", "Web People Search task", 99.0, 121.0], ["Online news recommender systems aim to address the information explosion of news and make personalized recommendation for users. In general, news language is highly condensed, full of knowledge entities and common sense. However, existing methods are unaware of such external knowledge and cannot fully discover latent knowledge-level connections among news. The recommended results for a user are consequently limited to simple patterns and cannot be extended reasonably. To solve the above problem, in this paper, we propose a deep knowledge-aware network (DKN) that incorporates knowledge graph representation into news recommendation. DKN is a content-based deep recommendation framework for click-through rate prediction. The key component of DKN is a multi-channel and word-entity-aligned knowledge-aware convolutional neural network (KCNN) that fuses semantic-level and knowledge-level representations of news. KCNN treats words and entities as multiple channels, and explicitly keeps their alignment relationship during convolution. In addition, to address users' diverse interests, we also design an attention module in DKN to dynamically aggregate a user's history with respect to current candidate news. Through extensive experiments on a real online news platform, we demonstrate that DKN achieves substantial gains over state-of-the-art deep recommendation models. We also validate the efficacy of the usage of knowledge in DKN.", "which has research problem ?", "Recommender Systems", 12.0, 31.0], ["Graphs arise naturally in many real-world applications including social networks, recommender systems, ontologies, biology, and computational finance. Traditionally, machine learning models for graphs have been mostly designed for static graphs. However, many applications involve evolving graphs. This introduces important challenges for learning and inference since nodes, attributes, and edges change over time. In this survey, we review the recent advances in representation learning for dynamic graphs, including dynamic knowledge graphs. 
We describe existing models from an encoder-decoder perspective, categorize these encoders and decoders based on the techniques they employ, and analyze the approaches in each category. We also review several prominent applications and widely used datasets, and highlight directions for future research.", "which has research problem ?", "review the recent advances in representation learning for dynamic graphs", 434.0, 506.0], ["Over the past decade, Enterprise Resource Planning systems (ERP) have become one of the most important developments in the corporate use of information technology. ERP implementations are usually large, complex projects, involving large groups of people and other resources, working together under considerable time pressure and facing many unforeseen developments. In order for an organization to compete in this rapidly expanding and integrated marketplace, ERP systems must be employed to ensure access to an efficient, effective, and highly reliable information infrastructure. Despite the benefits that can be achieved from a successful ERP system implementation, there is evidence of high failure in ERP implementation projects. Too frequently key development practices are ignored and early warning signs that lead to project failure are not understood. Identifying project success and failure factors and their consequences as early as possible can provide valuable clues to help project managers improve their chances of success. It is the long-range goal of our research to shed light on these factors and to provide a tool that project managers can use to help better manage their software development projects. This paper will present a review of the general background to our work; the results from the current research and conclude with a discussion of the findings thus far. The findings will include a list of 23 unique Critical Success Factors identified throughout the literature, which we believe to be essential for Project Managers. The implications of these results will be discussed along with the lessons learnt.", "which has research problem ?", "Enterprise resource planning", 22.0, 50.0], ["Abstract The essential oil content of Artemisia herba-alba Asso decreased along the drying period from 2.5 % to 1.8 %. Conversely, the composition of the essential oil was not qualitatively affected by the drying process. The same principal components were found in all essential oils analyzed such as \u03b1-thujone (13.0 \u2013 22.7 %), \u03b2-thujone (18.0 \u2013 25.0 %), camphor (8.6 - 13 %), 1,8-cineole (7.1 \u2013 9.4 %), chrysanthenone (6.7 \u2013 10.9 %), terpinen-4-ol (3.4 \u2013 4.7 %). Quantitatively, during the air-drying process, the content of some components decreased slightly such as \u03b1-thujone (from 22.7 to 15.9 %) and 1,8-cineole (from 9.4 to 7.1 %), while the amount of other compounds increased such as chrysanthenone (from 6.7 to 10.9 %), borneol (from 0.8 to 1.5 %), germacrene-D (from 1.0 to 2.4 %) and spathulenol (from 0.8 to 1.5 %). The chemical composition of the oil was more affected by oven-drying the plant material at 35\u00b0C. 
\u03b1-Thujone and \u03b2-thujone decreased to 13.0 % and 18.0 % respectively, while the percentage of camphor, germacrene-D and spathulenol increased to 13.0 %, 5.5 % and 3.7 %, respectively.", "which has research problem ?", "Oil", 23.0, 26.0], ["Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.", "which has research problem ?", "transfer learning", 0.0, 17.0], ["Coastal safety may be influenced by climate change, as changes in extreme surge levels and wave extremes may increase the vulnerability of dunes and other coastal defenses. In the North Sea, an area already prone to severe flooding, these high surge levels and waves are generated by low atmospheric pressure and severe wind speeds during storm events. As a result of the geometry of the North Sea, not only the maximum wind speed is relevant, but also wind direction. Climate change could change maximum wind conditions, with potentially negative effects for coastal safety. Here, we use an ensemble of 12 Coupled Model Intercomparison Project Phase 5 (CMIP5) General Circulation Models (GCMs) and diagnose the effect of two climate scenarios (rcp4.5 and rcp8.5) on annual maximum wind speed, wind speeds with lower return frequencies, and the direction of these annual maximum wind speeds. The 12 selected CMIP5 models do not project changes in annual maximum wind speed and in wind speeds with lower return frequencies; however, we do find an indication that the annual extreme wind events are coming more often from western directions. Our results are in line with the studies based on CMIP3 models and do not confirm the statement based on some reanalysis studies that there is a climate\u2010change\u2010related upward trend in storminess in the North Sea area.", "which has research problem ?", "North sea", 180.0, 189.0], ["In recent years, the problem of scene text extraction from images has received extensive attention and seen significant progress. However, text extraction from scholarly figures such as plots and charts remains an open problem, in part due to the difficulty of locating irregularly placed text lines. To the best of our knowledge, literature has not described the implementation of a text extraction system for scholarly figures that adapts deep convolutional neural networks used for scene text detection. In this paper, we propose a text extraction approach for scholarly figures that forgoes preprocessing in favor of using a deep convolutional neural network for text line localization. 
Our system uses a publicly available scene text detection approach whose network architecture is well suited to text extraction from scholarly figures. Training data are derived from charts in arXiv papers which are extracted using Allen Institute's pdffigures tool. Since this tool analyzes PDF data as a container format in order to extract text location through the mechanisms which render it, we were able to gather a large set of labeled training samples. We show significant improvement from methods in the literature, and discuss the structural changes of the text extraction pipeline.", "which has research problem ?", "text extraction from images ", 38.0, 66.0], ["Abstract Flower-like palladium nanoclusters (FPNCs) are electrodeposited onto graphene electrodes that are prepared by chemical vapor deposition (CVD). The CVD graphene layer is transferred onto a poly(ethylene naphthalate) (PEN) film to provide mechanical stability and flexibility. The surface of the CVD graphene is functionalized with diaminonaphthalene (DAN) to form flower shapes. Palladium nanoparticles act as templates to mediate the formation of FPNCs, which increase in size with reaction time. The population of FPNCs can be controlled by adjusting the DAN concentration as the functionalization solution. These FPNCs_CG electrodes are sensitive to hydrogen gas at room temperature. The sensitivity and response time as a function of the FPNCs population are investigated, resulting in improved performance with increasing population. Furthermore, the minimum detectable level (MDL) of hydrogen is 0.1 ppm, which is at least 2 orders of magnitude lower than that of chemical sensors based on other Pd-based hybrid materials.", "which has research problem ?", "Chemical sensors", 974.0, 990.0], ["Is there a \u201cmiddle-income trap\u201d? Theory suggests that the determinants of growth at low and high income levels may be different. If countries struggle to transition from growth strategies that are effective at low income levels to growth strategies that are effective at high income levels, they may stagnate at some middle income level; this phenomenon can be thought of as a \u201cmiddle-income trap.\u201d Defining income levels based on per capita gross domestic product relative to the United States, we do not find evidence for (unusual) stagnation at any particular middle income level. However, we do find evidence that the determinants of growth at low and high income levels differ. These findings suggest a mixed conclusion: middle-income countries may need to change growth strategies in order to transition smoothly to high income growth strategies, but this can be done smoothly and does not imply the existence of a middle-income trap.", "which has research problem ?", "Middle-Income Trap", 12.0, 30.0], ["Transformer-based models consist of interleaved feed-forward blocks - that capture content meaning, and relatively more expensive self-attention blocks - that capture context meaning. In this paper, we explored trade-offs and ordering of the blocks to improve upon the current Transformer architecture and proposed PAR Transformer. It needs 35% lower compute time than Transformer-XL, achieved by replacing ~63% of the self-attention blocks with feed-forward blocks, and retains the perplexity on WikiText-103 language modelling benchmark. 
We further validated our results on text8 and enwiki8 datasets, as well as on the BERT model.", "which has research problem ?", "Language Modelling", 509.0, 527.0], ["Enterprise Resource Planning (ERP) application is often viewed as a strategic investment that can provide significant competitive advantage with positive return, thus contributing to the firms' revenue and growth. Despite such strategic importance given to ERP, the implementation success in achieving the desired goal has been viewed as disappointing. There have been numerous industry stories about failures of ERP initiatives. There have also been stories reporting on the significant benefits achieved from successful ERP initiatives. This study reviews the industry and academic literature on ERP results and identifies possible trends or factors which may help future ERP initiatives achieve greater success and less failure. The purpose of this study is to review the industry and academic literature on ERP results, identify and discuss critical success factors which may help future ERP initiatives achieve greater success and less failure.", "which has research problem ?", "Enterprise resource planning", 0.0, 28.0], ["In this paper, we present SemEval-2020 Task 4, Commonsense Validation and Explanation (ComVE), which includes three subtasks, aiming to evaluate whether a system can distinguish a natural language statement that makes sense to humans from one that does not, and provide the reasons. Specifically, in our first subtask, the participating systems are required to choose from two natural language statements of similar wording the one that makes sense and the one that does not. The second subtask additionally asks a system to select the key reason from three options why a given statement does not make sense. In the third subtask, a participating system needs to generate the reason automatically. 39 teams submitted their valid systems to at least one subtask. For Subtask A and Subtask B, top-performing teams have achieved results close to human performance. However, for Subtask C, there is still a considerable gap between system and human performance. The dataset used in our task can be found at https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation.", "which has research problem ?", "ComVE", 83.0, 88.0], ["Abstract While thousands of ontologies exist on the web, a unified system for handling online ontologies \u2013 in particular with respect to discovery, versioning, access, quality-control, mappings \u2013 has not yet surfaced and users of ontologies struggle with many challenges. In this paper, we present an online ontology interface and augmented archive called DBpedia Archivo, that discovers, crawls, versions and archives ontologies on the DBpedia Databus. Based on this versioned crawl, different features, quality measures and, if possible, fixes are deployed to handle and stabilize the changes in the found ontologies at web-scale. A comparison to existing approaches and ontology repositories is given.", "which has research problem ?", "Unified System for handling online Ontologies", 59.0, 104.0], ["This paper presents a new eye localization method via Multiscale Sparse Dictionaries (MSD). We built a pyramid of dictionaries that models context information at multiple scales. Eye locations are estimated at each scale by fitting the image through sparse coefficients of the dictionary. By using context information, our method is robust to various eye appearances. 
The method also works efficiently since it avoids sliding a search window in the image during localization. The experiments in BioID database prove the effectiveness of our method.", "which has research problem ?", "Eye localization", 26.0, 42.0], ["We describe ParsCit, a freely available, open-source implementation of a reference string parsing package. At the core of ParsCit is a trained conditional random field (CRF) model used to label the token sequences in the reference string. A heuristic model wraps this core with added functionality to identify reference strings from a plain text file, and to retrieve the citation contexts. The package comes with utilities to run it as a web service or as a standalone utility. We compare ParsCit on three distinct reference string datasets and show that it compares well with other previously published work.", "which has research problem ?", "Open-source implementation", 41.0, 67.0], ["This paper applies the quantile fixed effects technique in exploring the CO2 environmental Kuznets curve within two groups of economic development (OECD and Non-OECD countries) and six geographical regions West, East Europe, Latin America, East Asia, West Asia and Africa. A comparison of the findings resulting from the use of this technique with those of conventional fixed effects method reveals that the latter may depict a flawed summary of the prevailing incomeemissions nexus depending on the conditional quantile examined. We also extend the Machado and Mata decomposition method to the Kuznets curve framework to explore the most important explanations for the CO2 emissions gap between OECD and Non-OECD countries. We find a statistically significant OECD-Non-OECD emissions gap and this contracts as we ascent the emissions distribution. The decomposition further reveals that there are non-income related factors working against the Non-OECD group's greening. We tentatively conclude that deliberate and systematic mitigation of current CO2 emissions in the Non-OECD group is required. JEL Classification: Q56, Q58.", "which has research problem ?", "CO2 emissions", 670.0, 683.0], ["We investigated the paraclinical profile of monosymptomatic optic neuritis(ON) and its prognosis for multiple sclerosis (MS). The correct identification of patients with very early MS carrying a high risk for conversion to clinically definite MS is important when new treatments are emerging that hopefully will prevent or at least delay future MS. We conducted a prospective single observer and population-based study of 147 consecutive patients (118 women, 80%) with acute monosymptomatic ON referred from a catchment area of 1.6 million inhabitants between January 1, 1990 and December 31, 1995. Of 116 patients examined with brain MRI, 64 (55%) had three or more high signal lesions, 11 (9%) had one to two high signal lesions, and 41 (35%) had a normal brain MRI. Among 143 patients examined, oligoclonal IgG (OB) bands in CSF only were demonstrated in 103 patients (72%). Of 146 patients analyzed, 68 (47%) carried the DR15,DQ6,Dw2 haplotype. During the study period, 53 patients (36%) developed clinically definite MS. The presence of three or more MS-like MRI lesions as well as the presence of OB were strongly associated with the development of MS (p < 0.001). Also, Dw2 phenotype was related to the development of MS (p = 0.046). 
MRI and CSF studies in patients with ON give clinically important information regarding the risk for future MS.", "which has research problem ?", "Multiple sclerosis", 101.0, 119.0], ["Cross-domain named entity recognition (NER) models are able to cope with the scarcity issue of NER samples in target domains. However, most of the existing NER benchmarks lack domain-specialized entity types or do not focus on a certain domain, leading to a less effective cross-domain evaluation. To address these obstacles, we introduce a cross-domain NER dataset (CrossNER), a fully-labeled collection of NER data spanning over five diverse domains with specialized entity categories for different domains. Additionally, we also provide a domain-related corpus since using it to continue pre-training language models (domain-adaptive pre-training) is effective for the domain adaptation. We then conduct comprehensive experiments to explore the effectiveness of leveraging different levels of the domain corpus and pre-training strategies to do domain-adaptive pre-training for the cross-domain task. Results show that focusing on the fractional corpus containing domain-specialized entities and utilizing a more challenging pre-training strategy in domain-adaptive pre-training are beneficial for the NER domain adaptation, and our proposed method can consistently outperform existing cross-domain NER baselines. Nevertheless, experiments also illustrate the challenge of this cross-domain NER task. We hope that our dataset and baselines will catalyze research in the NER domain adaptation area. The code and data are available at this https URL.", "which has research problem ?", "Cross-Domain Named Entity Recognition", 0.0, 37.0], ["While machine translation has traditionally relied on large amounts of parallel corpora, a recent research line has managed to train both Neural Machine Translation (NMT) and Statistical Machine Translation (SMT) systems using monolingual corpora only. In this paper, we identify and address several deficiencies of existing unsupervised SMT approaches by exploiting subword information, developing a theoretically well founded unsupervised tuning method, and incorporating a joint refinement procedure. Moreover, we use our improved SMT system to initialize a dual NMT model, which is further fine-tuned through on-the-fly back-translation. Together, we obtain large improvements over the previous state-of-the-art in unsupervised machine translation. For instance, we get 22.5 BLEU points in English-to-German WMT 2014, 5.5 points more than the previous best unsupervised system, and 0.5 points more than the (supervised) shared task winner back in 2014.", "which has research problem ?", "Unsupervised Machine Translation", 719.0, 751.0], ["Significance Drug interactions, including drug\u2013drug interactions (DDIs) and drug\u2013food constituent interactions, can trigger unexpected pharmacological effects such as adverse drug events (ADEs). Several existing methods predict drug interactions, but require detailed, but often unavailable drug information as inputs, such as drug targets. To this end, we present a computational framework DeepDDI that accurately predicts DDI types for given drug pairs and drug\u2013food constituent pairs using only name and structural information as inputs. 
We show four applications of DeepDDI to better understand drug interactions, including prediction of DDI mechanisms causing ADEs, suggestion of alternative drug members for the intended pharmacological effects without negative health effects, prediction of the effects of food constituents on interacting drugs, and prediction of bioactivities of food constituents. Drug interactions, including drug\u2013drug interactions (DDIs) and drug\u2013food constituent interactions (DFIs), can trigger unexpected pharmacological effects, including adverse drug events (ADEs), with causal mechanisms often unknown. Several computational methods have been developed to better understand drug interactions, especially for DDIs. However, these methods do not provide sufficient details beyond the chance of DDI occurrence, or require detailed drug information often unavailable for DDI prediction. Here, we report development of a computational framework DeepDDI that uses names of drug\u2013drug or drug\u2013food constituent pairs and their structural information as inputs to accurately generate 86 important DDI types as outputs of human-readable sentences. DeepDDI uses deep neural network with its optimized prediction performance and predicts 86 DDI types with a mean accuracy of 92.4% using the DrugBank gold standard DDI dataset covering 192,284 DDIs contributed by 191,878 drug pairs. DeepDDI is used to suggest potential causal mechanisms for the reported ADEs of 9,284 drug pairs, and also predict alternative drug candidates for 62,707 drug pairs having negative health effects. Furthermore, DeepDDI is applied to 3,288,157 drug\u2013food constituent pairs (2,159 approved drugs and 1,523 well-characterized food constituents) to predict DFIs. The effects of 256 food constituents on pharmacological effects of interacting drugs and bioactivities of 149 food constituents are predicted. These results suggest that DeepDDI can provide important information on drug prescription and even dietary suggestions while taking certain drugs and also guidelines during drug development.", "which has research problem ?", "DDI prediction", 1401.0, 1415.0], ["Smart City Control Rooms are mainly focused on Dashboards which are in turn created by using the so-called Dashboard Builder tools or custom generated. For a city the production of Dashboards is not something that is performed once and for all; it is a continuous working task to improve city monitoring, to follow extraordinary events and/or activities, and to monitor critical conditions and cases. Thus, relevant complexities are due to the data aggregation architecture and to the identification of modalities to present data and their identification, prediction, etc., to arrive at producing high-level representations that can be used by decision makers. In this paper, the architecture of a Dashboard Builder for creating Smart City Control Rooms is presented. As a validation and test, it has been adopted for generating the dashboards in the city of Florence and in other cities in the Tuscany area. The solution proposed has been developed in the context of the REPLICATE H2020 European Commission Flagship project on Smart Cities and Communities.", "which has research problem ?", "Smart city control rooms", 0.0, 24.0], ["Shape Expressions (ShEx) was defined as a human-readable and concise language to describe and validate RDF. In recent years, the usage of ShEx has grown and more functionalities are being demanded. 
One such functionality is to ensure interoperability between ShEx schemas and domain models in programming languages. In this paper, we present ShEx-Lite, a tabular-based subset of ShEx that allows generating domain object models in different object-oriented languages. Although the current system generates Java and Python, it offers a public interface so anyone can implement code generation in other programming languages. The system has been employed in a workflow where the shape expressions are used both to define constraints over an ontology and to generate domain objects that will be part of a clean architecture style.", "which has research problem ?", "Code Generation", 579.0, 594.0], ["Abstract\n Comparative phylogenetics has been largely lacking a method for reconstructing the evolution of phenotypic entities that consist of ensembles of multiple discrete traits\u2014entire organismal anatomies or organismal body regions. In this study, we provide a new approach named PARAMO (Phylogenetic Ancestral Reconstruction of Anatomy by Mapping Ontologies) that appropriately models anatomical dependencies and uses ontology-informed amalgamation of stochastic maps to reconstruct phenotypic evolution at different levels of anatomical hierarchy including entire phenotypes. This approach provides new opportunities for tracking phenotypic radiations and evolution of organismal anatomies.", "which has research problem ?", "Comparative phylogenetics", 57.0, 82.0], ["Gene Ontology (GO) annotation is a common task among model organism databases (MODs) for capturing gene function data from journal articles. It is a time-consuming and labor-intensive task, and is thus often considered as one of the bottlenecks in literature curation. There is a growing need for semiautomated or fully automated GO curation techniques that will help database curators to rapidly and accurately identify gene function information in full-length articles. Despite multiple attempts in the past, few studies have proven to be useful with regard to assisting real-world GO curation. The shortage of sentence-level training data and opportunities for interaction between text-mining developers and GO curators has limited the advances in algorithm development and corresponding use in practical circumstances. To this end, we organized a text-mining challenge task for literature-based GO annotation in BioCreative IV. More specifically, we developed two subtasks: (i) to automatically locate text passages that contain GO-relevant information (a text retrieval task) and (ii) to automatically identify relevant GO terms for the genes in a given article (a concept-recognition task). With the support from five MODs, we provided teams with >4000 unique text passages that served as the basis for each GO annotation in our task data. Such evidence text information has long been recognized as critical for text-mining algorithm development but was never made available because of the high cost of curation. In total, seven teams participated in the challenge task. From the team results, we conclude that the state of the art in automatically mining GO terms from literature has improved over the past decade while much progress is still needed for computer-assisted GO curation. Future work should focus on addressing remaining technical challenges for improved performance of automatic GO concept recognition and incorporating practical benefits of text-mining tools into real-world GO annotation. 
Database URL: http://www.biocreative.org/tasks/biocreative-iv/track-4-GO/.", "which has research problem ?", "concept-recognition", 1170.0, 1189.0], ["Support vector machines have met with significant success in numerous real-world learning tasks. However, like most machine learning algorithms, they are generally applied using a randomly selected training set classified in advance. In many settings, we also have the option of using pool-based active learning. Instead of using a randomly selected training set, the learner has access to a pool of unlabeled instances and can request the labels for some number of them. We introduce a new algorithm for performing active learning with support vector machines, i.e., an algorithm for choosing which instances to request next. We provide a theoretical motivation for the algorithm using the notion of a version space. We present experimental results showing that employing our active learning method can significantly reduce the need for labeled training instances in both the standard inductive and transductive settings.", "which has research problem ?", "Active learning", 296.0, 311.0], ["The contribution of GICAC UPM group in the eMadrid initiative has focused on the application of semantic web technologies in the Open Education context. This work presents the main results obtained through different applications and models according to a roadmap followed by the group.", "which has research problem ?", "Open Education", 129.0, 143.0], ["In this paper we present GATE, a framework and graphical development environment which enables users to develop and deploy language engineering components and resources in a robust fashion. The GATE architecture has enabled us not only to develop a number of successful applications for various language processing tasks (such as Information Extraction), but also to build and annotate corpora and carry out evaluations on the applications generated. The framework can be used to develop applications and resources in multiple languages, based on its thorough Unicode support.", "which has research problem ?", "develop and deploy language engineering components", 104.0, 154.0], ["Big Data covers a wide spectrum of technologies, which tends to support the processing of big amounts of heterogeneous data. The paper identifies the powerful benefits and the application areas of Big Data in the on-line education context. Considering the boom of academic services on-line, and the free access to the educational content, a great amount of data is being generated in the formal educational field as well as in less formal contexts. In this sense, Big Data can help stakeholders, involved in education decision making, to reach the objective of improving the quality of education and the learning outcomes. In this paper, a methodology is proposed to process big amounts of data coming from the educational field. The current study ends with a specific case study where the data of a well-known Ecuadorian institution that has more than 80 branches is analyzed.", "which has research problem ?", "Big Data", 0.0, 8.0], ["We consider the problem of adapting neural paragraph-level question answering models to the case where entire documents are given as input. Our proposed solution trains models to produce well calibrated confidence scores for their results on individual paragraphs. 
We sample multiple paragraphs from the documents during training, and use a shared-normalization training objective that encourages the model to produce globally correct output. We combine this method with a state-of-the-art pipeline for training models on document QA data. Experiments demonstrate strong performance on several document QA datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion of TriviaQA, a large improvement from the 56.7 F1 of the previous best system.", "which has research problem ?", "Question Answering", 59.0, 77.0], ["We present a shared task on automatically determining sentiment intensity of a word or a phrase. The words and phrases are taken from three domains: general English, English Twitter, and Arabic Twitter. The phrases include those composed of negators, modals, and degree adverbs as well as phrases formed by words with opposing polarities. For each of the three domains, we assembled the datasets that include multi-word phrases and their constituent words, both manually annotated for real-valued sentiment intensity scores. The three datasets were presented as the test sets for three separate tasks (each focusing on a specific domain). Five teams submitted nine system outputs for the three tasks. All datasets created for this shared task are freely available to the research community.", "which has research problem ?", "Determining Sentiment Intensity", 42.0, 73.0], ["Abstract Background The goal of the first BioCreAtIvE challenge (Critical Assessment of Information Extraction in Biology) was to provide a set of common evaluation tasks to assess the state of the art for text mining applied to biological problems. The results were presented in a workshop held in Granada, Spain March 28\u201331, 2004. The articles collected in this BMC Bioinformatics supplement entitled \"A critical assessment of text mining methods in molecular biology\" describe the BioCreAtIvE tasks, systems, results and their independent evaluation. Results BioCreAtIvE focused on two tasks. The first dealt with extraction of gene or protein names from text, and their mapping into standardized gene identifiers for three model organism databases (fly, mouse, yeast). The second task addressed issues of functional annotation, requiring systems to identify specific text passages that supported Gene Ontology annotations for specific proteins, given full text articles. Conclusion The first BioCreAtIvE assessment achieved a high level of international participation (27 groups from 10 countries). The assessment provided state-of-the-art performance results for a basic task (gene name finding and normalization), where the best systems achieved a balanced 80% precision / recall or better, which potentially makes them suitable for real applications in biology. The results for the advanced task (functional annotation from free text) were significantly lower, demonstrating the current limitations of text-mining approaches where knowledge extrapolation and interpretation are required. In addition, an important contribution of BioCreAtIvE has been the creation and release of training and test data sets for both tasks. 
There are 22 articles in this special issue, including six that provide analyses of results or data quality for the data sets, including a novel inter-annotator consistency assessment for the test set used in task 2.", "which has research problem ?", "Extraction of gene or protein names", 617.0, 652.0], ["The modernization of the European education system is allowing the undertaking of new missions such as university responsibility. Higher education institutions have felt the need to open and provide educational resources. OpenCourseWare (OCW) and Open Educational Resources (OER) in general contribute to the dissemination of knowledge and public construction, providing a social good. Openness also means sharing and reusing information and creating content in an open environment to improve and maintain the quality of education, while at the same time providing an advertising medium for higher education institutions. The OpenCourseWare Consortium itself and other educational organizations stress the importance of adopting measurement and evaluation programs for an institution's OCW. Two main reasons are argued: \u2022 Allow monitoring the usefulness and usability of Open Educational Resources (OER) and the efficiency of the publishing process, helping to identify and implement improvements that may be relevant over time. \u2022 Measuring the use and impact of the parties involved in the OCW site helps ensure their commitment. In this paper the authors evaluate social network analysis as a technique to measure the impact of open educational resources, and show the results of applying this kind of analysis to Spanish and Latin American OpenCourseWare, trying to tackle the above problems by extending the impact of resource materials in the new innovative teaching strategies and mission of university social responsibility, providing updated information on the impact of OCW materials, and showing the true potential inherent in the current OCW repositories in Latin American universities. To evaluate the utility of Social Network Analysis in open educational resources, different social networks were built, using the explicit relationships between different participants of OCW initiatives, e.g. co-authorship, to show the current state of OCW resources, and the implicit relationships, e.g. tagging, to assess the potential of OCW. To measure the impact of OCW, the social relationships between OCW actors, drawing on the information published by universities of Spain and Latin America, are described and assessed using social network analysis techniques and metrics; the results obtained make it possible to present a current state of OCW in Latin America, to know the informal organization behind the OCW initiatives, the folksonomies arising from the use of tags to describe courses, and potential collaborative networks between universities and professors linked to the production of OCW.", "which has research problem ?", "Open Education", NaN, NaN], ["The aim of this paper is to investigate the existence of environmental Kuznets curve (EKC) in an open economy like Tunisia using annual time series data for the period of 1971-2010. The ARDL bounds testing approach to cointegration is applied to test the long-run relationship in the presence of structural breaks, and the vector error correction model (VECM) to detect the causality among the variables. The robustness of causality analysis has been tested by applying the innovative accounting approach (IAA). 
The findings of this paper confirmed the long run relationship between economic growth, energy consumption, trade openness and CO2 emissions in the Tunisian economy. The results also indicated the existence of the EKC, confirmed by the VECM and IAA approaches. The study has significant policy implications for curtailing energy pollutants by implementing environment-friendly regulations to sustain economic development in Tunisia.", "which has research problem ?", "CO2 emissions", 630.0, 643.0], ["Recently e-business has become the focus of management interest both in academia and in business. Among the major components of e-business, ERP (Enterprise Resource Planning) is the backbone of other applications. Therefore more and more enterprises attempt to adopt this new application in order to improve their business competitiveness. Owing to the specific characteristics of ERP, its implementation is more difficult than that of traditional information systems. For this reason, how to implement ERP successfully becomes an important issue for both academics and practitioners. In this paper, a review of critical success factors of ERP in important MIS publications will be presented. Additionally, traditional IS implementation and ERP implementation will be compared and the findings will serve as the basis for further research.", "which has research problem ?", "Enterprise resource planning", 148.0, 176.0], ["We develop an efficient fused-ring electron acceptor (ITIC-Th) based on indacenodithieno[3,2-b]thiophene core and thienyl side-chains for organic solar cells (OSCs). Relative to its counterpart with phenyl side-chains (ITIC), ITIC-Th shows lower energy levels (ITIC-Th: HOMO = -5.66 eV, LUMO = -3.93 eV; ITIC: HOMO = -5.48 eV, LUMO = -3.83 eV) due to the \u03c3-inductive effect of thienyl side-chains, which can match with high-performance narrow-band-gap polymer donors and wide-band-gap polymer donors. ITIC-Th has higher electron mobility (6.1 \u00d7 10^-4 cm^2 V^-1 s^-1) than ITIC (2.6 \u00d7 10^-4 cm^2 V^-1 s^-1) due to enhanced intermolecular interaction induced by sulfur-sulfur interaction. We fabricate OSCs by blending ITIC-Th acceptor with two different low-band-gap and wide-band-gap polymer donors. In one case, a power conversion efficiency of 9.6% was observed, which rivals some of the highest efficiencies for single junction OSCs based on fullerene acceptors.", "which has research problem ?", "Organic solar cells", 138.0, 157.0], ["No topic raises more contentious debate among educators than the role of interaction as a crucial component of the education process. This debate is fueled by surface problems of definition and vested interests of professional educators, but is more deeply marked by epistemological assumptions relative to the role of humans and human interaction in education and learning. The seminal article by Daniel and Marquis (1979) challenged distance educators to get the mixture right between independent study and interactive learning strategies and activities. They quite rightly pointed out that these two primary forms of education have differing economic, pedagogical, and social characteristics, and that we are unlikely to find a \u201cperfect\u201d mix that meets all learner and institutional needs across all curricula and content. Nonetheless, hard decisions have to be made. 
Even more than in 1979, the development of newer, cost effective technologies and the nearly ubiquitous (in developed countries) Net-based telecommunications system is transforming, at least, the cost and access implications of getting the mix right. Further, developments in social cognitive based learning theories are providing increased evidence of the importance of collaborative activity as a component of all forms of education \u2013 including those delivered at a distance. Finally, the context in which distance education is developed and delivered is changing in response to the capacity of the semantic Web (Berners-Lee, 1999) to support interaction, not only amongst humans, but also between and among autonomous agents and human beings. Thus, the landscape and challenges of \u201cgetting the mix right\u201d have not lessened in the past 25 years, and, in fact, have become even more complicated. This paper attempts to provide a theoretical rationale and guide for instructional designers and teachers interested in developing distance education systems that are both effective and efficient in meeting diverse student learning needs.", "which has research problem ?", "The role of interaction as a crucial component of the education process", 61.0, 132.0], ["Current state-of-the-art relation extraction methods typically rely on a set of lexical, syntactic, and semantic features, explicitly computed in a pre-processing step. Training feature extraction models requires additional annotated language resources, which severely restricts the applicability and portability of relation extraction to novel languages. Similarly, pre-processing introduces an additional source of error. To address these limitations, we introduce TRE, a Transformer for Relation Extraction, extending the OpenAI Generative Pre-trained Transformer [Radford et al., 2018]. Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform the relation classification and combines it with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions. TRE allows us to learn implicit linguistic features solely from plain text corpora by unsupervised pre-training, before fine-tuning the learned language representations on the relation extraction task. TRE obtains a new state-of-the-art result on the TACRED and SemEval 2010 Task 8 datasets, achieving a test F1 of 67.4 and 87.1, respectively. Furthermore, we observe a significant increase in sample efficiency. With only 20% of the training examples, TRE matches the performance of our baselines and our model trained from scratch on 100% of the TACRED dataset. We open-source our trained models, experiments, and source code.", "which has research problem ?", "Relation extraction", 25.0, 44.0], ["Software-enabled technologies and urban big data have become essential to the functioning of cities. Consequently, urban operational governance and city services are becoming highly responsive to a form of data-driven urbanism that is the key mode of production for smart cities. At the heart of data-driven urbanism is a computational understanding of city systems that reduces urban life to logic and calculative rules and procedures, which is underpinned by an instrumental rationality and realist epistemology. This rationality and epistemology are informed by and sustains urban science and urban informatics, which seek to make cities more knowable and controllable. 
This paper examines the forms, practices and ethics of smart cities and urban science, paying particular attention to: instrumental rationality and realist epistemology; privacy, datafication, dataveillance and geosurveillance; and data uses, such as social sorting and anticipatory governance. It argues that smart city initiatives and urban science need to be re-cast in three ways: a re-orientation in how cities are conceived; a reconfiguring of the underlying epistemology to openly recognize the contingent and relational nature of urban systems, processes and science; and the adoption of ethical principles designed to realize benefits of smart cities and urban science while reducing pernicious effects. This article is part of the themed issue \u2018The ethical impact of data science\u2019.", "which has research problem ?", "Smart cities", 266.0, 278.0], ["Since acquiring pixel-wise annotations for training convolutional neural networks for semantic image segmentation is time-consuming, weakly supervised approaches that only require class tags have been proposed. In this work, we propose another form of supervision, namely image captions as they can be found on the Internet. These captions have two advantages. They do not require additional curation as is the case for the clean class tags used by current weakly supervised approaches and they provide textual context for the classes present in an image. To leverage such textual context, we deploy a multi-modal network that learns a joint embedding of the visual representation of the image and the textual representation of the caption. The network estimates text activation maps (TAMs) for class names as well as compound concepts, i.e. combinations of nouns and their attributes. The TAMs of compound concepts describing classes of interest substantially improve the quality of the estimated class activation maps which are then used to train a network for semantic segmentation. We evaluate our method on the COCO dataset where it achieves state of the art results for weakly supervised image segmentation.", "which has research problem ?", "image segmentation", 95.0, 113.0], ["In the past few years, there has been a surge of interest in multi-modal problems, from image captioning to visual question answering and beyond. In this paper, we focus on hate speech detection in multi-modal memes wherein memes pose an interesting multi-modal fusion problem. We try to solve the Facebook Meme Challenge (Kiela et al., 2020) which aims to solve a binary classification problem of predicting whether a meme is hateful or not. A crucial characteristic of the challenge is that it includes \u201cbenign confounders\u201d to counter the possibility of models exploiting unimodal priors. The challenge states that the state-of-the-art models perform poorly compared to humans. During the analysis of the dataset, we realized that the majority of the data points which are originally hateful are turned into benign just by describing the image of the meme. Also, the majority of the multi-modal baselines give more preference to the hate speech (language modality). To tackle these problems, we explore the visual modality using object detection and image captioning models to fetch the \u201cactual caption\u201d and then combine it with the multi-modal representation to perform binary classification. This approach tackles the benign text confounders present in the dataset to improve the performance. 
Another approach we experiment with is to improve the prediction with sentiment analysis. Instead of only using multi-modal representations obtained from pre-trained neural networks, we also include the unimodal sentiment to enrich the features. We perform a detailed analysis of the above two approaches, providing compelling reasons in favor of the methodologies used.", "which has research problem ?", "hate speech detection", 173.0, 194.0], ["Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet requiring tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data. We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French datasets, without using even a single parallel sentence at training time.", "which has research problem ?", "Machine Translation", 0.0, 19.0], ["The chance to win a football match can be significantly increased if the right tactic is chosen and the behavior of the opposing team is well anticipated. For this reason, every professional football club employs a team of game analysts. However, at present game performance analysis is done manually and is therefore highly time-consuming. Consequently, automated tools to support the analysis process are required. In this context, one of the main tasks is to summarize team formations by patterns such as 4-4-2 that can give insights into tactical instructions and patterns. In this paper, we introduce an analytics approach that automatically classifies and visualizes the team formation based on the players' position data. We focus on single match situations instead of complete halftimes or matches to provide a more detailed analysis. The novel classification approach calculates the similarity based on pre-defined templates for different tactical formations. A detailed analysis of individual match situations depending on ball possession and match segment length is provided. For this purpose, a visual summary is utilized that summarizes the team formation in a match segment. An expert annotation study is conducted that demonstrates 1) the complexity of the task and 2) the usefulness of the visualization of single situations to understand team formations. The suggested classification approach outperforms existing methods for formation classification. In particular, our approach gives insights into the shortcomings of using patterns like 4-4-2 to describe team formations.", "which has research problem ?", "Formation Classification", 1471.0, 1495.0], ["Recent advances in off-policy deep reinforcement learning (RL) have led to impressive success in complex tasks from visual observations. Experience replay improves sample-efficiency by reusing experiences from the past, and convolutional neural networks (CNNs) process high-dimensional inputs effectively. 
However, such techniques demand high memory and computational bandwidth. In this paper, we present Stored Embeddings for Efficient Reinforcement Learning (SEER), a simple modification of existing off-policy RL methods, to address these computational and memory requirements. To reduce the computational overhead of gradient updates in CNNs, we freeze the lower layers of CNN encoders early in training due to early convergence of their parameters. Additionally, we reduce memory requirements by storing the low-dimensional latent vectors for experience replay instead of high-dimensional images, enabling an adaptive increase in the replay buffer capacity, a useful technique in constrained-memory settings. In our experiments, we show that SEER does not degrade the performance of RL agents while significantly saving computation and memory across a diverse set of DeepMind Control environments and Atari games.", "which has research problem ?", "Atari Games", 1206.0, 1217.0], ["A statistical\u2010dynamical downscaling method is used to estimate future changes of wind energy output (Eout) of a benchmark wind turbine across Europe at the regional scale. With this aim, 22 global climate models (GCMs) of the Coupled Model Intercomparison Project Phase 5 (CMIP5) ensemble are considered. The downscaling method uses circulation weather types and regional climate modelling with the COSMO\u2010CLM model. Future projections are computed for two time periods (2021\u20132060 and 2061\u20132100) following two scenarios (RCP4.5 and RCP8.5). The CMIP5 ensemble mean response reveals a more likely than not increase of mean annual Eout over Northern and Central Europe and a likely decrease over Southern Europe. There is some uncertainty with respect to the magnitude and the sign of the changes. Higher robustness in future changes is observed for specific seasons. Except for the Mediterranean area, an ensemble mean increase of Eout is simulated for winter and a decrease for the summer season, resulting in a strong increase of the intra\u2010annual variability for most of Europe. The latter is, in particular, probable during the second half of the 21st century under the RCP8.5 scenario. In general, signals are stronger for 2061\u20132100 compared to 2021\u20132060 and for RCP8.5 compared to RCP4.5. Regarding changes of the inter\u2010annual variability of Eout for Central Europe, the future projections strongly vary between individual models and also between future periods and scenarios within single models. This study showed for an ensemble of 22 CMIP5 models that changes in the wind energy potentials over Europe may take place in future decades. However, due to the uncertainties detected in this research, further investigations with multi\u2010model ensembles are needed to provide a better quantification and understanding of the future changes.", "which has research problem ?", "future changes", 63.0, 77.0], ["Future generations of NASA and U.S. Air Force vehicles will require lighter mass while being subjected to higher loads and more extreme service conditions over longer time periods than the present generation. Current approaches for certification, fleet management and sustainment are largely based on statistical distributions of material properties, heuristic design philosophies, physical testing and assumed similitude between testing and operational conditions and will likely be unable to address these extreme requirements. 
To address the shortcomings of conventional approaches, a fundamental paradigm shift is needed. This paradigm shift, the Digital Twin, integrates ultra-high fidelity simulation with the vehicle's on-board integrated vehicle health management system, maintenance history and all available historical and fleet data to mirror the life of its flying twin and enable unprecedented levels of safety and reliability.", "which has research problem ?", "digital twin", 651.0, 663.0], ["This paper examines end-user training (EUT) in enterprise resource planning (ERP) systems, with the aim of identifying whether current EUT research is applicable to ERP systems. An extensive review and analysis of EUT research in mainstream IS journals was undertaken. The findings of this analysis were compared to views expressed by a leading ERP trainer in a large Australian company. The principles outlined in the EUT literature were used to construct the Training, Education and Learning Strategy model for an ERP environment. Our analysis found very few high-quality empirical studies involving EUT training in such an environment. Moreover, we argue that while the extensive EUT literature provides a rich source of ideas about ERP training, the findings of many studies cannot be transferred to ERP systems, as these systems are inherently more complex than the office-based, non-mandatory applications upon which most IS EUT research is based.", "which has research problem ?", "Enterprise resource planning", 47.0, 75.0], ["Eye localization is an important part in face recognition system, because its precision closely affects the performance of face recognition. Although various methods have already achieved high precision on the face images with high quality, their precision will drop on low quality images. In this paper, we propose a robust eye localization method for low quality face images to improve the eye detection rate and localization precision. First, we propose a probabilistic cascade (P-Cascade) framework, in which we reformulate the traditional cascade classifier in a probabilistic way. The P-Cascade can give chance to each image patch contributing to the final result, regardless the patch is accepted or rejected by the cascade. Second, we propose two extensions to further improve the robustness and precision in the P-Cascade framework. There are: (1) extending feature set, and (2) stacking two classifiers in multiple scales. Extensive experiments on JAFFE, BioID, LFW and a self-collected video surveillance database show that our method is comparable to state-of-the-art methods on high quality images and can work well on low quality images. This work supplies a solid base for face recognition applications under unconstrained or surveillance environments.", "which has research problem ?", "Eye localization", 0.0, 16.0], ["Enterprise architecture for smart cities is the focus of the research project \u201cEADIC - (Developing an Enterprise Architecture for Digital Cities)\u201d which is the context of the reported results in this work. We report in detail the results of a survey we conducted. Using these results we identify important quality and functional requirements for smart cities. Important quality properties include interoperability, usability, security, availability, recoverability and maintainability. We also observe business-related issues such as an apparent uncertainty on who is selling services, the lack of business plan in most cases and uncertainty in commercialization of services. 
At the software architecture domain we present a conceptual architectural framework based on architectural patterns which address the identified quality requirements. The conceptual framework can be used as a starting point for actual smart cities' projects.", "which has research problem ?", "Enterprise architecture", 0.0, 23.0], ["This paper investigates the relationship between CO2 emission, real GDP, energy consumption, urbanization and trade openness for 10 selected Central and Eastern European Countries (CEECs), including Albania, Bulgaria, Croatia, Czech Republic, Macedonia, Hungary, Poland, Romania, Slovak Republic and Slovenia for the period of 1991\u20132011. The results show that the environmental Kuznets curve (EKC) hypothesis holds for these countries. The fully modified ordinary least squares (FMOLS) results reveal that a 1% increase in energy consumption leads to a %1.0863 increase in CO2 emissions. Results for the existence and direction of panel Vector Error Correction Model (VECM) Granger causality method show that there is bidirectional causal relationship between CO2 emissions - real GDP and energy consumption-real GDP as well.", "which has research problem ?", "CO2 emissions", 578.0, 591.0], ["This work addresses the challenge of hate speech detection in Internet memes, and attempts using visual information to automatically detect hate speech, unlike any previous work of our knowledge. Memes are pixel-based multimedia documents that contain photos or illustrations together with phrases which, when combined, usually adopt a funny meaning. However, hate memes are also used to spread hate through social networks, so their automatic detection would help reduce their harmful societal impact. Our results indicate that the model can learn to detect some of the memes, but that the task is far from being solved with this simple architecture. While previous work focuses on linguistic hate speech, our experiments indicate how the visual modality can be much more informative for hate speech detection than the linguistic one in memes. In our experiments, we built a dataset of 5,020 memes to train and evaluate a multi-layer perceptron over the visual and language representations, whether independently or fused. The source code and models are available at this https URL.", "which has research problem ?", "hate speech detection", 37.0, 58.0], ["We present a novel end-to-end neural model to extract entities and relations between them. Our recurrent neural network based model captures both word sequence and dependency tree substructure information by stacking bidirectional tree-structured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows our model to jointly represent both entities and relations with shared parameters in a single model. We further encourage detection of entities during training and use of entity information in relation extraction via entity pretraining and scheduled sampling. Our model improves over the state-of-the-art feature-based model on end-to-end relation extraction, achieving 12.1% and 5.7% relative error reductions in F1-score on ACE2005 and ACE2004, respectively. We also show that our LSTM-RNN based model compares favorably to the state-of-the-art CNN based model (in F1-score) on nominal relation classification (SemEval-2010 Task 8). 
Finally, we present an extensive ablation analysis of several model components.", "which has research problem ?", "End-to-end Relation Extraction", 636.0, 666.0], ["The related models in iron and steel product life cycle (IS-PLC), from order, design, purchase, scheduling to specific manufacturing processes (i.e., coking, sintering, blast furnace iron-making, converter, steel-making, continuous steel casting, rolling) is characterized by large-scale, multi-objective, multi-physics, dynamic uncertainty and complicated constraint. To achieve complex task in IS-PLC, involved models need be interrelated and interact, but how to build digital twin models in each IS-PLC stage, and carry out fusion between models and data to achieve virtual space (VS) and physical space (PS) intercorrelation, is a key technology in IS-PLC. In this paper, digital twins modeling and its fusion data problem in each IS-PLC stage are preliminary discussed.", "which has research problem ?", "digital twin", 472.0, 484.0], ["Search-based optimization techniques (e.g., hill climbing, simulated annealing, and genetic algorithms) have been applied to a wide variety of software engineering activities including cost estimation, next release problem, and test generation. Several search based test generation techniques have been developed. These techniques had focused on finding suites of test data to satisfy a number of control-flow or data-flow testing criteria. Genetic algorithms have been the most widely employed search-based optimization technique in software testing issues. Recently, there are many novel search-based optimization techniques have been developed such as Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), Artificial Immune System (AIS), and Bees Colony Optimization. ACO and AIS have been employed only in the area of control-flow testing of the programs. This paper aims at employing the ACO algorithms in the issue of software data-flow testing. The paper presents an ant colony optimization based approach for generating set of optimal paths to cover all definition-use associations (du-pairs) in the program under test. Then, this approach uses the ant colony optimization to generate suite of test-data for satisfying the generated set of paths. In addition, the paper introduces a case study to illustrate our approach. Keywordsdata-flow testing; path-cover generation, test-data generation; ant colony optimization algorithms", "which has research problem ?", "Ant Colony Optimization", 655.0, 678.0], ["The effect of rhizosphere soil or root tissues amendments on the microbial mineralisation of hydrocarbons in soil slurry by the indigenous microbial communities has been investigated. In this study, rhizosphere soil and root tissues of reed canary grass (Phalaris arundinacea), channel grass (Vallisneria spiralis), blackberry (Rubus fructicosus) and goat willow (Salix caprea) were collected from the former Shell and Imperial Industries (ICI) Refinery site in Lancaster, UK. The rates and extents of 14C\u2013hydrocarbons (naphthalene, phenanthrene, hexadecane or octacosane) mineralisation in artificially spiked soils were monitored in the absence and presence of 5% (wet weight) of rhizosphere soil or root tissues. Respirometric and microbial assays were monitored in fresh (0 d) and pre\u2013incubated (28 d) artificially spiked soils following amendment with rhizosphere soil or root tissues. 
There were significant increases (P < 0.001) in the extents of 14C\u2013naphthalene and 14C\u2013phenanthrene mineralisation in fresh artificially spiked soils amended with rhizosphere soil and root tissues compared to those measured in unamended soils. However, amendment of fresh artificially spiked soils with rhizosphere soil and root tissues did not enhance the microbial mineralisation of 14C\u2013hexadecane or 14C\u2013octacosane by indigenous microbial communities. Apart from artificially spiked soil systems containing naphthalene (amended with reed canary grass and channel grass rhizosphere) and hexadecane amended with goat willow rhizosphere, microbial mineralisation of hydrocarbons was further enhanced following 28 d soil\u2013organic contaminants pre\u2013exposure and subsequent amendment with rhizosphere soil or root tissues. This study suggests that organic chemicals in roots and/or rhizosphere can enhance the microbial degradation of petroleum hydrocarbons in freshly contaminated soil by supporting higher numbers of hydrocarbon\u2013degrading populations, promoting microbial activity and/or enhancing bioavailability of organic contaminants.", "which has research problem ?", "The effect of rhizosphere soil or root tissues amendments on the microbial mineralisation of hydrocarbons in soil slurry by the indigenous microbial communities has been investigated. ", 0.0, 184.0], ["Artemisia herba-alba Asso has been successfully cultivated in the Tunisian arid zone. However, information regarding the effects of the harvest frequency on its biomass and essential oil yields is very limited. In this study, the effects of three different frequencies of harvesting the upper half of the A. herba-alba plant tuft were compared. The harvest treatments were: harvesting the same individual plants at the flowering stage annually; harvesting the same individual plants at the full vegetative growth stage annually and harvesting the same individual plants every six months. Statistical analyses indicated that all properties studied were affected by the harvest frequency. Essential oil yield depended both on the dry biomass and its essential oil content, and was significantly higher from plants harvested annually at the flowering stage than the other two treatments. The composition of the \u03b2- and \u03b1-thujone-rich oils did not vary throughout the experimental period.", "which has research problem ?", "Oil", 250.0, 253.0], ["A statistical\u2010dynamical downscaling method is used to estimate future changes of wind energy output (Eout) of a benchmark wind turbine across Europe at the regional scale. With this aim, 22 global climate models (GCMs) of the Coupled Model Intercomparison Project Phase 5 (CMIP5) ensemble are considered. The downscaling method uses circulation weather types and regional climate modelling with the COSMO\u2010CLM model. Future projections are computed for two time periods (2021\u20132060 and 2061\u20132100) following two scenarios (RCP4.5 and RCP8.5). The CMIP5 ensemble mean response reveals a more likely than not increase of mean annual Eout over Northern and Central Europe and a likely decrease over Southern Europe. There is some uncertainty with respect to the magnitude and the sign of the changes. Higher robustness in future changes is observed for specific seasons. 
Except from the Mediterranean area, an ensemble mean increase of Eout is simulated for winter and a decreasing for the summer season, resulting in a strong increase of the intra\u2010annual variability for most of Europe. The latter is, in particular, probable during the second half of the 21st century under the RCP8.5 scenario. In general, signals are stronger for 2061\u20132100 compared to 2021\u20132060 and for RCP8.5 compared to RCP4.5. Regarding changes of the inter\u2010annual variability of Eout for Central Europe, the future projections strongly vary between individual models and also between future periods and scenarios within single models. This study showed for an ensemble of 22 CMIP5 models that changes in the wind energy potentials over Europe may take place in future decades. However, due to the uncertainties detected in this research, further investigations with multi\u2010model ensembles are needed to provide a better quantification and understanding of the future changes.", "which has research problem ?", "wind energy potentials", 1577.0, 1599.0], ["With the advances in new-generation information technologies, especially big data and digital twin, smart manufacturing is becoming the focus of global manufacturing transformation and upgrading. Intelligence comes from data. Integrated analysis for the manufacturing big data is beneficial to all aspects of manufacturing. Besides, the digital twin paves a way for the cyber-physical integration of manufacturing, which is an important bottleneck to achieve smart manufacturing. In this paper, the big data and digital twin in manufacturing are reviewed, including their concept as well as their applications in product design, production planning, manufacturing, and predictive maintenance. On this basis, the similarities and differences between big data and digital twin are compared from the general and data perspectives. Since the big data and digital twin can be complementary, how they can be integrated to promote smart manufacturing are discussed.", "which has research problem ?", "digital twin", 86.0, 98.0], ["We present a counterfactual recognition (CR) task, the shared Task 5 of SemEval-2020. Counterfactuals describe potential outcomes (consequents) produced by actions or circumstances that did not happen or cannot happen and are counter to the facts (antecedent). Counterfactual thinking is an important characteristic of the human cognitive system; it connects antecedents and consequent with causal relations. Our task provides a benchmark for counterfactual recognition in natural language with two subtasks. Subtask-1 aims to determine whether a given sentence is a counterfactual statement or not. Subtask-2 requires the participating systems to extract the antecedent and consequent in a given counterfactual statement. During the SemEval-2020 official evaluation period, we received 27 submissions to Subtask-1 and 11 to Subtask-2. Our data and baseline code are made publicly available at https://zenodo.org/record/3932442. The task website and leaderboard can be found at https://competitions.codalab.org/competitions/21691.", "which has research problem ?", "Counterfactual recognition", 13.0, 39.0], ["Enterprise resource planning (ERP) system is an enterprise management system, currently in high demand by both manufacturing and service organizations. Recently, ERP systems have been drawn an important amount of attention by researchers and top managers. 
This paper will summarize the previous research literature about ERP application from an integrative review, and further research issues have been introduced to guide the future direction of research.", "which has research problem ?", "Enterprise resource planning", 0.0, 28.0], ["The simple assembly line balancing problem (SALBP) concerns the assignment of tasks with pre-defined processing times to work stations that are arranged in a line. Hereby, precedence constraints between the tasks must be respected. The optimization goal of the SALBP-2 variant of the problem concerns the minimization of the so-called cycle time, that is, the time in which the tasks of each work station must be completed. In this work we propose to tackle this problem with an iterative search method based on beam search. The proposed algorithm is able to generate optimal solutions, respectively the best upper bounds, for 283 out of 302 test cases. Moreover, for 9 further test cases the algorithm is able to improve the currently best upper bounds. These numbers indicate that the proposed iterative beam search algorithm is currently a state-of-the-art method for the SALBP-2", "which has research problem ?", "Simple assembly line balancing problem (SALBP)", NaN, NaN], ["The Toxic Spans Detection task of SemEval-2021 required participants to predict the spans of toxic posts that were responsible for the toxic label of the posts. The task could be addressed as supervised sequence labeling, using training data with gold toxic spans provided by the organisers. It could also be treated as rationale extraction, using classifiers trained on potentially larger external datasets of posts manually annotated as toxic or not, without toxic span annotations. For the supervised sequence labeling approach and evaluation purposes, posts previously labeled as toxic were crowd-annotated for toxic spans. Participants submitted their predicted spans for a held-out test set and were scored using character-based F1. This overview summarises the work of the 36 teams that provided system descriptions.", "which has research problem ?", "Toxic Spans Detection", 4.0, 25.0], ["Transformers are state-of-the-art models for a variety of sequence modeling tasks. At their core is an attention function which models pairwise interactions between the inputs at every timestep. While attention is powerful, it does not scale efficiently to long sequences due to its quadratic time and space complexity in the sequence length. We propose RFA, a linear time and space attention that uses random feature methods to approximate the softmax function, and explore its application in transformers. RFA can be used as a drop-in replacement for conventional softmax attention and offers a straightforward way of learning with recency bias through an optional gating mechanism. Experiments on language modeling and machine translation demonstrate that RFA achieves similar or better performance compared to strong transformer baselines. In the machine translation experiment, RFA decodes twice as fast as a vanilla transformer. Compared to existing efficient transformer variants, RFA is competitive in terms of both accuracy and efficiency on three long text classification datasets. 
Our analysis shows that RFA\u2019s efficiency gains are especially notable on long sequences, suggesting that RFA will be particularly useful in tasks that require working with large inputs, fast decoding speed, or low memory footprints.", "which has research problem ?", "Machine Translation", 722.0, 741.0], ["Extractive reading comprehension systems can often locate the correct answer to a question in a context document, but they also tend to make unreliable guesses on questions for which the correct answer is not stated in the context. Existing datasets either focus exclusively on answerable questions, or use automatically generated unanswerable questions that are easy to identify. To address these weaknesses, we present SQuADRUn, a new dataset that combines the existing Stanford Question Answering Dataset (SQuAD) with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuADRUn, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. SQuADRUn is a challenging natural language understanding task for existing models: a strong neural system that gets 86% F1 on SQuAD achieves only 66% F1 on SQuADRUn. We release SQuADRUn to the community as the successor to SQuAD.", "which has research problem ?", "Question Answering ", 481.0, 500.0], ["Developing a supply chain management (SCM) system is costly, but important. However, because of its complicated nature, not many of such projects are considered successful. Few research publications directly relate to key success factors (KSFs) for implementing and operating a SCM system. Motivated by the above, this research proposes two hierarchies of KSFs for SCM system implementation and operation phase respectively in the semiconductor industry by using a two-step approach. First, a literature review indicates the initial hierarchy. The second step includes a focus group approach to finalize the proposed KSF hierarchies by extracting valuable experiences from executives and managers that actively participated in a project, which successfully establish a seamless SCM integration between the world's largest semiconductor foundry manufacturing company and the world's largest assembly and testing company. Finally, this research compared the KSF's between the two phases and made a conclusion. Future project executives may refer the resulting KSF hierarchies as a checklist for SCM system implementation and operation in semiconductor or related industries.", "which has research problem ?", "Supply chain management", 13.0, 36.0], ["Making a city \"smart\" is emerging as a strategy to mitigate the problems generated by the urban population growth and rapid urbanization. Yet little academic research has sparingly discussed the phenomenon. To close the gap in the literature about smart cities and in response to the increasing use of the concept, this paper proposes a framework to understand the concept of smart cities. Based on the exploration of a wide and extensive array of literature from various disciplinary areas we identify eight critical factors of smart city initiatives: management and organization, technology, governance, policy context, people and communities, economy, built infrastructure, and natural environment. These factors form the basis of an integrative framework that can be used to examine how local governments are envisioning smart city initiatives. 
The framework suggests directions and agendas for smart city research and outlines practical implications for government professionals.", "which has research problem ?", "Smart cities", 248.0, 260.0], ["A recent survey of the empirical studies examining the effects of exchange rate volatility on international trade concluded that \"the large majority of empirical studies... are unable to establish a systematically significant link between measured exchange rate variability and the volume of international trade, whether on an aggregated or on a bilateral basis\" (International Monetary Fund, Exchange Rate Volatility and World Trade, Washington, July 1984, p. 36). A recent paper by M.A. Akhtar and R.S. Hilton (\"Exchange Rate Uncertainty and International Trade,\" Federal Reserve Bank of New York, May 1984), in contrast, suggests that exchange rate volatility, as measured by the standard deviation of indices of nominal effective exchange rates, has had significant adverse effects on the trade in manufactures of the United States and the Federal Republic of Germany. The purpose of the present study is to test the robustness of Akhtar and Hilton's empirical results, with their basic theoretical framework taken as given. The study extends their analysis to include France, Japan, and the United Kingdom; it then examines the robustness of the results with respect to changes in the choice of sample period, volatility measure, and estimation techniques. The main conclusion of the analysis is that the methodology of Akhtar and Hilton fails to establish a systematically significant link between exchange rate volatility and the volume of international trade. This is not to say that significant adverse effects cannot be detected in individual cases, but rather that, viewed in the large, the results tend to be insignificant or unstable. Specifically, the results suggest that straightforward application of Akhtar and Hilton's methodology to three additional countries (France, Japan, and the United Kingdom) yields mixed results; that their methodology seems to be flawed in several respects, and that correction for such flaws has the effect of weakening their conclusions; that the estimates are quite sensitive to fairly minor variations in methodology; and that \"revised\" estimates for the five countries do not, for the most part, support the hypothesis that exchange rate volatility has had a systematically adverse effect on trade.
Hilton (\"Exchange Rate Uncertainty and International Trade\", Federal Reserve Bank of New York, mai 1984) soutient que l'instabilitA\u00a9 des taux de change, mesurA\u00a9e par l'A\u00a9cart type des indices des taux de change effectifs nominaux, a eu un effet dA\u00a9favorable significatif sur le commerce de produits manufacturA\u00a9s des Etats-Unis et de la RA\u00a9publique fA\u00a9dA\u00a9rale d'Allemagne. La prA\u00a9sente A\u00a9tude a pour objet d'A\u00a9valuer la soliditA\u00a9 des rA\u00a9sultats empiriques prA\u00a9sentA\u00a9s par Akhtar et Hilton, en prenant comme donnA\u00a9 leur cadre thA\u00a9orique de base. L'auteur A\u00a9tend l'analyse au cas de la France, du Japon et du Royaume-Uni; elle cherche ensuite dans quelle mesure ces rA\u00a9sultats restent valables si l'on modifie la pA\u00a9riode de rA\u00a9fA\u00a9rence, la mesure de l'instabilitA\u00a9 et les techniques d'estimation. La principale conclusion de cette A\u00a9tude est que la mA\u00a9thode utilisA\u00a9e par Akhtar et Hilton n'A\u00a9tablit pas de lien significatif et systA\u00a9matique entre l'instabilitA\u00a9 des taux de change et le volume du commerce international. Ceci ne veut pas dire que l'on ne puisse pas constater dans certains cas particuliers des effets dA\u00a9favorables significatifs, mais plutA\u00b4t que, pris dans leur ensemble, les rA\u00a9sultats sont peu significatifs ou peu stables. Plus prA\u00a9cisA\u00a9ment, cette A\u00a9tude laisse entendre qu'une application systA\u00a9matique de la mA\u00a9thode d'Akhtar et Hilton A trois pays supplA\u00a9mentaires (France, Japon et Royaume-Uni) donne des rA\u00a9sultats mitigA\u00a9s; que leur mA\u00a9thode semble prA\u00a9senter plusieurs dA\u00a9fauts et que la correction de ces dA\u00a9fauts a pour effet d'affaiblir la portA\u00a9e de leurs conclusions; que leurs estimations sont trA\u00a8s sensibles A des variations relativement mineures de la mA\u00a9thode utilisA\u00a9e et que la plupart des estimations \"rA\u00a9visA\u00a9es\" pour les cinq pays ne confirment pas l'hypothA\u00a8se selon laquelle l'instabilitA\u00a9 des taux de change aurait eu un effet systA\u00a9matiquement nA\u00a9gatif sur le commerce international. /// En un examen reciente de los estudios empA\u00adricos sobre los efectos de la inestabilidad de los tipos de cambio en el comercio internacional se llega a la conclusiA\u00b3n de que \"la gran mayorA\u00ada de estos anAilisis empA\u00adricos no consiguen demostrar sistemAiticamente un vA\u00adnculo significativo entre los diferentes grados de variabilidad cambiaria y el volumen del comercio internacional, tanto sea en tA\u00a9rminos agregados como bilaterales\". (Fondo Monetario Internacional, Exchange Rate Volatility and World Trade, Washington, julio de 1984, pAig. 36). Un estudio reciente de M.A. Akhtar y R.S. Hilton (\"Exchange Rate Uncertainty and International Trade,\" Banco de la Reserva Federal de Nueva York, mayo de 1984) indica, por el contrario, que la inestabilidad de los tipos de cambio, expresada segAon la desviaciA\u00b3n estAindar de los A\u00adndices de los tipos de cambio efectivos nominales, ha tenido efectos negativos considerables en el comercio de productos manufacturados de Estados Unidos y de la RepAoblica Federal de Alemania. El presente estudio tiene por objeto comprobar la solidez de los resultados empA\u00adricos de Akhtar y Hilton, tomando como base de partida su marco teA\u00b3rico bAisico. 
", "which has research problem ?", "Exchange rate volatility", 66.0, 90.0], ["Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meal. However, some of them need human help for calorie estimation, and even if it is automatic, food categories are often limited or images from multiple viewpoints are required. Then, it is not yet achieved to estimate food calorie with practical accuracy and estimating food calories from a food photo is an unsolved problem. Therefore, in this paper, we propose estimating food calorie from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning. Since there exists a strong correlation between food calories and food categories, ingredients and cooking directions information in general, we expect that simultaneous training of them brings performance boosting compared to independent single training. To this end, we use a multi-task CNN [1]. In addition, in this research, we construct two kinds of datasets that is a dataset of calorie-annotated recipe collected from Japanese recipe sites on the Web and a dataset collected from an American recipe site. In this experiment, we trained multi-task and single-task CNNs. As a result, the multi-task CNN achieved the better performance on both food category estimation and food calorie estimation than single-task CNNs. For the Japanese recipe dataset, by introducing a multi-task CNN, 0.039 were improved on the correlation coefficient, while for the American recipe dataset, 0.090 were raised compared to the result by the single-task CNN.", "which has research problem ?", "Food category estimation", 1268.0, 1292.0], ["Purpose \u2013 The semiconductor market exceeded US$250 billion worldwide in 2010 and has had a double\u2010digit compound annual growth rate (CAGR) in the last 20 years. As it is located far upstream of the electronic product market, the semiconductor industry has suffered severely from the \u201cbullwhip\u201d effect. 
Therefore, effective e\u2010based supply chain management (e\u2010SCM) has become imperative for the efficient operation of semiconductor manufacturing (SM) companies. The purpose of this research is to define and analyze the key success factors (KSF) for e\u2010SCM system implementation in the semiconductor industry. Design/methodology/approach \u2013 A hierarchy of KSFs is defined first by a combination of a literature review and a focus group discussion with experts who successfully implemented an inter\u2010organizational e\u2010SCM project. Fuzzy analytic hierarchy process (FAHP) is then employed to rank the importance of these identified KSFs. To confirm the research result and further explore the managerial implications, a second in...", "which has research problem ?", "Supply chain management", 331.0, 354.0], ["To discuss the implications of CSF abnormalities for the course of acute monosymptomatic optic neuritis (AMON), various CSF markers were analysed in patients being randomly selected from a population-based cohort. Paired serum and CSF were obtained within a few weeks from onset of AMON. CSF-restricted oligoclonal IgG bands, free kappa and free lambda chain bands were observed in 17, 15, and nine of 27 examined patients, respectively. Sixteen patients showed a polyspecific intrathecal synthesis of oligoclonal IgG antibodies against one or more viruses. At 1 year follow-up five patients had developed clinically definite multiple sclerosis (CDMS); all had CSF oligoclonal IgG bands and virus-specific oligoclonal IgG antibodies at onset. Due to the relative small number studied at the short-term follow-up, no firm conclusion of the prognostic value of these analyses could be reached. CSF Myelin Basic Protein-like material was increased in only two of 29 patients with AMON, but may have potential value in reflecting disease activity, as the highest values were obtained among patients with CSF sampled soon after the worst visual acuity was reached, and among patients with severe visual impairment. In most previous studies of patients with AMON qualitative and quantitative analyses of CSF IgG had a predictive value for development of CDMS, but the results are conflicting.", "which has research problem ?", "Multiple sclerosis", 626.0, 644.0], ["The significant development in global information technologies and the ever-intensifying competitive market climate have both pushed many companies to transform their businesses. Enterprise resource planning (ERP) is seen as one of the most recently emerging process-orientation tools that can enable such a transformation. Its development has presented both researchers and practitioners with new challenges and opportunities. This paper provides a comprehensive review of the state of research in the ERP field relating to process management, organizational change and knowledge management. It surveys current practices, research and development, and suggests several directions for future investigation. Copyright \u00a9 2001 John Wiley & Sons, Ltd.", "which has research problem ?", "Enterprise resource planning", 179.0, 207.0], ["Providing model-generated explanations in recommender systems is important to user experience. State-of-the-art recommendation algorithms\u2014especially the collaborative filtering (CF)-based approaches with shallow or deep models\u2014usually work with various unstructured information sources for recommendation, such as textual reviews, visual images, and various implicit or explicit feedbacks. 
Though structured knowledge bases were considered in content-based approaches, they have been largely ignored recently due to the availability of vast amounts of data and the learning power of many complex models. However, structured knowledge bases exhibit unique advantages in personalized recommendation systems. When the explicit knowledge about users and items is considered for recommendation, the system could provide highly customized recommendations based on users\u2019 historical behaviors and the knowledge is helpful for providing informed explanations regarding the recommended items. A great challenge for using knowledge bases for recommendation is how to integrate large-scale structured and unstructured data, while taking advantage of collaborative filtering for highly accurate performance. Recent achievements in knowledge-base embedding (KBE) sheds light on this problem, which makes it possible to learn user and item representations while preserving the structure of their relationship with external knowledge for explanation. In this work, we propose to explain knowledge-base embeddings for explainable recommendation. Specifically, we propose a knowledge-base representation learning framework to embed heterogeneous entities for recommendation, and based on the embedded knowledge base, a soft matching algorithm is proposed to generate personalized explanations for the recommended items. Experimental results on real-world e-commerce datasets verified the superior recommendation performance and the explainability power of our approach compared with state-of-the-art baselines.", "which has research problem ?", "Recommender Systems", 42.0, 61.0], ["Enterprise resource planning (ERP) systems are currently involved into every aspect of organization as they provide a highly integrated solution to meet the information system needs. ERP systems have attracted a large amount of researchers and practitioners attention and received a variety of investigate and study. In this paper, we have selected a certain number of papers concerning ERP systems between 1998 and 2006, and this is by no means a comprehensive review. The literature is further classified by its topic and the major outcomes and research methods of each study are addressed. Following implications for future research are provided.", "which has research problem ?", "Enterprise resource planning", 0.0, 28.0], ["Without real bilingual corpus available, unsupervised Neural Machine Translation (NMT) typically requires pseudo parallel data generated with the back-translation method for the model training. However, due to weak supervision, the pseudo data inevitably contain noises and errors that will be accumulated and reinforced in the subsequent training process, leading to bad translation performance. To address this issue, we introduce phrase based Statistic Machine Translation (SMT) models which are robust to noisy data, as posterior regularizations to guide the training of unsupervised NMT models in the iterative back-translation process. Our method starts from SMT models built with pre-trained language models and word-level translation tables inferred from cross-lingual embeddings. Then SMT and NMT models are optimized jointly and boost each other incrementally in a unified EM framework. 
In this way, (1) the negative effect caused by errors in the iterative back-translation process can be alleviated timely by SMT filtering noises from its phrase tables; meanwhile, (2) NMT can compensate for the deficiency of fluency inherent in SMT. Experiments conducted on en-fr and en-de translation tasks show that our method outperforms the strong baseline and achieves new state-of-the-art unsupervised machine translation performance.", "which has research problem ?", "Unsupervised Machine Translation", 1301.0, 1333.0], ["4OASIS3.2\u20135 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations show a warming drift of 0.35 \u00b0C in ACCESS1.0 and 0.04 \u00b0C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.", "which has research problem ?", "CMIP5", 277.0, 282.0], ["Environmental Kuznets curve (EKC) analysis links changes in environmental quality to national economic growth. The reduced form models, however, do not provide insight into the underlying processes that generate these changes. We compare EKC models to structural transition models of per capita CO2 emissions and per capita GDP, and find that, for the 16 countries which have undergone such a transition, the initiation of the transition correlates not with income levels but with historic events related to the oil price shocks of the 1970s and the policies that followed them. In contrast to previous EKC studies of CO2 the transition away from positive emissions elasticities for these 16 countries is found to occur as a sudden, discontinuous transition rather than as a gradual change. We also demonstrate that the third order polynomial 'N' dependence of emissions on income is the result of data aggregation. 
We conclude that neither the 'U'- nor the 'N'-shaped relationship between CO2 emissions and income provide a reliable indication of future behaviour.", "which has research problem ?", "CO2 emissions", 295.0, 308.0], ["Researchers in computational linguistics have long speculated that the nuclei of the rhetorical structure tree of a text form an adequate \"summary\" of the text for which that tree was built. However, to my knowledge, there has been no experiment to confirm how valid this speculation really is. In this paper, I describe a psycholinguistic experiment that shows that the concepts of discourse structure and nuclearity can be used effectively in text summarization. More precisely, I show that there is a strong correlation between the nuclei of the discourse structure of a text and what readers perceive to be the most important units in that text. In addition, I propose and evaluate the quality of an automatic, discourse-based summarization system that implements the methods that were validated by the psycholinguistic experiment. The evaluation indicates that although the system does not match yet the results that would be obtained if discourse trees had been built manually, it still significantly outperforms both a baseline algorithm and Microsoft's Office97 summarizer. Traditionally, previous approaches to automatic text summarization have assumed that the salient parts of a text can be determined by applying one or more of the following assumptions: important sentences in a text contain words that are used frequently (Luhn 1958; Edmundson 1968); important sentences contain words that are used in the title and section headings (Edmundson 1968); important sentences are located at the beginning or end of paragraphs (Baxendale 1958); important sentences are located at positions in a text that are genre dependent, and these positions can be determined automatically, through training; important sentences use bonus words such as \"greatest\" and \"significant\" or indicator phrases such as \"the main aim of this paper\" and \"the purpose of this article\", while unimportant sentences use stigma words such as \"hardly\" and \"impossible\"; important sentences and concepts are the highest connected entities in elaborate semantic structures; important and unimportant sentences are derivable from a discourse representation of the text (Sparck Jones 1993b; Ono, Sumita, & Miike 1994). In determining the words that occur most frequently in a text or the sentences that use words that occur in the headings of sections, computers are accurate tools. Therefore, in testing the validity of using these indicators for determining the most important units in a text, it is adequate to compare the direct output of a summarization program that implements the assumption(s) under scrutiny with a human-made \u2026", "which has research problem ?", "Automatic text summarization", 1129.0, 1157.0], ["Programs offered by academic institutions in higher education need to meet specific standards that are established by the appropriate accreditation bodies. Curriculum mapping is an important part of the curriculum management process that is used to document the expected learning outcomes, ensure quality, and align programs and courses with industry standards. Semantic web languages can be used to express and share common agreement about the vocabularies used in the domain under study. In this paper, we present an approach based on ontology for curriculum mapping in higher education. 
Our proposed approach is focused on the creation of a core curriculum ontology that can support effective knowledge representation and knowledge discovery. The research work presents the case of ontology reuse through the extension of the curriculum ontology to support the creation of micro-credentials. We also present a conceptual framework for knowledge discovery to support various business use case scenarios based on ontology inferencing and querying operations.", "which has research problem ?", "curriculum mapping", 156.0, 174.0], ["ses a facial expression recognition system based on Gabor feature using a novel filter bank. Traditionally, a global Gabor filter bank with 5 frequencies and 8 orientations is often used to extract the Gabor feature, and the dimensions of such Gabor feature vector are prohibitively high. A novel local Gabor filter bank with part of frequency and orientation parameters is proposed. In order to evaluate the performance of the local Gabor filter bank, we first employed a two-stage feature compression method PCA plus LDA to select and compress the Gabor feature, then adopted minimum distance classifier to recognize facial expression. Experimental results show that the method is effective for dimension reduction and good recognition performance in comparison with the traditional entire Gabor filter bank. The best average recognition rate achieves 97.33% for the JAFFE facial expression database. Keywords: Gabor filter bank, feature extraction, PCA, LDA, facial expression recognition. Facial expressions deliver rich information about human emotion and play an essential role in human communication. In order to facilitate a more intelligent and natural human machine interface of new products, automatic facial expression recognition [1][18][20] had been studied worldwide for more than ten years, which has become a very active research area in computer vision and pattern recognition. There are many approaches have been proposed for facial expression analysis from static images and image sequences [12][18] in the literature. we focus on the recognition of facial expression from single digital images with feature extraction. A number of approaches have been developed for extracting facial features. Supported by: Motorola Labs Research Foundation (No.303D804372), NSFC (No.60275005), GDNSF 105938).", "which has research problem ?", "Facial Expression Recognition", 6.0, 35.0], ["For research institutes, data libraries, and data archives, validating RDF data according to predefined constraints is a much sought-after feature, particularly as this is taken for granted in the XML world. Based on our work in two international working groups on RDF validation and jointly identified requirements to formulate constraints and validate RDF data, we have published 81 types of constraints that are required by various stakeholders for data applications. In this paper, we evaluate the usability of identified constraint types for assessing RDF data quality by (1) collecting and classifying 115 constraints on vocabularies commonly used in the social, behavioral, and economic sciences, either from the vocabularies themselves or from domain experts, and (2) validating 15,694 data sets (4.26 billion triples) of research data against these constraints. We classify each constraint according to (1) the severity of occurring violations and (2) based on which types of constraint languages are able to express its constraint type. Based on the large-scale evaluation, we formulate several findings to direct the further development of constraint languages.", "which has research problem ?", "Data Quality", 561.0, 573.0], ["The Smart Cities movement has produced a large number of projects and experiments around the world. 
To understand the primary ones, as well as their underlying tensions and the insights emerging from them, the editors of this special issue of the California Management Review enlisted a panel of experts, academics, and practitioners from different nationalities, backgrounds, experiences, and perspectives. The panel focused its discussion on three main areas: new governance models for Smart Cities, how to spur growth and renewal, and the sharing economy\u2014both commons and market based.", "which has research problem ?", "Smart cities", 4.0, 16.0], ["A fused hexacyclic electron acceptor, IHIC, based on strong electron\u2010donating group dithienocyclopentathieno[3,2\u2010b]thiophene flanked by strong electron\u2010withdrawing group 1,1\u2010dicyanomethylene\u20103\u2010indanone, is designed, synthesized, and applied in semitransparent organic solar cells (ST\u2010OSCs). IHIC exhibits strong near\u2010infrared absorption with extinction coefficients of up to 1.6 \u00d7 10^5 m^\u22121 cm^\u22121, a narrow optical bandgap of 1.38 eV, and a high electron mobility of 2.4 \u00d7 10^\u22123 cm^2 V^\u22121 s^\u22121. The ST\u2010OSCs based on blends of a narrow\u2010bandgap polymer donor PTB7\u2010Th and narrow\u2010bandgap IHIC acceptor exhibit a champion power conversion efficiency of 9.77% with an average visible transmittance of 36% and excellent device stability; this efficiency is much higher than any single\u2010junction and tandem ST\u2010OSCs reported in the literature.", "which has research problem ?", "Organic solar cells", 260.0, 279.0], ["This article is a review of work published in various journals on the topics of Enterprise Resource Planning (ERP) between January 2000 and May 2006. A total of 313 articles from 79 journals are reviewed. The article intends to serve three goals. First, it will be useful to researchers who are interested in understanding what kinds of questions have been addressed in the area of ERP. Second, the article will be a useful resource for searching for research topics. Third, it will serve as a comprehensive bibliography of the articles published during the period. The literature is analysed under six major themes and nine sub-themes.", "which has research problem ?", "Enterprise resource planning", 80.0, 108.0], ["Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). 
The best systems achieved an F-score of 86.46% for the DNER task\u2014a result that approaches the human inter-annotator agreement (0.8875)\u2014and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system\u2019s ability to return real-time results: the average response time for each team\u2019s DNER and CID web service systems were 5.6 and 9.3 s, respectively. Most teams used hybrid systems for their submissions based on machining learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/", "which has research problem ?", "disease named entity recognition (DNER)", NaN, NaN], ["This paper proposes a new idea, namely genetic algorithms with dominant genes (GADG) in order to deal with FMS scheduling problems with alternative production routing. In the traditional genetic algorithm (GA) approach, crossover and mutation rates should be pre-defined. However, different rates applied in different problems will directly influence the performance of genetic search. Determination of optimal rates in every run is time-consuming and not practical in reality due to the infinite number of possible combinations. In addition, this crossover rate governs the number of genes to be selected to undergo crossover, and this selection process is totally arbitrary. The selected genes may not represent the potential critical structure of the chromosome. To tackle this problem, GADG is proposed. This approach does not require a defined crossover rate, and the proposed similarity operator eliminates the determination of the mutation rate. This idea helps reduce the computational time remarkably and improve the performance of genetic search. The proposed GADG will identify and record the best genes and structure of each chromosome. A new crossover mechanism is designed to ensure the best genes and structures to undergo crossover. The performance of the proposed GADG is testified by comparing it with other existing methodologies, and the results show that it outperforms other approaches.", "which has research problem ?", "Scheduling problems", 111.0, 130.0], ["This Editorial describes the rationale, focus, scope and technology behind the newly launched, open access, innovative Food Modelling Journal (FMJ). The Journal is designed to publish those outputs of the research cycle that usually precede the publication of the research article, but have their own value and re-usability potential. Such outputs are methods, models, software and data. The Food Modelling Journal is launched by the AGINFRA+ community and is integrated with the AGINFRA+ Virtual Research Environment (VRE) to facilitate and streamline the authoring, peer review and publication of the manuscripts via the ARPHA Publishing Platform.", "which has research problem ?", "publishing", 637.0, 647.0], ["The paper presents the SemEval-2012 Shared Task 5: Chinese Semantic Dependency Parsing. 
The goal of this task is to identify the dependency structure of Chinese sentences from the semantic view. We firstly introduce the motivation of providing Chinese semantic dependency parsing task, and then describe the task in detail including data preparation, data format, task evaluation, and so on. Over ten thousand sentences were labeled for participants to train and evaluate their systems. At last, we briefly describe the submitted systems and analyze these results.", "which has research problem ?", "Chinese Semantic Dependency Parsing", 51.0, 86.0], ["This paper presents the task \"Coreference Resolution in Multiple Languages\" to be run in SemEval-2010 (5th International Workshop on Semantic Evaluations). This task aims to evaluate and compare automatic coreference resolution systems for three different languages (Catalan, English, and Spanish) by means of two alternative evaluation metrics, thus providing an insight into (i) the portability of coreference resolution systems across languages, and (ii) the effect of different scoring metrics on ranking the output of the participant systems.", "which has research problem ?", "Coreference Resolution", 30.0, 52.0], ["Increasing population density in urban centers demands adequate provision of services and infrastructure to meet the needs of city inhabitants, encompassing residents, workers, and visitors. The utilization of information and communications technologies to achieve this objective presents an opportunity for the development of smart cities, where city management and citizens are given access to a wealth of real-time information about the urban environment upon which to base decisions, actions, and future planning. This paper presents a framework for the realization of smart cities through the Internet of Things (IoT). The framework encompasses the complete urban information system, from the sensory level and networking support structure through to data management and Cloud-based integration of respective systems and services, and forms a transformational part of the existing cyber-physical system. This IoT vision for a smart city is applied to a noise mapping case study to illustrate a new method for existing operations that can be adapted for the enhancement and delivery of important city services.", "which has research problem ?", "Internet of Things", 598.0, 616.0], ["Most of the definitions of a \u201csmart city\u201d make a direct or indirect reference to improving performance as one of the main objectives of initiatives to make cities \u201csmarter\u201d. Several evaluation approaches and models have been put forward in literature and practice to measure smart cities. However, they are often normative or limited to certain aspects of cities\u2019 \u201csmartness\u201d, and a more comprehensive and holistic approach seems to be lacking. Thus, building on a review of the literature and practice in the field, this paper aims to discuss the importance of adopting a holistic approach to the assessment of smart city governance and policy decision making. It also proposes a performance assessment framework that overcomes the limitations of existing approaches and contributes to filling the current gap in the knowledge base in this domain. One of the innovative elements of the proposed framework is its holistic approach to policy evaluation. 
It is designed to address a smart city\u2019s specificities and can benefit from the active participation of citizens in assessing the public value of policy decisions and their sustainability over time. We focus our attention on the performance measurement of codesign and coproduction by stakeholders and social innovation processes related to public value generation. More specifically, we are interested in the assessment of both the citizen centricity of smart city decision making and the processes by which public decisions are implemented, monitored, and evaluated as regards their capability to develop truly \u201cblended\u201d value services\u2014that is, simultaneously socially inclusive, environmentally friendly, and economically sustainable.", "which has research problem ?", "Smart cities", 275.0, 287.0], ["Abstract Background The goal of the first BioCreAtIvE challenge (Critical Assessment of Information Extraction in Biology) was to provide a set of common evaluation tasks to assess the state of the art for text mining applied to biological problems. The results were presented in a workshop held in Granada, Spain March 28\u201331, 2004. The articles collected in this BMC Bioinformatics supplement entitled \"A critical assessment of text mining methods in molecular biology\" describe the BioCreAtIvE tasks, systems, results and their independent evaluation. Results BioCreAtIvE focused on two tasks. The first dealt with extraction of gene or protein names from text, and their mapping into standardized gene identifiers for three model organism databases (fly, mouse, yeast). The second task addressed issues of functional annotation, requiring systems to identify specific text passages that supported Gene Ontology annotations for specific proteins, given full text articles. Conclusion The first BioCreAtIvE assessment achieved a high level of international participation (27 groups from 10 countries). The assessment provided state-of-the-art performance results for a basic task (gene name finding and normalization), where the best systems achieved a balanced 80% precision / recall or better, which potentially makes them suitable for real applications in biology. The results for the advanced task (functional annotation from free text) were significantly lower, demonstrating the current limitations of text-mining approaches where knowledge extrapolation and interpretation are required. In addition, an important contribution of BioCreAtIvE has been the creation and release of training and test data sets for both tasks. There are 22 articles in this special issue, including six that provide analyses of results or data quality for the data sets, including a novel inter-annotator consistency assessment for the test set used in task 2.", "which has research problem ?", "Information Extraction", 88.0, 110.0], ["Learning both hierarchical and temporal representation has been among the long-standing challenges of recurrent neural networks. Multiscale recurrent neural networks have been considered as a promising approach to resolve this issue, yet there has been a lack of empirical evidence showing that this type of models can actually capture the temporal dependencies by discovering the latent hierarchical structure of the sequence. 
In this paper, we propose a novel multiscale approach, called the hierarchical multiscale recurrent neural networks, which can capture the latent hierarchical structure in the sequence by encoding the temporal dependencies with different timescales using a novel update mechanism. We show some evidence that our proposed multiscale architecture can discover underlying hierarchical structure in the sequences without using explicit boundary information. We evaluate our proposed model on character-level language modelling and handwriting sequence modelling.", "which has research problem ?", "Language Modelling", 932.0, 950.0], ["Previous work on over-education has assumed homogeneity of workers and jobs. Relaxing these assumptions, we find that over-educated workers have lower education credentials than matched graduates. Among the over-educated graduates we distinguish between the apparently over-educated workers, who have similar unobserved skills as matched graduates, and the genuinely over-educated workers, who have a much lower skill endowment. Over-education is associated with a pay penalty of 5%-11% for apparently over-educated workers compared with matched graduates and of 22%-26% for the genuinely over-educated. Over-education originates from the lack of skills of graduates. This should be taken into consideration in the current debate on the future of higher education in the UK. Copyright The London School of Economics and Political Science 2003.", "which has research problem ?", "Over-education", 17.0, 31.0], ["Since the seminal work of Mikolov et al., word embeddings have become the preferred word representations for many natural language processing tasks. Document similarity measures extracted from word embeddings, such as the soft cosine measure (SCM) and the Word Mover's Distance (WMD), were reported to achieve state-of-the-art performance on semantic text similarity and text classification. Despite the strong performance of the WMD on text classification and semantic text similarity, its super-cubic average time complexity is impractical. The SCM has quadratic worst-case time complexity, but its performance on text classification has never been compared with the WMD. Recently, two word embedding regularization techniques were shown to reduce storage and memory costs, and to improve training speed, document processing speed, and task performance on word analogy, word similarity, and semantic text similarity. However, the effect of these techniques on text classification has not yet been studied. In our work, we investigate the individual and joint effect of the two word embedding regularization techniques on the document processing speed and the task performance of the SCM and the WMD on text classification. For evaluation, we use the $k$NN classifier and six standard datasets: BBCSPORT, TWITTER, OHSUMED, REUTERS-21578, AMAZON, and 20NEWS. We show 39% average $k$NN test error reduction with regularized word embeddings compared to non-regularized word embeddings. We describe a practical procedure for deriving such regularized embeddings through Cholesky factorization. We also show that the SCM with regularized word embeddings significantly outperforms the WMD on text classification and is over 10,000 times faster.", "which has research problem ?", "Text Classification", 371.0, 390.0], ["During the last few years, the newly coined term middle-income trap has been widely used by policymakers to refer to the middle-income economies that seem to be stuck in the middle-income range. 
However, there is no accepted definition of the term in the literature. In this paper, we study historical transitions across income groups to see whether there is any evidence that supports the claim that economies do not advance. Overall, the data rejects this proposition. Instead, we argue that what distinguishes economies in their transition from middle to high income is fast versus slow transitions. We find that, historically, it has taken a \u201ctypical\u201d economy 55 years to graduate from lower-middle income ($2,000 in 1990 purchasing power parity [PPP] $) to upper-middle income ($7,250 in 1990 PPP $). Likewise, we find that, historically, it has taken 15 years for an economy to graduate from upper-middle income to high income (above $11,750 in 1990 PPP $). Our analysis implies that as of 2013, there were 10 (out of 39) lower-middle-income economies and that 4 (out of 15) upper-middle-income economies that were experiencing slow transitions (i.e., above 55 and 15 years, respectively). The historical evidence presented in this paper indicates that economies move up across income groups. Analyzing a large sample of economies over many decades, indicates that experiences are wide, including many economies that today are high income that spent many decades traversing the middle-income segment.", "which has research problem ?", "Middle-Income Trap", 49.0, 67.0], ["The growing maturity of Natural Language Processing (NLP) techniques and resources is dramatically changing the landscape of many application domains which are dependent on the analysis of unstructured data at scale. The finance domain, with its reliance on the interpretation of multiple unstructured and structured data sources and its demand for fast and comprehensive decision making is already emerging as a primary ground for the experimentation of NLP, Web Mining and Information Retrieval (IR) techniques for the automatic analysis of financial news and opinions online. This challenge focuses on advancing the state-of-the-art of aspect-based sentiment analysis and opinion-based Question Answering for the financial domain.", "which has research problem ?", "Question Answering ", 689.0, 708.0], ["We introduce BilBOWA (Bilingual Bag-of-Words without Alignments), a simple and computationally-efficient model for learning bilingual distributed representations of words which can scale to large monolingual datasets and does not require word-aligned parallel training data. Instead it trains directly on monolingual data and extracts a bilingual signal from a smaller set of raw-text sentence-aligned data. This is achieved using a novel sampled bag-of-words cross-lingual objective, which is used to regularize two noise-contrastive language models for efficient cross-lingual feature learning. We show that bilingual embeddings learned using the proposed model outperform state-of-the-art methods on a cross-lingual document classification task as well as a lexical translation task on WMT11 data.", "which has research problem ?", "Document Classification", 719.0, 742.0], ["Automatic Document Summarization is a highly interdisciplinary research area related with computer science as well as cognitive psychology. This Summarization is to compress an original document into a summarized version by extracting almost all of the essential concepts with text mining techniques. This research focuses on developing a statistical automatic text summarization approach, Kmixture probabilistic model, to enhancing the quality of summaries. 
KSRS employs the K-mixture probabilistic model to establish term weights in a statistical sense, and further identifies the term relationships to derive the semantic relationship significance (SRS) of nouns. Sentences are ranked and extracted based on their semantic relationship significance values. The objective of this research is thus to propose a statistical approach to text summarization. We propose a K-mixture semantic relationship significance (KSRS) approach to enhancing the quality of document summary results. The K-mixture probabilistic model is used to determine the term weights. Term relationships are then investigated to develop the semantic relationship of nouns that manifests sentence semantics. Sentences with significant semantic relationship, nouns are extracted to form the summary accordingly.", "which has research problem ?", "Automatic text summarization", 351.0, 379.0], ["Abstract. The recently developed Norwegian Earth System Model (NorESM) is employed for simulations contributing to the CMIP5 (Coupled Model Intercomparison Project phase 5) experiments and the fifth assessment report of the Intergovernmental Panel on Climate Change (IPCC-AR5). In this manuscript, we focus on evaluating the ocean and land carbon cycle components of the NorESM, based on the preindustrial control and historical simulations. Many of the observed large scale ocean biogeochemical features are reproduced satisfactorily by the NorESM. When compared to the climatological estimates from the World Ocean Atlas (WOA), the model simulated temperature, salinity, oxygen, and phosphate distributions agree reasonably well in both the surface layer and deep water structure. However, the model simulates a relatively strong overturning circulation strength that leads to noticeable model-data bias, especially within the North Atlantic Deep Water (NADW). This strong overturning circulation slightly distorts the structure of the biogeochemical tracers at depth. Advancements in simulating the oceanic mixed layer depth with respect to the previous generation model particularly improve the surface tracer distribution as well as the upper ocean biogeochemical processes, particularly in the Southern Ocean. Consequently, near-surface ocean processes such as biological production and air\u2013sea gas exchange, are in good agreement with climatological observations. The NorESM adopts the same terrestrial model as the Community Earth System Model (CESM1). It reproduces the general pattern of land-vegetation gross primary productivity (GPP) when compared to the observationally based values derived from the FLUXNET network of eddy covariance towers. While the model simulates well the vegetation carbon pool, the soil carbon pool is smaller by a factor of three relative to the observational based estimates. The simulated annual mean terrestrial GPP and total respiration are slightly larger than observed, but the difference between the global GPP and respiration is comparable. Model-data bias in GPP is mainly simulated in the tropics (overestimation) and in high latitudes (underestimation). Within the NorESM framework, both the ocean and terrestrial carbon cycle models simulate a steady increase in carbon uptake from the preindustrial period to the present-day. 
The land carbon uptake is noticeably smaller than the observations, which is attributed to the strong nitrogen limitation formulated by the land model.", "which has research problem ?", "CMIP5", 119.0, 124.0], ["The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents\u2019 circumstances, such as a spouse\u2019s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.", "which has research problem ?", "child", NaN, NaN], ["We introduce a multi-task setup of identifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks and develop a unified framework called SciIE with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.", "which has research problem ?", "Information Extraction", 481.0, 503.0], ["Enterprise resource planning (ERP) system solutions are currently in high demand by both manufacturing and service organisations because they provide a tightly integrated solution to an organisation's information system needs. During the last decade, ERP systems have received a significant amount of attention from researchers and practitioners from a variety of functional disciplines. In this paper, a comprehensive review of the research literature (1990\u20102003) concerning ERP systems is presented. The literature is further classified and the major outcomes of each study are addressed and analysed. Following a comprehensive review of the literature, proposals for future research are formulated to identify topics where fruitful opportunities exist.", "which has research problem ?", "Enterprise resource planning", 0.0, 28.0], ["Learned social ontologies can be viewed as products of a social fermentation process, i.e. a process between users who belong in communities of common interests (CoI), in open, collaborative, and communicative environments. In such a setting, social fermentation ensures the automatic encapsulation of agreement and trust of shared knowledge that participating stakeholders provide during an ontology learning task. This chapter discusses the requirements for the automated learning of social ontologies and presents a working method and results of preliminary work. 
Furthermore, due to its importance for the exploitation of the learned ontologies, it introduces a model for representing the interlinking of agreement, trust and the learned domain conceptualizations that are extracted from social content. The motivation behind this work is an effort towards supporting the design of methods for learning ontologies from social content i.e. methods that aim to learn not only domain conceptualizations but also the degree that agents (software and human) may trust these conceptualizations or not.", "which has research problem ?", "learning ontologies from social content", 898.0, 937.0], ["Lexical entailment (LE) is a fundamental asymmetric lexico-semantic relation, supporting the hierarchies in lexical resources (e.g., WordNet, ConceptNet) and applications like natural language inference and taxonomy induction. Multilingual and cross-lingual NLP applications warrant models for LE detection that go beyond language boundaries. As part of SemEval 2020, we carried out a shared task (Task 2) on multilingual and cross-lingual LE. The shared task spans three dimensions: (1) monolingual vs. cross-lingual LE, (2) binary vs. graded LE, and (3) a set of 6 diverse languages (and 15 corresponding language pairs). We offered two different evaluation tracks: (a) Dist: for unsupervised, fully distributional models that capture LE solely on the basis of unannotated corpora, and (b) Any: for externally informed models, allowed to leverage any resources, including lexico-semantic networks (e.g., WordNet or BabelNet). In the Any track, we recieved runs that push state-of-the-art across all languages and language pairs, for both binary LE detection and graded LE prediction.", "which has research problem ?", "Lexical entailment", 0.0, 18.0], ["Essential oils and their components are becoming increasingly popular as naturally occurring antioxidant agents. In this work, the composition of essential oil in Artemisia herba-alba from southwest Tunisia, obtained by hydrodistillation was determined by GC/MS. Eighteen compounds were identified with the main constituents namely, \u03b1-thujone (24.88%), germacrene D (14.48%), camphor (10.81%), 1,8-cineole (8.91%) and \u03b2-thujone (8.32%). The oil was screened for its antioxidant activity with 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging, \u03b2-carotene bleaching and reducing power assays. The essential oil of A. herba-alba exhibited a good antioxidant activity with all assays with dose dependent manner and can be attributed to its presence in the oil. Key words: Artemisia herba alba, essential oil, chemical composition, antioxidant activity.", "which has research problem ?", "Oil", 156.0, 159.0], ["This paper introduces the ACL Reference Dataset for Terminology Extraction and Classification, version 2.0 (ACL RD-TEC 2.0). The ACL RD-TEC 2.0 has been developed with the aim of providing a benchmark for the evaluation of term and entity recognition tasks based on specialised text from the computational linguistics domain. This release of the corpus consists of 300 abstracts from articles in the ACL Anthology Reference Corpus, published between 1978\u20132006. In these abstracts, terms (i.e., single or multi-word lexical units with a specialised meaning) are manually annotated. In addition to their boundaries in running text, annotated terms are classified into one of the seven categories method, tool, language resource (LR), LR product, model, measures and measurements, and other. 
To assess the quality of the annotations and to determine the difficulty of this annotation task, more than 171 of the abstracts are annotated twice, independently, by each of the two annotators. In total, 6,818 terms are identified and annotated in more than 1300 sentences, resulting in a specialised vocabulary made of 3,318 lexical forms, mapped to 3,471 concepts. We explain the development of the annotation guidelines and discuss some of the challenges we encountered in this annotation task.", "which has research problem ?", "Language resource", 708.0, 725.0], ["Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SciBERT, a pretrained language model based on BERT (Devlin et. al., 2018) to address the lack of high-quality, large-scale labeled scientific data. SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks. The code and pretrained models are available at https://github.com/allenai/scibert/.", "which has research problem ?", "Sentence Classification", 485.0, 508.0], ["This paper presents the NLPTEA 2020 shared task for Chinese Grammatical Error Diagnosis (CGED) which seeks to identify grammatical error types, their range of occurrence and recommended corrections within sentences written by learners of Chinese as a foreign language. We describe the task definition, data preparation, performance metrics, and evaluation results. Of the 30 teams registered for this shared task, 17 teams developed the system and submitted a total of 43 runs. System performances achieved a significant progress, reaching F1 of 91% in detection level, 40% in position level and 28% in correction level. All data sets with gold standards and scoring scripts are made publicly available to researchers.", "which has research problem ?", "Chinese Grammatical Error Diagnosis", 52.0, 87.0], ["Abstract One determining characteristic of contemporary sociopolitical systems is their power over increasingly large and diverse populations. This raises questions about power relations between heterogeneous individuals and increasingly dominant and homogenizing system objectives. This article crosses epistemic boundaries by integrating computer engineering and a historicalphilosophical approach making the general organization of individuals within large-scale systems and corresponding individual homogenization intelligible. From a versatile archeological-genealogical perspective, an analysis of computer and social architectures is conducted that reinterprets Foucault\u2019s disciplines and political anatomy to establish the notion of politics for a purely technical system. This permits an understanding of system organization as modern technology with application to technical and social systems alike. Connecting to Heidegger\u2019s notions of the enframing ( Gestell ) and a more primal truth ( anf\u00e4nglicheren Wahrheit) , the recognition of politics in differently developing systems then challenges the immutability of contemporary organization. 
Following this critique of modernity and within the conceptualization of system organization, Derrida\u2019s democracy to come (\u00e0 venir) is then reformulated more abstractly as organizations to come . Through the integration of the discussed concepts, the framework of Large-Scale Systems Composed of Homogeneous Individuals (LSSCHI) is proposed, problematizing the relationships between individuals, structure, activity, and power within large-scale systems. The LSSCHI framework highlights the conflict of homogenizing system-level objectives and individual heterogeneity, and outlines power relations and mechanisms of control shared across different social and technical systems.", "which has research problem ?", "System organization", 814.0, 833.0], ["We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.", "which has research problem ?", "Question Answering", 558.0, 576.0], ["In this paper, we take a look at an enhanced approach for eye detection under difficult acquisition circumstances such as low-light, distance, pose variation, and blur. We present a novel correlation filter based eye detection pipeline that is specifically designed to reduce face alignment errors, thereby increasing eye localization accuracy and ultimately face recognition accuracy. The accuracy of our eye detector is validated using data derived from the Labeled Faces in the Wild (LFW) and the Face Detection on Hard Datasets Competition 2011 (FDHD) sets. The results on the LFW dataset also show that the proposed algorithm exhibits enhanced performance, compared to another correlation filter based detector, and that a considerable increase in face recognition accuracy may be achieved by focusing more effort on the eye localization stage of the face recognition process. Our results on the FDHD dataset show that our eye detector exhibits superior performance, compared to 11 different state-of-the-art algorithms, on the entire set of difficult data without any per set modifications to our detection or preprocessing algorithms. The immediate application of eye detection is automatic face recognition, though many good applications exist in other areas, including medical research, training simulators, communication systems for the disabled, and automotive engineering.", "which has research problem ?", "Eye localization", 318.0, 334.0], ["Smart cities\u2019 authorities use graphic dashboards to visualize and analyze important information on cities, citizens, institutions, and their interactions. This information supports various decision-making processes that affect citizens\u2019 quality of life. Cities across the world have similar, if not the same, functional and nonfunctional requirements to develop their dashboards. 
Software developers will face the same challenges and they are likely to provide similar solutions for each developed city dashboard. Moreover, the development of these dashboards implies a significant investment in terms of human and financial resources from cities. The automation of the development of smart cities dashboards is feasible as these visualization systems will have common requirements between cities. This article introduces cities-board, a framework to automate the development of smart cities dashboards based on model-driven engineering. Cities-board proposes a graphic domain-specific language (DSL) that allows the creation of dashboard models with concepts that are closer to city authorities. Cities-board transforms these dashboards models to functional code artifacts by using model-to-model (M2M) and model-to-text (M2T) transformations. We evaluate cities-board by measuring the generation time, and the quality of the generated code under different models configurations. Results show the strengths and weaknesses of cities-board compared against a generic code generation tool.", "which has research problem ?", "Model-driven engineering", 912.0, 936.0], ["We present the Compressive Transformer, an attentive sequence model which compresses past memories for long-range sequence learning. We find the Compressive Transformer obtains state-of-the-art language modelling results in the WikiText-103 and Enwik8 benchmarks, achieving 17.1 ppl and 0.97 bpc respectively. We also find it can model high-frequency speech effectively and can be used as a memory mechanism for RL, demonstrated on an object matching task. To promote the domain of long-range sequence learning, we propose a new open-vocabulary language modelling benchmark derived from books, PG-19.", "which has research problem ?", "Language Modelling", 194.0, 212.0], ["Abstract The essential oil composition from the aerial parts of Artemisia campestris var. glutinosa Gay ex Bess and Artemisia herba-alba Asso (Asteraceae) of Tunisian origin has been studied by GC and GC/MS. The main constituents of the oil from A. campestris collected in Benguerdane (South of Tunisia) were found to be \u03b2-pinene (41.0%), p-cymene (9.9%), \u03b1-terpinene (7.9%), limonene (6.5%), myrcene (4.1%), \u03b2-phellandrene (3.4%) and \u03b1-pinene (3.2%). Whereas the oil from A. herba-alba collected in Tataouine (South of Tunisia) showed, pinocarvone (38.3%), \u03b1-copaene (12.18%), limonene (11.0%), isoamyl 2-methylbutyrate (19.5%) as major compounds. The mutagenic and antimutagenic activities of the two oils were investigated by the Salmonella typhimurium/microsome assay, with and without addition of an extrinsic metabolic activation system. The oils showed no mutagenicity when tested with Salmonella typhimurium strains TA98 and TA97. On the other hand, we showed that each oil had antimutagenic activity against the carcinogen Benzo (a) pyrene (B[a] P) when tested with TA97 and TA98 assay systems.", "which has research problem ?", "Oil", 23.0, 26.0], ["This paper proposes a novel deep reinforcement learning (RL) architecture, called Value Prediction Network (VPN), which integrates model-free and model-based RL methods into a single neural network. In contrast to typical model-based RL methods, VPN learns a dynamics model whose abstract states are trained to make option-conditional predictions of future values (discounted sum of rewards) rather than of future observations. 
Our experimental results show that VPN has several advantages over both model-free and model-based baselines in a stochastic environment where careful planning is required but building an accurate observation-prediction model is difficult. Furthermore, VPN outperforms Deep Q-Network (DQN) on several Atari games even with short-lookahead planning, demonstrating its potential as a new way of learning a good state representation.", "which has research problem ?", "Atari Games", 729.0, 740.0], ["Multi-task learning (MTL) is an effective method for learning related tasks, but designing MTL models necessitates deciding which and how many parameters should be task-specific, as opposed to shared between tasks. We investigate this issue for the problem of jointly learning named entity recognition (NER) and relation extraction (RE) and propose a novel neural architecture that allows for deeper task-specificity than does prior work. In particular, we introduce additional task-specific bidirectional RNN layers for both the NER and RE tasks and tune the number of shared and task-specific layers separately for different datasets. We achieve state-of-the-art (SOTA) results for both tasks on the ADE dataset; on the CoNLL04 dataset, we achieve SOTA results on the NER task and competitive results on the RE task while using an order of magnitude fewer trainable parameters than the current SOTA architecture. An ablation study confirms the importance of the additional task-specific layers for achieving these results. Our work suggests that previous solutions to joint NER and RE undervalue task-specificity and demonstrates the importance of correctly balancing the number of shared and task-specific parameters for MTL approaches in general.", "which has research problem ?", "Relation Extraction", 312.0, 331.0], ["Facial expression interpretation, recognition and analysis is a key issue in visual communication and man to machine interaction. We address the issues of facial expression recognition and synthesis and compare the proposed bilinear factorization based representations with previously investigated methods such as linear discriminant analysis and linear regression. We conclude that bilinear factorization outperforms these techniques in terms of correct recognition rates and synthesis photorealism especially when the number of training samples is restrained.", "which has research problem ?", "Facial Expression Recognition", 155.0, 184.0], ["This paper presents the results of the 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas. The shared task featured two independent tracks, and participants submitted machine translation systems for up to 10 indigenous languages. Overall, 8 teams participated with a total of 214 submissions. We provided training sets consisting of data collected from various sources, as well as manually translated sentences for the development and test sets. An official baseline trained on this data was also provided. Team submissions featured a variety of architectures, including both statistical and neural models, and for the majority of languages, many teams were able to considerably improve over the baseline. The best performing systems achieved 12.97 ChrF higher than baseline, when averaged across languages.", "which has research problem ?", " Open Machine Translation", 58.0, 83.0], ["This paper introduces the SemEval-2021 shared task 4: Reading Comprehension of Abstract Meaning (ReCAM). 
This shared task is designed to help evaluate the ability of machines in representing and understanding abstract concepts. Given a passage and the corresponding question, a participating system is expected to choose the correct answer from five candidates of abstract concepts in cloze-style machine reading comprehension tasks. Based on two typical definitions of abstractness, i.e., the imperceptibility and nonspecificity, our task provides three subtasks to evaluate models\u2019 ability in comprehending the two types of abstract meaning and the models\u2019 generalizability. Specifically, Subtask 1 aims to evaluate how well a participating system models concepts that cannot be directly perceived in the physical world. Subtask 2 focuses on models\u2019 ability in comprehending nonspecific concepts located high in a hypernym hierarchy given the context of a passage. Subtask 3 aims to provide some insights into models\u2019 generalizability over the two types of abstractness. During the SemEval-2021 official evaluation period, we received 23 submissions to Subtask 1 and 28 to Subtask 2. The participating teams additionally made 29 submissions to Subtask 3. The leaderboard and competition website can be found at https://competitions.codalab.org/competitions/26153. The data and baseline code are available at https://github.com/boyuanzheng010/SemEval2021-Reading-Comprehension-of-Abstract-Meaning.", "which has research problem ?", "Reading Comprehension", 54.0, 75.0], ["The explosion of informal entrepreneurial activity during Mongolia's transition to a market economy represents one of the most visible signs of change in this expansive but sparsely populated Asian country. To deepen our understanding of Mongolia's informal sector during the transition, the author merges anecdotal experience from qualitative interviews with hard data from a survey of 770 informals in Ulaanbaatar, from a national household survey, and from official employment statistics. Using varied sources, the author generates rudimentary estimates of the magnitude of, and trends in, informal activity in Mongolia, estimates that are surprisingly consistent with each other. He evaluates four types of reasons for the burst of informal activity in Mongolia since 1990: 1) The crisis of the early and mid-1990s, during which large pools of labor were released from formal employment. 2) Rural to urban migration. 3) The \"market's\" reallocation of resources toward areas neglected under the old system: services such as distribution and transportation. 4) The institutional environments faced by the formal and informal sectors: hindering growth of the formal sector, facilitating entry for the informal sector. Formal labor markets haven't absorbed the labor made available by the crisis and by migration and haven't fully responded to the demand for new services. The relative ease of entering the informal market explains that market's great expansion. The relative difficulty of entering formal markets is not random but is driven by policy. Improving policies in the formal sector could afford the same ease of entry there as is currently being experienced in the informal sector.", "which has research problem ?", "Informal sector", 249.0, 264.0], ["Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. 
In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task\u2014a result that approaches the human inter-annotator agreement (0.8875)\u2014and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system\u2019s ability to return real-time results: the average response time for each team\u2019s DNER and CID web service systems were 5.6 and 9.3 s, respectively. Most teams used hybrid systems for their submissions based on machining learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/", "which has research problem ?", "chemical-disease relation (CDR) extraction", NaN, NaN], ["With the advent of Big Data concept, a lot of attention has been paid to structuring and giving semantic to this data. Knowledge bases like DBPedia play an important role to achieve this goal. Question answering systems are common approach to address expressivity and usability of information extraction from knowledge bases. Recent researches focused only on monolingual QA systems while cross-lingual setting has still so many barriers. In this paper we introduce a new cross-lingual approach using a unified semantic space among languages. After keyword extraction, entity linking and answer type detection, we use cross lingual semantic similarity to extract the answer from knowledge base via relation selection and type matching. We have evaluated our approach on Persian and Spanish which are typologically different languages. Our experiments are on DBPedia. The results are promising for both languages.", "which has research problem ?", "Question answering systems", 193.0, 219.0], ["A considerable effort has been made to extract biological and chemical entities, as well as their relationships, from the scientific literature, either manually through traditional literature curation or by using information extraction and text mining technologies. Medicinal chemistry patents contain a wealth of information, for instance to uncover potential biomarkers that might play a role in cancer treatment and prognosis. 
However, current biomedical annotation databases do not cover such information, partly due to limitations of publicly available biomedical patent mining software. As part of the BioCreative V CHEMDNER patents track, we present the results of the first named entity recognition (NER) assignment carried out to detect mentions of chemical compounds and genes/proteins in running patent text. More specifically, this task aimed to evaluate the performance of automatic name recognition strategies capable of isolating chemical names and gene and gene product mentions from surrounding text within patent titles and abstracts. A total of 22 unique teams submitted results for at least one of the three CHEMDNER subtasks. The first subtask, called the CEMP (chemical entity mention in patents) task, focused on the detection of chemical named entity mentions in patents, requesting teams to return the start and end indices corresponding to all the chemical entities found in a given record. A total of 21 teams submitted 93 runs, for this subtask. The top performing team reached an f-measure of 0.89 with a precision of 0.87 and a recall of 0.91. The CPD (chemical passage detection) task required the classification of patent titles and abstracts whether they do or do not contain chemical compound mentions. Nine teams returned predictions for this task (40 runs). The top run in terms of Matthew\u2019s correlation coefficient (MCC) had a score of 0.88.", "which has research problem ?", "Chemical passage detection", 1583.0, 1609.0], ["Lexical Semantic Change detection, i.e., the task of identifying words that change meaning over time, is a very active research area, with applications in NLP, lexicography, and linguistics. Evaluation is currently the most pressing problem in Lexical Semantic Change detection, as no gold standards are available to the community, which hinders progress. We present the results of the first shared task that addresses this gap by providing researchers with an evaluation framework and manually annotated, high-quality datasets for English, German, Latin, and Swedish. 33 teams submitted 186 systems, which were evaluated on two subtasks.", "which has research problem ?", "Lexical Semantic Change detection", 0.0, 33.0], ["Adversarial training (AT) is a regularization method that can be used to improve the robustness of neural network methods by adding small perturbations in the training data. We show how to use AT for the tasks of entity recognition and relation extraction. In particular, we demonstrate that applying AT to a general purpose baseline model for jointly extracting entities and relations, allows improving the state-of-the-art effectiveness on several datasets in different contexts (i.e., news, biomedical, and real estate data) and for different languages (English and Dutch).", "which has research problem ?", "Relation Extraction", 236.0, 255.0], ["Abstract Despite rapid progress, most of the educational technologies today lack a strong instructional design knowledge basis leading to questionable quality of instruction. In addition, a major challenge is to customize these educational technologies for a wide range of customizable instructional designs. Ontologies are one of the pertinent mechanisms to represent instructional design in the literature. However, existing approaches do not support modeling of flexible instructional designs. 
To address this problem, in this paper, we propose an ontology based framework for systematic modeling of different aspects of instructional design knowledge based on domain patterns. As part of the framework, we present ontologies for modeling goals, instructional processes and instructional material. We demonstrate the ontology framework by presenting instances of the ontology for the large scale case study of adult literacy in India (287 million learners spread across 22 Indian Languages), which requires creation of hundreds of similar but varied eLearning Systems based on flexible instructional designs. The implemented framework is available at http://rice.iiit.ac.in and is transferred to National Literacy Mission Authority of Government of India. The proposed framework could be potentially used for modeling instructional design knowledge for school education, vocational skills and beyond.", "which has research problem ?", "instructional designs", 286.0, 307.0], ["Abstract. We describe here the development and evaluation of an Earth system model suitable for centennial-scale climate prediction. The principal new components added to the physical climate model are the terrestrial and ocean ecosystems and gas-phase tropospheric chemistry, along with their coupled interactions. The individual Earth system components are described briefly and the relevant interactions between the components are explained. Because the multiple interactions could lead to unstable feedbacks, we go through a careful process of model spin up to ensure that all components are stable and the interactions balanced. This spun-up configuration is evaluated against observed data for the Earth system components and is generally found to perform very satisfactorily. The reason for the evaluation phase is that the model is to be used for the core climate simulations carried out by the Met Office Hadley Centre for the Coupled Model Intercomparison Project (CMIP5), so it is essential that addition of the extra complexity does not detract substantially from its climate performance. Localised changes in some specific meteorological variables can be identified, but the impacts on the overall simulation of present day climate are slight. This model is proving valuable both for climate predictions, and for investigating the strengths of biogeochemical feedbacks.", "which has research problem ?", "CMIP5", 975.0, 980.0], ["Abstract In order to study the extraction process of essential oil from Artemisia herba-alba, kinetic studies as well as an optimization of the operating conditions were achieved. The optimization was carried out by a parametric study and experiments planning method. Three operational parameters were chosen: Artemisia mass to be treated, steam flow rate and extraction time. The optimal extraction conditions obtained by the parametric study correspond to: a mass of 30 g, a steam flow rate of 1.65 mL.min\u22121 and the extraction time of 60 min. The results reveal that the combined effects of two parameters, the steam water flow rate and the extraction time, are the most significant. The yield is also affected by the interaction of the three parameters. The essential oil obtained with optimal conditions was analyzed by GC-MS and a kinetic study was realised.", "which has research problem ?", "Oil", 63.0, 66.0], ["The fifth phase of the Coupled Model Intercomparison Project (CMIP5) will produce a state-of-the-art multimodel dataset designed to advance our knowledge of climate variability and climate change. 
Researchers worldwide are analyzing the model output and will produce results likely to underlie the forthcoming Fifth Assessment Report by the Intergovernmental Panel on Climate Change. Unprecedented in scale and attracting interest from all major climate modeling groups, CMIP5 includes \u201clong term\u201d simulations of twentieth-century climate and projections for the twenty-first century and beyond. Conventional atmosphere\u2013ocean global climate models and Earth system models of intermediate complexity are for the first time being joined by more recently developed Earth system models under an experiment design that allows both types of models to be compared to observations on an equal footing. Besides the longterm experiments, CMIP5 calls for an entirely new suite of \u201cnear term\u201d simulations focusing on recent decades...", "which has research problem ?", "CMIP5", 62.0, 67.0], ["Classifying semantic relations between entity pairs in sentences is an important task in natural language processing (NLP). Most previous models applied to relation classification rely on high-level lexical and syntactic features obtained by NLP tools such as WordNet, the dependency parser, part-of-speech (POS) tagger, and named entity recognizers (NER). In addition, state-of-the-art neural models based on attention mechanisms do not fully utilize information related to the entity, which may be the most crucial feature for relation classification. To address these issues, we propose a novel end-to-end recurrent neural model that incorporates an entity-aware attention mechanism with a latent entity typing (LET) method. Our model not only effectively utilizes entities and their latent types as features, but also builds word representations by applying self-attention based on symmetrical similarity of a sentence itself. Moreover, the model is interpretable by visualizing applied attention mechanisms. Experimental results obtained with the SemEval-2010 Task 8 dataset, which is one of the most popular relation classification tasks, demonstrate that our model outperforms existing state-of-the-art models without any high-level features.", "which has research problem ?", "Classifying semantic relations between entity pairs in sentences", 0.0, 64.0], ["With increasing attentions focused on the energy consumption (EC) in manufacturing, it is imperative to realize the equipment energy consumption management (EECM) to reduce the EC and improve the energy efficiency. Recently, with the developments of digital twin (DT) and digital twin shop-floor (DTS), the data and models are enriched greatly and a physical-virtual convergence environment is provided. Accordingly, the new chances emerge for improving the EECM in EC monitoring, analysis and optimization. In this situation, the paper proposes the framework of EECM in DTS and discusses the potential applications, aiming at studying the improvements and providing a guideline for the future works.", "which has research problem ?", "digital twin", 250.0, 262.0], ["Abstract \u2014 In this paper I discussed Facial Expression Recognition System in two different ways and with two different databases. Principal Component Analysis is used here for feature extraction. I used JAFFE (Japanese Female Facial Expression). I implemented system with JAFFE database, I got accuracy of the algorithm is about 70-71% which gives quite poor Efficiency of the system. Then I implemented facial expression recognition system with Gabor filter and PCA. 
Here the Gabor filter was selected because of its good feature extraction property. The output of the Gabor filter was used as an input for the PCA. PCA has a good feature of dimension reduction so it was chosen for that purpose.", "which has research problem ?", "Facial Expression Recognition", 37.0, 66.0], ["Relation classification is an important NLP task to extract relations between entities. The state-of-the-art methods for relation classification are primarily based on Convolutional or Recurrent Neural Networks. Recently, the pre-trained BERT model achieves very successful results in many NLP classification / sequence labeling tasks. Relation classification differs from those tasks in that it relies on information of both the sentence and the two target entities. In this paper, we propose a model that both leverages the pre-trained BERT language model and incorporates information from the target entities to tackle the relation classification task. We locate the target entities and transfer the information through the pre-trained architecture and incorporate the corresponding encoding of the two entities. We achieve significant improvement over the state-of-the-art method on the SemEval-2010 task 8 relational dataset.", "which has research problem ?", "Relation classification", 0.0, 23.0], ["Transforming natural language questions into formal queries is an integral task in Question Answering (QA) systems. QA systems built on knowledge graphs like DBpedia, require a step after natural language processing for linking words, specifically including named entities and relations, to their corresponding entities in a knowledge graph. To achieve this task, several approaches rely on background knowledge bases containing semantically-typed relations, e.g., PATTY, for an extra disambiguation step. Two major factors may affect the performance of relation linking approaches whenever background knowledge bases are accessed: a) limited availability of such semantic knowledge sources, and b) lack of a systematic approach on how to maximize the benefits of the collected knowledge. We tackle this problem and devise SIBKB, a semantic-based index able to capture knowledge encoded on background knowledge bases like PATTY. SIBKB represents a background knowledge base as a bi-partite and a dynamic index over the relation patterns included in the knowledge base. Moreover, we develop a relation linking component able to exploit SIBKB features. The benefits of SIBKB are empirically studied on existing QA benchmarks and observed results suggest that SIBKB is able to enhance the accuracy of relation linking by up to three times.", "which has research problem ?", "Relation linking", 554.0, 570.0], ["Machine translation systems achieve near human-level performance on some languages, yet their effectiveness strongly relies on the availability of large amounts of parallel sentences, which hinders their applicability to the majority of language pairs. This work investigates how to learn to translate when having access to only large monolingual corpora in each language. We propose two model variants, a neural and a phrase-based model. Both versions leverage a careful initialization of the parameters, the denoising effect of language models and automatic generation of parallel data by iterative back-translation. These models are significantly better than methods from the literature, while being simpler and having fewer hyper-parameters. 
On the widely used WMT\u201914 English-French and WMT\u201916 German-English benchmarks, our models respectively obtain 28.1 and 25.2 BLEU points without using a single parallel sentence, outperforming the state of the art by more than 11 BLEU points. On low-resource languages like English-Urdu and English-Romanian, our methods achieve even better results than semi-supervised and supervised approaches leveraging the paucity of available bitexts. Our code for NMT and PBSMT is publicly available.", "which has research problem ?", "Machine Translation", 0.0, 19.0], ["The OER movement poses challenges inherent to discovering and reuse digital educational materials from highly heterogeneous and distributed digital repositories. Search engines on today's Web of documents are based on keyword queries. Search engines don't provide a sufficiently comprehensive solution to answer a query that permits personalization of open educational materials. To find OER on the Web today, users must first be well informed of which OER repositories potentially contain the data they want and what data model describes these datasets, before using this information to create structured queries. Learning analytics requires not only to retrieve the useful information and knowledge about educational resources, learning processes and relations among learning agents, but also to transform the data gathered in actionable e interoperable information. Linked Data is considered as one of the most effective alternatives for creating global shared information spaces, it has become an interesting approach for discovering and enriching open educational resources data, as well as achieving semantic interoperability and re-use between multiple OER repositories. In this work, an approach based on Semantic Web technologies, the Linked Data guidelines, and Social Network Analysis methods are proposed as a fundamental way to describing, analyzing and visualizing knowledge sharing on OER initiatives.", "which has research problem ?", "Learning Analytics", 615.0, 633.0], ["This study investigates the relationship between CO2 emissions, economic growth, energy use and electricity production by hydroelectric sources in Brazil. To verify the environmental Kuznets curve (EKC) hypothesis we use time-series data for the period 1971-2011. The autoregressive distributed lag methodology was used to test for cointegration in the long run. Additionally, the vector error correction model Granger causality test was applied to verify the predictive value of independent variables. Empirical results find that there is a quadratic long run relationship between CO2emissions and economic growth, confirming the existence of an EKC for Brazil. Furthermore, energy use shows increasing effects on emissions, while electricity production by hydropower sources has an inverse relationship with environmental degradation. The short run model does not provide evidence for the EKC theory. The differences between the results in the long and short run models can be considered for establishing environmental policies. This suggests that special attention to both variables-energy use and the electricity production by hydroelectric sources- could be an effective way to mitigate CO2 emissions in Brazil", "which has research problem ?", "CO2 emissions", 49.0, 62.0], ["Research on specialized biological systems is often hampered by a lack of consistent terminology, especially across species. 
In bacterial Type IV secretion systems genes within one set of orthologs may have over a dozen different names. Classifying research publications based on biological processes, cellular components, molecular functions, and microorganism species should improve the precision and recall of literature searches allowing researchers to keep up with the exponentially growing literature, through resources such as the Pathosystems Resource Integration Center (PATRIC, patricbrc.org). We developed named entity recognition (NER) tools for four entities related to Type IV secretion systems: 1) bacteria names, 2) biological processes, 3) molecular functions, and 4) cellular components. These four entities are important to pathogenesis and virulence research but have received less attention than other entities, e.g., genes and proteins. Based on an annotated corpus, large domain terminological resources, and machine learning techniques, we developed recognizers for these entities. High accuracy rates (>80%) are achieved for bacteria, biological processes, and molecular function. Contrastive experiments highlighted the effectiveness of alternate recognition strategies; results of term extraction on contrasting document sets demonstrated the utility of these classes for identifying T4SS-related documents.", "which has research problem ?", "Named Entity Recognition", 617.0, 641.0], ["Argues that the organizational involvement of large scale information technology packages, such as those known as enterprise resource planning (ERP), has important implications that go far beyond the acknowledged effects of keeping the organizational operations accountable and integrated across functions and production sites. Claims that ERP packages are predicated on an understanding of human agency as a procedural affair and of organizations as an extended series of functional or cross\u2010functional transactions. Accordingly, the massive introduction of ERP packages to organizations is bound to have serious implications that precisely recount the procedural forms by which such packages instrument organizational operations and fashion organizational roles. The conception of human agency and organizational operations in procedural terms may seem reasonable yet it recounts a very specific and, in a sense, limited understanding of humans and organizations. The distinctive status of framing human agency and organizations in procedural terms becomes evident in its juxtaposition with other forms of human action like improvisation, exploration or playing. These latter forms of human involvement stand out against the serial fragmentation underlying procedural action. They imply acting on the world on loose premises that trade off a variety of forms of knowledge and courses of action in attempts to explore and discover alternative ways of coping with reality.", "which has research problem ?", "Enterprise resource planning", 114.0, 142.0], ["Due to significant industrial demands toward software systems with increasing complexity and challenging quality requirements, software architecture design has become an important development activity and the research domain is rapidly evolving. In the last decades, software architecture optimization methods, which aim to automate the search for an optimal architecture design with respect to a (set of) quality attribute(s), have proliferated. However, the reported results are fragmented over different research communities, multiple system domains, and multiple quality attributes. 
To integrate the existing research results, we have performed a systematic literature review and analyzed the results of 188 research papers from the different research communities. Based on this survey, a taxonomy has been created which is used to classify the existing research. Furthermore, the systematic analysis of the research literature provided in this review aims to help the research community in consolidating the existing research efforts and deriving a research agenda for future developments.", "which has research problem ?", "Software architecture optimization", 267.0, 301.0], ["Query optimization in RDF Stores is a challenging problem as SPARQL queries typically contain many more joins than equivalent relational plans, and hence lead to a large join order search space. In such cases, cost-based query optimization often is not possible. One practical reason for this is that statistics typically are missing in web scale setting such as the Linked Open Datasets (LOD). The more profound reason is that due to the absence of schematic structure in RDF, join-hit ratio estimation requires complicated forms of correlated join statistics; and currently there are no methods to identify the relevant correlations beforehand. For this reason, the use of good heuristics is essential in SPARQL query optimization, even in the case that are partially used with cost-based statistics (i.e., hybrid query optimization). In this paper we describe a set of useful heuristics for SPARQL query optimizers. We present these in the context of a new Heuristic SPARQL Planner (HSP) that is capable of exploiting the syntactic and the structural variations of the triple patterns in a SPARQL query in order to choose an execution plan without the need of any cost model. For this, we define the variable graph and we show a reduction of the SPARQL query optimization problem to the maximum weight independent set problem. We implemented our planner on top of the MonetDB open source column-store and evaluated its effectiveness against the state-of-the-art RDF-3X engine as well as comparing the plan quality with a relational (SQL) equivalent of the benchmarks.", "which has research problem ?", "SPARQL query optimization", 707.0, 732.0], ["We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at this https URL", "which has research problem ?", "Question Answering ", 24.0, 43.0], ["Green supply chain management (GSCM) has been receiving the spotlight in last few years. The study aims to identify critical success factors (CSFs) to achieve high GSCM performances from three perspectives i.e., environmental, social and economic performance. CSFs to achieve high GSCM performances relevant to Indian automobile industry have been identified and categorised according to three perspectives from the literature review and experts' opinions. 
Conceptual models also have been put forward. This paper may play vital role to understand CSFs to achieve GSCM performances in Indian automobile industry and help the supply chain managers to understand how they may improve environmental, social and economic performance.", "which has research problem ?", "Supply chain management", 6.0, 29.0], ["Neural network approaches to Named-Entity Recognition reduce the need for carefully hand-crafted features. While some features do remain in state-of-the-art systems, lexical features have been mostly discarded, with the exception of gazetteers. In this work, we show that this is unfair: lexical features are actually quite useful. We propose to embed words and entity types into a low-dimensional vector space we train from annotated data produced by distant supervision thanks to Wikipedia. From this, we compute \u2014 offline \u2014 a feature vector representing each word. When used with a vanilla recurrent neural network model, this representation yields substantial improvements. We establish a new state-of-the-art F1 score of 87.95 on ONTONOTES 5.0, while matching state-of-the-art performance with a F1 score of 91.73 on the over-studied CONLL-2003 dataset.", "which has research problem ?", "Named-Entity Recognition", 29.0, 53.0], ["Petroleum exploration and production in the Nigeria\u2019s Niger Delta region and export of oil and gas resources by the petroleum sector has substantially improved the nation\u2019s economy over the past five decades. However, activities associated with petroleum exploration, development and production operations have local detrimental and significant impacts on the atmosphere, soils and sediments, surface and groundwater, marine environment and terrestrial ecosystems in the Niger Delta. Discharges of petroleum hydrocarbon and petroleum\u2013derived waste streams have caused environmental pollution, adverse human health effects, socio\u2013economic problems and degradation of host communities in the 9 oil\u2013producing states in the Niger Delta region. Many approaches have been developed for the management of environmental impacts of petroleum production\u2013related activities and several environmental laws have been institutionalized to regulate the Nigerian petroleum industry. However, the existing statutory laws and regulations for environmental protection appear to be grossly inadequate and some of the multinational oil companies operating in the Niger Delta region have failed to adopt sustainable practices to prevent environmental pollution. This review examines the implications of multinational oil companies operations and further highlights some of the past and present environmental issues associated with petroleum exploitation and production in the Nigeria\u2019s Niger Delta. 
Although effective understanding of petroleum production and associated environmental degradation is importance for developing management strategies, there is a need for more multidisciplinary approaches for sustainable risk mitigation and effective environmental protection of the oil\u2013producing host communities in the Niger Delta.", "which has research problem ?", "Activities associated with petroleum exploration, development and production operations have local detrimental and significant impacts on the atmosphere, soils and sediments, surface and groundwater, marine environment and terrestrial ecosystems in the Niger Delta.", null, null], ["Recently, healthcare services can be delivered effectively to patients anytime and anywhere using e-Health systems. e-Health systems are developed through Information and Communication Technologies (ICT) that involve sensors, mobiles, and web-based applications for the delivery of healthcare services and information. Remote healthcare is an important purpose of the e-Health system. Usually, the eHealth system includes heterogeneous sensors from diverse manufacturers producing data in different formats. Device interoperability and data normalization is a challenging task that needs research attention. Several solutions are proposed in the literature based on manual interpretation through explicit programming. However, programmatically implementing the interpretation of the data sender and data receiver in the e-Health system for the data transmission is counterproductive as modification will be required for each new device added into the system. In this paper, an e-Health system with the Semantic Sensor Network (SSN) is proposed to address the device interoperability issue. In the proposed system, we have used IETF YANG for modeling the semantic e-Health data to represent the information of e-Health sensors. This modeling scheme helps in provisioning semantic interoperability between devices and expressing the sensing data in a user-friendly manner. For this purpose, we have developed an ontology for e-Health data that supports different styles of data formats. The ontology is defined in YANG for provisioning semantic interpretation of sensing data in the system by constructing meta-models of e-Health sensors. The proposed approach assists in the auto-configuration of eHealth sensors and querying the sensor network with semantic interoperability support for the e-Health system.", "which has research problem ?", "semantic interoperability support for the e-Health system", 1749.0, 1806.0], ["Named Entity Recognition (NER) is a key component in NLP systems for question answering, information retrieval, relation extraction, etc. NER systems have been studied and developed widely for decades, but accurate systems using deep neural networks (NN) have only been introduced in the last few years. We present a comprehensive survey of deep neural network architectures for NER, and contrast them with previous approaches to NER based on feature engineering and other supervised or semi-supervised learning algorithms. 
Our results highlight the improvements achieved by neural networks, and show how incorporating some of the lessons learned from past work on feature-based NER systems can yield further improvements.", "which has research problem ?", "Named Entity Recognition", 0.0, 24.0], ["Thanks to the proliferation of academic services on the Web and the opening of educational content, today, students can access a large number of free learning resources, and interact with value-added services. In this context, Learning Analytics can be carried out on a large scale thanks to the proliferation of open practices that promote the sharing of datasets. However, the opening or sharing of data managed through platforms and educational services, without considering the protection of users' sensitive data, could cause some privacy issues. Data anonymization is a strategy that should be adopted during lifecycle of data processing to reduce security risks. In this research, we try to characterize how much and how the anonymization techniques have been used in learning analytics proposals. From an initial exploration made in the Scopus database, we found that less than 6% of the papers focused on LA have also covered the privacy issue. Finally, through a specific case, we applied data anonymization and learning analytics to demonstrate that both technique can be integrated, in a reliably and effectively way, to support decision making in educational institutions.", "which has research problem ?", "Learning Analytics", 227.0, 245.0], ["This paper examines the impact of exchange rate volatility on the trade flows of the G-7 countries in the context of a multivariate error-correction model. The error-correction models do not show any sign of parameter instability. The results indicate that the exchange rate volatility has a significant negative impact on the volume of exports in each of the G-7 countries. Assuming market participants are risk averse, these results imply that exchange rate uncertainty causes them to reduce their activities, change prices, or shift sources of demand and supply in order to minimize their exposure to the effects of exchange rate volatility. This, in turn, can change the distribution of output across many sectors in these countries. It is quite possible that the surprisingly weak relationship between trade flows and exchange rate volatility reported in several previous studies are due to insufficient attention to the stochastic properties of the relevant time series. Copyright 1993 by MIT Press.", "which has research problem ?", "Exchange rate volatility", 34.0, 58.0], ["A syllabus is an important document for teachers, students, and institutions. It represents what and how a course will be conducted. Some authors have researched regarding the components for writing a good syllabus. However, there is not a standard format universally accepted by all educators. Even inside the same university, there are syllabuses written in different type of files like PDF, DOC, HTML, for instance. These kind of files are easily readable by humans but it is not the same by machines. On the other hand, ontologies are technologies of knowledge representation that allow setting information understandable for both humans and machines. In this paper, we present a literature review regarding the use of ontologies for syllabus representation. 
The objective of this paper is to know the use of ontologies to represent a syllabus semantically and to determine their score according to the five-stars of Linked Data vocabulary use scale. Our results show that some researchers have used ontologies for many purposes, widely concerning with learning objectives. Nevertheless, the ontologies created by the authors do not fulfill with the five-stars rating and do not have all components of a well-suited syllabus.", "which has research problem ?", "Knowledge Representation", 555.0, 579.0], ["Abstract The coronavirus disease 2019 (COVID-19) pandemic was declared a public health emergency of international concern by the World Health Organization. COVID-19 has high transmissibility and could result in acute lung injury in a fraction of patients. By counterbalancing the activity of the renin-angiotensin system, angiotensin-converting enzyme 2, which is the fusion receptor of the virus, plays a protective role against the development of complications of this viral infection. Vitamin D can induce the expression of angiotensin-converting enzyme 2 and regulate the immune system through different mechanisms. Epidemiologic studies of the relationship between vitamin D and various respiratory infections were reviewed and, here, the postulated mechanisms and clinical data supporting the protective role of vitamin D against COVID-19\u2013mediated complications are discussed.", "which has research problem ?", "COVID-19", 39.0, 47.0], ["There is an expectation that high technology companies use unique and leading edge technology to gain competitive advantage by investing heavily in supply chain management. This research uses multiple case study methodology to determine factors affecting the supply chain management at high technology companies. The research compares the supply chain performance of these high technology companies against the supply chain of benchmark (or commodity-type) companies at both strategic and tactical levels. In addition, the research also looks at supply chain practices within the high technology companies. The results indicate that at the strategic level the high technology companies and benchmark companies have a similar approach to supply chain management. However at the tactical, or critical, supply chain factor level, the analysis suggests that the high technology companies do have a different approach to supply chain management. The analysis also found differences in supply chain practices within the high technology companies; in this case the analysis shows that high technology companies with more advanced supply chain practices are more successful.", "which has research problem ?", "Supply chain management", 148.0, 171.0], ["A statistical\u2010dynamical downscaling method is used to estimate future changes of wind energy output (Eout) of a benchmark wind turbine across Europe at the regional scale. With this aim, 22 global climate models (GCMs) of the Coupled Model Intercomparison Project Phase 5 (CMIP5) ensemble are considered. The downscaling method uses circulation weather types and regional climate modelling with the COSMO\u2010CLM model. Future projections are computed for two time periods (2021\u20132060 and 2061\u20132100) following two scenarios (RCP4.5 and RCP8.5). The CMIP5 ensemble mean response reveals a more likely than not increase of mean annual Eout over Northern and Central Europe and a likely decrease over Southern Europe. There is some uncertainty with respect to the magnitude and the sign of the changes. 
Higher robustness in future changes is observed for specific seasons. Except from the Mediterranean area, an ensemble mean increase of Eout is simulated for winter and a decreasing for the summer season, resulting in a strong increase of the intra\u2010annual variability for most of Europe. The latter is, in particular, probable during the second half of the 21st century under the RCP8.5 scenario. In general, signals are stronger for 2061\u20132100 compared to 2021\u20132060 and for RCP8.5 compared to RCP4.5. Regarding changes of the inter\u2010annual variability of Eout for Central Europe, the future projections strongly vary between individual models and also between future periods and scenarios within single models. This study showed for an ensemble of 22 CMIP5 models that changes in the wind energy potentials over Europe may take place in future decades. However, due to the uncertainties detected in this research, further investigations with multi\u2010model ensembles are needed to provide a better quantification and understanding of the future changes.", "which has research problem ?", "Europe", 142.0, 148.0], ["Satellite imagery provides a valuable source of information for maritime surveillance. The vast majority of the research regarding satellite imagery for maritime surveillance focuses on vessel detection and image enhancement, whilst vessel classification remains a largely unexplored research topic. This paper presents a vessel classifier for spaceborne electro-optical imagery based on a feature representative across all satellite imagery, texture. Local Binary Patterns were selected to represent vessels for their high distinctivity and low computational complexity. Considering vessels characteristic super-structure, the extracted vessel signatures are sub-divided in three sections bow, middle and stern. A hierarchical decision-level classification is proposed, analysing first each vessel section individually and then combining the results in the second stage. The proposed approach is evaluated with the electro-optical satellite image dataset presented in [1]. Experimental results reveal an accuracy of 85.64% across four vessel categories.", "which has research problem ?", "Vessel detection", 186.0, 202.0], ["Abstract Apomicts tend to have larger geographical distributional ranges and to occur in ecologically more extreme environments than their sexual progenitors. However, the expression of apomixis is typically linked to polyploidy. Thus, it is a priori not clear whether intrinsic effects related to the change in the reproductive mode or rather in the ploidy drive ecological differentiation. We used sympatric sexual and apomictic populations of Potentilla puberula to test for ecological differentiation. To distinguish the effects of reproductive mode and ploidy on the ecology of cytotypes, we compared the niches (a) of sexuals (tetraploids) and autopolyploid apomicts (penta\u2010, hepta\u2010, and octoploids) and (b) of the three apomictic cytotypes. We based comparisons on a ploidy screen of 238 populations along a latitudinal transect through the Eastern European Alps and associated bioclimatic, and soil and topographic data. Sexual tetraploids preferred primary habitats at drier, steeper, more south\u2010oriented slopes, while apomicts mostly occurred in human\u2010made habitats with higher water availability. Contrariwise, we found no or only marginal ecological differentiation among the apomictic higher ploids. 
Based on the pronounced ecological differences found between sexuals and apomicts, in addition to the lack of niche differentiation among cytotypes of the same reproductive mode, we conclude that reproductive mode rather than ploidy is the main driver of the observed differences. Moreover, we compared our system with others from the literature, to stress the importance of identifying alternative confounding effects (such as hybrid origin). Finally, we underline the relevance of studying ecological parthenogenesis in sympatry, to minimize the effects of differential migration abilities.", "which has research problem ?", "ecological parthenogenesis", 1705.0, 1731.0], ["Organic solar cells (OSCs) are a promising cost-effective alternative for utility of solar energy, and possess low-cost, light-weight, and fl exibility advantages. [ 1\u20137 ] Much attention has been focused on the development of OSCs which have seen a dramatic rise in effi ciency over the last decade, and the encouraging power conversion effi ciency (PCE) over 9% has been achieved from bulk heterojunction (BHJ) OSCs. [ 8 ] With regard to photoactive materials, fullerenes and their derivatives, such as [6,6]-phenyl C 61 butyric acid methyl ester (PC 61 BM), have been the dominant electron-acceptor materials in BHJ OSCs, owing to their high electron mobility, large electron affi nity and isotropy of charge transport. [ 9 ] However, fullerenes have a few disadvantages, such as restricted electronic tuning and weak absorption in the visible region. Furthermore, in typical BHJ system of poly(3-hexylthiophene) (P3HT):PC 61 BM, mismatching energy levels between donor and acceptor leads to energy loss and low open-circuit voltages ( V OC ). To solve these problems, novel electron acceptor materials with strong and broad absorption spectra and appropriate energy levels are necessary for OSCs. Recently, non-fullerene small molecule acceptors have been developed. [ 10 , 11 ] However, rare reports on the devices based on solution-processed non-fullerene small molecule acceptors have shown PCEs approaching or exceeding 1.5%, [ 12\u201319 ] and only one paper reported PCEs over 2%. [ 16 ]", "which has research problem ?", "Organic solar cells", 0.0, 19.0], ["In traditional assembly lines, it is reasonable to assume that task execution times are the same for each worker. However, in sheltered work centres for disabled this assumption is not valid: some workers may execute some tasks considerably slower or even be incapable of executing them. Worker heterogeneity leads to a problem called the assembly line worker assignment and balancing problem (ALWABP). For a fixed number of workers the problem is to maximize the production rate of an assembly line by assigning workers to stations and tasks to workers, while satisfying precedence constraints between the tasks. This paper introduces new heuristic and exact methods to solve this problem. We present a new MIP model, propose a novel heuristic algorithm based on beam search, as well as a task-oriented branch-and-bound procedure which uses new reduction rules and lower bounds for solving the problem. Extensive computational tests on a large set of instances show that these methods are effective and improve over existing ones.", "which has research problem ?", "Assembly line worker assignment and balancing problem", 339.0, 392.0], ["One aspect of human commonsense reasoning is the ability to make presumptions about daily experiences, activities and social interactions with others. 
We propose a new commonsense reasoning benchmark where the task is to uncover commonsense presumptions implied by imprecisely stated natural language commands in the form of if-then-because statements. For example, in the command \"If it snows at night then wake me up early because I don't want to be late for work\" the speaker relies on commonsense reasoning of the listener to infer the implicit presumption that it must snow enough to cause traffic slowdowns. Such if-then-because commands are particularly important when users instruct conversational agents. We release a benchmark data set for this task, collected from humans and annotated with commonsense presumptions. We develop a neuro-symbolic theorem prover that extracts multi-hop reasoning chains and apply it to this problem. We further develop an interactive conversational framework that evokes commonsense knowledge from humans for completing reasoning chains.", "which has research problem ?", "commonsense reasoning", 20.0, 41.0], ["Enterprise Resource Planning-ERP implementing of small and middle size enterprise \u2014 SME is different from the large one. Based on the analysis on the character of ERP marketing and SMEs of China, 6 critical success factors are recommended. The research suggests that the top management support is most important to ERP implement in SME of China, in which paternalism prevails. Database of management and capital are main obstacles. ERP 1 or ERP 2 fits to demand of SME; high power project team has tremendous significance in the situation of absence of IT engineer for SME; education and training is helpful to successfully ERP implementing. The results service as better understanding the ERP implementation of SME in China and gaining the good performance of ERP implementation.", "which has research problem ?", "Enterprise resource planning", 0.0, 28.0], ["We propose a conditional non-autoregressive neural sequence model based on iterative refinement. The proposed model is designed based on the principles of latent variable models and denoising autoencoders, and is generally applicable to any sequence generation task. We extensively evaluate the proposed model on machine translation (En-De and En-Ro) and image caption generation, and observe that it significantly speeds up decoding while maintaining the generation quality comparable to the autoregressive counterpart.", "which has research problem ?", "Machine Translation", 313.0, 332.0], ["Cross-domain named entity recognition (NER) models are able to cope with the scarcity issue of NER samples in target domains. However, most of the existing NER benchmarks lack domain-specialized entity types or do not focus on a certain domain, leading to a less effective cross-domain evaluation. To address these obstacles, we introduce a cross-domain NER dataset (CrossNER), a fully-labeled collection of NER data spanning over five diverse domains with specialized entity categories for different domains. Additionally, we also provide a domain-related corpus since using it to continue pre-training language models (domain-adaptive pre-training) is effective for the domain adaptation. We then conduct comprehensive experiments to explore the effectiveness of leveraging different levels of the domain corpus and pre-training strategies to do domain-adaptive pre-training for the cross-domain task. 
Results show that focusing on the fractional corpus containing domain-specialized entities and utilizing a more challenging pre-training strategy in domain-adaptive pre-training are beneficial for the NER domain adaptation, and our proposed method can consistently outperform existing cross-domain NER baselines. Nevertheless, experiments also illustrate the challenge of this cross-domain NER task. We hope that our dataset and baselines will catalyze research in the NER domain adaptation area. The code and data are available at this https URL.", "which has research problem ?", "Named Entity Recognition", 13.0, 37.0], ["ABSTRACT Australia has sustained a relatively high economic growth rate since the 1980s compared to other developed countries. Per capita CO2 emissions tend to be highest amongst OECD countries, creating new challenges to cut back emissions towards international standards. This research explores the long-run dynamics of CO2 emissions, economic and population growth along with the effects of globalization tested as contributing factors. We find economic growth is not emission-intensive in Australia, while energy consumption is emissions intensive. Second, in an environment of increasing population, our findings suggest Australia needs to be energy efficient at the household level, creating appropriate infrastructure for sustainable population growth. High population growth and open migration policy can be detrimental in reducing CO2 emissions. Finally, we establish globalized environment has been conducive in combating emissions. In this respect, we establish the beneficial effect of economic globalization compared to social and political dimensions of globalization in curbing emissions.", "which has research problem ?", "CO2 emissions", 138.0, 151.0], ["The rapid deployment of e-business systems has surprised even the most futuristic management thinkers. Unfortunately very little empirical research has documented the many variations of e-business solutions as major software vendors release complex IT products into the marketplace. The literature holds simultaneous evidence of major success and major failure as implementations evolve. It is not clear from the literature just what the difference is between e-commerce and its predecessor concepts of supply chain management and enterprise resource planning. In this paper we use existing case studies, industrial interviews, and survey data to describe how these systems are similar and how they differ. We develop a conceptual model to show how these systems are related and how they serve significantly different strategic objectives. Finally, we suggest the critical success factors that are the key issues to resolve in order to successfully implement these systems in practice.", "which has research problem ?", "Supply chain management", 503.0, 526.0], ["In Semantic Textual Similarity (STS), systems rate the degree of semantic equivalence, on a graded scale from 0 to 5, with 5 being the most similar. This year we set up two tasks: (i) a core task (CORE), and (ii) a typed-similarity task (TYPED). CORE is similar in set up to SemEval STS 2012 task with pairs of sentences from sources related to those of 2012, yet different in genre from the 2012 set, namely, this year we included newswire headlines, machine translation evaluation datasets and multiple lexical resource glossed sets. 
TYPED, on the other hand, is novel and tries to characterize why two items are deemed similar, using cultural heritage items which are described with metadata such as title, author or description. Several types of similarity have been defined, including similar author, similar time period or similar location. The annotation for both tasks leverages crowdsourcing, with relative high interannotator correlation, ranging from 62% to 87%. The CORE task attracted 34 participants with 89 runs, and the TYPED task attracted 6 teams with 14 runs.", "which has research problem ?", "Semantic Textual Similarity", 3.0, 30.0], ["We present the findings and results of the Second Nuanced Arabic Dialect Identification Shared Task (NADI 2021). This Shared Task includes four subtasks: country-level Modern Standard Arabic (MSA) identification (Subtask 1.1), country-level dialect identification (Subtask 1.2), province-level MSA identification (Subtask 2.1), and province-level sub-dialect identification (Subtask 2.2). The shared task dataset covers a total of 100 provinces from 21 Arab countries, collected from the Twitter domain. A total of 53 teams from 23 countries registered to participate in the tasks, thus reflecting the interest of the community in this area. We received 16 submissions for Subtask 1.1 from five teams, 27 submissions for Subtask 1.2 from eight teams, 12 submissions for Subtask 2.1 from four teams, and 13 submissions for Subtask 2.2 from four teams.", "which has research problem ?", "Country-level dialect identification", 222.0, 258.0], ["A rare case of embryonal sarcoma of the liver in a 28\u2010year\u2010old man is reported. The patient was treated preoperatively with a combination of chemotherapy and radiation therapy. Complete surgical resection, 4.5 months after diagnosis, consisted of a left hepatic lobectomy. No viable tumor was found in the operative specimen. The patient was disease\u2010free 20 months postoperatively.", "which has research problem ?", "Embryonal sarcoma of the liver", 15.0, 45.0], ["This paper aims to offer a profound analysis of the interrelations between smart city components connecting the cornerstones of the triple helix. The triple helix model has emerged as a reference framework for the analysis of knowledge-based innovation systems, and relates the multiple and reciprocal relationships between the three main agencies in the process of knowledge creation and capitalization: university, industry and government. This analysis of the triple helix will be augmented using the Analytic Network Process to model, cluster and begin measuring the performance of smart cities. The model obtained allows interactions and feedbacks within and between clusters, providing a process to derive ratio scales priorities from elements. This offers a more truthful and realistic representation for supporting policy-making. The application of this model is still to be developed, but a full list of indicators, available at urban level, has been identified and selected from literature review.", "which has research problem ?", "Smart cities", 586.0, 598.0], ["The Internet of Things (IoT) provides a vision of a world in which the Internet extends into the real world embracing everyday objects. In the IoT, physical objects are accompanied by Digital Twins: virtual, digital equivalents to physical objects. The interaction between real/physical and digital/virtual objects (digital twins) is an essential concept behind this vision. 
Digital twins can act as a central means to manage farms and has the potential to revolutionize agriculture. It removes fundamental constraints concerning place, time, and human observation. Farming operations would no longer require physical proximity, which allows for remote monitoring, control and coordination of farm operations. Moreover, Digital Twins can be enriched with information that cannot be observed (or not accurately) by the human senses, e.g. sensor and satellite data. A final interesting angle is that Digital Twins do not only represent actual states, but can also reproduce historical states and simulate future states. As a consequence, applications based on Digital Twins, if properly synchronized, enable farmers and other stakeholders to act immediately in case of (expected) deviations. This paper introduces the concept of Digital Twins and illustrate its application in agriculture by six cases of the SmartAgriFood and Fractals accelerator projects (2014-2016).", "which has research problem ?", "digital twin", null, null], ["It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.", "which has research problem ?", "ontology-driven conceptual model patterns", 652.0, 693.0], ["Purpose The purpose of this paper is to introduce a model for identifying, storing and sharing contextual information across smartphone apps that uses the native device services. The authors present the idea of using user input and interaction within an app as contextual information, and how each app can identify and store contextual information. Design/methodology/approach Contexts are modeled as hierarchical objects that can be stored and shared by applications using native mechanisms. A proof-of-concept implementation of the model for the Android platform demonstrates contexts modelled as hierarchical objects stored and shared by applications using native mechanisms. Findings The model was found to be practically viable by implemented sample apps that share context and through a performance analysis of the system. Practical implications The contextual data-sharing model enables the creation of smart apps and services without being tied to any vendor\u2019s cloud services. Originality/value This paper introduces a new approach for sharing context in smartphone applications that does not require cloud services.", "which has research problem ?", "contextual data", 856.0, 871.0], ["The development of electronic health records, wearable devices, health applications and Internet of Things (IoT)-empowered smart homes is promoting various applications. It also makes health self-management much more feasible, which can partially mitigate one of the challenges that the current healthcare system is facing. 
Effective and convenient self-management of health requires the collaborative use of health data and home environment data from different services, devices, and even open data on the Web. Although health data interoperability standards including HL7 Fast Healthcare Interoperability Resources (FHIR) and IoT ontology including Semantic Sensor Network (SSN) have been developed and promoted, it is impossible for all the different categories of services to adopt the same standard in the near future. This study presents a method that applies Semantic Web technologies to integrate the health data and home environment data from heterogeneously built services and devices. We propose a Web Ontology Language (OWL)-based integration ontology that models health data from HL7 FHIR standard implemented services, normal Web services and Web of Things (WoT) services and Linked Data together with home environment data from formal ontology-described WoT services. It works on the resource integration layer of the layered integration architecture. An example use case with a prototype implementation shows that the proposed method successfully integrates the health data and home environment data into a resource graph. The integrated data are annotated with semantics and ontological links, which make them machine-understandable and cross-system reusable.", "which has research problem ?", "Semantic Web technologies to integrate the health data and home environment data", 866.0, 946.0], ["This study examines the potential of Renewable Energy Sources (RES) in reducing the impact of carbon emission in Malaysia and the Greenhouse Gas (GHG) emissions, which leads to global warming. Using the Environmental Kuznets Curve (EKC) hypothesis, this study analyses the impact of electricity generated using RES on the environment and trade openness for the period 1980-2009. Using the Autoregressive Distributed Lag (ARDL) approach the results show that the elasticities of electricity production from renewable sources with respect to CO2 emissions are negative and significant in both the short and long-run. This implies the potential of renewable energy in controlling CO2 emissions in both short and long-run in Malaysia. Renewable energy can ensure sustainability of electricity supply and at the same time can reduce CO2 emissions. Trade openness has a significant negative effect on CO2 emissions in the long-run. The Granger causality test based on Vector Error Correction Mode (VECM) indicates that there is an evidence of positive bi-directional Granger causality relationship between economic growth and CO2 emissions in the short and long-run suggesting that carbon emissions and economic growth are interrelated to each other. Furthermore, there is a negative long-run bi-directional Granger causality relationship between electricity production from renewable sources and CO2 emissions. The short-run Granger causality shows a negative uni-directional causality for electricity production from renewable sources to CO2 emissions. This result suggests that there is an inverted U-shaped relationship between CO2 emissions and economic growth.", "which has research problem ?", "CO2 emissions", 540.0, 553.0], ["In recent years, the problem of scene text extraction from images has received extensive attention and significant progress. However, text extraction from scholarly figures such as plots and charts remains an open problem, in part due to the difficulty of locating irregularly placed text lines. 
To the best of our knowledge, literature has not described the implementation of a text extraction system for scholarly figures that adapts deep convolutional neural networks used for scene text detection. In this paper, we propose a text extraction approach for scholarly figures that forgoes preprocessing in favor of using a deep convolutional neural network for text line localization. Our system uses a publicly available scene text detection approach whose network architecture is well suited to text extraction from scholarly figures. Training data are derived from charts in arXiv papers which are extracted using Allen Institute's pdffigures tool. Since this tool analyzes PDF data as a container format in order to extract text location through the mechanisms which render it, we were able to gather a large set of labeled training samples. We show significant improvement from methods in the literature, and discuss the structural changes of the text extraction pipeline.", "which has research problem ?", "scene text detection", 480.0, 500.0], ["This article investigates the Environmental Kuznets Curves (EKC) for CO2 emissions in a panel of 109 countries during the period 1959 to 2001. The length of the series makes the application of a heterogeneous estimator suitable from an econometric point of view. The results, based on the hierarchical Bayes estimator, show that different EKC dynamics are associated with the different sub-samples of countries considered. On average, more industrialized countries show evidence of EKC in quadratic specifications, which nevertheless are probably evolving into an N-shape based on their cubic specification. Nevertheless, it is worth noting that the EU, and not the Umbrella Group led by US, has been driving currently observed EKC-like shapes. The latter is associated to monotonic income\u2013CO2 dynamics. The EU shows a clear EKC shape. Evidence for less-developed countries consistently shows that CO2 emissions rise positively with income, though there are some signs of an EKC. Analyses of future performance, nevertheless, favour quadratic specifications, thus supporting EKC evidence for wealthier countries and non-EKC shapes for industrializing regions.", "which has research problem ?", "CO2 emissions", 69.0, 82.0], ["This paper presents an active learning method that directly optimizes expected future error. This is in contrast to many other popular techniques that instead aim to reduce version space size. These other methods are popular because for many learning models, closed form calculation of the expected future error is intractable. Our approach is made feasible by taking a sampling approach to estimating the expected reduction in error due to the labeling of a query. In experimental results on two real-world data sets we reach high accuracy very quickly, sometimes with four times fewer labeled examples than competing methods.", "which has research problem ?", "Active learning", 23.0, 38.0], ["This paper uses VAR models to investigate the impact of real exchange rate volatility on U.S. bilateral imports from the United Kingdom, France, Germany, Japan and Canada. The VAR systems include U.S. and foreign macro variables, and are estimated separately for each country. The major results suggest that the effect of volatility on imports is weak, although permanent shocks to volatility do have a negative impact on this measure of trade, and those effects are relatively more important over the flexible rate period. 
Copyright 1989 by MIT Press.", "which has research problem ?", "Exchange rate volatility", 61.0, 85.0], ["The composition of the essential oil hydrodistilled from the aerial parts of 18 individual Artemisia herba-alba Asso. plants collected in southern Tunisia was determined by GC and GCMS analysis. The oil yield varied between 0.68% v/w and 1.93% v/w. One hundred components were identified, 21 of which are reported for the first time in Artemisia herba-alba oil. The oil contained 10 components with percentages higher than 10%. The main components were cineole, thujones, chrysanthenone, camphor, borneol, chrysanthenyl acetate, sabinyl acetate, davana ethers and davanone. Twelve samples had monoterpenes as major components, three had sesquiterpenes as major components and the last three samples had approximately the same percentage of monoterpenes and sesquiterpenes. The chemical compositions revealed that ten samples had compositions similar to those of other Artemisia herba-alba essential oils analyzed in other countries. The remaining eight samples had an original chemical composition.", "which has research problem ?", "Oil", 33.0, 36.0], ["The automatic recognition of facial expression presents a significant challenge to the pattern analysis and man-machine interaction research community. Recognition from a single static image is particularly a difficult task. In this paper, we present a methodology for facial expression recognition from a single static image using line-based caricatures. The recognition process is completely automatic. It also addresses the computational expensive problem and is thus suitable for real-time applications. The proposed approach uses structural and geometrical features of a user sketched expression model to match the line edge map (LEM) descriptor of an input face image. A disparity measure that is robust to expression variations is defined. The effectiveness of the proposed technique has been evaluated and promising results are obtained. This work has proven the proposed idea that facial expressions can be characterized and recognized by caricatures.", "which has research problem ?", "Facial Expression Recognition", 269.0, 298.0], ["Abstract Language and literary studies have studied style for centuries, and even since the advent of \u203astylistics\u2039 as a discipline at the beginning of the twentieth century, definitions of \u203astyle\u2039 have varied heavily across time, space and fields. Today, with increasingly large collections of literary texts being made available in digital form, computational approaches to literary style are proliferating. New methods from disciplines such as corpus linguistics and computer science are being adopted and adapted in interrelated fields such as computational stylistics and corpus stylistics, and are facilitating new approaches to literary style. The relation between definitions of style in established linguistic or literary stylistics, and definitions of style in computational or corpus stylistics has not, however, been systematically assessed. This contribution aims to respond to the need to redefine style in the light of this new situation and to establish a clearer perception of both the overlap and the boundaries between \u203amainstream\u2039 and \u203acomputational\u2039 and/or \u203aempirical\u2039 literary stylistics. While stylistic studies of non-literary texts are currently flourishing, our contribution deliberately centers on those approaches relevant to \u203aliterary stylistics\u2039. 
It concludes by proposing an operational definition of style that we hope can act as a common ground for diverse approaches to literary style, fostering transdisciplinary research. The focus of this contribution is on literary style in linguistics and literary studies (rather than in art history, musicology or fashion), on textual aspects of style (rather than production- or reception-oriented theories of style), and on a descriptive perspective (rather than a prescriptive or didactic one). Even within these limits, however, it appears necessary to build on a broad understanding of the various perspectives on style that have been adopted at different times and in different traditions. For this reason, the contribution first traces the development of the notion of style in three different traditions, those of German, Dutch and French language and literary studies. Despite the numerous links between each other, and between each of them to the British and American traditions, these three traditions each have their proper dynamics, especially with regard to the convergence and/or confrontation between mainstream and computational stylistics. For reasons of space and coherence, the contribution is limited to theoretical developments occurring since 1945. The contribution begins by briefly outlining the range of definitions of style that can be encountered across traditions today: style as revealing a higher-order aesthetic value, as the holistic \u203agestalt\u2039 of single texts, as an expression of the individuality of an author, as an artifact presupposing choice among alternatives, as a deviation from a norm or reference, or as any formal property of a text. The contribution then traces the development of definitions of style in each of the three traditions mentioned, with the aim of giving a concise account of how, in each tradition, definitions of style have evolved over time, with special regard to the way such definitions relate to empirical, quantitative or otherwise computational approaches to style in literary texts. It will become apparent how, in each of the three traditions, foundational texts continue to influence current discussions on literary style, but also how stylistics has continuously reacted to broader developments in cultural and literary theory, and how empirical, quantitative or computational approaches have long existed, usually in parallel to or at the margins of mainstream stylistics. The review will also reflect the lines of discussion around style as a property of literary texts \u2013 or of any textual entity in general. The perspective on three stylistic traditions is accompanied by a more systematic perspective. The rationale is to work towards a common ground for literary scholars and linguists when talking about (literary) style, across traditions of stylistics, with respect for established definitions of style, but also in light of the digital paradigm. Here, we first show to what extent, at similar or different moments in time, the three traditions have developed comparable positions on style, and which definitions out of the range of possible definitions have been proposed or promoted by which authors in each of the three traditions. On the basis of this synthesis, we then conclude by proposing an operational definition of style that is an attempt to provide a common ground for both mainstream and computational literary stylistics. 
This definition is discussed in some detail in order to explain not only what is meant by each term in the definition, but also how it relates to computational analyses of style \u2013 and how this definition aims to avoid some of the pitfalls that can be perceived in earlier definitions of style. Our definition, we hope, will be put to use by a new generation of computational, quantitative, and empirical studies of style in literary texts.", "which has research problem ?", "Definition of style", 1365.0, 1384.0], ["In this paper we analyze the job-matching quality of people with disabilities. We do not find evidence of a greater importance of over-education in this group in comparison to the rest of the population. We find that people with disabilities have a lower probability of being over-educated for a period of 3 or more years, a higher probability of leaving mismatch towards inactivity or marginal employment, a lower probability of leaving mismatch towards a better match, and a higher probability of employment mobility towards inactivity or marginal employment. The empirical analysis is based on Spanish data from the European Community Household Panel from 1995 to 2000.", "which has research problem ?", "Over-education", 130.0, 144.0], ["Wikidata is becoming an increasingly important knowledge base whose usage is spreading in the research community. However, most question answering systems evaluation datasets rely on Freebase or DBpedia. We present two new datasets in order to train and benchmark QA systems over Wikidata. The first is a translation of the popular SimpleQuestions dataset to Wikidata; the second is a dataset created by collecting user feedback.", "which has research problem ?", "Question Answering ", 128.0, 147.0], ["Extracting information from full documents is an important problem in many domains, but most previous work focuses on identifying relationships within a sentence or a paragraph. It is challenging to create a large-scale information extraction (IE) dataset at the document level since it requires an understanding of the whole document to annotate entities and their document-level relationships that usually span beyond sentences or even sections. In this paper, we introduce SciREX, a document level IE dataset that encompasses multiple IE tasks, including salient entity identification and document level N-ary relation identification from scientific articles. We annotate our dataset by integrating automatic and human annotations, leveraging existing scientific knowledge resources. We develop a neural model as a strong baseline that extends previous state-of-the-art IE models to document-level IE. Analyzing the model performance shows a significant gap between human performance and current baselines, inviting the community to use our dataset as a challenge to develop document-level IE models. Our data and code are publicly available at https://github.com/allenai/SciREX .", "which has research problem ?", "salient entity identification", 556.0, 585.0], ["Cytogenetic analyses performed at diagnosis on 443 adult patients with acute lymphoblastic leukemia (ALL) were reviewed by the Groupe Fran\u00e7ais de Cytog\u00e9n\u00e9tique H\u00e9matologique, correlated with hematologic data, and compared with findings for childhood ALL. This study showed that the same recurrent abnormalities as those reported in childhood ALL are found in adults, and it determined their frequencies and distribution according to age. 
Hyperdiploidy greater than 50 chromosomes with a standard pattern of chromosome gains had a lower frequency (7%) than in children, and was associated with the Philadelphia chromosome (Ph) in 11 of 30 cases. Tetraploidy (2%) and triploidy (3%) were more frequent than in childhood ALL. Hypodiploidy 30-39 chromosomes (2%), characterized by a specific pattern of chromosome losses, might be related to the triploid group that evoked a duplication of the 30-39 hypodiploidy. Both groups shared similar hematologic features. Ph+ ALL (29%) peaked in the 40- to 50-year-old age range (49%) and showed a high frequency of myeloid antigens (24%). ALL with t(1;19) (3%) occurred in young adults (median age, 22 years). In T-cell ALL (T-ALL), frequencies of 14q11 breakpoints (26%) and of t(10;14)(q24;q11) (14%) were higher than those in childhood ALL. New recurrent changes were identified, ie, monosomies 7 present in Ph-ALL (17%) and also in other ALL (8%) and two new recurrent translocations, t(1;11)(p34;p11) in T-ALL and t(1;7)(q11-21;q35-36) in Ph+ ALL. The ploidy groups with a favorable prognostic impact were hyperdiploidy greater than 50 without Ph chromosome (median event-free survival [EFS], 46 months) and tetraploidy (median EFS, 46 months). The recurrent abnormalities associated with better response to therapy were also significantly correlated to T-cell lineage. Among them, t(10;14)(q24;q11) (median EFS, 46 months) conferred the best prognostic impact (3-year EFS, 75%). Hypodiploidy 30-39 chromosomes and the related triploidy were associated with poor outcome. All Ph-ALL had short EFS (median EFS, 5 months), and no additional change affected this prognostic impact. Most patients with t(1;19) failed therapy within 1 year. Patients with 11q23 changes not because of t(4;11) had a poor outcome, although they did not present the high-risk factors found in t(4;11).", "which has research problem ?", "acute lymphoblastic leukemia (ALL)", NaN, NaN], ["There has been a growing interest in the design and synthesis of non-fullerene acceptors for organic solar cells that may overcome the drawbacks of the traditional fullerene-based acceptors. Herein, two novel push-pull (acceptor-donor-acceptor) type small-molecule acceptors, that is, ITDI and CDTDI, with indenothiophene and cyclopentadithiophene as the core units and 2-(3-oxo-2,3-dihydroinden-1-ylidene)malononitrile (INCN) as the end-capping units, are designed and synthesized for non-fullerene polymer solar cells (PSCs). After device optimization, PSCs based on ITDI exhibit good device performance with a power conversion efficiency (PCE) as high as 8.00%, outperforming the CDTDI-based counterparts fabricated under identical conditions (2.75% PCE). We further discuss the performance of these non-fullerene PSCs by correlating the energy level and carrier mobility with the core of non-fullerene acceptors. These results demonstrate that indenothiophene is a promising electron-donating core for high-performance non-fullerene small-molecule acceptors.", "which has research problem ?", "Organic solar cells", 93.0, 112.0], ["Smart cities (SCs) are a recent but emerging phenomenon, aiming at using high technology and especially information and communications technology (ICT) to implement better living conditions in large metropolises, to involve citizens in city government, and to support sustainable economic development and city attractiveness. The final goal is to improve the quality of city life for all stakeholders. 
Until now, SCs have been developing as bottom-up projects, bringing together smart initiatives driven by public bodies, enterprises, citizens, and not-for-profit organizations. However, to build a long-term smart strategy capable of producing better returns from investments and deciding priorities regarding each city, a comprehensive SC governance framework is needed. The aim of this paper is to collect empirical evidences regarding government structures implemented in SCs and to outline a framework for the roles of local governments, nongovernmental agencies, and administrative officials. The survey shows that no consolidated standards or best practices for governing SCs are implemented in the examined cities; however, each city applies its own governance framework. Moreover, the study reveals some interesting experiences that may be useful for involving citizens and civil society in SC governance.", "which has research problem ?", "Smart cities", 0.0, 12.0], [" Open educational resources are currently becoming increasingly available from a multitude of sources and are consequently annotated in many diverse ways. Interoperability concerns that naturally arise can often be resolved through the semantification of metadata descriptions, while at the same time strengthening the knowledge value of resources. SKOS can be a solid linking point offering a standard vocabulary for thematic descriptions, by referencing semantic thesauri. We propose the enhancement and maintenance of educational resources\u2019 metadata in the form of learning object ontologies and introduce the notion of a learning object ontology repository that can help towards their publication, discovery and reuse. At the same time, linking to thesauri datasets and contextualized sources interrelates learning objects with linked data and exposes them to the Web of Data. We build a set of extensions and workflows on top of contemporary ontology management tools, such as WebProt\u00e9g\u00e9, that can make it suitable as a learning object ontology repository. The proposed approach and implementation can help libraries and universities in discovering, managing and incorporating open educational resources and enhancing current curricula. ", "which has research problem ?", "metadata description", NaN, NaN], ["We present a novel end-to-end neural model to extract entities and relations between them. Our recurrent neural network based model captures both word sequence and dependency tree substructure information by stacking bidirectional tree-structured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows our model to jointly represent both entities and relations with shared parameters in a single model. We further encourage detection of entities during training and use of entity information in relation extraction via entity pretraining and scheduled sampling. Our model improves over the state-of-the-art feature-based model on end-to-end relation extraction, achieving 12.1% and 5.7% relative error reductions in F1-score on ACE2005 and ACE2004, respectively. We also show that our LSTM-RNN based model compares favorably to the state-of-the-art CNN based model (in F1-score) on nominal relation classification (SemEval-2010 Task 8). Finally, we present an extensive ablation analysis of several model components.", "which has research problem ?", "End-to-end Relation Extraction", 636.0, 666.0], ["The CoNLL-2010 Shared Task was dedicated to the detection of uncertainty cues and their linguistic scope in natural language texts. 
The motivation behind this task was that distinguishing factual and uncertain information in texts is of essential importance in information extraction. This paper provides a general overview of the shared task, including the annotation protocols of the training and evaluation datasets, the exact task definitions, the evaluation metrics employed and the overall results. The paper concludes with an analysis of the prominent approaches and an overview of the systems submitted to the shared task.", "which has research problem ?", "Detection of uncertainty cues and their linguistic scope", 48.0, 104.0], ["The BioCreative NLM-Chem track calls for a community effort to fine-tune automated recognition of chemical names in biomedical literature. Chemical names are one of the most searched biomedical entities in PubMed and \u2013 as highlighted during the COVID-19 pandemic \u2013 their identification may significantly advance research in multiple biomedical subfields. While previous community challenges focused on identifying chemical names mentioned in titles and abstracts, the full text contains valuable additional detail. We organized the BioCreative NLM-Chem track to call for a community effort to address automated chemical entity recognition in full-text articles. The track consisted of two tasks: 1) Chemical Identification task, and 2) Chemical Indexing prediction task. For the Chemical Identification task, participants were expected to predict with high accuracy all chemicals mentioned in recently published full-text articles, both span (i.e., named entity recognition) and normalization (i.e., entity linking) using MeSH. For the Chemical Indexing task, participants identified which chemicals should be indexed as topics for the article in the NLM indexing, i.e., appear in the listing of MeSH terms for the document. This manuscript summarizes the BioCreative NLM-Chem track. We received a total of 88 submissions from 17 teams worldwide. The highest performance achieved for the Chemical Identification task was 0.8672 f-score (0.8759 precision, 0.8587 recall) for strict NER performance and 0.8136 f-score (0.8621 precision, 0.7702 recall) for strict normalization performance. The highest performance achieved for the Chemical Indexing task was 0.4825 f-score (0.4397 precision, 0.5344 recall). The NLM-Chem track dataset and other challenge materials are publicly available at https://ftp.ncbi.nlm.nih.gov/pub/lu/BC7-NLM-Chem-track/. This community challenge demonstrated 1) the current substantial achievements in deep learning technologies can be utilized to further improve automated prediction accuracy, and 2) the Chemical Indexing task is substantially more challenging. We look forward to further development of biomedical text mining methods to respond to the rapid growth of biomedical literature. Keywords\u2014 biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; text mining; chemical entity recognition; chemical indexing", "which has research problem ?", "chemical indexing", 736.0, 753.0], ["The unique properties of two-dimensional (2D) materials make them promising candidates for chemical and biological sensing applications. However, most 2D nanomaterial sensors suffer from very long recovery times due to slow molecular desorption at room temperature. Here, we report a highly sensitive molybdenum ditelluride (MoTe2) gas sensor for NO2 and NH3 detection with greatly enhanced recovery rate. 
The effects of gate bias on sensing performance have been systematically studied. It is found that the recovery kinetics can be effectively adjusted by biasing the sensor to different gate voltages. Under the optimum biasing potential, the MoTe2 sensor can achieve more than 90% recovery after each sensing cycle well within 10 min at room temperature. The results demonstrate the potential of MoTe2 as a promising candidate for high-performance chemical sensors. The idea of exploiting gate bias to adjust molecular desorption kinetics can be readily applied to much wider sensing platforms based on 2D nanomaterials.", "which has research problem ?", "Chemical sensors", 846.0, 862.0], ["Smart manufacturing is the core of the 4th industrial revolution. How to realize the intelligent interaction between hardware and software is a key issue in smart manufacturing. The paper proposes the architecture of Digital Twin-driven Smart ShopFloor (DTSF), as a contribution to the research discussion about the Digital Twin concept. Then the scheme for dynamic resource allocation optimization (DRAO) is designed for DTSF, as an application of the proposed architecture. Furthermore, a case study is given to illustrate the detailed method of DRAO. The experimental result shows that the proposed scheme is effective.", "which has research problem ?", "digital twin", 228.0, 240.0], ["The aim of the present study was to investigate the chemical composition, antioxidant, angiotensin I-converting enzyme (ACE) inhibitory, antibacterial and antifungal activities of the essential oil of Artemisia herba alba Asso (Aha), a traditional medicinal plant widely growing in Tunisia. The essential oil from the air-dried leaves and flowers of Aha was extracted by hydrodistillation and analyzed by GC and GC/MS. More than fifty compounds were detected, out of which 48 were identified. The main chemical class of the oil was represented by oxygenated monoterpenes (50.53%). These were represented by 21 derivatives, among which the cis-chrysantenyl acetate (10.60%), the sabinyl acetate (9.13%) and the \u03b1-thujone (8.73%) were the principal compounds. Oxygenated sesquiterpenes, particularly arbusculones, were identified in the essential oil at relatively high rates. The Aha essential oil was found to have an interesting antioxidant activity as evaluated by the 2,2-diphenyl-1-picrylhydrazyl and the \u03b2-carotene bleaching methods. The Aha essential oil also exhibited an inhibitory activity towards the ACE. The antimicrobial activities of Aha essential oil were evaluated against six bacterial strains and three fungal strains by the agar diffusion method and by determining the inhibition zone. The inhibition zones were in the range of 8-51 mm. The essential oil exhibited a strong growth inhibitory activity on all the studied fungi. Our findings demonstrated that Aha growing wild in South-Western Tunisia seems to be a new chemotype and its essential oil might be a natural potential source for food preservation and for further investigation by developing new bioactive substances.", "which has research problem ?", "Oil", 193.0, 196.0], ["Powerful speeches can captivate audiences, whereas weaker speeches fail to engage their listeners. What is happening in the brains of a captivated audience? Here, we assess audience-wide functional brain dynamics during listening to speeches of varying rhetorical quality. The speeches were given by German politicians and evaluated as rhetorically powerful or weak. 
Listening to each of the speeches induced similar neural response time courses, as measured by inter-subject correlation analysis, in widespread brain regions involved in spoken language processing. Crucially, alignment of the time course across listeners was stronger for rhetorically powerful speeches, especially for bilateral regions of the superior temporal gyri and medial prefrontal cortex. Thus, during powerful speeches, listeners as a group are more coupled to each other, suggesting that powerful speeches are more potent in taking control of the listeners' brain responses. Weaker speeches were processed more heterogeneously, although they still prompted substantially correlated responses. These patterns of coupled neural responses bear resemblance to metaphors of resonance, which are often invoked in discussions of speech impact, and contribute to the literature on auditory attention under natural circumstances. Overall, this approach opens up possibilities for research on the neural mechanisms mediating the reception of entertaining or persuasive messages.", "which has research problem ?", "brain dynamics during listening to speeches", 198.0, 241.0], ["In this paper, we describe our method for detection of lexical semantic change, i.e., word sense changes over time. We examine semantic differences between specific words in two corpora, chosen from different time periods, for English, German, Latin, and Swedish. Our method was created for the SemEval 2020 Task 1: Unsupervised Lexical Semantic Change Detection. We ranked 1st in Sub-task 1: binary change detection, and 4th in Sub-task 2: ranked change detection. We present our method which is completely unsupervised and language independent. It consists of preparing a semantic vector space for each corpus, earlier and later; computing a linear transformation between earlier and later spaces, using Canonical Correlation Analysis and orthogonal transformation;and measuring the cosines between the transformed vector for the target word from the earlier corpus and the vector for the target word in the later corpus.", "which has research problem ?", "Lexical Semantic Change detection", 329.0, 362.0], ["This paper introduces a class of correlation filters called average of synthetic exact filters (ASEF). For ASEF, the correlation output is completely specified for each training image. This is in marked contrast to prior methods such as synthetic discriminant functions (SDFs) which only specify a single output value per training image. Advantages of ASEF training include: insensitivity to over-fitting, greater flexibility with regard to training images, and more robust behavior in the presence of structured backgrounds. The theory and design of ASEF filters is presented using eye localization on the FERET database as an example task. ASEF is compared to other popular correlation filters including SDF, MACE, OTF, and UMACE, and with other eye localization methods including Gabor Jets and the OpenCV cascade classifier. ASEF is shown to outperform all these methods, locating the eye to within the radius of the iris approximately 98.5% of the time.", "which has research problem ?", "Eye localization", 583.0, 599.0], ["We present a novel technique for learning semantic representations, which extends the distributional hypothesis to multilingual data and joint-space embeddings. 
Our models leverage parallel data and learn to strongly align the embeddings of semantically equivalent sentences, while maintaining sufficient distance between those of dissimilar sentences. The models do not rely on word alignments or any syntactic information and are successfully applied to a number of diverse languages. We extend our approach to learn semantic representations at the document level, too. We evaluate these models on two cross-lingual document classification tasks, outperforming the prior state of the art. Through qualitative analysis and the study of pivoting effects we demonstrate that our representations are semantically plausible and can capture semantic relationships across languages without parallel data.", "which has research problem ?", "Cross-Lingual Document Classification", 604.0, 641.0], ["We examine how exchange rate volatility affects exporter's pricing decisions in the presence of optimal forward covering. By taking account of forward covering, we are able to derive an expression for the risk premium in the foreign exchange market, which is then estimated as a generalized ARCH model to obtain the time-dependent variance of the exchange rate. Our theory implies a connection between the estimated risk premium equation, and the influence of exchange rate volatility on export prices. In particular, we argue that if there is no risk premium, then exchange rate variance can only have a negative impact on export prices. In the presence of a risk premium, however, the effect of exchange rate variance on export prices is ambiguous, and may be statistically insignificant with aggregate data. These results are supported using data on aggregate U.S. imports and exchange rates of the dollar against the pound, yen and mark.", "which has research problem ?", "Exchange rate volatility", 15.0, 39.0], ["Extracting relational triples from unstructured text is crucial for large-scale knowledge graph construction. However, few existing works excel in solving the overlapping triple problem where multiple relational triples in the same sentence share the same entities. We propose a novel Hierarchical Binary Tagging (HBT) framework derived from a principled problem formulation. Instead of treating relations as discrete labels as in previous works, our new framework models relations as functions that map subjects to objects in a sentence, which naturally handles overlapping triples. Experiments show that the proposed framework already outperforms state-of-the-art methods even when its encoder module uses a randomly initialized BERT encoder, showing the power of the new tagging framework. It enjoys a further performance boost when employing a pretrained BERT encoder, outperforming the strongest baseline by 25.6 and 45.9 absolute gain in F1-score on two public datasets NYT and WebNLG, respectively. In-depth analysis on different types of overlapping triples shows that the method delivers consistent performance gain in all scenarios.", "which has research problem ?", "the overlapping triple problem where multiple relational triples in the same sentence share the same entities", 155.0, 264.0], ["We explore the use of Evolution Strategies (ES), a class of black box optimization algorithms, as an alternative to popular MDP-based RL techniques such as Q-learning and Policy Gradients. 
Experiments on MuJoCo and Atari show that ES is a viable solution strategy that scales extremely well with the number of CPUs available: By using a novel communication strategy based on common random numbers, our ES implementation only needs to communicate scalars, making it possible to scale to over a thousand parallel workers. This allows us to solve 3D humanoid walking in 10 minutes and obtain competitive results on most Atari games after one hour of training. In addition, we highlight several advantages of ES as a black box optimization technique: it is invariant to action frequency and delayed rewards, tolerant of extremely long horizons, and does not need temporal discounting or value function approximation.", "which has research problem ?", "Atari Games", 617.0, 628.0], ["Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al. (2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al. (2016) using logistic regression and manually crafted features.", "which has research problem ?", "Machine comprehension", 0.0, 21.0], ["Machine comprehension (MC) style question answering is a representative problem in natural language processing. Previous methods rarely spend time on the improvement of the encoding layer, especially the embedding of syntactic information and named entities of the words, which are very crucial to the quality of encoding. Moreover, existing attention methods represent each query word as a vector or use a single vector to represent the whole query sentence; neither of them can handle the proper weight of the key words in the query sentence. In this paper, we introduce a novel neural network architecture called Multi-layer Embedding with Memory Network (MEMEN) for the machine reading task. In the encoding layer, we employ the classic skip-gram model to the syntactic and semantic information of the words to train a new kind of embedding layer. We also propose a memory network of full-orientation matching of the query and passage to catch more pivotal information. Experiments show that our model has competitive results both from the perspectives of precision and efficiency in Stanford Question Answering Dataset (SQuAD) among all published results and achieves the state-of-the-art results on TriviaQA dataset.", "which has research problem ?", "Question Answering", 32.0, 50.0], ["Within the last few decades, plastics have revolutionized our daily lives. Globally we use in excess of 260 million tonnes of plastic per annum, accounting for approximately 8 per cent of world oil production. 
In this Theme Issue of Philosophical Transactions of the Royal Society, we describe current and future trends in usage, together with the many benefits that plastics bring to society. At the same time, we examine the environmental consequences resulting from the accumulation of waste plastic, the effects of plastic debris on wildlife and concerns for human health that arise from the production, usage and disposal of plastics. Finally, we consider some possible solutions to these problems together with the research and policy priorities necessary for their implementation.", "which has research problem ?", "Plastic", 126.0, 133.0], ["Ontologies are useful for organizing large numbers of concepts having complex relationships, such as the breadth of genetic and clinical knowledge in pharmacogenomics. But because ontologies change and knowledge evolves, it is time-consuming to maintain stable mappings to external data sources that are in relational format. We propose a method for interfacing ontology models with data acquisition from external relational data sources. This method uses a declarative interface between the ontology and the data source, and this interface is modeled in the ontology and implemented using XML schema. Data is imported from the relational source into the ontology using XML, and data integrity is checked by validating the XML submission with an XML schema. We have implemented this approach in PharmGKB (http://www.pharmgkb.org/), a pharmacogenetics knowledge base. Our goals were to (1) import genetic sequence data, collected in relational format, into the pharmacogenetics ontology, and (2) automate the process of updating the links between the ontology and data acquisition when the ontology changes. We tested our approach by linking PharmGKB with data acquisition from a relational model of genetic sequence information. The ontology subsequently evolved, and we were able to rapidly update our interface with the external data and continue acquiring the data. Similar approaches may be helpful for integrating other heterogeneous information sources in order to make the diversity of pharmacogenetics data amenable to computational analysis.", "which has research problem ?", "interfacing ontology models with data acquisition from external relational data sources", 350.0, 437.0], ["The COVID-19 pandemic has spawned a diverse body of scientific literature that is challenging to navigate, stimulating interest in automated tools to help find useful knowledge. We pursue the construction of a knowledge base (KB) of mechanisms\u2014a fundamental concept across the sciences, which encompasses activities, functions and causal relations, ranging from cellular processes to economic impacts. We extract this information from the natural language of scientific papers by developing a broad, unified schema that strikes a balance between relevance and breadth. We annotate a dataset of mechanisms with our schema and train a model to extract mechanism relations from papers. Our experiments demonstrate the utility of our KB in supporting interdisciplinary scientific search over COVID-19 literature, outperforming the prominent PubMed search in a study with clinical experts. 
Our search engine, dataset and code are publicly available.", "which has research problem ?", "construction of a knowledge base (KB) of mechanisms", NaN, NaN], ["The fifth phase of the Coupled Model Intercomparison Project (CMIP5) will produce a state-of-the-art multimodel dataset designed to advance our knowledge of climate variability and climate change. Researchers worldwide are analyzing the model output and will produce results likely to underlie the forthcoming Fifth Assessment Report by the Intergovernmental Panel on Climate Change. Unprecedented in scale and attracting interest from all major climate modeling groups, CMIP5 includes \u201clong term\u201d simulations of twentieth-century climate and projections for the twenty-first century and beyond. Conventional atmosphere\u2013ocean global climate models and Earth system models of intermediate complexity are for the first time being joined by more recently developed Earth system models under an experiment design that allows both types of models to be compared to observations on an equal footing. Besides the long-term experiments, CMIP5 calls for an entirely new suite of \u201cnear term\u201d simulations focusing on recent decades...", "which has research problem ?", "climate change", 182.0, 196.0], ["We introduce the STEM (Science, Technology, Engineering, and Medicine) Dataset for Scientific Entity Extraction, Classification, and Resolution, version 1.0 (STEM-ECR v1.0). The STEM-ECR v1.0 dataset has been developed to provide a benchmark for the evaluation of scientific entity extraction, classification, and resolution tasks in a domain-independent fashion. It comprises abstracts in 10 STEM disciplines that were found to be the most prolific ones on a major publishing platform. We describe the creation of such a multidisciplinary corpus and highlight the obtained findings in terms of the following features: 1) a generic conceptual formalism for scientific entities in a multidisciplinary scientific context; 2) the feasibility of the domain-independent human annotation of scientific entities under such a generic formalism; 3) a performance benchmark obtainable for automatic extraction of multidisciplinary scientific entities using BERT-based neural models; 4) a delineated 3-step entity resolution procedure for human annotation of the scientific entities via encyclopedic entity linking and lexicographic word sense disambiguation; and 5) human evaluations of Babelfy-returned encyclopedic links and lexicographic senses for our entities. Our findings cumulatively indicate that human annotation and automatic learning of multidisciplinary scientific concepts as well as their semantic disambiguation in a wide-ranging setting such as STEM is reasonable.", "which has research problem ?", "Scientific entity extraction", 83.0, 111.0], ["Named entity recognition (NER) is used in many domains beyond the newswire text that comprises current gold-standard corpora. Recent work has used Wikipedia's link structure to automatically generate near gold-standard annotations. Until now, these resources have only been evaluated on newswire corpora or themselves. We present the first NER evaluation on a Wikipedia gold standard (WG) corpus. Our analysis of cross-corpus performance on WG shows that Wikipedia text may be a harder NER domain than newswire. 
We find that an automatic annotation of Wikipedia has high agreement with WG and, when used as training data, outperforms newswire models by up to 7.7%.", "which has research problem ?", "Named Entity Recognition", 0.0, 24.0], ["This paper investigates the impact of real exchange rate volatility on Turkey\u2019s exports to its most important trading partners using quarterly data for the period 1982 to 2001. Cointegration and error correction modeling approaches are applied, and estimates of the cointegrating relations are obtained using Johansen\u2019s multivariate procedure. Estimates of the short-run dynamics are obtained through the error correction technique. Our results indicate that exchange rate volatility has a significant positive effect on export volume in the long run. This result may indicate that firms operating in a small economy, like Turkey, have little option for dealing with increased exchange rate risk.", "which has research problem ?", "Exchange rate volatility", 43.0, 67.0], ["We propose a novel neural attention architecture to tackle machine comprehension tasks, such as answering Cloze-style queries with respect to a document. Unlike previous models, we do not collapse the query into a single vector, instead we deploy an iterative alternating attention mechanism that allows a fine-grained exploration of both the query and the document. Our model outperforms state-of-the-art baselines in standard machine comprehension benchmarks such as CNN news articles and the Children\u2019s Book Test (CBT) dataset.", "which has research problem ?", "Machine comprehension", 59.0, 80.0], ["Relation classification is an important NLP task to extract relations between entities. The state-of-the-art methods for relation classification are primarily based on Convolutional or Recurrent Neural Networks. Recently, the pre-trained BERT model achieves very successful results in many NLP classification / sequence labeling tasks. Relation classification differs from those tasks in that it relies on information of both the sentence and the two target entities. In this paper, we propose a model that both leverages the pre-trained BERT language model and incorporates information from the target entities to tackle the relation classification task. We locate the target entities and transfer the information through the pre-trained architecture and incorporate the corresponding encoding of the two entities. We achieve significant improvement over the state-of-the-art method on the SemEval-2010 task 8 relational dataset.", "which has research problem ?", "extract relations between entities", 52.0, 86.0], ["Although the ultimate objective of Linked Data is linking and integration, it is not currently evident how connected the current Linked Open Data (LOD) cloud is. In this article, we focus on methods, supported by special indexes and algorithms, for performing measurements related to the connectivity of more than two datasets that are useful in various tasks including (a) Dataset Discovery and Selection; (b) Object Coreference, i.e., for obtaining complete information about a set of entities, including provenance information; (c) Data Quality Assessment and Improvement, i.e., for assessing the connectivity between any set of datasets and monitoring their evolution over time, as well as for estimating data veracity; (d) Dataset Visualizations; and various other tasks. 
Since it would be prohibitively expensive to perform all these measurements in a na\u00efve way, in this article, we introduce indexes (and their construction algorithms) that can speed up such tasks. In brief, we introduce (i) a namespace-based prefix index, (ii) a sameAs catalog for computing the symmetric and transitive closure of the owl:sameAs relationships encountered in the datasets, (iii) a semantics-aware element index (that exploits the aforementioned indexes), and, finally, (iv) two lattice-based incremental algorithms for speeding up the computation of the intersection of URIs of any set of datasets. For enhancing scalability, we propose parallel index construction algorithms and parallel lattice-based incremental algorithms, we evaluate the achieved speedup using either a single machine or a cluster of machines, and we provide insights regarding the factors that affect efficiency. Finally, we report measurements about the connectivity of the (billion triples-sized) LOD cloud that have never been carried out so far.", "which has research problem ?", "Data Quality", 535.0, 547.0], ["In this paper, we describe HeidelTime, a system for the extraction and normalization of temporal expressions. HeidelTime is a rule-based system mainly using regular expression patterns for the extraction of temporal expressions and knowledge resources as well as linguistic clues for their normalization. In the TempEval-2 challenge, HeidelTime achieved the highest F-Score (86%) for the extraction and the best results in assigning the correct value attribute, i.e., in understanding the semantics of the temporal expressions.", "which has research problem ?", "extraction and normalization of temporal expressions", 56.0, 108.0], ["Distributional Reinforcement Learning (RL) differs from traditional RL in that, rather than the expectation of total returns, it estimates distributions and has achieved state-of-the-art performance on Atari Games. The key challenge in practical distributional RL algorithms lies in how to parameterize estimated distributions so as to better approximate the true continuous distribution. Existing distributional RL algorithms parameterize either the probability side or the return value side of the distribution function, leaving the other side uniformly fixed as in C51, QR-DQN or randomly sampled as in IQN. In this paper, we propose fully parameterized quantile function that parameterizes both the quantile fraction axis (i.e., the x-axis) and the value axis (i.e., y-axis) for distributional RL. Our algorithm contains a fraction proposal network that generates a discrete set of quantile fractions and a quantile value network that gives corresponding quantile values. The two networks are jointly trained to find the best approximation of the true distribution. Experiments on 55 Atari Games show that our algorithm significantly outperforms existing distributional RL algorithms and creates a new record for the Atari Learning Environment for non-distributed agents.", "which has research problem ?", "Atari Games", 202.0, 213.0], ["The paper examines limitations that restrict the potential benefits from the use of Enterprise Resource Planning (ERP) systems in business firms. In the first part we discuss a limitation that arises from the strategic decision of top managers for mergers, acquisitions and divestitures as well as outsourcing. Managers tend to treat their companies like component-based business units, which are to be arranged and re-arranged to yet higher market values. 
Outsourcing of in-house activities to suppliers means disintegrating processes and information. Such consequences of strategic business decisions impose severe restrictions on the extent to which business organizations can benefit from ERP systems. The second part of the paper reflects upon the possibility of embedding best practice business processes in ERP systems. We critically review the process of capturing and transferring best practices with a particular focus on the context-dependence and nature of IT innovations.", "which has research problem ?", "Enterprise resource planning", 84.0, 112.0], ["We consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. In this work, we apply a deep learning approach to solve this problem, which avoids time-consuming hand-design of features. This presents two main challenges. First, we need to evaluate a huge number of candidate grasps. In order to make detection fast and robust, we present a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second. The first network has fewer features, is faster to run, and can effectively prune out unlikely candidate grasps. The second, with more features, is slower but has to run only on the top few detections. Second, we need to handle multimodal inputs effectively, for which we present a method that applies structured regularization on the weights based on multimodal group regularization. We show that our method improves performance on an RGBD robotic grasping dataset, and can be used to successfully execute grasps on two different robotic platforms.", "which has research problem ?", "Robotic Grasping", 940.0, 956.0], ["State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing. In this paper, we introduce a novel neural network architecture that benefits from both word- and character-level representations automatically, by using a combination of bidirectional LSTM, CNN and CRF. Our system is truly end-to-end, requiring no feature engineering or data pre-processing, thus making it applicable to a wide range of sequence labeling tasks. We evaluate our system on two data sets for two sequence labeling tasks --- Penn Treebank WSJ corpus for part-of-speech (POS) tagging and CoNLL 2003 corpus for named entity recognition (NER). We obtain state-of-the-art performance on both datasets --- 97.55\\% accuracy for POS tagging and 91.21\\% F1 for NER.", "which has research problem ?", "Sequence labeling", 17.0, 34.0], ["Petroleum hydrocarbons contamination of soil, sediments and marine environment associated with the inadvertent discharges of petroleum\u2013derived chemical wastes and petroleum hydrocarbons associated with spillage and other sources into the environment often pose harmful effects on human health and the natural environment, and have negative socio\u2013economic impacts in the oil\u2013producing host communities. In practice, plants and microbes have played a major role in microbial transformation and growth\u2013linked mineralization of petroleum hydrocarbons in contaminated soils and/or sediments over the past years. Bioremediation strategies have been recognized as an environmentally friendly and cost\u2013effective alternative in comparison with the traditional physico-chemical approaches for the restoration and reclamation of contaminated sites. 
The success of any plant\u2013based remediation strategy depends on the interaction of plants with rhizospheric microbial populations in the surrounding soil medium and the organic contaminant. Effective understanding of the fate and behaviour of organic contaminants in the soil can help determine the persistence of the contaminant in the terrestrial environment, promote the success of any bioremediation approach and help develop high\u2013level risk mitigation strategies. In this review paper, we provide a clear insight into the role of plants and microbes in the microbial degradation of petroleum hydrocarbons in contaminated soil that have emerged from the growing body of bioremediation research and its applications in practice. In addition, plant\u2013microbe interactions have been discussed with respect to biodegradation of petroleum hydrocarbons and these could provide a better understanding of some important factors necessary for development of in situ bioremediation strategies for risk mitigation in petroleum hydrocarbon\u2013contaminated soil.", "which has research problem ?", "Petroleum hydrocarbons contamination of soil, sediments and marine environment associated with the inadvertent discharges of petroleum\u2013derived chemical wastes and petroleum hydrocarbons associated with spillage and other sources into the environment often pose harmful effects on human health and the natural environment, and have negative socio\u2013economic impacts in the oil\u2013producing host communities. ", 0.0, 402.0], ["The adoption and implementation of Enterprise Resource Planning (ERP) systems is a challenging and expensive task that not only requires rigorous efforts but also demands a detailed analysis of the factors that are critical to the adoption or implementation of ERP systems. Many efforts have been made to identify such influential factors for ERP; however, they are not filtered comprehensively in terms of the different perspectives. This paper focuses on the ERP critical success factors from five different perspectives such as: stakeholders; process; technology; organisation; and project. Results from the literature review are presented and 19 such factors are identified that are imperative for a successful ERP implementation, which are listed in order of their importance. Considering these factors can realize several benefits such as reducing costs and saving time or extra effort.", "which has research problem ?", "Enterprise resource planning", 35.0, 63.0], ["Deep reinforcement learning, applied to vision-based problems like Atari games, maps pixels directly to actions; internally, the deep neural network bears the responsibility of both extracting useful information and making decisions based on it. By separating the image processing from decision-making, one could better understand the complexity of each task, as well as potentially find smaller policy representations that are easier for humans to understand and may generalize better. To this end, we propose a new method for learning policies and compact state representations separately but simultaneously for policy approximation in reinforcement learning. 
State representations are generated by an encoder based on two novel algorithms: Increasing Dictionary Vector Quantization makes the encoder capable of growing its dictionary size over time, to address new observations as they appear in an open-ended online-learning context; Direct Residuals Sparse Coding encodes observations by disregarding reconstruction error minimization, and aiming instead for highest information inclusion. The encoder autonomously selects observations online to train on, in order to maximize code sparsity. As the dictionary size increases, the encoder produces increasingly larger inputs for the neural network: this is addressed by a variation of the Exponential Natural Evolution Strategies algorithm which adapts its probability distribution dimensionality along the run. We test our system on a selection of Atari games using tiny neural networks of only 6 to 18 neurons (depending on the game's controls). These are still capable of achieving results comparable---and occasionally superior---to state-of-the-art techniques which use two orders of magnitude more neurons.", "which has research problem ?", "Atari Games", 67.0, 78.0], ["Despite significant progress in natural language processing, machine learning models require substantial expert-annotated training data to perform well in tasks such as named entity recognition (NER) and entity relations extraction. Furthermore, NER is often more complicated when working with scientific text. For example, in polymer science, chemical structure may be encoded using nonstandard naming conventions, the same concept can be expressed using many different terms (synonymy), and authors may refer to polymers with ad-hoc labels. These challenges, which are not unique to polymer science, make it difficult to generate training data, as specialized skills are needed to label text correctly. We have previously designed polyNER, a semi-automated system for efficient identification of scientific entities in text. PolyNER applies word embedding models to generate entity-rich corpora for productive expert labeling, and then uses the resulting labeled data to bootstrap a context-based classifier. PolyNER facilitates a labeling process that is otherwise tedious and expensive. Here, we use active learning to efficiently obtain more annotations from experts and improve performance. Our approach requires just five hours of expert time to achieve discrimination capacity comparable to that of a state-of-the-art chemical NER toolkit.", "which has research problem ?", "Active Learning", 1103.0, 1118.0], ["The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents\u2019 circumstances, such as a spouse\u2019s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. 
Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.", "which has research problem ?", "research", 775.0, 783.0], ["In contrast to vehicle routing problems, little work has been done in ship routing and scheduling, although large benefits may be expected from improving this scheduling process. We will present a real ship planning problem, which is a combined inventory management problem and a routing problem with time windows. A fleet of ships transports a single product (ammonia) between production and consumption harbors. The quantities loaded and discharged are determined by the production rates of the harbors, possible stock levels, and the actual ship visiting the harbor. We describe the real problem and the underlying mathematical model. To decompose this model, we discuss some model adjustments. Then, the problem can be solved by a Dantzig-Wolfe decomposition approach including both ship routing subproblems and inventory management subproblems. The overall problem is solved by branch-and-bound. Our computational results indicate that the proposed method works for the real planning problem.", "which has research problem ?", "Combined inventory management", 236.0, 265.0], ["Clostridium difficile infection is usually associated with antibiotic therapy and is almost always limited to the colonic mucosa. Small bowel enteritis is rare: only 9 cases have been previously cited in the literature. This report describes a case of C. difficile small bowel enteritis that occurred in a patient after total colectomy and reviews the 9 previously reported cases of C. difficile enteritis.", "which has research problem ?", "Clostridium difficile infection", 0.0, 31.0], ["This paper introduces a new neural structure called FusionNet, which extends existing attention approaches from three perspectives. First, it puts forward a novel concept of "history of word" to characterize attention information from the lowest word-level embedding up to the highest semantic-level representation. Second, it introduces an improved attention scoring function that better utilizes the "history of word" concept. Third, it proposes a fully-aware multi-level attention mechanism to capture the complete information in one text (such as a question) and exploit it in its counterpart (such as context or passage) layer by layer. We apply FusionNet to the Stanford Question Answering Dataset (SQuAD) and it achieves the first position for both single and ensemble model on the official SQuAD leaderboard at the time of writing (Oct. 4th, 2017). Meanwhile, we verify the generalization of FusionNet with two adversarial SQuAD datasets and it sets up the new state-of-the-art on both datasets: on AddSent, FusionNet increases the best F1 metric from 46.6% to 51.4%; on AddOneSent, FusionNet boosts the best F1 metric from 56.0% to 60.7%.", "which has research problem ?", "Question Answering", 677.0, 695.0], ["Abstract. The core version of the Norwegian Climate Center's Earth System Model, named NorESM1-M, is presented. The NorESM family of models are based on the Community Climate System Model version 4 (CCSM4) of the University Corporation for Atmospheric Research, but differs from the latter by, in particular, an isopycnic coordinate ocean model and advanced chemistry\u2013aerosol\u2013cloud\u2013radiation interaction schemes. 
NorESM1-M has a horizontal resolution of approximately 2\u00b0 for the atmosphere and land components and 1\u00b0 for the ocean and ice components. NorESM is also available in a lower resolution version (NorESM1-L) and a version that includes prognostic biogeochemical cycling (NorESM1-ME). The latter two model configurations are not part of this paper. Here, a first-order assessment of the model stability, the mean model state and the internal variability based on the model experiments made available to CMIP5 are presented. Further analysis of the model performance is provided in an accompanying paper (Iversen et al., 2013), presenting the corresponding climate response and scenario projections made with NorESM1-M.", "which has research problem ?", "CMIP5", 912.0, 917.0], ["Knowledge graph embedding models have gained significant attention in AI research. The aim of knowledge graph embedding is to embed the graphs into a vector space in which the structure of the graph is preserved. Recent works have shown that the inclusion of background knowledge, such as logical rules, can improve the performance of embeddings in downstream machine learning tasks. However, so far, most existing models do not allow the inclusion of rules. We address the challenge of including rules and present a new neural-based embedding model (LogicENN). We prove that LogicENN can learn every ground truth of encoded rules in a knowledge graph. To the best of our knowledge, this has not been proved so far for the neural-based family of embedding models. Moreover, we derive formulae for the inclusion of various rules, including (anti-)symmetric, inverse, irreflexive and transitive, implication, composition, equivalence, and negation. Our formulation allows avoiding grounding for implication and equivalence relations. Our experiments show that LogicENN outperforms the existing models in link prediction.", "which has research problem ?", "Knowledge Graph Embedding", 0.0, 25.0], ["Abstract Background Undifferentiated embryonal sarcoma (UES) of liver is a rare malignant neoplasm, which affects mostly the pediatric population accounting for 13% of pediatric hepatic malignancies; a few cases have been reported in adults. Case presentation We report a case of undifferentiated embryonal sarcoma of the liver in a 20-year-old Caucasian male. The patient was referred to us for further investigation after a laparotomy in a district hospital for spontaneous abdominal hemorrhage, which was due to a liver mass. After a thorough evaluation with computed tomography scan and magnetic resonance imaging of the liver and taking into consideration the previous history of the patient, it was decided to surgically explore the patient. Resection of hepatic segments I\u2013IV and VIII was performed. The patient developed disseminated intravascular coagulation one day after the surgery and died the next day. Conclusion It is a rare, highly malignant hepatic neoplasm, affecting almost exclusively the pediatric population. The prognosis is poor but recent evidence has shown that long-term survival is possible after complete surgical resection with or without postoperative chemotherapy.", "which has research problem ?", "Embryonal sarcoma of the liver", 296.0, 326.0], ["Motivation: Coreference resolution, the process of identifying different mentions of an entity, is a very important component in a text-mining system. 
Compared with the work in news articles, the existing study of coreference resolution in biomedical texts is quite preliminary by only focusing on specific types of anaphors like pronouns or definite noun phrases, using heuristic methods, and running on small data sets. Therefore, there is a need for an in-depth exploration of this task in the biomedical domain. Results: In this article, we presented a learning-based approach to coreference resolution in the biomedical domain. We made three contributions in our study. Firstly, we annotated a large scale coreference corpus, MedCo, which consists of 1,999 medline abstracts in the GENIA data set. Secondly, we proposed a detailed framework for the coreference resolution task, in which we augmented the traditional learning model by incorporating non-anaphors into training. Lastly, we explored various sources of knowledge for coreference resolution, particularly, those that can deal with the complexity of biomedical texts. The evaluation on the MedCo corpus showed promising results. Our coreference resolution system achieved a high precision of 85.2% with a reasonable recall of 65.3%, obtaining an F-measure of 73.9%. The results also suggested that our augmented learning model significantly boosted precision (up to 24.0%) without much loss in recall (less than 5%), and brought a gain of over 8% in F-measure.", "which has research problem ?", "Coreference Resolution", 12.0, 34.0], ["Most existing methods determine relation types only after all the entities have been recognized, thus the interaction between relation types and entity mentions is not fully modeled. This paper presents a novel paradigm to deal with relation extraction by regarding the related entities as the arguments of a relation. We apply a hierarchical reinforcement learning (HRL) framework in this paradigm to enhance the interaction between entity mentions and relation types. The whole extraction process is decomposed into a hierarchy of two-level RL policies for relation detection and entity extraction respectively, so that it is more feasible and natural to deal with overlapping relations. Our model was evaluated on public datasets collected via distant supervision, and results show that it gains better performance than existing methods and is more powerful for extracting overlapping relations.", "which has research problem ?", "Relation Extraction", 241.0, 260.0], ["A concept guided by the ISO 37120 standard for city services and quality of life is suggested as a unified framework for smart city dashboards. The slow (annual, quarterly, or monthly) ISO 37120 indicators are enhanced and complemented with more detailed and person-centric indicators that can further accelerate the transition toward smart cities. The architecture supports three tasks: acquire and manage data from heterogeneous sensors; process data originated from heterogeneous sources (sensors, OpenData, social data, blogs, news, and so on); and implement such collection and processing on the cloud. A prototype application based on the proposed architecture concept is developed for the city of Skopje, Macedonia. This article is part of a special issue on smart cities.", "which has research problem ?", "Standard for city services", 34.0, 60.0], ["Biomedical research is growing at such an exponential pace that scientists, researchers, and practitioners are no longer able to cope with the amount of published literature in the domain. 
The knowledge presented in the literature needs to be systematized in such a way that claims and hypotheses can be easily found, accessed, and validated. Knowledge graphs can provide such a framework for semantic knowledge representation from literature. However, in order to build a knowledge graph, it is necessary to extract knowledge as relationships between biomedical entities and normalize both entities and relationship types. In this paper, we present and compare a few rule-based and machine learning-based (Naive Bayes, Random Forests as examples of traditional machine learning methods and DistilBERT and T5-based models as examples of modern deep learning transformers) methods for scalable relationship extraction from biomedical literature, and for the integration into the knowledge graphs. We examine how resilient are these various methods to unbalanced and fairly small datasets, showing that transformer-based models handle well both small datasets, due to pre-training on large C4 dataset, as well as unbalanced data. The best performing model was the DistilBERT-based model fine-tuned on balanced data, with a reported F1-score of 0.89.", "which has research problem ?", "Relationship extraction", 891.0, 914.0], ["This paper describes the Shared Task on Fine-grained Event Classification in News-like Text Snippets. The Shared Task is divided into three sub-tasks: (a) classification of text snippets reporting socio-political events (25 classes) for which a vast amount of training data exists, although exhibiting different structure and style vis-a-vis test data, (b) enhancement to a generalized zero-shot learning problem, where 3 additional event types were introduced in advance, but without any training data (\u2018unseen\u2019 classes), and (c) further extension, which introduced 2 additional event types, announced shortly prior to the evaluation phase. The reported Shared Task focuses on classification of events in English texts and is organized as part of the Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021), co-located with the ACL-IJCNLP 2021 Conference. Four teams participated in the task. Best performing systems for the three aforementioned sub-tasks achieved 83.9%, 79.7% and 77.1% weighted F1 scores respectively.", "which has research problem ?", "Fine-grained Event Classification", 40.0, 73.0], ["Abstract The essential oil obtained by hydrodistillation from the aerial parts of Artemisia herba-alba Asso growing wild in M'sila-Algeria, was investigated using both capillary GC and GC/MS techniques. The oil yield was 1.02% based on dry weight. Sixty-eight components amounting to 94.7% of the oil were identified, 33 of them being reported for the first time in Algerian A. herba-alba oil and 21 of these components have not been previously reported in A. herba-alba oils. The oil contained camphor (19.4%), trans-pinocarveol (16.9%), chrysanthenone (15.8%) and \u03b2-thujone (15%) as major components. Monoterpenoids are the main components (86.1%), and the irregular monoterpenes fraction represented a 3.1% yield.", "which has research problem ?", "Oil", 23.0, 26.0], ["Extracting information from full documents is an important problem in many domains, but most previous work focuses on identifying relationships within a sentence or a paragraph. 
It is challenging to create a large-scale information extraction (IE) dataset at the document level since it requires an understanding of the whole document to annotate entities and their document-level relationships that usually span beyond sentences or even sections. In this paper, we introduce SciREX, a document level IE dataset that encompasses multiple IE tasks, including salient entity identification and document level N-ary relation identification from scientific articles. We annotate our dataset by integrating automatic and human annotations, leveraging existing scientific knowledge resources. We develop a neural model as a strong baseline that extends previous state-of-the-art IE models to document-level IE. Analyzing the model performance shows a significant gap between human performance and current baselines, inviting the community to use our dataset as a challenge to develop document-level IE models. Our data and code are publicly available at https://github.com/allenai/SciREX.", "which has research problem ?", "N-ary relation identification from scientific articles", 605.0, 659.0], ["Social and semantic web can be complementary approaches searching web resources. This cooperative approach enables a semantic search engine to find accurate results and annotate web resources. This work develops the components of a Social-Semantic search architecture proposed by the authors to find open educational resources (OER). By means of metadata enrichment and logic inference, OER consumers get more precise results from general search engines. The Search prototype has been applied to find OER related with computers and engineering in a domain provided by OpenCourseWare materials from two universities, MIT and UTPL. The semantic search answers reasonably well the queries collected.", "which has research problem ?", "Semantic Search", 121.0, 136.0], ["Automatic extraction of biological network information is one of the most desired and most complex tasks in biological and medical text mining. Track 4 at BioCreative V attempts to approach this complexity using fragments of large-scale manually curated biological networks, represented in Biological Expression Language (BEL), as training and test data. BEL is an advanced knowledge representation format which has been designed to be both human readable and machine processable. The specific goal of track 4 was to evaluate text mining systems capable of automatically constructing BEL statements from given evidence text, and of retrieving evidence text for given BEL statements. Given the complexity of the task, we designed an evaluation methodology which gives credit to partially correct statements. We identified various levels of information expressed by BEL statements, such as entities, functions, relations, and introduced an evaluation framework which rewards systems capable of delivering useful BEL fragments at each of these levels. The aim of this evaluation method is to help identify the characteristics of the systems which, if combined, would be most useful for achieving the overall goal of automatically constructing causal biological networks from text.", "which has research problem ?", "automatically constructing causal biological networks from text", 1213.0, 1276.0], ["The \u201cmiddle-income trap\u201d is the phenomenon of hitherto rapidly growing economies stagnating at middle-income levels and failing to graduate into the ranks of high-income countries. 
In this study we examine the middle-income trap as a special case of growth slowdowns, which are identified as large sudden and sustained deviations from the growth path predicted by a basic conditional convergence framework. We then examine their determinants by means of probit regressions, looking into the role of institutions, demography, infrastructure, the macroeconomic environment, output structure and trade structure. Two variants of Bayesian Model Averaging are used as robustness checks. The results\u2014including some that indeed speak to the special status of middle-income countries\u2014are then used to derive policy implications, with a particular focus on Asian economies.", "which has research problem ?", "Middle-Income Trap", 5.0, 23.0], ["Transforming natural language questions into formal queries is an integral task in Question Answering (QA) systems. QA systems built on knowledge graphs like DBpedia, require a step after natural language processing for linking words, specifically including named entities and relations, to their corresponding entities in a knowledge graph. To achieve this task, several approaches rely on background knowledge bases containing semantically-typed relations, e.g., PATTY, for an extra disambiguation step. Two major factors may affect the performance of relation linking approaches whenever background knowledge bases are accessed: a) limited availability of such semantic knowledge sources, and b) lack of a systematic approach on how to maximize the benefits of the collected knowledge. We tackle this problem and devise SIBKB, a semantic-based index able to capture knowledge encoded on background knowledge bases like PATTY. SIBKB represents a background knowledge base as a bi-partite and a dynamic index over the relation patterns included in the knowledge base. Moreover, we develop a relation linking component able to exploit SIBKB features. The benefits of SIBKB are empirically studied on existing QA benchmarks and observed results suggest that SIBKB is able to enhance the accuracy of relation linking by up to three times.", "which has research problem ?", "Question Answering", 83.0, 101.0], ["Chemical compositions of 16 Artemisia herba-alba oil samples harvested in eight East Moroccan locations were investigated by GC and GC/MS. Chemical variability of the A. herba-alba oils is also discussed using statistical analysis. Detailed analysis of the essential oils led to the identification of 52 components amounting to 80.5\u201398.6 % of the total oil. The investigated chemical compositions showed significant qualitative and quantitative differences. According to their major components (camphor, chrysanthenone, and \u03b1- and \u03b2-thujone), three main groups of essential oils were found. This study also found regional specificity of the major components.", "which has research problem ?", "Oil", 49.0, 52.0], ["Coastal safety may be influenced by climate change, as changes in extreme surge levels and wave extremes may increase the vulnerability of dunes and other coastal defenses. In the North Sea, an area already prone to severe flooding, these high surge levels and waves are generated by low atmospheric pressure and severe wind speeds during storm events. As a result of the geometry of the North Sea, not only the maximum wind speed is relevant, but also wind direction. Climate change could change maximum wind conditions, with potentially negative effects for coastal safety. 
Here, we use an ensemble of 12 Coupled Model Intercomparison Project Phase 5 (CMIP5) General Circulation Models (GCMs) and diagnose the effect of two climate scenarios (rcp4.5 and rcp8.5) on annual maximum wind speed, wind speeds with lower return frequencies, and the direction of these annual maximum wind speeds. The 12 selected CMIP5 models do not project changes in annual maximum wind speed and in wind speeds with lower return frequencies; however, we do find an indication that the annual extreme wind events are coming more often from western directions. Our results are in line with the studies based on CMIP3 models and do not confirm the statement based on some reanalysis studies that there is a climate\u2010change\u2010related upward trend in storminess in the North Sea area.", "which has research problem ?", "climate change", 36.0, 50.0], ["In traditional assembly lines, it is reasonable to assume that task execution times are the same for each worker. However, in sheltered work centres for disabled this assumption is not valid: some workers may execute some tasks considerably slower or even be incapable of executing them. Worker heterogeneity leads to a problem called the assembly line worker assignment and balancing problem (ALWABP). For a fixed number of workers the problem is to maximize the production rate of an assembly line by assigning workers to stations and tasks to workers, while satisfying precedence constraints between the tasks. This paper introduces new heuristic and exact methods to solve this problem. We present a new MIP model, propose a novel heuristic algorithm based on beam search, as well as a task-oriented branch-and-bound procedure which uses new reduction rules and lower bounds for solving the problem. Extensive computational tests on a large set of instances show that these methods are effective and improve over existing ones.", "which has research problem ?", "Assembly line worker assignment and balancing problem", 339.0, 392.0], ["Load frequency control has been used for decades in power systems. Traditionally, this has been a centralized control by area with communication over a dedicated and closed network. New regulatory guidelines allow for competitive markets to supply this load frequency control. In order to allow an effective market operation, an open communication infrastructure is needed to support an increasing complex system of controls. While such a system has great advantage in terms of cost and reliability, the possibility of communication signal delays and other problems must be carefully analyzed. This paper presents a load frequency control method based on linear matrix inequalities. The primary aim is to find a robust controller that can ensure good performance despite indeterminate delays and other problems in the communication network.", "which has research problem ?", "Power systems", 52.0, 65.0], ["We introduce multiplicative LSTM (mLSTM), a recurrent neural network architecture for sequence modelling that combines the long short-term memory (LSTM) and multiplicative recurrent neural network architectures. mLSTM is characterised by its ability to have different recurrent transition functions for each possible input, which we argue makes it more expressive for autoregressive density estimation. We demonstrate empirically that mLSTM outperforms standard LSTM and its deep variants for a range of character level language modelling tasks. 
In this version of the paper, we regularise mLSTM to achieve 1.27 bits/char on text8 and 1.24 bits/char on Hutter Prize. We also apply a purely byte-level mLSTM on the WikiText-2 dataset to achieve a character level entropy of 1.26 bits/char, corresponding to a word level perplexity of 88.8, which is comparable to word level LSTMs regularised in similar ways on the same task.", "which has research problem ?", "Language Modelling", 520.0, 538.0], ["4OASIS3.2\u20135 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations shows a warming drift of 0.35 \u00b0C in ACCESS1.0 and 0.04 \u00b0C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.", "which has research problem ?", "CMIP5", 277.0, 282.0], ["In recent years, different Web knowledge graphs, both free and commercial, have been created. While Google coined the term "Knowledge Graph" in 2012, there are also a few openly available knowledge graphs, with DBpedia, YAGO, and Freebase being among the most prominent ones. Those graphs are often constructed from semi-structured knowledge, such as Wikipedia, or harvested from the web with a combination of statistical and linguistic methods. The results are large-scale knowledge graphs that try to make a good trade-off between completeness and correctness. In order to further increase the utility of such knowledge graphs, various refinement methods have been proposed, which try to infer and add missing knowledge to the graph, or identify erroneous pieces of information. In this article, we provide a survey of such knowledge graph refinement approaches, with a dual look at both the methods being proposed as well as the evaluation methodologies used.", "which has research problem ?", "knowledge graph refinement approaches", 829.0, 866.0], ["Named entity recognition and relation extraction are two important fundamental problems. Joint learning algorithms have been proposed to solve both tasks simultaneously, and many of them cast the joint task as a table-filling problem. 
However, they typically focused on learning a single encoder (usually learning representation in the form of a table) to capture information required for both tasks within the same space. We argue that it can be beneficial to design two distinct encoders to capture such two different types of information in the learning process. In this work, we propose the novel table-sequence encoders where two different encoders -- a table encoder and a sequence encoder -- are designed to help each other in the representation learning process. Our experiments confirm the advantages of having two encoders over one encoder. On several standard datasets, our model shows significant improvements over existing approaches.", "which has research problem ?", "Relation Extraction", 29.0, 48.0], ["Knowledge graphs in manufacturing and production aim to make production lines more efficient and flexible with higher quality output. This makes knowledge graphs attractive for companies to reach Industry 4.0 goals. However, existing research in the field is quite preliminary, and more research effort on analyzing how knowledge graphs can be applied in the field of manufacturing and production is needed. Therefore, we have conducted a systematic literature review as an attempt to characterize the state-of-the-art in this field, i.e., by identifying existing research and by identifying gaps and opportunities for further research. We have focused on finding the primary studies in the existing literature, which were classified and analyzed according to four criteria: bibliometric key facts, research type facets, knowledge graph characteristics, and application scenarios. Besides, an evaluation of the primary studies has also been carried out to gain deeper insights in terms of methodology, empirical evidence, and relevance. As a result, we can offer a complete picture of the domain, which includes such interesting aspects as the fact that knowledge fusion is currently the main use case for knowledge graphs, that empirical research and industrial application are still missing to a large extent, that graph embeddings are not fully exploited, and that technical literature is fast-growing but still seems to be far from its peak.", "which has research problem ?", " finding the primary studies in the existing literature", 655.0, 710.0], ["Classification modeling (a.k.a. supervised learning) is an extremely useful analytical technique for developing predictive and forecasting applications. The explosive growth in data warehousing and internet usage has made large amounts of data potentially available for developing classification models. For example, natural language text is widely available in many forms (e.g., electronic mail, news articles, reports, and web page contents). Categorization of data is a common activity which can be automated to a large extent using supervised learning methods. Examples of this include routing of electronic mail, satellite image classification, and character recognition. However, these tasks require labeled data sets of sufficiently high quality with adequate instances for training the predictive models. Much of the on-line data, particularly the unstructured variety (e.g., text), is unlabeled. Labeling is usually an expensive manual process done by domain experts. Active learning is an approach to solving this problem and works by identifying a subset of the data that needs to be labeled and uses this subset to generate classification models. 
We present an active learning method that uses adaptive resampling in a natural way to significantly reduce the size of the required labeled set and generates a classification model that achieves the high accuracies possible with current adaptive resampling methods.", "which has research problem ?", "Active learning", 971.0, 986.0], ["Within this article, the strengths and weaknesses of crowdsourcing for idea generation and idea selection in the context of smart city innovation are investigated. First, smart cities are defined next to similar but different concepts such as digital cities, intelligent cities or ubiquitous cities. It is argued that the smart city-concept is in fact a more user-centered evolution of the other city-concepts which seem to be more technological deterministic in nature. The principles of crowdsourcing are explained and the different manifestations are demonstrated. By means of a case study, the generation of ideas for innovative uses of ICT for city innovation by citizens through an online platform is studied, as well as the selection process. For this selection, a crowdsourcing solution is compared to a selection made by external experts. The comparison of both indicates that using the crowd as gatekeeper and selector of innovative ideas yields a long list with high user benefits. However, the generation of ideas in itself appeared not to deliver extremely innovative ideas. Crowdsourcing thus appears to be a useful and effective tool in the context of smart city innovation, but should be thoughtfully used and combined with other user involvement approaches and within broader frameworks such as Living Labs.", "which has research problem ?", "Smart cities", 171.0, 183.0], ["To improve designs of e-learning materials, it is necessary to know which word or figure a learner felt "difficult" in the materials. In this pilot study, we measured electroencephalography (EEG) and eye gaze data of learners and analyzed to estimate which area they had difficulty to learn. The developed system realized simultaneous measurements of physiological data and subjective evaluations during learning. Using this system, we observed specific EEG activity in difficult pages. Integrating of eye gaze and EEG measurements raised a possibility to determine where a learner felt "difficult" in a page of learning materials. From these results, we could suggest that the multimodal measurements of EEG and eye gaze would lead to effective improvement of learning materials. For future study, more data collection using various materials and learners with different backgrounds is necessary. This study could lead to establishing a method to improve e-learning materials based on learners' mental states.", "which has research problem ?", "Establishing a method to improve e-learning materials based on learners' mental states.", NaN, NaN], ["A crucial step toward the goal of automatic extraction of propositional information from natural language text is the identification of semantic relations between constituents in sentences. We examine the problem of distinguishing among seven relation types that can occur between the entities "treatment" and "disease" in bioscience text, and the problem of identifying such entities. 
We compare five generative graphical models and a neural network, using lexical, syntactic, and semantic features, finding that the latter help achieve high classification accuracy.", "which has research problem ?", "Identification of semantic relations", 118.0, 154.0], ["ABSTRACT\u2010 In 1974 we examined 30 patients 0.5\u201314 (mean 5) years after acute unilateral optic neuritis (ON), when no clinical signs of multiple sclerosis (MS) were discernable. 11 of the patients had oligoclonal bands in the cerebrospinal fluid (CSF). Re\u2010examination after an additional 6 years revealed that 9 of the 11 ON patients with oligoclonal bands (but only 1 of the 19 without this CSF abnormality) had developed MS. The occurrence of oligoclonal bands in CSF in a patient with ON is \u2010 within the limits of the present observation time \u2010 accompanied by a significantly increased risk of the future development of MS. Recurrent ON also occurred significantly more often in those ON patients who later developed MS.", "which has research problem ?", "Multiple sclerosis", 134.0, 152.0], ["Open Educational Resources (OER) are a direct reaction to knowledge privatization; they foment their exchange to the entire world with the aim of increasing the human intellectual capacity. In this document, we describe the commitment of Universidad T\u00e9cnica Particular de Loja (UTPL), Ecuador, in the promotion of open educational practices and resources and their impact in society and knowledge economy through the use of Social Software.", "which has research problem ?", "Open Education", NaN, NaN], ["Choosing a higher education course at university is not an easy task for students. A wide range of courses are offered by the individual universities whose delivery mode and entry requirements differ. A personalized recommendation system can be an effective way of suggesting the relevant courses to the prospective students. This paper introduces a novel approach that personalizes course recommendations that will match the individual needs of users. The proposed approach developed a framework of an ontology-based hybrid-filtering system called the ontology-based personalized course recommendation (OPCR). This approach aims to integrate the information from multiple sources based on the hierarchical ontology similarity with a view to enhancing the efficiency and the user satisfaction and to provide students with appropriate recommendations. The OPCR combines collaborative-based filtering with content-based filtering. It also considers familiar related concepts that are evident in the profiles of both the student and the course, determining the similarity between them. Furthermore, OPCR uses an ontology mapping technique, recommending jobs that will be available following the completion of each course. This method can enable students to gain a comprehensive knowledge of courses based on their relevance, using dynamic ontology mapping to link the course profiles and student profiles with job profiles. Results show that a filtering algorithm that uses hierarchically related concepts produces better outcomes compared to a filtering method that considers only keyword similarity. In addition, the quality of the recommendations is improved when the ontology similarity between the items\u2019 and the users\u2019 profiles were utilized. This approach, using a dynamic ontology mapping, is flexible and can be adapted to different domains. 
The proposed framework can be used to filter the items for both postgraduate courses and items from other domains.", "which has research problem ?", "Ontology mapping", 1109.0, 1125.0], ["This review summarizes research on enterprise resource planning (ERP) systems in small and medium-size enterprises (SMEs). Due to the close-to-saturation of ERP adoptions in large enterprises (LEs), ERP vendors now focus more on SMEs. Moreover, because of globalization, partnerships, value networks, and the huge information flow across and within SMEs nowadays, more and more SMEs are adopting ERP systems. Risks of adoption rely on the fact that SMEs have limited resources and specific characteristics that make their case different from LEs. The main focus of this article is to shed light on the areas that lack sufficient research within the ERP in SMEs domain, suggest future research avenues, as well as present the current research findings that could aid practitioners, suppliers, and SMEs when embarking on ERP projects.", "which has research problem ?", "Enterprise resource planning", 35.0, 63.0], ["Although it has become common to assess publications and researchers by means of their citation count (e.g., using the h-index), measuring the impact of scientific methods and datasets (e.g., using an \u201ch-index for datasets\u201d) has been performed only to a limited extent. This is not surprising because the usage information of methods and datasets is typically not explicitly provided by the authors, but hidden in a publication\u2019s text. In this paper, we propose an approach to identifying methods and datasets in texts that have actually been used by the authors. Our approach first recognizes datasets and methods in the text by means of a domain-specific named entity recognition method with minimal human interaction. It then classifies these mentions into used vs. non-used based on the textual contexts. The obtained labels are aggregated on the document level and integrated into the Microsoft Academic Knowledge Graph modeling publications\u2019 metadata. In experiments based on the Microsoft Academic Graph, we show that both method and dataset mentions can be identified and correctly classified with respect to their usage to a high degree. Overall, our approach facilitates method and dataset recommendation, enhanced paper recommendation, and scientific impact quantification. It can be extended in such a way that it can identify mentions of any entity type (e.g., task).", "which has research problem ?", "Domain-specific named entity recognition", 641.0, 681.0], ["Interaction testing offers a stable cost-benefit ratio in identifying faults. But in many testing scenarios, the entire test suite cannot be fully executed due to limited time or cost. In these situations, it is essential to take the importance of interactions into account and prioritize these tests. To tackle this issue, the biased covering array is proposed and the Weighted Density Algorithm (WDA) is developed. To find a better solution, in this paper we adopt ant colony optimization (ACO) to build this prioritized pairwise interaction test suite (PITS). In our research, we propose four concrete test generation algorithms based on Ant System, Ant System with Elitist, Ant Colony System and Max-Min Ant System respectively. We also implement these algorithms and apply them to two typical inputs and report experimental results. 
The results show the effectiveness of these algorithms.", "which has research problem ?", "Ant Colony Optimization", 467.0, 490.0], ["Neural network approaches to Named-Entity Recognition reduce the need for carefully hand-crafted features. While some features do remain in state-of-the-art systems, lexical features have been mostly discarded, with the exception of gazetteers. In this work, we show that this is unfair: lexical features are actually quite useful. We propose to embed words and entity types into a low-dimensional vector space we train from annotated data produced by distant supervision thanks to Wikipedia. From this, we compute \u2014 offline \u2014 a feature vector representing each word. When used with a vanilla recurrent neural network model, this representation yields substantial improvements. We establish a new state-of-the-art F1 score of 87.95 on ONTONOTES 5.0, while matching state-of-the-art performance with an F1 score of 91.73 on the over-studied CONLL-2003 dataset.", "which has research problem ?", "Neural network approaches to Named-Entity Recognition", 0.0, 53.0], ["In the last few years, several studies have found an inverted-U relationship between per capita income and environmental degradation. This relationship, known as the environmental Kuznets curve (EKC), suggests that environmental degradation increases in the early stages of growth, but it eventually decreases as income exceeds a threshold level. However, this paper investigates the relationship between per capita CO2 emission, growth economics and trade liberalization based on econometric techniques of unit root test, co-integration and a panel data set during the period 1960-1996 for BRICS countries. Data properties were analyzed to determine their stationarity using the LLC, IPS, ADF and PP unit root tests which indicated that the series are I(1). We find a cointegration relationship between per capita CO2 emission, growth economics and trade liberalization by applying Kao panel cointegration test. The evidence indicates that in the long-run trade liberalization has a positive significant impact on CO2 emissions and impact of trade liberalization on emissions growth depends on the level of income. Our findings suggest that there is a quadratic relationship between real GDP and CO2 emissions for the region as a whole. The estimated long-run coefficients of real GDP and its square satisfy the EKC hypothesis in all of the studied countries. Our estimation shows that the inflection point or optimal point real GDP per capita is about 5269.4 dollars. The results show that on average, sample countries are on the positive side of the inverted U curve. The turning points are very low in some cases and very high in other cases, hence providing poor evidence in support of the EKC hypothesis. Thus, our findings suggest that all BRICS countries need to sacrifice economic growth to decrease their emission levels", "which has research problem ?", "CO2 emissions", 1015.0, 1028.0], ["In this paper, we present the main findings and compare the results of SemEval-2020 Task 10, Emphasis Selection for Written Text in Visual Media. The goal of this shared task is to design automatic methods for emphasis selection, i.e. choosing candidates for emphasis in textual content to enable automated design assistance in authoring. The main focus is on short text instances for social media, with a variety of examples, from social media posts to inspirational quotes. 
Participants were asked to model emphasis using plain text with no additional context from the user or other design considerations. SemEval-2020 Emphasis Selection shared task attracted 197 participants in the early phase and a total of 31 teams made submissions to this task. The highest-ranked submission achieved 0.823 Match_m score. The analysis of systems submitted to the task indicates that BERT and RoBERTa were the most common choice of pre-trained models used, and part of speech tag (POS) was the most useful feature. Full results can be found on the task\u2019s website.", "which has research problem ?", "Emphasis Selection for Written Text", 93.0, 128.0], ["Large-scale events are always vulnerable to natural disasters and man-made chaos, which poses a great threat to crowd safety. Such events need an appropriate evacuation plan to alleviate the risk of casualties. We propose a modeling framework for large-scale evacuation of pedestrians during an emergency situation. The proposed framework presents optimal and safest path evacuation for a hypothetical large-scale crowd scenario. The main aim is to provide the safest and nearest evacuation path because during disastrous situations there is a possibility of exit gate blockade and directions of evacuees may have to be changed at run time. For this purpose run time diversions are given to evacuees to ensure their quick and safest exit. In this work, different evacuation algorithms are implemented and compared to determine the optimal solution in terms of evacuation time and crowd safety. The recommended framework incorporates Anylogic simulation environment to design complex spatial environment for large-scale pedestrians as agents.", "which has research problem ?", "crowd safety", 109.0, 121.0], ["Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task\u2014a result that approaches the human inter-annotator agreement (0.8875)\u2014and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system\u2019s ability to return real-time results: the average response times for each team\u2019s DNER and CID web service systems were 5.6 and 9.3 s, respectively. 
Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/", "which has research problem ?", "chemical-induced disease (CID) relation extraction", NaN, NaN], ["Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.", "which has research problem ?", "Data Publishing", 79.0, 94.0], ["QA TempEval shifts the goal of previous TempEvals away from an intrinsic evaluation methodology toward a more extrinsic goal of question answering. This evaluation requires systems to capture temporal information relevant to perform an end-user task, as opposed to corpus-based evaluation where all temporal information is equally important. Evaluation results show that the best automated TimeML annotations reach over 30% recall on questions with \u2018yes\u2019 answer and about 50% on easier questions with \u2018no\u2019 answers. Features that helped achieve better results are event coreference and a time expression reasoner.", "which has research problem ?", "QA TempEval", 0.0, 11.0], ["A statistical\u2010dynamical downscaling method is used to estimate future changes of wind energy output (Eout) of a benchmark wind turbine across Europe at the regional scale. With this aim, 22 global climate models (GCMs) of the Coupled Model Intercomparison Project Phase 5 (CMIP5) ensemble are considered. The downscaling method uses circulation weather types and regional climate modelling with the COSMO\u2010CLM model. 
Future projections are computed for two time periods (2021\u20132060 and 2061\u20132100) following two scenarios (RCP4.5 and RCP8.5). The CMIP5 ensemble mean response reveals a more likely than not increase of mean annual Eout over Northern and Central Europe and a likely decrease over Southern Europe. There is some uncertainty with respect to the magnitude and the sign of the changes. Higher robustness in future changes is observed for specific seasons. Except for the Mediterranean area, an ensemble mean increase of Eout is simulated for winter and a decrease for the summer season, resulting in a strong increase of the intra\u2010annual variability for most of Europe. The latter is, in particular, probable during the second half of the 21st century under the RCP8.5 scenario. In general, signals are stronger for 2061\u20132100 compared to 2021\u20132060 and for RCP8.5 compared to RCP4.5. Regarding changes of the inter\u2010annual variability of Eout for Central Europe, the future projections strongly vary between individual models and also between future periods and scenarios within single models. This study showed for an ensemble of 22 CMIP5 models that changes in the wind energy potentials over Europe may take place in future decades. However, due to the uncertainties detected in this research, further investigations with multi\u2010model ensembles are needed to provide a better quantification and understanding of the future changes.", "which has research problem ?", "multi\u2010model ensemble", NaN, NaN], ["We introduce the first end-to-end coreference resolution model and show that it significantly outperforms all previous work without using a syntactic parser or hand-engineered mention detector. The key idea is to directly consider all spans in a document as potential mentions and learn distributions over possible antecedents for each. The model computes span embeddings that combine context-dependent boundary representations with a head-finding attention mechanism. It is trained to maximize the marginal likelihood of gold antecedent spans from coreference clusters and is factored to enable aggressive pruning of potential mentions. Experiments demonstrate state-of-the-art performance, with a gain of 1.5 F1 on the OntoNotes benchmark and by 3.1 F1 using a 5-model ensemble, despite the fact that this is the first approach to be successfully trained with no external resources.", "which has research problem ?", "End-to-end coreference resolution", 23.0, 56.0], ["Organizations perceive ERP as a vital tool for organizational competition as it integrates dispersed organizational systems and enables flawless transactions and production. This review examines studies investigating Critical Success Factors (CSFs) in implementing Enterprise Resource Planning (ERP) systems. Keywords relating to the theme of this study were defined and used to search known Web engines and journal databases for studies on both implementing ERP systems per se and integrating ERP systems with other well-known systems (e.g., SCM, CRM) whose importance to business organizations and academia is acknowledged to work in a complementary fashion. A total of 341 articles were reviewed to address three main goals. This study structures previous research by presenting a comprehensive taxonomy of CSFs in the area of ERP. Second, it maps studies, identified through an exhaustive and comprehensive literature review, to different dimensions and facets of ERP system implementation. 
Third, it presents studies investigating CSFs in terms of a specific ERP lifecycle phase and across the entire ERP life cycle. This study not only reviews articles in which an ERP system is the sole or primary field of research, but also articles that refer to an integration of ERP systems and other popular systems (e.g., SCM, CRM). Finally, it provides a comprehensive bibliography of the articles published during this period that can serve as a guide for future research.", "which has research problem ?", "Enterprise resource planning", 265.0, 293.0], ["Model-driven game development (MDGD) is an emerging paradigm where models become first-order elements in game development, maintenance, and evolution. In this article, we present a first approach to 2D platform game prototyping automatization through the use of model-driven engineering (MDE). Platform-independent models (PIM) define the structure and the behavior of the games and a platform-specific model (PSM) describes the game control mapping. Automatic MOFscript transformations from these models generate the software prototype code in C++. As an example, Bubble Bobble has been prototyped in a few hours following the MDGD approach. The resulting code generation represents 93% of the game prototype.", "which has research problem ?", "Model-driven Game Development", 0.0, 29.0], ["Current state-of-the-art relation extraction methods typically rely on a set of lexical, syntactic, and semantic features, explicitly computed in a pre-processing step. Training feature extraction models requires additional annotated language resources, which severely restricts the applicability and portability of relation extraction to novel languages. Similarly, pre-processing introduces an additional source of error. To address these limitations, we introduce TRE, a Transformer for Relation Extraction, extending the OpenAI Generative Pre-trained Transformer [Radford et al., 2018]. Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform the relation classification and combines it with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions. TRE allows us to learn implicit linguistic features solely from plain text corpora by unsupervised pre-training, before fine-tuning the learned language representations on the relation extraction task. TRE obtains a new state-of-the-art result on the TACRED and SemEval 2010 Task 8 datasets, achieving a test F1 of 67.4 and 87.1, respectively. Furthermore, we observe a significant increase in sample efficiency. With only 20% of the training examples, TRE matches the performance of our baselines and our model trained from scratch on 100% of the TACRED dataset. We open-source our trained models, experiments, and source code.", "which has research problem ?", "Relation Extraction", 25.0, 44.0], ["In recent years, deep learning\u2019s revolutionary advances in speech recognition, image analysis, and natural language processing have gained significant attention. Deep learning technology has become a hotspot research field in artificial intelligence and has been applied to recommender systems. In contrast to traditional recommendation models, deep learning is able to effectively capture the non-linear and non-trivial user-item relationships and enables the codification of more complex abstractions as data representations in the higher layers. 
In this paper, we provide a comprehensive review of the related research contents of deep learning-based recommender systems. First, we introduce the basic terminologies and the background concepts of recommender systems and deep learning technology. Second, we describe the main current research on deep learning-based recommender systems. Third, we provide the possible research directions of deep learning-based recommender systems in the future. Finally, we conclude this paper.", "which has research problem ?", "Recommender Systems", 659.0, 678.0], ["We analyze the effects of monetary policy on economic activity in the proposed African monetary unions. Findings broadly show that: (1) but for financial efficiency in the EAMZ, monetary policy variables affect output neither in the short-run nor in the long-term and; (2) with the exception of financial size that impacts inflation in the EAMZ in the short-term, monetary policy variables generally have no effect on prices in the short-run. The WAMZ may not use policy instruments to offset adverse shocks to output by pursuing either an expansionary or a contractionary policy, while the EAMZ can do with the \u2018financial allocation efficiency\u2019 instrument. Policy implications are discussed.", "which has research problem ?", "African monetary unions", 79.0, 102.0], ["Neural architecture search (NAS) has emerged as a promising avenue for automatically designing task-specific neural networks. Existing NAS approaches require one complete search for each deployment specification of hardware or objective. This is a computationally impractical endeavor given the potentially large number of application scenarios. In this paper, we propose Neural Architecture Transfer (NAT) to overcome this limitation. NAT is designed to efficiently generate task-specific custom models that are competitive under multiple conflicting objectives. To realize this goal we learn task-specific supernets from which specialized subnets can be sampled without any additional training. The key to our approach is an integrated online transfer learning and many-objective evolutionary search procedure. A pre-trained supernet is iteratively adapted while simultaneously searching for task-specific subnets. We demonstrate the efficacy of NAT on 11 benchmark image classification tasks ranging from large-scale multi-class to small-scale fine-grained datasets. In all cases, including ImageNet, NATNets improve upon the state-of-the-art under mobile settings (\u2264 600M Multiply-Adds). Surprisingly, small-scale fine-grained datasets benefit the most from NAT. At the same time, the architecture search and transfer are orders of magnitude more efficient than existing NAS methods. Overall, experimental evaluation indicates that, across diverse image classification tasks and computational objectives, NAT is an appreciably more effective alternative to conventional transfer learning of fine-tuning weights of an existing network architecture learned on standard datasets. Code is available at https://github.com/human-analysis/neural-architecture-transfer.", "which has research problem ?", "Neural Architecture Search", 0.0, 26.0], ["Most question answering (QA) systems over Linked Data, i.e. Knowledge Graphs, approach the question answering task as a conversion from a natural language question to its corresponding SPARQL query. A common approach is to use query templates to generate SPARQL queries with slots that need to be filled. 
Using templates instead of running an extensive NLP pipeline or end-to-end model shifts the QA problem into a classification task, where the system needs to match the input question to the appropriate template. This paper presents an approach to automatically learn and classify natural language questions into corresponding templates using recursive neural networks. Our model was trained on 5000 questions and their respective SPARQL queries from the preexisting LC-QuAD dataset grounded in DBpedia, spanning 5042 entities and 615 predicates. The resulting model was evaluated using the FAIR GERBIL QA framework, resulting in 0.419 macro f-measure on LC-QuAD and 0.417 macro f-measure on QALD-7.", "which has research problem ?", "DBPedia", 798.0, 805.0], ["We present the results of the Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge, aiming to bring together researchers in educational NLP technology and textual entailment. The task of giving feedback on student answers requires semantic inference and therefore is related to recognizing textual entailment. Thus, we offered to the community a 5-way student response labeling task, as well as 3-way and 2-way RTE-style tasks on educational data. In addition, a partial entailment task was piloted. We present and compare results from 9 participating teams, and discuss future directions.", "which has research problem ?", "Recognizing Textual Entailment", 70.0, 100.0], ["Interaction testing (also called combinatorial testing) is a cost-effective test generation technique in software testing. Most research work focuses on finding effective approaches to build optimal t-way interaction test suites. However, the strength of different factor sets may not be consistent due to the practical test requirements. To solve this problem, a variable strength combinatorial object and several approaches based on it have been proposed. These approaches include simulated annealing (SA) and greedy algorithms. SA starts with a large randomly generated test suite and then uses a binary search process to find the optimal solution. Although this approach often generates the minimal test suites, it is time consuming. Greedy algorithms avoid this shortcoming but the size of generated test suites is usually not as small as SA. In this paper, we propose a novel approach to generate variable strength interaction test suites (VSITs). In our approach, we adopt a one-test-at-a-time strategy to build final test suites. To generate a single test, we adopt ant colony system (ACS) strategy, an effective variant of ant colony optimization (ACO). In order to successfully adopt ACS, we formalize the solution space, the cost function and several heuristic settings in this framework. We also apply our approach to some typical inputs. Experimental results show the effectiveness of our approach especially compared to greedy algorithms and several existing tools.", "which has research problem ?", "Ant Colony Optimization", 1133.0, 1156.0], ["This paper addresses the problem of low impact of smart city applications observed in the fields of energy and transport, which constitute high-priority domains for the development of smart cities. However, these are not the only fields where the impact of smart cities has been limited. The paper provides an explanation for the low impact of various individual applications of smart cities and discusses ways of improving their effectiveness. 
We argue that the impact of applications depends primarily on their ontology, and secondarily on smart technology and programming features. Consequently, we start by creating an overall ontology for the smart city, defining the building blocks of this ontology with respect to the most cited definitions of smart cities, and structuring this ontology with the Prot\u00e9g\u00e9 5.0 editor, defining entities, class hierarchy, object properties, and data type properties. We then analyze how the ontologies of a sample of smart city applications fit into the overall Smart City Ontology, the consistency between digital spaces, knowledge processes, city domains targeted by the applications, and the types of innovation that determine their impact. In conclusion, we underline the relationships between innovation and ontology, and discuss how we can improve the effectiveness of smart city applications, combining expert and user-driven ontology design with the integration and orchestration of applications over platforms and larger city entities such as neighborhoods, districts, clusters, and sectors of city activities.", "which has research problem ?", "Smart city ontology", 1009.0, 1028.0], ["We explore the use of Wikipedia as external knowledge to improve named entity recognition (NER). Our method retrieves the corresponding Wikipedia entry for each candidate word sequence and extracts a category label from the first sentence of the entry, which can be thought of as a definition part. These category labels are used as features in a CRF-based NE tagger. We demonstrate using the CoNLL 2003 dataset that the Wikipedia category labels extracted by such a simple method actually improve the accuracy of NER.", "which has research problem ?", "Named Entity Recognition", 65.0, 89.0], ["Dependency trees help relation extraction models capture long-range relations between words. However, existing dependency-based models either neglect crucial information (e.g., negation) by pruning the dependency trees too aggressively, or are computationally inefficient because it is difficult to parallelize over different tree structures. We propose an extension of graph convolutional networks that is tailored for relation extraction, which pools information over arbitrary dependency structures efficiently in parallel. To incorporate relevant information while maximally removing irrelevant content, we further apply a novel pruning strategy to the input trees by keeping words immediately around the shortest path between the two entities among which a relation might hold. The resulting model achieves state-of-the-art performance on the large-scale TACRED dataset, outperforming existing sequence and dependency-based neural models. We also show through detailed analysis that this model has complementary strengths to sequence models, and combining them further improves the state of the art.", "which has research problem ?", "Relation extraction", 22.0, 41.0], ["This article reports the supply chain issues in small and medium scale enterprises (SMEs) using insights from select cases of Indian origin (manufacturing SMEs). A broad range of qualitative and quantitative data were collected during interviews and plant visits in a multi-case study (of 10 SMEs) research design. Company documentation and business reports were also employed. Analysis is carried out using diagnostic tools like \u2018EBM-REP\u2019 (Thakkar, J., Kanda, A., and Deshmukh, S.G., 2008c. An enquiry-analysis framework \u201cEBM-REP\u201d for qualitative research. 
International Journal of Innovation and Learning (IJIL), 5 (5), 557\u2013580.) and \u2018Role Interaction Model\u2019 (Thakkar, J., Kanda, A., and Deshmukh, S.G., 2008b. A conceptual role interaction model for supply chain management in SMEs. Journal of Small Business and Enterprise Development (JSBED), 15 (1), 74\u201395). This article reports a set of critical success factors and evaluates six critical research questions for the successful supply chain planning and management in SMEs. The results of this article will help SME managers to assess their supply chain function more rigorously. This article addresses the issue of supply chain management in SMEs using a case study approach and diagnostic tools to add select new insights to the existing body of knowledge on supply chain issues in SMEs.", "which has research problem ?", "Supply chain management", 752.0, 775.0], ["In this paper, we describe our method for detection of lexical semantic change, i.e., word sense changes over time. We examine semantic differences between specific words in two corpora, chosen from different time periods, for English, German, Latin, and Swedish. Our method was created for the SemEval 2020 Task 1: Unsupervised Lexical Semantic Change Detection. We ranked 1st in Sub-task 1: binary change detection, and 4th in Sub-task 2: ranked change detection. We present our method which is completely unsupervised and language independent. It consists of preparing a semantic vector space for each corpus, earlier and later; computing a linear transformation between earlier and later spaces, using Canonical Correlation Analysis and orthogonal transformation; and measuring the cosines between the transformed vector for the target word from the earlier corpus and the vector for the target word in the later corpus.", "which has research problem ?", "Unsupervised Lexical Semantic Change Detection", 316.0, 362.0], ["In this work we target the problem of hate speech detection in multimodal publications formed by a text and an image. We gather and annotate a large scale dataset from Twitter, MMHS150K, and propose different models that jointly analyze textual and visual information for hate speech detection, comparing them with unimodal detection. We provide quantitative and qualitative results and analyze the challenges of the proposed task. We find that, even though images are useful for the hate speech detection task, current multimodal models cannot outperform models analyzing only text. We discuss why and open the field and the dataset for further research.", "which has research problem ?", "hate speech detection", 38.0, 59.0], ["The role of customers and their involvement in green supply chain management (GSCM) has been recognised as an important research area. This paper is an attempt to explore factors influencing involvement of customers towards greening the supply chain (SC). Twenty-five critical success factors (CSFs) of customer involvement in GSCM have been identified from literature review and through extensive discussions with senior and middle level SC professionals. Interviews and questionnaire-based survey have been used to indicate the significance of these CSFs. A total of 478 valid responses were received to rate these CSFs on a five-point Likert scale (ranging from unimportant to most important). Statistical analysis has been carried out to establish the reliability and to test the validity of the questionnaires. Subsequent factor analysis has identified seven major components covering 79.24% of total variance. 
This paper may help to establish the importance of customer role in promoting green concept in SCs and to develop an understanding of factors influencing customer involvement \u2013 key input towards creating \u2018greening pull system\u2019 (GPSYS). This understanding may further help in framing the policies and strategies to green the SC.", "which has research problem ?", "Supply chain management", 53.0, 76.0], ["Episodic control provides a highly sample-efficient method for reinforcement learning while enforcing high memory and computational requirements. This work proposes a simple heuristic for reducing these requirements, and an application to Model-Free Episodic Control (MFEC) is presented. Experiments on Atari games show that this heuristic successfully reduces MFEC computational demands while producing no significant loss of performance when conservative choices of hyperparameters are used. Consequently, episodic control becomes a more feasible option when dealing with reinforcement learning tasks.", "which has research problem ?", "Atari Games", 303.0, 314.0], ["The digitization of manufacturing systems is at the crux of the next industrial revolutions. The digital representation of the \u201cPhysical Twin,\u201d also known as the \u201cDigital Twin,\u201d will help in maintaining the process quality effectively by allowing easy visualization and incorporation of cognitive capability in the system. In this technical report, we tackle two issues regarding the Digital Twin: (1) modeling the Digital Twin by extracting information from the side-channel emissions, and (2) making sure that the Digital Twin is up-to-date (or \u201calive\u201d). We will first analyze various analog emissions to figure out if they behave as side-channels, informing about the various states of both cyber and physical domains. Then, we will present a dynamic data-driven application system enabled Digital Twin, which is able to check if it is the most up-to-date version of the Physical Twin.", "which has research problem ?", "digital twin", 163.0, 175.0], ["Recent advances in data-to-text generation have led to the use of large-scale datasets and neural network models which are trained end-to-end, without explicitly modeling what to say and in what order. In this work, we present a neural network architecture which incorporates content selection and planning without sacrificing end-to-end training. We decompose the generation task into two stages. Given a corpus of data records (paired with descriptive documents), we first generate a content plan highlighting which information should be mentioned and in which order and then generate the document while taking the content plan into account. Automatic and human-based evaluation experiments show that our model outperforms strong baselines improving the state-of-the-art on the recently released RotoWIRE dataset.", "which has research problem ?", "Data-to-Text Generation", 27.0, 50.0], ["We propose RUDDER, a novel reinforcement learning approach for delayed rewards in finite Markov decision processes (MDPs). In MDPs the Q-values are equal to the expected immediate reward plus the expected future rewards. The latter are related to bias problems in temporal difference (TD) learning and to high variance problems in Monte Carlo (MC) learning. Both problems are even more severe when rewards are delayed. 
RUDDER aims at making the expected future rewards zero, which simplifies Q-value estimation to computing the mean of the immediate reward. We propose the following two new concepts to push the expected future rewards toward zero. (i) Reward redistribution that leads to return-equivalent decision processes with the same optimal policies and, when optimal, zero expected future rewards. (ii) Return decomposition via contribution analysis which transforms the reinforcement learning task into a regression task at which deep learning excels. On artificial tasks with delayed rewards, RUDDER is significantly faster than MC and exponentially faster than Monte Carlo Tree Search (MCTS), TD($\\lambda$), and reward shaping approaches. At Atari games, RUDDER on top of a Proximal Policy Optimization (PPO) baseline improves the scores, which is most prominent at games with delayed rewards. Source code is available at \\url{this https URL} and demonstration videos at \\url{this https URL}.", "which has research problem ?", "Atari Games", 1153.0, 1164.0], ["In the third shared task of the Computational Approaches to Linguistic Code-Switching (CALCS) workshop, we focus on Named Entity Recognition (NER) on code-switched social-media data. We divide the shared task into two competitions based on the English-Spanish (ENG-SPA) and Modern Standard Arabic-Egyptian (MSA-EGY) language pairs. We use Twitter data and 9 entity types to establish a new dataset for code-switched NER benchmarks. In addition to the CS phenomenon, the diversity of the entities and the social media challenges make the task considerably hard to process. As a result, the best scores of the competitions are 63.76% and 71.61% for ENG-SPA and MSA-EGY, respectively. We present the scores of 9 participants and discuss the most common challenges among submissions.", "which has research problem ?", "Named Entity Recognition", 116.0, 140.0], ["BioNLP Open Shared Tasks (BioNLP-OST) is an international competition organized to facilitate development and sharing of computational tasks of biomedical text mining and solutions to them. For BioNLP-OST 2019, we introduced a new mental health informatics task called \u201cRDoC Task\u201d, which is composed of two subtasks: information retrieval and sentence extraction through National Institutes of Mental Health\u2019s Research Domain Criteria framework. Five and four teams around the world participated in the two tasks, respectively. According to the performance on the two tasks, we observe that there is room for improvement for text mining on brain research and mental illness.", "which has research problem ?", "Sentence extraction", 343.0, 362.0], ["We describe ParsCit, a freely available, open-source implementation of a reference string parsing package. At the core of ParsCit is a trained conditional random field (CRF) model used to label the token sequences in the reference string. A heuristic model wraps this core with added functionality to identify reference strings from a plain text file, and to retrieve the citation contexts. The package comes with utilities to run it as a web service or as a standalone utility. We compare ParsCit on three distinct reference string datasets and show that it compares well with other previously published work.", "which has research problem ?", "Reference string parsing", 73.0, 97.0], ["Software Testing is one of the indispensable parts of the software development lifecycle and structural testing is one of the most widely used testing paradigms to test various software. 
Structural testing relies on code path identification, which in turn leads to identification of effective paths. The aim of the current paper is to present a simple and novel algorithm, with the help of ant colony optimization, for optimal path identification by using the basic property and behavior of the ants. This novel approach uses a certain set of rules to find out all the effective/optimal paths via the ant colony optimization (ACO) principle. The method concentrates on generation of paths, equal to the cyclomatic complexity. This algorithm guarantees full path coverage.", "which has research problem ?", "Ant Colony Optimization", 388.0, 411.0], ["Purpose \u2013 The purpose of this paper is to determine those factors perceived by users to influence the successful on\u2010going use of e\u2010commerce systems in business\u2010to\u2010business (B2B) buying and selling transactions through examination of the views of individuals acting in both purchasing and selling roles within the UK National Health Service (NHS) pharmaceutical supply chain. Design/methodology/approach \u2013 Literature from the fields of operations and supply chain management (SCM) and information systems (IS) is used to determine candidate factors that might influence the success of the use of e\u2010commerce. A questionnaire based on these is used for primary data collection in the UK NHS pharmaceutical supply chain. Factor analysis is used to analyse the data. Findings \u2013 The paper yields five composite factors that are perceived by users to influence successful e\u2010commerce use. \u201cSystem quality,\u201d \u201cinformation quality,\u201d \u201cmanagement and use,\u201d \u201cworld wide web \u2013 assurance and empathy,\u201d and \u201ctrust\u201d are proposed as potentia...", "which has research problem ?", "Supply chain management", 449.0, 472.0], ["The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents\u2019 circumstances, such as a spouse\u2019s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.", "which has research problem ?", "COVID-19 pandemic", 24.0, 41.0], ["The purpose of this paper is to investigate the impact of exchange rate volatility on exports among 14 Asia Pacific countries, where various measures to raise the intra-region trade are being implemented. Specifically, this paper estimates a gravity model, in which the dependent variable is the product of the exports of two trading countries. In addition, it also estimates a unilateral exports model, in which the dependent variable is not the product of the exports of two trading countries but the exports from one country to another. 
By doing this, the depreciation rate of the exporting country's currency value can be included as one of the explanatory variables affecting the volume of exports. As the explanatory variables of the export volume, the gravity model adopts the product of the GDPs of two trading countries, their bilateral exchange rate volatility, their distance, a time trend and dummies for the share of the border line, the use of the same language, and the APEC membership. In the case of the unilateral exports model, the product of the GDPs is replaced by the GDP of the importing country, and the depreciation rate of the exporting country's currency value is added. In addition, considering that the export volume will also depend on various conditions of the exporting country, dummies for exporting countries are also included as an explanatory variable. The empirical tests, using annual data for the period from 1980 to 2002, detect a significant negative impact of exchange rate volatility on the volume of exports. In addition, various tests using the data for sub-sample periods indicate that the negative impact had been weakened since 1989, when APEC was launched, and surged again from 1997, when the Asian financial crisis broke out. This finding implies that the impact of exchange rate volatility is time-dependent and that it is significantly negative at least in the present time. This phenomenon is noticed regardless of which estimation model is adopted. In addition, the test results show that the GDP of the importing country, the depreciation of the exporting country's currency value, the use of the same language and the membership of APEC have positive impacts on exports, while the distance between trading countries has negative impacts. Finally, it turns out that the negative impact of exchange rate volatility is much weaker among OECD countries than among non-OECD countries.", "which has research problem ?", "Exchange rate volatility", 58.0, 82.0], ["Naphtho[1,2\u2010b:5,6\u2010b\u2032]dithiophene is extended to a fused octacyclic building block, which is end capped by strong electron\u2010withdrawing 2\u2010(5,6\u2010difluoro\u20103\u2010oxo\u20102,3\u2010dihydro\u20101H\u2010inden\u20101\u2010ylidene)malononitrile to yield a fused\u2010ring electron acceptor (IOIC2) for organic solar cells (OSCs). Relative to naphthalene\u2010based IHIC2, naphthodithiophene\u2010based IOIC2 with a larger \u03c0\u2010conjugation and a stronger electron\u2010donating core shows a higher lowest unoccupied molecular orbital energy level (IOIC2: \u22123.78 eV vs IHIC2: \u22123.86 eV), broader absorption with a smaller optical bandgap (IOIC2: 1.55 eV vs IHIC2: 1.66 eV), and a higher electron mobility (IOIC2: 1.0 \u00d7 10\u22123 cm2 V\u22121 s\u22121 vs IHIC2: 5.0 \u00d7 10\u22124 cm2 V\u22121 s\u22121). Thus, IOIC2\u2010based OSCs show higher values in open\u2010circuit voltage, short\u2010circuit current density, fill factor, and thereby much higher power conversion efficiency (PCE) values than those of the IHIC2\u2010based counterpart. In particular, as\u2010cast OSCs based on FTAZ: IOIC2 yield PCEs of up to 11.2%, higher than that of the control devices based on FTAZ: IHIC2 (7.45%). Furthermore, by using 0.2% 1,8\u2010diiodooctane as the processing additive, a PCE of 12.3% is achieved from the FTAZ:IOIC2\u2010based devices, higher than that of the FTAZ:IHIC2\u2010based devices (7.31%). 
These results indicate that incorporating extended conjugation into the electron\u2010donating fused\u2010ring units in nonfullerene acceptors is a promising strategy for designing high\u2010performance electron acceptors.", "which has research problem ?", "Organic solar cells", 253.0, 272.0], ["Abstract Twenty-six oil samples were isolated by hydrodistillation from aerial parts of Artemisia herba-alba Asso growing wild in Tunisia (semi-arid land) and their chemical composition was determined by GC(RI), GC/MS and 13C-NMR. Various compositions were observed, dominated either by a single component (\u03b1-thujone, camphor, chrysanthenone or trans-sabinyl acetate) or characterized by the occurrence, at appreciable contents, of two or more of these compounds. These results confirmed the tremendous chemical variability of A. herba-alba.", "which has research problem ?", "Oil", 20.0, 23.0], ["This practice paper describes an ongoing research project to test the effectiveness and relevance of the FAIR Data Principles. Simultaneously, it will analyse how easy it is for data archives to adhere to the principles. The research took place from November 2016 to January 2017, and will be underpinned with feedback from the repositories. The FAIR Data Principles feature 15 facets corresponding to the four letters of FAIR - Findable, Accessible, Interoperable, Reusable. These principles have already gained traction within the research world. The European Commission has recently expanded its demand for research to produce open data. The relevant guidelines are explicitly written in the context of the FAIR Data Principles. Given an increasing number of researchers will have exposure to the guidelines, understanding their viability and suggesting where there may be room for modification and adjustment is of vital importance. This practice paper is connected to a dataset (Dunning et al., 2017) containing the original overview of the sample group statistics and graphs, in an Excel spreadsheet. Over the course of two months, the web-interfaces, help-pages and metadata-records of over 40 data repositories have been examined, to score the individual data repository against the FAIR principles and facets. The traffic-light rating system enables colour-coding according to compliance and vagueness. The statistical analysis provides overall, categorised, on the principles focussing, and on the facet focussing results. The analysis includes the statistical and descriptive evaluation, followed by elaborations on Elements of the FAIR Data Principles, the subject specific or repository specific differences, and subsequently what repositories can do to improve their information architecture.", "which has research problem ?", "effectiveness and relevance of the FAIR Data Principles", 70.0, 125.0], ["This paper presents the second round of the task on Cross-lingual Textual Entailment for Content Synchronization, organized within SemEval-2013. The task was designed to promote research on semantic inference over texts written in different languages, targeting at the same time a real application scenario. Participants were presented with datasets for different language pairs, where multi-directional entailment relations (\u201cforward\u201d, \u201cbackward\u201d, \u201cbidirectional\u201d, \u201cno entailment\u201d) had to be identified. 
We report on the training and test data used for evaluation, the process of their creation, the participating systems (six teams, 61 runs), the approaches adopted and the results achieved.", "which has research problem ?", "Cross-lingual Textual Entailment", 52.0, 84.0], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which has research problem ?", "self-citation", 502.0, 515.0], ["Recently, the amount of semantic data available in the Web has increased dramatically. The potential of this vast amount of data is enormous, but in most cases it is difficult for users to explore and use this data, especially for those without experience with Semantic Web technologies. Applying information visualization techniques to the Semantic Web helps users to easily explore large amounts of data and interact with them. In this article we devise a formal Linked Data Visualization Model (LDVM), which allows one to dynamically connect data with visualizations. We report about our implementation of the LDVM comprising a library of generic visualizations that enable both users and data analysts to get an overview on, visualize and explore the Data Web and perform detailed analyses on Linked Data.", "which has research problem ?", "Data Visualization", 471.0, 489.0], ["BACKGROUND Clostridium difficile infection of the colon is a common and well-described clinical entity. Clostridium difficile enteritis of the small bowel is believed to be less common and has been described sparsely in the literature. METHODS Case report and literature review. RESULTS We describe a patient who had undergone total proctocolectomy with ileal pouch-anal anastomosis who was treated with broad-spectrum antibiotics and contracted C. difficile refractory to metronidazole. The enteritis resolved quickly after initiation of combined oral vancomycin and metronidazole. A literature review found that eight of the fifteen previously reported cases of C. difficile-associated small-bowel enteritis resulted in death. CONCLUSIONS It is important for physicians who treat acolonic patients to be aware of C. difficile enteritis of the small bowel so that it can be suspected, diagnosed, and treated.", "which has research problem ?", "Clostridium difficile infection", 11.0, 42.0], ["Software testing is of huge importance to development of any software. 
The prime focus is to minimize the expenses on the testing. In software testing the major problem is generation of test data. Several metaheuristic approaches in this field have become very popular. The aim is to generate the optimum set of test data, which would still not compromise on exhaustive testing of software. Our objective is to generate such efficient test data using genetic algorithm and ant colony optimization for a given software. We have also compared the two approaches of software testing to determine which of these are effective towards generation of test data and constraints if any.", "which has research problem ?", "Ant Colony Optimization", 473.0, 496.0], ["Dependency trees convey rich structural information that is proven useful for extracting relations among entities in text. However, how to effectively make use of relevant information while ignoring irrelevant information from the dependency trees remains a challenging research question. Existing approaches employing rule based hard-pruning strategies for selecting relevant partial dependency structures may not always yield optimal results. In this work, we propose Attention Guided Graph Convolutional Networks (AGGCNs), a novel model which directly takes full dependency trees as inputs. Our model can be understood as a soft-pruning approach that automatically learns how to selectively attend to the relevant sub-structures useful for the relation extraction task. Extensive results on various tasks including cross-sentence n-ary relation extraction and large-scale sentence-level relation extraction show that our model is able to better leverage the structural information of the full dependency trees, giving significantly better results than previous approaches.", "which has research problem ?", "how to effectively make use of relevant information while ignoring irrelevant information from the dependency trees", 132.0, 247.0], ["Background: Patients with a clinically isolated demyelinating syndrome (CIS) are at risk of developing a second attack, thus converting into clinically definite multiple sclerosis (CDMS). Therefore, an accurate prognostic marker for that conversion might allow early treatment. Brain MRI and oligoclonal IgG band (OCGB) detection are the most frequent paraclinical tests used in MS diagnosis. A new OCGB test has shown high sensitivity and specificity in differential diagnosis of MS. Objective: To evaluate the accuracy of the new OCGB method and of current MRI criteria (MRI-C) to predict conversion of CIS to CDMS. Methods: Fifty-two patients with CIS were studied with OCGB detection and brain MRI, and followed up for 6 years. The sensitivity and specificity of both methods to predict conversion to CDMS were analyzed. Results: OCGB detection showed a sensitivity of 91.4% and specificity of 94.1%. MRI-C had a sensitivity of 74.23% and specificity of 88.2%. The presence of either OCGB or MRI-C studied simultaneously showed a sensitivity of 97.1% and specificity of 88.2%. Conclusions: The presence of oligoclonal IgG bands is highly specific and sensitive for early prediction of conversion to multiple sclerosis. MRI criteria have a high specificity but less sensitivity. 
The simultaneous use of both tests shows high sensitivity and specificity in predicting clinically isolated demyelinating syndrome conversion to clinically definite multiple sclerosis.", "which has research problem ?", "Multiple sclerosis", 161.0, 179.0], ["In this paper, 2D cascaded AdaBoost, a novel classifier designing framework, is presented and applied to eye localization. By the term \"2D\", we mean that in our method there are two cascade classifiers in two directions: The first one is a cascade designed by bootstrapping the positive samples, and the second one, as the component classifiers of the first one, is cascaded by bootstrapping the negative samples. The advantages of the 2D structure include: (1) it greatly facilitates the classifier designing on a huge-scale training set; (2) it can easily deal with the significant variations within the positive (or negative) samples; (3) both the training and testing procedures are more efficient. The proposed structure is applied to eye localization and evaluated on four public face databases; extensive experimental results verified the effectiveness, efficiency, and robustness of the proposed method", "which has research problem ?", "Eye localization", 105.0, 121.0], ["Iodine deficiency disorders (IDD) has been a major global public health problem threatening more than 2 billion people worldwide. Considering various human health implications associated with iodine deficiency, universal salt iodization programme has been recognized as one of the best methods of preventing iodine deficiency disorder, and iodizing table salt is currently done in many countries. In this study, comparative assessment of iodine content of commercially available table salt brands in the Nigerian market was investigated and iodine content was measured in ten table salt brand samples using iodometric titration. The iodine content ranged from 14.80 mg/kg \u2013 16.90 mg/kg with mean value of 15.90 mg/kg for Sea salt; 24.30 mg/kg \u2013 25.40 mg/kg with mean value of 24.60 mg/kg for Dangote salt (blue sachet); 22.10 mg/kg \u2013 23.10 mg/kg with mean value of 22.40 mg/kg for Dangote salt (red sachet); 23.30 mg/kg \u2013 24.30 mg/kg with mean value of 23.60 mg/kg for Mr Chef salt; 23.30 mg/kg \u2013 24.30 mg/kg with mean value of 23.60 mg/kg for Annapurna; 26.80 mg/kg \u2013 27.50 mg/kg with mean value of 27.20 mg/kg for Uncle Palm salt; 23.30 mg/kg \u2013 29.60 mg/kg with mean content of 26.40 mg/kg for Dangote (bag); 25.40 mg/kg \u2013 26.50 mg/kg with mean value of 26.50 mg/kg for Royal salt; 36.80 mg/kg \u2013 37.20 mg/kg with mean iodine content of 37.0 mg/kg for Abakaliki refined salt, and 30.07 mg/kg \u2013 31.20 mg/kg with mean value of 31.00 mg/kg for Ikom refined salt. The mean iodine content measured in the Sea salt brand (15.70 mg/kg) was significantly (P < 0.01) lower compared to those measured in other table salt brands. Although the iodine content of Abakaliki and Ikom refined salt exceeds the recommended value, it is clear that only the Sea salt brand falls below the World Health Organization (WHO) recommended value (20 \u2013 30 mg/kg), while the remaining table salt samples are just within the range. 
The results obtained have revealed that 70 % of the table salt brands were adequately iodized, while 30 % of the table salt brands were not adequately iodized, and provided baseline data that can be used for potential identification of human health risks associated with inadequate and/or excess iodine content in table salt brands consumed in households in Nigeria.", "which has research problem ?", "Iodine deficiency disorders (IDD) has been a major global public health problem threatening more than 2 billion people worldwide.", NaN, NaN], ["We examine the capabilities of a unified, multi-task framework for three information extraction tasks: named entity recognition, relation extraction, and event extraction. Our framework (called DyGIE++) accomplishes all tasks by enumerating, refining, and scoring text spans designed to capture local (within-sentence) and global (cross-sentence) context. Our framework achieves state-of-the-art results across all tasks, on four datasets from a variety of domains. We perform experiments comparing different techniques to construct span representations. Contextualized embeddings like BERT perform well at capturing relationships among entities in the same or adjacent sentences, while dynamic span graph updates model long-range cross-sentence relationships. For instance, propagating span representations via predicted coreference links can enable the model to disambiguate challenging entity mentions. Our code is publicly available at https://github.com/dwadden/dygiepp and can be easily adapted for new tasks or datasets.", "which has research problem ?", "Relation Extraction", 129.0, 148.0], ["Abstract Isolation of the essential oil from Artemisia herba-alba collected in the North Sahara desert has been conducted by hydrodistillation (HD) and a microwave distillation process (MD). The chemical composition of the two oils was investigated by GC and GC/MS. In total, 94 constituents were identified. The main components were camphor (49.3 and 48.1% in HD and MD oils, respectively), 1,8-cineole (13.4\u201312.4%), borneol (7.3\u20137.1%), pinocarvone (5.6\u20135.5%), camphene (4.9\u20134.5%) and chrysanthenone (3.2\u20133.3%). In comparison with HD, MD allows one to obtain an oil in a very short time, with similar yields, comparable qualities and a substantial savings of energy.", "which has research problem ?", "Oil", 36.0, 39.0], ["One approach to continuously achieve a certain data quality level is to use an integration pipeline that continuously checks and monitors the quality of a data set according to defined metrics. This approach is inspired by Continuous Integration pipelines, which have been introduced in the area of software development and DevOps to perform continuous source code checks. By investigating possible tools to use and discussing the specific requirements for RDF data sets, an integration pipeline is derived that joins current approaches of the areas of software-development and semantic-web as well as reuses existing tools. As these tools have not been built explicitly for CI usage, we evaluate their usability and propose possible workarounds and improvements. Furthermore, a real-world usage scenario is discussed, outlining the benefit of the usage of such a pipeline.", "which has research problem ?", "RDF", 459.0, 462.0], ["We describe a simple active learning heuristic which greatly enhances the generalization behavior of support vector machines (SVMs) on several practical document classification tasks. 
We observe a number of benefits, the most surprising of which is that an SVM trained on a well-chosen subset of the available corpus frequently performs better than one trained on all available data. The heuristic for choosing this subset is simple to compute, and makes no use of information about the test set. Given that the training time of SVMs depends heavily on the training set size, our heuristic not only offers better performance with fewer data, it frequently does so in less time than the naive approach of training on all available data.", "which has research problem ?", "Active learning", 21.0, 36.0], ["Research on definition extraction has been conducted for well over a decade, largely with significant constraints on the type of definitions considered. In this work, we present DeftEval, a SemEval shared task in which participants must extract definitions from free text using a term-definition pair corpus that reflects the complex reality of definitions in natural language. Definitions and glosses in free text often appear without explicit indicators, across sentence boundaries, or in an otherwise complex linguistic manner. DeftEval involved 3 distinct subtasks: 1) Sentence classification, 2) sequence labeling, and 3) relation extraction.", "which has research problem ?", "Sentence classification", 574.0, 597.0], ["Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al. (2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al. (2016) using logistic regression and manually crafted features.", "which has research problem ?", "Machine comprehension of text", 0.0, 29.0], ["With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. 
Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking.", "which has research problem ?", "Question Answering", 922.0, 940.0], ["Abstract The composition of the oil, steam-distilled from aerial parts of Artemisia herba-alba Asso ssp. valentina (Lam.) Marcl. (Asteraceae) collected from the south of Spain, has been analyzed by GC/MS. Among the 65 constituents investigated (representing 93.6% of the oil composition), 61 were identified (90.3% of the oil composition). The major constituents detected were the sesquiterpene davanone (18.1%) and monoterpenes p-cymene (13.5%), 1,8-cineole (10.2%), chrysanthenone (6.7%), cis-chrysanthenyl acetate (5.6%), \u03b3-terpinene (5.5%), myrcene (5.1%) and camphor (4.0%). The oil was dominated by monoterpenes (ca. 66% of the oil), p-menthane and pinane being the most representative skeleta of the group. The oil sample studied did not contain thujones, unlike most A. herba-alba oils described in the literature.", "which has research problem ?", "Oil", 32.0, 35.0], ["Web Table Understanding in the context of Knowledge Base Population and the Semantic Web is the task of i) linking the content of tables retrieved from the Web to an RDF knowledge base, ii) of building hypotheses about the tables' structures and contents, iii) of extracting novel information from these tables, and iv) of adding this new information to a knowledge base. Knowledge Base Population has gained more and more interest in the last years due to the increased demand in large knowledge graphs which became relevant for Artificial Intelligence applications such as Question Answering and Semantic Search. In this paper we describe a set of basic tasks which are relevant for Web Table Understanding in the mentioned context. These tasks incrementally enrich a table with hypotheses about the table's content. In doing so, in the case of multiple interpretations, selecting one interpretation and thus deciding against other interpretations is avoided as much as possible. By postponing these decisions, we enable table understanding approaches to decide by themselves, thus increasing the usability of the annotated table data. We present statistics from analyzing and annotating 1.000.000 tables from the Web Table Corpus 2015 and make this dataset as well as our code available online.", "which has research problem ?", "Web Table Understanding", 0.0, 23.0], ["Tabular data is an abundant source of information on the Web, but remains mostly isolated from the latter's interconnections since tables lack links and computer-accessible descriptions of their structure. In other words, the schemas of these tables -- attribute names, values, data types, etc. -- are not explicitly stored as table metadata. Consequently, the structure that these tables contain is not accessible to the crawlers that power search engines and thus not accessible to user search queries. We address this lack of structure with a new method for leveraging the principles of table construction in order to extract table schemas. Discovering the schema by which a table is constructed is achieved by harnessing the similarities and differences of nearby table rows through the use of a novel set of features and a feature processing scheme. 
The schemas of these data tables are determined using a classification technique based on conditional random fields in combination with a novel feature encoding method called logarithmic binning, which is specifically designed for the data table extraction task. Our method provides considerable improvement over the well-known WebTables schema extraction method. In contrast with previous work that focuses on extracting individual relations, our method excels at correctly interpreting full tables, thereby being capable of handling general tables such as those found in spreadsheets, instead of being restricted to HTML tables as is the case with the WebTables method. We also extract additional schema characteristics, such as row groupings, which are important for supporting information retrieval tasks on tabular data.", "which has research problem ?", "Table extraction", 1095.0, 1111.0], ["This paper reviews literature that examines the design, implementation and use of Enterprise Resource Planning systems (ERPs). It finds that most of this literature is managerialist in orientation, and concerned with the impact of ERPs in terms of efficiency, effectiveness and business performance. The paper seeks to provide an alternative research agenda, one that emphasises work- and organisation-based approaches to the study of the implementation and use of ERPs.", "which has research problem ?", "Enterprise resource planning", 82.0, 110.0], ["Enterprise Resource Planning (ERP) systems have become a de facto standard for integrating business functions. But an obvious question arises: if every business is using the same so-called \u201cVanilla\u201d software (e.g. an SAP ERP system), what happens to the competitive advantage from implementing IT systems? If we discard our custom-built legacy systems in favour of enterprise systems do we also jettison our valued competitive advantage from IT? While for some organisations ERPs have become just a necessity for conducting business, others want to exploit them to outperform their competitors. In the last few years, researchers have begun to study the link between ERP systems and competitive advantage. This link will be the focus of this paper. We outline a framework summarizing prior research and suggest two researchable questions. A future article will develop the framework with two empirical case studies from within part of the European food industry.", "which has research problem ?", "Enterprise resource planning", 0.0, 28.0], ["Bullying is relatively common and is considered to be a public health problem among adolescents worldwide. The present study examined the risk factors associated with bullying behavior among adolescents in a lower-middle-income country setting. Data on 6235 adolescents aged 11\u201316 years, derived from the Republic of Ghana\u2019s contribution to the Global School-based Health Survey, were analyzed using bivariate and multinomial logistic regression analysis. A high prevalence of bullying was found among Ghanaian adolescents. Alcohol-related health compromising behaviors (alcohol use, alcohol misuse and getting into trouble as a result of alcohol) increased the risk of being bullied. In addition, substance use, being physically attacked, being seriously injured, hunger and truancy were also found to increase the risk of being bullied. However, having understanding parents and having classmates who were kind and helpful reduced the likelihood of being bullied. 
These findings suggest that school-based intervention programs aimed at reducing rates of peer victimization should simultaneously target multiple risk behaviors. Teachers can also reduce peer victimization by introducing programs that enhance adolescents\u2019 acceptance of each other in the classroom.", "which has research problem ?", "bullying", 0.0, 8.0], ["Open Educational Resources (OER) have the potential to encourage self-learning and lifelong learning. However, there are some barriers that make it difficult to find suitable information. The successful adoption of OER in processes of informal or self-directed learning will depend largely on what the learner is able to reach without a guide or a tutor. However, the OERs\u2019 quality and their particular features can positively influence the motivation of the self-learner to achieve their goals. In this paper, the authors present an approach to enhance the OER discovery by self-learners. This is designed to leverage open knowledge sources, which are described by semantic technologies. User's data and OER encoded in formal languages can be linked and thus find paths to suggest the most appropriate resources.", "which has research problem ?", "Open Education", NaN, NaN], ["In this work we present a characterization of spam on Twitter. We find that 8% of 25 million URLs posted to the site point to phishing, malware, and scams listed on popular blacklists. We analyze the accounts that send spam and find evidence that it originates from previously legitimate accounts that have been compromised and are now being puppeteered by spammers. Using clickthrough data, we analyze spammers' use of features unique to Twitter and the degree that they affect the success of spam. We find that Twitter is a highly successful platform for coercing users to visit spam pages, with a clickthrough rate of 0.13%, compared to much lower rates previously reported for email spam. We group spam URLs into campaigns and identify trends that uniquely distinguish phishing, malware, and spam, to gain an insight into the underlying techniques used to attract users. Given the absence of spam filtering on Twitter, we examine whether the use of URL blacklists would help to significantly stem the spread of Twitter spam. Our results indicate that blacklists are too slow at identifying new threats, allowing more than 90% of visitors to view a page before it becomes blacklisted. We also find that even if blacklist delays were reduced, the use by spammers of URL shortening services for obfuscation negates the potential gains unless tools that use blacklists develop more sophisticated spam filtering.", "which has research problem ?", "Characterization of spam on Twitter", 26.0, 61.0], ["Research on definition extraction has been conducted for well over a decade, largely with significant constraints on the type of definitions considered. In this work, we present DeftEval, a SemEval shared task in which participants must extract definitions from free text using a term-definition pair corpus that reflects the complex reality of definitions in natural language. Definitions and glosses in free text often appear without explicit indicators, across sentence boundaries, or in an otherwise complex linguistic manner. 
DeftEval involved 3 distinct subtasks: 1) Sentence classification, 2) sequence labeling, and 3) relation extraction.", "which has research problem ?", "Sequence labeling", 602.0, 619.0], ["The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents\u2019 circumstances, such as a spouse\u2019s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.", "which has research problem ?", "childcare", 906.0, 915.0], ["This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article. This task of machine reading at scale combines the challenges of document retrieval (finding the relevant articles) with that of machine comprehension of text (identifying the answer spans from those articles). Our approach combines a search component based on bigram hashing and TF-IDF matching with a multi-layer recurrent neural network model trained to detect answers in Wikipedia paragraphs. Our experiments on multiple existing QA datasets indicate that (1) both modules are highly competitive with respect to existing counterparts and (2) multitask learning using distant supervision on their combination is an effective complete system on this challenging task.", "which has research problem ?", "Question Answering", 43.0, 61.0], ["We present the Pathway Curation (PC) task, a main event extraction task of the BioNLP shared task (ST) 2013. The PC task concerns the automatic extraction of biomolecular reactions from text. The task setting, representation and semantics are defined with respect to pathway model standards and ontologies (SBML, BioPAX, SBO) and documents selected by relevance to specific model reactions. Two BioNLP ST 2013 participants successfully completed the PC task. The highest achieved F-score, 52.8%, indicates that event extraction is a promising approach to supporting pathway curation efforts. The PC task continues as an open challenge with data, resources and tools available from http://2013.bionlp-st.org/", "which has research problem ?", "Pathway Curation (PC) task", NaN, NaN], ["Relation extraction (RE) is an indispensable information extraction task in several disciplines. RE models typically assume that named entity recognition (NER) is already performed in a previous step by another independent model. Several recent efforts, under the theme of end-to-end RE, seek to exploit inter-task correlations by modeling both NER and RE tasks jointly. Earlier work in this area commonly reduces the task to a table-filling problem wherein an additional expensive decoding step involving beam search is applied to obtain globally consistent cell labels. 
In efforts that do not employ table-filling, global optimization in the form of CRFs with Viterbi decoding for the NER component is still necessary for competitive performance. We introduce a novel neural architecture utilizing the table structure, based on repeated applications of 2D convolutions for pooling local dependency and metric-based features, that improves on the state-of-the-art without the need for global optimization. We validate our model on the ADE and CoNLL04 datasets for end-to-end RE and demonstrate $\\approx 1\\%$ gain (in F-score) over prior best results with training and testing times that are seven to ten times faster --- the latter highly advantageous for time-sensitive end user applications.", "which has research problem ?", "Relation Extraction", 0.0, 19.0], ["Essential oil from Artemisia herba alba (Art) was hydrodistilled and tested as a corrosion inhibitor of steel in 0.5 M H2SO4 using weight loss measurements and electrochemical polarization methods. Results gathered show that this natural oil reduced the corrosion rate by the cathodic action. Its inhibition efficiency attains the maximum (74%) at 1 g/L. The inhibition efficiency of Art oil increases with the rise of temperature. The adsorption isotherm of the natural product on the steel has been determined. A. herba alba essential oil was obtained by hydrodistillation and its chemical composition was investigated by capillary GC and GC/MS. The major components were chrysanthenone (30.6%) and camphor (24.4%).", "which has research problem ?", "Oil", 10.0, 13.0], ["Automatic subject indexing has been a longstanding goal of digital curators to facilitate effective retrieval access to large collections of both online and offline information resources. Controlled vocabularies are often used for this purpose, as they standardise annotation practices and help users to navigate online resources through following interlinked topical concepts. However, to this date, the assignment of suitable text annotations from a controlled vocabulary is still largely done manually, or at most (semi-)automatically, even though effective machine learning tools are already in place. This is because existing procedures require a sufficient amount of training data and they have to be adapted to each vocabulary, language and application domain anew. In this paper, we argue that there is a third solution to subject indexing which harnesses cross-domain knowledge graphs. Our KINDEX approach fuses distributed knowledge graph information from different sources. Experimental evaluation shows that the approach achieves good accuracy scores by exploiting correspondence links of publicly available knowledge graphs.", "which has research problem ?", "automatic subject indexing", 0.0, 26.0], ["In this paper, we describe the successful results of an international research project focused on the use of Web technology in the educational context. The article explains how this international project, funded by public organizations and developed over the last two academic years, focuses on the area of open educational resources (OER) and particularly the educational content of the OpenCourseWare (OCW) model. This initiative has been developed by a research group composed of researchers from three countries. The project was enabled by the Universidad Politecnica de Madrid OCW Office's leadership of the Consortium of Latin American Universities and the distance education know-how of the Universidad Tecnica Particular de Loja (UTPL, Ecuador). 
We give a full account of the project, methodology, main outcomes and validation. The project results have further consolidated the group, and increased the maturity of group members and networking with other groups in the area. The group is now participating in other research projects that continue the lines developed here.", "which has research problem ?", "Open Education", NaN, NaN], ["The re-use of digital content is an essential part of the knowledge-based economy. The online accessibility of open materials will make possible for teachers, students and self-learners to use them for leisure, studies or work. Recent studies have focused on the use or reuse of digital resources with specific reference to the open data field. Moreover, open data can be reused \u2014 for both commercial and non-commercial purposes \u2014 for uses such as developing learning and educational content, with full respect for copyright and related rights. This work presents a rating system for open data as OER is proposed. This rating system present a framework to search, download and re-use digital resources by teachers and students. The rating system proposed is built from the ground upwards on Open data principles and Semantic Web technologies. By following open data best practices and Linked Data principles, open data initiative ensures that data hosted can be fully connected into a Web of Linked Data. This work aims at sharing good practices for administrative/academics/researchers on gathering and disseminating good quality data, using interoperable standards and overcoming legal obstacles. In this way, open data becomes a tool for teaching and educational environments to improve engagement and student learning. Designing an open data repository that manages and shares information of different catalogues and an evaluation tool to OER will allow teachers and students to increase educational contents and to improve relationship between open data initiatives and education context.", "which has research problem ?", "Open Data", 328.0, 337.0], ["A common concern in relation to smart cities is how to turn the concept into reality. The aim of this research is to investigate the implementation process of smart cities based upon the experience of South Korea\u2019s U-City projects. The research shows that poorly-managed conflicts during implementation can diminish the potential of smart cities and discourage future improvements. The nature of smart cities is based on the concept of governance, while the planning practice is still in the notion of government. In order to facilitate the collaborative practice, the research has shown that collaborative institutional arrangements and joint fact-finding processes might secure an integrated service delivery for smart cities by overcoming operational difficulties in real-life contexts.", "which has research problem ?", "Smart cities", 32.0, 44.0], ["Software testing is an important and valuable part of the software development life cycle. Due to time, cost and other circumstances, exhaustive testing is not feasible that\u2019s why there is a need to automate the testing process. Testing effectiveness can be achieved by the State Transition Testing (STT) which is commonly used in real time, embedded and web-based kinds of software systems. The tester\u2019s main job to test all the possible transitions in the system. This paper proposed an Ant Colony Optimization (ACO) technique for the automated and full coverage of all state-transitions in the system. 
Present paper approach generates test sequence in order to obtain the complete software coverage. This paper also discusses the comparison between two metaheuristic techniques (Genetic Algorithm and Ant Colony optimization) for transition based testing.", "which has research problem ?", "Ant Colony Optimization", 489.0, 512.0], ["Mass customization and increasing product complexity require new methods to ensure a continuously high product quality. In the case of product failures it has to be determined what distinguishes flawed products. The data generated by cybertronic products over their lifecycle offers new possibilities to find such distinctions. To manage this data for individual product instances the concept of a Digital Twin has been proposed. This paper introduces the elements of a Digital Twin for root cause analysis and product quality monitoring and suggests a data structure that enables data analytics.", "which has research problem ?", "digital twin", 398.0, 410.0], ["A knowledge graph (KG), also known as a knowledge base, is a particular kind of network structure in which the node indicates entity and the edge represent relation. However, with the explosion of network volume, the problem of data sparsity that causes large-scale KG systems to calculate and manage difficultly has become more significant. For alleviating the issue, knowledge graph embedding is proposed to embed entities and relations in a KG to a low-, dense and continuous feature space, and endow the yield model with abilities of knowledge inference and fusion. In recent years, many researchers have poured much attention in this approach, and we will systematically introduce the existing state-of-the-art approaches and a variety of applications that benefit from these methods in this paper. In addition, we discuss future prospects for the development of techniques and application trends. Specifically, we first introduce the embedding models that only leverage the information of observed triplets in the KG. We illustrate the overall framework and specific idea and compare the advantages and disadvantages of such approaches. Next, we introduce the advanced models that utilize additional semantic information to improve the performance of the original methods. We divide the additional information into two categories, including textual descriptions and relation paths. The extension approaches in each category are described, following the same classification criteria as those defined for the triplet fact-based models. We then describe two experiments for comparing the performance of listed methods and mention some broader domain tasks such as question answering, recommender systems, and so forth. Finally, we collect several hurdles that need to be overcome and provide a few future research directions for knowledge graph embedding.", "which has research problem ?", "systematically introduce the existing state-of-the-art approaches and a variety of applications", 669.0, 764.0], ["\nPurpose\nIn terms of entrepreneurship, open data benefits include economic growth, innovation, empowerment and new or improved products and services. Hackathons encourage the development of new applications using open data and the creation of startups based on these applications. Researchers focus on factors that affect nascent entrepreneurs\u2019 decision to create a startup but researches in the field of open data hackathons have not been fully investigated yet. 
This paper aims to suggest a model that incorporates factors that affect the decision of establishing a startup by developers who have participated in open data hackathons.\n\n\nDesign/methodology/approach\nIn total, 70 papers were examined and analyzed using a three-phased literature review methodology, which was suggested by Webster and Watson (2002). These surveys investigated several factors that affect a nascent entrepreneur to create a startup.\n\n\nFindings\nEventually, by identifying the motivations for developers to participate in a hackathon, and understanding the benefits of the use of open data, researchers will be able to elaborate the proposed model and evaluate if the contest has contributed to the decision of establish a startup and what factors affect the decision to establish a startup apply to open data developers, and if the participants of the contest agree with these factors.\n\n\nOriginality/value\nThe paper expands the scope of open data research on entrepreneurship field, stating the need for more research to be conducted regarding the open data in entrepreneurship through hackathons.\n", "which has research problem ?", "aims to suggest a model that incorporates factors that affect the decision of establishing a startup by developers who have participated in open data hackathons", 553.0, 713.0], ["Using multivariate analyses, individual risk of clinically definite multiple sclerosis (CDMS) after monosymptomatic optic neuritis (MON) was quantified in a prospective study with clinical MON onset during 1990-95 in Stockholm, Sweden. During a mean follow-up time of 3.8 years, the presence of MS-like brain magnetic resonance imaging (MRI) lesions and oligoclonal immunoglobulin (Ig) G bands in cerebrospinal fluid (CSF) were strong prognostic markers of CDMS, with relative hazard ratios of 4.68 {95% confidence interval (CI) 2.21-9.91} and 5.39 (95% CI 1.56-18.61), respectively. Age and season of clinical onset were also significant predictors, with relative hazard ratios of 1.76 (95% CI 1.02-3.04) and 2.21 (95% CI 1.13-3.98), respectively. Based on the above two strong predictors, individual probability of CDMS development after MON was calculated in a three-quarter sample drawn from a cohort, with completion of follow-up at three years. The highest probability, 0.66 (95% CI 0.48-0.80), was obtained for individuals presenting with three or more brain MRI lesions and oligoclonal bands in the CSF, and the lowest, 0.09 (95% CI 0.02-0.32), for those not presenting with these traits. Medium values, 0.29 (95% CI 0.13-0.53) and 0.32 (95% CI 0.07-0.73), were obtained for individuals discordant for the presence of brain MRI lesions and oligoclonal bands in the CSF. These predictions were validated in an external one-quarter sample.", "which has research problem ?", "Multiple sclerosis", 68.0, 86.0], ["We found 42 of 74 patients (57%) with isolated monosymptomatic optic neuritis to have 1 to 20 brain lesions, by magnetic resonance imaging (MRI). All of the brain lesions were clinically silent and had characteristics consistent with multiple sclerosis (MS). None of the patients had ever experienced neurologic symptoms prior to the episode of optic neuritis. During 5.6 years of follow\u2010up, 21 patients (28%) developed definite MS on clinical grounds. 
Sixteen of the 21 converting patients (76%) had abnormal MRIs; the other 5 (24%) had MRIs that were normal initially (when they had optic neuritis only) and when repeated after they had developed clinical MS in 4 of the 5. Of the 53 patients who have not developed clinically definite MS, 26 (49%) have abnormal MRIs and 27 (51%) have normal MRIs. The finding of an abnormal MRI at the time of optic neuritis was significantly related to the subsequent development of MS on clinical grounds, but interpretation of the strength of that relationship must be tempered by the fact that some of the converting patients had normal MRIs and approximately half of the patients who did not develop clinical MS had abnormal MRIs. We found that abnormal IgG levels in the cerebrospinal fluid correlated more strongly than abnormal MRIs with the subsequent development of clinically definite MS.", "which has research problem ?", "Multiple sclerosis", 234.0, 252.0], ["This work explores hypernetworks: an approach of using one network, also known as a hypernetwork, to generate the weights for another network. We apply hypernetworks to generate adaptive weights for recurrent networks. In this case, hypernetworks can be viewed as a relaxed form of weight-sharing across layers. In our implementation, hypernetworks are are trained jointly with the main network in an end-to-end fashion. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks.", "which has research problem ?", "Language Modelling", 604.0, 622.0], ["Software Testing is the process of testing the software in order to ensure that it is free of errors and produces the desired outputs in any given situation. Model based testing is an approach in which software is viewed as a set of states. A usage model describes software on the basis of its statistical usage data. One of the major problems faced in such an approach is the generation of optimal sets of test sequences. The model discussed in this paper is a Markov chain based usage model. The analytical operations and results associated with Markov chains make them an appropriate choice for checking the feasibility of test sequences while they are being generated. The statistical data about the estimated usage has been used to build a stochastic model of the software under test. This paper proposes a technique to generate optimized test sequences from a markov chain based usage model. The proposed technique uses ant colony optimization as its basis and also incorporates factors like cost and criticality of various states in the model. It further takes into consideration the average number of visits to any state and the trade-off between cost considerations and optimality of the test coverage.", "which has research problem ?", "Ant Colony Optimization", 926.0, 949.0], ["The aim of ontology learning is to generate domain models (semi-) automatically. We apply an ontology learning system to create domain ontologies from scratch in a monthly interval and use the resulting data to detect and analyze trends in the domain. In contrast to traditional trend analysis on the level of single terms, the application of semantic technologies allows for a more abstract and integrated view of the domain. 
A Web frontend displays the resulting ontologies, and a number of analyses are performed on the data collected. This frontend can be used to detect trends and evolution in a domain, and dissect them on an aggregated, as well as a fine-grained-level.", "which has research problem ?", "an ontology learning system to create domain ontologies from scratch", 90.0, 158.0], ["MOOCs (Massive Open Online Courses) promote mass education through collaboration scenarios between participants. The purpose of this paper is to analyze the characteristics of MOOCs that can be incorporated into environments such as OpenCourseWare. We develop a study on how the concept of OCW evolves nowadays: focused on the distribution of open educational resources, towards a concept of collaborative open massive training (as with the MOOC), that not all current learning platforms provide. This new generation of social OCW will be called OCW-S.", "which has research problem ?", "Open Education", NaN, NaN], ["A long decade ago economic growth was the reigning fashion of political economy. It was simultaneously the hottest subject of economic theory and research, a slogan eagerly claimed by politicians of all stripes, and a serious objective of the policies of governments. The climate of opinion has changed dramatically. Disillusioned critics indict both economic science and economic policy for blind obeisance to aggregate material \"progress,\" and for neglect of its costly side effects. Growth, it is charged, distorts national priorities, worsens the distribution of income, and irreparably damages the environment. Paul Erlich speaks for a multitude when he says, \"We must acquire a life style which has as its goal maximum freedom and happiness for the individual, not a maximum Gross National Product.\" Growth was in an important sense a discovery of economics after the Second World War. Of course economic development has always been the grand theme of historically minded scholars of large mind and bold concept, notably Marx, Schumpeter, Kuznets. But the mainstream of economic analysis was not comfortable with phenomena of change and progress. The stationary state was the long-run equilibrium of classical and neoclassical theory, and comparison of alternative static equilibriums was the most powerful theoretical tool. Technological change and population increase were most readily accommodated as one-time exogenous shocks; comparative static analysis could be used to tell how they altered the equilibrium of the system. The obvious fact that these \"shocks\" were occurring continuously, never allowing the", "which has research problem ?", "political economy", 62.0, 79.0], ["Named Entity Recognition (NER) is a key component in NLP systems for question answering, information retrieval, relation extraction, etc. NER systems have been studied and developed widely for decades, but accurate systems using deep neural networks (NN) have only been introduced in the last few years. We present a comprehensive survey of deep neural network architectures for NER, and contrast them with previous approaches to NER based on feature engineering and other supervised or semi-supervised learning algorithms. 
Our results highlight the improvements achieved by neural networks, and show how incorporating some of the lessons learned from past work on feature-based NER systems can yield further improvements.", "which has research problem ?", "Named Entity Recognition", 0.0, 24.0], ["In this paper, we present an enhanced pictorial structure (PS) model for precise eye localization, a fundamental problem involved in many face processing tasks. PS is a computationally efficient framework for part-based object modelling. For face images taken under uncontrolled conditions, however, the traditional PS model is not flexible enough for handling the complicated appearance and structural variations. To extend PS, we 1) propose a discriminative PS model for a more accurate part localization when appearance changes seriously, 2) introduce a series of global constraints to improve the robustness against scale, rotation and translation, and 3) adopt a heuristic prediction method to address the difficulty of eye localization with partial occlusion. Experimental results on the challenging LFW (Labeled Face in the Wild) database show that our model can locate eyes accurately and efficiently under a broad range of uncontrolled variations involving poses, expressions, lightings, camera qualities, occlusions, etc.", "which has research problem ?", "Eye localization", 81.0, 97.0], ["Understanding unstructured text is a major goal within natural language processing. Comprehension tests pose questions based on short text passages to evaluate such understanding. In this work, we investigate machine comprehension on the challenging {\\it MCTest} benchmark. Partly because of its limited size, prior work on {\\it MCTest} has focused mainly on engineering better features. We tackle the dataset with a neural approach, harnessing simple neural networks arranged in a parallel hierarchy. The parallel hierarchy enables our model to compare the passage, question, and answer from a variety of trainable perspectives, as opposed to using a manually designed, rigid feature set. Perspectives range from the word level to sentence fragments to sequences of sentences; the networks operate only on word-embedding representations of text. When trained with a methodology designed to help cope with limited training data, our Parallel-Hierarchical model sets a new state of the art for {\\it MCTest}, outperforming previous feature-engineered approaches slightly and previous neural approaches by a significant margin (over 15\\% absolute).", "which has research problem ?", "Machine comprehension", 209.0, 230.0], ["In the past years, many methodologies and tools have been developed to assess the FAIRness of research data. These different methodologies and tools have been based on various interpretations of the FAIR principles, which makes comparison of the results of the assessments difficult. The work in the RDA FAIR Data Maturity Model Working Group reported here has delivered a set of indicators with priorities and guidelines that provide a \u2018lingua franca\u2019 that can be used to make the results of the assessment using those methodologies and tools comparable. The model can act as a tool that can be used by various stakeholders, including researchers, data stewards, policy makers and funding agencies, to gain insight into the current FAIRness of data as well as into the aspects that can be improved to increase the potential for reuse of research data. 
Through increased efficiency and effectiveness, it helps research activities to solve societal challenges and to support evidence-based decisions. The Maturity Model is publicly available and the Working Group is encouraging application of the model in practice. Experience with the model will be taken into account in the further development of the model.", "which has research problem ?", "assess the FAIRness of research data", 71.0, 107.0], ["In our paper, we applied a non-pheromone based intelligent swarm optimization technique namely artificial bee colony optimization (ABC) for test suite optimization. Our approach is a population based algorithm, in which each test case represents a possible solution in the optimization problem and happiness value which is a heuristic introduced to each test case corresponds to the quality or fitness of the associated solution. The functionalities of three groups of bees are extended to three agents namely Search Agent, Selector Agent and Optimizer Agent to select efficient test cases among near infinite number of test cases. Because of the parallel behavior of these agents, the solution generation becomes faster and makes the approach an efficient one. Since, the test adequacy criterion we used is path coverage; the quality of the test cases is improved during each iteration to cover the paths in the software. Finally, we compared our approach with Ant Colony Optimization (ACO), a pheromone based optimization technique in test suite optimization and finalized that, ABC based approach has several advantages over ACO based optimization.", "which has research problem ?", "Ant Colony Optimization", 962.0, 985.0], ["Urban performance currently depends not only on a city's endowment of hard infrastructure (physical capital), but also, and increasingly so, on the availability and quality of knowledge communication and social infrastructure (human and social capital). The latter form of capital is decisive for urban competitiveness. Against this background, the concept of the \u201csmart city\u201d has recently been introduced as a strategic device to encompass modern urban production factors in a common framework and, in particular, to highlight the importance of Information and Communication Technologies (ICTs) in the last 20 years for enhancing the competitive profile of a city. The present paper aims to shed light on the often elusive definition of the concept of the \u201csmart city.\u201d We provide a focused and operational definition of this construct and present consistent evidence on the geography of smart cities in the EU27. Our statistical and graphical analyses exploit in depth, for the first time to our knowledge, the most recent version of the Urban Audit data set in order to analyze the factors determining the performance of smart cities. We find that the presence of a creative class, the quality of and dedicated attention to the urban environment, the level of education, and the accessibility to and use of ICTs for public administration are all positively correlated with urban wealth. This result prompts the formulation of a new strategic agenda for European cities that will allow them to achieve sustainable urban development and a better urban landscape.", "which has research problem ?", "Smart cities", 889.0, 901.0], ["Several deep learning models have been proposed for question answering. However, due to their single-pass nature, they have no way to recover from local maxima corresponding to incorrect answers. 
To address this problem, we introduce the Dynamic Coattention Network (DCN) for question answering. The DCN first fuses co-dependent representations of the question and the document in order to focus on relevant parts of both. Then a dynamic pointing decoder iterates over potential answer spans. This iterative procedure enables the model to recover from initial local maxima corresponding to incorrect answers. On the Stanford question answering dataset, a single DCN model improves the previous state of the art from 71.0% F1 to 75.9%, while a DCN ensemble obtains 80.4% F1.", "which has research problem ?", "Question Answering", 52.0, 70.0], ["The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents\u2019 circumstances, such as a spouse\u2019s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.", "which has research problem ?", "gender", 401.0, 407.0], ["Facility disruptions in the supply chain often lead to catastrophic consequences, although they occur rarely. The low frequency and non-repeatability of disruptive events also make it impossible to estimate the disruption probability accurately. Therefore, we construct an uncertain programming model to design the three-echelon supply chain network with the disruption risk, in which disruptions are considered as uncertain events. Under the constraint of satisfying customer demands, the model optimises the selection of retailers with uncertain disruptions and the assignment of customers and retailers, in order to minimise the expected total cost of network design. In addition, we simplify the proposed model by analysing its properties and further linearise the simplified model. A Lagrangian relaxation algorithm for the linearised model and a genetic algorithm for the simplified model are developed to solve medium-scale problems and large-scale problems, respectively. Finally, we illustrate the effectiveness of proposed models and algorithms through several numerical examples.", "which has research problem ?", "Supply chain", 28.0, 40.0], ["Regression testing is primarily a maintenance activity that is performed frequently to ensure the validity of the modified software. In such cases, due to time and cost constraints, the entire test suite cannot be run. Thus, it becomes essential to prioritize the tests in order to cover maximum faults in minimum time. In this paper, ant colony optimization is used, which is a new way to solve time constraint prioritization problem. 
This paper presents the regression test prioritization technique to reorder test suites in time constraint environment along with an algorithm that implements the technique.", "which has research problem ?", "Ant Colony Optimization", 335.0, 358.0], ["Knowledge base completion (KBC) aims to predict missing information in a knowledge base. In this paper, we address the out-of-knowledge-base (OOKB) entity problem in KBC: how to answer queries concerning test entities not observed at training time. Existing embedding-based KBC models assume that all test entities are available at training time, making it unclear how to obtain embeddings for new entities without costly retraining. To solve the OOKB entity problem without retraining, we use graph neural networks (Graph-NNs) to compute the embeddings of OOKB entities, exploiting the limited auxiliary knowledge provided at test time. The experimental results show the effectiveness of our proposed model in the OOKB setting. Additionally, in the standard KBC setting in which OOKB entities are not involved, our model achieves state-of-the-art performance on the WordNet dataset. The code and dataset are available at this https URL", "which has research problem ?", "Knowledge Base Completion", 0.0, 25.0], ["Vacuum ultraviolet (VUV) photons in plasma processing systems are known to alter surface chemistry and may damage gate dielectrics and photoresist. We characterize absolute VUV fluxes to surfaces exposed in an inductively coupled argon plasma, 1\u201350 mTorr, 25\u2013400 W, using a calibrated VUV spectrometer. We also demonstrate an alternative method to estimate VUV fluence in an inductively coupled plasma (ICP) reactor using a chemical dosimeter-type monitor. We illustrate the technique with argon ICP and xenon lamp exposure experiments, comparing direct VUV measurements with measured chemical changes in 193 nm photoresist-covered Si wafers following VUV exposure.", "which Plasma_discharge ?", "ICP", 403.0, 406.0], ["Hydrogen and nitrogen containing discharges emit intense radiation in a broad wavelength region in the VUV. The measured radiant power of individual molecular transitions and atomic lines between 117 nm and 280 nm are compared to those obtained in the visible spectral range and moreover to the RF power supplied to the ICP discharge. In hydrogen plasmas driven at 540 W of RF power up to 110 W are radiated in the VUV, whereas less than 2 W is emitted in the VIS. In nitrogen plasmas the power level of about 25 W is emitted both in the VUV and in the VIS. In hydrogen\u2013nitrogen mixtures, the NH radiation increases the VUV amount. The analysis of molecular and atomic hydrogen emission supported by a collisional radiative model allowed determining plasma parameters and particle densities and thus particle fluxes. A comparison of the fluxes showed that the photon fluxes determined from the measured emission are similar to the ion fluxes, whereas the atomic hydrogen fluxes are by far dominant. Photon fluxes up to 5 \u00d7 10^20 m\u22122 s\u22121 are obtained, demonstrating that the VUV radiation should not be neglected in surface modifications processes, whereas the radiant power converted to VUV photons is to be considered in power balances. Varying the admixture of nitrogen to hydrogen offers a possibility to tune photon fluxes in the respective wavelength intervals.", "which Plasma_discharge ?", "ICP", 320.0, 323.0], ["The \u00b5-APPJ is a well-investigated atmospheric pressure RF plasma jet. 
Up to now, it has mainly been operated using helium as feed gas due to stability restrictions. However, the COST-Jet design including precise electrical probes now offers the stability and reproducibility to create equi-operational plasmas in helium as well as in argon. In this publication, we compare fundamental plasma parameters and physical processes inside the COST reference microplasma jet, a capacitively coupled RF atmospheric pressure plasma jet, under operation in argon and in helium. Differences already observable by the naked eye are reflected in differences in the power-voltage characteristic for both gases. Using an electrical model and a power balance, we calculated the electron density and temperature at 0.6 W to be 9e17 m-3, 1.2 eV and 7.8e16 m-3, 1.7 eV for argon and helium, respectively. In case of helium, a considerable part of the discharge power is dissipated in elastic electron-atom collisions, while for argon most of the input power is used for ionization. Phase-resolved emission spectroscopy reveals differently pronounced heating mechanisms. Whereas bulk heating is more prominent in argon compared to helium, the opposite trend is observed for sheath heating. This also explains the different behavior observed in the power-voltage characteristics.", "which Plasma_discharge ?", "COST-Jet", 178.0, 186.0], ["We have studied the impact of HBr plasma treatment and the role of the VUV light emitted by this plasma on the chemical modifications and resulting roughness of both blanket and patterned photoresists. The experimental results show that both treatments lead to similar resist bulk chemical modifications that result in a decrease of the resist glass transition temperature (Tg). This drop in Tg allows polymer chain rearrangement that favors surface roughness smoothening. The smoothening effect is mainly attributed to main chain scission induced by plasma VUV light. For increased VUV light exposure time, the crosslinking mechanism dominates over main chain scission and limits surface roughness smoothening. In the case of the HBr plasma treatment, the synergy between Bromine radicals and VUV light leads to the formation of dense graphitized layers on top and sidewalls surfaces of the resist pattern. The presence of a dense layer on the HBr cured resist sidewalls prevents from resist pattern reflowing but on the counter side leads to increased surface roughness and linewidth roughness compared to VUV light treatment.", "which Feed_gases ?", "HBr", 30.0, 33.0], ["Heat is of fundamental importance in many cellular processes such as cell metabolism, cell division and gene expression. (1-3) Accurate and noninvasive monitoring of temperature changes in individual cells could thus help clarify intricate cellular processes and develop new applications in biology and medicine. Here we report the use of green fluorescent proteins (GFP) as thermal nanoprobes suited for intracellular temperature mapping. Temperature probing is achieved by monitoring the fluorescence polarization anisotropy of GFP. The method is tested on GFP-transfected HeLa and U-87 MG cancer cell lines where we monitored the heat delivery by photothermal heating of gold nanorods surrounding the cells. A spatial resolution of 300 nm and a temperature accuracy of about 0.4 \u00b0C are achieved. 
Benefiting from its full compatibility with widely used GFP-transfected cells, this approach provides a noninvasive tool for fundamental and applied research in areas ranging from molecular biology to therapeutic and diagnostic studies.", "which Material ?", "Protein", NaN, NaN], ["To improve designs of e-learning materials, it is necessary to know which word or figure a learner felt \"difficult\" in the materials. In this pilot study, we measured electroencephalography (EEG) and eye gaze data of learners and analyzed to estimate which area they had difficulty to learn. The developed system realized simultaneous measurements of physiological data and subjective evaluations during learning. Using this system, we observed specific EEG activity in difficult pages. Integrating of eye gaze and EEG measurements raised a possibility to determine where a learner felt \"difficult\" in a page of learning materials. From these results, we could suggest that the multimodal measurements of EEG and eye gaze would lead to effective improvement of learning materials. For future study, more data collection using various materials and learners with different backgrounds is necessary. This study could lead to establishing a method to improve e-learning materials based on learners' mental states.", "which Material ?", "a page of learning materials", 602.0, 630.0], ["Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages. MAG on the other hand achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages.", "which Material ?", "trained mono-lingual models", 130.0, 157.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. 
In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Material ?", "random-tweet dataset", 1852.0, 1872.0], ["Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.", "which Material ?", "a Linked Open Dataset", 718.0, 739.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. 
Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "iodide-based compositions", 543.0, 568.0], ["The Logical Observation Identifiers, Names and Codes (LOINC) is a common terminology used for standardizing laboratory terms. Within the consortium of the HiGHmed project, LOINC is one of the central terminologies used for health data sharing across all university sites. Therefore, linking the LOINC codes to the site-specific tests and measures is one crucial step to reach this goal. In this work we report our ongoing efforts in implementing LOINC to our laboratory information system and research infrastructure, as well as our challenges and the lessons learned. 407 local terms could be mapped to 376 LOINC codes of which 209 are already available to routine laboratory data. In our experience, mapping of local terms to LOINC is a widely manual and time consuming process for reasons of language and expert knowledge of local laboratory procedures.", "which Material ?", "all university sites", 250.0, 270.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. 
An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Material ?", "deep learning model called sAtt-BLSTM convNet", 804.0, 849.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.", "which Material ?", "modern information and communication technologies", 269.0, 318.0], ["Background Visual atypicalities in autism spectrum disorder (ASD) are a well documented phenomenon, beginning as early as 2\u20136 months of age and manifesting in a significantly decreased attention to the eyes, direct gaze and socially salient information. Early emerging neurobiological deficits in perceiving social stimuli as rewarding or its active avoidance due to the anxiety it entails have been widely purported as potential reasons for this atypicality. Parallel research evidence also points to the significant benefits of animal presence for reducing social anxiety and enhancing social interaction in children with autism. While atypicality in social attention in ASD has been widely substantiated, whether this atypicality persists equally across species types or is confined to humans has not been a key focus of research insofar. Methods We attempted a comprehensive examination of the differences in visual attention to static images of human and animal faces (40 images; 20 human faces and 20 animal faces) among children with ASD using an eye tracking paradigm. 44 children (ASD n = 21; TD n = 23) participated in the study (10,362 valid observations) across five regions of interest (left eye, right eye, eye region, face and screen). Results Results obtained revealed significantly greater social attention across human and animal stimuli in typical controls when compared to children with ASD. However in children with ASD, a significantly greater attention allocation was seen to animal faces and eye region and lesser attention to the animal mouth when compared to human faces, indicative of a clear attentional preference to socially salient regions of animal stimuli. The positive attentional bias toward animals was also seen in terms of a significantly greater visual attention to direct gaze in animal images. 
Conclusion Our results suggest the possibility that atypicalities in social attention in ASD may not be uniform across species. It adds to the current neural and biomarker evidence base of the potentially greater social reward processing and lesser social anxiety underlying animal stimuli as compared to human stimuli in children with ASD.", "which Material ?", "static images of human and animal faces", 933.0, 972.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.", "which Material ?", "our relational process model", 927.0, 955.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.", "which Material ?", "our relational process model", 927.0, 955.0], ["Dereplication represents a key step for rapidly identifying known secondary metabolites in complex biological matrices. In this context, liquid-chromatography coupled to high resolution mass spectrometry (LC-HRMS) is increasingly used and, via untargeted data-dependent MS/MS experiments, massive amounts of detailed information on the chemical composition of crude extracts can be generated. An efficient exploitation of such data sets requires automated data treatment and access to dedicated fragmentation databases. Various novel bioinformatics approaches such as molecular networking (MN) and in-silico fragmentation tools have emerged recently and provide new perspective for early metabolite identification in natural products (NPs) research. 
Here we propose an innovative dereplication strategy based on the combination of MN with an extensive in-silico MS/MS fragmentation database of NPs. Using two case studies, we demonstrate that this combined approach offers a powerful tool to navigate through the chemistry of complex NPs extracts, dereplicate metabolites, and annotate analogues of database entries.", "which Material ?", "complex biological matrices", 91.0, 118.0], ["It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.", "which Material ?", "domain", 66.0, 72.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "FAPbI3 NCs", 1097.0, 1107.0], ["While Wikipedia exists in 287 languages, its content is unevenly distributed among them. 
In this work, we investigate the generation of open domain Wikipedia summaries in underserved languages using structured data from Wikidata. To this end, we propose a neural network architecture equipped with copy actions that learns to generate single-sentence and comprehensible textual summaries from Wikidata triples. We demonstrate the effectiveness of the proposed approach by evaluating it against a set of baselines on two languages of different natures: Arabic, a morphological rich language with a larger vocabulary than English, and Esperanto, a constructed language known for its easy acquisition.", "which Material ?", "Arabic", 552.0, 558.0], ["Most question answering (QA) systems over Linked Data, i.e. Knowledge Graphs, approach the question answering task as a conversion from a natural language question to its corresponding SPARQL query. A common approach is to use query templates to generate SPARQL queries with slots that need to be filled. Using templates instead of running an extensive NLP pipeline or end-to-end model shifts the QA problem into a classification task, where the system needs to match the input question to the appropriate template. This paper presents an approach to automatically learn and classify natural language questions into corresponding templates using recursive neural networks. Our model was trained on 5000 questions and their respective SPARQL queries from the preexisting LC-QuAD dataset grounded in DBpedia, spanning 5042 entities and 615 predicates. The resulting model was evaluated using the FAIR GERBIL QA framework resulting in 0.419 macro f-measure on LC-QuAD and 0.417 macro f-measure on QALD-7.", "which Material ?", "recursive neural network", NaN, NaN], ["This paper discusses the potential of current advancements in Information Communication Technologies (ICT) for cultural heritage preservation, valorization and management within contemporary cities. The paper highlights the potential of virtual environments to assess the impacts of heritage policies on urban development. It does so by discussing the implications of virtual globes and crowdsourcing to support the participatory valuation and management of cultural heritage assets. To this purpose, a review of available valuation techniques is here presented together with a discussion on how these techniques might be coupled with ICT tools to promote inclusive governance.", "which Material ?", "Information Communication Technologies (ICT)", NaN, NaN], ["A conformal tactile sensor based on MoS2 and graphene is demonstrated. The MoS2 tactile sensor exhibits excellent sensitivity, high uniformity, and good repeatability in terms of various strains. In addition, the outstanding flexibility enables the MoS2 strain tactile sensor to be realized conformally on a finger tip. The MoS2-based tactile sensor can be utilized for wearable electronics, such as electronic skin.", "which Material ?", "MoS2", 36.0, 40.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. 
Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.", "which Material ?", "a consistent mathematical process model", 539.0, 578.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.", "which Material ?", "modern information and communication technologies", 269.0, 318.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. 
Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Material ?", "online post", 874.0, 885.0], ["We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a highway network over characters, whose output is given to a long short-term memory (LSTM) recurrent neural network language model (RNN-LM). On the English Penn Treebank the model is on par with the existing state-of-the-art despite having 60% fewer parameters. On languages with rich morphology (Arabic, Czech, French, German, Spanish, Russian), the model outperforms word-level/morpheme-level LSTM baselines, again with fewer parameters. The results suggest that on many languages, character inputs are sufficient for language modeling. Analysis of word representations obtained from the character composition part of the model reveals that the model is able to encode, from characters only, both semantic and orthographic information.", "which Material ?", "English Penn Treebank", 341.0, 362.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "nonluminescent, wide-band-gap 1D polymorph", 671.0, 713.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. 
The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Material ?", "feature maps", 1107.0, 1119.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "a building", 24.0, 34.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. 
So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca.", NaN, NaN], ["Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages. MAG on the other hand achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages.", "which Material ?", "datasets in other languages", 750.0, 777.0], ["Flexible strain\u2010sensitive\u2010material\u2010based sensors are desired owing to their widespread applications in intelligent robots, health monitoring, human motion detection, and other fields. High electrical\u2013mechanical coupling behaviors of 2D materials make them one of the most promising candidates for miniaturized, integrated, and high\u2010resolution strain sensors, motivating to explore the influence of strain\u2010induced band\u2010gap changes on electrical properties of more materials and assess their potential application in strain sensors. Herein, a ternary SnSSe alloy nanosheet\u2010based strain sensor is reported showing an enhanced gauge factor (GF) up to 69.7 and a good reproducibility and linearity within strain of 0.9%. Such sensor holds high\u2010sensitive features under low strain, and demonstrates an improved sensitivity with a decrease in the membrane thickness. The high sensitivity is attributed to widening band gap and density of states reduction induced by strain, as verified by theoretical model and first\u2010principles calculations. 
These findings show that a sensor with adjustable strain sensitivity might be realized by simply changing the elemental constituents of 2D alloying materials.", "which Material ?", "SnSSe", 549.0, 554.0], ["High-performance perovskite light-emitting diodes are achieved by an interfacial engineering approach, leading to the most efficient near-infrared devices produced using solution-processed emitters and efficient green devices at high brightness conditions.", "which Material ?", "High-performance perovskite light-emitting diodes", 0.0, 49.0], ["The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.", "which Material ?", "Hsp90 family and cofactors themselves", 1405.0, 1442.0], ["In a period of fiscal constraint, when assumptions about the library as place are being challenged, administrators question the contribution of every expense to student success. Libraries have been successful in migrating resources and services to a digital environment accessible beyond the library. What is the role of the library as place when users do not need to visit the building to utilize library services and resources? We argue that the college library building's core role is as a space for collaborative learning and community interaction that cannot be jettisoned in the new normal.", "which Material ?", "college library", 448.0, 463.0], ["Knowledge graphs (KGs) are widely used for modeling scholarly communication, performing scientometric analyses, and supporting a variety of intelligent services to explore the literature and predict research dynamics. 
However, they often suffer from incompleteness (e.g., missing affiliations, references, research topics), leading to a reduced scope and quality of the resulting analyses. This issue is usually tackled by computing knowledge graph embeddings (KGEs) and applying link prediction techniques. However, only a few KGE models are capable of taking weights of facts in the knowledge graph into account. Such weights can have different meanings, e.g. describe the degree of association or the degree of truth of a certain triple. In this paper, we propose the Weighted Triple Loss, a new loss function for KGE models that takes full advantage of the additional numerical weights on facts and it is even tolerant to incorrect weights. We also extend the Rule Loss, a loss function that is able to exploit a set of logical rules, in order to work with weighted triples. The evaluation of our solutions on several knowledge graphs indicates significant performance improvements with respect to the state of the art. Our main use case is the large-scale AIDA knowledge graph, which describes 21 million research articles. Our approach enables to complete information about affiliation types, countries, and research topics, greatly improving the scope of the resulting scientometrics analyses and providing better support to systems for monitoring and predicting research dynamics.", "which Material ?", "AIDA knowledge graph", 1261.0, 1281.0], ["Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.", "which Material ?", "the biodiversity domain", 1424.0, 1447.0], ["The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. 
Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.", "which Material ?", "T-cells", 436.0, 443.0], ["Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.", "which Material ?", "collective biodiversity knowledge", 271.0, 304.0], ["The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. 
In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.", "which Material ?", "data across nodes", 736.0, 753.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Material ?", "convNet, LSTM, and bidirectional LSTM with/without attention", 1612.0, 1672.0], ["Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. 
The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.", "which Material ?", "published articles or monographs", 195.0, 227.0], ["Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages. MAG on the other hand achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages.", "which Material ?", "non-English languages", 935.0, 956.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. 
Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "MA- or Cs-only cousins", 989.0, 1011.0], ["This paper discusses the potential of current advancements in Information Communication Technologies (ICT) for cultural heritage preservation, valorization and management within contemporary cities. The paper highlights the potential of virtual environments to assess the impacts of heritage policies on urban development. It does so by discussing the implications of virtual globes and crowdsourcing to support the participatory valuation and management of cultural heritage assets. To this purpose, a review of available valuation techniques is here presented together with a discussion on how these techniques might be coupled with ICT tools to promote inclusive governance.\u00a0", "which Material ?", "virtual globes", 376.0, 390.0], ["This paper discusses the potential of current advancements in Information Communication Technologies (ICT) for cultural heritage preservation, valorization and management within contemporary cities. The paper highlights the potential of virtual environments to assess the impacts of heritage policies on urban development. It does so by discussing the implications of virtual globes and crowdsourcing to support the participatory valuation and management of cultural heritage assets. To this purpose, a review of available valuation techniques is here presented together with a discussion on how these techniques might be coupled with ICT tools to promote inclusive governance.\u00a0", "which Material ?", "cultural heritage assets", 466.0, 490.0], ["Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering and dialogue. Rooted in a long tradition in knowledge representation, all popular KBs only store positive information, but abstain from taking any stance towards statements not contained in them. In this paper, we make the case for explicitly stating interesting statements which are not true. Negative statements would be important to overcome current limitations of question answering, yet due to their potential abundance, any effort towards compiling them needs a tight coupling with ranking. We introduce two approaches towards automatically compiling negative statements. (i) In peer-based statistical inferences, we compare entities with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In pattern-based query log extraction, we use a pattern-based approach for harvesting search engine query logs. Experimental results show that both approaches hold promising and complementary potential. 
Along with this paper, we publish the first datasets on interesting negative information, containing over 1.4M statements for 130K popular Wikidata entities.", "which Material ?", "popular Wikidata entities", 1277.0, 1302.0], ["High-performance perovskite light-emitting diodes are achieved by an interfacial engineering approach, leading to the most efficient near-infrared devices produced using solution-processed emitters and efficient green devices at high brightness conditions.", "which Material ?", "solution-processed emitters", 170.0, 197.0], ["The technological development of quantum dots has ushered in a new era in fluorescence bioimaging, which was propelled with the advent of novel multiphoton fluorescence microscopes. Here, the potential use of CdSe quantum dots has been evaluated as fluorescent nanothermometers for two-photon fluorescence microscopy. In addition to the enhancement in spatial resolution inherent to any multiphoton excitation processes, two-photon (near-infrared) excitation leads to a temperature sensitivity of the emission intensity much higher than that achieved under one-photon (visible) excitation. The peak emission wavelength is also temperature sensitive, providing an additional approach for thermal imaging, which is particularly interesting for systems where nanoparticles are not homogeneously dispersed. On the basis of these superior thermal sensitivity properties of the two-photon excited fluorescence, we have demonstrated the ability of CdSe quantum dots to image a temperature gradient artificially created in a biocompatible fluid (phosphate-buffered saline) and also their ability to measure an intracellular temperature increase externally induced in a single living cell.", "which Material ?", "Quantum dots", 33.0, 45.0], ["Abstract Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from 12 to 17 months cycle), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered, its new primary focus is on productivity and agility , to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality-control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. 
monthly) releases with over 21 billion triples with minimal publishing effort .", "which Material ?", "community", 305.0, 314.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "CsPbBr3 NCs", 1243.0, 1254.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 
780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "+ (methylammonium or MA+) or", NaN, NaN], ["It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.", "which Material ?", "theoretical framework", 596.0, 617.0], ["Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service \"Where is My URI\" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.", "which Material ?", "active SPARQL endpoints", 864.0, 887.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. 
Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "a consistent mathematical process model", 540.0, 579.0], ["The amount and diversity of data in the Semantic Web has grown quite. RDF datasets has proportionally more problems than relational datasets due to the way data are published, usually without formal criteria. Entity Resolution is n important issue which is related to a known task of many research communities and it aims at finding all representations that refer to the same entity in different datasets. Yet, it is still an open problem. Blocking methods are used to avoid the quadratic complexity of the brute force approach by clustering entities into blocks and limiting the evaluation of entity specifications to entity pairs within blocks. In the last years only a few blocking methods were conceived to deal with RDF data and novel blocking techniques are required for dealing with noisy and heterogeneous data in the Web of Data. In this paper we present a blocking scheme, CER-Blocking, which is based on an inverted index structure and that uses different data evidences from a triple, aiming to maximize its effectiveness. To overcome the problems of data quality or even the very absence thereof, we use two blocking key definitions. This scheme is part of an ER approach which is based on a relational learning algorithm that addresses the problem by statistical approximation. It was empirically evaluated on real and synthetic datasets which are part of consolidated benchmarks found on the literature.", "which Material ?", "the Web of Data", 822.0, 837.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "a prototype implementation", 622.0, 648.0], ["Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. 
To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.", "which Material ?", "genomics metadata", 646.0, 663.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "the tool", 962.0, 970.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. 
In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Material ?", "hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet)", 871.0, 992.0], ["Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service \"Where is My URI\" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.", "which Material ?", "the hundered of thousands of RDF datasets", 150.0, 191.0], ["Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. 
These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service \"Where is My URI\" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.", "which Material ?", "These datasets", 225.0, 239.0], ["Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W\u00b7sr(-1)\u00b7m(-2) at a current density of 232 mA\u00b7cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 \u00b1 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.", "which Material ?", "injecting layer", 899.0, 914.0], ["Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. 
Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.", "which Material ?", "an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone", NaN, NaN], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Material ?", "papers by authors", 965.0, 982.0], ["Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service \"Where is My URI\" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.", "which Material ?", "an integrated query engine", 572.0, 598.0], ["Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. 
In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service \"Where is My URI\" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.", "which Material ?", "both federated and non-federated SPARQL queries", 735.0, 782.0], ["Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W\u00b7sr(-1)\u00b7m(-2) at a current density of 232 mA\u00b7cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 \u00b1 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.", "which Material ?", "perovskite material", 1602.0, 1621.0], ["The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. 
In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.", "which Material ?", "Semantic Web community", 242.0, 264.0], ["Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.", "which Material ?", "omics data paper template", 751.0, 776.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. 
To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Material ?", "user", 504.0, 508.0], ["Dereplication represents a key step for rapidly identifying known secondary metabolites in complex biological matrices. In this context, liquid-chromatography coupled to high resolution mass spectrometry (LC-HRMS) is increasingly used and, via untargeted data-dependent MS/MS experiments, massive amounts of detailed information on the chemical composition of crude extracts can be generated. An efficient exploitation of such data sets requires automated data treatment and access to dedicated fragmentation databases. Various novel bioinformatics approaches such as molecular networking (MN) and in-silico fragmentation tools have emerged recently and provide new perspective for early metabolite identification in natural products (NPs) research. Here we propose an innovative dereplication strategy based on the combination of MN with an extensive in-silico MS/MS fragmentation database of NPs. Using two case studies, we demonstrate that this combined approach offers a powerful tool to navigate through the chemistry of complex NPs extracts, dereplicate metabolites, and annotate analogues of database entries.", "which Material ?", "dedicated fragmentation databases", 485.0, 518.0], ["Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.", "which Material ?", "Findable, Accessible, Interoperable, Reusable (FAIR) data", NaN, NaN], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. 
To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.", "which Material ?", "a building", 24.0, 34.0], ["ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean\u00b1SD of 2.59\u00b10.28mg/L, 2.63\u00b10.32mg/L and 2.67\u00b10.23mg/L at 0 week to 3.55\u00b10.44mg/L, 3.59\u00b10.34mg/L and 3.6\u00b10.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean\u00b1SD of 2.93\u00b10.33mg/L compared to control group\u2019s 4.42\u00b10.30mg/L and gemfibrozil group\u2019s 4.28\u00b10.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).", "which Material ?", "27, adult healthy male Sprague Dawley rats", 293.0, 335.0], ["This paper discusses the potential of current advancements in Information Communication Technologies (ICT) for cultural heritage preservation, valorization and management within contemporary cities. The paper highlights the potential of virtual environments to assess the impacts of heritage policies on urban development. It does so by discussing the implications of virtual globes and crowdsourcing to support the participatory valuation and management of cultural heritage assets. 
To this purpose, a review of available valuation techniques is here presented together with a discussion on how these techniques might be coupled with ICT tools to promote inclusive governance.\u00a0", "which Material ?", "virtual environments", 245.0, 265.0], ["ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean\u00b1SD of 2.59\u00b10.28mg/L, 2.63\u00b10.32mg/L and 2.67\u00b10.23mg/L at 0 week to 3.55\u00b10.44mg/L, 3.59\u00b10.34mg/L and 3.6\u00b10.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean\u00b1SD of 2.93\u00b10.33mg/L compared to control group\u2019s 4.42\u00b10.30mg/L and gemfibrozil group\u2019s 4.28\u00b10.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).", "which Material ?", "groups A, B and C", 1027.0, 1044.0], ["High-performance perovskite light-emitting diodes are achieved by an interfacial engineering approach, leading to the most efficient near-infrared devices produced using solution-processed emitters and efficient green devices at high brightness conditions.", "which Material ?", "efficient near-infrared devices", 123.0, 154.0], ["Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W\u00b7sr(-1)\u00b7m(-2) at a current density of 232 mA\u00b7cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. 
Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 \u00b1 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.", "which Material ?", "solution-processed hybrid lead halide perovskite (Pe)", NaN, NaN], ["The Logical Observation Identifiers, Names and Codes (LOINC) is a common terminology used for standardizing laboratory terms. Within the consortium of the HiGHmed project, LOINC is one of the central terminologies used for health data sharing across all university sites. Therefore, linking the LOINC codes to the site-specific tests and measures is one crucial step to reach this goal. In this work we report our ongoing efforts in implementing LOINC to our laboratory information system and research infrastructure, as well as our challenges and the lessons learned. 407 local terms could be mapped to 376 LOINC codes of which 209 are already available to routine laboratory data. In our experience, mapping of local terms to LOINC is a widely manual and time consuming process for reasons of language and expert knowledge of local laboratory procedures.", "which Material ?", "our laboratory information system and research infrastructure", 455.0, 516.0], ["Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and \u201cfeeling left out\u201d can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.", "which Material ?", "feelings and emotions", 449.0, 470.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. 
The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Material ?", "novel sAtt-BLSTM convNet model", 1701.0, 1731.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.", "which Material ?", "Many participants", 52.0, 69.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. 
That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Material ?", "online social media content", 25.0, 52.0], ["ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean\u00b1SD of 2.59\u00b10.28mg/L, 2.63\u00b10.32mg/L and 2.67\u00b10.23mg/L at 0 week to 3.55\u00b10.44mg/L, 3.59\u00b10.34mg/L and 3.6\u00b10.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean\u00b1SD of 2.93\u00b10.33mg/L compared to control group\u2019s 4.42\u00b10.30mg/L and gemfibrozil group\u2019s 4.28\u00b10.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).", "which Material ?", "coconut oil 8.0% and sodium cholate", 494.0, 529.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. 
From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "highly versatile photonic sources", 189.0, 222.0], ["The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. 
The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.", "which Material ?", "molecular chaperone Hsp90-dependent proteome", 4.0, 48.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.", "which Material ?", "these technologies", 353.0, 371.0], ["Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service \"Where is My URI\" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.", "which Material ?", "heterogeneous RDF data sources", 667.0, 697.0], ["Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and \u201cfeeling left out\u201d can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. 
We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.", "which Material ?", "not reached groups", 357.0, 375.0], ["\nPurpose\nIn terms of entrepreneurship, open data benefits include economic growth, innovation, empowerment and new or improved products and services. Hackathons encourage the development of new applications using open data and the creation of startups based on these applications. Researchers focus on factors that affect nascent entrepreneurs\u2019 decision to create a startup but researches in the field of open data hackathons have not been fully investigated yet. This paper aims to suggest a model that incorporates factors that affect the decision of establishing a startup by developers who have participated in open data hackathons.\n\n\nDesign/methodology/approach\nIn total, 70 papers were examined and analyzed using a three-phased literature review methodology, which was suggested by Webster and Watson (2002). These surveys investigated several factors that affect a nascent entrepreneur to create a startup.\n\n\nFindings\nEventually, by identifying the motivations for developers to participate in a hackathon, and understanding the benefits of the use of open data, researchers will be able to elaborate the proposed model and evaluate if the contest has contributed to the decision of establish a startup and what factors affect the decision to establish a startup apply to open data developers, and if the participants of the contest agree with these factors.\n\n\nOriginality/value\nThe paper expands the scope of open data research on entrepreneurship field, stating the need for more research to be conducted regarding the open data in entrepreneurship through hackathons.\n", "which Material ?", "new applications", 268.0, 284.0], ["Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering and dialogue. Rooted in a long tradition in knowledge representation, all popular KBs only store positive information, but abstain from taking any stance towards statements not contained in them. In this paper, we make the case for explicitly stating interesting statements which are not true. Negative statements would be important to overcome current limitations of question answering, yet due to their potential abundance, any effort towards compiling them needs a tight coupling with ranking. We introduce two approaches towards automatically compiling negative statements. (i) In peer-based statistical inferences, we compare entities with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In pattern-based query log extraction, we use a pattern-based approach for harvesting search engine query logs. Experimental results show that both approaches hold promising and complementary potential. 
Along with this paper, we publish the first datasets on interesting negative information, containing over 1.4M statements for 130K popular Wikidata entities.", "which Material ?", "highly related entities", 801.0, 824.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Material ?", "figurative literary devices", 699.0, 726.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. 
Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "red-emissive CsPbI3 NCs", 578.0, 601.0], ["Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service \"Where is My URI\" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.", "which Material ?", "raw data dumps and HDT files", 283.0, 311.0], ["The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.", "which Material ?", "many data sets", 62.0, 76.0], ["Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. 
Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W\u00b7sr(-1)\u00b7m(-2) at a current density of 232 mA\u00b7cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 \u00b1 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.", "which Material ?", "electron and hole injecting layers", 558.0, 592.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. 
Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013]", NaN, NaN], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "colloidal state", 1476.0, 1491.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. 
Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "[A = Cs+, CH3NH3", NaN, NaN], ["In this paper, a centimeter-scale monolayer molybdenum disulfide (MoS2) film deposition method has been developed through a simple low-pressure chemical vapor deposition (LPCVD) growth system. The growth pressure dependence on film quality is investigated in this LPCVD system. The layer nature, electrical characteristic of the as-grown MoS2 films indicate that high quality films have been achieved. In addition, a hydrofluoric acid treated SiO2/Si substrate is used to improve the quality of the MoS2 films. Piezoresistive strain sensor based on the monolayer MoS2 film elements is fabricated by directly patterning metal contact pads on MoS2 films through a silicon stencil mask. A gauge factor of 104 \u00b1 26 under compressive strain is obtained by using a four-point bending method, which may inspire new possibilities for two-dimensional (2D) material-based microsystems and electronics.", "which Material ?", "MoS2", 66.0, 70.0], ["Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages. MAG on the other hand achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages.", "which Material ?", "structured knowledge bases", 516.0, 542.0], ["It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. 
The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.", "which Material ?", "writing and reading conceptual models", 514.0, 551.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "cubic crystal structure", 1114.0, 1137.0], ["Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information \u2013 meaningful secondary or derived data \u2013 about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known \u201c(elaborated) data products,\u201d for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation \u2013 as machine readable data and their machine-readable meaning \u2013 is not common practice among environmental research infrastructures. 
For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.", "which Material ?", "machine readable data", 1124.0, 1145.0], ["Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.", "which Material ?", "genomic and other omics datasets", 1397.0, 1429.0], ["Two-dimensional (2D) layered semiconductors have emerged as a highly attractive class of materials for flexible and wearable strain sensor-centric devices such as electronic-skin (e-skin). This is primarily due to their dimensionality, excellent mechanical flexibility, and unique electronic properties. However, the lack of effective and low-cost methods for wafer-scale fabrication of these materials for strain sensor arrays limits their potential for such applications. Here, we report growth of large-scale 2D In2Se3 nanosheets by templated chemical vapor deposition (CVD) method, using In2O3 and Se powders as precursors. The strain sensors fabricated from the as-grown 2D In2Se3 films show 2 orders of magnitude higher sensitivity (gauge factor \u223c237 in \u22120.39% to 0.39% uniaxial strain range along the device channel length) than what has been demonstrated from conventional metal-based (gauge factor: \u223c1\u20135) and graphene-based strain sensors (gauge factor: \u223c2\u20134) in a similar uniaxial strain range. The integrated ...", "which Material ?", "In2Se3", 515.0, 521.0], ["Over the last years, the Web of Data has grown significantly. 
Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service \"Where is My URI\" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.", "which Material ?", "distributed data", 387.0, 403.0], ["The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.", "which Material ?", "key essential and oncogenic signalling pathways", 231.0, 278.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. 
A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Material ?", "text message or post", 559.0, 579.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Material ?", "previously introduced tool", 1199.0, 1225.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. 
To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "modern information and communication technologies", 269.0, 318.0], ["To improve designs of e-learning materials, it is necessary to know which word or figure a learner felt \"difficult\" in the materials. In this pilot study, we measured electroencephalography (EEG) and eye gaze data of learners and analyzed to estimate which area they had difficulty to learn. The developed system realized simultaneous measurements of physiological data and subjective evaluations during learning. Using this system, we observed specific EEG activity in difficult pages. Integrating of eye gaze and EEG measurements raised a possibility to determine where a learner felt \"difficult\" in a page of learning materials. From these results, we could suggest that the multimodal measurements of EEG and eye gaze would lead to effective improvement of learning materials. For future study, more data collection using various materials and learners with different backgrounds is necessary. This study could lead to establishing a method to improve e-learning materials based on learners' mental states.", "which Material ?", "difficult pages", 470.0, 485.0], ["Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. 
Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.", "which Material ?", "European Nucleotide Archive, ArrayExpress, and BioSamples databases", 1027.0, 1094.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Material ?", "specific topics", 330.0, 345.0], ["Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information \u2013 meaningful secondary or derived data \u2013 about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known \u201c(elaborated) data products,\u201d for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. 
The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation \u2013 as machine readable data and their machine-readable meaning \u2013 is not common practice among environmental research infrastructures. For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.", "which Material ?", "individual scientists", 376.0, 397.0], ["While Wikipedia exists in 287 languages, its content is unevenly distributed among them. In this work, we investigate the generation of open domain Wikipedia summaries in underserved languages using structured data from Wikidata. To this end, we propose a neural network architecture equipped with copy actions that learns to generate single-sentence and comprehensible textual summaries from Wikidata triples. We demonstrate the effectiveness of the proposed approach by evaluating it against a set of baselines on two languages of different natures: Arabic, a morphological rich language with a larger vocabulary than English, and Esperanto, a constructed language known for its easy acquisition.", "which Material ?", "Esperanto", 633.0, 642.0], ["AbstractThis paper presents the results of study on titanium dioxide thin films prepared by atomic layer deposition method on a silicon substrate. The changes of surface morphology have been observed in topographic images performed with the atomic force microscope (AFM) and scanning electron microscope (SEM). Obtained roughness parameters have been calculated with XEI Park Systems software. Qualitative studies of chemical composition were also performed using the energy dispersive spectrometer (EDS). The structure of titanium dioxide was investigated by X-ray crystallography. A variety of crystalline TiO2was also confirmed by using the Raman spectrometer. The optical reflection spectra have been measured with UV-Vis spectrophotometry.", "which Material ?", "Titanium dioxide", 85.0, 101.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. 
The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Material ?", "user-generated content", 403.0, 425.0], ["Neural-symbolic systems combine the strengths of neural networks and symbolic formalisms. In this paper, we introduce a neural-symbolic system which combines restricted Boltzmann machines and probabilistic semi-abstract argumentation. We propose to train networks on argument labellings explaining the data, so that any sampled data outcome is associated with an argument labelling. Argument labellings are integrated as constraints within restricted Boltzmann machines, so that the neural networks are used to learn probabilistic dependencies amongst argument labels. Given a dataset and an argumentation graph as prior knowledge, for every example/case K in the dataset, we use a so-called K-maxconsistent labelling of the graph, and an explanation of case K refers to a K-maxconsistent labelling of the given argumentation graph. The abilities of the proposed system to predict correct labellings were evaluated and compared with standard machine learning techniques. Experiments revealed that such argumentation Boltzmann machines can outperform other classification models, especially in noisy settings.", "which Material ?", "probabilistic semi-abstract argumentation", 200.0, 241.0], ["To improve designs of e-learning materials, it is necessary to know which word or figure a learner felt \"difficult\" in the materials. In this pilot study, we measured electroencephalography (EEG) and eye gaze data of learners and analyzed to estimate which area they had difficulty to learn. The developed system realized simultaneous measurements of physiological data and subjective evaluations during learning. Using this system, we observed specific EEG activity in difficult pages. Integrating of eye gaze and EEG measurements raised a possibility to determine where a learner felt \"difficult\" in a page of learning materials. From these results, we could suggest that the multimodal measurements of EEG and eye gaze would lead to effective improvement of learning materials. For future study, more data collection using various materials and learners with different backgrounds is necessary. This study could lead to establishing a method to improve e-learning materials based on learners' mental states.", "which Material ?", "e-learning materials", 22.0, 42.0], ["Knowledge graph completion is still a challenging solution that uses techniques from distinct areas to solve many different tasks. Most recent works, which are based on embedding models, were conceived to improve an existing knowledge graph using the link prediction task. However, even considering the ability of these solutions to solve other tasks, they did not present results for data linking, for example. Furthermore, most of these works focuses only on structural information, i.e., the relations between entities. 
In this paper, we present an approach for data linking that enrich entity embeddings in a model with their literal information and that do not rely on external information of these entities. The key aspect of this proposal is that we use a blocking scheme to improve the effectiveness of the solution in relation to the use of literals. Thus, in addition to the literals from object elements in a triple, we use other literals from subjects and predicates. By merging entity embeddings with their literal information it is possible to extend many popular embedding models. Preliminary experiments were performed on real-world datasets and our solution showed competitive results to the performance of the task of data linking.", "which Material ?", "literals from subjects and predicates", 941.0, 978.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "Our project", 698.0, 709.0], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Material ?", "productive authors", 776.0, 794.0], ["State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing. 
In this paper, we introduce a novel neutral network architecture that benefits from both word- and character-level representations automatically, by using combination of bidirectional LSTM, CNN and CRF. Our system is truly end-to-end, requiring no feature engineering or data pre-processing, thus making it applicable to a wide range of sequence labeling tasks. We evaluate our system on two data sets for two sequence labeling tasks --- Penn Treebank WSJ corpus for part-of-speech (POS) tagging and CoNLL 2003 corpus for named entity recognition (NER). We obtain state-of-the-art performance on both the two data --- 97.55\\% accuracy for POS tagging and 91.21\\% F1 for NER.", "which Material ?", "CoNLL 2003 corpus for named entity recognition (NER)", NaN, NaN], ["Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W\u00b7sr(-1)\u00b7m(-2) at a current density of 232 mA\u00b7cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 \u00b1 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.", "which Material ?", "bright solid state light emitting diodes (LEDs)", NaN, NaN], ["The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. 
In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.", "which Material ?", "scalable RDF data management system", 465.0, 500.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "a building", 24.0, 34.0], ["Representation learning is a critical ingredient for natural language processing systems. Recent Transformer language models like BERT learn powerful textual representations, but these models are targeted towards token- and sentence-level training objectives and do not leverage information on inter-document relatedness, which limits their document-level representation power. For applications on scientific documents, such as classification and recommendation, accurate embeddings of documents are a necessity. We propose SPECTER, a new method to generate document-level embedding of scientific papers based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph. Unlike existing pretrained language models, Specter can be easily applied to downstream applications without task-specific fine-tuning. Additionally, to encourage further research on document-level models, we introduce SciDocs, a new evaluation benchmark consisting of seven document-level tasks ranging from citation prediction, to document classification and recommendation. We show that Specter outperforms a variety of competitive baselines on the benchmark.", "which Material ?", "SciDocs", 945.0, 952.0], ["The integration of different datasets in the Linked Data Cloud is a key aspect to the success of the Web of Data. To tackle this problem most of existent solutions have been supported by the task of entity resolution. However, many challenges still prevail specially when considering different types, structures and vocabularies used in the Web. Another common problem is that data usually are incomplete, inconsistent and contain outliers. To overcome these limitations, some works have applied machine learning algorithms since they are typically robust to both noise and data inconsistencies and are able to efficiently utilize nondeterministic dependencies in the data. 
In this paper we propose an approach based in a relational learning algorithm that addresses the problem by statistical approximation method. Modeling the problem as a relational machine learning task allows exploit contextual information that might be too distant in the relational graph. The joint application of relationship patterns between entities and evidences of similarity between their descriptions can improve the effectiveness of results. Furthermore, it is based on a sparse structure that scales well to large datasets. We present initial experiments based on BTC2012 datasets.", "which Material ?", "the Web of Data", 97.0, 112.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Material ?", "media platforms", 163.0, 178.0], ["Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages. 
MAG on the other hand achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages.", "which Material ?", "23 data sets", 584.0, 596.0], ["Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W\u00b7sr(-1)\u00b7m(-2) at a current density of 232 mA\u00b7cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 \u00b1 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.", "which Material ?", "Compact TiO2 and Spiro-OMeTAD", 515.0, 544.0], ["Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. 
Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.", "which Material ?", "standard-compliant metadata", 1155.0, 1182.0], ["Africa has over 2000 languages. Despite this, African languages account for a small portion of available resources and publications in Natural Language Processing (NLP). This is due to multiple factors, including: a lack of focus from government and funding, discoverability, a lack of community, sheer language complexity, difficulty in reproducing papers and no benchmarks to compare techniques. To begin to address the identified problems, MASAKHANE, an open-source, continent-wide, distributed, online research effort for machine translation for African languages, was founded. In this paper, we discuss our methodology for building the community and spurring research from the African continent, as well as outline the success of the community in terms of addressing the identified problems affecting African NLP.", "which Material ?", "African languages", 46.0, 63.0], ["Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W\u00b7sr(-1)\u00b7m(-2) at a current density of 232 mA\u00b7cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 \u00b1 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.", "which Material ?", "unsealed Pe-LEDs", 1296.0, 1312.0], ["The theme of smart grids will connote in the immediate future the production and distribution of electricity, integrating effectively and in a sustainable way energy deriving from large power stations with that distributed and supplied by renewable sources. In programmes of urban redevelopment, however, the historical city has not yet been subject to significant experimentation, also due to the specific safeguard on this kind of Heritage. 
This reflection opens up interesting new perspectives of research and operations, which could significantly contribute to the pursuit of the aims of the Smart City. This is the main goal of the research here presented and focused on the binomial renovation of a historical complex/enhancement and upgrading of its energy efficiency.", "which Material ?", "historical city", 309.0, 324.0], ["Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.", "which Material ?", "a substantial pool", 80.0, 98.0], ["Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service \"Where is My URI\" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.", "which Material ?", "the Web of Data", 21.0, 36.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. 
Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Material ?", "data set", 700.0, 708.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "these technologies", 354.0, 372.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. 
In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "films", 1499.0, 1504.0], ["Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.", "which Material ?", "data", 20.0, 24.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. 
From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "FA0.1Cs0.9PbI3 NCs", 1149.0, 1167.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. 
Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "FA0.1Cs0.9PbI3 and FAPbI3 NCs", 1727.0, 1756.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "the project leader", 178.0, 196.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "Many participants", 52.0, 69.0], ["Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and \u201cfeeling left out\u201d can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. 
On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.", "which Material ?", "three different underserved audiences", 689.0, 726.0], ["Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages. MAG on the other hand achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages.", "which Material ?", "English datasets (PBOH)", NaN, NaN], ["Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service \"Where is My URI\" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.", "which Material ?", "LOD Laudromat, SPARQL endpoints", 100.0, 131.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. 
So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "red and infrared spectral regions", 494.0, 527.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "Many participants", 52.0, 69.0], ["State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing. In this paper, we introduce a novel neutral network architecture that benefits from both word- and character-level representations automatically, by using combination of bidirectional LSTM, CNN and CRF. Our system is truly end-to-end, requiring no feature engineering or data pre-processing, thus making it applicable to a wide range of sequence labeling tasks. We evaluate our system on two data sets for two sequence labeling tasks --- Penn Treebank WSJ corpus for part-of-speech (POS) tagging and CoNLL 2003 corpus for named entity recognition (NER). 
We obtain state-of-the-art performance on both the two data --- 97.55\\% accuracy for POS tagging and 91.21\\% F1 for NER.", "which Material ?", "Penn Treebank WSJ corpus for part-of-speech (POS) tagging", NaN, NaN], ["It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.", "which Material ?", "framework", 608.0, 617.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "a consistent mathematical process model", 540.0, 579.0], ["Dereplication represents a key step for rapidly identifying known secondary metabolites in complex biological matrices. In this context, liquid-chromatography coupled to high resolution mass spectrometry (LC-HRMS) is increasingly used and, via untargeted data-dependent MS/MS experiments, massive amounts of detailed information on the chemical composition of crude extracts can be generated. An efficient exploitation of such data sets requires automated data treatment and access to dedicated fragmentation databases. Various novel bioinformatics approaches such as molecular networking (MN) and in-silico fragmentation tools have emerged recently and provide new perspective for early metabolite identification in natural products (NPs) research. Here we propose an innovative dereplication strategy based on the combination of MN with an extensive in-silico MS/MS fragmentation database of NPs. 
Using two case studies, we demonstrate that this combined approach offers a powerful tool to navigate through the chemistry of complex NPs extracts, dereplicate metabolites, and annotate analogues of database entries.", "which Material ?", "complex NPs extracts", 1026.0, 1046.0], ["It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.", "which Material ?", "generalizable reusable solution", 252.0, 283.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Material ?", "SemEval 2015 Task 11", 1311.0, 1331.0], ["Africa has over 2000 languages. Despite this, African languages account for a small portion of available resources and publications in Natural Language Processing (NLP). 
This is due to multiple factors, including: a lack of focus from government and funding, discoverability, a lack of community, sheer language complexity, difficulty in reproducing papers and no benchmarks to compare techniques. To begin to address the identified problems, MASAKHANE, an open-source, continent-wide, distributed, online research effort for machine translation for African languages, was founded. In this paper, we discuss our methodology for building the community and spurring research from the African continent, as well as outline the success of the community in terms of addressing the identified problems affecting African NLP.", "which Material ?", "available resources", 95.0, 114.0], ["The Logical Observation Identifiers, Names and Codes (LOINC) is a common terminology used for standardizing laboratory terms. Within the consortium of the HiGHmed project, LOINC is one of the central terminologies used for health data sharing across all university sites. Therefore, linking the LOINC codes to the site-specific tests and measures is one crucial step to reach this goal. In this work we report our ongoing efforts in implementing LOINC to our laboratory information system and research infrastructure, as well as our challenges and the lessons learned. 407 local terms could be mapped to 376 LOINC codes of which 209 are already available to routine laboratory data. In our experience, mapping of local terms to LOINC is a widely manual and time consuming process for reasons of language and expert knowledge of local laboratory procedures.", "which Material ?", "the central terminologies", 188.0, 213.0], ["Abstract Objective To evaluate viral loads at different stages of disease progression in patients infected with the 2019 severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) during the first four months of the epidemic in Zhejiang province, China. Design Retrospective cohort study. Setting A designated hospital for patients with covid-19 in Zhejiang province, China. Participants 96 consecutively admitted patients with laboratory confirmed SARS-CoV-2 infection: 22 with mild disease and 74 with severe disease. Data were collected from 19 January 2020 to 20 March 2020. Main outcome measures Ribonucleic acid (RNA) viral load measured in respiratory, stool, serum, and urine samples. Cycle threshold values, a measure of nucleic acid concentration, were plotted onto the standard curve constructed on the basis of the standard product. Epidemiological, clinical, and laboratory characteristics and treatment and outcomes data were obtained through data collection forms from electronic medical records, and the relation between clinical data and disease severity was analysed. Results 3497 respiratory, stool, serum, and urine samples were collected from patients after admission and evaluated for SARS-CoV-2 RNA viral load. Infection was confirmed in all patients by testing sputum and saliva samples. RNA was detected in the stool of 55 (59%) patients and in the serum of 39 (41%) patients. The urine sample from one patient was positive for SARS-CoV-2. The median duration of virus in stool (22 days, interquartile range 17-31 days) was significantly longer than in respiratory (18 days, 13-29 days; P=0.02) and serum samples (16 days, 11-21 days; P<0.001). The median duration of virus in the respiratory samples of patients with severe disease (21 days, 14-30 days) was significantly longer than in patients with mild disease (14 days, 10-21 days; P=0.04). 
In the mild group, the viral loads peaked in respiratory samples in the second week from disease onset, whereas viral load continued to be high during the third week in the severe group. Virus duration was longer in patients older than 60 years and in male patients. Conclusion The duration of SARS-CoV-2 is significantly longer in stool samples than in respiratory and serum samples, highlighting the need to strengthen the management of stool samples in the prevention and control of the epidemic, and the virus persists longer with higher load and peaks later in the respiratory tissue of patients with severe disease.", "which Material ?", "respiratory sample", NaN, NaN], ["Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information \u2013 meaningful secondary or derived data \u2013 about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known \u201c(elaborated) data products,\u201d for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation \u2013 as machine readable data and their machine-readable meaning \u2013 is not common practice among environmental research infrastructures. For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.", "which Material ?", "their research groups", 350.0, 371.0], ["ABSTRACT \u2018Heritage Interpretation\u2019 has always been considered as an effective learning, communication and management tool that increases visitors\u2019 awareness of and empathy to heritage sites or artefacts. Yet the definition of \u2018digital heritage interpretation\u2019 is still wide and so far, no significant method and objective are evident within the domain of \u2018digital heritage\u2019 theory and discourse. Considering \u2018digital heritage interpretation\u2019 as a process rather than as a tool to present or communicate with end-users, this paper presents a critical application of a theoretical construct ascertained from multiple disciplines and explicates four objectives for a comprehensive interpretive process. A conceptual model is proposed and further developed into a conceptual framework with fifteen considerations. This framework is then implemented and tested on an online platform to assess its impact on end-users\u2019 interpretation level. 
We believe the presented interpretive framework (PrEDiC) will help heritage professionals and media designers to develop interpretive heritage project.", "which Material ?", "end-users", 508.0, 517.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Material ?", "natural language", 457.0, 473.0], ["With biomolecular structure recognized as central to understanding mechanisms in the cell, computational chemists and biophysicists have spent significant efforts on modeling structure and dynamics. While significant advances have been made, particularly in the design of sophisticated energetic models and molecular representations, such efforts are experiencing diminishing returns. One of the culprits is low exploration capability. The impasse has attracted AI researchers to offer adaptations of robot motion planning algorithms for modeling biomolecular structures and motions. This tutorial introduces students and researchers to robotics-inspired treatments and methodologies for understanding and elucidating the role of structure and dynamics in the function of biomolecules. The presentation is enhanced via an open-source software developed in the Shehu Computational Biology laboratory. The software allows researchers to integrate themselves in a new research domain and drive further research via plug-and-play capabilities. The hands-on approach in the the tutorial benefits both students and senior researchers keen to make contributions in computational structural biology.", "which Material ?", "Computational Structural Biology", 1158.0, 1190.0], ["Africa has over 2000 languages. 
Despite this, African languages account for a small portion of available resources and publications in Natural Language Processing (NLP). This is due to multiple factors, including: a lack of focus from government and funding, discoverability, a lack of community, sheer language complexity, difficulty in reproducing papers and no benchmarks to compare techniques. To begin to address the identified problems, MASAKHANE, an open-source, continent-wide, distributed, online research effort for machine translation for African languages, was founded. In this paper, we discuss our methodology for building the community and spurring research from the African continent, as well as outline the success of the community in terms of addressing the identified problems affecting African NLP.", "which Material ?", "African continent", 682.0, 699.0], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Material ?", "the bibliographic database JSTOR", 111.0, 143.0], ["ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean\u00b1SD of 2.59\u00b10.28mg/L, 2.63\u00b10.32mg/L and 2.67\u00b10.23mg/L at 0 week to 3.55\u00b10.44mg/L, 3.59\u00b10.34mg/L and 3.6\u00b10.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. 
Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean\u00b1SD of 2.93\u00b10.33mg/L compared to control group\u2019s 4.42\u00b10.30mg/L and gemfibrozil group\u2019s 4.28\u00b10.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).", "which Material ?", "Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH)", NaN, NaN], ["Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information \u2013 meaningful secondary or derived data \u2013 about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known \u201c(elaborated) data products,\u201d for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation \u2013 as machine readable data and their machine-readable meaning \u2013 is not common practice among environmental research infrastructures. For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.", "which Material ?", "a Jupyter based prototype infrastructure", 1327.0, 1367.0], ["Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W\u00b7sr(-1)\u00b7m(-2) at a current density of 232 mA\u00b7cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. 
Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 \u00b1 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.", "which Material ?", "injecting layers", 497.0, 513.0], ["With biomolecular structure recognized as central to understanding mechanisms in the cell, computational chemists and biophysicists have spent significant efforts on modeling structure and dynamics. While significant advances have been made, particularly in the design of sophisticated energetic models and molecular representations, such efforts are experiencing diminishing returns. One of the culprits is low exploration capability. The impasse has attracted AI researchers to offer adaptations of robot motion planning algorithms for modeling biomolecular structures and motions. This tutorial introduces students and researchers to robotics-inspired treatments and methodologies for understanding and elucidating the role of structure and dynamics in the function of biomolecules. The presentation is enhanced via an open-source software developed in the Shehu Computational Biology laboratory. The software allows researchers to integrate themselves in a new research domain and drive further research via plug-and-play capabilities. The hands-on approach in the the tutorial benefits both students and senior researchers keen to make contributions in computational structural biology.", "which Material ?", "Shehu Computational Biology laboratory", 860.0, 898.0], ["The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.", "which Material ?", "cluster", 955.0, 962.0], ["Abstract Objective To evaluate viral loads at different stages of disease progression in patients infected with the 2019 severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) during the first four months of the epidemic in Zhejiang province, China. Design Retrospective cohort study. Setting A designated hospital for patients with covid-19 in Zhejiang province, China. Participants 96 consecutively admitted patients with laboratory confirmed SARS-CoV-2 infection: 22 with mild disease and 74 with severe disease. 
Data were collected from 19 January 2020 to 20 March 2020. Main outcome measures Ribonucleic acid (RNA) viral load measured in respiratory, stool, serum, and urine samples. Cycle threshold values, a measure of nucleic acid concentration, were plotted onto the standard curve constructed on the basis of the standard product. Epidemiological, clinical, and laboratory characteristics and treatment and outcomes data were obtained through data collection forms from electronic medical records, and the relation between clinical data and disease severity was analysed. Results 3497 respiratory, stool, serum, and urine samples were collected from patients after admission and evaluated for SARS-CoV-2 RNA viral load. Infection was confirmed in all patients by testing sputum and saliva samples. RNA was detected in the stool of 55 (59%) patients and in the serum of 39 (41%) patients. The urine sample from one patient was positive for SARS-CoV-2. The median duration of virus in stool (22 days, interquartile range 17-31 days) was significantly longer than in respiratory (18 days, 13-29 days; P=0.02) and serum samples (16 days, 11-21 days; P<0.001). The median duration of virus in the respiratory samples of patients with severe disease (21 days, 14-30 days) was significantly longer than in patients with mild disease (14 days, 10-21 days; P=0.04). In the mild group, the viral loads peaked in respiratory samples in the second week from disease onset, whereas viral load continued to be high during the third week in the severe group. Virus duration was longer in patients older than 60 years and in male patients. Conclusion The duration of SARS-CoV-2 is significantly longer in stool samples than in respiratory and serum samples, highlighting the need to strengthen the management of stool samples in the prevention and control of the epidemic, and the virus persists longer with higher load and peaks later in the respiratory tissue of patients with severe disease.", "which Material ?", "stool sample", NaN, NaN], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "the project leader", 178.0, 196.0], ["Africa has over 2000 languages. Despite this, African languages account for a small portion of available resources and publications in Natural Language Processing (NLP). 
This is due to multiple factors, including: a lack of focus from government and funding, discoverability, a lack of community, sheer language complexity, difficulty in reproducing papers and no benchmarks to compare techniques. To begin to address the identified problems, MASAKHANE, an open-source, continent-wide, distributed, online research effort for machine translation for African languages, was founded. In this paper, we discuss our methodology for building the community and spurring research from the African continent, as well as outline the success of the community in terms of addressing the identified problems affecting African NLP.", "which Material ?", "community", 286.0, 295.0], ["Abstract Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from 12 to 17 months cycle), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered, its new primary focus is on productivity and agility , to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality-control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. monthly) releases with over 21 billion triples with minimal publishing effort .", "which Material ?", "large, open data", 697.0, 713.0], ["The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.", "which Material ?", "single node", 317.0, 328.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. 
From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "FAPbI3 and FA-doped CsPbI3 NCs", 843.0, 873.0], ["Dereplication represents a key step for rapidly identifying known secondary metabolites in complex biological matrices. In this context, liquid-chromatography coupled to high resolution mass spectrometry (LC-HRMS) is increasingly used and, via untargeted data-dependent MS/MS experiments, massive amounts of detailed information on the chemical composition of crude extracts can be generated. An efficient exploitation of such data sets requires automated data treatment and access to dedicated fragmentation databases. Various novel bioinformatics approaches such as molecular networking (MN) and in-silico fragmentation tools have emerged recently and provide new perspective for early metabolite identification in natural products (NPs) research. Here we propose an innovative dereplication strategy based on the combination of MN with an extensive in-silico MS/MS fragmentation database of NPs. Using two case studies, we demonstrate that this combined approach offers a powerful tool to navigate through the chemistry of complex NPs extracts, dereplicate metabolites, and annotate analogues of database entries.", "which Material ?", "extensive in-silico MS/MS fragmentation database", 842.0, 890.0], ["The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. 
Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.", "which Material ?", "complex protein network", 62.0, 85.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "Many participants", 52.0, 69.0], ["Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and \u201cfeeling left out\u201d can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. 
We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.", "which Material ?", "reached groups", 361.0, 375.0], ["Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information \u2013 meaningful secondary or derived data \u2013 about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known \u201c(elaborated) data products,\u201d for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation \u2013 as machine readable data and their machine-readable meaning \u2013 is not common practice among environmental research infrastructures. For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.", "which Material ?", "the sciences", 57.0, 69.0], ["The effects of gallium doping into indium\u2013zinc\u2013tin oxide (IZTO) thin film transistors (TFTs) and Ar/O2 plasma treatment on the performance of a\u2010IZTO TFT are reported. The Ga doping ratio is varied from 0 to 20%, and it is found that 10% gallium doping in a\u2010IZTO TFT results in a saturation mobility (\u00b5sat) of 11.80 cm2 V\u22121 s\u22121, a threshold voltage (Vth) of 0.17 V, subthreshold swing (SS) of 94 mV dec\u22121, and on/off current ratio (Ion/Ioff) of 1.21 \u00d7 107. Additionally, the performance of 10% Ga\u2010doped IZTO TFT can be further improved by Ar/O2 plasma treatment. It is found that 30 s plasma treatment gives the best TFT performances such as \u00b5sat of 30.60 cm2 V\u22121 s\u22121, Vth of 0.12 V, SS of 92 mV dec\u22121, and Ion/Ioff ratio of 7.90 \u00d7 107. The bias\u2010stability of 10% Ga\u2010doped IZTO TFT is also improved by 30 s plasma treatment. 
The enhancement of the TFT performance appears to be due to the reduction in the oxygen vacancy and -OH concentrations.", "which Material ?", "Indium\u2013zinc\u2013tin oxide (IZTO) thin film", NaN, NaN], ["Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.", "which Material ?", "European Nucleotide Archive", 673.0, 700.0], ["A metabolome-wide genome-wide association study (mGWAS) aims to discover the effects of genetic variants on metabolome phenotypes. Most mGWASes use as phenotypes concentrations of limited sets of metabolites that can be identified and quantified from spectral information. In contrast, in an untargeted mGWAS both identification and quantification are forgone and, instead, all measured metabolome features are tested for association with genetic variants. While the untargeted approach does not discard data that may have eluded identification, the interpretation of associated features remains a challenge. To address this issue, we developed metabomatching to identify the metabolites underlying significant associations observed in untargeted mGWASes on proton NMR metabolome data. Metabomatching capitalizes on genetic spiking, the concept that because metabolome features associated with a genetic variant tend to correspond to the peaks of the NMR spectrum of the underlying metabolite, genetic association can allow for identification. Applied to the untargeted mGWASes in the SHIP and CoLaus cohorts and using 180 reference NMR spectra of the urine metabolome database, metabomatching successfully identified the underlying metabolite in 14 of 19, and 8 of 9 associations, respectively. 
The accuracy and efficiency of our method make it a strong contender for facilitating or complementing metabolomics analyses in large cohorts, where the availability of genetic, or other data, enables our approach, but targeted quantification is limited.", "which creates ?", "Metabomatching", 645.0, 659.0], ["Accurate inference of molecular and functional interactions among genes, especially in multicellular organisms such as Drosophila, often requires statistical analysis of correlations not only between the magnitudes of gene expressions, but also between their temporal-spatial patterns. The ISH (in-situ-hybridization)-based gene expression micro-imaging technology offers an effective approach to perform large-scale spatial-temporal profiling of whole-body mRNA abundance. However, analytical tools for discovering gene interactions from such data remain an open challenge due to various reasons, including difficulties in extracting canonical representations of gene activities from images, and in inference of statistically meaningful networks from such representations. In this paper, we present GINI, a machine learning system for inferring gene interaction networks from Drosophila embryonic ISH images. GINI builds on a computer-vision-inspired vector-space representation of the spatial pattern of gene expression in ISH images, enabled by our recently developed system; and a new multi-instance-kernel algorithm that learns a sparse Markov network model, in which, every gene (i.e., node) in the network is represented by a vector-valued spatial pattern rather than a scalar-valued gene intensity as in conventional approaches such as a Gaussian graphical model. By capturing the notion of spatial similarity of gene expression, and at the same time properly taking into account the presence of multiple images per gene via multi-instance kernels, GINI is well-positioned to infer statistically sound, and biologically meaningful gene interaction networks from image data. Using both synthetic data and a small manually curated data set, we demonstrate the effectiveness of our approach in network building. Furthermore, we report results on a large publicly available collection of Drosophila embryonic ISH images from the Berkeley Drosophila Genome Project, where GINI makes novel and interesting predictions of gene interactions. Software for GINI is available at http://sailing.cs.cmu.edu/Drosophila_ISH_images/", "which creates ?", "GINI", 800.0, 804.0], ["We present a computational method for the reaction-based de novo design of drug-like molecules. The software DOGS (Design of Genuine Structures) features a ligand-based strategy for automated \u2018in silico\u2019 assembly of potentially novel bioactive compounds. The quality of the designed compounds is assessed by a graph kernel method measuring their similarity to known bioactive reference ligands in terms of structural and pharmacophoric features. We implemented a deterministic compound construction procedure that explicitly considers compound synthesizability, based on a compilation of 25'144 readily available synthetic building blocks and 58 established reaction principles. This enables the software to suggest a synthesis route for each designed compound. Two prospective case studies are presented together with details on the algorithm and its implementation. De novo designed ligand candidates for the human histamine H4 receptor and \u03b3-secretase were synthesized as suggested by the software. 
The computational approach proved to be suitable for scaffold-hopping from known ligands to novel chemotypes, and for generating bioactive molecules with drug-like properties.", "which creates ?", "DOGS", 109.0, 113.0], ["A user ready, portable, documented software package, NFTsim, is presented to facilitate numerical simulations of a wide range of brain systems using continuum neural field modeling. NFTsim enables users to simulate key aspects of brain activity at multiple scales. At the microscopic scale, it incorporates characteristics of local interactions between cells, neurotransmitter effects, synaptodendritic delays and feedbacks. At the mesoscopic scale, it incorporates information about medium to large scale axonal ranges of fibers, which are essential to model dissipative wave transmission and to produce synchronous oscillations and associated cross-correlation patterns as observed in local field potential recordings of active tissue. At the scale of the whole brain, NFTsim allows for the inclusion of long range pathways, such as thalamocortical projections, when generating macroscopic activity fields. The multiscale nature of the neural activity produced by NFTsim has the potential to enable the modeling of resulting quantities measurable via various neuroimaging techniques. In this work, we give a comprehensive description of the design and implementation of the software. Due to its modularity and flexibility, NFTsim enables the systematic study of an unlimited number of neural systems with multiple neural populations under a unified framework and allows for direct comparison with analytic and experimental predictions. The code is written in C++ and bundled with Matlab routines for a rapid quantitative analysis and visualization of the outputs. The output of NFTsim is stored in plain text file enabling users to select from a broad range of tools for offline analysis. This software enables a wide and convenient use of powerful physiologically-based neural field approaches to brain modeling. NFTsim is distributed under the Apache 2.0 license.", "which creates ?", "NFTsim", 53.0, 59.0], ["Algorithms for comparing protein structure are frequently used for function annotation. By searching for subtle similarities among very different proteins, these algorithms can identify remote homologs with similar biological functions. In contrast, few comparison algorithms focus on specificity annotation, where the identification of subtle differences among very similar proteins can assist in finding small structural variations that create differences in binding specificity. Few specificity annotation methods consider electrostatic fields, which play a critical role in molecular recognition. To fill this gap, this paper describes VASP-E (Volumetric Analysis of Surface Properties with Electrostatics), a novel volumetric comparison tool based on the electrostatic comparison of protein-ligand and protein-protein binding sites. VASP-E exploits the central observation that three dimensional solids can be used to fully represent and compare both electrostatic isopotentials and molecular surfaces. With this integrated representation, VASP-E is able to dissect the electrostatic environments of protein-ligand and protein-protein binding interfaces, identifying individual amino acids that have an electrostatic influence on binding specificity. VASP-E was used to examine a nonredundant subset of the serine and cysteine proteases as well as the barnase-barstar and Rap1a-raf complexes. 
Based on amino acids established by various experimental studies to have an electrostatic influence on binding specificity, VASP-E identified electrostatically influential amino acids with 100% precision and 83.3% recall. We also show that VASP-E can accurately classify closely related ligand binding cavities into groups with different binding preferences. These results suggest that VASP-E should prove a useful tool for the characterization of specific binding and the engineering of binding preferences in proteins.", "which creates ?", "VASP-E", 640.0, 646.0], ["Abstract Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal \u201cvirtual laboratory\u201d for such multicellular systems simulates both the biochemical microenvironment (the \u201cstage\u201d) and many mechanically and biochemically interacting cells (the \u201cplayers\u201d upon the stage). PhysiCell\u2014physics-based multicellular simulator\u2014is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility \u201cout of the box.\u201d The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations up to 10^5-10^6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a \u201ccellular cargo delivery\u201d system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net. Author Summary This paper introduces PhysiCell: an open source, agent-based modeling framework for 3-D multicellular simulations. It includes a standard library of sub-models for cell fluid and solid volume changes, cycle progression, apoptosis, necrosis, mechanics, and motility. PhysiCell is directly coupled to a biotransport solver to simulate many diffusing substrates and cell-secreted signals. Each cell can dynamically update its phenotype based on its microenvironmental conditions. Users can customize or replace the included sub-models. PhysiCell runs on a variety of platforms (Linux, OSX, and Windows) with few software dependencies. 
Its computational cost scales linearly in the number of cells. It is feasible to simulate 500,000 cells on quad-core desktop workstations, and millions of cells on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on hanging drop tumor spheroids (HDS) and ductal carcinoma in situ (DCIS) of the breast. We demonstrate contact- and chemokine-based interactions among multiple cell types with examples in synthetic multicellular bioengineering, cancer heterogeneity, and cancer immunology. We developed PhysiCell to help the scientific community tackle multicellular systems biology problems involving many interacting cells in multi-substrate microenvironments. PhysiCell is also an independent, cross-platform codebase for replicating results from other simulators.", "which creates ?", "PhysiCell", 468.0, 477.0], ["The use of 3C-based methods has revealed the importance of the 3D organization of the chromatin for key aspects of genome biology. However, the different caveats of the variants of 3C techniques have limited their scope and the range of scientific fields that could benefit from these approaches. To address these limitations, we present 4Cin, a method to generate 3D models and derive virtual Hi-C (vHi-C) heat maps of genomic loci based on 4C-seq or any kind of 4C-seq-like data, such as those derived from NG Capture-C. 3D genome organization is determined by integrative consideration of the spatial distances derived from as few as four 4C-seq experiments. The 3D models obtained from 4C-seq data, together with their associated vHi-C maps, allow the inference of all chromosomal contacts within a given genomic region, facilitating the identification of Topological Associating Domains (TAD) boundaries. Thus, 4Cin offers a much cheaper, accessible and versatile alternative to other available techniques while providing a comprehensive 3D topological profiling. By studying TAD modifications in genomic structural variants associated to disease phenotypes and performing cross-species evolutionary comparisons of 3D chromatin structures in a quantitative manner, we demonstrate the broad potential and novel range of applications of our method.", "which creates ?", "4Cin", 338.0, 342.0], ["Chemical reaction networks are ubiquitous in biology, and their dynamics is fundamentally stochastic. Here, we present the software library pSSAlib, which provides a complete and concise implementation of the most efficient partial-propensity methods for simulating exact stochastic chemical kinetics. pSSAlib can import models encoded in Systems Biology Markup Language, supports time delays in chemical reactions, and stochastic spatiotemporal reaction-diffusion systems. It also provides tools for statistical analysis of simulation results and supports multiple output formats. It has previously been used for studies of biochemical reaction pathways and to benchmark other stochastic simulation methods. Here, we describe pSSAlib in detail and apply it to a new model of the endocytic pathway in eukaryotic cells, leading to the discovery of a stochastic counterpart of the cut-out switch motif underlying early-to-late endosome conversion. pSSAlib is provided as a stand-alone command-line tool and as a developer API. We also provide a plug-in for the SBMLToolbox. 
The open-source code and pre-packaged installers are freely available from http://mosaic.mpi-cbg.de.", "which creates ?", "pSSAlib", 140.0, 147.0], ["Unique molecular identifiers (UMIs) show outstanding performance in targeted high-throughput resequencing, being the most promising approach for the accurate identification of rare variants in complex DNA samples. This approach has application in multiple areas, including cancer diagnostics, thus demanding dedicated software and algorithms. Here we introduce MAGERI, a computational pipeline that efficiently handles all caveats of UMI-based analysis to obtain high-fidelity mutation profiles and call ultra-rare variants. Using an extensive set of benchmark datasets including gold-standard biological samples with known variant frequencies, cell-free DNA from tumor patient blood samples and publicly available UMI-encoded datasets we demonstrate that our method is both robust and efficient in calling rare variants. The versatility of our software is supported by accurate results obtained for both tumor DNA and viral RNA samples in datasets prepared using three different UMI-based protocols.", "which creates ?", "MAGERI", 361.0, 367.0], ["Our current understanding of the molecular mechanisms which regulate cellular processes such as vesicular trafficking has been enabled by conventional biochemical and microscopy techniques. However, these methods often obscure the heterogeneity of the cellular environment, thus precluding a quantitative assessment of the molecular interactions regulating these processes. Herein, we present Molecular Interactions in Super Resolution (MIiSR) software which provides quantitative analysis tools for use with super-resolution images. MIiSR combines multiple tools for analyzing intermolecular interactions, molecular clustering and image segmentation. These tools enable quantification, in the native environment of the cell, of molecular interactions and the formation of higher-order molecular complexes. The capabilities and limitations of these analytical tools are demonstrated using both modeled data and examples derived from the vesicular trafficking system, thereby providing an established and validated experimental workflow capable of quantitatively assessing molecular interactions and molecular complex formation within the heterogeneous environment of the cell.", "which creates ?", "Molecular Interactions in Super Resolution", 393.0, 435.0], ["Genome-scale models of metabolism and macromolecular expression (ME-models) explicitly compute the optimal proteome composition of a growing cell. ME-models expand upon the well-established genome-scale models of metabolism (M-models), and they enable a new fundamental understanding of cellular growth. ME-models have increased predictive capabilities and accuracy due to their inclusion of the biosynthetic costs for the machinery of life, but they come with a significant increase in model size and complexity. This challenge results in models which are both difficult to compute and challenging to understand conceptually. As a result, ME-models exist for only two organisms (Escherichia coli and Thermotoga maritima) and are still used by relatively few researchers. To address these challenges, we have developed a new software framework called COBRAme for building and simulating ME-models. It is coded in Python and built on COBRApy, a popular platform for using M-models. COBRAme streamlines computation and analysis of ME-models. 
It provides tools to simplify constructing and editing ME-models to enable ME-model reconstructions for new organisms. We used COBRAme to reconstruct a condensed E. coli ME-model called iJL1678b-ME. This reformulated model gives functionally identical solutions to previous E. coli ME-models while using 1/6 the number of free variables and solving in less than 10 minutes, a marked improvement over the 6 hour solve time of previous ME-model formulations. Errors in previous ME-models were also corrected leading to 52 additional genes that must be expressed in iJL1678b-ME to grow aerobically in glucose minimal in silico media. This manuscript outlines the architecture of COBRAme and demonstrates how ME-models can be created, modified, and shared most efficiently using the new software framework.", "which creates ?", "COBRAme", 851.0, 858.0], ["Grooming is a complex and robust innate behavior, commonly performed by most vertebrate species. In mice, grooming consists of a series of stereotyped patterned strokes, performed along the rostro-caudal axis of the body. The frequency and duration of each grooming episode is sensitive to changes in stress levels, social interactions and pharmacological manipulations, and is therefore used in behavioral studies to gain insights into the function of brain regions that control movement execution and anxiety. Traditional approaches to analyze grooming rely on manually scoring the time of onset and duration of each grooming episode, and are often performed on grooming episodes triggered by stress exposure, which may not be entirely representative of spontaneous grooming in freely-behaving mice. This type of analysis is time-consuming and provides limited information about finer aspects of grooming behaviors, which are important to understand movement stereotypy and bilateral coordination in mice. Currently available commercial and freeware video-tracking software allow automated tracking of the whole body of a mouse or of its head and tail, not of individual forepaws. Here we describe a simple experimental set-up and a novel open-source code, named M-Track, for simultaneously tracking the movement of individual forepaws during spontaneous grooming in multiple freely-behaving mice. This toolbox provides a simple platform to perform trajectory analysis of forepaw movement during distinct grooming episodes. By using M-track we show that, in C57BL/6 wild type mice, the speed and bilateral coordination of the left and right forepaws remain unaltered during the execution of distinct grooming episodes. Stress exposure induces a profound increase in the length of the forepaw grooming trajectories. M-Track provides a valuable and user-friendly interface to streamline the analysis of spontaneous grooming in biomedical research studies.", "which creates ?", "M-Track", 1265.0, 1272.0], ["Live-cell imaging by light microscopy has demonstrated that all cells are spatially and temporally organized. Quantitative, computational image analysis is an important part of cellular imaging, providing both enriched information about individual cell properties and the ability to analyze large datasets. However, such studies are often limited by the small size and variable shape of objects of interest. Here, we address two outstanding problems in bacterial cell division by developing a generally applicable, standardized, and modular software suite termed Projected System of Internal Coordinates from Interpolated Contours (PSICIC) that solves common problems in image quantitation. 
PSICIC implements interpolated-contour analysis for accurate and precise determination of cell borders and automatically generates internal coordinate systems that are superimposable regardless of cell geometry. We have used PSICIC to establish that the cell-fate determinant, SpoIIE, is asymmetrically localized during Bacillus subtilis sporulation, thereby demonstrating the ability of PSICIC to discern protein localization features at sub-pixel scales. We also used PSICIC to examine the accuracy of cell division in Escherichia coli and found a new role for the Min system in regulating division-site placement throughout the cell length, but only prior to the initiation of cell constriction. These results extend our understanding of the regulation of both asymmetry and accuracy in bacterial division while demonstrating the general applicability of PSICIC as a computational approach for quantitative, high-throughput analysis of cellular images.", "which creates ?", "PSICIC", 632.0, 638.0], ["A calibrated computational model reflects behaviours that are expected or observed in a complex system, providing a baseline upon which sensitivity analysis techniques can be used to analyse pathways that may impact model responses. However, calibration of a model where a behaviour depends on an intervention introduced after a defined time point is difficult, as model responses may be dependent on the conditions at the time the intervention is applied. We present ASPASIA (Automated Simulation Parameter Alteration and SensItivity Analysis), a cross-platform, open-source Java toolkit that addresses a key deficiency in software tools for understanding the impact an intervention has on system behaviour for models specified in Systems Biology Markup Language (SBML). ASPASIA can generate and modify models using SBML solver output as an initial parameter set, allowing interventions to be applied once a steady state has been reached. Additionally, multiple SBML models can be generated where a subset of parameter values are perturbed using local and global sensitivity analysis techniques, revealing the model\u2019s sensitivity to the intervention. To illustrate the capabilities of ASPASIA, we demonstrate how this tool has generated novel hypotheses regarding the mechanisms by which Th17-cell plasticity may be controlled in vivo. By using ASPASIA in conjunction with an SBML model of Th17-cell polarisation, we predict that promotion of the Th1-associated transcription factor T-bet, rather than inhibition of the Th17-associated transcription factor ROR\u03b3t, is sufficient to drive switching of Th17 cells towards an IFN-\u03b3-producing phenotype. Our approach can be applied to all SBML-encoded models to predict the effect that intervention strategies have on system behaviour. ASPASIA, released under the Artistic License (2.0), can be downloaded from http://www.york.ac.uk/ycil/software.", "which creates ?", "ASPASIA", 468.0, 475.0], ["In order to access and filter content of life-science databases, full text search is a widely applied query interface. But its high flexibility and intuitiveness is paid for with potentially imprecise and incomplete query results. To reduce this drawback, query assistance systems suggest those combinations of keywords with the highest potential to match most of the relevant data records. Widespread approaches are syntactic query corrections that avoid misspelling and support expansion of words by suffixes and prefixes. 
Synonym expansion approaches apply thesauri, ontologies, and query logs. All need laborious curation and maintenance. Furthermore, access to query logs is in general restricted. Approaches that infer related queries by their query profile like research field, geographic location, co-authorship, affiliation etc. require user\u2019s registration and its public accessibility that contradict privacy concerns. To overcome these drawbacks, we implemented LAILAPS-QSM, a machine learning approach that reconstructs possible linguistic contexts of a given keyword query. The context is referred from the text records that are stored in the databases that are going to be queried or extracted for a general purpose query suggestion from PubMed abstracts and UniProt data. The supplied tool suite enables the pre-processing of these text records and the further computation of customized distributed word vectors. The latter are used to suggest alternative keyword queries. An evaluation of the query suggestion quality was done for plant science use cases. Locally present experts enable a cost-efficient quality assessment in the categories trait, biological entity, taxonomy, affiliation, and metabolic function which has been performed using ontology term similarities. LAILAPS-QSM mean information content similarity for 15 representative queries is 0.70, whereas 34% have a score above 0.80. In comparison, the information content similarity for human expert made query suggestions is 0.90. The software is either available as tool set to build and train dedicated query suggestion services or as already trained general purpose RESTful web service. The service uses open interfaces to be seamlessly embeddable into database frontends. The JAVA implementation uses highly optimized data structures and streamlined code to provide fast and scalable response for web service calls. The source code of LAILAPS-QSM is available under GNU General Public License version 2 in Bitbucket GIT repository: https://bitbucket.org/ipk_bit_team/bioescorte-suggestion", "which creates ?", "LAILAPS-QSM", 973.0, 984.0], ["There is increasing evidence that protein dynamics and conformational changes can play an important role in modulating biological function. As a result, experimental and computational methods are being developed, often synergistically, to study the dynamical heterogeneity of a protein or other macromolecules in solution. Thus, methods such as molecular dynamics simulations or ensemble refinement approaches have provided conformational ensembles that can be used to understand protein function and biophysics. These developments have in turn created a need for algorithms and software that can be used to compare structural ensembles in the same way as the root-mean-square-deviation is often used to compare static structures. Although a few such approaches have been proposed, these can be difficult to implement efficiently, hindering a broader applications and further developments. Here, we present an easily accessible software toolkit, called ENCORE, which can be used to compare conformational ensembles generated either from simulations alone or synergistically with experiments. ENCORE implements three previously described methods for ensemble comparison, that each can be used to quantify the similarity between conformational ensembles by estimating the overlap between the probability distributions that underlie them. 
We demonstrate the kinds of insights that can be obtained by providing examples of three typical use-cases: comparing ensembles generated with different molecular force fields, assessing convergence in molecular simulations, and calculating differences and similarities in structural ensembles refined with various sources of experimental data. We also demonstrate efficient computational scaling for typical analyses, and robustness against both the size and sampling of the ensembles. ENCORE is freely available and extendable, integrates with the established MDAnalysis software package, reads ensemble data in many common formats, and can work with large trajectory files.", "which creates ?", "ENCORE", 953.0, 959.0], ["The Dynamic Regulatory Events Miner (DREM) software reconstructs dynamic regulatory networks by integrating static protein-DNA interaction data with time series gene expression data. In recent years, several additional types of high-throughput time series data have been profiled when studying biological processes including time series miRNA expression, proteomics, epigenomics and single cell RNA-Seq. Combining all available time series and static datasets in a unified model remains an important challenge and goal. To address this challenge we have developed a new version of DREM termed interactive DREM (iDREM). iDREM provides support for all data types mentioned above and combines them with existing interaction data to reconstruct networks that can lead to novel hypotheses on the function and timing of regulators. Users can interactively visualize and query the resulting model. We showcase the functionality of the new tool by applying it to microglia developmental data from multiple labs.", "which creates ?", "interactive DREM", 593.0, 609.0], ["Background Dog rabies annually causes 24,000\u201370,000 deaths globally. We built a spreadsheet tool, RabiesEcon, to aid public health officials to estimate the cost-effectiveness of dog rabies vaccination programs in East Africa. Methods RabiesEcon uses a mathematical model of dog-dog and dog-human rabies transmission to estimate dog rabies cases averted, the cost per human rabies death averted and cost per year of life gained (YLG) due to dog vaccination programs (US 2015 dollars). We used an East African human population of 1 million (approximately 2/3 living in urban setting, 1/3 rural). We considered, using data from the literature, three vaccination options; no vaccination, annual vaccination of 50% of dogs and 20% of dogs vaccinated semi-annually. We assessed 2 transmission scenarios: low (1.2 dogs infected per infectious dog) and high (1.7 dogs infected). We also examined the impact of annually vaccinating 70% of all dogs (World Health Organization recommendation for dog rabies elimination). Results Without dog vaccination, over 10 years there would be a total of approximately 44,000\u201365,000 rabid dogs and 2,100\u20132,900 human deaths. Annually vaccinating 50% of dogs results in 10-year reductions of 97% and 75% in rabid dogs (low and high transmissions scenarios, respectively), approximately 2,000\u20131,600 human deaths averted, and an undiscounted cost-effectiveness of $451-$385 per life saved. Semi-annual vaccination of 20% of dogs results in 10-year reductions of 94% and 78% in rabid dogs, and approximately 2,000\u20131,900 human deaths averted, and cost $404-$305 per life saved. In the low transmission scenario, vaccinating either 50% or 70% of dogs eliminated dog rabies. 
Results were most sensitive to dog birth rate and the initial rate of dog-to-dog transmission (Ro). Conclusions Dog rabies vaccination programs can control, and potentially eliminate, dog rabies. The frequency and coverage of vaccination programs, along with the level of dog rabies transmission, can affect the cost-effectiveness of such programs. RabiesEcon can aid both the planning and assessment of dog rabies vaccination programs.", "which creates ?", "RabiesEcon", 98.0, 108.0], ["Identification of single nucleotide polymorphisms (SNPs) and mutations is important for the discovery of genetic predisposition to complex diseases. PCR resequencing is the method of choice for de novo SNP discovery. However, manual curation of putative SNPs has been a major bottleneck in the application of this method to high-throughput screening. Therefore it is critical to develop a more sensitive and accurate computational method for automated SNP detection. We developed a software tool, SNPdetector, for automated identification of SNPs and mutations in fluorescence-based resequencing reads. SNPdetector was designed to model the process of human visual inspection and has a very low false positive and false negative rate. We demonstrate the superior performance of SNPdetector in SNP and mutation analysis by comparing its results with those derived by human inspection, PolyPhred (a popular SNP detection tool), and independent genotype assays in three large-scale investigations. The first study identified and validated inter- and intra-subspecies variations in 4,650 traces of 25 inbred mouse strains that belong to either the Mus musculus species or the M. spretus species. Unexpected heterozygosity in CAST/Ei strain was observed in two out of 1,167 mouse SNPs. The second study identified 11,241 candidate SNPs in five ENCODE regions of the human genome covering 2.5 Mb of genomic sequence. Approximately 50% of the candidate SNPs were selected for experimental genotyping; the validation rate exceeded 95%. The third study detected ENU-induced mutations (at 0.04% allele frequency) in 64,896 traces of 1,236 zebra fish. Our analysis of three large and diverse test datasets demonstrated that SNPdetector is an effective tool for genome-scale research and for large-sample clinical studies. SNPdetector runs on Unix/Linux platform and is available publicly (http://lpg.nci.nih.gov).", "which creates ?", "SNPdetector", 497.0, 508.0], ["The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the \u2018missing heritability,\u2019 which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, thus significantly increasing computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS), for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. 
To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculations and main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, thus significantly improving the computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned identical results at each analysis step with the original prototype R code, but the computational time was reduced from more than one month to about five minutes. These advances will help overcome the bottleneck frequently encountered in genome wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publically available at http://bioinfo.noble.org/PolyGenic_QTL/.", "which creates ?", "Pipeline for estimating EPIStatic genetic effects", 621.0, 670.0], ["Despite the growing number of immune repertoire sequencing studies, the field still lacks software for analysis and comprehension of this high-dimensional data. Here we report VDJtools, a complementary software suite that solves a wide range of T cell receptor (TCR) repertoires post-analysis tasks, provides a detailed tabular output and publication-ready graphics, and is built on top of a flexible API. Using TCR datasets for a large cohort of unrelated healthy donors, twins, and multiple sclerosis patients we demonstrate that VDJtools greatly facilitates the analysis and leads to sound biological conclusions. VDJtools software and documentation are available at https://github.com/mikessh/vdjtools.", "which creates ?", "VDJtools", 176.0, 184.0], ["Various algorithms have been developed for variant calling using next-generation sequencing data, and various methods have been applied to reduce the associated false positive and false negative rates. Few variant calling programs, however, utilize the pedigree information when the family-based sequencing data are available. Here, we present a program, FamSeq, which reduces both false positive and false negative rates by incorporating the pedigree information from the Mendelian genetic model into variant calling. To accommodate variations in data complexity, FamSeq consists of four distinct implementations of the Mendelian genetic model: the Bayesian network algorithm, a graphics processing unit version of the Bayesian network algorithm, the Elston-Stewart algorithm and the Markov chain Monte Carlo algorithm. To make the software efficient and applicable to large families, we parallelized the Bayesian network algorithm that copes with pedigrees with inbreeding loops without losing calculation precision on an NVIDIA graphics processing unit. In order to compare the difference in the four methods, we applied FamSeq to pedigree sequencing data with family sizes that varied from 7 to 12. When there is no inbreeding loop in the pedigree, the Elston-Stewart algorithm gives analytical results in a short time. If there are inbreeding loops in the pedigree, we recommend the Bayesian network method, which provides exact answers. To improve the computing speed of the Bayesian network method, we parallelized the computation on a graphics processing unit. 
This allowed the Bayesian network method to process the whole genome sequencing data of a family of 12 individuals within two days, which was a 10-fold time reduction compared to the time required for this computation on a central processing unit.", "which creates ?", "FamSeq", 355.0, 361.0], ["Accurate mapping of next-generation sequencing (NGS) reads to reference genomes is crucial for almost all NGS applications and downstream analyses. Various repetitive elements in human and other higher eukaryotic genomes contribute in large part to ambiguously (non-uniquely) mapped reads. Most available NGS aligners attempt to address this by either removing all non-uniquely mapping reads, or reporting one random or \"best\" hit based on simple heuristics. Accurate estimation of the mapping quality of NGS reads is therefore critical albeit completely lacking at present. Here we developed a generalized software toolkit \"AlignerBoost\", which utilizes a Bayesian-based framework to accurately estimate mapping quality of ambiguously mapped NGS reads. We tested AlignerBoost with both simulated and real DNA-seq and RNA-seq datasets at various thresholds. In most cases, but especially for reads falling within repetitive regions, AlignerBoost dramatically increases the mapping precision of modern NGS aligners without significantly compromising the sensitivity even without mapping quality filters. When using higher mapping quality cutoffs, AlignerBoost achieves a much lower false mapping rate while exhibiting comparable or higher sensitivity compared to the aligner default modes, therefore significantly boosting the detection power of NGS aligners even using extreme thresholds. AlignerBoost is also SNP-aware, and higher quality alignments can be achieved if provided with known SNPs. AlignerBoost\u2019s algorithm is computationally efficient, and can process one million alignments within 30 seconds on a typical desktop computer. AlignerBoost is implemented as a uniform Java application and is freely available at https://github.com/Grice-Lab/AlignerBoost.", "which creates ?", "AlignerBoost", 625.0, 637.0], ["Transmembrane channel proteins play pivotal roles in maintaining the homeostasis and responsiveness of cells and the cross-membrane electrochemical gradient by mediating the transport of ions and molecules through biological membranes. Therefore, computational methods which, given a set of 3D coordinates, can automatically identify and describe channels in transmembrane proteins are key tools to provide insights into how they function. Herein we present PoreWalker, a fully automated method, which detects and fully characterises channels in transmembrane proteins from their 3D structures. A stepwise procedure is followed in which the pore centre and pore axis are first identified and optimised using geometric criteria, and then the biggest and longest cavity through the channel is detected. Finally, pore features, including diameter profiles, pore-lining residues, size, shape and regularity of the pore are calculated, providing a quantitative and visual characterization of the channel. To illustrate the use of this tool, the method was applied to several structures of transmembrane channel proteins and was able to identify shape/size/residue features representative of specific channel families. 
The software is available as a web-based resource at http://www.ebi.ac.uk/thornton-srv/software/PoreWalker/.", "which creates ?", "PoreWalker", 458.0, 468.0], ["Exploiting pathogen genomes to reconstruct transmission represents a powerful tool in the fight against infectious disease. However, their interpretation rests on a number of simplifying assumptions that regularly ignore important complexities of real data, in particular within-host evolution and non-sampled patients. Here we propose a new approach to transmission inference called SCOTTI (Structured COalescent Transmission Tree Inference). This method is based on a statistical framework that models each host as a distinct population, and transmissions between hosts as migration events. Our computationally efficient implementation of this model enables the inference of host-to-host transmission while accommodating within-host evolution and non-sampled hosts. SCOTTI is distributed as an open source package for the phylogenetic software BEAST2. We show that SCOTTI can generally infer transmission events even in the presence of considerable within-host variation, can account for the uncertainty associated with the possible presence of non-sampled hosts, and can efficiently use data from multiple samples of the same host, although there is some reduction in accuracy when samples are collected very close to the infection time. We illustrate the features of our approach by investigating transmission from genetic and epidemiological data in a Foot and Mouth Disease Virus (FMDV) veterinary outbreak in England and a Klebsiella pneumoniae outbreak in a Nepali neonatal unit. Transmission histories inferred with SCOTTI will be important in devising effective measures to prevent and halt transmission.", "which creates ?", "SCOTTI", 384.0, 390.0], ["In the effort to define genes and specific neuronal circuits that control behavior and plasticity, the capacity for high-precision automated analysis of behavior is essential. We report on comprehensive computer vision software for analysis of swimming locomotion of C. elegans, a simple animal model initially developed to facilitate elaboration of genetic influences on behavior. C. elegans swim test software CeleST tracks swimming of multiple animals, measures 10 novel parameters of swim behavior that can fully report dynamic changes in posture and speed, and generates data in several analysis formats, complete with statistics. Our measures of swim locomotion utilize a deformable model approach and a novel mathematical analysis of curvature maps that enable even irregular patterns and dynamic changes to be scored without need for thresholding or dropping outlier swimmers from study. Operation of CeleST is mostly automated and only requires minimal investigator interventions, such as the selection of videotaped swim trials and choice of data output format. Data can be analyzed from the level of the single animal to populations of thousands. We document how the CeleST program reveals unexpected preferences for specific swim \u201cgaits\u201d in wild-type C. elegans, uncovers previously unknown mutant phenotypes, efficiently tracks changes in aging populations, and distinguishes \u201cgraceful\u201d from poor aging. 
The sensitivity, dynamic range, and comprehensive nature of CeleST measures elevate swim locomotion analysis to a new level of ease, economy, and detail that enables behavioral plasticity resulting from genetic, cellular, or experience manipulation to be analyzed in ways not previously possible.", "which creates ?", "CeleST", 412.0, 418.0], ["Over the past decades, quantitative methods linking theory and observation became increasingly important in many areas of life science. Subsequently, a large number of mathematical and computational models has been developed. The BioModels database alone lists more than 140,000 Systems Biology Markup Language (SBML) models. However, while the exchange within specific model classes has been supported by standardisation and database efforts, the generic application and especially the re-use of models is still limited by practical issues such as easy and straight forward model execution. MAGPIE, a Modeling and Analysis Generic Platform with Integrated Evaluation, closes this gap by providing a software platform for both publishing and executing computational models without restrictions on the programming language, thereby combining a maximum of flexibility for programmers with easy handling for non-technical users. MAGPIE goes beyond classical SBML platforms by including all models, independent of the underlying programming language, ranging from simple script models to complex data integration and computations. We demonstrate the versatility of MAGPIE using four prototypic example cases. We also outline the potential of MAGPIE to improve transparency and reproducibility of computational models in life sciences. A demo server is available at magpie.imb.medizin.tu-dresden.de.", "which creates ?", "MAGPIE", 592.0, 598.0], ["Chaste \u2014 Cancer, Heart And Soft Tissue Environment \u2014 is an open source C++ library for the computational simulation of mathematical models developed for physiology and biology. Code development has been driven by two initial applications: cardiac electrophysiology and cancer development. A large number of cardiac electrophysiology studies have been enabled and performed, including high-performance computational investigations of defibrillation on realistic human cardiac geometries. New models for the initiation and growth of tumours have been developed. In particular, cell-based simulations have provided novel insight into the role of stem cells in the colorectal crypt. Chaste is constantly evolving and is now being applied to a far wider range of problems. The code provides modules for handling common scientific computing components, such as meshes and solvers for ordinary and partial differential equations (ODEs/PDEs). Re-use of these components avoids the need for researchers to \u2018re-invent the wheel\u2019 with each new project, accelerating the rate of progress in new applications. Chaste is developed using industrially-derived techniques, in particular test-driven development, to ensure code quality, re-use and reliability. In this article we provide examples that illustrate the types of problems Chaste can be used to solve, which can be run on a desktop computer. We highlight some scientific studies that have used or are using Chaste, and the insights they have provided. 
The source code, both for specific releases and the development version, is available to download under an open source Berkeley Software Distribution (BSD) licence at http://www.cs.ox.ac.uk/chaste, together with details of a mailing list and links to documentation and tutorials.", "which creates ?", "Cancer, Heart And Soft Tissue Environment", 9.0, 50.0], ["The genome-scale models of metabolic networks have been broadly applied in phenotype prediction, evolutionary reconstruction, community functional analysis, and metabolic engineering. Despite the development of tools that support individual steps along the modeling procedure, it is still difficult to associate mathematical simulation results with the annotation and biological interpretation of metabolic models. In order to solve this problem, here we developed a Portable System for the Analysis of Metabolic Models (PSAMM), a new open-source software package that supports the integration of heterogeneous metadata in model annotations and provides a user-friendly interface for the analysis of metabolic models. PSAMM is independent of paid software environments like MATLAB, and all its dependencies are freely available for academic users. Compared to existing tools, PSAMM significantly reduced the running time of constraint-based analysis and enabled flexible settings of simulation parameters using simple one-line commands. The integration of heterogeneous, model-specific annotation information in PSAMM is achieved with a novel format of YAML-based model representation, which has several advantages, such as providing a modular organization of model components and simulation settings, enabling model version tracking, and permitting the integration of multiple simulation problems. PSAMM also includes a number of quality checking procedures to examine stoichiometric balance and to identify blocked reactions. Applying PSAMM to 57 models collected from current literature, we demonstrated how the software can be used for managing and simulating metabolic models. We identified a number of common inconsistencies in existing models and constructed an updated model repository to document the resolution of these inconsistencies.", "which creates ?", "Portable System for the Analysis of Metabolic Models", 467.0, 519.0], ["Epigenetic regulation consists of a multitude of different modifications that determine active and inactive states of chromatin. Conditions such as cell differentiation or exposure to environmental stress require concerted changes in gene expression. To interpret epigenomics data, a spectrum of different interconnected datasets is needed, ranging from the genome sequence and positions of histones, together with their modifications and variants, to the transcriptional output of genomic regions. Here we present a tool, Podbat (Positioning database and analysis tool), that incorporates data from various sources and allows detailed dissection of the entire range of chromatin modifications simultaneously. Podbat can be used to analyze, visualize, store and share epigenomics data. Among other functions, Podbat allows data-driven determination of genome regions of differential protein occupancy or RNA expression using Hidden Markov Models. Comparisons between datasets are facilitated to enable the study of the comprehensive chromatin modification system simultaneously, irrespective of data-generating technique. Any organism with a sequenced genome can be accommodated. 
We exemplify the power of Podbat by reanalyzing all to-date published genome-wide data for the histone variant H2A.Z in fission yeast together with other histone marks and also phenotypic response data from several sources. This meta-analysis led to the unexpected finding of H2A.Z incorporation in the coding regions of genes encoding proteins involved in the regulation of meiosis and genotoxic stress responses. This incorporation was partly independent of the H2A.Z-incorporating remodeller Swr1. We verified an Swr1-independent role for H2A.Z following genotoxic stress in vivo. Podbat is open source software freely downloadable from www.podbat.org, distributed under the GNU LGPL license. User manuals, test data and instructions are available at the website, as well as a repository for third party\u2013developed plug-in modules. Podbat requires Java version 1.6 or higher.", "which creates ?", "Podbat", 523.0, 529.0], ["The rapid development of sequencing technology has led to an explosive accumulation of genomic sequence data. Clustering is often the first step to perform in sequence analysis, and hierarchical clustering is one of the most commonly used approaches for this purpose. However, it is currently computationally expensive to perform hierarchical clustering of extremely large sequence datasets due to its quadratic time and space complexities. In this paper we developed a new algorithm called ESPRIT-Forest for parallel hierarchical clustering of sequences. The algorithm achieves subquadratic time and space complexity and maintains a high clustering accuracy comparable to the standard method. The basic idea is to organize sequences into a pseudo-metric based partitioning tree for sub-linear time searching of nearest neighbors, and then use a new multiple-pair merging criterion to construct clusters in parallel using multiple threads. The new algorithm was tested on the human microbiome project (HMP) dataset, currently one of the largest published microbial 16S rRNA sequence dataset. Our experiment demonstrated that with the power of parallel computing it is now computationally feasible to perform hierarchical clustering analysis of tens of millions of sequences. The software is available at http://www.acsu.buffalo.edu/\u223cyijunsun/lab/ESPRIT-Forest.html.", "which creates ?", "ESPRIT-Forest", 491.0, 504.0], ["Germline copy number variants (CNVs) and somatic copy number alterations (SCNAs) are of significant importance in syndromic conditions and cancer. Massively parallel sequencing is increasingly used to infer copy number information from variations in the read depth in sequencing data. However, this approach has limitations in the case of targeted re-sequencing, which leaves gaps in coverage between the regions chosen for enrichment and introduces biases related to the efficiency of target capture and library preparation. We present a method for copy number detection, implemented in the software package CNVkit, that uses both the targeted reads and the nonspecifically captured off-target reads to infer copy number evenly across the genome. This combination achieves both exon-level resolution in targeted regions and sufficient resolution in the larger intronic and intergenic regions to identify copy number changes. In particular, we successfully inferred copy number at equivalent to 100-kilobase resolution genome-wide from a platform targeting as few as 293 genes. 
After normalizing read counts to a pooled reference, we evaluated and corrected for three sources of bias that explain most of the extraneous variability in the sequencing read depth: GC content, target footprint size and spacing, and repetitive sequences. We compared the performance of CNVkit to copy number changes identified by array comparative genomic hybridization. We packaged the components of CNVkit so that it is straightforward to use and provides visualizations, detailed reporting of significant features, and export options for integration into existing analysis pipelines. CNVkit is freely available from https://github.com/etal/cnvkit.", "which creates ?", "CNVkit", 609.0, 615.0], ["Archaeogenomic research has proven to be a valuable tool to trace migrations of historic and prehistoric individuals and groups, whereas relationships within a group or burial site have not been investigated to a large extent. Knowing the genetic kinship of historic and prehistoric individuals would give important insights into social structures of ancient and historic cultures. Most archaeogenetic research concerning kinship has been restricted to uniparental markers, while studies using genome-wide information were mainly focused on comparisons between populations. Applications which infer the degree of relationship based on modern-day DNA information typically require diploid genotype data. Low concentration of endogenous DNA, fragmentation and other post-mortem damage to ancient DNA (aDNA) makes the application of such tools unfeasible for most archaeological samples. To infer family relationships for degraded samples, we developed the software READ (Relationship Estimation from Ancient DNA). We show that our heuristic approach can successfully infer up to second degree relationships with as little as 0.1x shotgun coverage per genome for pairs of individuals. We uncover previously unknown relationships among prehistoric individuals by applying READ to published aDNA data from several human remains excavated from different cultural contexts. In particular, we find a group of five closely related males from the same Corded Ware culture site in modern-day Germany, suggesting patrilocality, which highlights the possibility to uncover social structures of ancient populations by applying READ to genome-wide aDNA data. READ is publicly available from https://bitbucket.org/tguenther/read.", "which creates ?", "Relationship Estimation from Ancient DNA", 969.0, 1009.0], ["CellProfiler has enabled the scientific research community to create flexible, modular image analysis pipelines since its release in 2005. Here, we describe CellProfiler 3.0, a new version of the software supporting both whole-volume and plane-wise analysis of three-dimensional (3D) image stacks, increasingly common in biomedical research. CellProfiler\u2019s infrastructure is greatly improved, and we provide a protocol for cloud-based, large-scale image processing. New plugins enable running pretrained deep learning models on images. Designed by and for biologists, CellProfiler equips researchers with powerful computational tools via a well-documented user interface, empowering biologists in all fields to create quantitative, reproducible image analysis workflows.", "which creates ?", "CellProfiler", 0.0, 12.0], ["Modern DNA sequencing technologies enable geneticists to rapidly identify genetic variation among many human genomes. 
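CNVkit's bias correction can be sketched as stratified median normalization of read depth; the version below handles only GC content on simulated bins, whereas the tool itself uses rolling medians over GC, target footprint, and repeat content:

```python
import numpy as np

# Simulated bins with a GC-dependent coverage bias, then GC-stratified median
# normalization. Variable names and the 20-stratum scheme are illustrative.
rng = np.random.default_rng(0)
gc = rng.uniform(0.3, 0.7, size=1000)             # GC fraction per genomic bin
coverage = rng.poisson(100 * (0.5 + gc))          # biased read depth per bin

strata = np.clip((gc * 20).astype(int), 0, 19)    # 20 GC strata
medians = np.array([np.median(coverage[strata == s]) if np.any(strata == s)
                    else 1.0 for s in range(20)])
log2_ratio = np.log2(coverage / medians[strata])  # GC-corrected copy ratio
print(log2_ratio[:5])
```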
However, isolating the minority of variants underlying disease remains an important, yet formidable challenge for medical genetics. We have developed GEMINI (GEnome MINIng), a flexible software package for exploring all forms of human genetic variation. Unlike existing tools, GEMINI integrates genetic variation with a diverse and adaptable set of genome annotations (e.g., dbSNP, ENCODE, UCSC, ClinVar, KEGG) into a unified database to facilitate interpretation and data exploration. Whereas other methods provide an inflexible set of variant filters or prioritization methods, GEMINI allows researchers to compose complex queries based on sample genotypes, inheritance patterns, and both pre-installed and custom genome annotations. GEMINI also provides methods for ad hoc queries and data exploration, a simple programming interface for custom analyses that leverage the underlying database, and both command line and graphical tools for common analyses. We demonstrate GEMINI's utility for exploring variation in personal genomes and family based genetic studies, and illustrate its ability to scale to studies involving thousands of human samples. GEMINI is designed for reproducibility and flexibility and our goal is to provide researchers with a standard framework for medical genomics.", "which creates ?", "GEMINI", 268.0, 274.0], ["We propose a novel method and software tool, Strawberry, for transcript reconstruction and quantification from RNA-Seq data under the guidance of genome alignment and independent of gene annotation. Strawberry consists of two modules: assembly and quantification. The novelty of Strawberry is that the two modules use different optimization frameworks but utilize the same data graph structure, which allows a highly efficient, expandable and accurate algorithm for dealing with large data. The assembly module parses aligned reads into splicing graphs, and uses network flow algorithms to select the most likely transcripts. The quantification module uses a latent class model to assign read counts from the nodes of splicing graphs to transcripts. Strawberry simultaneously estimates the transcript abundances and corrects for sequencing bias through an EM algorithm. Based on simulations, Strawberry outperforms Cufflinks and StringTie in terms of both assembly and quantification accuracies. Under the evaluation of a real data set, the estimated transcript expression by Strawberry has the highest correlation with Nanostring probe counts, an independent experiment measure for transcript expression. Availability: Strawberry is written in C++14, and is available as open source software at https://github.com/ruolin/strawberry under the MIT license.", "which creates ?", "Strawberry", 45.0, 55.0], ["Existing methods for identifying structural variants (SVs) from short read datasets are inaccurate. This complicates disease-gene identification and efforts to understand the consequences of genetic variation. In response, we have created Wham (Whole-genome Alignment Metrics) to provide a single, integrated framework for both structural variant calling and association testing, thereby bypassing many of the difficulties that currently frustrate attempts to employ SVs in association testing. Here we describe Wham, benchmark it against three other widely used SV identification tools\u2013Lumpy, Delly and SoftSearch\u2013and demonstrate Wham\u2019s ability to identify and associate SVs with phenotypes using data from humans, domestic pigeons, and vaccinia virus.
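GEMINI's central idea is composing SQL queries over variants unified with annotations in one database. A self-contained illustration with Python's sqlite3 and a hypothetical, much-reduced schema (GEMINI's real schema and loaders are far richer):

```python
import sqlite3

# Toy variant table; columns and rows are invented for illustration only.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE variants
               (chrom TEXT, pos INTEGER, gene TEXT, impact TEXT, aaf REAL)""")
con.executemany("INSERT INTO variants VALUES (?, ?, ?, ?, ?)", [
    ("chr1", 155205634, "GBA",  "missense",   0.002),
    ("chr7", 117199644, "CFTR", "stop_gain",  0.0005),
    ("chr7", 117227792, "CFTR", "synonymous", 0.31),
])

# Rare, protein-changing variants -- the kind of filter such a database makes easy.
rows = con.execute("""SELECT chrom, pos, gene, impact FROM variants
                      WHERE impact != 'synonymous' AND aaf < 0.01""").fetchall()
print(rows)
```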
Wham and all associated software are covered under the MIT License and can be freely downloaded from github (https://github.com/zeeev/wham), with documentation on a wiki (http://zeeev.github.io/wham/). For community support please post questions to https://www.biostars.org/.", "which creates ?", "Wham", 239.0, 243.0], ["Active matter systems, and in particular the cell cytoskeleton, exhibit complex mechanochemical dynamics that are still not well understood. While prior computational models of cytoskeletal dynamics have led to many conceptual insights, an important niche still needs to be filled with a high-resolution structural modeling framework, which includes a minimally-complete set of cytoskeletal chemistries, stochastically treats reaction and diffusion processes in three spatial dimensions, accurately and efficiently describes mechanical deformations of the filamentous network under stresses generated by molecular motors, and deeply couples mechanics and chemistry at high spatial resolution. To address this need, we propose a novel reactive coarse-grained force field, as well as a publicly available software package, named the Mechanochemical Dynamics of Active Networks (MEDYAN), for simulating active network evolution and dynamics (available at www.medyan.org). This model can be used to study the non-linear, far from equilibrium processes in active matter systems, in particular, comprised of interacting semi-flexible polymers embedded in a solution with complex reaction-diffusion processes. In this work, we applied MEDYAN to investigate a contractile actomyosin network consisting of actin filaments, alpha-actinin cross-linking proteins, and non-muscle myosin IIA mini-filaments. We found that these systems undergo a switch-like transition in simulations from a random network to ordered, bundled structures when cross-linker concentration is increased above a threshold value, inducing contraction driven by myosin II mini-filaments. Our simulations also show how myosin II mini-filaments, in tandem with cross-linkers, can produce a range of actin filament polarity distributions and alignment, which is crucially dependent on the rate of actin filament turnover and the actin filament\u2019s resulting super-diffusive behavior in the actomyosin-cross-linker system. We discuss the biological implications of these findings for the arc formation in lamellipodium-to-lamellum architectural remodeling. Lastly, our simulations produce force-dependent accumulation of myosin II, which is thought to be responsible for their mechanosensation ability, also spontaneously generating myosin II concentration gradients in the solution phase of the simulation volume.", "which deposits ?", "Mechanochemical Dynamics of Active Networks", 832.0, 875.0], ["The rapid development of sequencing technology has led to an explosive accumulation of genomic sequence data. Clustering is often the first step to perform in sequence analysis, and hierarchical clustering is one of the most commonly used approaches for this purpose. However, it is currently computationally expensive to perform hierarchical clustering of extremely large sequence datasets due to its quadratic time and space complexities. In this paper we developed a new algorithm called ESPRIT-Forest for parallel hierarchical clustering of sequences.
The basic idea is to organize sequences into a pseudo-metric based partitioning tree for sub-linear time searching of nearest neighbors, and then use a new multiple-pair merging criterion to construct clusters in parallel using multiple threads. The new algorithm was tested on the human microbiome project (HMP) dataset, currently one of the largest published microbial 16S rRNA sequence datasets. Our experiment demonstrated that with the power of parallel computing it is now computationally feasible to perform hierarchical clustering analysis of tens of millions of sequences. The software is available at http://www.acsu.buffalo.edu/~yijunsun/lab/ESPRIT-Forest.html.", "which deposits ?", "software", 1281.0, 1289.0], ["We present ggsashimi, a command-line tool for the visualization of splicing events across multiple samples. Given a specified genomic region, ggsashimi creates sashimi plots for individual RNA-seq experiments as well as aggregated plots for groups of experiments, a feature unique to this software. Compared to the existing versions of programs generating sashimi plots, it uses popular bioinformatics file formats, it is annotation-independent, and allows the visualization of splicing events even for large genomic regions by scaling down the genomic segments between splice sites. ggsashimi is freely available at https://github.com/guigolab/ggsashimi. It is implemented in python, and internally generates R code for plotting.", "which deposits ?", "ggsashimi", 11.0, 20.0], ["The quantification of cell shape, cell migration, and cell rearrangements is important for addressing classical questions in developmental biology such as patterning and tissue morphogenesis. Time-lapse microscopic imaging of transgenic embryos expressing fluorescent reporters is the method of choice for tracking morphogenetic changes and establishing cell lineages and fate maps in vivo. However, the manual steps involved in curating thousands of putative cell segmentations have been a major bottleneck in the application of these technologies, especially for cell membranes. Segmentation of cell membranes, while more difficult than nuclear segmentation, is necessary for quantifying the relations between changes in cell morphology and morphogenesis. We present a novel and fully automated method to first reconstruct membrane signals and then segment out cells from 3D membrane images even in dense tissues. The approach has three stages: 1) detection of local membrane planes, 2) voting to fill structural gaps, and 3) region segmentation. We demonstrate the superior performance of the algorithms quantitatively on time-lapse confocal and two-photon images of zebrafish neuroectoderm and paraxial mesoderm by comparing its results with those derived from human inspection. We also compared with synthetic microscopic images generated by simulating the process of imaging with fluorescent reporters under varying conditions of noise. Both the over-segmentation and under-segmentation percentages of our method are around 5%. The volume overlap of individual cells, compared to expert manual segmentation, is consistently over 84%. By using our software (ACME) to study somite formation, we were able to segment touching cells with high accuracy and reliably quantify changes in morphogenetic parameters such as cell shape and size, and the arrangement of epithelial and mesenchymal cells.
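ESPRIT-Forest targets hierarchical clustering of sequences at scale. As a conceptual stand-in (without the partitioning tree or parallel merging that give the algorithm its subquadratic complexity), here is standard average-linkage clustering of k-mer profiles with SciPy:

```python
import numpy as np
from itertools import product
from scipy.cluster.hierarchy import linkage, fcluster

def kmer_profile(seq, k=3):
    """Normalized k-mer frequency vector for a DNA sequence (A/C/G/T only)."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    v = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        v[index[seq[i:i + k]]] += 1
    return v / max(v.sum(), 1)

seqs = ["ACGTACGTACGT", "ACGTACGAACGT", "TTTTGGGGCCCC", "TTTTGGGGCCCA"]
X = np.array([kmer_profile(s) for s in seqs])
Z = linkage(X, method="average", metric="euclidean")
print(fcluster(Z, t=2, criterion="maxclust"))  # two clusters expected
```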
Our software has been developed and tested on Windows, Mac, and Linux platforms and is available publicly under an open source BSD license (https://github.com/krm15/ACME).", "which deposits ?", "software", 1650.0, 1658.0], ["Over the past decades, quantitative methods linking theory and observation became increasingly important in many areas of life science. Subsequently, a large number of mathematical and computational models have been developed. The BioModels database alone lists more than 140,000 Systems Biology Markup Language (SBML) models. However, while the exchange within specific model classes has been supported by standardisation and database efforts, the generic application and especially the re-use of models is still limited by practical issues such as easy and straightforward model execution. MAGPIE, a Modeling and Analysis Generic Platform with Integrated Evaluation, closes this gap by providing a software platform for both publishing and executing computational models without restrictions on the programming language, thereby combining a maximum of flexibility for programmers with easy handling for non-technical users. MAGPIE goes beyond classical SBML platforms by including all models, independent of the underlying programming language, ranging from simple script models to complex data integration and computations. We demonstrate the versatility of MAGPIE using four prototypic example cases. We also outline the potential of MAGPIE to improve transparency and reproducibility of computational models in life sciences. A demo server is available at magpie.imb.medizin.tu-dresden.de.", "which deposits ?", "MAGPIE", 592.0, 598.0], ["Imaging and analyzing the locomotion behavior of small animals such as Drosophila larvae or C. elegans worms has become an integral subject of biological research. In the past we have introduced FIM, a novel imaging system capable of extracting high contrast images. This system in combination with the associated tracking software FIMTrack is already used by many groups all over the world. However, so far there has not been an in-depth discussion of the technical aspects. Here we elaborate on the implementation details of FIMTrack and give an in-depth explanation of the used algorithms. Among others, the software offers several tracking strategies to cover a wide range of different model organisms, locomotion types, and camera properties. Furthermore, the software facilitates stimuli-based analysis in combination with built-in manual tracking and correction functionalities. All features are integrated in an easy-to-use graphical user interface. To demonstrate the potential of FIMTrack we provide an evaluation of its accuracy using manually labeled data. The source code is available under the GNU GPLv3 at https://github.com/i-git/FIMTrack and pre-compiled binaries for Windows and Mac are available at http://fim.uni-muenster.de.", "which deposits ?", "source code", 1071.0, 1082.0], ["The rapid development of sequencing technology has led to an explosive accumulation of genomic sequence data. Clustering is often the first step to perform in sequence analysis, and hierarchical clustering is one of the most commonly used approaches for this purpose. However, it is currently computationally expensive to perform hierarchical clustering of extremely large sequence datasets due to its quadratic time and space complexities. In this paper we developed a new algorithm called ESPRIT-Forest for parallel hierarchical clustering of sequences.
The algorithm achieves subquadratic time and space complexity and maintains a high clustering accuracy comparable to the standard method. The basic idea is to organize sequences into a pseudo-metric based partitioning tree for sub-linear time searching of nearest neighbors, and then use a new multiple-pair merging criterion to construct clusters in parallel using multiple threads. The new algorithm was tested on the human microbiome project (HMP) dataset, currently one of the largest published microbial 16S rRNA sequence datasets. Our experiment demonstrated that with the power of parallel computing it is now computationally feasible to perform hierarchical clustering analysis of tens of millions of sequences. The software is available at http://www.acsu.buffalo.edu/~yijunsun/lab/ESPRIT-Forest.html.", "which deposits ?", "ESPRIT-Forest", 491.0, 504.0], ["Existing methods for identifying structural variants (SVs) from short read datasets are inaccurate. This complicates disease-gene identification and efforts to understand the consequences of genetic variation. In response, we have created Wham (Whole-genome Alignment Metrics) to provide a single, integrated framework for both structural variant calling and association testing, thereby bypassing many of the difficulties that currently frustrate attempts to employ SVs in association testing. Here we describe Wham, benchmark it against three other widely used SV identification tools\u2013Lumpy, Delly and SoftSearch\u2013and demonstrate Wham\u2019s ability to identify and associate SVs with phenotypes using data from humans, domestic pigeons, and vaccinia virus. Wham and all associated software are covered under the MIT License and can be freely downloaded from github (https://github.com/zeeev/wham), with documentation on a wiki (http://zeeev.github.io/wham/). For community support please post questions to https://www.biostars.org/.", "which deposits ?", "Wham", 239.0, 243.0], ["Advances in computational metabolic optimization are required to realize the full potential of new in vivo metabolic engineering technologies by bridging the gap between computational design and strain development. We present Redirector, a new Flux Balance Analysis-based framework for identifying engineering targets to optimize metabolite production in complex pathways. Previous optimization frameworks have modeled metabolic alterations as directly controlling fluxes by setting particular flux bounds. Redirector develops a more biologically relevant approach, modeling metabolic alterations as changes in the balance of metabolic objectives in the system. This framework iteratively selects enzyme targets, adds the associated reaction fluxes to the metabolic objective, thereby incentivizing flux towards the production of a metabolite of interest. These adjustments to the objective act in competition with cellular growth and represent up-regulation and down-regulation of enzyme mediated reactions. Using the iAF1260 E. coli metabolic network model for optimization of fatty acid production as a test case, Redirector generates designs with as many as 39 simultaneous and 111 unique engineering targets. These designs discover proven in vivo targets, novel supporting pathways and relevant interdependencies, many of which cannot be predicted by other methods.
Redirector is available as open and free software, scalable to computational resources, and powerful enough to find all known enzyme targets for fatty acid production.", "which deposits ?", "Redirector", 226.0, 236.0], ["Archaeogenomic research has proven to be a valuable tool to trace migrations of historic and prehistoric individuals and groups, whereas relationships within a group or burial site have not been investigated to a large extent. Knowing the genetic kinship of historic and prehistoric individuals would give important insights into social structures of ancient and historic cultures. Most archaeogenetic research concerning kinship has been restricted to uniparental markers, while studies using genome-wide information were mainly focused on comparisons between populations. Applications which infer the degree of relationship based on modern-day DNA information typically require diploid genotype data. Low concentration of endogenous DNA, fragmentation and other post-mortem damage to ancient DNA (aDNA) makes the application of such tools unfeasible for most archaeological samples. To infer family relationships for degraded samples, we developed the software READ (Relationship Estimation from Ancient DNA). We show that our heuristic approach can successfully infer up to second degree relationships with as little as 0.1x shotgun coverage per genome for pairs of individuals. We uncover previously unknown relationships among prehistoric individuals by applying READ to published aDNA data from several human remains excavated from different cultural contexts. In particular, we find a group of five closely related males from the same Corded Ware culture site in modern-day Germany, suggesting patrilocality, which highlights the possibility to uncover social structures of ancient populations by applying READ to genome-wide aDNA data. READ is publicly available from https://bitbucket.org/tguenther/read.", "which deposits ?", "READ", 963.0, 967.0], ["There is increasing evidence that protein dynamics and conformational changes can play an important role in modulating biological function. As a result, experimental and computational methods are being developed, often synergistically, to study the dynamical heterogeneity of a protein or other macromolecules in solution. Thus, methods such as molecular dynamics simulations or ensemble refinement approaches have provided conformational ensembles that can be used to understand protein function and biophysics. These developments have in turn created a need for algorithms and software that can be used to compare structural ensembles in the same way as the root-mean-square-deviation is often used to compare static structures. Although a few such approaches have been proposed, these can be difficult to implement efficiently, hindering broader applications and further developments. Here, we present an easily accessible software toolkit, called ENCORE, which can be used to compare conformational ensembles generated either from simulations alone or synergistically with experiments. ENCORE implements three previously described methods for ensemble comparison, each of which can be used to quantify the similarity between conformational ensembles by estimating the overlap between the probability distributions that underlie them.
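Redirector builds on flux balance analysis, i.e., a linear program over reaction fluxes. A minimal FBA example with scipy.optimize.linprog on a made-up two-metabolite network; Redirector's contribution, the iterative re-weighting of the objective toward product formation, is not reproduced here:

```python
import numpy as np
from scipy.optimize import linprog

# Metabolites: A, B. Reactions: R1: -> A, R2: A -> B, R3: B -> (export).
# Maximize the export flux v3 subject to steady state S v = 0 and bounds.
S = np.array([[ 1, -1,  0],    # mass balance for A
              [ 0,  1, -1]])   # mass balance for B
c = np.array([0, 0, -1.0])     # linprog minimizes, so negate to maximize v3
bounds = [(0, 10), (0, 10), (0, 10)]

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)  # expect [10, 10, 10]
```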
We demonstrate the kinds of insights that can be obtained by providing examples of three typical use-cases: comparing ensembles generated with different molecular force fields, assessing convergence in molecular simulations, and calculating differences and similarities in structural ensembles refined with various sources of experimental data. We also demonstrate efficient computational scaling for typical analyses, and robustness against both the size and sampling of the ensembles. ENCORE is freely available and extendable, integrates with the established MDAnalysis software package, reads ensemble data in many common formats, and can work with large trajectory files.", "which deposits ?", "ENCORE", 953.0, 959.0], ["Despite the growing number of immune repertoire sequencing studies, the field still lacks software for analysis and comprehension of this high-dimensional data. Here we report VDJtools, a complementary software suite that solves a wide range of T cell receptor (TCR) repertoire post-analysis tasks, provides a detailed tabular output and publication-ready graphics, and is built on top of a flexible API. Using TCR datasets for a large cohort of unrelated healthy donors, twins, and multiple sclerosis patients we demonstrate that VDJtools greatly facilitates the analysis and leads to sound biological conclusions. VDJtools software and documentation are available at https://github.com/mikessh/vdjtools.", "which deposits ?", "VDJtools", 176.0, 184.0], ["Our current understanding of the molecular mechanisms which regulate cellular processes such as vesicular trafficking has been enabled by conventional biochemical and microscopy techniques. However, these methods often obscure the heterogeneity of the cellular environment, thus precluding a quantitative assessment of the molecular interactions regulating these processes. Herein, we present Molecular Interactions in Super Resolution (MIiSR) software which provides quantitative analysis tools for use with super-resolution images. MIiSR combines multiple tools for analyzing intermolecular interactions, molecular clustering and image segmentation. These tools enable quantification, in the native environment of the cell, of molecular interactions and the formation of higher-order molecular complexes. The capabilities and limitations of these analytical tools are demonstrated using both modeled data and examples derived from the vesicular trafficking system, thereby providing an established and validated experimental workflow capable of quantitatively assessing molecular interactions and molecular complex formation within the heterogeneous environment of the cell.", "which deposits ?", "MIiSR", 437.0, 442.0], ["Grooming is a complex and robust innate behavior, commonly performed by most vertebrate species. In mice, grooming consists of a series of stereotyped patterned strokes, performed along the rostro-caudal axis of the body. The frequency and duration of each grooming episode are sensitive to changes in stress levels, social interactions and pharmacological manipulations, and are therefore used in behavioral studies to gain insights into the function of brain regions that control movement execution and anxiety. Traditional approaches to analyze grooming rely on manually scoring the time of onset and duration of each grooming episode, and are often performed on grooming episodes triggered by stress exposure, which may not be entirely representative of spontaneous grooming in freely-behaving mice.
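ENCORE estimates ensemble similarity from the overlap of the underlying probability distributions. A reduced, one-dimensional stand-in using the Jensen-Shannon distance between feature histograms; real conformational ensembles are high-dimensional coordinate sets:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Compare histograms of a 1-D structural feature (e.g., radius of gyration)
# from two simulated "ensembles"; values and distributions are made up.
rng = np.random.default_rng(1)
ens_a = rng.normal(1.50, 0.10, 5000)   # feature values, ensemble A
ens_b = rng.normal(1.55, 0.12, 5000)   # feature values, ensemble B

edges = np.linspace(1.0, 2.0, 51)
p, _ = np.histogram(ens_a, bins=edges, density=True)
q, _ = np.histogram(ens_b, bins=edges, density=True)
print("JS distance:", jensenshannon(p, q, base=2))  # 0 = identical ensembles
```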
This type of analysis is time-consuming and provides limited information about finer aspects of grooming behaviors, which are important to understand movement stereotypy and bilateral coordination in mice. Currently available commercial and freeware video-tracking software allow automated tracking of the whole body of a mouse or of its head and tail, not of individual forepaws. Here we describe a simple experimental set-up and a novel open-source code, named M-Track, for simultaneously tracking the movement of individual forepaws during spontaneous grooming in multiple freely-behaving mice. This toolbox provides a simple platform to perform trajectory analysis of forepaw movement during distinct grooming episodes. By using M-Track we show that, in C57BL/6 wild type mice, the speed and bilateral coordination of the left and right forepaws remain unaltered during the execution of distinct grooming episodes. Stress exposure induces a profound increase in the length of the forepaw grooming trajectories. M-Track provides a valuable and user-friendly interface to streamline the analysis of spontaneous grooming in biomedical research studies.", "which deposits ?", "M-Track", 1265.0, 1272.0], ["Germline copy number variants (CNVs) and somatic copy number alterations (SCNAs) are of significant importance in syndromic conditions and cancer. Massively parallel sequencing is increasingly used to infer copy number information from variations in the read depth in sequencing data. However, this approach has limitations in the case of targeted re-sequencing, which leaves gaps in coverage between the regions chosen for enrichment and introduces biases related to the efficiency of target capture and library preparation. We present a method for copy number detection, implemented in the software package CNVkit, that uses both the targeted reads and the nonspecifically captured off-target reads to infer copy number evenly across the genome. This combination achieves both exon-level resolution in targeted regions and sufficient resolution in the larger intronic and intergenic regions to identify copy number changes. In particular, we successfully inferred copy number at equivalent to 100-kilobase resolution genome-wide from a platform targeting as few as 293 genes. After normalizing read counts to a pooled reference, we evaluated and corrected for three sources of bias that explain most of the extraneous variability in the sequencing read depth: GC content, target footprint size and spacing, and repetitive sequences. We compared the performance of CNVkit to copy number changes identified by array comparative genomic hybridization. We packaged the components of CNVkit so that it is straightforward to use and provides visualizations, detailed reporting of significant features, and export options for integration into existing analysis pipelines. CNVkit is freely available from https://github.com/etal/cnvkit.", "which deposits ?", "CNVkit", 609.0, 615.0], ["Zoonotic diseases are a major cause of morbidity and productivity losses in both human and animal populations. Identifying the source of food-borne zoonoses (e.g. an animal reservoir or food product) is crucial for the identification and prioritisation of food safety interventions. For many zoonotic diseases it is difficult to attribute human cases to sources of infection because there is little epidemiological information on the cases.
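The trajectory analysis M-Track enables boils down to statistics over tracked (x, y) forepaw positions. A tiny sketch with made-up coordinates and frame rate:

```python
import numpy as np

# Path length and mean speed from tracked positions; sample data invented.
fps = 30.0
xy = np.array([[10.0, 5.0], [10.5, 5.2], [11.2, 5.1], [12.0, 5.6]])  # pixels

steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # per-frame displacement
path_length = steps.sum()
mean_speed = steps.mean() * fps                      # pixels per second
print(f"path length: {path_length:.2f} px, mean speed: {mean_speed:.1f} px/s")
```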
However, microbial strain typing allows zoonotic pathogens to be categorised, and the relative frequencies of the strain types among the sources and in human cases allow inference on the likely source of each infection. We introduce sourceR, an R package for quantitative source attribution, aimed at food-borne diseases. It implements a Bayesian model using strain-typed surveillance data from both human cases and source samples, capable of identifying important sources of infection. The model measures the force of infection from each source, allowing for varying survivability, pathogenicity and virulence of pathogen strains, and varying abilities of the sources to act as vehicles of infection. A Bayesian non-parametric (Dirichlet process) approach is used to cluster pathogen strain types by epidemiological behaviour, avoiding model overfitting and allowing detection of strain types associated with potentially high \u201cvirulence\u201d. sourceR is demonstrated using Campylobacter jejuni isolate data collected in New Zealand between 2005 and 2008. Chicken from a particular poultry supplier was identified as the major source of campylobacteriosis, which is qualitatively similar to results of previous studies using the same dataset. Additionally, the software identifies a cluster of 9 multilocus sequence types with abnormally high \u2018virulence\u2019 in humans. sourceR enables straightforward attribution of cases of zoonotic infection to putative sources of infection. As sourceR develops, we intend it to become an important and flexible resource for food-borne disease attribution studies.", "which deposits ?", "sourceR", 676.0, 683.0], ["Single-cell RNA sequencing (scRNA-seq) technology allows researchers to profile the transcriptomes of thousands of cells simultaneously. Protocols that incorporate both designed and random barcodes have greatly increased the throughput of scRNA-seq, but give rise to a more complex data structure. There is a need for new tools that can handle the various barcoding strategies used by different protocols and exploit this information for quality assessment at the sample level and provide effective visualization of these results in preparation for higher-level analyses. To this end, we developed scPipe, an R/Bioconductor package that integrates barcode demultiplexing, read alignment, UMI-aware gene-level quantification and quality control of raw sequencing data generated by multiple 3-prime-end sequencing protocols that include CEL-seq, MARS-seq, Chromium 10X and Drop-seq. scPipe produces a count matrix that is essential for downstream analysis along with an HTML report that summarises data quality. These results can be used as input for downstream analyses including normalization, visualization and statistical testing. scPipe performs this processing in a few simple R commands, promoting reproducible analysis of single-cell data that is compatible with the emerging suite of scRNA-seq analysis tools available in R/Bioconductor. The scPipe R package is available for download from https://www.bioconductor.org/packages/scPipe.", "which deposits ?", "scPipe", 608.0, 614.0], ["The Dynamic Regulatory Events Miner (DREM) software reconstructs dynamic regulatory networks by integrating static protein-DNA interaction data with time series gene expression data.
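sourceR attributes human cases to sources via the relative frequencies of strain types. The sketch below is a deliberately simplified proportional ("Hald-type") attribution with invented counts; sourceR's actual model is Bayesian and nonparametric, with source- and type-specific effects:

```python
import numpy as np

sources = ["chicken", "cattle", "water"]
# rows: strain types ST1..ST3; columns: relative prevalence in each source
prev = np.array([[0.60, 0.10, 0.05],
                 [0.20, 0.50, 0.10],
                 [0.05, 0.05, 0.40]])
human_cases = np.array([120, 40, 15])   # observed cases per strain type

# attribute each type's cases proportionally to prevalence, then sum per source
weights = prev / prev.sum(axis=1, keepdims=True)
attributed = (weights * human_cases[:, None]).sum(axis=0)
for s, n in zip(sources, attributed):
    print(f"{s}: {n:.1f} cases")
```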
In recent years, several additional types of high-throughput time series data have been profiled when studying biological processes including time series miRNA expression, proteomics, epigenomics and single cell RNA-Seq. Combining all available time series and static datasets in a unified model remains an important challenge and goal. To address this challenge we have developed a new version of DREM termed interactive DREM (iDREM). iDREM provides support for all data types mentioned above and combines them with existing interaction data to reconstruct networks that can lead to novel hypotheses on the function and timing of regulators. Users can interactively visualize and query the resulting model. We showcase the functionality of the new tool by applying it to microglia developmental data from multiple labs.", "which deposits ?", "iDREM", 611.0, 616.0], ["Integrated information theory provides a mathematical framework to fully characterize the cause-effect structure of a physical system. Here, we introduce PyPhi, a Python software package that implements this framework for causal analysis and unfolds the full cause-effect structure of discrete dynamical systems of binary elements. The software allows users to easily study these structures, serves as an up-to-date reference implementation of the formalisms of integrated information theory, and has been applied in research on complexity, emergence, and certain biological questions. We first provide an overview of the main algorithm and demonstrate PyPhi\u2019s functionality in the course of analyzing an example system, and then describe details of the algorithm\u2019s design and implementation. PyPhi can be installed with Python\u2019s package manager via the command \u2018pip install pyphi\u2019 on Linux and macOS systems equipped with Python 3.4 or higher. PyPhi is open-source and licensed under the GPLv3; the source code is hosted on GitHub at https://github.com/wmayner/pyphi. Comprehensive and continually-updated documentation is available at https://pyphi.readthedocs.io. The pyphi-users mailing list can be joined at https://groups.google.com/forum/#!forum/pyphi-users. A web-based graphical interface to the software is available at http://integratedinformationtheory.org/calculate.html.", "which deposits ?", "PyPhi", 154.0, 159.0], ["Identification of single nucleotide polymorphisms (SNPs) and mutations is important for the discovery of genetic predisposition to complex diseases. PCR resequencing is the method of choice for de novo SNP discovery. However, manual curation of putative SNPs has been a major bottleneck in the application of this method to high-throughput screening. Therefore it is critical to develop a more sensitive and accurate computational method for automated SNP detection. We developed a software tool, SNPdetector, for automated identification of SNPs and mutations in fluorescence-based resequencing reads. SNPdetector was designed to model the process of human visual inspection and has a very low false positive and false negative rate. We demonstrate the superior performance of SNPdetector in SNP and mutation analysis by comparing its results with those derived by human inspection, PolyPhred (a popular SNP detection tool), and independent genotype assays in three large-scale investigations. The first study identified and validated inter- and intra-subspecies variations in 4,650 traces of 25 inbred mouse strains that belong to either the Mus musculus species or the M. spretus species. 
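Following the installation command quoted in the PyPhi entry above, a usage sketch based on the basic example in the PyPhi documentation; this assumes the PyPhi 1.x API, and names or signatures may differ between versions:

```python
import pyphi

# Build the documented 3-node example network, fix a state, and compute the
# integrated information of the full subsystem.
network = pyphi.examples.basic_network()
state = (1, 0, 0)                      # current binary state of the nodes
node_indices = (0, 1, 2)               # analyze the whole system
subsystem = pyphi.Subsystem(network, state, node_indices)
print(pyphi.compute.phi(subsystem))    # big phi of the subsystem
```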
Unexpected heterozygosity in CAST/Ei strain was observed in two out of 1,167 mouse SNPs. The second study identified 11,241 candidate SNPs in five ENCODE regions of the human genome covering 2.5 Mb of genomic sequence. Approximately 50% of the candidate SNPs were selected for experimental genotyping; the validation rate exceeded 95%. The third study detected ENU-induced mutations (at 0.04% allele frequency) in 64,896 traces of 1,236 zebra fish. Our analysis of three large and diverse test datasets demonstrated that SNPdetector is an effective tool for genome-scale research and for large-sample clinical studies. SNPdetector runs on the Unix/Linux platform and is available publicly (http://lpg.nci.nih.gov).", "which deposits ?", "SNPdetector", 497.0, 508.0], ["Bayesian Networks (BN) have been a popular predictive modeling formalism in bioinformatics, but their application in modern genomics has been slowed by an inability to cleanly handle domains with mixed discrete and continuous variables. Existing free BN software packages either discretize continuous variables, which can lead to information loss, or do not include inference routines, which makes prediction with the BN impossible. We present CGBayesNets, a BN package focused around prediction of a clinical phenotype from mixed discrete and continuous variables, which fills these gaps. CGBayesNets implements Bayesian likelihood and inference algorithms for the conditional Gaussian Bayesian network (CGBNs) formalism, one appropriate for predicting an outcome of interest from, e.g., multimodal genomic data. We provide four different network learning algorithms, each making a different tradeoff between computational cost and network likelihood. CGBayesNets provides a full suite of functions for model exploration and verification, including cross validation, bootstrapping, and AUC manipulation. We highlight several results obtained previously with CGBayesNets, including predictive models of wood properties from tree genomics, leukemia subtype classification from mixed genomic data, and robust prediction of intensive care unit mortality outcomes from metabolomic profiles. We also provide detailed example analysis on public metabolomic and gene expression datasets. CGBayesNets is implemented in MATLAB and available as MATLAB source code, under an Open Source license and anonymous download at http://www.cgbayesnets.com.", "which deposits ?", "CGBayesNets", 444.0, 455.0], ["In order to access and filter content of life-science databases, full text search is a widely applied query interface. But its high flexibility and intuitiveness are paid for with potentially imprecise and incomplete query results. To reduce this drawback, query assistance systems suggest those combinations of keywords with the highest potential to match most of the relevant data records. Widespread approaches are syntactic query corrections that avoid misspelling and support expansion of words by suffixes and prefixes. Synonym expansion approaches apply thesauri, ontologies, and query logs. All need laborious curation and maintenance. Furthermore, access to query logs is in general restricted. Approaches that infer related queries from a user\u2019s profile, such as research field, geographic location, co-authorship, or affiliation, require registration and public accessibility of that profile, which conflicts with privacy concerns. To overcome these drawbacks, we implemented LAILAPS-QSM, a machine learning approach that reconstructs possible linguistic contexts of a given keyword query.
The context is inferred from the text records stored in the databases to be queried, or extracted from PubMed abstracts and UniProt data for general-purpose query suggestion. The supplied tool suite enables the pre-processing of these text records and the further computation of customized distributed word vectors. The latter are used to suggest alternative keyword queries. An evaluation of the query suggestion quality was performed for plant science use cases. Local experts enabled a cost-efficient quality assessment in the categories trait, biological entity, taxonomy, affiliation, and metabolic function, which was performed using ontology term similarities. LAILAPS-QSM mean information content similarity for 15 representative queries is 0.70, whereas 34% have a score above 0.80. In comparison, the information content similarity for query suggestions made by human experts is 0.90. The software is available either as a tool set to build and train dedicated query suggestion services or as an already trained general-purpose RESTful web service. The service uses open interfaces to be seamlessly embeddable into database frontends. The JAVA implementation uses highly optimized data structures and streamlined code to provide fast and scalable responses to web service calls. The source code of LAILAPS-QSM is available under GNU General Public License version 2 in a Bitbucket Git repository: https://bitbucket.org/ipk_bit_team/bioescorte-suggestion", "which deposits ?", "LAILAPS-QSM", 973.0, 984.0], ["A user-ready, portable, documented software package, NFTsim, is presented to facilitate numerical simulations of a wide range of brain systems using continuum neural field modeling. NFTsim enables users to simulate key aspects of brain activity at multiple scales. At the microscopic scale, it incorporates characteristics of local interactions between cells, neurotransmitter effects, synaptodendritic delays and feedbacks. At the mesoscopic scale, it incorporates information about medium to large scale axonal ranges of fibers, which are essential to model dissipative wave transmission and to produce synchronous oscillations and associated cross-correlation patterns as observed in local field potential recordings of active tissue. At the scale of the whole brain, NFTsim allows for the inclusion of long range pathways, such as thalamocortical projections, when generating macroscopic activity fields. The multiscale nature of the neural activity produced by NFTsim has the potential to enable the modeling of resulting quantities measurable via various neuroimaging techniques. In this work, we give a comprehensive description of the design and implementation of the software. Due to its modularity and flexibility, NFTsim enables the systematic study of an unlimited number of neural systems with multiple neural populations under a unified framework and allows for direct comparison with analytic and experimental predictions. The code is written in C++ and bundled with Matlab routines for a rapid quantitative analysis and visualization of the outputs. The output of NFTsim is stored in plain text files, enabling users to select from a broad range of tools for offline analysis. This software enables a wide and convenient use of powerful physiologically-based neural field approaches to brain modeling.
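The customized distributed word vectors behind LAILAPS-QSM's suggestions can be illustrated with a word2vec model; the sketch below uses gensim as a stand-in (an assumption -- LAILAPS-QSM itself is implemented in Java) and a made-up three-sentence corpus in place of PubMed/UniProt text records:

```python
from gensim.models import Word2Vec  # gensim >= 4

# Tiny, invented corpus of tokenized abstracts; real training data is much larger.
corpus = [
    ["drought", "tolerance", "wheat", "yield"],
    ["drought", "stress", "barley", "yield"],
    ["disease", "resistance", "wheat", "pathogen"],
]
model = Word2Vec(sentences=corpus, vector_size=16, window=2,
                 min_count=1, epochs=50, seed=1)
# Suggest alternative keywords by vector similarity.
print(model.wv.most_similar("drought", topn=3))
```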
NFTsim is distributed under the Apache 2.0 license.", "which deposits ?", "NFTsim", 53.0, 59.0], ["We propose a novel method and software tool, Strawberry, for transcript reconstruction and quantification from RNA-Seq data under the guidance of genome alignment and independent of gene annotation. Strawberry consists of two modules: assembly and quantification. The novelty of Strawberry is that the two modules use different optimization frameworks but utilize the same data graph structure, which allows a highly efficient, expandable and accurate algorithm for dealing with large data. The assembly module parses aligned reads into splicing graphs, and uses network flow algorithms to select the most likely transcripts. The quantification module uses a latent class model to assign read counts from the nodes of splicing graphs to transcripts. Strawberry simultaneously estimates the transcript abundances and corrects for sequencing bias through an EM algorithm. Based on simulations, Strawberry outperforms Cufflinks and StringTie in terms of both assembly and quantification accuracies. Under the evaluation of a real data set, the estimated transcript expression by Strawberry has the highest correlation with Nanostring probe counts, an independent experiment measure for transcript expression. Availability: Strawberry is written in C++14, and is available as open source software at https://github.com/ruolin/strawberry under the MIT license.", "which deposits ?", "Strawberry", 45.0, 55.0], ["We present a new open source, extensible and flexible software platform for Bayesian evolutionary analysis called BEAST 2. This software platform is a re-design of the popular BEAST 1 platform to correct structural deficiencies that became evident as the BEAST 1 software evolved. Key among those deficiencies was the lack of post-deployment extensibility. BEAST 2 now has a fully developed package management system that allows third party developers to write additional functionality that can be directly installed to the BEAST 2 analysis platform via a package manager without requiring a new software release of the platform. This package architecture is showcased with a number of recently published new models encompassing birth-death-sampling tree priors, phylodynamics and model averaging for substitution models and site partitioning. A second major improvement is the ability to read/write the entire state of the MCMC chain to/from disk, allowing it to be easily shared between multiple instances of the BEAST software. This facilitates checkpointing and better support for multi-processor and high-end computing extensions. Finally, the functionality in new packages can be easily added to the user interface (BEAUti 2) by a simple XML template-based mechanism because BEAST 2 has been re-designed to provide greater integration between the analysis engine and the user interface so that, for example BEAST and BEAUti use exactly the same XML file format.", "which deposits ?", "BEAST", 114.0, 119.0], ["Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements.
These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. AVAILABILITY: QuIN\u2019s web server is available at http://quin.jax.org. QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and a MySQL database; the source code is available under the GPLv3 license on GitHub: https://github.com/UcarLab/QuIN/.", "which deposits ?", "QuIN", 1082.0, 1086.0], ["Genome-scale models of metabolism and macromolecular expression (ME-models) explicitly compute the optimal proteome composition of a growing cell. ME-models expand upon the well-established genome-scale models of metabolism (M-models), and they enable a new fundamental understanding of cellular growth. ME-models have increased predictive capabilities and accuracy due to their inclusion of the biosynthetic costs for the machinery of life, but they come with a significant increase in model size and complexity. This challenge results in models which are both difficult to compute and challenging to understand conceptually. As a result, ME-models exist for only two organisms (Escherichia coli and Thermotoga maritima) and are still used by relatively few researchers. To address these challenges, we have developed a new software framework called COBRAme for building and simulating ME-models. It is coded in Python and built on COBRApy, a popular platform for using M-models. COBRAme streamlines computation and analysis of ME-models. It provides tools to simplify constructing and editing ME-models to enable ME-model reconstructions for new organisms. We used COBRAme to reconstruct a condensed E. coli ME-model called iJL1678b-ME. This reformulated model gives functionally identical solutions to previous E. coli ME-models while using 1/6 the number of free variables and solving in less than 10 minutes, a marked improvement over the 6-hour solve time of previous ME-model formulations. Errors in previous ME-models were also corrected, leading to 52 additional genes that must be expressed in iJL1678b-ME to grow aerobically in glucose minimal in silico media. This manuscript outlines the architecture of COBRAme and demonstrates how ME-models can be created, modified, and shared most efficiently using the new software framework.", "which deposits ?", "COBRAme", 851.0, 858.0], ["Detecting similarities between ligand binding sites in the absence of global homology between target proteins has been recognized as one of the critical components of modern drug discovery.
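QuIN's network-based prioritization can be sketched with networkx: chromatin interactions become edges, and network measures rank candidate regulatory targets. Nodes and edges below are invented, not QuIN's data model:

```python
import networkx as nx

# Anchor regions/genes as nodes, ChIA-PET/Hi-C contacts as edges (made up).
G = nx.Graph()
G.add_edges_from([
    ("enhancer_1", "GENE_A"), ("enhancer_1", "GENE_B"),
    ("enhancer_2", "GENE_B"), ("promoter_B", "GENE_B"),
    ("enhancer_3", "GENE_C"),
])

# Rank nodes by degree, breaking ties with betweenness centrality.
degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G)
ranked = sorted(G.nodes, key=lambda n: (degree[n], betweenness[n]), reverse=True)
print("top candidates:", ranked[:3])
```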
Local binding site alignments can be constructed using sequence order-independent techniques; however, to achieve high accuracy, many current algorithms for binding site comparison require high-quality experimental protein structures, preferably in the bound conformational state. This, in turn, complicates proteome scale applications, where only structure models of varying quality are available for the majority of gene products. To improve the state-of-the-art, we developed eMatchSite, a new method for constructing sequence order-independent alignments of ligand binding sites in protein models. Large-scale benchmarking calculations using adenine-binding pockets in crystal structures demonstrate that eMatchSite generates accurate alignments for almost three times more protein pairs than SOIPPA. More importantly, eMatchSite offers a high tolerance to structural distortions in ligand binding regions in protein models. For example, the percentage of correctly aligned pairs of adenine-binding sites in weakly homologous protein models is only 4\u20139% lower than those aligned using crystal structures. This represents a significant improvement over other algorithms, e.g. the performance of eMatchSite in recognizing similar binding sites is 6% and 13% higher than that of SiteEngine using high- and moderate-quality protein models, respectively. Constructing biologically correct alignments using predicted ligand binding sites in protein models opens up the possibility to investigate drug-protein interaction networks for complete proteomes with prospective systems-level applications in polypharmacology and rational drug repositioning. eMatchSite is freely available to the academic community as a web-server and a stand-alone software distribution at http://www.brylinski.org/ematchsite.", "which deposits ?", "eMatchSite", 668.0, 678.0], ["Live-cell imaging by light microscopy has demonstrated that all cells are spatially and temporally organized. Quantitative, computational image analysis is an important part of cellular imaging, providing both enriched information about individual cell properties and the ability to analyze large datasets. However, such studies are often limited by the small size and variable shape of objects of interest. Here, we address two outstanding problems in bacterial cell division by developing a generally applicable, standardized, and modular software suite termed Projected System of Internal Coordinates from Interpolated Contours (PSICIC) that solves common problems in image quantitation. PSICIC implements interpolated-contour analysis for accurate and precise determination of cell borders and automatically generates internal coordinate systems that are superimposable regardless of cell geometry. We have used PSICIC to establish that the cell-fate determinant, SpoIIE, is asymmetrically localized during Bacillus subtilis sporulation, thereby demonstrating the ability of PSICIC to discern protein localization features at sub-pixel scales. We also used PSICIC to examine the accuracy of cell division in Escherichia coli and found a new role for the Min system in regulating division-site placement throughout the cell length, but only prior to the initiation of cell constriction.
These results extend our understanding of the regulation of both asymmetry and accuracy in bacterial division while demonstrating the general applicability of PSICIC as a computational approach for quantitative, high-throughput analysis of cellular images.", "which deposits ?", "PSICIC", 632.0, 638.0], ["Various algorithms have been developed for variant calling using next-generation sequencing data, and various methods have been applied to reduce the associated false positive and false negative rates. Few variant calling programs, however, utilize the pedigree information when the family-based sequencing data are available. Here, we present a program, FamSeq, which reduces both false positive and false negative rates by incorporating the pedigree information from the Mendelian genetic model into variant calling. To accommodate variations in data complexity, FamSeq consists of four distinct implementations of the Mendelian genetic model: the Bayesian network algorithm, a graphics processing unit version of the Bayesian network algorithm, the Elston-Stewart algorithm and the Markov chain Monte Carlo algorithm. To make the software efficient and applicable to large families, we parallelized the Bayesian network algorithm that copes with pedigrees with inbreeding loops without losing calculation precision on an NVIDIA graphics processing unit. In order to compare the differences among the four methods, we applied FamSeq to pedigree sequencing data with family sizes that varied from 7 to 12. When there is no inbreeding loop in the pedigree, the Elston-Stewart algorithm gives analytical results in a short time. If there are inbreeding loops in the pedigree, we recommend the Bayesian network method, which provides exact answers. To improve the computing speed of the Bayesian network method, we parallelized the computation on a graphics processing unit. This allowed the Bayesian network method to process the whole genome sequencing data of a family of 12 individuals within two days, which was a 10-fold time reduction compared to the time required for this computation on a central processing unit.", "which deposits ?", "FamSeq", 355.0, 361.0], ["Accurate inference of molecular and functional interactions among genes, especially in multicellular organisms such as Drosophila, often requires statistical analysis of correlations not only between the magnitudes of gene expressions, but also between their temporal-spatial patterns. The ISH (in-situ-hybridization)-based gene expression micro-imaging technology offers an effective approach to perform large-scale spatial-temporal profiling of whole-body mRNA abundance. However, analytical tools for discovering gene interactions from such data remain an open challenge due to various reasons, including difficulties in extracting canonical representations of gene activities from images, and in inference of statistically meaningful networks from such representations. In this paper, we present GINI, a machine learning system for inferring gene interaction networks from Drosophila embryonic ISH images. GINI builds on a computer-vision-inspired vector-space representation of the spatial pattern of gene expression in ISH images, enabled by our recently developed system; and a new multi-instance-kernel algorithm that learns a sparse Markov network model, in which every gene (i.e., node) in the network is represented by a vector-valued spatial pattern rather than a scalar-valued gene intensity as in conventional approaches such as a Gaussian graphical model.
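The Mendelian genetic model at the heart of FamSeq constrains a child's genotype given the parents'. A standalone illustration of trio transmission probabilities (genotypes coded as alternate-allele counts), not FamSeq's own code:

```python
from itertools import product

def transmit(parent):
    """P(allele passed to child | parent genotype 0/1/2)."""
    return {0: {0: 1.0}, 1: {0: 0.5, 1: 0.5}, 2: {1: 1.0}}[parent]

def child_given_parents(mother, father):
    """P(child genotype | parental genotypes) under Mendelian transmission."""
    probs = {0: 0.0, 1: 0.0, 2: 0.0}
    for (a, pa), (b, pb) in product(transmit(mother).items(),
                                    transmit(father).items()):
        probs[a + b] += pa * pb
    return probs

print(child_given_parents(1, 1))  # {0: 0.25, 1: 0.5, 2: 0.25}
```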
By capturing the notion of spatial similarity of gene expression, and at the same time properly taking into account the presence of multiple images per gene via multi-instance kernels, GINI is well-positioned to infer statistically sound and biologically meaningful gene interaction networks from image data. Using both synthetic data and a small manually curated data set, we demonstrate the effectiveness of our approach in network building. Furthermore, we report results on a large publicly available collection of Drosophila embryonic ISH images from the Berkeley Drosophila Genome Project, where GINI makes novel and interesting predictions of gene interactions. Software for GINI is available at http://sailing.cs.cmu.edu/Drosophila_ISH_images/", "which deposits ?", "GINI", 800.0, 804.0], ["A calibrated computational model reflects behaviours that are expected or observed in a complex system, providing a baseline upon which sensitivity analysis techniques can be used to analyse pathways that may impact model responses. However, calibration of a model where a behaviour depends on an intervention introduced after a defined time point is difficult, as model responses may be dependent on the conditions at the time the intervention is applied. We present ASPASIA (Automated Simulation Parameter Alteration and SensItivity Analysis), a cross-platform, open-source Java toolkit that addresses a key deficiency in software tools for understanding the impact an intervention has on system behaviour for models specified in Systems Biology Markup Language (SBML). ASPASIA can generate and modify models using SBML solver output as an initial parameter set, allowing interventions to be applied once a steady state has been reached. Additionally, multiple SBML models can be generated where a subset of parameter values are perturbed using local and global sensitivity analysis techniques, revealing the model\u2019s sensitivity to the intervention. To illustrate the capabilities of ASPASIA, we demonstrate how this tool has generated novel hypotheses regarding the mechanisms by which Th17-cell plasticity may be controlled in vivo. By using ASPASIA in conjunction with an SBML model of Th17-cell polarisation, we predict that promotion of the Th1-associated transcription factor T-bet, rather than inhibition of the Th17-associated transcription factor ROR\u03b3t, is sufficient to drive switching of Th17 cells towards an IFN-\u03b3-producing phenotype. Our approach can be applied to all SBML-encoded models to predict the effect that intervention strategies have on system behaviour. ASPASIA, released under the Artistic License (2.0), can be downloaded from http://www.york.ac.uk/ycil/software.", "which deposits ?", "ASPASIA", 468.0, 475.0], ["Imaging and analyzing the locomotion behavior of small animals such as Drosophila larvae or C. elegans worms has become an integral subject of biological research. In the past we have introduced FIM, a novel imaging system capable of extracting high-contrast images. This system, in combination with the associated tracking software FIMTrack, is already used by many groups all over the world. However, so far there has not been an in-depth discussion of the technical aspects. Here we elaborate on the implementation details of FIMTrack and give an in-depth explanation of the algorithms used. Among other features, the software offers several tracking strategies to cover a wide range of different model organisms, locomotion types, and camera properties.
Furthermore, the software facilitates stimuli-based analysis in combination with built-in manual tracking and correction functionalities. All features are integrated in an easy-to-use graphical user interface. To demonstrate the potential of FIMTrack, we provide an evaluation of its accuracy using manually labeled data. The source code is available under the GNU GPLv3 at https://github.com/i-git/FIMTrack and pre-compiled binaries for Windows and Mac are available at http://fim.uni-muenster.de.", "which deposits ?", "pre-compiled binaries", 1157.0, 1178.0], ["The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the \u2018missing heritability,\u2019 which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, thus significantly increasing computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS), for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculations and main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, thus significantly improving the computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned identical results at each analysis step to the original prototype R code, but the computational time was reduced from more than one month to about five minutes. These advances will help overcome the bottleneck frequently encountered in genome-wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publicly available at http://bioinfo.noble.org/PolyGenic_QTL/.", "which deposits ?", "PEPIS", 672.0, 677.0], ["Since its identification in 1983, HIV-1 has been the focus of a research effort unprecedented in scope and difficulty, whose ultimate goals \u2013 a cure and a vaccine \u2013 remain elusive. One of the fundamental challenges in accomplishing these goals is the tremendous genetic variability of the virus, with some genes differing at as many as 40% of nucleotide positions among circulating strains. Because of this, the genetic bases of many viral phenotypes, most notably the susceptibility to neutralization by a particular antibody, are difficult to identify computationally. Drawing upon open-source general-purpose machine learning algorithms and libraries, we have developed a software package IDEPI (IDentify EPItopes) for learning genotype-to-phenotype predictive models from sequences with known phenotypes.
IDEPI can apply learned models to classify sequences of unknown phenotypes, and also identify specific sequence features which contribute to a particular phenotype. We demonstrate that IDEPI achieves performance similar to or better than that of previously published approaches on four well-studied problems: finding the epitopes of broadly neutralizing antibodies (bNab), determining coreceptor tropism of the virus, identifying compartment-specific genetic signatures of the virus, and deducing drug-resistance associated mutations. The cross-platform Python source code (released under the GPL 3.0 license), documentation, issue tracking, and a pre-configured virtual machine for IDEPI can be found at https://github.com/veg/idepi.", "which deposits ?", "IDEPI", 692.0, 697.0], ["Characterization of Human Endogenous Retrovirus (HERV) expression within the transcriptomic landscape using RNA-seq is complicated by uncertainty in fragment assignment because of sequence similarity. We present Telescope, a computational software tool that provides accurate estimation of transposable element expression (retrotranscriptome) resolved to specific genomic locations. Telescope directly addresses uncertainty in fragment assignment by reassigning ambiguously mapped fragments to the most probable source transcript as determined within a Bayesian statistical model. We demonstrate the utility of our approach through single locus analysis of HERV expression in 13 ENCODE cell types. When examined at this resolution, we find that the magnitude and breadth of the retrotranscriptome can be vastly different among cell types. Furthermore, our approach is robust to differences in sequencing technology and demonstrates that the retrotranscriptome has potential to be used for cell type identification. We compared our tool with other approaches for quantifying transposable element (TE) expression, and found that Telescope has the greatest resolution, as it estimates expression at specific TE insertions rather than at the TE subfamily level. Telescope performs highly accurate quantification of the retrotranscriptomic landscape in RNA-seq experiments, revealing a differential complexity in the transposable element biology of complex systems not previously observed. Telescope is available at https://github.com/mlbendall/telescope.", "which deposits ?", "Telescope", 212.0, 221.0], ["Unique molecular identifiers (UMIs) show outstanding performance in targeted high-throughput resequencing, being the most promising approach for the accurate identification of rare variants in complex DNA samples. This approach has application in multiple areas, including cancer diagnostics, thus demanding dedicated software and algorithms. Here we introduce MAGERI, a computational pipeline that efficiently handles all caveats of UMI-based analysis to obtain high-fidelity mutation profiles and call ultra-rare variants. Using an extensive set of benchmark datasets including gold-standard biological samples with known variant frequencies, cell-free DNA from tumor patient blood samples and publicly available UMI-encoded datasets we demonstrate that our method is both robust and efficient in calling rare variants. The versatility of our software is supported by accurate results obtained for both tumor DNA and viral RNA samples in datasets prepared using three different UMI-based protocols.", "which deposits ?", "MAGERI", 361.0, 367.0], ["PathVisio is a commonly used pathway editor, visualization and analysis software. 
Biological pathways have been used by biologists for many years to describe the detailed steps in biological processes. Those powerful, visual representations help researchers to better understand, share and discuss knowledge. Since the first publication of PathVisio in 2008, the original paper has been cited more than 170 times and PathVisio has been used in many different biological studies. As an online editor, PathVisio is also integrated into the community-curated pathway database WikiPathways. Here we present the third version of PathVisio with the newest additions and improvements of the application. The core features of PathVisio are pathway drawing, advanced data visualization and pathway statistics. Additionally, PathVisio 3 introduces a new, powerful extension system that allows other developers to contribute additional functionality in the form of plugins without changing the core application. PathVisio can be downloaded from http://www.pathvisio.org and in 2014 PathVisio 3 was downloaded over 5,500 times. There are already more than 15 plugins available in the central plugin repository. PathVisio is a freely available, open-source tool published under the Apache 2.0 license (http://www.apache.org/licenses/LICENSE-2.0). It is implemented in Java and thus runs on all major operating systems. The code repository is available at http://svn.bigcat.unimaas.nl/pathvisio. The support mailing list for users is available on https://groups.google.com/forum/#!forum/wikipathways-discuss and for developers on https://groups.google.com/forum/#!forum/wikipathways-devel.", "which deposits ?", "PathVisio", 0.0, 9.0], ["Nonribosomally and ribosomally synthesized bioactive peptides constitute a source of molecules of great biomedical importance, including antibiotics such as penicillin, immunosuppressants such as cyclosporine, and cytostatics such as bleomycin. Recently, an innovative mass-spectrometry-based strategy, peptidogenomics, has been pioneered to effectively mine microbial strains for novel peptidic metabolites. Even though mass-spectrometric peptide detection can be performed quite fast, true high-throughput natural product discovery approaches have still been limited by the inability to rapidly match the identified tandem mass spectra to the gene clusters responsible for the biosynthesis of the corresponding compounds. With Pep2Path, we introduce a software package to fully automate the peptidogenomics approach through the rapid Bayesian probabilistic matching of mass spectra to their corresponding biosynthetic gene clusters. Detailed benchmarking of the method shows that the approach is powerful enough to correctly identify gene clusters even in data sets that consist of hundreds of genomes, which also makes it possible to match compounds from unsequenced organisms to closely related biosynthetic gene clusters in other genomes. Applying Pep2Path to a data set of compounds without known biosynthesis routes, we were able to identify candidate gene clusters for the biosynthesis of five important compounds. Notably, one of these clusters was detected in a genome from a different subphylum of Proteobacteria than that in which the molecule had first been identified. All in all, our approach paves the way towards high-throughput discovery of novel peptidic natural products.
Pep2Path is freely available from http://pep2path.sourceforge.net/, implemented in Python, licensed under the GNU General Public License v3 and supported on MS Windows, Linux and Mac OS X.", "which deposits ?", "Pep2Path", 729.0, 737.0], ["The method of phylogenetic ancestral sequence reconstruction is a powerful approach for studying evolutionary relationships among protein sequence, structure, and function. In particular, this approach allows investigators to (1) reconstruct and \u201cresurrect\u201d (that is, synthesize in vivo or in vitro) extinct proteins to study how they differ from modern proteins, (2) identify key amino acid changes that, over evolutionary timescales, have altered the function of the protein, and (3) order historical events in the evolution of protein function. Widespread use of this approach has been slow among molecular biologists, in part because the methods require significant computational expertise. Here we present PhyloBot, a web-based software tool that makes ancestral sequence reconstruction easy. Designed for non-experts, it integrates all the necessary software into a single user interface. Additionally, PhyloBot provides interactive tools to explore evolutionary trajectories between ancestors, enabling the rapid generation of hypotheses that can be tested using genetic or biochemical approaches. Early versions of this software were used in previous studies to discover genetic mechanisms underlying the functions of diverse protein families, including V-ATPase ion pumps, DNA-binding transcription regulators, and serine/threonine protein kinases. PhyloBot runs in a web browser, and is available at the following URL: http://www.phylobot.com. The software is implemented in Python using the Django web framework, and runs on elastic cloud computing resources from Amazon Web Services. Users can create and submit jobs on our free server (at the URL listed above), or use our open-source code to launch their own PhyloBot server.", "which deposits ?", "PhyloBot", 711.0, 719.0], ["PhyloGibbs, our recent Gibbs-sampling motif-finder, takes phylogeny into account in detecting binding sites for transcription factors in DNA and assigns posterior probabilities to its predictions obtained by sampling the entire configuration space. Here, in an extension called PhyloGibbs-MP, we widen the scope of the program, addressing two major problems in computational regulatory genomics. First, PhyloGibbs-MP can localise predictions to small, undetermined regions of a large input sequence, thus effectively predicting cis-regulatory modules (CRMs) ab initio while simultaneously predicting binding sites in those modules\u2014tasks that are usually done by two separate programs. PhyloGibbs-MP's performance at such ab initio CRM prediction is comparable with or superior to dedicated module-prediction software that uses prior knowledge of previously characterised transcription factors. Second, PhyloGibbs-MP can predict motifs that differentiate between two (or more) different groups of regulatory regions, that is, motifs that occur preferentially in one group over the others. While other \u201cdiscriminative motif-finders\u201d have been published in the literature, PhyloGibbs-MP's implementation has some unique features and flexibility.
Benchmarks on synthetic and actual genomic data show that this algorithm is successful at enhancing predictions of differentiating sites and suppressing predictions of common sites and compares with or outperforms other discriminative motif-finders on actual genomic data. Additional enhancements include significant performance and speed improvements, the ability to use \u201cinformative priors\u201d on known transcription factors, and the ability to output annotations in a format that can be visualised with the Generic Genome Browser. In stand-alone motif-finding, PhyloGibbs-MP remains competitive, outperforming PhyloGibbs-1.0 and other programs on benchmark data.", "which uses ?", "PhyloGibbs", 0.0, 10.0], ["Background Dog rabies annually causes 24,000\u201370,000 deaths globally. We built a spreadsheet tool, RabiesEcon, to aid public health officials to estimate the cost-effectiveness of dog rabies vaccination programs in East Africa. Methods RabiesEcon uses a mathematical model of dog-dog and dog-human rabies transmission to estimate dog rabies cases averted, the cost per human rabies death averted and cost per year of life gained (YLG) due to dog vaccination programs (US 2015 dollars). We used an East African human population of 1 million (approximately 2/3 living in an urban setting, 1/3 rural). We considered, using data from the literature, three vaccination options: no vaccination, annual vaccination of 50% of dogs, and 20% of dogs vaccinated semi-annually. We assessed 2 transmission scenarios: low (1.2 dogs infected per infectious dog) and high (1.7 dogs infected). We also examined the impact of annually vaccinating 70% of all dogs (World Health Organization recommendation for dog rabies elimination). Results Without dog vaccination, over 10 years there would be a total of approximately 44,000\u201365,000 rabid dogs and 2,100\u20132,900 human deaths. Annually vaccinating 50% of dogs results in 10-year reductions of 97% and 75% in rabid dogs (low and high transmission scenarios, respectively), approximately 2,000\u20131,600 human deaths averted, and an undiscounted cost-effectiveness of $451-$385 per life saved. Semi-annual vaccination of 20% of dogs results in 10-year reductions of 94% and 78% in rabid dogs, approximately 2,000\u20131,900 human deaths averted, and a cost of $404-$305 per life saved. In the low transmission scenario, vaccinating either 50% or 70% of dogs eliminated dog rabies. Results were most sensitive to dog birth rate and the initial rate of dog-to-dog transmission (Ro). Conclusions Dog rabies vaccination programs can control, and potentially eliminate, dog rabies. The frequency and coverage of vaccination programs, along with the level of dog rabies transmission, can affect the cost-effectiveness of such programs. RabiesEcon can aid both the planning and assessment of dog rabies vaccination programs.", "which uses ?", "RabiesEcon", 98.0, 108.0], ["Objective: We consider challenges in accurate segmentation of heart sound signals recorded under noisy clinical environments for subsequent classification of pathological events. Existing state-of-the-art solutions to heart sound segmentation use probabilistic models such as hidden Markov models (HMMs), which, however, are limited by their observation independence assumption and rely on pre-extraction of noise-robust features.
Methods: We propose a Markov-switching autoregressive (MSAR) process to model the raw heart sound signals directly, which allows efficient segmentation of the cyclical heart sound states according to the distinct dependence structure in each state. To enhance robustness, we extend the MSAR model to a switching linear dynamic system (SLDS) that jointly models both the switching AR dynamics of underlying heart sound signals and the noise effects. We introduce a novel algorithm via fusion of the switching Kalman filter and the duration-dependent Viterbi algorithm, which incorporates the duration of heart sound states to improve state decoding. Results: Evaluated on the Physionet/CinC Challenge 2016 dataset, the proposed MSAR-SLDS approach significantly outperforms the hidden semi-Markov model (HSMM) in heart sound segmentation based on raw signals and is comparable to a feature-based HSMM. The segmented labels were then used to train a Gaussian-mixture HMM classifier for identification of abnormal beats, achieving high average precision of 86.1% on the same dataset including very noisy recordings. Conclusion: The proposed approach shows noticeable performance in heart sound segmentation and classification on a large noisy dataset. Significance: It is potentially useful in developing automated heart monitoring systems for pre-screening of heart pathologies.", "which uses ?", "Switching linear dynamic system (SLDS)", NaN, NaN], ["The rapidly expanding body of available genomic and protein structural data provides a rich resource for understanding protein dynamics with biomolecular simulation. While computational infrastructure has grown rapidly, simulations on an omics scale are not yet widespread, primarily because software infrastructure to enable simulations at this scale has not kept pace. It should now be possible to study protein dynamics across entire (super)families, exploiting both available structural biology data and conformational similarities across homologous proteins. Here, we present a new tool for enabling high-throughput simulation in the genomics era. Ensembler takes any set of sequences - from a single sequence to an entire superfamily - and shepherds them through various stages of modeling and refinement to produce simulation-ready structures. This includes comparative modeling to all relevant PDB structures (which may span multiple conformational states of interest), reconstruction of missing loops, addition of missing atoms, culling of nearly identical structures, assignment of appropriate protonation states, solvation in explicit solvent, and refinement and filtering with molecular simulation to ensure stable simulation. The output of this pipeline is an ensemble of structures ready for subsequent molecular simulations using computer clusters, supercomputers, or distributed computing projects like Folding@home. Ensembler thus automates much of the time-consuming process of preparing protein models suitable for simulation, while allowing scalability up to entire superfamilies. A particular advantage of this approach can be found in the construction of kinetic models of conformational dynamics - such as Markov state models (MSMs) - which benefit from a diverse array of initial configurations that span the accessible conformational states to aid sampling. We demonstrate the power of this approach by constructing models for all catalytic domains in the human tyrosine kinase family, using all available kinase catalytic domain structures from any organism as structural templates.
Ensembler is free and open source software licensed under the GNU General Public License (GPL) v2. It is compatible with Linux and OS X. The latest release can be installed via the conda package manager, and the latest source can be downloaded from https://github.com/choderalab/ensembler.", "which uses ?", "GitHub", 2366.0, 2372.0], ["Identification of single nucleotide polymorphisms (SNPs) and mutations is important for the discovery of genetic predisposition to complex diseases. PCR resequencing is the method of choice for de novo SNP discovery. However, manual curation of putative SNPs has been a major bottleneck in the application of this method to high-throughput screening. Therefore, it is critical to develop a more sensitive and accurate computational method for automated SNP detection. We developed a software tool, SNPdetector, for automated identification of SNPs and mutations in fluorescence-based resequencing reads. SNPdetector was designed to model the process of human visual inspection and has a very low false positive and false negative rate. We demonstrate the superior performance of SNPdetector in SNP and mutation analysis by comparing its results with those derived by human inspection, PolyPhred (a popular SNP detection tool), and independent genotype assays in three large-scale investigations. The first study identified and validated inter- and intra-subspecies variations in 4,650 traces of 25 inbred mouse strains that belong to either the Mus musculus species or the M. spretus species. Unexpected heterozygosity in CAST/Ei strain was observed in two out of 1,167 mouse SNPs. The second study identified 11,241 candidate SNPs in five ENCODE regions of the human genome covering 2.5 Mb of genomic sequence. Approximately 50% of the candidate SNPs were selected for experimental genotyping; the validation rate exceeded 95%. The third study detected ENU-induced mutations (at 0.04% allele frequency) in 64,896 traces of 1,236 zebra fish. Our analysis of three large and diverse test datasets demonstrated that SNPdetector is an effective tool for genome-scale research and for large-sample clinical studies. SNPdetector runs on Unix/Linux platform and is available publicly (http://lpg.nci.nih.gov).", "which uses ?", "SNPdetector", 497.0, 508.0], ["The quantification of cell shape, cell migration, and cell rearrangements is important for addressing classical questions in developmental biology such as patterning and tissue morphogenesis. Time-lapse microscopic imaging of transgenic embryos expressing fluorescent reporters is the method of choice for tracking morphogenetic changes and establishing cell lineages and fate maps in vivo. However, the manual steps involved in curating thousands of putative cell segmentations have been a major bottleneck in the application of these technologies, especially for cell membranes. Segmentation of cell membranes, while more difficult than nuclear segmentation, is necessary for quantifying the relations between changes in cell morphology and morphogenesis. We present a novel and fully automated method to first reconstruct membrane signals and then segment out cells from 3D membrane images even in dense tissues. The approach has three stages: 1) detection of local membrane planes, 2) voting to fill structural gaps, and 3) region segmentation.
We demonstrate the superior performance of the algorithms quantitatively on time-lapse confocal and two-photon images of zebrafish neuroectoderm and paraxial mesoderm by comparing their results with those derived from human inspection. We also compared against synthetic microscopic images generated by simulating the process of imaging with fluorescent reporters under varying conditions of noise. Both the over-segmentation and under-segmentation percentages of our method are around 5%. The volume overlap of individual cells, compared to expert manual segmentation, is consistently over 84%. By using our software (ACME) to study somite formation, we were able to segment touching cells with high accuracy and reliably quantify changes in morphogenetic parameters such as cell shape and size, and the arrangement of epithelial and mesenchymal cells. Our software has been developed and tested on Windows, Mac, and Linux platforms and is available publicly under an open source BSD license (https://github.com/krm15/ACME).", "which uses ?", "Mac", 1950.0, 1953.0], ["Integrated information theory provides a mathematical framework to fully characterize the cause-effect structure of a physical system. Here, we introduce PyPhi, a Python software package that implements this framework for causal analysis and unfolds the full cause-effect structure of discrete dynamical systems of binary elements. The software allows users to easily study these structures, serves as an up-to-date reference implementation of the formalisms of integrated information theory, and has been applied in research on complexity, emergence, and certain biological questions. We first provide an overview of the main algorithm and demonstrate PyPhi\u2019s functionality in the course of analyzing an example system, and then describe details of the algorithm\u2019s design and implementation. PyPhi can be installed with Python\u2019s package manager via the command \u2018pip install pyphi\u2019 on Linux and macOS systems equipped with Python 3.4 or higher. PyPhi is open-source and licensed under the GPLv3; the source code is hosted on GitHub at https://github.com/wmayner/pyphi. Comprehensive and continually-updated documentation is available at https://pyphi.readthedocs.io. The pyphi-users mailing list can be joined at https://groups.google.com/forum/#!forum/pyphi-users. A web-based graphical interface to the software is available at http://integratedinformationtheory.org/calculate.html.", "which uses ?", "GitHub", 1025.0, 1031.0], ["Since its identification in 1983, HIV-1 has been the focus of a research effort unprecedented in scope and difficulty, whose ultimate goals \u2013 a cure and a vaccine \u2013 remain elusive. One of the fundamental challenges in accomplishing these goals is the tremendous genetic variability of the virus, with some genes differing at as many as 40% of nucleotide positions among circulating strains. Because of this, the genetic bases of many viral phenotypes, most notably the susceptibility to neutralization by a particular antibody, are difficult to identify computationally. Drawing upon open-source general-purpose machine learning algorithms and libraries, we have developed a software package IDEPI (IDentify EPItopes) for learning genotype-to-phenotype predictive models from sequences with known phenotypes. IDEPI can apply learned models to classify sequences of unknown phenotypes, and also identify specific sequence features which contribute to a particular phenotype.
We demonstrate that IDEPI achieves performance similar to or better than that of previously published approaches on four well-studied problems: finding the epitopes of broadly neutralizing antibodies (bNab), determining coreceptor tropism of the virus, identifying compartment-specific genetic signatures of the virus, and deducing drug-resistance associated mutations. The cross-platform Python source code (released under the GPL 3.0 license), documentation, issue tracking, and a pre-configured virtual machine for IDEPI can be found at https://github.com/veg/idepi.", "which uses ?", "Python", 1363.0, 1369.0], ["Background Depression during pregnancy is a major health problem because it is prevalent and chronic, and its impact on birth outcome and child health is serious. Several psychosocial and obstetric factors have been identified as predictors. Evidence on the prevalence and predictors of antenatal depression is very limited in Ethiopia. This study aims to determine the prevalence of, and factors associated with, antenatal depression. Methods A community-based cross-sectional study was conducted among 527 pregnant women recruited using a cluster sampling method. Data were collected by face-to-face interviews on socio-demographic, obstetric, and psychosocial characteristics. Depression symptoms were assessed using the Edinburgh Postnatal Depression Scale (EPDS). The List of Threatening Experiences questionnaire (LTE-Q) and the Oslo Social Support Scale (OSS-3) were used to assess stressful events and social support, respectively. Data were entered into Epi-info and analyzed using SPSS-20. Descriptive and logistic regression analyses were carried out. Results The prevalence of antenatal depression was found to be 11.8%. Having debt (OR = 2.79, 95% CI = 1.33, 5.85), unplanned pregnancy (OR = 2.39, 95% CI = 1.20, 4.76), history of stillbirth (OR = 3.97, 95% CI = 1.67, 9.41), history of abortion (OR = 2.57, 95% CI = 1.005, 6.61), being in the third trimester of pregnancy (OR = 1.70, 95% CI = 1.07, 2.72), presence of a complication in the current pregnancy (OR = 3.29, 95% CI = 1.66, 6.53), and previous history of depression (OR = 3.48, 95% CI = 1.71, 7.06) were factors significantly associated with antenatal depression. Conclusion The prevalence of antenatal depression was high, especially in the third trimester. Poverty, unmet reproductive health needs, and obstetric complications are the main determinants of antenatal depression. For early detection and appropriate intervention, screening for depression during the routine antenatal care should be promoted.", "which uses ?", "SPSS", 977.0, 981.0], ["Epigenetic regulation consists of a multitude of different modifications that determine active and inactive states of chromatin. Conditions such as cell differentiation or exposure to environmental stress require concerted changes in gene expression. To interpret epigenomics data, a spectrum of different interconnected datasets is needed, ranging from the genome sequence and positions of histones, together with their modifications and variants, to the transcriptional output of genomic regions. Here we present a tool, Podbat (Positioning database and analysis tool), that incorporates data from various sources and allows detailed dissection of the entire range of chromatin modifications simultaneously. Podbat can be used to analyze, visualize, store and share epigenomics data.
Among other functions, Podbat allows data-driven determination of genome regions of differential protein occupancy or RNA expression using Hidden Markov Models. Comparisons between datasets are facilitated to enable the study of the comprehensive chromatin modification system simultaneously, irrespective of data-generating technique. Any organism with a sequenced genome can be accommodated. We exemplify the power of Podbat by reanalyzing all genome-wide data published to date for the histone variant H2A.Z in fission yeast together with other histone marks and also phenotypic response data from several sources. This meta-analysis led to the unexpected finding of H2A.Z incorporation in the coding regions of genes encoding proteins involved in the regulation of meiosis and genotoxic stress responses. This incorporation was partly independent of the H2A.Z-incorporating remodeller Swr1. We verified an Swr1-independent role for H2A.Z following genotoxic stress in vivo. Podbat is open source software freely downloadable from www.podbat.org, distributed under the GNU LGPL license. User manuals, test data and instructions are available at the website, as well as a repository for third party\u2013developed plug-in modules. Podbat requires Java version 1.6 or higher.", "which uses ?", "Java", 2030.0, 2034.0], ["Almost all current dependency parsers classify based on millions of sparse indicator features. Not only do these features generalize poorly, but the cost of feature computation restricts parsing speed significantly. In this work, we propose a novel way of learning a neural network classifier for use in a greedy, transition-based dependency parser. Because this classifier learns and uses just a small number of dense features, it can work very fast, while achieving an about 2% improvement in unlabeled and labeled attachment scores on both English and Chinese datasets. Concretely, our parser is able to parse more than 1000 sentences per second at 92.2% unlabeled attachment score on the English Penn Treebank.", "which uses ?", "a small number of dense features", 395.0, 427.0], ["Despite the growing number of immune repertoire sequencing studies, the field still lacks software for analysis and comprehension of this high-dimensional data. Here we report VDJtools, a complementary software suite that solves a wide range of T cell receptor (TCR) repertoire post-analysis tasks, provides a detailed tabular output and publication-ready graphics, and is built on top of a flexible API. Using TCR datasets for a large cohort of unrelated healthy donors, twins, and multiple sclerosis patients, we demonstrate that VDJtools greatly facilitates the analysis and leads to sound biological conclusions. VDJtools software and documentation are available at https://github.com/mikessh/vdjtools.", "which uses ?", "GitHub", 678.0, 684.0], ["In order to access and filter content of life-science databases, full text search is a widely applied query interface. But its high flexibility and intuitiveness are paid for with potentially imprecise and incomplete query results. To reduce this drawback, query assistance systems suggest those combinations of keywords with the highest potential to match most of the relevant data records. Widespread approaches are syntactic query corrections that avoid misspelling and support expansion of words by suffixes and prefixes. Synonym expansion approaches apply thesauri, ontologies, and query logs. All need laborious curation and maintenance.
Furthermore, access to query logs is in general restricted. Approaches that infer related queries from a user\u2019s profile, such as research field, geographic location, co-authorship, and affiliation, require user registration and a publicly accessible profile, which contradicts privacy concerns. To overcome these drawbacks, we implemented LAILAPS-QSM, a machine learning approach that reconstructs possible linguistic contexts of a given keyword query. The context is inferred from the text records stored in the databases to be queried, or extracted for general-purpose query suggestion from PubMed abstracts and UniProt data. The supplied tool suite enables the pre-processing of these text records and the further computation of customized distributed word vectors. The latter are used to suggest alternative keyword queries. An evaluation of the query suggestion quality was performed for plant science use cases. Locally present experts enabled a cost-efficient quality assessment in the categories trait, biological entity, taxonomy, affiliation, and metabolic function, which was performed using ontology term similarities. The LAILAPS-QSM mean information content similarity for 15 representative queries is 0.70, whereas 34% have a score above 0.80. In comparison, the information content similarity for query suggestions made by human experts is 0.90. The software is available either as a tool set to build and train dedicated query suggestion services or as an already trained, general-purpose RESTful web service. The service uses open interfaces to be seamlessly embeddable into database frontends. The Java implementation uses highly optimized data structures and streamlined code to provide fast and scalable responses for web service calls. The source code of LAILAPS-QSM is available under the GNU General Public License version 2 in a Bitbucket Git repository: https://bitbucket.org/ipk_bit_team/bioescorte-suggestion", "which uses ?", "Bitbucket", 2486.0, 2495.0], ["Background Volunteers in phase I/II HIV vaccine trials are assumed to be at low risk of acquiring HIV infection and are expected to have normal lives in the community. However, during participation in the trials, volunteers may encounter social harm and changes in their sexual behaviours. The current study aimed to assess the persistence of social harm and changes in sexual practices over time among phase I/II HIV vaccine immunogenicity (HIVIS03) trial volunteers in Dar es Salaam, Tanzania. Methods and Results A descriptive prospective cohort study was conducted among 33 out of 60 volunteers of the HIVIS03 trial in Dar es Salaam, Tanzania, who had received three HIV-1 DNA injections boosted with two HIV-1 MVA doses. A structured interview was administered to collect data. Analysis was carried out using SPSS, and McNemar\u2019s chi-square (\u03c72) was used to test within-subject associations. Participants reported experiencing negative comments from their colleagues about the trial, but such comments were less severe during the second follow-up visits (\u03c72 = 8.72; P<0.001). Most of the comments were associated with discrimination (\u03c72 = 26.72; P<0.001), stigma (\u03c72 = 6.06; P<0.05), and mistrust towards the HIV vaccine trial (\u03c72 = 4.9; P<0.05). Having a regular sexual partner other than spouse or cohabitant declined over the two follow-up periods (\u03c72 = 4.45; P<0.05).
Conclusion Participants in the phase I/II HIV vaccine trial were likely to face negative comments from relatives and colleagues after the end of the trial, but those comments decreased over time. In this study, the inherent sexual practice of having extra sexual partners other than a spouse declined over time. Therefore, prolonged counselling and support appear important to minimize risky sexual behaviour among volunteers after participation in HIV vaccine trials.", "which uses ?", "SPSS", 805.0, 809.0], ["Abstract Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal \u201cvirtual laboratory\u201d for such multicellular systems simulates both the biochemical microenvironment (the \u201cstage\u201d) and many mechanically and biochemically interacting cells (the \u201cplayers\u201d upon the stage). PhysiCell\u2014physics-based multicellular simulator\u2014is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility \u201cout of the box.\u201d The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations up to 10^5-10^6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a \u201ccellular cargo delivery\u201d system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net. Author Summary This paper introduces PhysiCell: an open source, agent-based modeling framework for 3-D multicellular simulations. It includes a standard library of sub-models for cell fluid and solid volume changes, cycle progression, apoptosis, necrosis, mechanics, and motility. PhysiCell is directly coupled to a biotransport solver to simulate many diffusing substrates and cell-secreted signals. Each cell can dynamically update its phenotype based on its microenvironmental conditions. Users can customize or replace the included sub-models. PhysiCell runs on a variety of platforms (Linux, OSX, and Windows) with few software dependencies. Its computational cost scales linearly in the number of cells.
It is feasible to simulate 500,000 cells on quad-core desktop workstations, and millions of cells on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on hanging drop tumor spheroids (HDS) and ductal carcinoma in situ (DCIS) of the breast. We demonstrate contact- and chemokine-based interactions among multiple cell types with examples in synthetic multicellular bioengineering, cancer heterogeneity, and cancer immunology. We developed PhysiCell to help the scientific community tackle multicellular systems biology problems involving many interacting cells in multi-substrate microenvironments. PhysiCell is also an independent, cross-platform codebase for replicating results from other simulators.", "which uses ?", "OSX", 3119.0, 3122.0], ["Objective: We consider challenges in accurate segmentation of heart sound signals recorded under noisy clinical environments for subsequent classification of pathological events. Existing state-of-the-art solutions to heart sound segmentation use probabilistic models such as hidden Markov models (HMMs), which, however, are limited by their observation independence assumption and rely on pre-extraction of noise-robust features. Methods: We propose a Markov-switching autoregressive (MSAR) process to model the raw heart sound signals directly, which allows efficient segmentation of the cyclical heart sound states according to the distinct dependence structure in each state. To enhance robustness, we extend the MSAR model to a switching linear dynamic system (SLDS) that jointly models both the switching AR dynamics of underlying heart sound signals and the noise effects. We introduce a novel algorithm via fusion of the switching Kalman filter and the duration-dependent Viterbi algorithm, which incorporates the duration of heart sound states to improve state decoding. Results: Evaluated on the Physionet/CinC Challenge 2016 dataset, the proposed MSAR-SLDS approach significantly outperforms the hidden semi-Markov model (HSMM) in heart sound segmentation based on raw signals and is comparable to a feature-based HSMM. The segmented labels were then used to train a Gaussian-mixture HMM classifier for identification of abnormal beats, achieving high average precision of 86.1% on the same dataset including very noisy recordings. Conclusion: The proposed approach shows noticeable performance in heart sound segmentation and classification on a large noisy dataset. Significance: It is potentially useful in developing automated heart monitoring systems for pre-screening of heart pathologies.", "which uses ?", "Markov-switching autoregressive (MSAR)", NaN, NaN], ["Nonribosomally and ribosomally synthesized bioactive peptides constitute a source of molecules of great biomedical importance, including antibiotics such as penicillin, immunosuppressants such as cyclosporine, and cytostatics such as bleomycin. Recently, an innovative mass-spectrometry-based strategy, peptidogenomics, has been pioneered to effectively mine microbial strains for novel peptidic metabolites. Even though mass-spectrometric peptide detection can be performed quite fast, true high-throughput natural product discovery approaches have still been limited by the inability to rapidly match the identified tandem mass spectra to the gene clusters responsible for the biosynthesis of the corresponding compounds.
With Pep2Path, we introduce a software package to fully automate the peptidogenomics approach through the rapid Bayesian probabilistic matching of mass spectra to their corresponding biosynthetic gene clusters. Detailed benchmarking of the method shows that the approach is powerful enough to correctly identify gene clusters even in data sets that consist of hundreds of genomes, which also makes it possible to match compounds from unsequenced organisms to closely related biosynthetic gene clusters in other genomes. Applying Pep2Path to a data set of compounds without known biosynthesis routes, we were able to identify candidate gene clusters for the biosynthesis of five important compounds. Notably, one of these clusters was detected in a genome from a different subphylum of Proteobacteria than that in which the molecule had first been identified. All in all, our approach paves the way towards high-throughput discovery of novel peptidic natural products. Pep2Path is freely available from http://pep2path.sourceforge.net/, implemented in Python, licensed under the GNU General Public License v3 and supported on MS Windows, Linux and Mac OS X.", "which uses ?", "Mac OS X", 1871.0, 1879.0], ["I introduce an open-source R package \u2018dcGOR\u2019 to provide the bioinformatics community with an easy way to analyse ontologies and protein domain annotations, particularly those in the dcGO database. The dcGO is a comprehensive resource for protein domain annotations using a panel of ontologies including Gene Ontology. Although increasing in popularity, this database needs statistical and graphical support to meet its full potential. Moreover, there are no bioinformatics tools specifically designed for domain ontology analysis. As an add-on package built in the R software environment, dcGOR offers a basic infrastructure with great flexibility and functionality. It implements new data structures to represent domains, ontologies, annotations, and all analytical outputs as well. For each ontology, it provides various mining facilities, including: (i) domain-based enrichment analysis and visualisation; (ii) construction of a domain (semantic similarity) network according to ontology annotations; and (iii) significance analysis for estimating a contact (statistical significance) network. To reduce runtime, most analyses support high-performance parallel computing. Taking as inputs a list of protein domains of interest, the package is able to easily carry out in-depth analyses in terms of functional, phenotypic and disease relevance, and network-level understanding. More importantly, dcGOR is designed to allow users to import and analyse their own ontologies and annotations on domains (taken from SCOP, Pfam and InterPro) and RNAs (from Rfam) as well. The package is freely available at CRAN for easy installation, and also at GitHub for version control. The dedicated website with reproducible demos can be found at http://supfam.org/dcGOR.", "which uses ?", "GitHub", 1640.0, 1646.0], ["Surveys of 16S rDNA sequences from the honey bee, Apis mellifera, have revealed the presence of eight distinctive bacterial phylotypes in intestinal tracts of adult worker bees. Because previous studies have been limited to relatively few sequences from samples pooled from multiple hosts, the extent of variation in this microbiota among individuals within and between colonies and locations has been unclear.
We surveyed the gut microbiota of 40 individual workers from two sites, Arizona and Maryland USA, sampling four colonies per site. Universal primers were used to amplify regions of 16S ribosomal RNA genes, and amplicons were sequenced using 454 pyrotag methods, enabling analysis of about 330,000 bacterial reads. Over 99% of these sequences belonged to clusters for which the first blastn hits in GenBank were members of the known bee phylotypes. Four phylotypes, one within Gammaproteobacteria (corresponding to \u201cCandidatus Gilliamella apicola\u201d), one within Betaproteobacteria (\u201cCandidatus Snodgrassella alvi\u201d), and two within Lactobacillus, were present in every bee, though their frequencies varied. The same typical bacterial phylotypes were present in all colonies and at both sites. Community profiles differed significantly among colonies and between sites, mostly due to the presence in some Arizona colonies of two species of Enterobacteriaceae not retrieved previously from bees. Analysis of Sanger sequences of rRNA of the Snodgrassella and Gilliamella phylotypes revealed that single bees contain numerous distinct strains of each phylotype. Strains showed some differentiation between localities, especially for the Snodgrassella phylotype.", "which uses ?", "blastn", 794.0, 800.0], ["Identification of single nucleotide polymorphisms (SNPs) and mutations is important for the discovery of genetic predisposition to complex diseases. PCR resequencing is the method of choice for de novo SNP discovery. However, manual curation of putative SNPs has been a major bottleneck in the application of this method to high-throughput screening. Therefore, it is critical to develop a more sensitive and accurate computational method for automated SNP detection. We developed a software tool, SNPdetector, for automated identification of SNPs and mutations in fluorescence-based resequencing reads. SNPdetector was designed to model the process of human visual inspection and has a very low false positive and false negative rate. We demonstrate the superior performance of SNPdetector in SNP and mutation analysis by comparing its results with those derived by human inspection, PolyPhred (a popular SNP detection tool), and independent genotype assays in three large-scale investigations. The first study identified and validated inter- and intra-subspecies variations in 4,650 traces of 25 inbred mouse strains that belong to either the Mus musculus species or the M. spretus species. Unexpected heterozygosity in CAST/Ei strain was observed in two out of 1,167 mouse SNPs. The second study identified 11,241 candidate SNPs in five ENCODE regions of the human genome covering 2.5 Mb of genomic sequence. Approximately 50% of the candidate SNPs were selected for experimental genotyping; the validation rate exceeded 95%. The third study detected ENU-induced mutations (at 0.04% allele frequency) in 64,896 traces of 1,236 zebra fish. Our analysis of three large and diverse test datasets demonstrated that SNPdetector is an effective tool for genome-scale research and for large-sample clinical studies. SNPdetector runs on Unix/Linux platform and is available publicly (http://lpg.nci.nih.gov).", "which uses ?", "Linux", 1836.0, 1841.0], ["Abstract Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment.
The ideal \u201cvirtual laboratory\u201d for such multicellular systems simulates both the biochemical microenvironment (the \u201cstage\u201d) and many mechanically and biochemically interacting cells (the \u201cplayers\u201d upon the stage). PhysiCell\u2014physics-based multicellular simulator\u2014is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility \u201cout of the box.\u201d The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations up to 10^5-10^6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a \u201ccellular cargo delivery\u201d system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net. Author Summary This paper introduces PhysiCell: an open source, agent-based modeling framework for 3-D multicellular simulations. It includes a standard library of sub-models for cell fluid and solid volume changes, cycle progression, apoptosis, necrosis, mechanics, and motility. PhysiCell is directly coupled to a biotransport solver to simulate many diffusing substrates and cell-secreted signals. Each cell can dynamically update its phenotype based on its microenvironmental conditions. Users can customize or replace the included sub-models. PhysiCell runs on a variety of platforms (Linux, OSX, and Windows) with few software dependencies. Its computational cost scales linearly in the number of cells. It is feasible to simulate 500,000 cells on quad-core desktop workstations, and millions of cells on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on hanging drop tumor spheroids (HDS) and ductal carcinoma in situ (DCIS) of the breast. We demonstrate contact- and chemokine-based interactions among multiple cell types with examples in synthetic multicellular bioengineering, cancer heterogeneity, and cancer immunology. We developed PhysiCell to help the scientific community tackle multicellular systems biology problems involving many interacting cells in multi-substrate microenvironments.
PhysiCell is also an independent, cross-platform codebase for replicating results from other simulators.", "which uses ?", "C++", NaN, NaN], ["Background Ethiopia is ninth among the world high tuberculosis (TB) burden countries, pastoralists being the most affected population. However, there is no published report whether the behavior related to TB are different between pastoralist and the sedentary communities. Therefore, the main aim of this study is to assess the pastoralist community knowledge, attitude and perceived stigma towards tuberculosis and their health care seeking behavior in comparison to the neighboring sedentary communities and this may help to plan TB control interventions specifically for the pastoralist communities. Method A community-based cross-sectional survey was carried out from September 2014 to January 2015, among 337 individuals from pastoralist and 247 from the sedentary community of Kereyu district. Data were collected using structured questionnaires. Three focus group discussions were used to collect qualitative data, one with men and the other with women in the pastoralist and one with men in the sedentary groups. Data were analyzed using Statistical Software for Social Science, SPSS V 22 and STATA. Results A Lower proportion of pastoralists mentioned bacilli (bacteria) as the cause of PTB compared to the sedentary group (63.9% vs. 81.0%, p<0.01), respectively. However, witchcraft was reported as the causes of TB by a higher proportion of pastoralists than the sedentary group (53.6% vs.23.5%, p<0.01), respectively. Similarly, a lower proportion of pastoralists indicated PTB is preventable compared to the sedentary group (95.8% vs. 99.6%, p<0.01), respectively. Moreover, majority of the pastoralists mentioned that most people would reject a TB patient in their community compared to the sedentary group (39.9% vs. 8.9%, p<0.001), respectively, and the pastoralists expressed that they would be ashamed/embarrassed if they had TB 68% vs.36.4%, p<0.001), respectively. Conclusion The finding indicates that there is a lower awareness about TB, a negative attitude towards TB patients and a higher perceived stigma among pastoralists compared to their neighbor sedentary population. Strategic health communications pertinent to the pastoralists way of life should be planned and implemented to improve the awareness gap about tuberculosis.", "which uses ?", "STATA", 1101.0, 1106.0], ["Existing methods for identifying structural variants (SVs) from short read datasets are inaccurate. This complicates disease-gene identification and efforts to understand the consequences of genetic variation. In response, we have created Wham (Whole-genome Alignment Metrics) to provide a single, integrated framework for both structural variant calling and association testing, thereby bypassing many of the difficulties that currently frustrate attempts to employ SVs in association testing. Here we describe Wham, benchmark it against three other widely used SV identification tools\u2013Lumpy, Delly and SoftSearch\u2013and demonstrate Wham\u2019s ability to identify and associate SVs with phenotypes using data from humans, domestic pigeons, and vaccinia virus. Wham and all associated software are covered under the MIT License and can be freely downloaded from github (https://github.com/zeeev/wham), with documentation on a wiki (http://zeeev.github.io/wham/). 
For community support please post questions to https://www.biostars.org/.", "which uses ?", "github", 855.0, 861.0], ["Abstract Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal \u201cvirtual laboratory\u201d for such multicellular systems simulates both the biochemical microenvironment (the \u201cstage\u201d) and many mechanically and biochemically interacting cells (the \u201cplayers\u201d upon the stage). PhysiCell\u2014physics-based multicellular simulator\u2014is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility \u201cout of the box.\u201d The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations up to 10^5-10^6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a \u201ccellular cargo delivery\u201d system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net. Author Summary This paper introduces PhysiCell: an open source, agent-based modeling framework for 3-D multicellular simulations. It includes a standard library of sub-models for cell fluid and solid volume changes, cycle progression, apoptosis, necrosis, mechanics, and motility. PhysiCell is directly coupled to a biotransport solver to simulate many diffusing substrates and cell-secreted signals. Each cell can dynamically update its phenotype based on its microenvironmental conditions. Users can customize or replace the included sub-models. PhysiCell runs on a variety of platforms (Linux, OSX, and Windows) with few software dependencies. Its computational cost scales linearly in the number of cells. It is feasible to simulate 500,000 cells on quad-core desktop workstations, and millions of cells on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on hanging drop tumor spheroids (HDS) and ductal carcinoma in situ (DCIS) of the breast. 
We demonstrate contact- and chemokine-based interactions among multiple cell types with examples in synthetic multicellular bioengineering, cancer heterogeneity, and cancer immunology. We developed PhysiCell to help the scientific community tackle multicellular systems biology problems involving many interacting cells in multi-substrate microenvironments. PhysiCell is also an independent, cross-platform codebase for replicating results from other simulators.", "which uses ?", "PhysiCell", 468.0, 477.0], ["Objective There is a general agreement that physical pain serves as an alarm signal for the prevention of and reaction to physical harm. It has recently been hypothesized that \u201csocial pain,\u201d as induced by social rejection or abandonment, may rely on comparable, phylogenetically old brain structures. As plausible as this theory may sound, scientific evidence for this idea is sparse. This study therefore attempts to link both types of pain directly. We studied patients with borderline personality disorder (BPD) because BPD is characterized by opposing alterations in physical and social pain; hyposensitivity to physical pain is associated with hypersensitivity to social pain, as indicated by an enhanced rejection sensitivity. Method Twenty unmedicated female BPD patients and 20 healthy participants (HC, matched for age and education) played a virtual ball-tossing game (cyberball), with the conditions for exclusion, inclusion, and a control condition with predefined game rules. Each cyberball block was followed by a temperature stimulus (with a subjective pain intensity of 60% in half the cases). The cerebral responses were measured by functional magnetic resonance imaging. The Adult Rejection Sensitivity Questionnaire was used to assess rejection sensitivity. Results Higher temperature heat stimuli had to be applied to BPD patients relative to HCs to reach a comparable subjective experience of painfulness in both groups, which suggested a general hyposensitivity to pain in BPD patients. Social exclusion led to a subjectively reported hypersensitivity to physical pain in both groups that was accompanied by an enhanced activation in the anterior insula and the thalamus. In BPD, physical pain processing after exclusion was additionally linked to enhanced posterior insula activation. After inclusion, BPD patients showed reduced amygdala activation during pain in comparison with HC. In BPD patients, higher rejection sensitivity was associated with lower activation differences during pain processing following social exclusion and inclusion in the insula and in the amygdala. Discussion Despite the similar behavioral effects in both groups, BPD patients differed from HC in their neural processing of physical pain depending on the preceding social situation. Rejection sensitivity further modulated the impact of social exclusion on neural pain processing in BPD, but not in healthy controls.", "which uses ?", "PROCESS", NaN, NaN], ["Abstract Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal \u201cvirtual laboratory\u201d for such multicellular systems simulates both the biochemical microenvironment (the \u201cstage\u201d) and many mechanically and biochemically interacting cells (the \u201cplayers\u201d upon the stage). 
PhysiCell\u2014physics-based multicellular simulator\u2014is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility \u201cout of the box.\u201d The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations up to 10^5-10^6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a \u201ccellular cargo delivery\u201d system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net. Author Summary This paper introduces PhysiCell: an open source, agent-based modeling framework for 3-D multicellular simulations. It includes a standard library of sub-models for cell fluid and solid volume changes, cycle progression, apoptosis, necrosis, mechanics, and motility. PhysiCell is directly coupled to a biotransport solver to simulate many diffusing substrates and cell-secreted signals. Each cell can dynamically update its phenotype based on its microenvironmental conditions. Users can customize or replace the included sub-models. PhysiCell runs on a variety of platforms (Linux, OSX, and Windows) with few software dependencies. Its computational cost scales linearly in the number of cells. It is feasible to simulate 500,000 cells on quad-core desktop workstations, and millions of cells on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on hanging drop tumor spheroids (HDS) and ductal carcinoma in situ (DCIS) of the breast. We demonstrate contact- and chemokine-based interactions among multiple cell types with examples in synthetic multicellular bioengineering, cancer heterogeneity, and cancer immunology. We developed PhysiCell to help the scientific community tackle multicellular systems biology problems involving many interacting cells in multi-substrate microenvironments. PhysiCell is also an independent, cross-platform codebase for replicating results from other simulators.", "which uses ?", "Windows", 3128.0, 3135.0], ["CellProfiler has enabled the scientific research community to create flexible, modular image analysis pipelines since its release in 2005. 
Here, we describe CellProfiler 3.0, a new version of the software supporting both whole-volume and plane-wise analysis of three-dimensional (3D) image stacks, increasingly common in biomedical research. CellProfiler\u2019s infrastructure is greatly improved, and we provide a protocol for cloud-based, large-scale image processing. New plugins enable running pretrained deep learning models on images. Designed by and for biologists, CellProfiler equips researchers with powerful computational tools via a well-documented user interface, empowering biologists in all fields to create quantitative, reproducible image analysis workflows.", "which uses ?", "CellProfiler", 0.0, 12.0], ["Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements. These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. AVAILABILITY: QuIN\u2019s web server is available at http://quin.jax.org QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and MySQL database and the source code is available under the GPLV3 license available on GitHub: https://github.com/UcarLab/QuIN/.", "which uses ?", "Java", 1584.0, 1588.0], ["Background Most of child mortality and under nutrition in developing world were attributed to suboptimal childcare and feeding, which needs detailed investigation beyond the proximal factors. This study was conducted with the aim of assessing associations of women\u2019s autonomy and men\u2019s involvement with child anthropometric indices in cash crop livelihood areas of South West Ethiopia. Methods Multi-stage stratified sampling was used to select 749 farming households living in three coffee producing sub-districts of Jimma zone, Ethiopia. Domains of women\u2019s Autonomy were measured by a tool adapted from demographic health survey. A model for determination of paternal involvement in childcare was employed. Caring practices were assessed through the WHO Infant and young child feeding practice core indicators. Length and weight measurements were taken in duplicate using standard techniques. Data were analyzed using SPSS for windows version 21. 
A multivariable linear regression was used to predict weight for height Z-scores and length for age Z-scores after adjusting for various factors. Results The mean (sd) scores of weight for age (WAZ), height for age (HAZ), weight for height (WHZ) and BMI for age (BAZ) was -0.52(1.26), -0.73(1.43), -0.13(1.34) and -0.1(1.39) respectively. The results of multi variable linear regression analyses showed that WHZ scores of children of mothers who had autonomy of conducting big purchase were higher by 0.42 compared to children's whose mothers had not. In addition, a child whose father was involved in childcare and feeding had higher HAZ score by 0.1. Regarding age, as for every month increase in age of child, a 0.04 point decrease in HAZ score and a 0.01 point decrease in WHZ were noted. Similarly, a child living in food insecure households had lower HAZ score by 0.29 compared to child of food secured households. As family size increased by a person a WHZ score of a child is decreased by 0.08. WHZ and HAZ scores of male child was found lower by 0.25 and 0.38 respectively compared to a female child of same age. Conclusion Women\u2019s autonomy and men\u2019s involvement appeared in tandem with better child anthropometric outcomes. Nutrition interventions in such setting should integrate enhancing women\u2019s autonomy over resource and men\u2019s involvement in childcare and feeding, in addition to food security measures.", "which uses ?", "windows", 929.0, 936.0], ["When we observe a dynamic emotional facial expression, we usually automatically anticipate how that expression will develop. Our objective was to study a neurocognitive biomarker of this anticipatory process for facial pain expressions, operationalized as a mismatch effect. For this purpose, we studied the behavioral and neuroelectric (Event-Related Potential, ERP) correlates, of a match or mismatch, between the intensity of an expression of pain anticipated by the participant, and the intensity of a static test expression of pain displayed with the use of a representational momentum paradigm. Here, the paradigm consisted in displaying a dynamic facial pain expression which suddenly disappeared, and participants had to memorize the final intensity of the dynamic expression. We compared ERPs in response to congruent (intensity the same as the one memorized) and incongruent (intensity different from the one memorized) static expression intensities displayed after the dynamic expression. This paradigm allowed us to determine the amplitude and direction of this intensity anticipation by measuring the observer\u2019s memory bias. Results behaviorally showed that the anticipation was backward (negative memory bias) for high intensity expressions of pain (participants expected a return to a neutral state) and more forward (memory bias less negative, or even positive) for less intense expressions (participants expected increased intensity). Detecting mismatch (incongruent intensity) led to faster responses than detecting match (congruent intensity). The neuroelectric correlates of this mismatch effect in response to the testing of expression intensity ranged from P100 to LPP (Late Positive Potential). Path analysis and source localization suggested that the medial frontal gyrus was instrumental in mediating the mismatch effect through top-down influence on both the occipital and temporal regions. 
Moreover, having the facility to detect incongruent expressions, by anticipating emotional state, could be useful for prosocial behavior and the detection of trustworthiness.", "which uses ?", "PROCESS", 200.0, 207.0], ["Over the past decades, quantitative methods linking theory and observation became increasingly important in many areas of life science. Subsequently, a large number of mathematical and computational models has been developed. The BioModels database alone lists more than 140,000 Systems Biology Markup Language (SBML) models. However, while the exchange within specific model classes has been supported by standardisation and database efforts, the generic application and especially the re-use of models is still limited by practical issues such as easy and straight forward model execution. MAGPIE, a Modeling and Analysis Generic Platform with Integrated Evaluation, closes this gap by providing a software platform for both, publishing and executing computational models without restrictions on the programming language, thereby combining a maximum on flexibility for programmers with easy handling for non-technical users. MAGPIE goes beyond classical SBML platforms by including all models, independent of the underlying programming language, ranging from simple script models to complex data integration and computations. We demonstrate the versatility of MAGPIE using four prototypic example cases. We also outline the potential of MAGPIE to improve transparency and reproducibility of computational models in life sciences. A demo server is available at magpie.imb.medizin.tu-dresden.de.", "which uses ?", "MAGPIE", 592.0, 598.0], ["Background Pliocene uplifting of the Qinghai-Tibetan Plateau (QTP) and Quaternary glaciation may have impacted the Asian biota more than any other events. Little is documented with respect to how the geological and climatological events influenced speciation as well as spatial and genetic structuring, especially in vertebrate endotherms. Macaca mulatta is the most widely distributed non-human primate. It may be the most suitable model to test hypotheses regarding the genetic consequences of orogenesis on an endotherm. Methodology and Principal Findings Using a large dataset of maternally inherited mitochondrial DNA gene sequences and nuclear microsatellite DNA data, we discovered two maternal super-haplogroups exist, one in western China and the other in eastern China. M. mulatta formed around 2.31 Ma (1.51\u20133.15, 95%), and divergence of the two major matrilines was estimated at 1.15 Ma (0.78\u20131.55, 95%). The western super-haplogroup exhibits significant geographic structure. In contrast, the eastern super-haplogroup has far greater haplotypic variability with little structure based on analyses of six variable microsatellite loci using Structure and Geneland. Analysis using Migrate detected greater gene flow from WEST to EAST than vice versa. We did not detect signals of bottlenecking in most populations. Conclusions Analyses of the nuclear and mitochondrial datasets obtained large differences in genetic patterns for M. mulatta. The difference likely reflects inheritance mechanisms of the maternally inherited mtDNA genome versus nuclear biparentally inherited STRs and male-mediated gene flow. Dramatic environmental changes may be responsible for shaping the matrilineal history of macaques. The timing of events, the formation of M. 
mulatta, and the divergence of the super-haplogroups, corresponds to both the uplifting of the QTP and Quaternary climatic oscillations. Orogenesis likely drove divergence of western populations in China, and Pleistocene glaciations are likely responsible for genetic structuring in the eastern super-haplogroup via geographic isolation and secondary contact.", "which uses ?", "Bottleneck", NaN, NaN], ["Bayesian Networks (BN) have been a popular predictive modeling formalism in bioinformatics, but their application in modern genomics has been slowed by an inability to cleanly handle domains with mixed discrete and continuous variables. Existing free BN software packages either discretize continuous variables, which can lead to information loss, or do not include inference routines, which makes prediction with the BN impossible. We present CGBayesNets, a BN package focused around prediction of a clinical phenotype from mixed discrete and continuous variables, which fills these gaps. CGBayesNets implements Bayesian likelihood and inference algorithms for the conditional Gaussian Bayesian network (CGBNs) formalism, one appropriate for predicting an outcome of interest from, e.g., multimodal genomic data. We provide four different network learning algorithms, each making a different tradeoff between computational cost and network likelihood. CGBayesNets provides a full suite of functions for model exploration and verification, including cross validation, bootstrapping, and AUC manipulation. We highlight several results obtained previously with CGBayesNets, including predictive models of wood properties from tree genomics, leukemia subtype classification from mixed genomic data, and robust prediction of intensive care unit mortality outcomes from metabolomic profiles. We also provide detailed example analysis on public metabolomic and gene expression datasets. CGBayesNets is implemented in MATLAB and available as MATLAB source code, under an Open Source license and anonymous download at http://www.cgbayesnets.com.", "which uses ?", "MATLAB", 1511.0, 1517.0], ["The carambola fruit fly, Bactrocera carambolae, is a tephritid native to Asia that has invaded South America through small-scale trade of fruits from Indonesia. The economic losses associated with biological invasions of other fruit flies around the world and the polyphagous behaviour of B. carambolae have prompted much concern among government agencies and farmers with the potential spread of this pest. Here, ecological niche models were employed to identify suitable environments available to B. carambolae in a global scale and assess the extent of the fruit acreage that may be at risk of attack in Brazil. Overall, 30 MaxEnt models built with different combinations of environmental predictors and settings were evaluated for predicting the potential distribution of the carambola fruit fly. The best model was selected based on threshold-independent and threshold-dependent metrics. Climatically suitable areas were identified in tropical and subtropical regions of Central and South America, Sub-Saharan Africa, west and east coast of India and northern Australia. The suitability map of B. carambola was intersected against maps of fruit acreage in Brazil. The acreage under potential risk of attack varied widely among fruit species, which is expected because the production areas are concentrated in different regions of the country. 
The production of cashew is the one that is at higher risk, with almost 90% of its acreage within the suitable range of B. carambolae, followed by papaya (78%), tangerine (51%), guava (38%), lemon (30%), orange (29%), mango (24%) and avocado (20%). This study provides an important contribution to the knowledge of the ecology of B. carambolae, and the information generated here can be used by government agencies as a decision-making tool to prevent the carambola fruit fly spread across the world.", "which uses ?", "MaxEnt", 627.0, 633.0], ["The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the \u2018missing heritability,\u2019 which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, thus significantly increasing computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS), for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculations and main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, thus significantly improving the computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned identical results at each analysis step with the original prototype R code, but the computational time was reduced from more than one month to about five minutes. These advances will help overcome the bottleneck frequently encountered in genome wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publically available at http://bioinfo.noble.org/PolyGenic_QTL/.", "which uses ?", "C++", NaN, NaN], ["The science of meditation has grown tremendously in the last two decades. Most studies have focused on evaluating the clinical effectiveness of mindfulness-based interventions, neural and other physiological correlates of meditation, and individual cognitive and emotional aspects of meditation. Far less research has been conducted on more challenging domains to measure, such as group and relational, transpersonal and mystical, and difficult aspects of meditation; anomalous or extraordinary phenomena related to meditation; and post-conventional stages of development associated with meditation. However, these components of meditation may be crucial to people\u2019s psychological and spiritual development, could represent important mediators and/or mechanisms by which meditation confers benefits, and could themselves be important outcomes of meditation practices. 
In addition, since large numbers of novices are being introduced to meditation, it is helpful to investigate experiences they may encounter that are not well understood. Over the last four years, a task force of meditation researchers and teachers met regularly to develop recommendations for expanding the current meditation research field to include these important yet often neglected topics. These meetings led to a cross-sectional online survey to investigate the prevalence of a wide range of experiences in 1120 meditators. Results show that the majority of respondents report having had many of these anomalous and extraordinary experiences. While some of the topics are potentially controversial, they can be subjected to rigorous scientific investigation. These arenas represent largely uncharted scientific terrain and provide excellent opportunities for both new and experienced researchers. We provide suggestions for future directions, with accompanying online materials to encourage such research.", "which uses ?", "Excel", NaN, NaN], ["Background Pliocene uplifting of the Qinghai-Tibetan Plateau (QTP) and Quaternary glaciation may have impacted the Asian biota more than any other events. Little is documented with respect to how the geological and climatological events influenced speciation as well as spatial and genetic structuring, especially in vertebrate endotherms. Macaca mulatta is the most widely distributed non-human primate. It may be the most suitable model to test hypotheses regarding the genetic consequences of orogenesis on an endotherm. Methodology and Principal Findings Using a large dataset of maternally inherited mitochondrial DNA gene sequences and nuclear microsatellite DNA data, we discovered two maternal super-haplogroups exist, one in western China and the other in eastern China. M. mulatta formed around 2.31 Ma (1.51\u20133.15, 95%), and divergence of the two major matrilines was estimated at 1.15 Ma (0.78\u20131.55, 95%). The western super-haplogroup exhibits significant geographic structure. In contrast, the eastern super-haplogroup has far greater haplotypic variability with little structure based on analyses of six variable microsatellite loci using Structure and Geneland. Analysis using Migrate detected greater gene flow from WEST to EAST than vice versa. We did not detect signals of bottlenecking in most populations. Conclusions Analyses of the nuclear and mitochondrial datasets obtained large differences in genetic patterns for M. mulatta. The difference likely reflects inheritance mechanisms of the maternally inherited mtDNA genome versus nuclear biparentally inherited STRs and male-mediated gene flow. Dramatic environmental changes may be responsible for shaping the matrilineal history of macaques. The timing of events, the formation of M. mulatta, and the divergence of the super-haplogroups, corresponds to both the uplifting of the QTP and Quaternary climatic oscillations. Orogenesis likely drove divergence of western populations in China, and Pleistocene glaciations are likely responsible for genetic structuring in the eastern super-haplogroup via geographic isolation and secondary contact.", "which uses ?", "migrate", 1191.0, 1198.0], ["Identification of single nucleotide polymorphisms (SNPs) and mutations is important for the discovery of genetic predisposition to complex diseases. PCR resequencing is the method of choice for de novo SNP discovery. 
However, manual curation of putative SNPs has been a major bottleneck in the application of this method to high-throughput screening. Therefore it is critical to develop a more sensitive and accurate computational method for automated SNP detection. We developed a software tool, SNPdetector, for automated identification of SNPs and mutations in fluorescence-based resequencing reads. SNPdetector was designed to model the process of human visual inspection and has a very low false positive and false negative rate. We demonstrate the superior performance of SNPdetector in SNP and mutation analysis by comparing its results with those derived by human inspection, PolyPhred (a popular SNP detection tool), and independent genotype assays in three large-scale investigations. The first study identified and validated inter- and intra-subspecies variations in 4,650 traces of 25 inbred mouse strains that belong to either the Mus musculus species or the M. spretus species. Unexpected heterozygosity in CAST/Ei strain was observed in two out of 1,167 mouse SNPs. The second study identified 11,241 candidate SNPs in five ENCODE regions of the human genome covering 2.5 Mb of genomic sequence. Approximately 50% of the candidate SNPs were selected for experimental genotyping; the validation rate exceeded 95%. The third study detected ENU-induced mutations (at 0.04% allele frequency) in 64,896 traces of 1,236 zebra fish. Our analysis of three large and diverse test datasets demonstrated that SNPdetector is an effective tool for genome-scale research and for large-sample clinical studies. SNPdetector runs on Unix/Linux platform and is available publicly (http://lpg.nci.nih.gov).", "which uses ?", "PolyPhred", 884.0, 893.0], ["Background Pliocene uplifting of the Qinghai-Tibetan Plateau (QTP) and Quaternary glaciation may have impacted the Asian biota more than any other events. Little is documented with respect to how the geological and climatological events influenced speciation as well as spatial and genetic structuring, especially in vertebrate endotherms. Macaca mulatta is the most widely distributed non-human primate. It may be the most suitable model to test hypotheses regarding the genetic consequences of orogenesis on an endotherm. Methodology and Principal Findings Using a large dataset of maternally inherited mitochondrial DNA gene sequences and nuclear microsatellite DNA data, we discovered two maternal super-haplogroups exist, one in western China and the other in eastern China. M. mulatta formed around 2.31 Ma (1.51\u20133.15, 95%), and divergence of the two major matrilines was estimated at 1.15 Ma (0.78\u20131.55, 95%). The western super-haplogroup exhibits significant geographic structure. In contrast, the eastern super-haplogroup has far greater haplotypic variability with little structure based on analyses of six variable microsatellite loci using Structure and Geneland. Analysis using Migrate detected greater gene flow from WEST to EAST than vice versa. We did not detect signals of bottlenecking in most populations. Conclusions Analyses of the nuclear and mitochondrial datasets obtained large differences in genetic patterns for M. mulatta. The difference likely reflects inheritance mechanisms of the maternally inherited mtDNA genome versus nuclear biparentally inherited STRs and male-mediated gene flow. Dramatic environmental changes may be responsible for shaping the matrilineal history of macaques. The timing of events, the formation of M. 
mulatta, and the divergence of the super-haplogroups, corresponds to both the uplifting of the QTP and Quaternary climatic oscillations. Orogenesis likely drove divergence of western populations in China, and Pleistocene glaciations are likely responsible for genetic structuring in the eastern super-haplogroup via geographic isolation and secondary contact.", "which uses ?", "Structure", 978.0, 987.0], ["A user ready, portable, documented software package, NFTsim, is presented to facilitate numerical simulations of a wide range of brain systems using continuum neural field modeling. NFTsim enables users to simulate key aspects of brain activity at multiple scales. At the microscopic scale, it incorporates characteristics of local interactions between cells, neurotransmitter effects, synaptodendritic delays and feedbacks. At the mesoscopic scale, it incorporates information about medium to large scale axonal ranges of fibers, which are essential to model dissipative wave transmission and to produce synchronous oscillations and associated cross-correlation patterns as observed in local field potential recordings of active tissue. At the scale of the whole brain, NFTsim allows for the inclusion of long range pathways, such as thalamocortical projections, when generating macroscopic activity fields. The multiscale nature of the neural activity produced by NFTsim has the potential to enable the modeling of resulting quantities measurable via various neuroimaging techniques. In this work, we give a comprehensive description of the design and implementation of the software. Due to its modularity and flexibility, NFTsim enables the systematic study of an unlimited number of neural systems with multiple neural populations under a unified framework and allows for direct comparison with analytic and experimental predictions. The code is written in C++ and bundled with Matlab routines for a rapid quantitative analysis and visualization of the outputs. The output of NFTsim is stored in plain text file enabling users to select from a broad range of tools for offline analysis. This software enables a wide and convenient use of powerful physiologically-based neural field approaches to brain modeling. NFTsim is distributed under the Apache 2.0 license.", "which uses ?", "C++", NaN, NaN], ["With the aim of uncovering all of the most basal variation in the northern Asian mitochondrial DNA (mtDNA) haplogroups, we have analyzed mtDNA control region and coding region sequence variation in 98 Altaian Kazakhs from southern Siberia and 149 Barghuts from Inner Mongolia, China. Both populations exhibit the prevalence of eastern Eurasian lineages accounting for 91.9% in Barghuts and 60.2% in Altaian Kazakhs. The strong affinity of Altaian Kazakhs and populations of northern and central Asia has been revealed, reflecting both influences of central Asian inhabitants and essential genetic interaction with the Altai region indigenous populations. Statistical analyses data demonstrate a close positioning of all Mongolic-speaking populations (Mongolians, Buryats, Khamnigans, Kalmyks as well as Barghuts studied here) and Turkic-speaking Sojots, thus suggesting their origin from a common maternal ancestral gene pool. 
In order to achieve a thorough coverage of DNA lineages revealed in the northern Asian matrilineal gene pool, we have completely sequenced the mtDNA of 55 samples representing haplogroups R11b, B4, B5, F2, M9, M10, M11, M13, N9a and R9c1, which were pinpointed from a massive collection (over 5000 individuals) of northern and eastern Asian, as well as European control region mtDNA sequences. Applying the newly updated mtDNA tree to the previously reported northern Asian and eastern Asian mtDNA data sets has resolved the status of the poorly classified mtDNA types and allowed us to obtain the coalescence age estimates of the nodes of interest using different calibrated rates. Our findings confirm our previous conclusion that northern Asian maternal gene pool consists of predominantly post-LGM components of eastern Asian ancestry, though some genetic lineages may have a pre-LGM/LGM origin.", "which uses ?", "STATISTICA", NaN, NaN], ["Objective The aim of this study was to assess the quality of life (QOL) of medical students during their medical education and explore the influencing factors of the QOL of students. Methods A cross-sectional study was conducted in June 2011. The study population was composed of 1686 medical students in years 1 to 5 at China Medical University. The Chinese version of WHOQOL-BREF instrument was used to assess the QOL of medical students. The reliability and validity of the questionnaire were assessed by Cronbach\u2019s \u03b1 coefficient and factor analysis respectively. The relationships between QOL and the factors including gender, academic year level, and specialty were examined using t-test or one-way ANOVA followed by Student-Newman\u2013Keuls test. Statistic analysis was performed by SPSS 13.0. Results The overall Cronbach\u2019s \u03b1 coefficient of the WHOQOL-BREF questionnaire was 0.731. The confirmatory factor analysis provided an acceptable fit to a four-factor model in the medical student sample. The scores of different academic years were significantly different in the psychological health and social relations domains (p<0.05). Third year students had the lowest scores in psychological health and social relations domains. The scores of different specialties had significant differences in psychological health and social relations domains (p<0.05). Students from clinical medicine had the highest scores. Gender, interest in the area of study, confidence in career development, hometown location, and physical exercise were significantly associated with the quality of life of students in some domains (p<0.05). Conclusions The WHOQOL-BREF was reliable and valid in the assessment of the QOL of Chinese medical students. In order to cope with the influencing factors of the QOL, medical schools should carry out curriculum innovation and give the necessary support for medical students, especially for 3rd year students.", "which uses ?", "SPSS", 785.0, 789.0], ["Cognitive processes, such as the generation of language, can be mapped onto the brain using fMRI. These maps can in turn be used for decoding the respective processes from the brain activation patterns. Given individual variations in brain anatomy and organization, analyses on the level of the single person are important to improve our understanding of how cognitive processes correspond to patterns of brain activity. They also allow to advance clinical applications of fMRI, because in the clinical setting making diagnoses for single cases is imperative. 
In the present study, we used mental imagery tasks to investigate language production, motor functions, visuo-spatial memory, face processing, and resting-state activity in a single person. Analysis methods were based on similarity metrics, including correlations between training and test data, as well as correlations with maps from the NeuroSynth meta-analysis. The goal was to make accurate predictions regarding the cognitive domain (e.g. language) and the specific content (e.g. animal names) of single 30-second blocks. Four teams used the dataset, each blinded regarding the true labels of the test data. Results showed that the similarity metrics allowed to reach the highest degrees of accuracy when predicting the cognitive domain of a block. Overall, 23 of the 25 test blocks could be correctly predicted by three of the four teams. Excluding the unspecific rest condition, up to 10 out of 20 blocks could be successfully decoded regarding their specific content. The study shows how the information contained in a single fMRI session and in each of its single blocks can allow to draw inferences about the cognitive processes an individual engaged in. Simple methods like correlations between blocks of fMRI data can serve as highly reliable approaches for cognitive decoding. We discuss the implications of our results in the context of clinical fMRI applications, with a focus on how decoding can support functional localization.", "which uses ?", "NeuroSynth", 907.0, 917.0], ["Background Pliocene uplifting of the Qinghai-Tibetan Plateau (QTP) and Quaternary glaciation may have impacted the Asian biota more than any other events. Little is documented with respect to how the geological and climatological events influenced speciation as well as spatial and genetic structuring, especially in vertebrate endotherms. Macaca mulatta is the most widely distributed non-human primate. It may be the most suitable model to test hypotheses regarding the genetic consequences of orogenesis on an endotherm. Methodology and Principal Findings Using a large dataset of maternally inherited mitochondrial DNA gene sequences and nuclear microsatellite DNA data, we discovered two maternal super-haplogroups exist, one in western China and the other in eastern China. M. mulatta formed around 2.31 Ma (1.51\u20133.15, 95%), and divergence of the two major matrilines was estimated at 1.15 Ma (0.78\u20131.55, 95%). The western super-haplogroup exhibits significant geographic structure. In contrast, the eastern super-haplogroup has far greater haplotypic variability with little structure based on analyses of six variable microsatellite loci using Structure and Geneland. Analysis using Migrate detected greater gene flow from WEST to EAST than vice versa. We did not detect signals of bottlenecking in most populations. Conclusions Analyses of the nuclear and mitochondrial datasets obtained large differences in genetic patterns for M. mulatta. The difference likely reflects inheritance mechanisms of the maternally inherited mtDNA genome versus nuclear biparentally inherited STRs and male-mediated gene flow. Dramatic environmental changes may be responsible for shaping the matrilineal history of macaques. The timing of events, the formation of M. mulatta, and the divergence of the super-haplogroups, corresponds to both the uplifting of the QTP and Quaternary climatic oscillations. 
Orogenesis likely drove divergence of western populations in China, and Pleistocene glaciations are likely responsible for genetic structuring in the eastern super-haplogroup via geographic isolation and secondary contact.", "which uses ?", "Geneland", 1166.0, 1174.0], ["Identification of single nucleotide polymorphisms (SNPs) and mutations is important for the discovery of genetic predisposition to complex diseases. PCR resequencing is the method of choice for de novo SNP discovery. However, manual curation of putative SNPs has been a major bottleneck in the application of this method to high-throughput screening. Therefore it is critical to develop a more sensitive and accurate computational method for automated SNP detection. We developed a software tool, SNPdetector, for automated identification of SNPs and mutations in fluorescence-based resequencing reads. SNPdetector was designed to model the process of human visual inspection and has a very low false positive and false negative rate. We demonstrate the superior performance of SNPdetector in SNP and mutation analysis by comparing its results with those derived by human inspection, PolyPhred (a popular SNP detection tool), and independent genotype assays in three large-scale investigations. The first study identified and validated inter- and intra-subspecies variations in 4,650 traces of 25 inbred mouse strains that belong to either the Mus musculus species or the M. spretus species. Unexpected heterozygosity in CAST/Ei strain was observed in two out of 1,167 mouse SNPs. The second study identified 11,241 candidate SNPs in five ENCODE regions of the human genome covering 2.5 Mb of genomic sequence. Approximately 50% of the candidate SNPs were selected for experimental genotyping; the validation rate exceeded 95%. The third study detected ENU-induced mutations (at 0.04% allele frequency) in 64,896 traces of 1,236 zebra fish. Our analysis of three large and diverse test datasets demonstrated that SNPdetector is an effective tool for genome-scale research and for large-sample clinical studies. SNPdetector runs on Unix/Linux platform and is available publicly (http://lpg.nci.nih.gov).", "which uses ?", "Unix", 1831.0, 1835.0], ["Objective Psychological distress remains a major challenge in cancer care. The complexity of psychological symptoms in cancer patients requires multifaceted symptom management tailored to individual patient characteristics and active patient involvement. We assessed the relationship between resilience, psychological distress and physical activity in cancer patients to elucidate potential moderators of the identified relationships. Method A cross-sectional observational study to assess the prevalence of symptoms and supportive care needs of oncology patients undergoing chemotherapy, radiotherapy or chemo-radiation therapy in a tertiary oncology service. Resilience was assessed using the 10-item Connor-Davidson Resilience Scale (CD-RISC 10), social support was evaluated using the 12-item Multidimensional Scale of Perceived Social Support (MSPSS) and both psychological distress and activity level were measured using corresponding subscales of the Rotterdam Symptom Checklist (RSCL). Socio-demographic and medical data were extracted from patient medical records. Correlation analyses were performed and structural equation modeling was employed to assess the associations between resilience, psychological distress and activity level as well as selected socio-demographic variables. 
Results Data from 343 patients were included in the analysis. Our revised model demonstrated an acceptable fit to the data (\u03c72(163) = 313.76, p = .000, comparative fit index (CFI) = .942, Tucker-Lewis index (TLI) = .923, root mean square error of approximation (RMSEA) = .053, 90% CI [.044.062]). Resilience was negatively associated with psychological distress (\u03b2 = -.59), and positively associated with activity level (\u03b2 = .20). The relationship between resilience and psychological distress was moderated by age (\u03b2 = -0.33) but not social support (\u03b2 = .10, p = .12). Conclusion Cancer patients with higher resilience, particularly older patients, experience lower psychological distress. Patients with higher resilience are physically more active. Evaluating levels of resilience in cancer patients then tailoring targeted interventions to facilitate resilience may help improve the effectiveness of psychological symptom management interventions.", "which uses ?", "SPSS", NaN, NaN], ["The jaguar, Panthera onca, is a top predator with the extant population found within the Brazilian Caatinga biome now known to be on the brink of extinction. Designing new conservation units and potential corridors are therefore crucial for the long-term survival of the species within the Caatinga biome. Thus, our aims were: 1) to recognize suitable areas for jaguar occurrence, 2) to delineate areas for jaguar conservation (PJCUs), 3) to design corridors among priority areas, and 4) to prioritize PJCUs. A total of 62 points records of jaguar occurrence and 10 potential predictors were analyzed in a GIS environment. A predictive distributional map was obtained using Species Distribution Modeling (SDM) as performed by the Maximum Entropy (Maxent) algorithm. Areas equal to or higher than the median suitability value of 0.595 were selected as of high suitability for jaguar occurrence and named as Priority Jaguar Conservation Units (PJCU). Ten PJCUs with sizes varying from 23.6 km2 to 4,311.0 km2 were identified. Afterwards, we combined the response curve, as generated by SDM, and expert opinions to create a permeability matrix and to identify least cost corridors and buffer zones between each PJCU pair. Connectivity corridors and buffer zone for jaguar movement included an area of 8.884,26 km2 and the total corridor length is about 160.94 km. Prioritizing criteria indicated the PJCU representing c.a. 68.61% of the total PJCU area (PJCU # 1) as of high priority for conservation and connectivity with others PJCUs (PJCUs # 4, 5 and 7) desirable for the long term survival of the species. In conclusion, by using the jaguar as a focal species and combining SDM and expert opinion we were able to create a valid framework for practical conservation actions at the Caatinga biome. The same approach could be used for the conservation of other carnivores.", "which uses ?", "Maxent", 747.0, 753.0], ["Background Age, period and cohort (APC) analyses, using representative, population-based descriptive data, provide additional understanding behind increased prevalence rates. Methods Data on obesity and diabetes from the South Australian (SA) monthly chronic disease and risk factor surveillance system from July 2002 to December 2013 (n = 59,025) were used. Age was the self-reported age of the respondent at the time of the interview. Period was the year of the interview and cohort was age subtracted from the survey year. Cohort years were 1905 to 1995. All variables were treated as continuous. 
The age-sex standardised prevalence for obesity and diabetes was calculated using the Australia 2011 census. The APC models were constructed with \u2018\u2018apcfit\u2019\u2019 in Stata. Results The age-sex standardised prevalence of obesity and diabetes increased in 2002-2013 from 18.6% to 24.1% and from 6.2% to 7.9%. The peak age for obesity was approximately 70 years with a steady increasing rate from 20 to 70 years of age. The peak age for diabetes was approximately 80 years. There were strong cohort effects and no period effects for both obesity and diabetes. The magnitude of the cohort effect is much more pronounced for obesity than for diabetes. Conclusion The APC analyses showed a higher than expected peak age for both obesity and diabetes, strong cohort effects with an acceleration of risk after 1960s for obesity and after 1940s for diabetes, and no period effects. By simultaneously considering the effects of age, period and cohort we have provided additional evidence for effective public health interventions.", "which uses ?", "Stata", 760.0, 765.0], ["There is increasing evidence that protein dynamics and conformational changes can play an important role in modulating biological function. As a result, experimental and computational methods are being developed, often synergistically, to study the dynamical heterogeneity of a protein or other macromolecules in solution. Thus, methods such as molecular dynamics simulations or ensemble refinement approaches have provided conformational ensembles that can be used to understand protein function and biophysics. These developments have in turn created a need for algorithms and software that can be used to compare structural ensembles in the same way as the root-mean-square-deviation is often used to compare static structures. Although a few such approaches have been proposed, these can be difficult to implement efficiently, hindering a broader applications and further developments. Here, we present an easily accessible software toolkit, called ENCORE, which can be used to compare conformational ensembles generated either from simulations alone or synergistically with experiments. ENCORE implements three previously described methods for ensemble comparison, that each can be used to quantify the similarity between conformational ensembles by estimating the overlap between the probability distributions that underlie them. We demonstrate the kinds of insights that can be obtained by providing examples of three typical use-cases: comparing ensembles generated with different molecular force fields, assessing convergence in molecular simulations, and calculating differences and similarities in structural ensembles refined with various sources of experimental data. We also demonstrate efficient computational scaling for typical analyses, and robustness against both the size and sampling of the ensembles. ENCORE is freely available and extendable, integrates with the established MDAnalysis software package, reads ensemble data in many common formats, and can work with large trajectory files.", "which uses ?", "MDAnalysis", 1898.0, 1908.0], ["Introduction In Ethiopia, the burden of malaria during pregnancy remains a public health problem. Having a good malaria knowledge leads to practicing the prevention of malaria and seeking a health care. Researches regarding pregnant women\u2019s knowledge on malaria in Ethiopia is limited. So the aim of this study was to assess malaria knowledge and its associated factors among pregnant women, 2018. 
Methods An institutional-based cross-sectional study was conducted in Adis Zemen Hospital. Data were collected using a pre-tested, interviewer-administered structured questionnaire among 236 mothers. Women\u2019s knowledge on malaria was measured using six malaria-related questions (cause of malaria, mode of transmission, signs and symptoms, complication and prevention of malaria). The collected data were entered using Epidata version 3.1 and exported to SPSS version 20 for analysis. Bivariate and multivariate logistic regressions were computed to identify predictor variables at a 95% confidence interval. Variables having a P value of <0.05 were considered as predictor variables of malaria knowledge. Result A total of 235 pregnant women participated, which makes the response rate 99.6%. One hundred seventy-two (73.2%) of the mothers had good knowledge on malaria. Women who were from urban areas (AOR: 2.4; CI: 1.8, 5.7), had better family monthly income (AOR: 3.4; CI: 2.7, 3.8), or attended education (AOR: 1.8; CI: 1.4, 3.5) were more knowledgeable. Conclusion and recommendation The majority of participants had good knowledge on malaria. Educational status, household monthly income and residence were predictors of malaria knowledge. Increasing women\u2019s knowledge, especially for those who are from rural areas, have no education, and have low monthly income, is still needed.", "which uses ?", "epidata", 816.0, 823.0], ["This paper proposes a robust and real-time capable algorithm for classification of the first and second heart sounds. The classification algorithm is based on the evaluation of the envelope curve of the phonocardiogram. For the evaluation, in contrast to other studies, measurements on 12 probands were conducted in different physiological conditions. Moreover, for each measurement the auscultation point, posture and physical stress were varied. The proposed envelope-based algorithm is tested with two different methods for envelope curve extraction: the Hilbert transform and the short-time Fourier transform. The performance of the classification of the first heart sounds is evaluated by using a reference electrocardiogram. Overall, by using the Hilbert transform, the algorithm has a better performance regarding the F1-score and computational effort. The proposed algorithm achieves for the S1 classification an F1-score of up to 95.7% and on average 90.5%. The algorithm is robust against the age, BMI, posture, heart rate and auscultation point (except measurements on the back) of the subjects.", "which uses ?", "Short-time Fourier transform", 592.0, 620.0], ["Chaste \u2014 Cancer, Heart And Soft Tissue Environment \u2014 is an open source C++ library for the computational simulation of mathematical models developed for physiology and biology. Code development has been driven by two initial applications: cardiac electrophysiology and cancer development. A large number of cardiac electrophysiology studies have been enabled and performed, including high-performance computational investigations of defibrillation on realistic human cardiac geometries. New models for the initiation and growth of tumours have been developed. In particular, cell-based simulations have provided novel insight into the role of stem cells in the colorectal crypt. Chaste is constantly evolving and is now being applied to a far wider range of problems. The code provides modules for handling common scientific computing components, such as meshes and solvers for ordinary and partial differential equations (ODEs/PDEs). 
Re-use of these components avoids the need for researchers to \u2018re-invent the wheel\u2019 with each new project, accelerating the rate of progress in new applications. Chaste is developed using industrially-derived techniques, in particular test-driven development, to ensure code quality, re-use and reliability. In this article we provide examples that illustrate the types of problems Chaste can be used to solve, which can be run on a desktop computer. We highlight some scientific studies that have used or are using Chaste, and the insights they have provided. The source code, both for specific releases and the development version, is available to download under an open source Berkeley Software Distribution (BSD) licence at http://www.cs.ox.ac.uk/chaste, together with details of a mailing list and links to documentation and tutorials.", "which uses ?", "C++", NaN, NaN], ["The broadnose sevengill shark, Notorynchus cepedianus, a common coastal species in the eastern North Pacific, was sampled during routine capture and tagging operations conducted from 2005\u20132012. One hundred and thirty-three biopsy samples were taken during these research operations in Willapa Bay, Washington and in San Francisco Bay, California. Genotypic data from seven polymorphic microsatellites (derived from the related sixgill shark, Hexanchus griseus) were used to describe N. cepedianus genetic diversity, population structure and relatedness. Diversity within N. cepedianus was found to be low to moderate with an average observed heterozygosity of 0.41, expected heterozygosity of 0.53, and an average of 5.1 alleles per microsatellite locus. There was no evidence of a recent population bottleneck based on genetic data. Analyses of genetic differences between the two sampled estuaries suggest two distinct populations with some genetic mixing of sharks sampled during 2005\u20132006. Relatedness within sampled populations was high, with percent relatedness among sharks caught in the same area indicating 42.30% first-order relative relationships (full or half siblings). Estuary-specific familial relationships suggest that management of N. cepedianus on the U.S. West Coast should incorporate stock-specific management goals to conserve this ecologically important predator.", "which uses ?", "BOTTLENECK", 800.0, 810.0], ["Background Historical biogeography and evolutionary processes of cave taxa have been widely studied in temperate regions. However, Southeast Asian cave ecosystems remain largely unexplored despite their high scientific interest. Here we studied the phylogeography of Leopoldamys neilli, a cave-dwelling murine rodent living in limestone karsts of Thailand, and compared the molecular signature of mitochondrial and nuclear markers. Methodology/Principal Findings We used a large sampling (n = 225) from 28 localities in Thailand and a combination of mitochondrial and nuclear markers with various evolutionary rates (two intronic regions and 12 microsatellites). The evolutionary history of L. neilli and the relative role of vicariance and dispersal were investigated using ancestral range reconstruction analysis and Approximate Bayesian computation (ABC). Both mitochondrial and nuclear markers support a large-scale population structure of four main groups (west, centre, north and northeast) and a strong finer structure within each of these groups. A deep genealogical divergence among geographically close lineages is observed and denotes high population fragmentation. 
Our findings suggest that the current phylogeographic pattern of this species results from the fragmentation of a widespread ancestral population and that vicariance has played a significant role in the evolutionary history of L. neilli. These deep vicariant events that occurred during the Plio-Pleistocene are related to the formation of the Central Plain of Thailand. Consequently, the western, central, northern and northeastern groups of populations were historically isolated and should be considered as four distinct Evolutionarily Significant Units (ESUs). Conclusions/Significance Our study confirms the benefit of using several independent genetic markers to obtain a comprehensive and reliable picture of L. neilli evolutionary history at different levels of resolution. The complex genetic structure of Leopoldamys neilli is supported by congruent mitochondrial and nuclear markers and has been influenced by the geological history of Thailand during the Plio-Pleistocene.", "which uses ?", "STRUCTURE", 931.0, 940.0], ["Characterization of Human Endogenous Retrovirus (HERV) expression within the transcriptomic landscape using RNA-seq is complicated by uncertainty in fragment assignment because of sequence similarity. We present Telescope, a computational software tool that provides accurate estimation of transposable element expression (retrotranscriptome) resolved to specific genomic locations. Telescope directly addresses uncertainty in fragment assignment by reassigning ambiguously mapped fragments to the most probable source transcript as determined within a Bayesian statistical model. We demonstrate the utility of our approach through single locus analysis of HERV expression in 13 ENCODE cell types. When examined at this resolution, we find that the magnitude and breadth of the retrotranscriptome can be vastly different among cell types. Furthermore, our approach is robust to differences in sequencing technology and demonstrates that the retrotranscriptome has potential to be used for cell type identification. We compared our tool with other approaches for quantifying transposable element (TE) expression, and found that Telescope has the greatest resolution, as it estimates expression at specific TE insertions rather than at the TE subfamily level. Telescope performs highly accurate quantification of the retrotranscriptomic landscape in RNA-seq experiments, revealing a differential complexity in the transposable element biology of complex systems not previously observed. Telescope is available at https://github.com/mlbendall/telescope.", "which uses ?", "Telescope", 212.0, 221.0], ["PathVisio is a commonly used pathway editing, visualization and analysis tool. Biological pathways have been used by biologists for many years to describe the detailed steps in biological processes. Those powerful, visual representations help researchers to better understand, share and discuss knowledge. Since the first publication of PathVisio in 2008, the original paper has been cited more than 170 times and PathVisio has been used in many different biological studies. As an online editor PathVisio is also integrated in the community-curated pathway database WikiPathways. Here we present the third version of PathVisio with the newest additions and improvements of the application. The core features of PathVisio are pathway drawing, advanced data visualization and pathway statistics. 
Additionally, PathVisio 3 introduces a new, powerful extension system that allows other developers to contribute additional functionality in the form of plugins without changing the core application. PathVisio can be downloaded from http://www.pathvisio.org and in 2014 PathVisio 3 was downloaded over 5,500 times. There are already more than 15 plugins available in the central plugin repository. PathVisio is a freely available, open-source tool published under the Apache 2.0 license (http://www.apache.org/licenses/LICENSE-2.0). It is implemented in Java and thus runs on all major operating systems. The code repository is available at http://svn.bigcat.unimaas.nl/pathvisio. The support mailing list for users is available on https://groups.google.com/forum/#!forum/wikipathways-discuss and for developers on https://groups.google.com/forum/#!forum/wikipathways-devel.", "which uses ?", "Java", 1343.0, 1347.0], ["Background Suicide is one of the top ten leading causes of death in North America and represents a major public health burden, particularly for people with Major Depressive disorder (MD). Many studies have suggested that suicidal behavior runs in families; however, genomic loci that drive this effect remain to be identified. Methodology/Principal Findings Using subjects collected as part of STAR*D, we genotyped 189 subjects with MD with a history of a suicide attempt and 1073 subjects with Major Depressive disorder that had never attempted suicide. Copy Number Variants (CNVs) were called in Birdsuite and analyzed in PLINK. We found a set of CNVs present in the suicide attempter group that were not present in the non-attempter group, including in SNTG2 and MACROD2 \u2013 two brain-expressed genes previously linked to psychopathology; however, these results failed to reach genome-wide significance. Conclusions These data suggest potential CNVs to be investigated further in relation to suicide attempts in MD using large sample sizes.", "which uses ?", "PLINK", 640.0, 645.0], ["Background Chronic hepatitis C infection is a major public health concern, with a high burden in Sub-Saharan Africa. There is growing evidence that chronic hepatitis C virus (HCV) infection causes neurological complications. This study aimed at assessing the prevalence and factors associated with neurological manifestations in chronic hepatitis C patients. Methods Through a cross-sectional design, a semi-structured questionnaire was used to collect data from consecutive chronic HCV infected patients attending the outpatient gastroenterology unit of the Douala General Hospital (DGH). Data collection was by interview, patient record review (including HCV RNA quantification, HCV genotyping and the assessment of liver fibrosis and necroinflammatory activity), clinical examination complemented by 3 tools: Neuropathic pain diagnostic questionnaire, Brief peripheral neuropathy screen and mini mental state examination score. Data were analysed using Statistical package for social sciences version 20 for Windows. Results Of the 121 chronic hepatitis C patients (51.2% males) recruited, 54.5% (95% Confidence interval: 46.3%, 62.8%) had at least one neurological manifestation, with peripheral nervous system manifestations being more common (50.4%). 
Age \u2265 55 years (Adjusted Odds Ratio: 4.82, 95%CI: 1.02\u201318.81, p = 0.02), longer duration of illness (AOR: 1.012, 95%CI: 1.00\u20131.02, p = 0.01) and high viral load (AOR: 3.40, 95% CI: 1.20\u20139.64, p = 0.02) were significantly associated with neurological manifestations. Peripheral neuropathy was the most common neurological manifestation (49.6%), presenting mainly as sensory neuropathy (47.9%). Age \u2265 55 years (AOR: 6.25, 95%CI: 1.33\u201329.08, p = 0.02) and longer duration of illness (AOR: 1.01, 1.00\u20131.02, p = 0.01) were significantly associated with peripheral neuropathy. Conclusion Over half of the patients with chronic hepatitis C attending the DGH have a neurological manifestation, mainly presenting as sensory peripheral neuropathy. Routine screening of chronic hepatitis C patients for peripheral neuropathy is therefore necessary, with prime focus on those with older age and longer duration of illness.", "which uses ?", "Statistical Package for Social Sciences", 956.0, 995.0], ["Genome-scale models of metabolism and macromolecular expression (ME-models) explicitly compute the optimal proteome composition of a growing cell. ME-models expand upon the well-established genome-scale models of metabolism (M-models), and they enable a new fundamental understanding of cellular growth. ME-models have increased predictive capabilities and accuracy due to their inclusion of the biosynthetic costs for the machinery of life, but they come with a significant increase in model size and complexity. This challenge results in models which are both difficult to compute and challenging to understand conceptually. As a result, ME-models exist for only two organisms (Escherichia coli and Thermotoga maritima) and are still used by relatively few researchers. To address these challenges, we have developed a new software framework called COBRAme for building and simulating ME-models. It is coded in Python and built on COBRApy, a popular platform for using M-models. COBRAme streamlines computation and analysis of ME-models. It provides tools to simplify constructing and editing ME-models to enable ME-model reconstructions for new organisms. We used COBRAme to reconstruct a condensed E. coli ME-model called iJL1678b-ME. This reformulated model gives functionally identical solutions to previous E. coli ME-models while using 1/6 the number of free variables and solving in less than 10 minutes, a marked improvement over the 6 hour solve time of previous ME-model formulations. Errors in previous ME-models were also corrected leading to 52 additional genes that must be expressed in iJL1678b-ME to grow aerobically in glucose minimal in silico media. This manuscript outlines the architecture of COBRAme and demonstrates how ME-models can be created, modified, and shared most efficiently using the new software framework.", "which uses ?", "Python", 913.0, 919.0], ["Nonribosomally and ribosomally synthesized bioactive peptides constitute a source of molecules of great biomedical importance, including antibiotics such as penicillin, immunosuppressants such as cyclosporine, and cytostatics such as bleomycin. Recently, an innovative mass-spectrometry-based strategy, peptidogenomics, has been pioneered to effectively mine microbial strains for novel peptidic metabolites. 
Even though mass-spectrometric peptide detection can be performed quite fast, true high-throughput natural product discovery approaches have still been limited by the inability to rapidly match the identified tandem mass spectra to the gene clusters responsible for the biosynthesis of the corresponding compounds. With Pep2Path, we introduce a software package to fully automate the peptidogenomics approach through the rapid Bayesian probabilistic matching of mass spectra to their corresponding biosynthetic gene clusters. Detailed benchmarking of the method shows that the approach is powerful enough to correctly identify gene clusters even in data sets that consist of hundreds of genomes, which also makes it possible to match compounds from unsequenced organisms to closely related biosynthetic gene clusters in other genomes. Applying Pep2Path to a data set of compounds without known biosynthesis routes, we were able to identify candidate gene clusters for the biosynthesis of five important compounds. Notably, one of these clusters was detected in a genome from a different subphylum of Proteobacteria than that in which the molecule had first been identified. All in all, our approach paves the way towards high-throughput discovery of novel peptidic natural products. Pep2Path is freely available from http://pep2path.sourceforge.net/, implemented in Python, licensed under the GNU General Public License v3 and supported on MS Windows, Linux and Mac OS X.", "which uses ?", "Python", 1775.0, 1781.0], ["Genome-scale models of metabolism and macromolecular expression (ME-models) explicitly compute the optimal proteome composition of a growing cell. ME-models expand upon the well-established genome-scale models of metabolism (M-models), and they enable a new fundamental understanding of cellular growth. ME-models have increased predictive capabilities and accuracy due to their inclusion of the biosynthetic costs for the machinery of life, but they come with a significant increase in model size and complexity. This challenge results in models which are both difficult to compute and challenging to understand conceptually. As a result, ME-models exist for only two organisms (Escherichia coli and Thermotoga maritima) and are still used by relatively few researchers. To address these challenges, we have developed a new software framework called COBRAme for building and simulating ME-models. It is coded in Python and built on COBRApy, a popular platform for using M-models. COBRAme streamlines computation and analysis of ME-models. It provides tools to simplify constructing and editing ME-models to enable ME-model reconstructions for new organisms. We used COBRAme to reconstruct a condensed E. coli ME-model called iJL1678b-ME. This reformulated model gives functionally identical solutions to previous E. coli ME-models while using 1/6 the number of free variables and solving in less than 10 minutes, a marked improvement over the 6 hour solve time of previous ME-model formulations. Errors in previous ME-models were also corrected leading to 52 additional genes that must be expressed in iJL1678b-ME to grow aerobically in glucose minimal in silico media. 
This manuscript outlines the architecture of COBRAme and demonstrates how ME-models can be created, modified, and shared most efficiently using the new software framework.", "which uses ?", "COBRAme", 851.0, 858.0], ["Advances in computational metabolic optimization are required to realize the full potential of new in vivo metabolic engineering technologies by bridging the gap between computational design and strain development. We present Redirector, a new Flux Balance Analysis-based framework for identifying engineering targets to optimize metabolite production in complex pathways. Previous optimization frameworks have modeled metabolic alterations as directly controlling fluxes by setting particular flux bounds. Redirector develops a more biologically relevant approach, modeling metabolic alterations as changes in the balance of metabolic objectives in the system. This framework iteratively selects enzyme targets, adds the associated reaction fluxes to the metabolic objective, thereby incentivizing flux towards the production of a metabolite of interest. These adjustments to the objective act in competition with cellular growth and represent up-regulation and down-regulation of enzyme-mediated reactions. Using the iAF1260 E. coli metabolic network model for optimization of fatty acid production as a test case, Redirector generates designs with as many as 39 simultaneous and 111 unique engineering targets. These designs discover proven in vivo targets, novel supporting pathways and relevant interdependencies, many of which cannot be predicted by other methods. Redirector is available as open and free software, scalable to computational resources, and powerful enough to find all known enzyme targets for fatty acid production.", "which uses ?", "Redirector", 226.0, 236.0], ["We propose a novel method and software tool, Strawberry, for transcript reconstruction and quantification from RNA-Seq data under the guidance of genome alignment and independent of gene annotation. Strawberry consists of two modules: assembly and quantification. The novelty of Strawberry is that the two modules use different optimization frameworks but utilize the same data graph structure, which allows a highly efficient, expandable and accurate algorithm for dealing with large data. The assembly module parses aligned reads into splicing graphs, and uses network flow algorithms to select the most likely transcripts. The quantification module uses a latent class model to assign read counts from the nodes of splicing graphs to transcripts. Strawberry simultaneously estimates the transcript abundances and corrects for sequencing bias through an EM algorithm. Based on simulations, Strawberry outperforms Cufflinks and StringTie in terms of both assembly and quantification accuracies. Under the evaluation of a real data set, the estimated transcript expression by Strawberry has the highest correlation with Nanostring probe counts, an independent experimental measure for transcript expression. Availability: Strawberry is written in C++14, and is available as open source software at https://github.com/ruolin/strawberry under the MIT license.", "which uses ?", "C++", NaN, NaN], ["Objective To investigate the efficacy and tolerability of duloxetine during short-term treatment in adults with generalized anxiety disorder (GAD). 
Methods We conducted a comprehensive literature review of the PubMed, Embase, Cochrane Central Register of Controlled Trials, Web of Science, and ClinicalTrials databases for randomized controlled trials (RCTs) comparing duloxetine or duloxetine plus other antipsychotics with placebo for the treatment of GAD in adults. Outcome measures were (1) efficacy, assessed by the Hospital Anxiety and Depression Scale (HADS) anxiety subscale score, the Hamilton Rating Scale for Anxiety (HAM-A) psychic and somatic anxiety factor scores, and response and remission rates based on total scores of HAM-A; (2) tolerability, assessed by discontinuation rate due to adverse events, the incidence of treatment-emergent adverse events (TEAEs) and serious adverse events (SAEs). Review Manager 5.3 and Stata Version 12.0 software were used for all statistical analyses. Results The meta-analysis included 8 RCTs. Mean changes in the HADS anxiety subscale score [mean difference (MD) = 2.32, 95% confidence interval (CI) 1.77\u20132.88, P<0.00001] and HAM-A psychic anxiety factor score were significantly greater in patients with GAD that received duloxetine compared to those that received placebo (MD = 2.15, 95%CI 1.61\u20132.68, P<0.00001). However, there was no difference in mean change in the HAM-A somatic anxiety factor score (MD = 1.13, 95%CI 0.67\u20131.58, P<0.00001). Discontinuation rate due to AEs in the duloxetine group was significantly higher than in the placebo group [odds ratio (OR) = 2.62, 95%CI 1.35\u20135.06, P = 0.004]. The incidence of any TEAE was significantly increased in patients that received duloxetine (OR = 1.76, 95%CI 1.36\u20132.28, P<0.0001), but there was no significant difference in the incidence of SAEs (OR = 1.13, 95%CI 0.52\u20132.47, P = 0.75). Conclusion Duloxetine resulted in a greater improvement in symptoms of psychic anxiety and similar changes in symptoms of somatic anxiety compared to placebo during short-term treatment in adults with GAD, and its tolerability was acceptable.", "which uses ?", "Stata", 930.0, 935.0], ["Algorithms for comparing protein structure are frequently used for function annotation. By searching for subtle similarities among very different proteins, these algorithms can identify remote homologs with similar biological functions. In contrast, few comparison algorithms focus on specificity annotation, where the identification of subtle differences among very similar proteins can assist in finding small structural variations that create differences in binding specificity. Few specificity annotation methods consider electrostatic fields, which play a critical role in molecular recognition. To fill this gap, this paper describes VASP-E (Volumetric Analysis of Surface Properties with Electrostatics), a novel volumetric comparison tool based on the electrostatic comparison of protein-ligand and protein-protein binding sites. VASP-E exploits the central observation that three-dimensional solids can be used to fully represent and compare both electrostatic isopotentials and molecular surfaces. With this integrated representation, VASP-E is able to dissect the electrostatic environments of protein-ligand and protein-protein binding interfaces, identifying individual amino acids that have an electrostatic influence on binding specificity. VASP-E was used to examine a nonredundant subset of the serine and cysteine proteases as well as the barnase-barstar and Rap1a-raf complexes. 
Based on amino acids established by various experimental studies to have an electrostatic influence on binding specificity, VASP-E identified electrostatically influential amino acids with 100% precision and 83.3% recall. We also show that VASP-E can accurately classify closely related ligand binding cavities into groups with different binding preferences. These results suggest that VASP-E should prove a useful tool for the characterization of specific binding and the engineering of binding preferences in proteins.", "which uses ?", "VASP-E", 640.0, 646.0], ["This paper proposes a robust and real-time capable algorithm for classification of the first and second heart sounds. The classification algorithm is based on the evaluation of the envelope curve of the phonocardiogram. For the evaluation, in contrast to other studies, measurements on 12 probands were conducted in different physiological conditions. Moreover, for each measurement the auscultation point, posture and physical stress were varied. The proposed envelope-based algorithm is tested with two different methods for envelope curve extraction: the Hilbert transform and the short-time Fourier transform. The performance of the classification of the first heart sounds is evaluated by using a reference electrocardiogram. Overall, by using the Hilbert transform, the algorithm has a better performance regarding the F1-score and computational effort. The proposed algorithm achieves for the S1 classification an F1-score of up to 95.7% and on average 90.5%. The algorithm is robust against the age, BMI, posture, heart rate and auscultation point (except measurements on the back) of the subjects.", "which uses ?", "Hilbert transform", 566.0, 583.0], ["Genome-scale models of metabolism and macromolecular expression (ME-models) explicitly compute the optimal proteome composition of a growing cell. ME-models expand upon the well-established genome-scale models of metabolism (M-models), and they enable a new fundamental understanding of cellular growth. ME-models have increased predictive capabilities and accuracy due to their inclusion of the biosynthetic costs for the machinery of life, but they come with a significant increase in model size and complexity. This challenge results in models which are both difficult to compute and challenging to understand conceptually. As a result, ME-models exist for only two organisms (Escherichia coli and Thermotoga maritima) and are still used by relatively few researchers. To address these challenges, we have developed a new software framework called COBRAme for building and simulating ME-models. It is coded in Python and built on COBRApy, a popular platform for using M-models. COBRAme streamlines computation and analysis of ME-models. It provides tools to simplify constructing and editing ME-models to enable ME-model reconstructions for new organisms. We used COBRAme to reconstruct a condensed E. coli ME-model called iJL1678b-ME. This reformulated model gives functionally identical solutions to previous E. coli ME-models while using 1/6 the number of free variables and solving in less than 10 minutes, a marked improvement over the 6 hour solve time of previous ME-model formulations. Errors in previous ME-models were also corrected leading to 52 additional genes that must be expressed in iJL1678b-ME to grow aerobically in glucose minimal in silico media. 
This manuscript outlines the architecture of COBRAme and demonstrates how ME-models can be created, modified, and shared most efficiently using the new software framework.", "which uses ?", "COBRApy", 933.0, 940.0], ["Objective To enable early prediction of strong traction force vacuum extraction. Design Observational cohort. Setting Karolinska University Hospital delivery ward, tertiary unit. Population and sample size Term mid and low metal cup vacuum extraction deliveries June 2012\u2014February 2015, n = 277. Methods Traction forces during vacuum extraction were collected prospectively using an intelligent handle. Levels of traction force were analysed pairwise by subjective category strong versus non-strong extraction, in order to define an objective predictive value for strong extraction. Statistical analysis A logistic regression model based on the shrinkage and selection method lasso was used to identify the predictive capacity of the different traction force variables. Predictors Total (time force integral, Newton minutes) and peak traction (Newton) force in the first to third pull; difference in traction force between the second and first pull, as well as the third and first pull respectively. Accumulated traction force at the second and third pull. Outcome Subjectively categorized extraction as strong versus non-strong. Results The prevalence of strong extraction was 26%. Prediction including the first and second pull: AUC 0.85 (CI 0.80\u20130.90); specificity 0.76; sensitivity 0.87; PPV 0.56; NPV 0.94. Prediction including the first to third pull: AUC 0.86 (CI 0.80\u20130.91); specificity 0.87; sensitivity 0.70; PPV 0.65; NPV 0.89. Conclusion Traction force measurement during vacuum extraction can help exclude strong category extraction from the second pull. From the third pull, two-thirds of strong extractions can be predicted.", "which uses ?", "Statistica", NaN, NaN], ["The increasing usage of social media for conversations, together with the availability of its data to researchers, provides an opportunity to study human conversations on a large scale. Twitter, which allows its users to post messages of up to a limit of 140 characters, is one such social media. Previous studies of utterances in books, movies and Twitter have shown that most of these utterances, when transcribed, are much shorter than 140 characters. Furthermore, the median length of Twitter messages was found to vary across US states. Here, we investigate whether the length of Twitter messages varies across different regions in the UK. We find that the median message length, depending on grouping, can differ by up to 2 characters.", "which uses ?", "Twitter", 186.0, 193.0], ["Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements. These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. 
Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network-based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. AVAILABILITY: QuIN\u2019s web server is available at http://quin.jax.org. QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and a MySQL database, and the source code is available under the GPLv3 license on GitHub: https://github.com/UcarLab/QuIN/.", "which uses ?", "JavaScript", 1593.0, 1603.0], ["Objective To investigate the efficacy and tolerability of duloxetine during short-term treatment in adults with generalized anxiety disorder (GAD). Methods We conducted a comprehensive literature review of the PubMed, Embase, Cochrane Central Register of Controlled Trials, Web of Science, and ClinicalTrials databases for randomized controlled trials (RCTs) comparing duloxetine or duloxetine plus other antipsychotics with placebo for the treatment of GAD in adults. Outcome measures were (1) efficacy, assessed by the Hospital Anxiety and Depression Scale (HADS) anxiety subscale score, the Hamilton Rating Scale for Anxiety (HAM-A) psychic and somatic anxiety factor scores, and response and remission rates based on total scores of HAM-A; (2) tolerability, assessed by discontinuation rate due to adverse events, the incidence of treatment-emergent adverse events (TEAEs) and serious adverse events (SAEs). Review Manager 5.3 and Stata Version 12.0 software were used for all statistical analyses. Results The meta-analysis included 8 RCTs. Mean changes in the HADS anxiety subscale score [mean difference (MD) = 2.32, 95% confidence interval (CI) 1.77\u20132.88, P<0.00001] and HAM-A psychic anxiety factor score were significantly greater in patients with GAD that received duloxetine compared to those that received placebo (MD = 2.15, 95%CI 1.61\u20132.68, P<0.00001). However, there was no difference in mean change in the HAM-A somatic anxiety factor score (MD = 1.13, 95%CI 0.67\u20131.58, P<0.00001). Discontinuation rate due to AEs in the duloxetine group was significantly higher than in the placebo group [odds ratio (OR) = 2.62, 95%CI 1.35\u20135.06, P = 0.004]. The incidence of any TEAE was significantly increased in patients that received duloxetine (OR = 1.76, 95%CI 1.36\u20132.28, P<0.0001), but there was no significant difference in the incidence of SAEs (OR = 1.13, 95%CI 0.52\u20132.47, P = 0.75). Conclusion Duloxetine resulted in a greater improvement in symptoms of psychic anxiety and similar changes in symptoms of somatic anxiety compared to placebo during short-term treatment in adults with GAD, and its tolerability was acceptable.", "which uses ?", "Review Manager", 907.0, 921.0], ["The quantification of cell shape, cell migration, and cell rearrangements is important for addressing classical questions in developmental biology such as patterning and tissue morphogenesis. 
Time-lapse microscopic imaging of transgenic embryos expressing fluorescent reporters is the method of choice for tracking morphogenetic changes and establishing cell lineages and fate maps in vivo. However, the manual steps involved in curating thousands of putative cell segmentations have been a major bottleneck in the application of these technologies, especially for cell membranes. Segmentation of cell membranes, while more difficult than nuclear segmentation, is necessary for quantifying the relations between changes in cell morphology and morphogenesis. We present a novel and fully automated method to first reconstruct membrane signals and then segment out cells from 3D membrane images even in dense tissues. The approach has three stages: 1) detection of local membrane planes, 2) voting to fill structural gaps, and 3) region segmentation. We demonstrate the superior performance of the algorithms quantitatively on time-lapse confocal and two-photon images of zebrafish neuroectoderm and paraxial mesoderm by comparing its results with those derived from human inspection. We also compared with synthetic microscopic images generated by simulating the process of imaging with fluorescent reporters under varying conditions of noise. Both the over-segmentation and under-segmentation percentages of our method are around 5%. The volume overlap of individual cells, compared to expert manual segmentation, is consistently over 84%. By using our software (ACME) to study somite formation, we were able to segment touching cells with high accuracy and reliably quantify changes in morphogenetic parameters such as cell shape and size, and the arrangement of epithelial and mesenchymal cells. Our software has been developed and tested on Windows, Mac, and Linux platforms and is available publicly under an open source BSD license (https://github.com/krm15/ACME).", "which uses ?", "Linux", 1959.0, 1964.0], ["Existing methods for identifying structural variants (SVs) from short read datasets are inaccurate. This complicates disease-gene identification and efforts to understand the consequences of genetic variation. In response, we have created Wham (Whole-genome Alignment Metrics) to provide a single, integrated framework for both structural variant calling and association testing, thereby bypassing many of the difficulties that currently frustrate attempts to employ SVs in association testing. Here we describe Wham, benchmark it against three other widely used SV identification tools\u2013Lumpy, Delly and SoftSearch\u2013and demonstrate Wham\u2019s ability to identify and associate SVs with phenotypes using data from humans, domestic pigeons, and vaccinia virus. Wham and all associated software are covered under the MIT License and can be freely downloaded from github (https://github.com/zeeev/wham), with documentation on a wiki (http://zeeev.github.io/wham/). For community support please post questions to https://www.biostars.org/.", "which uses ?", "Wham", 239.0, 243.0], ["Leukemia is a fatal cancer and has two main types: Acute and chronic. Each type has two more subtypes: Lymphoid and myeloid. Hence, in total, there are four subtypes of leukemia. This study proposes a new approach for diagnosis of all subtypes of leukemia from microscopic blood cell images using convolutional neural networks (CNN), which requires a large training data set. Therefore, we also investigated the effects of data augmentation for increasing the number of training samples synthetically. 
We used two publicly available leukemia data sources: ALL-IDB and ASH Image Bank. Next, we applied seven different image transformation techniques as data augmentation. We designed a CNN architecture capable of recognizing all subtypes of leukemia. Besides, we also explored other well-known machine learning algorithms such as naive Bayes, support vector machine, k-nearest neighbor, and decision tree. To evaluate our approach, we set up a set of experiments and used 5-fold cross-validation. The results we obtained from experiments showed that our CNN model achieves 88.25% and 81.74% accuracy in leukemia versus healthy and multi-class classification of all subtypes, respectively. Finally, we also showed that the CNN model has a better performance than other well-known machine learning algorithms.", "which Used models ?", "CNN", 328.0, 331.0], ["Bipolar disorder, also known as manic depression, is a brain disorder that affects the brain structure of a patient. It results in extreme mood swings, severe states of depression, and overexcitement simultaneously. It is estimated that roughly 3% of the population of the United States (about 5.3 million adults) suffers from bipolar disorder. Recent research efforts like the Twin studies have demonstrated a high heritability factor for the disorder, making genomics a viable alternative for detecting and treating bipolar disorder, in addition to the conventional lengthy and costly postsymptom clinical diagnosis. Motivated by this study, leveraging several emerging deep learning algorithms, we design an end\u2010to\u2010end deep learning architecture (called DeepBipolar) to predict bipolar disorder based on limited genomic data. DeepBipolar adopts the Deep Convolutional Neural Network (DCNN) architecture that automatically extracts features from genotype information to predict the bipolar phenotype. We participated in the Critical Assessment of Genome Interpretation (CAGI) bipolar disorder challenge and DeepBipolar was considered the most successful by the independent assessor. In this work, we thoroughly evaluate the performance of DeepBipolar and analyze the type of signals we believe could have affected the classifier in distinguishing the case samples from the control set.", "which Used models ?", "CNN", NaN, NaN], ["Abstract This paper attempts to provide methods to estimate the real scenario of the novel coronavirus pandemic crisis in Brazil and the states of Sao Paulo, Pernambuco, Espirito Santo, Amazonas and Distrito Federal. By the use of a SEIRD mathematical model with age division, we predict the infection and death curve, stating the peak date for Brazil and these states. We also carry out a prediction for the ICU demand in these states for a visualization of the size of a possible collapse of the local health system. By the end, we establish some future scenarios including the end of social isolation and the introduction of vaccines and efficient medicine against the virus.", "which Used models ?", "SEIRD", 233.0, 238.0], ["This paper presents a novel and effective audio-based method for depression classification. It focuses on two important issues, \emph{i.e.} data representation and sample imbalance, which are not well addressed in the literature. 
For the former one, in contrast to traditional shallow hand-crafted features, we propose a deep model, namely DepAudioNet, to encode the depression-related characteristics in the vocal channel, combining Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) to deliver a more comprehensive audio representation. For the latter one, we introduce a random sampling strategy in the model training phase to balance the positive and negative samples, which largely alleviates the bias caused by uneven sample distribution. Evaluations are carried out on the DAIC-WOZ dataset for the Depression Classification Sub-challenge (DCC) at the 2016 Audio-Visual Emotion Challenge (AVEC), and the experimental results achieved clearly demonstrate the effectiveness of the proposed approach.", "which Used models ?", "CNN", 459.0, 462.0], ["In early stages, patients with bipolar disorder are often diagnosed as having unipolar depression in mood disorder diagnosis. Because the long-term monitoring is limited by the delayed detection of mood disorder, an accurate and one-time diagnosis is desirable to avoid delay in appropriate treatment due to misdiagnosis. In this paper, an elicitation-based approach is proposed for realizing a one-time diagnosis by using responses elicited from patients by having them watch six emotion-eliciting videos. After watching each video clip, the conversations, including patient facial expressions and speech responses, between the participant and the clinician conducting the interview were recorded. Next, the hierarchical spectral clustering algorithm was employed to adapt the facial expression and speech response features by using the extended Cohn\u2013Kanade and eNTERFACE databases. A denoising autoencoder was further applied to extract the bottleneck features of the adapted data. Then, the facial and speech bottleneck features were input into support vector machines to obtain speech emotion profiles (EPs) and the modulation spectrum (MS) of the facial action unit sequence for each elicited response. Finally, a cell-coupled long short-term memory (LSTM) network with an $L$-skip fusion mechanism was proposed to model the temporal information of all elicited responses and to loosely fuse the EPs and the MS for conducting mood disorder detection. The experimental results revealed that the cell-coupled LSTM with the $L$-skip fusion mechanism has promising advantages and efficacy for mood disorder detection.", "which Used models ?", "LSTM", 1256.0, 1260.0], ["This work presents a novel method for learning a model that can diagnose Attention Deficit Hyperactivity Disorder (ADHD), as well as Autism, using structural texture and functional connectivity features obtained from 3-dimensional structural magnetic resonance imaging (MRI) and 4-dimensional resting-state functional magnetic resonance imaging (fMRI) scans of subjects. We explore a series of three learners: (1) The LeFMS learner first extracts features from the structural MRI images using the texture-based filters produced by a sparse autoencoder. These filters are then convolved with the original MRI image using an unsupervised convolutional network. The resulting features are used as input to a linear support vector machine (SVM) classifier. (2) The LeFMF learner produces a diagnostic model by first computing spatial non-stationary independent components of the fMRI scans, which it uses to decompose each subject\u2019s fMRI scan into the time courses of these common spatial components. 
These features can then be used with a learner by themselves or in combination with other features to produce the model. Regardless of which approach is used, the final set of features is input to a linear support vector machine (SVM) classifier. (3) Finally, the overall LeFMSF learner uses the combined features obtained from the two feature extraction processes in (1) and (2) above as input to an SVM classifier, achieving an accuracy of 0.673 on the ADHD-200 holdout data and 0.643 on the ABIDE holdout data. Both of these results, obtained with the same LeFMSF framework, are the best known over all hold-out accuracies on these datasets when only using imaging data\u2014exceeding previously-published results by 0.012 for ADHD and 0.042 for Autism. Our results show that combining multi-modal features can yield good classification accuracy for diagnosis of ADHD and Autism, which is an important step towards computer-aided diagnosis of these psychiatric diseases and perhaps others as well.", "which Used models ?", "Autoencoder", 540.0, 551.0], ["We propose DeepBreath, a deep learning model which automatically recognises people's psychological stress level (mental overload) from their breathing patterns. Using a low-cost thermal camera, we track a person's breathing patterns as temperature changes around his/her nostril. The paper's technical contribution is threefold. First of all, instead of creating handcrafted features to capture aspects of the breathing patterns, we transform the uni-dimensional breathing signals into two-dimensional respiration variability spectrogram (RVS) sequences. The spectrograms easily capture the complexity of the breathing dynamics. Second, a spatial pattern analysis based on a deep Convolutional Neural Network (CNN) is directly applied to the spectrogram sequences without the need of hand-crafting features. Finally, a data augmentation technique, inspired by solutions for over-fitting problems in deep learning, is applied to allow the CNN to learn with a small-scale dataset from short-term measurements (e.g., up to a few hours). The model is trained and tested with data collected from people exposed to two types of cognitive tasks (Stroop Colour Word Test, Mental Computation test) with sessions of different difficulty levels. Using normalised self-report as ground truth, the CNN reaches 84.59% accuracy in discriminating between two levels of stress and 56.52% in discriminating between three levels. In addition, the CNN outperformed powerful shallow learning methods based on a single-layer neural network. Finally, the dataset of labelled thermal images will be open to the community.", "which Used models ?", "CNN", 710.0, 713.0], ["Leukemia is a fatal cancer and has two main types: Acute and chronic. Each type has two more subtypes: Lymphoid and myeloid. Hence, in total, there are four subtypes of leukemia. This study proposes a new approach for diagnosis of all subtypes of leukemia from microscopic blood cell images using convolutional neural networks (CNN), which requires a large training data set. Therefore, we also investigated the effects of data augmentation for increasing the number of training samples synthetically. We used two publicly available leukemia data sources: ALL-IDB and ASH Image Bank. Next, we applied seven different image transformation techniques as data augmentation. We designed a CNN architecture capable of recognizing all subtypes of leukemia. 
Besides, we also explored other well-known machine learning algorithms such as naive Bayes, support vector machine, k-nearest neighbor, and decision tree. To evaluate our approach, we set up a set of experiments and used 5-fold cross-validation. The results we obtained from experiments showed that our CNN model achieves 88.25% and 81.74% accuracy in leukemia versus healthy and multi-class classification of all subtypes, respectively. Finally, we also showed that the CNN model has a better performance than other well-known machine learning algorithms.", "which Used models ?", "support vector machine", 842.0, 864.0], ["In mood disorder diagnosis, bipolar disorder (BD) patients are often misdiagnosed as unipolar depression (UD) on initial presentation. It is crucial to establish an accurate distinction between BD and UD to make a correct and early diagnosis, leading to improvements in treatment and course of illness. To deal with this misdiagnosis problem, in this study, we experimented on eliciting subjects' emotions by watching six eliciting emotional video clips. After watching each video clip, their speech responses were collected when they were interviewing with a clinician. In mood disorder detection, speech emotions play an important role in detecting manic or depressive symptoms. Therefore, speech emotion profiles (EP) are obtained by using the support vector machine (SVM), which are built via speech features adapted from selected databases using a denoising autoencoder-based method. Finally, a Long Short-Term Memory (LSTM) recurrent neural network is employed to characterize the temporal information of the EPs with respect to six emotional videos. Comparative experiments clearly show the promising advantage and efficacy of the LSTM-based approach for mood disorder detection.", "which Used models ?", "Autoencoder", 857.0, 868.0], ["Leukemia is a fatal cancer and has two main types: Acute and chronic. Each type has two more subtypes: Lymphoid and myeloid. Hence, in total, there are four subtypes of leukemia. This study proposes a new approach for diagnosis of all subtypes of leukemia from microscopic blood cell images using convolutional neural networks (CNN), which requires a large training data set. Therefore, we also investigated the effects of data augmentation for increasing the number of training samples synthetically. We used two publicly available leukemia data sources: ALL-IDB and ASH Image Bank. Next, we applied seven different image transformation techniques as data augmentation. We designed a CNN architecture capable of recognizing all subtypes of leukemia. Besides, we also explored other well-known machine learning algorithms such as naive Bayes, support vector machine, k-nearest neighbor, and decision tree. To evaluate our approach, we set up a set of experiments and used 5-fold cross-validation. The results we obtained from experiments showed that our CNN model achieves 88.25% and 81.74% accuracy in leukemia versus healthy and multi-class classification of all subtypes, respectively. Finally, we also showed that the CNN model has a better performance than other well-known machine learning algorithms.", "which Used models ?", "k-nearest neighbor", 866.0, 884.0], ["This paper presents a novel and effective audio-based method for depression classification. It focuses on two important issues, \emph{i.e.} data representation and sample imbalance, which are not well addressed in the literature. 
For the former one, in contrast to traditional shallow hand-crafted features, we propose a deep model, namely DepAudioNet, to encode the depression-related characteristics in the vocal channel, combining Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) to deliver a more comprehensive audio representation. For the latter one, we introduce a random sampling strategy in the model training phase to balance the positive and negative samples, which largely alleviates the bias caused by uneven sample distribution. Evaluations are carried out on the DAIC-WOZ dataset for the Depression Classification Sub-challenge (DCC) at the 2016 Audio-Visual Emotion Challenge (AVEC), and the experimental results achieved clearly demonstrate the effectiveness of the proposed approach.", "which Used models ?", "LSTM", 492.0, 496.0], ["Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scattered learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can be tried in other domains of medical diagnostic applications.", "which Used models ?", "VGG-16", 1220.0, 1226.0], ["In clinical diagnosis of mood disorder, depression is one of the most common psychiatric disorders. There are two major types of mood disorders: major depressive disorder (MDD) and bipolar disorder (BPD). A large portion of BPD are misdiagnosed as MDD in the diagnosis of mood disorders. Short-term detection, which could be used in early detection and intervention, is thus desirable. This study investigates microscopic facial expression changes for the subjects with MDD, BPD and control group (CG), when elicited by emotional video clips. This study uses eight basic orientations of motion vector (MV) to characterize the subtle changes in microscopic facial expression. Then, wavelet decomposition is applied to extract entropy and energy of different frequency bands. Next, an autoencoder neural network is adopted to extract the bottleneck features for dimensionality reduction. Finally, the long short-term memory (LSTM) is employed for modeling the long-term variation among different mood disorder types. 
For evaluation of the proposed method, the elicited data from 36 subjects (12 for each of MDD, BPD and CG) were considered in the K-fold (K=12) cross validation experiments, and the performance for distinguishing among MDD, BPD and CG achieved 67.7% accuracy.", "which Used models ?", "LSTM", 922.0, 926.0], ["In this paper, we aim to develop a deep learning based automatic Attention Deficit Hyperactive Disorder (ADHD) diagnosis algorithm using resting state functional magnetic resonance imaging (rs-fMRI) scans. However, relative to millions of parameters in deep neural networks (DNN), the number of fMRI samples is still limited to learn discriminative features from the raw data. In light of this, we first encode our prior knowledge on 3D features voxel-wisely, including Regional Homogeneity (ReHo), fractional Amplitude of Low Frequency Fluctuations (fALFF) and Voxel-Mirrored Homotopic Connectivity (VMHC), and take these 3D images as the input to the DNN. Inspired by the way that radiologists examine brain images, we further investigate a novel 3D convolutional neural network (CNN) architecture to learn 3D local patterns which may boost the diagnosis accuracy. Investigation on the hold-out testing data of the ADHD-200 Global competition demonstrates that the proposed 3D CNN approach yields superior performances when compared to the reported classifiers in the literature, even with less training samples.", "which Used models ?", "3D CNN", 976.0, 982.0], ["Leukemia is a fatal cancer and has two main types: Acute and chronic. Each type has two more subtypes: Lymphoid and myeloid. Hence, in total, there are four subtypes of leukemia. This study proposes a new approach for diagnosis of all subtypes of leukemia from microscopic blood cell images using convolutional neural networks (CNN), which requires a large training data set. Therefore, we also investigated the effects of data augmentation for an increasing number of training samples synthetically. We used two publicly available leukemia data sources: ALL-IDB and ASH Image Bank. Next, we applied seven different image transformation techniques as data augmentation. We designed a CNN architecture capable of recognizing all subtypes of leukemia. Besides, we also explored other well-known machine learning algorithms such as naive Bayes, support vector machine, k-nearest neighbor, and decision tree. To evaluate our approach, we set up a set of experiments and used 5-fold cross-validation. The results we obtained from experiments showed that our CNN model performance has 88.25% and 81.74% accuracy, in leukemia versus healthy and multi-class classification of all subtypes, respectively. Finally, we also showed that the CNN model has a better performance than other well-known machine learning algorithms.", "which Used models ?", "decision tree", 890.0, 903.0], ["Depression is a typical mood disorder, which affects people in mental and even physical problems. People who suffer depression always behave abnormal in visual behavior and the voice. In this paper, an audio visual based multimodal depression scale prediction system is proposed. Firstly, features are extracted from video and audio are fused in feature level to represent the audio visual behavior. Secondly, long short memory recurrent neural network (LSTM-RNN) is utilized to encode the dynamic temporal information of the abnormal audio visual behavior. Thirdly, emotion information is utilized by multi-task learning to boost the performance further. 
The proposed approach is evaluated on the Audio-Visual Emotion Challenge (AVEC2014) dataset. Experimental results show that dimensional emotion recognition helps depression scale prediction.", "which Used models ?", "LSTM", 454.0, 458.0], ["In mood disorder diagnosis, bipolar disorder (BD) patients are often misdiagnosed as unipolar depression (UD) on initial presentation. It is crucial to establish an accurate distinction between BD and UD to make a correct and early diagnosis, leading to improvements in treatment and course of illness. To deal with this misdiagnosis problem, in this study, we experimented on eliciting subjects' emotions by watching six eliciting emotional video clips. After watching each video clip, their speech responses were collected when they were interviewing with a clinician. In mood disorder detection, speech emotions play an important role to detect manic or depressive symptoms. Therefore, speech emotion profiles (EP) are obtained by using the support vector machine (SVM) which are built via speech features adapted from selected databases using a denoising autoencoder-based method. Finally, a Long Short-Term Memory (LSTM) recurrent neural network is employed to characterize the temporal information of the EPs with respect to six emotional videos. Comparative experiments clearly show the promising advantage and efficacy of the LSTM-based approach for mood disorder detection.", "which Used models ?", "LSTM", 918.0, 922.0], ["Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.", "which Used models ?", "InceptionResNet-V2", 1257.0, 1275.0], ["In early stages, patients with bipolar disorder are often diagnosed as having unipolar depression in mood disorder diagnosis. Because the long-term monitoring is limited by the delayed detection of mood disorder, an accurate and one-time diagnosis is desirable to avoid delay in appropriate treatment due to misdiagnosis. In this paper, an elicitation-based approach is proposed for realizing a one-time diagnosis by using responses elicited from patients by having them watch six emotion-eliciting videos. 
After watching each video clip, the conversations, including patient facial expressions and speech responses, between the participant and the clinician conducting the interview were recorded. Next, the hierarchical spectral clustering algorithm was employed to adapt the facial expression and speech response features by using the extended Cohn\u2013Kanade and eNTERFACE databases. A denoising autoencoder was further applied to extract the bottleneck features of the adapted data. Then, the facial and speech bottleneck features were input into support vector machines to obtain speech emotion profiles (EPs) and the modulation spectrum (MS) of the facial action unit sequence for each elicited response. Finally, a cell-coupled long short-term memory (LSTM) network with an L-skip fusion mechanism was proposed to model the temporal information of all elicited responses and to loosely fuse the EPs and the MS for conducting mood disorder detection. The experimental results revealed that the cell-coupled LSTM with the L-skip fusion mechanism has promising advantages and efficacy for mood disorder detection.", "which Used models ?", "Autoencoder", 896.0, 907.0], ["In this paper, we propose an audio visual multimodal depression recognition framework composed of deep convolutional neural network (DCNN) and deep neural network (DNN) models. For each modality, corresponding feature descriptors are input into a DCNN to learn high-level global features with compact dynamic information, which are then fed into a DNN to predict the PHQ-8 score. For multi-modal depression recognition, the predicted PHQ-8 scores from each modality are integrated in a DNN for the final prediction. In addition, we propose the Histogram of Displacement Range as a novel global visual descriptor to quantify the range and speed of the facial landmarks' displacements. Experiments have been carried out on the Distress Analysis Interview Corpus-Wizard of Oz (DAIC-WOZ) dataset for the Depression Sub-challenge of the Audio-Visual Emotion Challenge (AVEC 2016), results show that the proposed multi-modal depression recognition framework obtains very promising results on both the development set and test set, which outperforms the state-of-the-art results.", "which Used models ?", "CNN", NaN, NaN], ["In clinical diagnosis of mood disorder, depression is one of the most common psychiatric disorders. There are two major types of mood disorders: major depressive disorder (MDD) and bipolar disorder (BPD). A large portion of BPD are misdiagnosed as MDD in the diagnostic of mood disorders. Short-term detection which could be used in early detection and intervention is thus desirable. This study investigates microscopic facial expression changes for the subjects with MDD, BPD and control group (CG), when elicited by emotional video clips. This study uses eight basic orientations of motion vector (MV) to characterize the subtle changes in microscopic facial expression. Then, wavelet decomposition is applied to extract entropy and energy of different frequency bands. Next, an autoencoder neural network is adopted to extract the bottleneck features for dimensionality reduction. Finally, the long short term memory (LSTM) is employed for modeling the long-term variation among different mood disorders types. 
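Several of the mood-disorder entries above use a denoising autoencoder to distill low-dimensional bottleneck features before classification. The following PyTorch sketch shows that general step; the dimensions, noise level and training schedule are illustrative assumptions, not the authors' exact setup.

# Sketch of denoising-autoencoder bottleneck extraction: corrupt the
# input with noise, train to reconstruct the clean vector, then keep the
# low-dimensional code as the new feature representation.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_in=384, n_bottleneck=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(),
                                     nn.Linear(128, n_bottleneck))
        self.decoder = nn.Sequential(nn.Linear(n_bottleneck, 128), nn.ReLU(),
                                     nn.Linear(128, n_in))

    def forward(self, x):
        noisy = x + 0.1 * torch.randn_like(x)   # corrupt the input
        code = self.encoder(noisy)              # bottleneck features
        return self.decoder(code), code

model = DenoisingAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
features = torch.randn(256, 384)                # stand-in speech/face features
for _ in range(100):                            # reconstruction training loop
    recon, _ = model(features)
    loss = nn.functional.mse_loss(recon, features)
    opt.zero_grad()
    loss.backward()
    opt.step()
_, bottleneck = model(features)                 # use `bottleneck` downstream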
For evaluation of the proposed method, the elicited data from 36 subjects (12 for each of MDD, BPD and CG) were considered in the K-fold (K=12) cross validation experiments, and the performance for distinguishing among MDD, BPD and CG achieved 67.7% accuracy.", "which Used models ?", "Autoencoder", 782.0, 793.0], ["Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.", "which Used models ?", "Deep CNN", NaN, NaN], ["Background and objective Efficiently capturing the severity of positive valence symptoms could aid in risk stratification for adverse outcomes among patients with psychiatric disorders and identify optimal treatment strategies for patient subgroups. Motivated by the success of convolutional neural networks (CNNs) in classification tasks, we studied the application of various CNN architectures and their performance in predicting the severity of positive valence symptoms in patients with psychiatric disorders based on initial psychiatric evaluation records. Methods Psychiatric evaluation records contain unstructured text and semi-structured data such as question\u2013answer pairs. For a given record, we tokenise and normalise the semi-structured content. Pre-processed tokenised words are represented as one-hot encoded word vectors. We then apply different configurations of convolutional and max pooling layers to automatically learn important features from various word representations. We conducted a series of experiments to explore the effect of different CNN architectures on the classification of psychiatric records. Results Our best CNN model achieved a mean absolute error (MAE) of 0.539 and a normalized MAE of 0.785 on the test dataset, which is comparable to the other well-known text classification algorithms studied in this work. Our results also suggest that the normalisation step has a great impact on the performance of the developed models. Conclusions We demonstrate that normalisation of the semi-structured contents can improve the MAE among all CNN configurations. 
Without advanced feature engineering, CNN-based approaches can provide a comparable solution for classifying positive valence symptom severity in initial psychiatric evaluation records. Although word embedding is well known for its ability to capture relatively low-dimensional similarity between words, our experimental results show that pre-trained embeddings do not improve the classification performance. This phenomenon may be due to the inability of word embeddings to capture problem specific contextual semantic information implying the quality of the employing embedding is critical for obtaining an accurate CNN model.", "which Used models ?", "CNN", 378.0, 381.0], ["It is of significant importance to detect and manage stress before it turns into severe problems. However, existing stress detection methods usually rely on psychological scales or physiological devices, making the detection complicated and costly. In this paper, we explore to automatically detect individuals' psychological stress via social media. Employing real online micro-blog data, we first investigate the correlations between users' stress and their tweeting content, social engagement and behavior patterns. Then we define two types of stress-related attributes: 1) low-level content attributes from a single tweet, including text, images and social interactions; 2) user-scope statistical attributes through their weekly micro-blog postings, leveraging information of tweeting time, tweeting types and linguistic styles. To combine content attributes with statistical attributes, we further design a convolutional neural network (CNN) with cross autoencoders to generate user-scope content attributes from low-level content attributes. Finally, we propose a deep neural network (DNN) model to incorporate the two types of user-scope attributes to detect users' psychological stress. We test the trained model on four different datasets from major micro-blog platforms including Sina Weibo, Tencent Weibo and Twitter. Experimental results show that the proposed model is effective and efficient on detecting psychological stress from micro-blog data. We believe our model would be useful in developing stress detection tools for mental health agencies and individuals.", "which Used models ?", "CNN", 942.0, 945.0], ["Software-intensive systems often consist of cooperating reactive components. In mobile and reconfigurable systems, their topology changes at run-time, which influences how the components must cooperate. The Scenario Modeling Language (SML) offers a formal approach for specifying the reactive behavior such systems that aligns with how humans conceive and communicate behavioral requirements. Simulation and formal checks can find specification flaws early. We present a framework for the Scenario-based Programming (SBP) that reflects the concepts of SML in Java and makes the scenario modeling approach available for programming. SBP code can also be generated from SML and extended with platform-specific code, thus streamlining the transition from design to implementation. As an example serves a car-to-x communication system. Demo video and artifact: http://scenariotools.org/esecfse-2017-tool-demo/", "which model ?", "Scenario Modeling", 207.0, 224.0], ["We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. 
Our approach extends BERT by (1) masking contiguous random spans, rather than random tokens, and (2) training the span boundary representations to predict the entire content of the masked span, without relying on the individual token representations within it. SpanBERT consistently outperforms BERT and our better-tuned baselines, with substantial gains on span selection tasks such as question answering and coreference resolution. In particular, with the same training data and model size as BERT-large, our single model obtains 94.6% and 88.7% F1 on SQuAD 1.1 and 2.0 respectively. We also achieve a new state of the art on the OntoNotes coreference resolution task (79.6% F1), strong performance on the TACRED relation extraction benchmark, and even gains on GLUE.", "which model ?", "SpanBERT", 11.0, 19.0], ["We introduce the first end-to-end coreference resolution model and show that it significantly outperforms all previous work without using a syntactic parser or hand-engineered mention detector. The key idea is to directly consider all spans in a document as potential mentions and learn distributions over possible antecedents for each. The model computes span embeddings that combine context-dependent boundary representations with a head-finding attention mechanism. It is trained to maximize the marginal likelihood of gold antecedent spans from coreference clusters and is factored to enable aggressive pruning of potential mentions. Experiments demonstrate state-of-the-art performance, with a gain of 1.5 F1 on the OntoNotes benchmark and by 3.1 F1 using a 5-model ensemble, despite the fact that this is the first approach to be successfully trained with no external resources.", "which model ?", "end-to-end coreference resolution model", 23.0, 62.0], ["SUMMARY We present a part-of-speech tagger that achieves over 97% accuracy on MEDLINE citations. AVAILABILITY Software, documentation and a corpus of 5700 manually tagged sentences are available at ftp://ftp.ncbi.nlm.nih.gov/pub/lsmith/MedPost/medpost.tar.gz", "which model ?", "MedPost", 236.0, 243.0], ["Abstract Background The task of recognizing and identifying species names in biomedical literature has recently been regarded as critical for a number of applications in text and data mining, including gene name recognition, species-specific document retrieval, and semantic enrichment of biomedical articles. Results In this paper we describe an open-source species name recognition and normalization software system, LINNAEUS, and evaluate its performance relative to several automatically generated biomedical corpora, as well as a novel corpus of full-text documents manually annotated for species mentions. LINNAEUS uses a dictionary-based approach (implemented as an efficient deterministic finite-state automaton) to identify species names and a set of heuristics to resolve ambiguous mentions. When compared against our manually annotated corpus, LINNAEUS performs with 94% recall and 97% precision at the mention level, and 98% recall and 90% precision at the document level. Our system successfully solves the problem of disambiguating uncertain species mentions, with 97% of all mentions in PubMed Central full-text documents resolved to unambiguous NCBI taxonomy identifiers. Conclusions LINNAEUS is an open source, stand-alone software system capable of recognizing and normalizing species name mentions with speed and accuracy, and can therefore be integrated into a range of bioinformatics and text-mining applications. 
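The SpanBERT entry above hinges on masking contiguous random spans rather than individual tokens. A toy NumPy illustration of such span masking follows; the 15% budget, the geometric length sampling and the [MASK] symbol follow common convention and are assumptions here, not a reproduction of the paper's pre-training code.

# Toy span masking: hide contiguous runs of tokens (instead of
# independent random tokens) until roughly `budget` of them are masked.
import numpy as np

def mask_spans(tokens, budget=0.15, p=0.2, max_len=10, mask="[MASK]"):
    tokens = list(tokens)
    n_to_mask = max(1, int(len(tokens) * budget))
    masked = 0
    while masked < n_to_mask:
        length = min(int(np.random.geometric(p)), max_len)   # span length
        start = np.random.randint(0, max(1, len(tokens) - length + 1))
        for i in range(start, min(start + length, len(tokens))):
            if tokens[i] != mask:                            # count new masks only
                tokens[i] = mask
                masked += 1
    return tokens

print(mask_spans("span level masking hides contiguous runs of tokens "
                 "instead of single positions".split()))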
The software and manually annotated corpus can be downloaded freely at http://linnaeus.sourceforge.net/.", "which model ?", "LINNAEUS", 419.0, 427.0], ["This paper presents the SCENARIOTOOLS solution for developing a cleaning robot system, an instance of the rover problem of the MDE Tools Challenge 2017. We present an MDE process that consists of (1) the modeling of the system behavior as a scenario-based assume-guarantee specification with SML (Scenario Modeling Language), (2) the formal realizability checking and verification of the specification, (3) the generation of SBP (Scenario-Based Programming) Java code from the SML specification, and, finally, (4) adding platform-specific code to connect specification-level events with platform-level sensor- and actuator-events. The resulting code can be executed on a RaspberryPi-based robot. The approach is suited for developing reactive systems with multiple cooperating components. Its strength is that the scenario-based modeling corresponds closely to how humans conceive and communicate behavioral requirements. SML in particular supports the modeling of environment assumptions and dynamic component structures. The formal checks ensure that the system satisfies its specification.", "which model ?", "Scenario Modeling", 297.0, 314.0], ["We describe ParsCit, a freely available, open-source implementation of a reference string parsing package. At the core of ParsCit is a trained conditional random field (CRF) model used to label the token sequences in the reference string. A heuristic model wraps this core with added functionality to identify reference strings from a plain text file, and to retrieve the citation contexts. The package comes with utilities to run it as a web service or as a standalone utility. We compare ParsCit on three distinct reference string datasets and show that it compares well with other previously published work.", "which model ?", "ParsCit", 12.0, 19.0], ["Abstract Background The task of recognizing and identifying species names in biomedical literature has recently been regarded as critical for a number of applications in text and data mining, including gene name recognition, species-specific document retrieval, and semantic enrichment of biomedical articles. Results In this paper we describe an open-source species name recognition and normalization software system, LINNAEUS, and evaluate its performance relative to several automatically generated biomedical corpora, as well as a novel corpus of full-text documents manually annotated for species mentions. LINNAEUS uses a dictionary-based approach (implemented as an efficient deterministic finite-state automaton) to identify species names and a set of heuristics to resolve ambiguous mentions. When compared against our manually annotated corpus, LINNAEUS performs with 94% recall and 97% precision at the mention level, and 98% recall and 90% precision at the document level. Our system successfully solves the problem of disambiguating uncertain species mentions, with 97% of all mentions in PubMed Central full-text documents resolved to unambiguous NCBI taxonomy identifiers. Conclusions LINNAEUS is an open source, stand-alone software system capable of recognizing and normalizing species name mentions with speed and accuracy, and can therefore be integrated into a range of bioinformatics and text-mining applications. 
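The LINNAEUS entries above identify species names with a dictionary compiled into a deterministic finite-state automaton plus disambiguation heuristics. The pure-Python sketch below shows only the greedy longest-match dictionary lookup at the heart of that approach; the tiny dictionary and whitespace tokenization are illustrative assumptions.

# Minimal dictionary-based species tagging with greedy longest match.
# A production system (like LINNAEUS) compiles the dictionary into a
# finite-state automaton and adds heuristics for ambiguous mentions.
SPECIES = {
    ("homo", "sapiens"): "NCBITaxon:9606",
    ("mus", "musculus"): "NCBITaxon:10090",
    ("human",): "NCBITaxon:9606",
}
MAX_LEN = max(len(k) for k in SPECIES)

def tag_species(text):
    tokens = text.lower().split()
    hits, i = [], 0
    while i < len(tokens):
        for n in range(min(MAX_LEN, len(tokens) - i), 0, -1):  # longest first
            span = tuple(tokens[i:i + n])
            if span in SPECIES:
                hits.append((" ".join(span), SPECIES[span]))
                i += n
                break
        else:
            i += 1
    return hits

print(tag_species("Samples from Homo sapiens and Mus musculus were compared"))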
The software and manually annotated corpus can be downloaded freely at http://linnaeus.sourceforge.net/.", "which Other resources ?", "NCBI Taxonomy", 1161.0, 1174.0], ["This paper presents the task definition, resources, and the single participant system for Task 12: Turkish Lexical Sample Task (TLST), which was organized in the SemEval-2007 evaluation exercise. The methodology followed for developing the specific linguistic resources necessary for the task has been described in this context. A language-specific feature set was defined for Turkish. TLST consists of three pieces of data: The dictionary, the training data, and the evaluation data. Finally, a single system that utilizes a simple statistical method was submitted for the task and evaluated.", "which Other resources ?", "dictionary", 429.0, 439.0], ["We describe the annotation of chemical named entities in scientific text. A set of annotation guidelines defines 5 types of named entities, and provides instructions for the resolution of special cases. A corpus of fulltext chemistry papers was annotated, with an inter-annotator agreement F score of 93%. An investigation of named entity recognition using LingPipe suggests that F scores of 63% are possible without customisation, and scores of 74% are possible with the addition of custom tokenisation and the use of dictionaries.", "which Other resources ?", "LingPipe", 357.0, 365.0], ["This paper describes a system for extracting typed dependency parses of English sentences from phrase structure parses. In order to capture inherent relations occurring in corpus texts that can be critical in real-world applications, many NP relations are included in the set of grammatical relations used. We provide a comparison of our system with Minipar and the Link parser. The typed dependency extraction facility described here is integrated in the Stanford Parser, available for download.", "which Other resources ?", "the Stanford parser", 452.0, 471.0], ["MOTIVATION The MEDLINE database of biomedical abstracts contains scientific knowledge about thousands of interacting genes and proteins. Automated text processing can aid in the comprehension and synthesis of this valuable information. The fundamental task of identifying gene and protein names is a necessary first step towards making full use of the information encoded in biomedical text. This remains a challenging task due to the irregularities and ambiguities in gene and protein nomenclature. We propose to approach the detection of gene and protein names in scientific abstracts as part-of-speech tagging, the most basic form of linguistic corpus annotation. RESULTS We present a method for tagging gene and protein names in biomedical text using a combination of statistical and knowledge-based strategies. This method incorporates automatically generated rules from a transformation-based part-of-speech tagger, and manually generated rules from morphological clues, low frequency trigrams, indicator terms, suffixes and part-of-speech information. Results of an experiment on a test corpus of 56K MEDLINE documents demonstrate that our method to extract gene and protein names can be applied to large sets of MEDLINE abstracts, without the need for special conditions or human experts to predetermine relevant subsets. 
AVAILABILITY The programs are available on request from the authors.", "which Other resources ?", "MEDLINE", 15.0, 22.0], ["We present a database of annotated biomedical text corpora merged into a portable data structure with uniform conventions. MedTag combines three corpora, MedPost, ABGene and GENETAG, within a common relational database data model. The GENETAG corpus has been modified to reflect new definitions of genes and proteins. The MedPost corpus has been updated to include 1,000 additional sentences from the clinical medicine domain. All data have been updated with original MEDLINE text excerpts, PubMed identifiers, and tokenization independence to facilitate data accuracy, consistency and usability. The data are available in flat files along with software to facilitate loading the data into a relational SQL database from ftp://ftp.ncbi.nlm.nih.gov/pub/lsmith/MedTag/medtag.tar.gz.", "which Other resources ?", "ABGene", 163.0, 169.0], ["Abstract Background Molecular Biology accumulated substantial amounts of data concerning functions of genes and proteins. Information relating to functional descriptions is generally extracted manually from textual data and stored in biological databases to build up annotations for large collections of gene products. Those annotation databases are crucial for the interpretation of large scale analysis approaches using bioinformatics or experimental techniques. Due to the growing accumulation of functional descriptions in biomedical literature the need for text mining tools to facilitate the extraction of such annotations is urgent. In order to make text mining tools useable in real world scenarios, for instance to assist database curators during annotation of protein function, comparisons and evaluations of different approaches on full text articles are needed. Results The Critical Assessment for Information Extraction in Biology (BioCreAtIvE) contest consists of a community wide competition aiming to evaluate different strategies for text mining tools, as applied to biomedical literature. We report on task two which addressed the automatic extraction and assignment of Gene Ontology (GO) annotations of human proteins, using full text articles. The predictions of task 2 are based on triplets of protein \u2013 GO term \u2013 article passage. The annotation-relevant text passages were returned by the participants and evaluated by expert curators of the GO annotation (GOA) team at the European Institute of Bioinformatics (EBI). Each participant could submit up to three results for each sub-task comprising task 2. In total more than 15,000 individual results were provided by the participants. The curators evaluated in addition to the annotation itself, whether the protein and the GO term were correctly predicted and traceable through the submitted text fragment. Conclusion Concepts provided by GO are currently the most extended set of terms used for annotating gene products, thus they were explored to assess how effectively text mining tools are able to extract those annotations automatically. Although the obtained results are promising, they are still far from reaching the required performance demanded by real world applications. Among the principal difficulties encountered to address the proposed task, were the complex nature of the GO terms and protein names (the large range of variants which are used to express proteins and especially GO terms in free text), and the lack of a standard training set. 
A range of very different strategies were used to tackle this task. The dataset generated in line with the BioCreative challenge is publicly available and will allow new possibilities for training information extraction methods in the domain of molecular biology.", "which Other resources ?", "Gene Ontology", 1188.0, 1201.0], ["Abstract Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants ( arguments ) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes.", "which Other resources ?", "Gene Regulation Ontology", 890.0, 914.0], ["SUMMARY We present a part-of-speech tagger that achieves over 97% accuracy on MEDLINE citations. AVAILABILITY Software, documentation and a corpus of 5700 manually tagged sentences are available at ftp://ftp.ncbi.nlm.nih.gov/pub/lsmith/MedPost/medpost.tar.gz", "which Other resources ?", "MEDLINE", 78.0, 85.0], ["The mentions of human health perturbations such as the diseases and adverse effects denote a special entity class in the biomedical literature. They help in understanding the underlying risk factors and develop a preventive rationale. The recognition of these named entities in texts through dictionary-based approaches relies on the availability of appropriate terminological resources. Although few resources are publicly available, not all are suitable for the text mining needs. Therefore, this work provides an overview of the well known resources with respect to human diseases and adverse effects such as the MeSH, MedDRA, ICD-10, SNOMED CT, and UMLS. 
Individual dictionaries are generated from these resources and their performance in recognizing the named entities is evaluated over a manually annotated corpus. In addition, the steps for curating the dictionaries, rule-based acronym disambiguation and their impact on the dictionary performance is discussed. The results show that the MedDRA and UMLS achieve the best recall. Besides this, MedDRA provides an additional benefit of achieving a higher precision. The combination of search results of all the dictionaries achieve a considerably high recall. The corpus is available on http://www.scai.fraunhofer.de/disease-ae-corpus.html", "which Other resources ?", "SNOMED CT", 638.0, 647.0], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Other resources ?", "EntrezGene", 467.0, 477.0], ["The mentions of human health perturbations such as the diseases and adverse effects denote a special entity class in the biomedical literature. They help in understanding the underlying risk factors and develop a preventive rationale. The recognition of these named entities in texts through dictionary-based approaches relies on the availability of appropriate terminological resources. Although few resources are publicly available, not all are suitable for the text mining needs. Therefore, this work provides an overview of the well known resources with respect to human diseases and adverse effects such as the MeSH, MedDRA, ICD-10, SNOMED CT, and UMLS. Individual dictionaries are generated from these resources and their performance in recognizing the named entities is evaluated over a manually annotated corpus. In addition, the steps for curating the dictionaries, rule-based acronym disambiguation and their impact on the dictionary performance is discussed. The results show that the MedDRA and UMLS achieve the best recall. Besides this, MedDRA provides an additional benefit of achieving a higher precision. The combination of search results of all the dictionaries achieve a considerably high recall. The corpus is available on http://www.scai.fraunhofer.de/disease-ae-corpus.html", "which Other resources ?", "ICD-10", 630.0, 636.0], ["In Semantic Textual Similarity (STS), systems rate the degree of semantic equivalence, on a graded scale from 0 to 5, with 5 being the most similar. This year we set up two tasks: (i) a core task (CORE), and (ii) a typed-similarity task (TYPED). CORE is similar in set up to SemEval STS 2012 task with pairs of sentences from sources related to those of 2012, yet different in genre from the 2012 set, namely, this year we included newswire headlines, machine translation evaluation datasets and multiple lexical resource glossed sets. 
TYPED, on the other hand, is novel and tries to characterize why two items are deemed similar, using cultural heritage items which are described with metadata such as title, author or description. Several types of similarity have been defined, including similar author, similar time period or similar location. The annotation for both tasks leverages crowdsourcing, with relative high interannotator correlation, ranging from 62% to 87%. The CORE task attracted 34 participants with 89 runs, and the TYPED task attracted 6 teams with 14 runs.", "which Other resources ?", "crowdsourcing", 887.0, 900.0], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Other resources ?", "ChEBI", 479.0, 484.0], ["The mentions of human health perturbations such as the diseases and adverse effects denote a special entity class in the biomedical literature. They help in understanding the underlying risk factors and develop a preventive rationale. The recognition of these named entities in texts through dictionary-based approaches relies on the availability of appropriate terminological resources. Although few resources are publicly available, not all are suitable for the text mining needs. Therefore, this work provides an overview of the well known resources with respect to human diseases and adverse effects such as the MeSH, MedDRA, ICD-10, SNOMED CT, and UMLS. Individual dictionaries are generated from these resources and their performance in recognizing the named entities is evaluated over a manually annotated corpus. In addition, the steps for curating the dictionaries, rule-based acronym disambiguation and their impact on the dictionary performance is discussed. The results show that the MedDRA and UMLS achieve the best recall. Besides this, MedDRA provides an additional benefit of achieving a higher precision. The combination of search results of all the dictionaries achieve a considerably high recall. The corpus is available on http://www.scai.fraunhofer.de/disease-ae-corpus.html", "which Other resources ?", "MedDRA", 622.0, 628.0], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). 
While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Other resources ?", "RefSeq", 459.0, 465.0], ["To improve designs of e-learning materials, it is necessary to know which word or figure a learner felt \"difficult\" in the materials. In this pilot study, we measured electroencephalography (EEG) and eye gaze data of learners and analyzed to estimate which area they had difficulty to learn. The developed system realized simultaneous measurements of physiological data and subjective evaluations during learning. Using this system, we observed specific EEG activity in difficult pages. Integrating of eye gaze and EEG measurements raised a possibility to determine where a learner felt \"difficult\" in a page of learning materials. From these results, we could suggest that the multimodal measurements of EEG and eye gaze would lead to effective improvement of learning materials. For future study, more data collection using various materials and learners with different backgrounds is necessary. This study could lead to establishing a method to improve e-learning materials based on learners' mental states.", "which Data ?", "learners' mental states", 986.0, 1009.0], ["Attention deficit hyperactivity disorder (ADHD) is one of the most common mental-health disorders. As a neurodevelopment disorder, neuroimaging technologies, such as magnetic resonance imaging (MRI), coupled with machine learning algorithms, are being increasingly explored as biomarkers in ADHD. Among various machine learning methods, deep learning has demonstrated excellent performance on many imaging tasks. With the availability of publically-available, large neuroimaging data sets for training purposes, deep learning-based automatic diagnosis of psychiatric disorders can become feasible. In this paper, we develop a deep learning-based ADHD classification method via 3-D convolutional neural networks (CNNs) applied to MRI scans. Since deep neural networks may utilize millions of parameters, even the large number of MRI samples in pooled data sets is still relatively limited if one is to learn discriminative features from the raw data. Instead, here we propose to first extract meaningful 3-D low-level features from functional MRI (fMRI) and structural MRI (sMRI) data. Furthermore, inspired by radiologists\u2019 typical approach for examining brain images, we design a 3-D CNN model to investigate the local spatial patterns of MRI features. Finally, we discover that brain functional and structural information are complementary, and design a multi-modality CNN architecture to combine fMRI and sMRI features. Evaluations on the hold-out testing data of the ADHD-200 global competition shows that the proposed multi-modality 3-D CNN approach achieves the state-of-the-art accuracy of 69.15% and outperforms reported classifiers in the literature, even with fewer training samples. We suggest that multi-modality classification will be a promising direction to find potential neuroimaging biomarkers of neurodevelopment disorders.", "which Data ?", "sMRI", 1073.0, 1077.0], ["Abstract. 
In 2017 we published a seminal research study in the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences about how smart city tools, solutions and applications underpinned historical and cultural heritage of cities at that time (Angelidou et al. 2017). We now return to investigate the progress that has been made during the past three years, and specifically whether the weak substantiation of cultural heritage in smart city strategies that we observed in 2017 has been improved. The newest literature suggests that smart cities should capitalize on local strengths and give prominence to local culture and traditions and provides a handful of solutions to this end. However, a more thorough examination of what has been actually implemented reveals a (still) rather immature approach. The smart city cases that were selected for the purposes of this research include Tarragona (Spain), Budapest (Hungary) and Karlsruhe (Germany). For each one we collected information regarding the overarching structure of the initiative, the positioning of cultural heritage and the inclusion of heritage-related smart city applications. We then performed a comparative analysis based on a simplified version of the Digital Strategy Canvas. Our findings suggest that a rich cultural heritage and a broader strategic focus on touristic branding and promotion are key ingredients of smart city development in this domain; this is a commonality of all the investigated cities. Moreover, three different strategy architectures emerge, representing the different interplays among the smart city, cultural heritage and sustainable urban development. We conclude that a new generation of smart city initiatives is emerging, in which cultural heritage is of increasing importance. This generation tends to associate cultural heritage with social and cultural values, liveability and sustainable urban development.", "which Data ?", "social and cultural values", 1869.0, 1895.0], ["It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.", "which Data ?", "increased performance", 489.0, 510.0], ["An increasing number of people suffering from mental health conditions resort to online resources (specialized websites, social media, etc.) to share their feelings. Early depression detection using social media data through deep learning models can help to change life trajectories and save lives. But the accuracy of these models was not satisfying due to the real-world imbalanced data distributions. To tackle this problem, we propose a deep learning model (X-A-BiLSTM) for depression detection in imbalanced social media data. 
The X-A-BiLSTM model consists of two essential components: the first one is XGBoost, which is used to reduce data imbalance; and the second one is an Attention-BiLSTM neural network, which enhances classification capacity. The Reddit Self-reported Depression Diagnosis (RSDD) dataset was chosen, which included approximately 9,000 users who claimed to have been diagnosed with depression (\u201cdiagnosed\u201d users) and approximately 107,000 matched control users. Results demonstrate that our approach significantly outperforms the previous state-of-the-art models on the RSDD dataset.", "which Data ?", "The Reddit Self-reported Depression Diagnosis (RSDD) dataset", NaN, NaN], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Data ?", "semantic word embeddings", 1062.0, 1086.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. 
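The X-A-BiLSTM entry above pairs XGBoost (to counter class imbalance) with an Attention-BiLSTM classifier. A minimal PyTorch sketch of the attention-pooled BiLSTM half follows; the vocabulary size, dimensions and single-layer setup are assumptions for illustration, not the published configuration.

# Sketch of an Attention-BiLSTM text classifier: a bidirectional LSTM
# encodes the token sequence, soft attention pools the hidden states,
# and a linear head produces class logits.
import torch
import torch.nn as nn

class AttentionBiLSTM(nn.Module):
    def __init__(self, vocab=10000, emb=100, hidden=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.bilstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hidden, 1)        # scores each time step
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, ids):                         # ids: (batch, seq_len)
        h, _ = self.bilstm(self.embed(ids))         # (batch, seq_len, 2*hidden)
        w = torch.softmax(self.att(h), dim=1)       # attention weights over time
        pooled = (w * h).sum(dim=1)                 # weighted sum of states
        return self.head(pooled)                    # class logits

logits = AttentionBiLSTM()(torch.randint(0, 10000, (4, 50)))
print(logits.shape)  # torch.Size([4, 2])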
We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Data ?", "overall sentiment polarity", 525.0, 551.0], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Data ?", "citation-barriers", 903.0, 920.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. 
To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Data ?", "exact sentiment", 472.0, 487.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Data ?", "human annotation", 1592.0, 1608.0], ["Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meal. However, some of them need human help for calorie estimation, and even if it is automatic, food categories are often limited or images from multiple viewpoints are required. Then, it is not yet achieved to estimate food calorie with practical accuracy and estimating food calories from a food photo is an unsolved problem. Therefore, in this paper, we propose estimating food calorie from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning. Since there exists a strong correlation between food calories and food categories, ingredients and cooking directions information in general, we expect that simultaneous training of them brings performance boosting compared to independent single training. To this end, we use a multi-task CNN [1]. In addition, in this research, we construct two kinds of datasets that is a dataset of calorie-annotated recipe collected from Japanese recipe sites on the Web and a dataset collected from an American recipe site. 
In this experiment, we trained multi-task and single-task CNNs. As a result, the multi-task CNN achieved the better performance on both food category estimation and food calorie estimation than single-task CNNs. For the Japanese recipe dataset, by introducing a multi-task CNN, 0.039 were improved on the correlation coefficient, while for the American recipe dataset, 0.090 were raised compared to the result by the single-task CNN.", "which Data ?", "categories, ingredients and cooking directions", 552.0, 598.0], ["Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service \"Where is My URI\" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.", "which Data ?", "a total of 221.7 billion", 918.0, 942.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. 
Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Data ?", "F1 score equal to 45.9%", null, null], ["Abstract Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from 12 to 17 months cycle), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered, its new primary focus is on productivity and agility , to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality-control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. monthly) releases with over 21 billion triples with minimal publishing effort .", "which Data ?", "21 billion triples with minimal publishing effort", 1314.0, 1363.0], ["ABSTRACT \u2018Heritage Interpretation\u2019 has always been considered as an effective learning, communication and management tool that increases visitors\u2019 awareness of and empathy to heritage sites or artefacts. Yet the definition of \u2018digital heritage interpretation\u2019 is still wide and so far, no significant method and objective are evident within the domain of \u2018digital heritage\u2019 theory and discourse. Considering \u2018digital heritage interpretation\u2019 as a process rather than as a tool to present or communicate with end-users, this paper presents a critical application of a theoretical construct ascertained from multiple disciplines and explicates four objectives for a comprehensive interpretive process. A conceptual model is proposed and further developed into a conceptual framework with fifteen considerations. This framework is then implemented and tested on an online platform to assess its impact on end-users\u2019 interpretation level. We believe the presented interpretive framework (PrEDiC) will help heritage professionals and media designers to develop interpretive heritage project.", "which Data ?", "end-users\u2019 interpretation level", 902.0, 933.0], ["As a severe psychiatric disorder disease, depression is a state of low mood and aversion to activity, which prevents a person from functioning normally in both work and daily lives. The study on automated mental health assessment has been given increasing attentions in recent years. In this paper, we study the problem of automatic diagnosis of depression. A new approach to predict the Beck Depression Inventory II (BDI-II) values from video data is proposed based on the deep networks. The proposed framework is designed in a two stream manner, aiming at capturing both the facial appearance and dynamics. Further, we employ joint tuning layers that can implicitly integrate the appearance and dynamic information. 
Experiments are conducted on two depression databases, AVEC2013 and AVEC2014. The experimental results show that our proposed approach significantly improve the depression prediction performance, compared to other visual-based approaches.", "which Data ?", "Video data", 438.0, 448.0], ["Attention deficit hyperactivity disorder creates conditions for the child as s/he cannot sit calm and still, control his/her behavior and focus his/her attention on a particular issue. Five out of every hundred children are affected by the disease. Boys are three times more than girls at risk for this complication. The disorder often begins before age seven, and parents may not realize their children problem until they get older. Children with hyperactivity and attention deficit are at high risk of conduct disorder, antisocial personality, and drug abuse. Most children suffering from the disease will develop a feeling of depression, anxiety and lack of self-confidence. Given the importance of diagnosis the disease, Deep Belief Networks (DBNs) were used as a deep learning model to predict the disease. In this system, in addition to FMRI images features, sophisticated features such as age and IQ as well as functional characteristics, etc. were used. The proposed method was evaluated by two standard data sets of ADHD-200 Global Competitions, including NeuroImage and NYU data sets, and compared with state-of-the-art algorithms. The results showed the superiority of the proposed method rather than other systems. The prediction accuracy has improved respectively as +12.04 and +27.81 over NeuroImage and NYU datasets compared to the best proposed method in the ADHD-200 Global competition.", "which Data ?", "fMRI", 843.0, 847.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. 
Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Data ?", "different scores", 1023.0, 1039.0], ["Land-use classification is essential for urban planning. Urban land-use types can be differentiated either by their physical characteristics (such as reflectivity and texture) or social functions. Remote sensing techniques have been recognized as a vital method for urban land-use classification because of their ability to capture the physical characteristics of land use. Although significant progress has been achieved in remote sensing methods designed for urban land-use classification, most techniques focus on physical characteristics, whereas knowledge of social functions is not adequately used. Owing to the wide usage of mobile phones, the activities of residents, which can be retrieved from the mobile phone data, can be determined in order to indicate the social function of land use. This could bring about the opportunity to derive land-use information from mobile phone data. To verify the application of this new data source to urban land-use classification, we first construct a vector of aggregated mobile phone data to characterize land-use types. This vector is composed of two aspects: the normalized hourly call volume and the total call volume. A semi-supervised fuzzy c-means clustering approach is then applied to infer the land-use types. The method is validated using mobile phone data collected in Singapore. Land use is determined with a detection rate of 58.03%. An analysis of the land-use classification results shows that the detection rate decreases as the heterogeneity of land use increases, and increases as the density of cell phone towers increases.", "which Data ?", "Normalized hourly call volume", 1113.0, 1142.0], ["Abstract Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from 12 to 17 months cycle), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered, its new primary focus is on productivity and agility , to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality-control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. monthly) releases with over 21 billion triples with minimal publishing effort .", "which Data ?", "higher level of maintainability", 1204.0, 1235.0], ["The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. 
Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.", "which Data ?", "protein decay rates", 1253.0, 1272.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. 
Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Data ?", "high external quantum efficiency of 2.3%", null, null], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). 
In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Data ?", "participants, tasks and building data", 213.0, 250.0], ["ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean\u00b1SD of 2.59\u00b10.28mg/L, 2.63\u00b10.32mg/L and 2.67\u00b10.23mg/L at 0 week to 3.55\u00b10.44mg/L, 3.59\u00b10.34mg/L and 3.6\u00b10.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean\u00b1SD of 2.93\u00b10.33mg/L compared to control group\u2019s 4.42\u00b10.30mg/L and gemfibrozil group\u2019s 4.28\u00b10.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).", "which Data ?", "/L at 4th week", 1009.0, 1023.0], ["Land-use classification is essential for urban planning. Urban land-use types can be differentiated either by their physical characteristics (such as reflectivity and texture) or social functions. Remote sensing techniques have been recognized as a vital method for urban land-use classification because of their ability to capture the physical characteristics of land use. Although significant progress has been achieved in remote sensing methods designed for urban land-use classification, most techniques focus on physical characteristics, whereas knowledge of social functions is not adequately used. Owing to the wide usage of mobile phones, the activities of residents, which can be retrieved from the mobile phone data, can be determined in order to indicate the social function of land use. This could bring about the opportunity to derive land-use information from mobile phone data. To verify the application of this new data source to urban land-use classification, we first construct a vector of aggregated mobile phone data to characterize land-use types. This vector is composed of two aspects: the normalized hourly call volume and the total call volume. A semi-supervised fuzzy c-means clustering approach is then applied to infer the land-use types. The method is validated using mobile phone data collected in Singapore. Land use is determined with a detection rate of 58.03%. 
An analysis of the land-use classification results shows that the detection rate decreases as the heterogeneity of land use increases, and increases as the density of cell phone towers increases.", "which Data ?", "Total call volume", 1151.0, 1168.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Data ?", "the m odel", 987.0, 997.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Data ?", "training- and test-set accuracy metrics", 1512.0, 1551.0], ["Accurate diagnosis of psychiatric disorders plays a critical role in improving the quality of life for patients and potentially supports the development of new treatments. 
Many studies have been conducted on machine learning techniques that seek brain imaging data for specific biomarkers of disorders. These studies have encountered the following dilemma: A direct classification overfits to a small number of high-dimensional samples but unsupervised feature-extraction has the risk of extracting a signal of no interest. In addition, such studies often provided only diagnoses for patients without presenting the reasons for these diagnoses. This study proposed a deep neural generative model of resting-state functional magnetic resonance imaging (fMRI) data. The proposed model is conditioned by the assumption of the subject's state and estimates the posterior probability of the subject's state given the imaging data, using Bayes\u2019 rule. This study applied the proposed model to diagnose schizophrenia and bipolar disorders. Diagnostic accuracy was improved by a large margin over competitive approaches, namely classifications of functional connectivity, discriminative/generative models of regionwise signals, and those with unsupervised feature-extractors. The proposed model visualizes brain regions largely related to the disorders, thus motivating further biological investigation.", "which Data ?", "fMRI", 752.0, 756.0], ["Africa has over 2000 languages. Despite this, African languages account for a small portion of available resources and publications in Natural Language Processing (NLP). This is due to multiple factors, including: a lack of focus from government and funding, discoverability, a lack of community, sheer language complexity, difficulty in reproducing papers and no benchmarks to compare techniques. To begin to address the identified problems, MASAKHANE, an open-source, continent-wide, distributed, online research effort for machine translation for African languages, was founded. In this paper, we discuss our methodology for building the community and spurring research from the African continent, as well as outline the success of the community in terms of addressing the identified problems affecting African NLP.", "which Data ?", "multiple factors", 185.0, 201.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Data ?", "certain tasks", 132.0, 145.0], ["The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. 
Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.", "which Data ?", "critical biological and medical relevance", 89.0, 130.0], ["The Logical Observation Identifiers, Names and Codes (LOINC) is a common terminology used for standardizing laboratory terms. Within the consortium of the HiGHmed project, LOINC is one of the central terminologies used for health data sharing across all university sites. Therefore, linking the LOINC codes to the site-specific tests and measures is one crucial step to reach this goal. In this work we report our ongoing efforts in implementing LOINC to our laboratory information system and research infrastructure, as well as our challenges and the lessons learned. 407 local terms could be mapped to 376 LOINC codes of which 209 are already available to routine laboratory data. In our experience, mapping of local terms to LOINC is a widely manual and time consuming process for reasons of language and expert knowledge of local laboratory procedures.", "which Data ?", "376 LOINC codes", 604.0, 619.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. 
In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Data ?", "uniform in size (10\u201315 nm)", null, null], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. 
These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Data ?", "people\u2019s behavior", 88.0, 105.0], ["Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.", "which Data ?", "genomics metadata", 646.0, 663.0], ["The planning process of a building is very complex. 
Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Data ?", "a formal description", 381.0, 401.0], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Data ?", "Author-ity, a version of PubMed", 216.0, 247.0], ["ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. 
CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean\u00b1SD of 2.59\u00b10.28mg/L, 2.63\u00b10.32mg/L and 2.67\u00b10.23mg/L at 0 week to 3.55\u00b10.44mg/L, 3.59\u00b10.34mg/L and 3.6\u00b10.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean\u00b1SD of 2.93\u00b10.33mg/L compared to control group\u2019s 4.42\u00b10.30mg/L and gemfibrozil group\u2019s 4.28\u00b10.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).", "which Data ?", "mean\u00b1SD of 2.59\u00b10.28mg/L, 2.63\u00b10.32mg/L and 2.67\u00b10.23mg/L at 0 week to 3.55\u00b10.44mg/L, 3.59\u00b10.34mg/L and 3.6\u00b10.32mg", 895.0, 1009.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Data ?", "sentiments", 853.0, 863.0], ["Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). 
This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.", "which Data ?", "taxonomic treatments", 890.0, 910.0], ["To improve designs of e-learning materials, it is necessary to know which word or figure a learner felt \"difficult\" in the materials. In this pilot study, we measured electroencephalography (EEG) and eye gaze data of learners and analyzed to estimate which area they had difficulty to learn. The developed system realized simultaneous measurements of physiological data and subjective evaluations during learning. Using this system, we observed specific EEG activity in difficult pages. Integrating of eye gaze and EEG measurements raised a possibility to determine where a learner felt \"difficult\" in a page of learning materials. From these results, we could suggest that the multimodal measurements of EEG and eye gaze would lead to effective improvement of learning materials. For future study, more data collection using various materials and learners with different backgrounds is necessary. This study could lead to establishing a method to improve e-learning materials based on learners' mental states.", "which Data ?", "electroencephalography (EEG) and eye gaze data", null, null], ["Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. 
Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.", "which Data ?", "genomic and other omics datasets", 1397.0, 1429.0], ["Background Visual atypicalities in autism spectrum disorder (ASD) are a well documented phenomenon, beginning as early as 2\u20136 months of age and manifesting in a significantly decreased attention to the eyes, direct gaze and socially salient information. Early emerging neurobiological deficits in perceiving social stimuli as rewarding or its active avoidance due to the anxiety it entails have been widely purported as potential reasons for this atypicality. Parallel research evidence also points to the significant benefits of animal presence for reducing social anxiety and enhancing social interaction in children with autism. While atypicality in social attention in ASD has been widely substantiated, whether this atypicality persists equally across species types or is confined to humans has not been a key focus of research insofar. Methods We attempted a comprehensive examination of the differences in visual attention to static images of human and animal faces (40 images; 20 human faces and 20 animal faces) among children with ASD using an eye tracking paradigm. 44 children (ASD n = 21; TD n = 23) participated in the study (10,362 valid observations) across five regions of interest (left eye, right eye, eye region, face and screen). Results Results obtained revealed significantly greater social attention across human and animal stimuli in typical controls when compared to children with ASD. However in children with ASD, a significantly greater attention allocation was seen to animal faces and eye region and lesser attention to the animal mouth when compared to human faces, indicative of a clear attentional preference to socially salient regions of animal stimuli. The positive attentional bias toward animals was also seen in terms of a significantly greater visual attention to direct gaze in animal images. Conclusion Our results suggest the possibility that atypicalities in social attention in ASD may not be uniform across species. It adds to the current neural and biomarker evidence base of the potentially greater social reward processing and lesser social anxiety underlying animal stimuli as compared to human stimuli in children with ASD.", "which Data ?", "attention", 185.0, 194.0], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. 
Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Data ?", "subject-matter novelty", 640.0, 662.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Data ?", "superior sarcasm-classification accuracy of 97.87% for the Twitter dataset", 1758.0, 1832.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. 
Our project is embedded in the priority program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.", "which Data ?", "participants, tasks and building data", 213.0, 250.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Data ?", "posts", 365.0, 370.0], ["Abstract Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from 12 to 17 months cycle), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered, its new primary focus is on productivity and agility, to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality-control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. 
monthly) releases with over 21 billion triples with minimal publishing effort.", "which Data ?", "size and data quality", 346.0, 367.0], ["Africa has over 2000 languages. Despite this, African languages account for a small portion of available resources and publications in Natural Language Processing (NLP). This is due to multiple factors, including: a lack of focus from government and funding, discoverability, a lack of community, sheer language complexity, difficulty in reproducing papers and no benchmarks to compare techniques. To begin to address the identified problems, MASAKHANE, an open-source, continent-wide, distributed, online research effort for machine translation for African languages, was founded. In this paper, we discuss our methodology for building the community and spurring research from the African continent, as well as outline the success of the community in terms of addressing the identified problems affecting African NLP.", "which Data ?", "lack of focus", 216.0, 229.0], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Data ?", "an extensive set of features", 522.0, 550.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. 
To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Data ?", "results", 1533.0, 1540.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Data ?", "available information", 182.0, 203.0], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. 
As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Data ?", "any gender", 799.0, 809.0], ["The Logical Observation Identifiers, Names and Codes (LOINC) is a common terminology used for standardizing laboratory terms. Within the consortium of the HiGHmed project, LOINC is one of the central terminologies used for health data sharing across all university sites. Therefore, linking the LOINC codes to the site-specific tests and measures is one crucial step to reach this goal. In this work we report our ongoing efforts in implementing LOINC to our laboratory information system and research infrastructure, as well as our challenges and the lessons learned. 407 local terms could be mapped to 376 LOINC codes of which 209 are already available to routine laboratory data. In our experience, mapping of local terms to LOINC is a widely manual and time consuming process for reasons of language and expert knowledge of local laboratory procedures.", "which Data ?", "our challenges and the lessons", 529.0, 559.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.", "which Data ?", "participants, tasks and building data", 213.0, 250.0], ["It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. 
The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.", "which Data ?", "properties", 458.0, 468.0], ["Background Visual atypicalities in autism spectrum disorder (ASD) are a well documented phenomenon, beginning as early as 2\u20136 months of age and manifesting in a significantly decreased attention to the eyes, direct gaze and socially salient information. Early emerging neurobiological deficits in perceiving social stimuli as rewarding or its active avoidance due to the anxiety it entails have been widely purported as potential reasons for this atypicality. Parallel research evidence also points to the significant benefits of animal presence for reducing social anxiety and enhancing social interaction in children with autism. While atypicality in social attention in ASD has been widely substantiated, whether this atypicality persists equally across species types or is confined to humans has not been a key focus of research insofar. Methods We attempted a comprehensive examination of the differences in visual attention to static images of human and animal faces (40 images; 20 human faces and 20 animal faces) among children with ASD using an eye tracking paradigm. 44 children (ASD n = 21; TD n = 23) participated in the study (10,362 valid observations) across five regions of interest (left eye, right eye, eye region, face and screen). Results Results obtained revealed significantly greater social attention across human and animal stimuli in typical controls when compared to children with ASD. However in children with ASD, a significantly greater attention allocation was seen to animal faces and eye region and lesser attention to the animal mouth when compared to human faces, indicative of a clear attentional preference to socially salient regions of animal stimuli. The positive attentional bias toward animals was also seen in terms of a significantly greater visual attention to direct gaze in animal images. Conclusion Our results suggest the possibility that atypicalities in social attention in ASD may not be uniform across species. It adds to the current neural and biomarker evidence base of the potentially greater social reward processing and lesser social anxiety underlying animal stimuli as compared to human stimuli in children with ASD.", "which Data ?", "Visual atypicalities", 11.0, 31.0], ["We propose DeepBreath, a deep learning model which automatically recognises people's psychological stress level (mental overload) from their breathing patterns. Using a low cost thermal camera, we track a person's breathing patterns as temperature changes around his/her nostril. The paper's technical contribution is threefold. First of all, instead of creating handcrafted features to capture aspects of the breathing patterns, we transform the uni-dimensional breathing signals into two dimensional respiration variability spectrogram (RVS) sequences. The spectrograms easily capture the complexity of the breathing dynamics. Second, a spatial pattern analysis based on a deep Convolutional Neural Network (CNN) is directly applied to the spectrogram sequences without the need of hand-crafting features. Finally, a data augmentation technique, inspired from solutions for over-fitting problems in deep learning, is applied to allow the CNN to learn with a small-scale dataset from short-term measurements (e.g., up to a few hours). 
The model is trained and tested with data collected from people exposed to two types of cognitive tasks (Stroop Colour Word Test, Mental Computation test) with sessions of different difficulty levels. Using normalised self-report as ground truth, the CNN reaches 84.59% accuracy in discriminating between two levels of stress and 56.52% in discriminating between three levels. In addition, the CNN outperformed powerful shallow learning methods based on a single layer neural network. Finally, the dataset of labelled thermal images will be open to the community.", "which Data ?", "Thermal images", 1554.0, 1568.0], ["Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19\u2013related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. 
Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.", "which Data ?", "60% (n=18)", null, null], ["ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean\u00b1SD of 2.59\u00b10.28mg/L, 2.63\u00b10.32mg/L and 2.67\u00b10.23mg/L at 0 week to 3.55\u00b10.44mg/L, 3.59\u00b10.34mg/L and 3.6\u00b10.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean\u00b1SD of 2.93\u00b10.33mg/L compared to control group\u2019s 4.42\u00b10.30mg/L and gemfibrozil group\u2019s 4.28\u00b10.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant. CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective. KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).", "which Data ?", "p-value", 1365.0, 1372.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 
780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Data ?", "3D orthorhombic structure", 1174.0, 1199.0], ["It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.", "which Data ?", "modeling patterns", 173.0, 190.0], ["High-performance perovskite light-emitting diodes are achieved by an interfacial engineering approach, leading to the most efficient near-infrared devices produced using solution-processed emitters and efficient green devices at high brightness conditions.", "which Data ?", "high brightness conditions", 229.0, 255.0], ["Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information \u2013 meaningful secondary or derived data \u2013 about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known \u201c(elaborated) data products,\u201d for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation \u2013 as machine readable data and their machine-readable meaning \u2013 is not common practice among environmental research infrastructures. 
For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.", "which Data ?", "meaningful secondary or derived data", 538.0, 574.0], ["Background Visual atypicalities in autism spectrum disorder (ASD) are a well documented phenomenon, beginning as early as 2\u20136 months of age and manifesting in a significantly decreased attention to the eyes, direct gaze and socially salient information. Early emerging neurobiological deficits in perceiving social stimuli as rewarding or its active avoidance due to the anxiety it entails have been widely purported as potential reasons for this atypicality. Parallel research evidence also points to the significant benefits of animal presence for reducing social anxiety and enhancing social interaction in children with autism. While atypicality in social attention in ASD has been widely substantiated, whether this atypicality persists equally across species types or is confined to humans has not been a key focus of research insofar. Methods We attempted a comprehensive examination of the differences in visual attention to static images of human and animal faces (40 images; 20 human faces and 20 animal faces) among children with ASD using an eye tracking paradigm. 44 children (ASD n = 21; TD n = 23) participated in the study (10,362 valid observations) across five regions of interest (left eye, right eye, eye region, face and screen). Results Results obtained revealed significantly greater social attention across human and animal stimuli in typical controls when compared to children with ASD. However in children with ASD, a significantly greater attention allocation was seen to animal faces and eye region and lesser attention to the animal mouth when compared to human faces, indicative of a clear attentional preference to socially salient regions of animal stimuli. The positive attentional bias toward animals was also seen in terms of a significantly greater visual attention to direct gaze in animal images. Conclusion Our results suggest the possibility that atypicalities in social attention in ASD may not be uniform across species. It adds to the current neural and biomarker evidence base of the potentially greater social reward processing and lesser social anxiety underlying animal stimuli as compared to human stimuli in children with ASD.", "which Data ?", "10,362 valid observations", 1140.0, 1165.0], ["\nPurpose\nIn terms of entrepreneurship, open data benefits include economic growth, innovation, empowerment and new or improved products and services. Hackathons encourage the development of new applications using open data and the creation of startups based on these applications. Researchers focus on factors that affect nascent entrepreneurs\u2019 decision to create a startup but researches in the field of open data hackathons have not been fully investigated yet. This paper aims to suggest a model that incorporates factors that affect the decision of establishing a startup by developers who have participated in open data hackathons.\n\n\nDesign/methodology/approach\nIn total, 70 papers were examined and analyzed using a three-phased literature review methodology, which was suggested by Webster and Watson (2002). 
These surveys investigated several factors that affect a nascent entrepreneur to create a startup.\n\n\nFindings\nEventually, by identifying the motivations for developers to participate in a hackathon, and understanding the benefits of the use of open data, researchers will be able to elaborate the proposed model and evaluate if the contest has contributed to the decision of establish a startup and what factors affect the decision to establish a startup apply to open data developers, and if the participants of the contest agree with these factors.\n\n\nOriginality/value\nThe paper expands the scope of open data research on entrepreneurship field, stating the need for more research to be conducted regarding the open data in entrepreneurship through hackathons.\n", "which Data ?", "open data", 117.0, 126.0], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Data ?", "venue", 723.0, 728.0], ["The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. 
Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.", "which Data ?", "pro-survival and anti-proliferative functions", 1906.0, 1951.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Data ?", "single sentiment label", 925.0, 947.0], ["Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages. 
MAG on the other hand achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages.", "which Data ?", "micro F-measure", 703.0, 718.0], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Data ?", "their novel journal publications", 820.0, 852.0], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Data ?", "computationally disambiguated author names", 253.0, 295.0], ["Africa has over 2000 languages. Despite this, African languages account for a small portion of available resources and publications in Natural Language Processing (NLP). This is due to multiple factors, including: a lack of focus from government and funding, discoverability, a lack of community, sheer language complexity, difficulty in reproducing papers and no benchmarks to compare techniques. To begin to address the identified problems, MASAKHANE, an open-source, continent-wide, distributed, online research effort for machine translation for African languages, was founded. 
In this paper, we discuss our methodology for building the community and spurring research from the African continent, as well as outline the success of the community in terms of addressing the identified problems affecting African NLP.", "which Data ?", "small portion", 78.0, 91.0], ["Attention deficit hyperactivity disorder (ADHD) is one of the most common mental-health disorders. As a neurodevelopment disorder, neuroimaging technologies, such as magnetic resonance imaging (MRI), coupled with machine learning algorithms, are being increasingly explored as biomarkers in ADHD. Among various machine learning methods, deep learning has demonstrated excellent performance on many imaging tasks. With the availability of publically-available, large neuroimaging data sets for training purposes, deep learning-based automatic diagnosis of psychiatric disorders can become feasible. In this paper, we develop a deep learning-based ADHD classification method via 3-D convolutional neural networks (CNNs) applied to MRI scans. Since deep neural networks may utilize millions of parameters, even the large number of MRI samples in pooled data sets is still relatively limited if one is to learn discriminative features from the raw data. Instead, here we propose to first extract meaningful 3-D low-level features from functional MRI (fMRI) and structural MRI (sMRI) data. Furthermore, inspired by radiologists\u2019 typical approach for examining brain images, we design a 3-D CNN model to investigate the local spatial patterns of MRI features. Finally, we discover that brain functional and structural information are complementary, and design a multi-modality CNN architecture to combine fMRI and sMRI features. Evaluations on the hold-out testing data of the ADHD-200 global competition shows that the proposed multi-modality 3-D CNN approach achieves the state-of-the-art accuracy of 69.15% and outperforms reported classifiers in the literature, even with fewer training samples. We suggest that multi-modality classification will be a promising direction to find potential neuroimaging biomarkers of neurodevelopment disorders.", "which Data ?", "fMRI", 1047.0, 1051.0], ["Psychological stress is threatening people\u2019s health. It is non-trivial to detect stress timely for proactive care. With the popularity of social media, people are used to sharing their daily activities and interacting with friends on social media platforms, making it feasible to leverage online social network data for stress detection. In this paper, we find that users stress state is closely related to that of his/her friends in social media, and we employ a large-scale dataset from real-world social platforms to systematically study the correlation of users\u2019 stress states and social interactions. We first define a set of stress-related textual, visual, and social attributes from various aspects, and then propose a novel hybrid model - a factor graph model combined with Convolutional Neural Network to leverage tweet content and social interaction information for stress detection. Experimental results show that the proposed model can improve the detection performance by 6-9 percent in F1-score. 
By further analyzing the social interaction data, we also discover several intriguing phenomena, i.e., the number of social structures of sparse connections (i.e., with no delta connections) of stressed users is around 14 percent higher than that of non-stressed users, indicating that the social structure of stressed users\u2019 friends tend to be less connected and less complicated than that of non-stressed users.", "which Data ?", "social interactions", 585.0, 604.0], ["Major depressive disorder (MDD) is a mental disorder characterized by at least two weeks of low mood which is present across most situations. Diagnosis of MDD using rest-state functional magnetic resonance imaging (fMRI) data faces many challenges due to the high dimensionality, small samples, noisy and individual variability. No method can automatically extract discriminative features from the origin time series in fMRI images for MDD diagnosis. In this study, we proposed a new method for feature extraction and a workflow which can make an automatic feature extraction and classification without a prior knowledge. An autoencoder was used to learn pre-training parameters of a dimensionality reduction process using 3-D convolution network. Through comparison with the other three feature extraction methods, our method achieved the best classification performance. This method can be used not only in MDD diagnosis, but also other similar disorders.", "which Data ?", "fMRI", 215.0, 219.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Data ?", "sarcastic tone", 439.0, 453.0], ["The Logical Observation Identifiers, Names and Codes (LOINC) is a common terminology used for standardizing laboratory terms. 
Within the consortium of the HiGHmed project, LOINC is one of the central terminologies used for health data sharing across all university sites. Therefore, linking the LOINC codes to the site-specific tests and measures is one crucial step to reach this goal. In this work we report our ongoing efforts in implementing LOINC to our laboratory information system and research infrastructure, as well as our challenges and the lessons learned. 407 local terms could be mapped to 376 LOINC codes of which 209 are already available to routine laboratory data. In our experience, mapping of local terms to LOINC is a widely manual and time consuming process for reasons of language and expert knowledge of local laboratory procedures.", "which Data ?", "routine laboratory data", 658.0, 681.0], ["Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W\u00b7sr(-1)\u00b7m(-2) at a current density of 232 mA\u00b7cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 \u00b1 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.", "which Data ?", "maximum EQE value", 973.0, 990.0], ["This work presents a novel method for learning a model that can diagnose Attention Deficit Hyperactivity Disorder (ADHD), as well as Autism, using structural texture and functional connectivity features obtained from 3-dimensional structural magnetic resonance imaging (MRI) and 4-dimensional resting-state functional magnetic resonance imaging (fMRI) scans of subjects. We explore a series of three learners: (1) The LeFMS learner first extracts features from the structural MRI images using the texture-based filters produced by a sparse autoencoder. These filters are then convolved with the original MRI image using an unsupervised convolutional network. The resulting features are used as input to a linear support vector machine (SVM) classifier. (2) The LeFMF learner produces a diagnostic model by first computing spatial non-stationary independent components of the fMRI scans, which it uses to decompose each subject\u2019s fMRI scan into the time courses of these common spatial components. 
These features can then be used with a learner by themselves or in combination with other features to produce the model. Regardless of which approach is used, the final set of features are input to a linear support vector machine (SVM) classifier. (3) Finally, the overall LeFMSF learner uses the combined features obtained from the two feature extraction processes in (1) and (2) above as input to an SVM classifier, achieving an accuracy of 0.673 on the ADHD-200 holdout data and 0.643 on the ABIDE holdout data. Both of these results, obtained with the same LeFMSF framework, are the best known, over all hold-out accuracies on these datasets when only using imaging data\u2014exceeding previously-published results by 0.012 for ADHD and 0.042 for Autism. Our results show that combining multi-modal features can yield good classification accuracy for diagnosis of ADHD and Autism, which is an important step towards computer-aided diagnosis of these psychiatric diseases and perhaps others as well.", "which Data ?", "fMRI", 346.0, 350.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Data ?", "high quantum yield (QY > 70%)", null, null], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. 
Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Data ?", "a wide variety of disciplines", 78.0, 107.0], ["It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.", "which Data ?", "properties of patterns", 458.0, 480.0], ["Abstract Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from 12 to 17 months cycle), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered, its new primary focus is on productivity and agility , to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality-control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. monthly) releases with over 21 billion triples with minimal publishing effort.", "which Data ?", "challenges of size and complexity", 910.0, 943.0], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. 
More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Data ?", "byline position", 569.0, 584.0], ["In this article, we share our experiences of using digital technologies and various media to present historical narratives of a museum object collection aiming to provide an engaging experience on multiple platforms. Based on P. Joseph\u2019s article, Dawson presented multiple interpretations and historical views of the Markham car collection across various platforms using multimedia resources. Through her creative production, she explored how to use cylindrical panoramas and rich media to offer new ways of telling the controversial story of the contested heritage of a museum\u2019s veteran and vintage car collection. The production\u2019s usability was investigated involving five experts before it was published online and the general users\u2019 experience was investigated. In this article, we present an important component of findings which indicates that virtual panorama tours featuring multimedia elements could be successful in attracting new audiences and that using this type of storytelling technique can be effective in the museum sector. The storyteller panorama tour presented here may stimulate GLAM (galleries, libraries, archives, and museums) professionals to think of new approaches, implement new strategies or services to engage their audiences more effectively. The research may ameliorate the education of future professionals as well.", "which Data ?", "general users\u2019 experience", 722.0, 747.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. 
Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Data ?", "similar sizes and morphologies", 1017.0, 1047.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Data ?", "low thresholds of 28 and 7.5 \u03bcJ cm\u20132", 1646.0, 1682.0], ["Despite progress in defining genetic risk for psychiatric disorders, their molecular mechanisms remain elusive. Addressing this, the PsychENCODE Consortium has generated a comprehensive online resource for the adult brain across 1866 individuals. The PsychENCODE resource contains ~79,000 brain-active enhancers, sets of Hi-C linkages, and topologically associating domains; single-cell expression profiles for many cell types; expression quantitative-trait loci (QTLs); and further QTLs associated with chromatin, splicing, and cell-type proportions. 
Integration shows that varying cell-type proportions largely account for the cross-population variation in expression (with >88% reconstruction accuracy). It also allows building of a gene regulatory network, linking genome-wide association study variants to genes (e.g., 321 for schizophrenia). We embed this network into an interpretable deep-learning model, which improves disease prediction by ~6-fold versus polygenic risk scores and identifies key genes and pathways in psychiatric disorders.", "which Data ?", "Regulatory network", 752.0, 770.0], ["Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W\u00b7sr(-1)\u00b7m(-2) at a current density of 232 mA\u00b7cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 \u00b1 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.", "which Data ?", "applied voltages as low as 2 V", 1021.0, 1051.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. 
Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Data ?", "several months or more", 1441.0, 1463.0], ["Effective discrimination of attention deficit hyperactivity disorder (ADHD) using imaging and functional biomarkers would have fundamental influence on public health. In usual, the discrimination is based on the standards of American Psychiatric Association. In this paper, we modified one of the deep learning method on structure and parameters according to the properties of ADHD data, to discriminate ADHD on the unique public dataset of ADHD-200. We predicted the subjects as control, combined, inattentive or hyperactive through their frequency features. The results achieved improvement greatly compared to the performance released by the competition. Besides, the imbalance in datasets of deep learning model influenced the results of classification. As far as we know, it is the first time that the deep learning method has been used for the discrimination of ADHD with fMRI data.", "which Data ?", "fMRI", 878.0, 882.0], ["Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W\u00b7sr(-1)\u00b7m(-2) at a current density of 232 mA\u00b7cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 \u00b1 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. 
The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.", "which Data ?", "turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 \u00b1 0.06 V", NaN, NaN], ["Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19\u2013related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.", "which Data ?", "17 (+5 optional) items", NaN, NaN], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. 
Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Data ?", "different sentiment classes", 1462.0, 1489.0], ["Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service \"Where is My URI\" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.", "which Data ?", "more than 5 terabytes of information", 956.0, 992.0], ["Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. 
Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service \"Where is My URI\" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.", "which Data ?", "a total of 668,166", 788.0, 806.0], ["It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.", "which Data ?", "well-understood properties", 846.0, 872.0], ["Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages. MAG on the other hand achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages.", "which Data ?", "4 times worse", 733.0, 746.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. 
For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Data ?", "existing sentiments", 844.0, 863.0], ["Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.", "which Data ?", "OpenBiodiv encompasses data", 778.0, 805.0], ["Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information \u2013 meaningful secondary or derived data \u2013 about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known \u201c(elaborated) data products,\u201d for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. 
The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation \u2013 as machine readable data and their machine-readable meaning \u2013 is not common practice among environmental research infrastructures. For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.", "which Data ?", "such primary data", 413.0, 430.0], ["The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.", "which Data ?", "strongly increased decay rates", 1481.0, 1511.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. 
The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Data ?", "punctuation-based auxiliary features", 1149.0, 1185.0], ["Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W\u00b7sr(-1)\u00b7m(-2) at a current density of 232 mA\u00b7cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 \u00b1 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.", "which Data ?", "maximum external quantum efficiency (EQE) of 0.48%", NaN, NaN], ["Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. 
Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service \"Where is My URI\" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.", "which Data ?", "My URI\" (WIMU)", NaN, NaN], ["ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean\u00b1SD of 2.59\u00b10.28mg/L, 2.63\u00b10.32mg/L and 2.67\u00b10.23mg/L at 0 week to 3.55\u00b10.44mg/L, 3.59\u00b10.34mg/L and 3.6\u00b10.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean\u00b1SD of 2.93\u00b10.33mg/L compared to control group\u2019s 4.42\u00b10.30mg/L and gemfibrozil group\u2019s 4.28\u00b10.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).", "which Data ?", "C-reactive protein (CRP) levels", NaN, NaN], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. 
Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Data ?", "the structural consistency and correctness", 1011.0, 1053.0], ["Abstract With the increased dependence on online learning platforms and educational resource repositories, a unified representation of digital learning resources becomes essential to support a dynamic and multi-source learning experience. We introduce the EduCOR ontology, an educational, career-oriented ontology that provides a foundation for representing online learning resources for personalised learning systems. The ontology is designed to enable learning material repositories to offer learning path recommendations, which correspond to the user\u2019s learning goals and preferences, academic and psychological parameters, and labour-market skills. We present the multiple patterns that compose the EduCOR ontology, highlighting its cross-domain applicability and integrability with other ontologies. A demonstration of the proposed ontology on the real-life learning platform eDoer is discussed as a use case. We evaluate the EduCOR ontology using both gold standard and task-based approaches. The comparison of EduCOR to three gold schemata, and its application in two use-cases, shows its coverage and adaptability to multiple OER repositories, which allows generating user-centric and labour-market oriented recommendations. Resource : https://tibonto.github.io/educor/.", "which Has evaluation ?", "Gold standard", 958.0, 971.0], ["ABSTRACT This paper reports on a controlled experiment evaluating how different cartographic representations of risk affect participants\u2019 performance on a complex spatial decision task: route planning. The specific experimental scenario used is oriented towards emergency route-planning during flood response. The experiment compared six common abstract and metaphorical graphical symbolizations of risk. The results indicate a pattern of less-preferred graphical symbolizations associated with slower responses and lower-risk route choices. One mechanism that might explain these observed relationships would be that more complex and effortful maps promote closer attention paid by participants and lower levels of risk taking. Such user considerations have important implications for the design of maps and mapping interfaces for emergency planning and response. The data also highlights the importance of the \u2018right decision, wrong outcome problem\u2019 inherent in decision-making under uncertainty: in individual instances, more risky decisions do not always lead to worse outcomes.", "which Has evaluation ?", "Experiment", 44.0, 54.0], ["Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. 
Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and \u201cdisorder\u201d and \u201ctransmembrane\u201d analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein\u2014binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.", "which Has evaluation ?", "Date", 1105.0, 1109.0], ["Modern content sharing environments such as Flickr or YouTube contain a large amount of private resources such as photos showing weddings, family holidays, and private parties. These resources can be of a highly sensitive nature, disclosing many details of the users' private sphere. In order to support users in making privacy decisions in the context of image sharing and to provide them with a better overview on privacy related visual content available on the Web, we propose techniques to automatically detect private images, and to enable privacy-oriented image search. To this end, we learn privacy classifiers trained on a large set of manually assessed Flickr photos, combining textual metadata of images with a variety of visual features. We employ the resulting classification models for specifically searching for private photos, and for diversifying query results to provide users with a better coverage of private and public content. 
Large-scale classification experiments reveal insights into the predictive performance of different visual and textual features, and a user evaluation of query result rankings demonstrates the viability of our approach.", "which Has evaluation ?", "User evaluation", 1083.0, 1098.0], ["Ranking and recommendation of multimedia content such as videos is usually realized with respect to the relevance to a user query. However, for lecture videos and MOOCs (Massive Open Online Courses) it is not only required to retrieve relevant videos, but particularly to find lecture videos of high quality that facilitate learning, for instance, independent of the video's or speaker's popularity. Thus, metadata about a lecture video's quality are crucial features for learning contexts, e.g., lecture video recommendation in search as learning scenarios. In this paper, we investigate whether automatically extracted features are correlated to quality aspects of a video. A set of scholarly videos from a Mass Open Online Course (MOOC) is analyzed regarding audio, linguistic, and visual features. Furthermore, a set of cross-modal features is proposed which are derived by combining transcripts, audio, video, and slide content. A user study is conducted to investigate the correlations between the automatically collected features and human ratings of quality aspects of a lecture video. Finally, the impact of our features on the knowledge gain of the participants is discussed.", "which Has evaluation ?", "User Study", 936.0, 946.0], ["Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and \u201cdisorder\u201d and \u201ctransmembrane\u201d analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. 
The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein\u2014binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.", "which Has evaluation ?", "recombination", 1140.0, 1153.0], ["This study investigates the distribution of cadmium and lead concentrations in the outcrop rock samples collected from Abakaliki anticlinorium in the Southern Benue Trough, Nigeria. The outcrop rock samples from seven sampling locations were air\u2013dried for seventy\u2013two hours, homogenized by grinding and pass through < 63 micron mesh sieve. The ground and homogenized rock samples were pulverized and analyzed for cadmium and lead using X-Ray Fluorescence Spectrometer. The concentrations of heavy metals in the outcrop rock samples ranged from < 0.10 \u2013 7.95 mg kg\u20131 for cadmium (Cd) and < 1.00 \u2013 4966.00 mg kg\u20131 for lead (Pb). Apart from an anomalous concentration measured in Afikpo Shale (Middle Segment), the results obtained revealed that rock samples from all the sampling locations yielded cadmium concentrations of < 0.10 mg kg\u20131 and the measured concentrations were below the average crustal abundance of 0.50 mg kg\u20131. Although background concentration of <1.00 \u00b1 0.02 mg kg\u20131 was measured in Abakaliki Shale, rock samples from all the sampling locations revealed anomalous lead concentrations above average crustal abundance of 30 mg kg\u20131. The results obtained reveal important contributions towards understanding of heavy metal distribution patterns and provide baseline data that can be used for potential identification of areas at risk associated with natural sources of heavy metals contamination in the region. The use of outcrop rocks provides a cost\u2013effective approach for monitoring regional heavy metal contamination associated with dissolution and/or weathering of rocks or parent materials. Evaluation of heavy metals may be effectively used in large scale regional pollution monitoring of soil, groundwater, atmospheric and marine environment. 
Therefore, monitoring of heavy metal concentrations in soils, groundwater and atmospheric environment is imperative in order to prevent bioaccumulation in various ecological receptors.", "which Has evaluation ?", "The results obtained reveal important contributions towards understanding of heavy metal distribution patterns and provide baseline data that can be used for potential identification of areas at risk associated with natural sources of heavy metals contamination in the region.", NaN, NaN], ["Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and \u201cdisorder\u201d and \u201ctransmembrane\u201d analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein\u2014binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. 
These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.", "which Has evaluation ?", "trimming", 1216.0, 1224.0], ["Representation learning is a critical ingredient for natural language processing systems. Recent Transformer language models like BERT learn powerful textual representations, but these models are targeted towards token- and sentence-level training objectives and do not leverage information on inter-document relatedness, which limits their document-level representation power. For applications on scientific documents, such as classification and recommendation, accurate embeddings of documents are a necessity. We propose SPECTER, a new method to generate document-level embedding of scientific papers based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph. Unlike existing pretrained language models, Specter can be easily applied to downstream applications without task-specific fine-tuning. Additionally, to encourage further research on document-level models, we introduce SciDocs, a new evaluation benchmark consisting of seven document-level tasks ranging from citation prediction, to document classification and recommendation. We show that Specter outperforms a variety of competitive baselines on the benchmark.", "which Has evaluation ?", "Recommendation", 447.0, 461.0], ["Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and \u201cdisorder\u201d and \u201ctransmembrane\u201d analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. 
The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein\u2014binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.", "which Has evaluation ?", "Y14_SARS2", 1824.0, 1833.0], ["Abstract With the increased dependence on online learning platforms and educational resource repositories, a unified representation of digital learning resources becomes essential to support a dynamic and multi-source learning experience. We introduce the EduCOR ontology, an educational, career-oriented ontology that provides a foundation for representing online learning resources for personalised learning systems. The ontology is designed to enable learning material repositories to offer learning path recommendations, which correspond to the user\u2019s learning goals and preferences, academic and psychological parameters, and labour-market skills. We present the multiple patterns that compose the EduCOR ontology, highlighting its cross-domain applicability and integrability with other ontologies. A demonstration of the proposed ontology on the real-life learning platform eDoer is discussed as a use case. We evaluate the EduCOR ontology using both gold standard and task-based approaches. The comparison of EduCOR to three gold schemata, and its application in two use-cases, shows its coverage and adaptability to multiple OER repositories, which allows generating user-centric and labour-market oriented recommendations. Resource : https://tibonto.github.io/educor/.", "which Has evaluation ?", "task-based", 976.0, 986.0], ["Continuous health monitoring is a hopeful solution that can efficiently provide health-related services to elderly people suffering from chronic diseases. The emergence of the Internet of Things (IoT) technologies have led to their adoption in the development of new healthcare systems for efficient healthcare monitoring, diagnosis and treatment. This paper presents a healthcare-IoT based system where an ontology is proposed to provide semantic interoperability among heterogeneous devices and users in healthcare domain. Our work consists on integrating existing ontologies related to health, IoT domain and time, instantiating classes, and establishing reasoning rules. The model created has been validated by semantic querying. 
The results show the feasibility and efficiency of the proposed ontology and its capability to grow into a more understanding and specialized ontology for health monitoring and treatment.", "which Has evaluation ?", "by semantic querying", 712.0, 732.0], ["Modern content sharing environments such as Flickr or YouTube contain a large amount of private resources such as photos showing weddings, family holidays, and private parties. These resources can be of a highly sensitive nature, disclosing many details of the users' private sphere. In order to support users in making privacy decisions in the context of image sharing and to provide them with a better overview on privacy related visual content available on the Web, we propose techniques to automatically detect private images, and to enable privacy-oriented image search. To this end, we learn privacy classifiers trained on a large set of manually assessed Flickr photos, combining textual metadata of images with a variety of visual features. We employ the resulting classification models for specifically searching for private photos, and for diversifying query results to provide users with a better coverage of private and public content. Large-scale classification experiments reveal insights into the predictive performance of different visual and textual features, and a user evaluation of query result rankings demonstrates the viability of our approach.", "which Has evaluation ?", "Classification experiments", 960.0, 986.0], ["In the recent years, different Web knowledge graphs, both free and commercial, have been created. While Google coined the term \"Knowledge Graph\" in 2012, there are also a few openly available knowledge graphs, with DBpedia, YAGO, and Freebase being among the most prominent ones. Those graphs are often constructed from semi-structured knowledge, such as Wikipedia, or harvested from the web with a combination of statistical and linguistic methods. The result are large-scale knowledge graphs that try to make a good trade-off between completeness and correctness. In order to further increase the utility of such knowledge graphs, various refinement methods have been proposed, which try to infer and add missing knowledge to the graph, or identify erroneous pieces of information. In this article, we provide a survey of such knowledge graph refinement approaches, with a dual look at both the methods being proposed as well as the evaluation methodologies used.", "which Has evaluation ?", "Methods", 441.0, 448.0], ["As a severe psychiatric disorder disease, depression is a state of low mood and aversion to activity, which prevents a person from functioning normally in both work and daily lives. The study on automated mental health assessment has been given increasing attentions in recent years. In this paper, we study the problem of automatic diagnosis of depression. A new approach to predict the Beck Depression Inventory II (BDI-II) values from video data is proposed based on the deep networks. The proposed framework is designed in a two stream manner, aiming at capturing both the facial appearance and dynamics. Further, we employ joint tuning layers that can implicitly integrate the appearance and dynamic information. Experiments are conducted on two depression databases, AVEC2013 and AVEC2014. 
The experimental results show that our proposed approach significantly improve the depression prediction performance, compared to other visual-based approaches.", "which Outcome assessment ?", "BDI-II", 418.0, 424.0], ["A human being\u2019s cognitive system can be simulated by artificial intelligent systems. Machines and robots equipped with cognitive capability can automatically recognize a humans mental state through their gestures and facial expressions. In this paper, an artificial intelligent system is proposed to monitor depression. It can predict the scales of Beck depression inventory II (BDI-II) from vocal and visual expressions. First, different visual features are extracted from facial expression images. Deep learning method is utilized to extract key visual features from the facial expression frames. Second, spectral low-level descriptors and mel-frequency cepstral coefficients features are extracted from short audio segments to capture the vocal expressions. Third, feature dynamic history histogram (FDHH) is proposed to capture the temporal movement on the feature space. Finally, these FDHH and audio features are fused using regression techniques for the prediction of the BDI-II scales. The proposed method has been tested on the public Audio/Visual Emotion Challenges 2014 dataset as it is tuned to be more focused on the study of depression. The results outperform all the other existing methods on the same dataset.", "which Outcome assessment ?", "BDI-II", 379.0, 385.0], ["Humans use emotional expressions to communicate their internal affective states. These behavioral expressions are often multi-modal (e.g. facial expression, voice and gestures) and researchers have proposed several schemes to predict the latent affective states based on these expressions. The relationship between the latent affective states and their expression is hypothesized to be affected by several factors; depression disorder being one of them. Despite a wide interest in affect prediction, and several studies linking the effect of depression on affective expressions, only a limited number of affect prediction models account for the depression severity. In this work, we present a novel scheme that incorporates depression severity as a parameter in Deep Neural Networks (DNNs). In order to predict affective dimensions for an individual at hand, our scheme alters the DNN activation function based on the subject\u2019s depression severity. We perform experiments on affect prediction in two different sessions of the Audio-Visual Depressive language Corpus, which involves patients with varying degree of depression. Our results show improvements in arousal and valence prediction on both the sessions using the proposed DNN modeling. We also present analysis of the impact of such an alteration in DNNs during training and testing.", "which Outcome assessment ?", "Valence", 1171.0, 1178.0], ["In this paper, we propose an audio visual multimodal depression recognition framework composed of deep convolutional neural network (DCNN) and deep neural network (DNN) models. For each modality, corresponding feature descriptors are input into a DCNN to learn high-level global features with compact dynamic information, which are then fed into a DNN to predict the PHQ-8 score. For multi-modal depression recognition, the predicted PHQ-8 scores from each modality are integrated in a DNN for the final prediction. 
In addition, we propose the Histogram of Displacement Range as a novel global visual descriptor to quantify the range and speed of the facial landmarks' displacements. Experiments have been carried out on the Distress Analysis Interview Corpus-Wizard of Oz (DAIC-WOZ) dataset for the Depression Sub-challenge of the Audio-Visual Emotion Challenge (AVEC 2016), results show that the proposed multi-modal depression recognition framework obtains very promising results on both the development set and test set, which outperforms the state-of-the-art results.", "which Outcome assessment ?", "PHQ-8 score", 367.0, 378.0], ["Visible-light-responsive g-C3N4/NaNbO3 nanowires photocatalysts were fabricated by introducing polymeric g-C3N4 on NaNbO3 nanowires. The microscopic mechanisms of interface interaction, charge transfer and separation, as well as the influence on the photocatalytic activity of g-C3N4/NaNbO3 composite were systematic investigated. The high-resolution transmission electron microscopy (HR-TEM) revealed that an intimate interface between C3N4 and NaNbO3 nanowires formed in the g-C3N4/NaNbO3 heterojunctions. The photocatalytic performance of photocatalysts was evaluated for CO2 reduction under visible-light illumination. Significantly, the activity of g-C3N4/NaNbO3 composite photocatalyst for photoreduction of CO2 was higher than that of either single-phase g-C3N4 or NaNbO3. Such a remarkable enhancement of photocatalytic activity was mainly ascribed to the improved separation and transfer of photogenerated electron\u2013hole pairs at the intimate interface of g-C3N4/NaNbO3 heterojunctions, which originated from the...", "which Nb-Based Material ?", "C3N4/NaNbO3", 27.0, 38.0], ["The direct electrolytic reduction of solid SiO2 is investigated in molten CaCl2 at 1123 K to produce solar-grade silicon. The target concentrations of impurities for the primary Si are calculated from the acceptable concentrations of impurities in solar-grade silicon (SOG-Si) and the segregation coefficients for the impurity elements. The concentrations of most metal impurities are significantly decreased below their target concentrations by using a quartz vessel and new types of SiO2-contacting electrodes. The electrolytic reduction rate is increased by improving an electron pathway from the lead material to the SiO2, which demonstrates that the characteristics of the electric contact are important factors affecting the reduction rate. Pellet- and basket-type electrodes are tested to improve the process volume for powdery and granular SiO2. Based on the purity of the Si product after melting, refining, and solidifying, the potential of the technology is discussed.", "which electrolyte ?", "CaCl2", 74.0, 79.0], ["Nowadays, silicon is the most critical element in solar cells and/or solar chips. Silicon having 98 to 99% Si as being metallurgical grade, requires further refinement/purification processes such as zone refining [1,2] and/or Siemens process [3] to upgrade it for solar applications. A promising method, based on straightforward electrochemical reduction of oxides by FFC Cambridge Process [4], was adopted to form silicon from porous SiO2 pellets in molten CaCl2 and CaCl2-NaCl salt mixture [5]. It was reported that silicon powder contaminated by iron and nickel emanated from stainless steel cathode, consequently disqualified the product from solar applications. SiO2 pellets sintered at 1300 \u00b0C for 4 hours, were placed in between pure silicon wafer plates to defeat the contamination problem.
Encouraging results indicated a reliable alternative method of direct solar grade silicon production for expanding solar energy field.", "which electrolyte ?", "CaCl2", 458.0, 463.0], ["Silicon is a widely used semiconductor for electronic and photovoltaic devices because of its earth-abundance, chemical stability, and the tunable electrical properties by doping. Therefore, the production of pure silicon films by simple and inexpensive methods has been the subject of many investigations. The desire for lower-cost silicon-based solar photovoltaic devices has encouraged the quest for solar-grade silicon production through processes alternative to the currently used Czochralski process or other processes. Electrodeposition is one of the least expensive methods for fabricating films of metals and semiconductors. Electrodeposition of silicon has been studied for over 30 years, in various solution media such as molten salts (LiF-KF-K2SiF6 at 745 \u00b0C and BaO-SiO2-BaF2 at 1465 \u00b0C ), organic solvents (acetonitrile, tetrahydrofuran), and room-temperature ionic liquids. Recently, the direct electrochemical reduction of bulk solid silicon dioxide in a CaCl2 melt was reported. [7] A key factor for silicon electrodeposition is the purity of silicon deposit because Si for the use in photovoltaic devices is solar-grade silicon (> 99.9999% or 6N) and its grade is even higher in electronic devices (electronic-grade silicon or 11N). In most cases, the electrodeposited silicon does not meet these requirements without further purification and, to our knowledge, none have been shown to exhibit a photoresponse. In fact, silicon electrodeposition is not as straightforward as metal deposition, since the deposited semiconductor layer is resistive at room temperature, which complicates electron transfer through the deposit. In many cases, for example in room-temperature aprotic solvents, the deposited silicon acts as an insulating layer and prevents a continuous deposition reaction. In some cases, the silicon deposit contains a high level of impurities (> 2%). Moreover, the nucleation and growth of silicon requires a large amount of energy. The deposition is made even more challenging if the Si precursor is SiO2, which is a very resistive material. We reported previously the electrochemical formation of silicon on molybdenum from a CaCl2 molten salt (850 \u00b0C) containing a SiO2 nanoparticle (NP with a diameter of 5\u201315 nm) suspension by applying a constant reduction current. However this Si film did not show photoactivity. Here we show the electrodeposition of photoactive crystalline silicon directly from SiO2 NPs from CaCl2 molten salt on a silver electrode that shows a clear photoresponse. To the best of our knowledge, this is a first report of the direct electrodeposition of photoactive silicon. The electrochemical reduction and the cyclic voltammetry (CV) of SiO2 were investigated as described previously. [8] In this study, we found that the replacement of the Mo substrate by silver leads to a dramatic change in the properties of the silicon deposit. The silver substrate exhibited essentially the same electrochemical and CV behavior as other metal substrates, that is, a high reduction current for SiO2 at negative potentials of \u22121.0 V with the development of a new redox couple near \u22120.65 V vs. a graphite quasireference electrode (QRE) (Figure 1a).
Figure 1b shows a change in the reduction current as a function of the reduction potential, and the optical images of silver electrodes before and after the electrolysis, which displays a dark gray-colored deposit after the reduction. Figure 2 shows SEM images of silicon deposits grown potentiostatically (\u22121.25 V vs. graphite QRE) on silver. The amount of silicon deposit increased with the deposition time, and the deposit finally covered the whole silver surface (Figure 2). High-magnification images show that the silicon deposit is not a film but rather platelets or clusters of silicon crystals of domain sizes in the range of tens of micrometers. The average height of the platelets was around 25 \u03bcm after a 10000 s deposition (Figure 2b), and 45 \u03bcm after a 20000 s deposition (Figure 2c), respectively. The edges of the silicon crystals were clearly observed. Contrary to other substrates, silver enhanced the crystallization of silicon produced from silicon dioxide reduction and it is known that silver induces the crystallization of amorphous silicon. Energy-dispersive spectrometry (EDS) elemental mapping (images shown in the bottom row of Figure 2) revealed that small silver islands exist on the top of the silicon deposits, which we think is closely related to the growth mechanism of silicon on silver. The EDS spectrum of the silicon deposit (Figure 3a) suggested that the deposited silicon was quite pure and the amounts of other elements such as C, Ca, and Cl were below the detection limit (about 0.1 atom%). Since the oxygen signal was probably from the native oxide formed on exposure of the deposit to air and silicon does not form an alloy with silver, the purity of silicon was estimated to be at least 99.9 atom%. The successful reduction of Si(4+) in silicon dioxide to elemental silicon (Si) was confirmed by X-ray photoelectron spectroscopy (XPS) of the silicon deposit.", "which electrolyte ?", "CaCl2", 971.0, 976.0], ["The electrochemical reduction of solid silica has been investigated in molten CaCl2 at 900 \u00b0C for the one-step, up-scalable, controllable and affordable production of nanostructured silicon with promising photo-responsive properties. Cyclic voltammetry of the metallic cavity electrode loaded with fine silica powder was performed to elaborate the electrochemical reduction mechanism. Potentiostatic electrolysis of porous and dense silica pellets was carried out at different potentials, focusing on the influences of the electrolysis potential and the microstructure of the precursory silica on the product purity and microstructure. The findings suggest a potential range between \u22120.60 and \u22120.95 V (vs. Ag/AgCl) for the production of nanostructured silicon with high purity (>99 wt%). According to the elucidated mechanism on the electro-growth of the silicon nanostructures, optimal process parameters for the controllable preparation of high-purity silicon nanoparticles and nanowires were identified. Scaling-up the optimal electrolysis was successful at the gram-scale for the preparation of high-purity silicon nanowires which exhibited promising photo-responsive properties.", "which electrolyte ?", "CaCl2", 78.0, 83.0], ["A hybrid of CdS/HCa2Nb3O10 ultrathin nanosheets was synthesized successfully through a multistep approach.
The structures, constitutions, morphologies and specific surface areas of the obtained CdS/HCa2Nb3O10 were characterized well by XRD, XPS, TEM/HRTEM and BET, respectively. The TEM and BET results demonstrated that the unique structural features of CdS/HCa2Nb3O10 restrained the aggregation of CdS nanoparticles as well as the restacking of nanosheets effectively. HRTEM showed that CdS nanocrystals of about 25-30 nm were firmly anchored on HCa2Nb3O10 nanosheets and a tough heterointerface between CdS and the nanosheets was formed. Efficient interfacial charge transfer from CdS to HCa2Nb3O10 nanosheets was also confirmed by EPR and photocurrent responses. The photocatalytic activity tests (\u03bb > 400 nm) showed that the optimal hydrogen evolution activity of CdS/HCa2Nb3O10 was about 4 times that of the bare CdS, because of the efficient separation of photo-generated carriers.", "which Niobate ?", "HCa2Nb3O10", 16.0, 26.0], ["In order to assess future sea level rise and its societal impacts, we need to study climate change pathways combined with different scenarios of socioeconomic development. Here, we present Sea Level Rise (SLR) projections for the Shared Socioeconomic Pathway (SSP) storylines and different year-2100 radiative Forcing Targets (FTs). Future SLR is estimated with a comprehensive SLR emulator that accounts for Antarctic rapid discharge from hydrofracturing and ice cliff instability. Across all baseline scenario realizations (no dedicated climate mitigation), we find 2100 median SLR relative to 1986-2005 of 89 cm (likely range: 57 to 130 cm) for SSP1, 105 cm (73 to 150 cm) for SSP2, 105 cm (75 to 147 cm) for SSP3, 93 cm (63 to 133 cm) for SSP4, and 132 cm (95 to 189 cm) for SSP5. The 2100 sea level responses for combined SSP-FT scenarios are dominated by the mitigation targets and yield median estimates of 52 cm (34 to 75 cm) for FT 2.6 Wm-2, 62 cm (40 to 96 cm) for FT 3.4 Wm-2, 75 cm (47 to 113 cm) for FT 4.5 Wm-2, and 91 cm (61 to 132 cm) for FT 6.0 Wm-2. Average 2081-2100 annual SLR rates are 5 mm yr-1 and 19 mm yr-1 for FT 2.6 Wm-2 and the baseline scenarios, respectively. Our model setup allows linking scenario-specific emission and socioeconomic indicators to projected SLR. We find that 2100 median SSP SLR projections could be limited to around 50 cm if 2050 cumulative CO2 emissions since pre-industrial stay below 850 GtC ,with a global coal phase-out nearly completed by that time. For SSP mitigation scenarios, a 2050 carbon price of 100 US$2005 tCO2 -1 would correspond to a median 2100 SLR of around 65 cm. Our results confirm that rapid and early emission reductions are essential for limiting 2100 SLR.", "which has start of period ?", "1986-2005", 596.0, 605.0], ["This article presents a case study of a collaborative public history project between participants in two countries, the United Kingdom and Italy. Its subject matter is the bombing war in Europe, 1939-1945, which is remembered and commemorated in very different ways in these two countries: the sensitivities involved thus constitute not only a case of public history conducted at the national level but also one involving contested heritage. An account of the ways in which public history has developed in the UK and Italy is presented. This is followed by an explanation of how the bombing war has been remembered in each country. 
In the UK, veterans of RAF Bomber Command have long felt a sense of neglect, largely because the deliberate targeting of civilians has not fitted comfortably into the dominant victor narrative. In Italy, recollections of being bombed have remained profoundly dissonant within the received liberation discourse. The International Bomber Command Centre Digital Archive (or Archive) is then described as a case study that employs a public history approach, focusing on various aspects of its inclusive ethos, intended to preserve multiple perspectives. The Italian component of the project is highlighted, problematising the digitisation of contested heritage within the broader context of twentieth-century history. Reflections on the use of digital archiving practices and working in partnership are offered, as well as a brief account of user analytics of the Archive through its first eighteen months online.", "which has start of period ?", "1939", 195.0, 199.0], ["In order to assess future sea level rise and its societal impacts, we need to study climate change pathways combined with different scenarios of socioeconomic development. Here, we present Sea Level Rise (SLR) projections for the Shared Socioeconomic Pathway (SSP) storylines and different year-2100 radiative Forcing Targets (FTs). Future SLR is estimated with a comprehensive SLR emulator that accounts for Antarctic rapid discharge from hydrofracturing and ice cliff instability. Across all baseline scenario realizations (no dedicated climate mitigation), we find 2100 median SLR relative to 1986-2005 of 89 cm (likely range: 57 to 130 cm) for SSP1, 105 cm (73 to 150 cm) for SSP2, 105 cm (75 to 147 cm) for SSP3, 93 cm (63 to 133 cm) for SSP4, and 132 cm (95 to 189 cm) for SSP5. The 2100 sea level responses for combined SSP-FT scenarios are dominated by the mitigation targets and yield median estimates of 52 cm (34 to 75 cm) for FT 2.6 Wm-2, 62 cm (40 to 96 cm) for FT 3.4 Wm-2, 75 cm (47 to 113 cm) for FT 4.5 Wm-2, and 91 cm (61 to 132 cm) for FT 6.0 Wm-2. Average 2081-2100 annual SLR rates are 5 mm yr-1 and 19 mm yr-1 for FT 2.6 Wm-2 and the baseline scenarios, respectively. Our model setup allows linking scenario-specific emission and socioeconomic indicators to projected SLR. We find that 2100 median SSP SLR projections could be limited to around 50 cm if 2050 cumulative CO2 emissions since pre-industrial stay below 850 GtC ,with a global coal phase-out nearly completed by that time. For SSP mitigation scenarios, a 2050 carbon price of 100 US$2005 tCO2 -1 would correspond to a median 2100 SLR of around 65 cm. Our results confirm that rapid and early emission reductions are essential for limiting 2100 SLR.", "which has end of period ?", "2081-2100", 1076.0, 1085.0], ["In order to assess future sea level rise and its societal impacts, we need to study climate change pathways combined with different scenarios of socioeconomic development. Here, we present Sea Level Rise (SLR) projections for the Shared Socioeconomic Pathway (SSP) storylines and different year-2100 radiative Forcing Targets (FTs). Future SLR is estimated with a comprehensive SLR emulator that accounts for Antarctic rapid discharge from hydrofracturing and ice cliff instability. Across all baseline scenario realizations (no dedicated climate mitigation), we find 2100 median SLR relative to 1986-2005 of 89 cm (likely range: 57 to 130 cm) for SSP1, 105 cm (73 to 150 cm) for SSP2, 105 cm (75 to 147 cm) for SSP3, 93 cm (63 to 133 cm) for SSP4, and 132 cm (95 to 189 cm) for SSP5. 
The 2100 sea level responses for combined SSP-FT scenarios are dominated by the mitigation targets and yield median estimates of 52 cm (34 to 75 cm) for FT 2.6 Wm-2, 62 cm (40 to 96 cm) for FT 3.4 Wm-2, 75 cm (47 to 113 cm) for FT 4.5 Wm-2, and 91 cm (61 to 132 cm) for FT 6.0 Wm-2. Average 2081-2100 annual SLR rates are 5 mm yr-1 and 19 mm yr-1 for FT 2.6 Wm-2 and the baseline scenarios, respectively. Our model setup allows linking scenario-specific emission and socioeconomic indicators to projected SLR. We find that 2100 median SSP SLR projections could be limited to around 50 cm if 2050 cumulative CO2 emissions since pre-industrial stay below 850 GtC ,with a global coal phase-out nearly completed by that time. For SSP mitigation scenarios, a 2050 carbon price of 100 US$2005 tCO2 -1 would correspond to a median 2100 SLR of around 65 cm. Our results confirm that rapid and early emission reductions are essential for limiting 2100 SLR.", "which has end of period ?", "2050", 1376.0, 1380.0], ["This article presents a case study of a collaborative public history project between participants in two countries, the United Kingdom and Italy. Its subject matter is the bombing war in Europe, 1939-1945, which is remembered and commemorated in very different ways in these two countries: the sensitivities involved thus constitute not only a case of public history conducted at the national level but also one involving contested heritage. An account of the ways in which public history has developed in the UK and Italy is presented. This is followed by an explanation of how the bombing war has been remembered in each country. In the UK, veterans of RAF Bomber Command have long felt a sense of neglect, largely because the deliberate targeting of civilians has not fitted comfortably into the dominant victor narrative. In Italy, recollections of being bombed have remained profoundly dissonant within the received liberation discourse. The International Bomber Command Centre Digital Archive (or Archive) is then described as a case study that employs a public history approach, focusing on various aspects of its inclusive ethos, intended to preserve multiple perspectives. The Italian component of the project is highlighted, problematising the digitisation of contested heritage within the broader context of twentieth-century history. Reflections on the use of digital archiving practices and working in partnership are offered, as well as a brief account of user analytics of the Archive through its first eighteen months online.", "which has end of period ?", "1945", 200.0, 204.0], ["Aurein 1.2 is an antimicrobial peptide from the skin secretion of an Australian frog. In previous experimental work, we reported a differential action of aurein 1.2 on two probiotic strains Lactobacillus delbrueckii subsp. Bulgaricus (CIDCA331) and Lactobacillus delbrueckii subsp. Lactis (CIDCA133). The differences found were attributed to the bilayer compositions. Cell cultures and CIDCA331-derived liposomes showed higher susceptibility than the ones derived from the CIDCA133 strain, leading to content leakage and structural disruption. Here, we used Molecular Dynamics simulations to explore these systems at atomistic level. We hypothesize that if the antimicrobial peptides organized themselves to form a pore, it will be more stable in membranes that emulate the CIDCA331 strain than in those of the CIDCA133 strain. 
To test this hypothesis, we simulated pre-assembled aurein 1.2 pores embedded into bilayer models that emulate the two probiotic strains. It was found that the general behavior of the systems depends on the composition of the membrane rather than the pre-assemble system characteristics. Overall, it was observed that aurein 1.2 pores are more stable in the CIDCA331 model membranes. This fact coincides with the high susceptibility of this strain against antimicrobial peptide. In contrast, in the case of the CIDCA133 model membranes, peptides migrate to the water-lipid interphase, the pore shrinks and the transport of water through the pore is reduced. The tendency of glycolipids to make hydrogen bonds with peptides destabilize the pore structures. This feature is observed to a lesser extent in CIDCA 331 due to the presence of anionic lipids. Glycolipid transverse diffusion (flip-flop) between monolayers occurs in the pore surface region in all the cases considered. These findings expand our understanding of the antimicrobial peptide resistance properties of probiotic strains.", "which Has method ?", "Molecular Dynamics Simulations", 558.0, 588.0], ["Televised public service announcements are video ads that are a key component of public health campaigns against smoking. Understanding the neurophysiological correlates of anti-tobacco ads is an important step toward novel objective methods of their evaluation and design. In the present study, we used functional magnetic resonance imaging (fMRI) to investigate the brain and behavioral effects of the interaction between content (\u201cargument strength,\u201d AS) and format (\u201cmessage sensation value,\u201d MSV) of anti-smoking ads in humans. Seventy-one nontreatment-seeking smokers viewed a sequence of 16 high or 16 low AS ads during an fMRI scan. Dependent variables were brain fMRI signal, the immediate recall of the ads, the immediate change in intentions to quit smoking, and the urine levels of a major nicotine metabolite cotinine at a 1 month follow-up. Whole-brain ANOVA revealed that AS and MSV interacted in the inferior frontal, inferior parietal, and fusiform gyri; the precuneus; and the dorsomedial prefrontal cortex (dMPFC). Regression analysis showed that the activation in the dMPFC predicted the urine cotinine levels 1 month later. These results characterize the key brain regions engaged in the processing of persuasive communications and suggest that brain fMRI response to anti-smoking ads could predict subsequent smoking severity in nontreatment-seeking smokers. Our findings demonstrate the importance of the quality of content for objective ad outcomes and suggest that fMRI investigation may aid the prerelease evaluation of televised public health ads.", "which Has method ?", "fMRI", 343.0, 347.0], ["Human knowledge provides a formal understanding of the world. Knowledge graphs that represent structural relations between entities have become an increasingly popular research direction toward cognition and human-level intelligence. In this survey, we provide a comprehensive review of the knowledge graph covering overall research topics about: 1) knowledge graph representation learning; 2) knowledge acquisition and completion; 3) temporal knowledge graph; and 4) knowledge-aware applications and summarize recent breakthroughs and perspective directions to facilitate future research. We propose a full-view categorization and new taxonomies on these topics. 
Knowledge graph embedding is organized from four aspects of representation space, scoring function, encoding models, and auxiliary information. For knowledge acquisition, especially knowledge graph completion, embedding methods, path inference, and logical rule reasoning are reviewed. We further explore several emerging topics, including metarelational learning, commonsense reasoning, and temporal knowledge graphs. To facilitate future research on knowledge graphs, we also provide a curated collection of data sets and open-source libraries on different tasks. In the end, we have a thorough outlook on several promising research directions.", "which Has method ?", " knowledge graph completion, embedding methods, path inference, and logical rule reasoning", 845.0, 935.0], ["In this paper, we present the virtual knowledge graph (VKG) paradigm for data integration and access, also known in the literature as Ontology-based Data Access. Instead of structuring the integration layer as a collection of relational tables, the VKG paradigm replaces the rigid structure of tables with the flexibility of graphs that are kept virtual and embed domain knowledge. We explain the main notions of this paradigm, its tooling ecosystem and significant use cases in a wide range of applications. Finally, we discuss future research directions.", "which Has method ?", "VKG paradigm replaces the rigid structure of tables with the flexibility of graphs", 249.0, 331.0], ["This study investigates the distribution of cadmium and lead concentrations in the outcrop rock samples collected from Abakaliki anticlinorium in the Southern Benue Trough, Nigeria. The outcrop rock samples from seven sampling locations were air\u2013dried for seventy\u2013two hours, homogenized by grinding and pass through < 63 micron mesh sieve. The ground and homogenized rock samples were pulverized and analyzed for cadmium and lead using X-Ray Fluorescence Spectrometer. The concentrations of heavy metals in the outcrop rock samples ranged from < 0.10 \u2013 7.95 mg kg\u20131 for cadmium (Cd) and < 1.00 \u2013 4966.00 mg kg\u20131 for lead (Pb). Apart from an anomalous concentration measured in Afikpo Shale (Middle Segment), the results obtained revealed that rock samples from all the sampling locations yielded cadmium concentrations of < 0.10 mg kg\u20131 and the measured concentrations were below the average crustal abundance of 0.50 mg kg\u20131. Although background concentration of <1.00 \u00b1 0.02 mg kg\u20131 was measured in Abakaliki Shale, rock samples from all the sampling locations revealed anomalous lead concentrations above average crustal abundance of 30 mg kg\u20131. The results obtained reveal important contributions towards understanding of heavy metal distribution patterns and provide baseline data that can be used for potential identification of areas at risk associated with natural sources of heavy metals contamination in the region. The use of outcrop rocks provides a cost\u2013effective approach for monitoring regional heavy metal contamination associated with dissolution and/or weathering of rocks or parent materials. Evaluation of heavy metals may be effectively used in large scale regional pollution monitoring of soil, groundwater, atmospheric and marine environment. 
Therefore, monitoring of heavy metal concentrations in soils, groundwater and atmospheric environment is imperative in order to prevent bioaccumulation in various ecological receptors.", "which Has method ?", "The outcrop rock samples from seven sampling locations were air\u2013dried for seventy\u2013two hours, homogenized by grinding and pass through < 63 micron mesh sieve. The ground and homogenized rock samples were pulverized and analyzed for cadmium and lead using X-Ray Fluorescence Spectrometer. ", 182.0, 469.0], ["Abstract A fundamental aspect of well performing cities is successful public spaces. For centuries, understanding these places has been limited to sporadic observations and laborious data collection. This study proposes a novel methodology to analyze citywide, discrete urban spaces using highly accurate anonymized telecom data and machine learning algorithms. Through superposition of human dynamics and urban features, this work aims to expose clear correlations between the design of the city and the behavioral patterns of its users. Geolocated telecom data, obtained for the state of Andorra, were initially analyzed to identify \u201cstay-points\u201d\u2014events in which cellular devices remain within a certain roaming distance for a given length of time. These stay-points were then further analyzed to find clusters of activity characterized in terms of their size, persistence, and diversity. Multivariate linear regression models were used to identify associations between the formation of these clusters and various urban features such as urban morphology or land-use within a 25\u201350 meters resolution. Some of the urban features that were found to be highly related to the creation of large, diverse and long-lasting clusters were the presence of service and entertainment amenities, natural water features, and the betweenness centrality of the road network; others, such as educational and park amenities were shown to have a negative impact. Ultimately, this study suggests a \u201creversed urbanism\u201d methodology: an evidence-based approach to urban design, planning, and decision making, in which human behavioral patterns are instilled as a foundational design tool for inferring the success rates of highly performative urban places.", "which Has method ?", "Multivariate linear regression", 891.0, 921.0], ["Abstract Emerging infectious diseases, such as severe acute respiratory syndrome (SARS) and Zika virus disease, present a major threat to public health 1\u20133 . Despite intense research efforts, how, when and where new diseases appear are still a source of considerable uncertainty. A severe respiratory disease was recently reported in Wuhan, Hubei province, China. As of 25 January 2020, at least 1,975 cases had been reported since the first patient was hospitalized on 12 December 2019. Epidemiological investigations have suggested that the outbreak was associated with a seafood market in Wuhan. Here we study a single patient who was a worker at the market and who was admitted to the Central Hospital of Wuhan on 26 December 2019 while experiencing a severe respiratory syndrome that included fever, dizziness and a cough. Metagenomic RNA sequencing 4 of a sample of bronchoalveolar lavage fluid from the patient identified a new RNA virus strain from the family Coronaviridae , which is designated here \u2018WH-Human 1\u2019 coronavirus (and has also been referred to as \u20182019-nCoV\u2019). 
Phylogenetic analysis of the complete viral genome (29,903 nucleotides) revealed that the virus was most closely related (89.1% nucleotide similarity) to a group of SARS-like coronaviruses (genus Betacoronavirus, subgenus Sarbecovirus) that had previously been found in bats in China 5 . This outbreak highlights the ongoing ability of viral spill-over from animals to cause severe disease in humans.", "which Has method ?", "sequencing", 844.0, 854.0], ["In order to improve the signal-to-noise ratio of the hyperspectral sensors and exploit the potential of satellite hyperspectral data for predicting soil properties, we took MingShui County as the study area, which the study area is approximately 1481 km2, and we selected Gaofen-5 (GF-5) satellite hyperspectral image of the study area to explore an applicable and accurate denoising method that can effectively improve the prediction accuracy of soil organic matter (SOM) content. First, fractional-order derivative (FOD) processing is performed on the original reflectance (OR) to evaluate the optimal FOD. Second, singular value decomposition (SVD), Fourier transform (FT) and discrete wavelet transform (DWT) are used to denoise the OR and optimal FOD reflectance. Third, the spectral indexes of the reflectance under different denoising methods are extracted by optimal band combination algorithm, and the input variables of different denoising methods are selected by the recursive feature elimination (RFE) algorithm. Finally, the SOM content is predicted by a random forest prediction model. The results reveal that 0.6-order reflectance describes more useful details in satellite hyperspectral data. Five spectral indexes extracted from the reflectance under different denoising methods have a strong correlation with the SOM content, which is helpful for realizing high-accuracy SOM predictions. All three denoising methods can reduce the noise in hyperspectral data, and the accuracies of the different denoising methods are ranked DWT > FT > SVD, where 0.6-order-DWT has the highest accuracy (R2 = 0.84, RMSE = 3.36 g kg\u22121, and RPIQ = 1.71). This paper is relatively novel, in that GF-5 satellite hyperspectral data based on different denoising methods are used to predict SOM, and the results provide a highly robust and novel method for mapping the spatial distribution of SOM content at the regional scale.", "which Has method ?", "Fourier transform (FT)", NaN, NaN], ["In this work; we investigated the differential interaction of amphiphilic antimicrobial peptides with 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) lipid structures by means of extensive molecular dynamics simulations. By using a coarse-grained (CG) model within the MARTINI force field; we simulated the peptide\u2013lipid system from three different initial configurations: (a) peptides in water in the presence of a pre-equilibrated lipid bilayer; (b) peptides inside the hydrophobic core of the membrane; and (c) random configurations that allow self-assembled molecular structures. This last approach allowed us to sample the structural space of the systems and consider cooperative effects. The peptides used in our simulations are aurein 1.2 and maculatin 1.1; two well-known antimicrobial peptides from the Australian tree frogs; and molecules that present different membrane-perturbing behaviors. Our results showed differential behaviors for each type of peptide seen in a different organization that could guide a molecular interpretation of the experimental data. 
While both peptides are capable of forming membrane aggregates; the aurein 1.2 ones have a pore-like structure and exhibit a higher level of organization than those conformed by maculatin 1.1. Furthermore; maculatin 1.1 has a strong tendency to form clusters and induce curvature at low peptide\u2013lipid ratios. The exploration of the possible lipid\u2013peptide structures; as the one carried out here; could be a good tool for recognizing specific configurations that should be further studied with more sophisticated methodologies.", "which Has method ?", "Molecular Dynamics Simulations", 197.0, 227.0], ["Hackathons have become an increasingly popular approach for organizations to both test their new products and services as well as to generate new ideas. Most events either focus on attracting external developers or requesting employees of the organization to focus on a specific problem. In this paper we describe extensions to this paradigm that open up the event to internal employees and preserve the open-ended nature of the hackathon itself. In this paper we describe our initial motivation and objectives for conducting an internal hackathon, our experience in pioneering an internal hackathon at AT&T including specific things we did to make the internal hackathon successful. We conclude with the benefits (both expected and unexpected) we achieved from the internal hackathon approach, and recommendations for continuing the use of this valuable tool within AT&T.", "which Has method ?", "Hackathon", 429.0, 438.0], ["Background & objectives: Since December 2019, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has globally affected 195 countries. In India, suspected cases were screened for SARS-CoV-2 as per the advisory of the Ministry of Health and Family Welfare. The objective of this study was to characterize SARS-CoV-2 sequences from three identified positive cases as on February 29, 2020. Methods: Throat swab/nasal swab specimens for a total of 881 suspected cases were screened by E gene and confirmed by RdRp (1), RdRp (2) and N gene real-time reverse transcription-polymerase chain reactions and next-generation sequencing. Phylogenetic analysis, molecular characterization and prediction of B- and T-cell epitopes for Indian SARS-CoV-2 sequences were undertaken. Results: Three cases with a travel history from Wuhan, China, were confirmed positive for SARS-CoV-2. Almost complete (29,851 nucleotides) genomes of case 1, case 3 and a fragmented genome for case 2 were obtained. The sequences of Indian SARS-CoV-2 though not identical showed high (~99.98%) identity with Wuhan seafood market pneumonia virus (accession number: NC 045512). Phylogenetic analysis showed that the Indian sequences belonged to different clusters. Predicted linear B-cell epitopes were found to be concentrated in the S1 domain of spike protein, and a conformational epitope was identified in the receptor-binding domain. The predicted T-cell epitopes showed broad human leucocyte antigen allele coverage of A and B supertypes predominant in the Indian population. Interpretation & conclusions: The two SARS-CoV-2 sequences obtained from India represent two different introductions into the country. The genetic heterogeneity is as noted globally. The identified B- and T-cell epitopes may be considered suitable for future experiments towards the design of vaccines and diagnostics. 
Continuous monitoring and analysis of the sequences of new cases from India and the other affected countries would be vital to understand the genetic evolution and rates of substitution of the SARS-CoV-2.", "which Has method ?", "sequencing", 623.0, 633.0], ["Internet of Things (IoT) covers a variety of applications including the Healthcare field. Consequently, medical objects become connected to each other with the purpose to share and exchange health data. These medical connected objects raise issues on how to ensure the analysis, interpretation and semantic interoperability of the extensive obtained health data with the purpose to make an appropriate decision. This paper proposes a HealthIoT ontology for representing the semantic interoperability of the medical connected objects and their data; while an algorithm alleviates the analysis of the detected vital signs and the decision-making of the doctor. The execution of this algorithm needs the definition of several SWRL rules (Semantic Web Rule Language).", "which Has method ?", "Several SWRL", 715.0, 727.0], ["Abstract. In 2017 we published a seminal research study in the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences about how smart city tools, solutions and applications underpinned historical and cultural heritage of cities at that time (Angelidou et al. 2017). We now return to investigate the progress that has been made during the past three years, and specifically whether the weak substantiation of cultural heritage in smart city strategies that we observed in 2017 has been improved. The newest literature suggests that smart cities should capitalize on local strengths and give prominence to local culture and traditions and provides a handful of solutions to this end. However, a more thorough examination of what has been actually implemented reveals a (still) rather immature approach. The smart city cases that were selected for the purposes of this research include Tarragona (Spain), Budapest (Hungary) and Karlsruhe (Germany). For each one we collected information regarding the overarching structure of the initiative, the positioning of cultural heritage and the inclusion of heritage-related smart city applications. We then performed a comparative analysis based on a simplified version of the Digital Strategy Canvas. Our findings suggest that a rich cultural heritage and a broader strategic focus on touristic branding and promotion are key ingredients of smart city development in this domain; this is a commonality of all the investigated cities. Moreover, three different strategy architectures emerge, representing the different interplays among the smart city, cultural heritage and sustainable urban development. We conclude that a new generation of smart city initiatives is emerging, in which cultural heritage is of increasing importance. This generation tends to associate cultural heritage with social and cultural values, liveability and sustainable urban development.", "which Has method ?", "comparative analysis ", 1196.0, 1217.0], ["Practical implications Personas are a practical and meaningful tool for thinking about library space and service design in the development stage. 
Several examples of library spaces that focus on the needs of specific personas are provided.", "which Has method ?", "Personas", 23.0, 31.0], ["Objective \u2013 Evidence from systematic reviews a decade ago suggested that face-to-face and online methods to provide information literacy training in universities were equally effective in terms of skills learnt, but there was a lack of robust comparative research. The objectives of this review were (1) to update these findings with the inclusion of more recent primary research; (2) to further enhance the summary of existing evidence by including studies of blended formats (with components of both online and face-to-face teaching) compared to single format education; and (3) to explore student views on the various formats employed. Methods \u2013 Authors searched seven databases along with a range of supplementary search methods to identify comparative research studies, dated January 1995 to October 2016, exploring skill outcomes for students enrolled in higher education programs. There were 33 studies included, of which 19 also contained comparative data on student views. Where feasible, meta-analyses were carried out to provide summary estimates of skills development and a thematic analysis was completed to identify student views across the different formats. Results \u2013 A large majority of studies (27 of 33; 82%) found no statistically significant difference between formats in skills outcomes for students. Of 13 studies that could be included in a meta-analysis, the standardized mean difference (SMD) between skill test results for face-to-face versus online formats was -0.01 (95% confidence interval -0.28 to 0.26). Of ten studies comparing blended to single delivery format, seven (70%) found no statistically significant difference between formats, and the remaining studies had mixed outcomes. From the limited evidence available across all studies, there is a potential dichotomy between outcomes measured via skill test and assignment (course work) which is worthy of further investigation. The thematic analysis of student views found no preference in relation to format on a range of measures in 14 of 19 studies (74%). The remainder identified that students perceived advantages and disadvantages for each format but had no overall preference. Conclusions \u2013 There is compelling evidence that information literacy training is effective and well received across a range of delivery formats. Further research looking at blended versus single format methods, and the time implications for each, as well as comparing assignment to skill test outcomes would be valuable. Future studies should adopt a methodologically robust design (such as the randomized controlled trial) with a large student population and validated outcome measures.", "which Has method ?", "Meta-analysis", 1365.0, 1378.0], ["Abstract A fundamental aspect of well performing cities is successful public spaces. For centuries, understanding these places has been limited to sporadic observations and laborious data collection. This study proposes a novel methodology to analyze citywide, discrete urban spaces using highly accurate anonymized telecom data and machine learning algorithms. Through superposition of human dynamics and urban features, this work aims to expose clear correlations between the design of the city and the behavioral patterns of its users. 
Geolocated telecom data, obtained for the state of Andorra, were initially analyzed to identify \u201cstay-points\u201d\u2014events in which cellular devices remain within a certain roaming distance for a given length of time. These stay-points were then further analyzed to find clusters of activity characterized in terms of their size, persistence, and diversity. Multivariate linear regression models were used to identify associations between the formation of these clusters and various urban features such as urban morphology or land-use within a 25\u201350 meters resolution. Some of the urban features that were found to be highly related to the creation of large, diverse and long-lasting clusters were the presence of service and entertainment amenities, natural water features, and the betweenness centrality of the road network; others, such as educational and park amenities were shown to have a negative impact. Ultimately, this study suggests a \u201creversed urbanism\u201d methodology: an evidence-based approach to urban design, planning, and decision making, in which human behavioral patterns are instilled as a foundational design tool for inferring the success rates of highly performative urban places.", "which Has method ?", "Linear Regression", 904.0, 921.0], ["Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19\u2013related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. 
Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.", "which Has method ?", "online survey", 723.0, 736.0], ["In order to improve the signal-to-noise ratio of the hyperspectral sensors and exploit the potential of satellite hyperspectral data for predicting soil properties, we took MingShui County as the study area, which is approximately 1481 km2, and we selected Gaofen-5 (GF-5) satellite hyperspectral image of the study area to explore an applicable and accurate denoising method that can effectively improve the prediction accuracy of soil organic matter (SOM) content. First, fractional-order derivative (FOD) processing is performed on the original reflectance (OR) to evaluate the optimal FOD. Second, singular value decomposition (SVD), Fourier transform (FT) and discrete wavelet transform (DWT) are used to denoise the OR and optimal FOD reflectance. Third, the spectral indexes of the reflectance under different denoising methods are extracted by optimal band combination algorithm, and the input variables of different denoising methods are selected by the recursive feature elimination (RFE) algorithm. Finally, the SOM content is predicted by a random forest prediction model. The results reveal that 0.6-order reflectance describes more useful details in satellite hyperspectral data. Five spectral indexes extracted from the reflectance under different denoising methods have a strong correlation with the SOM content, which is helpful for realizing high-accuracy SOM predictions. All three denoising methods can reduce the noise in hyperspectral data, and the accuracies of the different denoising methods are ranked DWT > FT > SVD, where 0.6-order-DWT has the highest accuracy (R2 = 0.84, RMSE = 3.36 g kg\u22121, and RPIQ = 1.71). This paper is relatively novel, in that GF-5 satellite hyperspectral data based on different denoising methods are used to predict SOM, and the results provide a highly robust and novel method for mapping the spatial distribution of SOM content at the regional scale.", "which Has method ?", "discrete wavelet transform (DWT)", NaN, NaN], ["Textural characterization and heavy mineral studies of beach sediments in Ibeno and Eastern Obolo Local Government Areas of Akwa Ibom State were carried out in the present study. The main aim was to infer their provenance, transport history and environment of deposition. Sediment samples were collected at the water\u2013sediment contact along the shoreline at an interval of about 3m. Ten samples were collected from study location 1 (Ibeno Beach) and twelve samples were collected from study location 2 (Eastern Obolo Beach). A total of twenty\u2013two samples were collected from the field and brought to the laboratory for textural and compositional analyses. 
The results showed that the value of graphic mean size ranged from 1.70\u03a6 to 2.83\u03a6, sorting values ranged from 0.39\u03a6 \u2013 0.60\u03a6, skewness values ranged from -0.02 to 0.10 while kurtosis values ranged from 1.02 to 2.46, indicating medium- to fine-grained and well-sorted sediments. This suggested that the sediments have been transported far from their source. Longshore current and onshore\u2013offshore movements of sediment are primarily responsible for the sorting of the heavy minerals. The histogram charts for the different samples and standard deviation versus skewness indicated a beach environment of deposition. This implies that the sediments are dominated by one class of grain size; a phenomenon characteristic of beach environments. The heavy mineral assemblages identified in this research work were rutile, zircon, tourmaline, hornblende, apatite, diopside, glauconite, pumpellyite, cassiterite, epidote, garnet, augite, enstatite, andalusite and opaque minerals. The zircon-tourmaline-rutile (ZTR) index ranged from 47.30% to 87.00% with most of the samples showing a ZTR index greater than 50%. These indicated that the sediments were mineralogically sub-mature and have been transported far from their source. The heavy minerals identified are indicative of being products of reworked sediments of both metamorphic (high rank) and igneous (both mafic and sialic) origin probably derived from the basement rocks of the Oban Massif as well as reworked sediments of the Benue Trough. Therefore, findings from the present study indicated that erosion, accretion, and stability of beaches are controlled by strong hydrodynamic and hydraulic processes.", "which Has method ?", "Sediment samples were collected at the water\u2013sediment contact along the shoreline at an interval of about 3m. Ten samples were collected from study location 1 (Ibeno Beach) and twelve samples were collected from study location 2 (Eastern Obolo Beach). A total of twenty\u2013two samples were collected from the field and brought to the laboratory for textural and compositional analyses.", NaN, NaN], ["During global health crises, such as the recent H1N1 pandemic, the mass media provide the public with timely information regarding risk. To obtain new insights into how these messages are received, we measured neural data while participants, who differed in their preexisting H1N1 risk perceptions, viewed a TV report about H1N1. Intersubject correlation (ISC) of neural time courses was used to assess how similarly the brains of viewers responded to the TV report. We found enhanced intersubject correlations among viewers with high-risk perception in the anterior cingulate, a region which classical fMRI studies associated with the appraisal of threatening information. By contrast, neural coupling in sensory-perceptual regions was similar for the high and low H1N1-risk perception groups. These results demonstrate a novel methodology for understanding how real-life health messages are processed in the human brain, with particular emphasis on the role of emotion and differences in risk perceptions.", "which Has method ?", "fMRI", 603.0, 607.0], ["Infections encompass a set of medical conditions of very diverse kinds that can pose a significant risk to health, and even death. As with many other diseases, early diagnosis can help to provide patients with proper care to minimize the damage produced by the disease, or to isolate them to avoid the risk of spread. 
In this context, computational intelligence can be useful to predict the risk of infection in patients, raising early alarms that can aid medical teams to respond as quickly as possible. In this paper, we survey the state of the art on infection prediction using computer science by means of a systematic literature review. The objective is to find papers where computational intelligence is used to predict infections in patients using physiological data as features. We have posed one major research question along with nine specific subquestions. The whole review process is thoroughly described, and eight databases are considered which index most of the literature published in different scholarly formats. A total of 101 relevant documents have been found in the period comprised between 2003 and 2019, and a detailed study of these documents is carried out to classify the works and answer the research questions posed, resulting, to our best knowledge, in the most comprehensive study of its kind. We conclude that the most widely addressed infection is by far sepsis, followed by Clostridium difficile infection and surgical site infections. Most works use machine learning techniques, of which logistic regression, support vector machines, random forest and naive Bayes are the most common. Some machine learning works provide some ideas on the problems of small data and class imbalance, which can be of interest. The current systematic literature review shows that automatic diagnosis of infectious diseases using computational intelligence is well documented in the medical literature.", "which Has method ?", "Systematic Literature Review", 610.0, 638.0], ["One aspect of human commonsense reasoning is the ability to make presumptions about daily experiences, activities and social interactions with others. We propose a new commonsense reasoning benchmark where the task is to uncover commonsense presumptions implied by imprecisely stated natural language commands in the form of if-then-because statements. For example, in the command \"If it snows at night then wake me up early because I don't want to be late for work\" the speaker relies on commonsense reasoning of the listener to infer the implicit presumption that it must snow enough to cause traffic slowdowns. Such if-then-because commands are particularly important when users instruct conversational agents. We release a benchmark data set for this task, collected from humans and annotated with commonsense presumptions. We develop a neuro-symbolic theorem prover that extracts multi-hop reasoning chains and apply it to this problem. We further develop an interactive conversational framework that evokes commonsense knowledge from humans for completing reasoning chains.", "which Has method ?", "neuro-symbolic theorem prover", 841.0, 870.0], ["In order to improve the signal-to-noise ratio of the hyperspectral sensors and exploit the potential of satellite hyperspectral data for predicting soil properties, we took MingShui County as the study area, which is approximately 1481 km2, and we selected Gaofen-5 (GF-5) satellite hyperspectral image of the study area to explore an applicable and accurate denoising method that can effectively improve the prediction accuracy of soil organic matter (SOM) content. First, fractional-order derivative (FOD) processing is performed on the original reflectance (OR) to evaluate the optimal FOD. 
Second, singular value decomposition (SVD), Fourier transform (FT) and discrete wavelet transform (DWT) are used to denoise the OR and optimal FOD reflectance. Third, the spectral indexes of the reflectance under different denoising methods are extracted by optimal band combination algorithm, and the input variables of different denoising methods are selected by the recursive feature elimination (RFE) algorithm. Finally, the SOM content is predicted by a random forest prediction model. The results reveal that 0.6-order reflectance describes more useful details in satellite hyperspectral data. Five spectral indexes extracted from the reflectance under different denoising methods have a strong correlation with the SOM content, which is helpful for realizing high-accuracy SOM predictions. All three denoising methods can reduce the noise in hyperspectral data, and the accuracies of the different denoising methods are ranked DWT > FT > SVD, where 0.6-order-DWT has the highest accuracy (R2 = 0.84, RMSE = 3.36 g kg\u22121, and RPIQ = 1.71). This paper is relatively novel, in that GF-5 satellite hyperspectral data based on different denoising methods are used to predict SOM, and the results provide a highly robust and novel method for mapping the spatial distribution of SOM content at the regional scale.", "which Has method ?", "Denoising Method", 374.0, 390.0], ["The effect of rhizosphere soil or root tissue amendments on the microbial mineralisation of hydrocarbons in soil slurry by the indigenous microbial communities has been investigated. In this study, rhizosphere soil and root tissues of reed canary grass (Phalaris arundinacea), channel grass (Vallisneria spiralis), blackberry (Rubus fructicosus) and goat willow (Salix caprea) were collected from the former Shell and Imperial Industries (ICI) Refinery site in Lancaster, UK. The rates and extents of 14C\u2013hydrocarbons (naphthalene, phenanthrene, hexadecane or octacosane) mineralisation in artificially spiked soils were monitored in the absence and presence of 5% (wet weight) of rhizosphere soil or root tissues. Respirometric and microbial assays were monitored in fresh (0 d) and pre\u2013incubated (28 d) artificially spiked soils following amendment with rhizosphere soil or root tissues. There were significant increases (P < 0.001) in the extents of 14C\u2013naphthalene and 14C\u2013phenanthrene mineralisation in fresh artificially spiked soils amended with rhizosphere soil and root tissues compared to those measured in unamended soils. However, amendment of fresh artificially spiked soils with rhizosphere soil and root tissues did not enhance the microbial mineralisation of 14C\u2013hexadecane or 14C\u2013octacosane by indigenous microbial communities. Apart from artificially spiked soil systems containing naphthalene (amended with reed canary grass and channel grass rhizosphere) and hexadecane (amended with goat willow rhizosphere), microbial mineralisation of hydrocarbons was further enhanced following 28 d soil\u2013organic contaminants pre\u2013exposure and subsequent amendment with rhizosphere soil or root tissues. 
This study suggests that organic chemicals in roots and/or rhizosphere can enhance the microbial degradation of petroleum hydrocarbons in freshly contaminated soil by supporting higher numbers of hydrocarbon\u2013degrading populations, promoting microbial activity and/or enhancing bioavailability of organic contaminants.", "which Has method ?", "In this study, rhizosphere soil and root tissues of reed canary grass (Phalaris arundinacea), channel grass (Vallisneria spiralis), blackberry (Rubus fructicosus) and goat willow (Salix caprea) were collected from the former Shell and Imperial Industries (ICI) Refinery site in Lancaster, UK. The rates and extents of 14C\u2013hydrocarbons (naphthalene, phenanthrene, hexadecane or octacosane) mineralisation in artificially spiked soils were monitored in the absence and presence of 5% (wet weight) of rhizosphere soil or root tissues. Respirometric and microbial assays were monitored in fresh (0 d) and pre\u2013incubated (28 d) artificially spiked soils following amendment with rhizosphere soil or root tissues. ", NaN, NaN], ["Abstract As a relevant cognitive-motivational aspect of ICT literacy, a new construct ICT Engagement is theoretically based on self-determination theory and involves the factors ICT interest, Perceived ICT competence, Perceived autonomy related to ICT use, and ICT as a topic in social interaction. In this manuscript, we present different sources of validity supporting the construct interpretation of test scores in the ICT Engagement scale, which was used in PISA 2015. Specifically, we investigated the internal structure by dimensional analyses and investigated the relation of ICT Engagement aspects to other variables. The analyses are based on public data from PISA 2015 main study from Switzerland ( n = 5860) and Germany ( n = 6504). First, we could confirm the four-dimensional structure of ICT Engagement for the Swiss sample using a structural equation modelling approach. Second, ICT Engagement scales explained the highest amount of variance in ICT Use for Entertainment, followed by Practical use. Third, we found significantly lower values for girls in all ICT Engagement scales except ICT Interest. Fourth, we found a small negative correlation between the scores in the subscale \u201cICT as a topic in social interaction\u201d and reading performance in PISA 2015. We could replicate most results for the German sample. Overall, the obtained results support the construct interpretation of the four ICT Engagement subscales.", "which Has method ?", "correlation ", 1151.0, 1163.0], ["Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19\u2013related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. 
At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.", "which Has method ?", "case study", 300.0, 310.0], ["The aim of the present study is twofold: (1) to identify a factor structure between variables: interest in broad science topics, perceived information and communications technology (ICT) competence, environmental awareness and optimism; and (2) to explore the relations between these variables at the country level. The first part of the aim is addressed using exploratory factor analysis with data from the Program for International Student Assessment (PISA) for 15-year-old students from Singapore and Finland. The results show that a comparable structure with four factors was verified in both countries. Correlation analyses and linear regression were used to address the second part of the aim. The results show that adolescents\u2019 interest in broad science topics can predict perceived ICT competence. Their interest in broad science topics and perceived ICT competence can predict environmental awareness in both countries. However, there is a difference in predicting environmental optimism. Singaporean students\u2019 interest in broad science topics and their perceived ICT competences are positive predictors, whereas environmental awareness is a negative predictor. Finnish students\u2019 environmental awareness negatively predicted environmental optimism.", "which Has method ?", "Linear Regression", 640.0, 657.0], ["The amount and diversity of data in the Semantic Web has grown considerably. RDF datasets have proportionally more problems than relational datasets due to the way data are published, usually without formal criteria. 
Entity Resolution is an important issue which is related to a known task of many research communities and it aims at finding all representations that refer to the same entity in different datasets. Yet, it is still an open problem. Blocking methods are used to avoid the quadratic complexity of the brute force approach by clustering entities into blocks and limiting the evaluation of entity specifications to entity pairs within blocks. In recent years, only a few blocking methods were conceived to deal with RDF data and novel blocking techniques are required for dealing with noisy and heterogeneous data in the Web of Data. In this paper we present a blocking scheme, CER-Blocking, which is based on an inverted index structure and that uses different data evidence from a triple, aiming to maximize its effectiveness. To overcome the problems of data quality or even the very absence thereof, we use two blocking key definitions. This scheme is part of an ER approach which is based on a relational learning algorithm that addresses the problem by statistical approximation. It was empirically evaluated on real and synthetic datasets which are part of consolidated benchmarks found in the literature.", "which Has method ?", "Blocking", 440.0, 448.0], ["Abstract Hackathons, time-bounded events where participants write computer code and build apps, have become a popular means of socializing tech students and workers to produce \u201cinnovation\u201d despite little promise of material reward. Although they offer participants opportunities for learning new skills and face-to-face networking and set up interaction rituals that create an emotional \u201chigh,\u201d potential advantage is even greater for the events\u2019 corporate sponsors, who use them to outsource work, crowdsource innovation, and enhance their reputation. Ethnographic observations and informal interviews at seven hackathons held in New York during the course of a single school year show how the format of the event and sponsors\u2019 discursive tropes, within a dominant cultural frame reflecting the appeal of Silicon Valley, reshape unpaid and precarious work as an extraordinary opportunity, a ritual of ecstatic labor, and a collective imaginary for fictional expectations of innovation that benefits all, a powerful strategy for manufacturing workers\u2019 consent in the \u201cnew\u201d economy.", "which Has method ?", "Ethnographic observations", 553.0, 578.0], ["Iodine deficiency disorders (IDD) have been a major global public health problem threatening more than 2 billion people worldwide. Considering various human health implications associated with iodine deficiency, universal salt iodization programme has been recognized as one of the best methods of preventing iodine deficiency disorder and iodizing table salt is currently done in many countries. In this study, a comparative assessment of the iodine content of commercially available table salt brands in the Nigerian market was carried out, and the iodine content was measured in ten table salt brand samples using iodometric titration. 
The iodine content ranged from 14.80 mg/kg \u2013 16.90 mg/kg with mean value of 15.90 mg/kg for Sea salt; 24.30 mg/kg \u2013 25.40 mg/kg with mean value of 24.60 mg/kg for Dangote salt (blue sachet); 22.10 mg/kg \u2013 23.10 mg/kg with mean value of 22.40 mg/kg for Dangote salt (red sachet); 23.30 mg/kg \u2013 24.30 mg/kg with mean value of 23.60 mg/kg for Mr Chef salt; 23.30 mg/kg \u2013 24.30 mg/kg with mean value of 23.60 mg/kg for Annapurna; 26.80 mg/kg \u2013 27.50 mg/kg with mean value of 27.20 mg/kg for Uncle Palm salt; 23.30 mg/kg \u2013 29.60 mg/kg with mean content of 26.40 mg/kg for Dangote (bag); 25.40 mg/kg \u2013 26.50 mg/kg with mean value of 26.50 mg/kg for Royal salt; 36.80 mg/kg \u2013 37.20 mg/kg with mean iodine content of 37.0 mg/kg for Abakaliki refined salt, and 30.07 mg/kg \u2013 31.20 mg/kg with mean value of 31.00 mg/kg for Ikom refined salt. The mean iodine content measured in the Sea salt brand (15.70 mg/kg) was significantly (P < 0.01) lower compared to those measured in other table salt brands. Although the iodine content of Abakaliki and Ikom refined salt exceeds the recommended value, it is clear that only the Sea salt brand falls below the World Health Organization (WHO) recommended value (20 \u2013 30 mg/kg), while the remaining table salt samples are just within the range. The results obtained have revealed that 70 % of the table salt brands were adequately iodized while 30 % of the table salt brands were not adequately iodized and provided baseline data that can be used for potential identification of human health risks associated with inadequate and/or excess iodine content in table salt brands consumed in households in Nigeria.", "which Has method ?", "In this study, a comparative assessment of the iodine content of commercially available table salt brands in the Nigerian market was carried out, and the iodine content was measured in ten table salt brand samples using iodometric titration.", NaN, NaN], ["Acute lymphoblastic leukemia (ALL) is the most common childhood malignancy, and implementation of risk-adapted therapy has been instrumental in the dramatic improvements in clinical outcomes. A key to risk-adapted therapies includes the identification of genomic features of individual tumors, including chromosome number (for hyper- and hypodiploidy) and gene fusions, notably ETV6-RUNX1, TCF3-PBX1, and BCR-ABL1 in B-cell ALL (B-ALL). RNA-sequencing (RNA-seq) of large ALL cohorts has expanded the number of recurrent gene fusions recognized as drivers in ALL, and identification of these new entities will contribute to refining ALL risk stratification. We used RNA-seq on 126 ALL patients from our clinical service to test the utility of including RNA-seq in standard-of-care diagnostic pipelines to detect gene rearrangements and IKZF1 deletions. RNA-seq identified 86% of rearrangements detected by standard-of-care diagnostics. KMT2A (MLL) rearrangements, although usually identified, were the most commonly missed by RNA-seq as a result of low expression. RNA-seq identified rearrangements that were not detected by standard-of-care testing in 9 patients. These were found in patients who were not classifiable using standard molecular assessment. We developed an approach to detect the most common IKZF1 deletion from RNA-seq data and validated this using an RQ-PCR assay. We applied an expression classifier to identify Philadelphia chromosome-like B-ALL patients. 
T-ALL proved a rich source of novel gene fusions, which have clinical implications or provide insights into disease biology. Our experience shows that RNA-seq can be implemented within an individual clinical service to enhance the current molecular diagnostic risk classification of ALL.", "which Has method ?", "RNA-seq", 453.0, 460.0], ["Joint extraction of entities and relations is an important task in information extraction. To tackle this problem, we firstly propose a novel tagging scheme that can convert the joint extraction task to a tagging problem. Then, based on our tagging scheme, we study different end-to-end models to extract entities and their relations directly, without identifying entities and relations separately. We conduct experiments on a public dataset produced by the distant supervision method and the experimental results show that the tagging-based methods are better than most of the existing pipelined and joint learning methods. What's more, the end-to-end model proposed in this paper achieves the best results on the public dataset.", "which Has method ?", "Novel tagging scheme", 136.0, 156.0], ["Originality/value This chapter adds to the body of case studies examining what the library of the future could look like in practice as well as theory.", "which Has method ?", "case studies", 51.0, 63.0], ["In recent years, different Web knowledge graphs, both free and commercial, have been created. While Google coined the term \"Knowledge Graph\" in 2012, there are also a few openly available knowledge graphs, with DBpedia, YAGO, and Freebase being among the most prominent ones. Those graphs are often constructed from semi-structured knowledge, such as Wikipedia, or harvested from the web with a combination of statistical and linguistic methods. The results are large-scale knowledge graphs that try to make a good trade-off between completeness and correctness. In order to further increase the utility of such knowledge graphs, various refinement methods have been proposed, which try to infer and add missing knowledge to the graph, or identify erroneous pieces of information. In this article, we provide a survey of such knowledge graph refinement approaches, with a dual look at both the methods being proposed as well as the evaluation methodologies used.", "which Has method ?", "refinement methods ", 641.0, 660.0], ["OBJECTIVE Previous studies indicate that people respond defensively to threatening health information, especially when the information challenges self-relevant goals. The authors investigated whether reduced acceptance of self-relevant health risk information is already visible in early attention processes, that is, attention disengagement processes. DESIGN In a randomized, controlled trial with 29 smoking and nonsmoking students, a variant of Posner's cueing task was used in combination with the high-temporal resolution method of event-related brain potentials (ERPs). MAIN OUTCOME MEASURES Reaction times and P300 ERP. RESULTS Smokers showed lower P300 amplitudes in response to high- as opposed to low-threat invalid trials when moving their attention to a target in the opposite visual field, indicating more efficient attention disengagement processes. Furthermore, both smokers and nonsmokers showed increased P300 amplitudes in response to the presentation of high- as opposed to low-threat valid trials, indicating threat-induced attention-capturing processes. 
Reaction time measures did not support the ERP data, indicating that the ERP measure can be extremely informative to measure low-level attention biases in health communication. CONCLUSION The findings provide the first neuroscientific support for the hypothesis that threatening health information causes more efficient disengagement among those for whom the health threat is self-relevant.", "which Has method ?", "ERP", 622.0, 625.0], ["The effect of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids) on the microbial mineralisation of phenanthrene in soil slurry by the indigenous microbial community has been investigated. The rate and extent of 14C\u2013phenanthrene mineralisation in artificially spiked soils were monitored in the absence of hydroxycinnamic acids and presence of hydroxycinnamic acids applied at three different concentrations (50, 100 and 200 \u00b5g kg-1) either as single compounds or as a mixture of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids at a 1:1:1 ratio). The highest extent of 14C\u2013phenanthrene mineralisation (P 200 \u00b5g kg-1. Depending on its concentration in soil, hydroxycinnamic acids can either stimulate or inhibit mineralisation of phenanthrene by the indigenous soil microbial community. Therefore, effective understanding of phytochemical\u2013microbe\u2013organic contaminant interactions is essential for further development of phytotechnologies for remediation of PAH\u2013contaminated soils.", "which Has method ?", "The rate and extent of 14C\u2013phenanthrene mineralisation in artificially spiked soils were monitored in the absence of hydroxycinnamic acids and presence of hydroxycinnamic acids applied at three different concentrations (50, 100 and 200 \u00b5g kg-1) either as single compounds or as a mixture of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids at a 1:1:1 ratio). ", NaN, NaN], ["Recently, healthcare services can be delivered effectively to patients anytime and anywhere using e-Health systems. e-Health systems are developed through Information and Communication Technologies (ICT) that involve sensors, mobiles, and web-based applications for the delivery of healthcare services and information. Remote healthcare is an important purpose of the e-Health system. Usually, the eHealth system includes heterogeneous sensors from diverse manufacturers producing data in different formats. Device interoperability and data normalization is a challenging task that needs research attention. Several solutions are proposed in the literature based on manual interpretation through explicit programming. However, programmatically implementing the interpretation of the data sender and data receiver in the e-Health system for the data transmission is counterproductive as modification will be required for each new device added into the system. In this paper, an e-Health system with the Semantic Sensor Network (SSN) is proposed to address the device interoperability issue. In the proposed system, we have used IETF YANG for modeling the semantic e-Health data to represent the information of e-Health sensors. This modeling scheme helps in provisioning semantic interoperability between devices and expressing the sensing data in a user-friendly manner. For this purpose, we have developed an ontology for e-Health data that supports different styles of data formats. The ontology is defined in YANG for provisioning semantic interpretation of sensing data in the system by constructing meta-models of e-Health sensors. 
The proposed approach assists in the auto-configuration of eHealth sensors and querying the sensor network with semantic interoperability support for the e-Health system.", "which Has method ?", "modeling the semantic e-Health data to represent the information of e-Health sensors", 1141.0, 1225.0], ["Peroxisomes are essential organelles of eukaryotic origin, ubiquitously distributed in cells and organisms, playing key roles in lipid and antioxidant metabolism. Loss or malfunction of peroxisomes causes more than 20 fatal inherited conditions. We have created a peroxisomal database that includes the complete peroxisomal proteome of Homo sapiens and Saccharomyces cerevisiae, by gathering, updating and integrating the available genetic and functional information on peroxisomal genes. PeroxisomeDB is structured in interrelated sections \u2018Genes\u2019, \u2018Functions\u2019, \u2018Metabolic pathways\u2019 and \u2018Diseases\u2019, that include hyperlinks to selected features of NCBI, ENSEMBL and UCSC databases. We have designed graphical depictions of the main peroxisomal metabolic routes and have included updated flow charts for diagnosis. Precomputed BLAST, PSI-BLAST, multiple sequence alignment (MUSCLE) and phylogenetic trees are provided to assist in direct multispecies comparison to study evolutionarily conserved functions and pathways. Highlights of the PeroxisomeDB include new tools developed for facilitating (i) identification of novel peroxisomal proteins, by means of identifying proteins carrying peroxisome targeting signal (PTS) motifs, (ii) detection of peroxisomes in silico, particularly useful for screening the deluge of newly sequenced genomes. PeroxisomeDB should contribute to the systematic characterization of the peroxisomal proteome and facilitate systems biology approaches on the organelle.", "which Has method ?", "Multiple Sequence Alignment", 847.0, 874.0], ["Reconstructing the energy landscape of a protein holds the key to characterizing its structural dynamics and function [1]. While the disparate spatio-temporal scales spanned by the slow dynamics challenge reconstruction in wet and dry laboratories, computational efforts have had recent success on proteins where a wealth of experimentally-known structures can be exploited to extract modes of motion. In [2], the authors propose the SoPriM method that extracts principal components (PCs) and utilizes them as variables of the structure space of interest. Stochastic optimization is employed to sample the structure space and its associated energy landscape in the defined variable space. We refer to this algorithm as SoPriM-PCA and compare it here to SoPriM-NMA, which investigates whether the landscape can be reconstructed with knowledge of modes of motion (normal modes) extracted from one single known structure. Some representative results are shown in Figure 1, where structures obtained by SoPriM-PCA and those obtained by SoPriM-NMA for the H-Ras enzyme are compared via color-coded projections onto the top two variables utilized by each algorithm. The results show that precious information can be obtained on the energy landscape even when one structural model is available. The presented work opens up interesting avenues of research on structure-based inference of dynamics. Acknowledgment: This work is supported in part by NSF Grant No. 1421001 to AS and NSF Grant No. 1440581 to AS and EP. 
Computations were run on ARGO, a research computing cluster provided by the Office of Research Computing at George Mason University, VA (URL: http://orc.gmu.edu).", "which Has method ?", "SoPriM-PCA", 718.0, 728.0], ["The aim of the present study is twofold: (1) to identify a factor structure between variables: interest in broad science topics, perceived information and communications technology (ICT) competence, environmental awareness and optimism; and (2) to explore the relations between these variables at the country level. The first part of the aim is addressed using exploratory factor analysis with data from the Program for International Student Assessment (PISA) for 15-year-old students from Singapore and Finland. The results show that a comparable structure with four factors was verified in both countries. Correlation analyses and linear regression were used to address the second part of the aim. The results show that adolescents\u2019 interest in broad science topics can predict perceived ICT competence. Their interest in broad science topics and perceived ICT competence can predict environmental awareness in both countries. However, there is a difference in predicting environmental optimism. Singaporean students\u2019 interest in broad science topics and their perceived ICT competences are positive predictors, whereas environmental awareness is a negative predictor. Finnish students\u2019 environmental awareness negatively predicted environmental optimism.", "which Has method ?", "correlation ", 615.0, 627.0], ["With the proliferation of the RDF data format, engines for RDF query processing are faced with very large graphs that contain hundreds of millions of RDF triples. This paper addresses the resulting scalability problems. Recent prior work along these lines has focused on indexing and other physical-design issues. The current paper focuses on join processing, as the fine-grained and schema-relaxed use of RDF often entails star- and chain-shaped join queries with many input streams from index scans. We present two contributions for scalable join processing. First, we develop very light-weight methods for sideways information passing between separate joins at query run-time, to provide highly effective filters on the input streams of joins. Second, we improve previously proposed algorithms for join-order optimization by more accurate selectivity estimations for very large RDF graphs. Experimental studies with several RDF datasets, including the UniProt collection, demonstrate the performance gains of our approach, outperforming the previously fastest systems by more than an order of magnitude.", "which Has approach ?", "selectivity estimation", NaN, NaN], ["Static analysis is a fundamental task in query optimization. In this paper we study static analysis and optimization techniques for SPARQL, which is the standard language for querying Semantic Web data. Of particular interest for us is the optionality feature in SPARQL. It is crucial in Semantic Web data management, where data sources are inherently incomplete and the user is usually interested in partial answers to queries. This feature is one of the most complicated constructors in SPARQL and also the one that makes this language depart from classical query languages such as relational conjunctive queries. We focus on the class of well-designed SPARQL queries, which has been proposed in the literature as a fragment of the language with good properties regarding query evaluation. We first propose a tree representation for SPARQL queries, called pattern trees, which captures the class of well-designed SPARQL graph patterns and which can be considered as a query execution plan. Among other results, we propose several transformation rules for pattern trees, a simple normal form, and study equivalence and containment. We also study the enumeration and counting problems for this class of queries.", "which Has approach ?", "graph pattern", NaN, NaN], ["Powerful speeches can captivate audiences, whereas weaker speeches fail to engage their listeners. What is happening in the brains of a captivated audience? Here, we assess audience-wide functional brain dynamics during listening to speeches of varying rhetorical quality. The speeches were given by German politicians and evaluated as rhetorically powerful or weak. Listening to each of the speeches induced similar neural response time courses, as measured by inter-subject correlation analysis, in widespread brain regions involved in spoken language processing. Crucially, alignment of the time course across listeners was stronger for rhetorically powerful speeches, especially for bilateral regions of the superior temporal gyri and medial prefrontal cortex. 
Thus, during powerful speeches, listeners as a group are more coupled to each other, suggesting that powerful speeches are more potent in taking control of the listeners' brain responses. Weaker speeches were processed more heterogeneously, although they still prompted substantially correlated responses. These patterns of coupled neural responses bear resemblance to metaphors of resonance, which are often invoked in discussions of speech impact, and contribute to the literature on auditory attention under natural circumstances. Overall, this approach opens up possibilities for research on the neural mechanisms mediating the reception of entertaining or persuasive messages.", "which Has approach ?", "inter-subject correlation analysis", 462.0, 496.0], ["Abstract Social ties are crucial for humans. Disruption of ties through social exclusion has a marked effect on our thoughts and feelings; however, such effects can be tempered by broader social network resources. Here, we use functional magnetic resonance imaging data acquired from 80 male adolescents to investigate how social exclusion modulates functional connectivity within and across brain networks involved in social pain and understanding the mental states of others (i.e., mentalizing). Furthermore, using objectively logged friendship network data, we examine how individual variability in brain reactivity to social exclusion relates to the density of participants\u2019 friendship networks, an important aspect of social network structure. We find increased connectivity within a set of regions previously identified as a mentalizing system during exclusion relative to inclusion. These results are consistent across the regions of interest as well as a whole-brain analysis. Next, examining how social network characteristics are associated with task-based connectivity dynamics, participants who showed greater changes in connectivity within the mentalizing system when socially excluded by peers had less dense friendship networks. 
This work provides novel insight into how distributed brain systems respond to social and emotional challenges, and how such brain dynamics might vary based on broader social network characteristics.", "which Has approach ?", "functional connectivity", 382.0, 405.0], ["Graphs arise naturally in many real-world applications including social networks, recommender systems, ontologies, biology, and computational finance. Traditionally, machine learning models for graphs have been mostly designed for static graphs. However, many applications involve evolving graphs. This introduces important challenges for learning and inference since nodes, attributes, and edges change over time. In this survey, we review the recent advances in representation learning for dynamic graphs, including dynamic knowledge graphs. We describe existing models from an encoder-decoder perspective, categorize these encoders and decoders based on the techniques they employ, and analyze the approaches in each category. We also review several prominent applications and widely used datasets, and highlight directions for future research.", "which Has approach ?", "encoder-decoder perspective", 580.0, 607.0], ["Moore's Theory of Transactional Distance hypothesizes that distance is a pedagogical, not geographic phenomenon. It is a distance of understandings and perceptions that might lead to a communication gap or a psychological space of potential misunderstandings between people. Moore also suggests that this distance has to be overcome if effective, deliberate, planned learning is to occur. However, the conceptualizations of transactional distance in a telecommunication era have not been systematically addressed. Investigating 71 learners' experiences with the World Wide Web, this study examined the postulate of Moore's theory and identified the dimensions (factors) constituting transactional distance in such a learning environment. Exploratory factor analysis using a principal axis factor method was carried out. It was concluded that this concept represented multifaceted ideas. Transactional distance consisted of four dimensions: instructor-learner, learner-learner, learner-content, and learner-interface transactional distance. The results inform researchers and practitioners of Web-based instruction concerning the factors of transactional distance that need to be taken into account and overcome in WWW courses.", "which Has approach ?", "Moore's Theory of Transactional Distance", 0.0, 40.0], ["In this paper, we present a novel approach to estimate the relative depth of regions in monocular images. There are several contributions. First, the task of monocular depth estimation is considered as a learning-to-rank problem which offers several advantages compared to regression approaches. Second, monocular depth clues of human perception are modeled in a systematic manner. Third, we show that these depth clues can be modeled and integrated appropriately in a Rankboost framework. For this purpose, a space-efficient version of Rankboost is derived that makes it applicable to rank a large number of objects, as posed by the given problem. Finally, the monocular depth clues are combined with results from a deep learning approach. Experimental results show that the error rate is reduced by adding the monocular features while outperforming state-of-the-art systems.", "which Has approach ?", "Deep Learning", 717.0, 730.0], ["In this paper, we present a novel approach to estimate the relative depth of regions in monocular images. 
There are several contributions. First, the task of monocular depth estimation is considered as a learning-to-rank problem which offers several advantages compared to regression approaches. Second, monocular depth clues of human perception are modeled in a systematic manner. Third, we show that these depth clues can be modeled and integrated appropriately in a Rankboost framework. For this purpose, a space-efficient version of Rankboost is derived that makes it applicable to rank a large number of objects, as posed by the given problem. Finally, the monocular depth clues are combined with results from a deep learning approach. Experimental results show that the error rate is reduced by adding the monocular features while outperforming state-of-the-art systems.", "which Has approach ?", "RankBoost", 469.0, 478.0], ["In this paper, we present the virtual knowledge graph (VKG) paradigm for data integration and access, also known in the literature as Ontology-based Data Access. Instead of structuring the integration layer as a collection of relational tables, the VKG paradigm replaces the rigid structure of tables with the flexibility of graphs that are kept virtual and embed domain knowledge. We explain the main notions of this paradigm, its tooling ecosystem and significant use cases in a wide range of applications. Finally, we discuss future research directions.", "which Has approach ?", "ontology-based data access", 134.0, 160.0], ["Abstract Background Current approaches to identifying drug-drug interactions (DDIs), including safety studies during drug development and post-marketing surveillance after approval, offer important opportunities to identify potential safety issues, but are unable to provide a complete set of all possible DDIs. Thus, drug discovery researchers and healthcare professionals might not be fully aware of potentially dangerous DDIs. Predicting potential drug-drug interaction helps reduce unanticipated drug interactions and drug development costs and optimizes the drug design process. Methods for prediction of DDIs have the tendency to report high accuracy but still have little impact on translational research due to systematic biases induced by networked/paired data. In this work, we aimed to present realistic evaluation settings to predict DDIs using knowledge graph embeddings. We propose a simple disjoint cross-validation scheme to evaluate drug-drug interaction predictions for the scenarios where the drugs have no known DDIs. Results We designed different evaluation settings to accurately assess the performance for predicting DDIs. The settings for disjoint cross-validation produced lower performance scores, as expected, but still were good at predicting the drug interactions. We have applied Logistic Regression, Naive Bayes and Random Forest on the DrugBank knowledge graph with the 10-fold traditional cross validation using RDF2Vec, TransE and TransD. RDF2Vec with Skip-Gram generally surpasses other embedding methods. We also tested RDF2Vec on various drug knowledge graphs such as DrugBank, PharmGKB and KEGG to predict unknown drug-drug interactions. The performance was not enhanced significantly when an integrated knowledge graph including these three datasets was used. Conclusion We showed that the knowledge embeddings are powerful predictors and comparable to current state-of-the-art methods for inferring new DDIs. We addressed the evaluation biases by introducing drug-wise and pairwise disjoint test classes. 
Although the performance scores for drug-wise and pairwise disjoint seem to be low, the results can be considered to be realistic in predicting the interactions for drugs with limited interaction information.", "which Has approach ?", "Knowledge Graph Embedding", NaN, NaN], ["Abstract Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants (arguments) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes.", "which Dataset name ?", "Gene Regulation Event Corpus", 1074.0, 1102.0], ["A linguistically annotated corpus based on texts in the biomedical domain has been constructed to tune natural language processing (NLP) tools for bio-textmining. As the focus of information extraction is shifting from \"nominal\" information such as named entity to \"verbal\" information such as function and interaction of substances, application of parsers has become one of the key technologies and thus the corpus annotated for syntactic structure of sentences is in demand. A subset of the GENIA corpus consisting of 500 MEDLINE abstracts has been annotated for syntactic structure in an XML-based format based on the Penn Treebank II (PTB) scheme. 
An inter-annotator agreement test indicated that the writing style rather than the contents of the research abstracts is the source of the difficulty in tree annotation, and that annotation can be stably done by linguists without much knowledge of biology with appropriate guidelines regarding linguistic phenomena particular to scientific texts.", "which Dataset name ?", "GENIA corpus", 486.0, 498.0], ["We introduce the STEM (Science, Technology, Engineering, and Medicine) Dataset for Scientific Entity Extraction, Classification, and Resolution, version 1.0 (STEM-ECR v1.0). The STEM-ECR v1.0 dataset has been developed to provide a benchmark for the evaluation of scientific entity extraction, classification, and resolution tasks in a domain-independent fashion. It comprises abstracts in 10 STEM disciplines that were found to be the most prolific ones on a major publishing platform. We describe the creation of such a multidisciplinary corpus and highlight the obtained findings in terms of the following features: 1) a generic conceptual formalism for scientific entities in a multidisciplinary scientific context; 2) the feasibility of the domain-independent human annotation of scientific entities under such a generic formalism; 3) a performance benchmark obtainable for automatic extraction of multidisciplinary scientific entities using BERT-based neural models; 4) a delineated 3-step entity resolution procedure for human annotation of the scientific entities via encyclopedic entity linking and lexicographic word sense disambiguation; and 5) human evaluations of Babelfy returned encyclopedic links and lexicographic senses for our entities. Our findings cumulatively indicate that human annotation and automatic learning of multidisciplinary scientific concepts as well as their semantic disambiguation in a wide-ranging setting such as STEM is reasonable.", "which Dataset name ?", "STEM-ECR v1.0 dataset", 178.0, 199.0], ["In the absence of sufficient medication for COVID patients due to the increased demand, disused drugs have been employed or the doses of those available were modified by hospital pharmacists. Some evidence for the use of alternative drugs can be found in the existing scientific literature that could assist in such decisions. However, exploiting a large corpus of documents in an efficient manner is not easy, since drugs may not appear explicitly related in the texts and could be mentioned under different brand names. Drugs4Covid combines word embedding techniques and semantic web technologies to enable a drug-oriented exploration of large medical literature. Drugs and diseases are identified according to the ATC classification and MeSH categories respectively. More than 60K articles and 2M paragraphs have been processed from the CORD-19 corpus with information of COVID-19, SARS, and other related coronaviruses. An open catalogue of drugs has been created and results are publicly available through a drug browser, a keyword-guided text explorer, and a knowledge graph.", "which Dataset name ?", "Drugs4Covid", 521.0, 532.0], ["We introduce a multi-task setup of identifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks and develop a unified framework called SciIE with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. 
Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.", "which Dataset name ?", "SciERC", 127.0, 133.0], ["We revisit the idea of mining Wikipedia in order to generate named-entity annotations. We propose a new methodology that we applied to English Wikipedia to build WiNER, a large, high quality, annotated corpus. We evaluate its usefulness on 6 NER tasks, comparing 4 popular state-of-the-art approaches. We show that LSTM-CRF is the approach that benefits the most from our corpus. We report impressive gains with this model when using a small portion of WiNER on top of the CONLL training material. Last, we propose a simple but efficient method for exploiting the full range of WiNER, leading to further improvements.", "which Dataset name ?", "WiNER", 162.0, 167.0], ["Building predictive models for information extraction from text, such as named entity recognition or the extraction of semantic relationships between named entities in text, requires a large corpus of annotated text. Wikipedia is often used as a corpus for these tasks where the annotation is a named entity linked by a hyperlink to its article. However, editors on Wikipedia are only expected to link these mentions in order to help the reader to understand the content, but are discouraged from adding links that do not add any benefit for understanding an article. Therefore, many mentions of popular entities (such as countries or popular events in history), or previously linked articles, as well as the article\u2019s entity itself, are not linked. In this paper, we discuss WEXEA, a Wikipedia EXhaustive Entity Annotation system, to create a text corpus based on Wikipedia with exhaustive annotations of entity mentions, i.e. linking all mentions of entities to their corresponding articles. This results in a huge potential for additional annotations that can be used for downstream NLP tasks, such as Relation Extraction. We show that our annotations are useful for creating distantly supervised datasets for this task. Furthermore, we publish all code necessary to derive a corpus from a raw Wikipedia dump, so that it can be reproduced by everyone.", "which Dataset name ?", "WEXEA", 776.0, 781.0], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). 
This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Dataset name ?", "ITI TXM corpora", 805.0, 820.0], ["With the information overload in genome-related field, there is an increasing need for natural language processing technology to extract information from literature and various attempts of information extraction using NLP has been being made. We are developing the necessary resources including domain ontology and annotated corpus from research abstracts in MEDLINE database (GENIA corpus). We are building the ontology and the corpus simultaneously, using each other. In this paper we report on our new corpus, its ontological basis, annotation scheme, and statistics of annotated objects. We also describe the tools used for corpus annotation and management.", "which Dataset name ?", "GENIA corpus", 377.0, 389.0], ["The active gene annotation corpus (AGAC) was developed to support knowledge discovery for drug repurposing. Based on the corpus, the AGAC track of the BioNLP Open Shared Tasks 2019 was organized, to facilitate cross-disciplinary collaboration across BioNLP and Pharmacoinformatics communities, for drug repurposing. The AGAC track consists of three subtasks: 1) named entity recognition, 2) thematic relation extraction, and 3) loss of function (LOF) / gain of function (GOF) topic classification. The AGAC track was participated by five teams, of which the performance are compared and analyzed. The results revealed a substantial room for improvement in the design of the task, which we analyzed in terms of \u201cimbalanced data\u201d, \u201cselective annotation\u201d and \u201clatent topic annotation\u201d.", "which Dataset name ?", "Active Gene Annotation Corpus (AGAC)", NaN, NaN], ["We present a database of annotated biomedical text corpora merged into a portable data structure with uniform conventions. MedTag combines three corpora, MedPost, ABGene and GENETAG, within a common relational database data model. The GENETAG corpus has been modified to reflect new definitions of genes and proteins. The MedPost corpus has been updated to include 1,000 additional sentences from the clinical medicine domain. All data have been updated with original MEDLINE text excerpts, PubMed identifiers, and tokenization independence to facilitate data accuracy, consistency and usability. The data are available in flat files along with software to facilitate loading the data into a relational SQL database from ftp://ftp.ncbi.nlm.nih.gov/pub/lsmith/MedTag/medtag.tar.gz.", "which Dataset name ?", "MedTag", 123.0, 129.0], ["Abstract Automatic keyphrase extraction techniques aim to extract quality keyphrases for higher level summarization of a document. Majority of the existing techniques are mainly domain-specific, which require application domain knowledge and employ higher order statistical methods, and computationally expensive and require large train data, which is rare for many applications. Overcoming these issues, this paper proposes a new unsupervised keyphrase extraction technique. The proposed unsupervised keyphrase extraction technique, named TeKET or Tree-based Keyphrase Extraction Technique, is a domain-independent technique that employs limited statistical knowledge and requires no train data. This technique also introduces a new variant of a binary tree, called KeyPhrase Extraction (KePhEx) tree, to extract final keyphrases from candidate keyphrases. 
In addition, a measure, called Cohesiveness Index or CI, is derived which denotes a given node\u2019s degree of cohesiveness with respect to the root. The CI is used in flexibly extracting final keyphrases from the KePhEx tree and is co-utilized in the ranking process. The effectiveness of the proposed technique and its domain and language independence are experimentally evaluated using available benchmark corpora, namely SemEval-2010 (a scientific articles dataset), Theses100 (a thesis dataset), and a German Research Article dataset, respectively. The acquired results are compared with other relevant unsupervised techniques belonging to both statistical and graph-based techniques. The obtained results demonstrate the improved performance of the proposed technique over other compared techniques in terms of precision, recall, and F1 scores.", "which Dataset name ?", "SemEval-2010", 1283.0, 1295.0], ["Motivation: Coreference resolution, the process of identifying different mentions of an entity, is a very important component in a text-mining system. Compared with the work in news articles, the existing study of coreference resolution in biomedical texts is quite preliminary by only focusing on specific types of anaphors like pronouns or definite noun phrases, using heuristic methods, and running on small data sets. Therefore, there is a need for an in-depth exploration of this task in the biomedical domain. Results: In this article, we presented a learning-based approach to coreference resolution in the biomedical domain. We made three contributions in our study. Firstly, we annotated a large scale coreference corpus, MedCo, which consists of 1,999 medline abstracts in the GENIA data set. Secondly, we proposed a detailed framework for the coreference resolution task, in which we augmented the traditional learning model by incorporating non-anaphors into training. Lastly, we explored various sources of knowledge for coreference resolution, particularly, those that can deal with the complexity of biomedical texts. The evaluation on the MedCo corpus showed promising results. Our coreference resolution system achieved a high precision of 85.2% with a reasonable recall of 65.3%, obtaining an F-measure of 73.9%. The results also suggested that our augmented learning model significantly boosted precision (up to 24.0%) without much loss in recall (less than 5%), and brought a gain of over 8% in F-measure.", "which Dataset name ?", "MedCo", 731.0, 736.0], ["In the third shared task of the Computational Approaches to Linguistic Code-Switching (CALCS) workshop, we focus on Named Entity Recognition (NER) on code-switched social-media data. We divide the shared task into two competitions based on the English-Spanish (ENG-SPA) and Modern Standard Arabic-Egyptian (MSA-EGY) language pairs. We use Twitter data and 9 entity types to establish a new dataset for code-switched NER benchmarks. In addition to the CS phenomenon, the diversity of the entities and the social media challenges make the task considerably hard to process. As a result, the best scores of the competitions are 63.76% and 71.61% for ENG-SPA and MSA-EGY, respectively. We present the scores of 9 participants and discuss the most common challenges among submissions.", "which Dataset name ?", "MSA-EGY", 307.0, 314.0], ["Abstract Background Effective response to public health emergencies, such as we are now experiencing with COVID-19, requires data sharing across multiple disciplines and data systems. 
Ontologies offer a powerful data sharing tool, and this holds especially for those ontologies built on the design principles of the Open Biomedical Ontologies Foundry. These principles are exemplified by the Infectious Disease Ontology (IDO), a suite of interoperable ontology modules aiming to provide coverage of all aspects of the infectious disease domain. At its center is IDO Core, a disease- and pathogen-neutral ontology covering just those types of entities and relations that are relevant to infectious diseases generally. IDO Core is extended by disease and pathogen-specific ontology modules. Results To assist the integration and analysis of COVID-19 data, and viral infectious disease data more generally, we have recently developed three new IDO extensions: IDO Virus (VIDO); the Coronavirus Infectious Disease Ontology (CIDO); and an extension of CIDO focusing on COVID-19 (IDO-COVID-19). Reflecting the fact that viruses lack cellular parts, we have introduced into IDO Core the term acellular structure to cover viruses and other acellular entities studied by virologists. We now distinguish between infectious agents \u2013 organisms with an infectious disposition \u2013 and infectious structures \u2013 acellular structures with an infectious disposition. This in turn has led to various updates and refinements of IDO Core\u2019s content. We believe that our work on VIDO, CIDO, and IDO-COVID-19 can serve as a model for yielding greater conformance with ontology building best practices. Conclusions IDO provides a simple recipe for building new pathogen-specific ontologies in a way that allows data about novel diseases to be easily compared, along multiple dimensions, with data represented by existing disease ontologies. The IDO strategy, moreover, supports ontology coordination, providing a powerful method of data integration and sharing that allows physicians, researchers, and public health organizations to respond rapidly and efficiently to current and future public health crises.", "which Dataset name ?", "IDO-COVID-19", 1074.0, 1086.0], ["Cross-domain named entity recognition (NER) models are able to cope with the scarcity issue of NER samples in target domains. However, most of the existing NER benchmarks lack domain-specialized entity types or do not focus on a certain domain, leading to a less effective cross-domain evaluation. To address these obstacles, we introduce a cross-domain NER dataset (CrossNER), a fully-labeled collection of NER data spanning over five diverse domains with specialized entity categories for different domains. Additionally, we also provide a domain-related corpus since using it to continue pre-training language models (domain-adaptive pre-training) is effective for the domain adaptation. We then conduct comprehensive experiments to explore the effectiveness of leveraging different levels of the domain corpus and pre-training strategies to do domain-adaptive pre-training for the cross-domain task. Results show that focusing on the fractional corpus containing domain-specialized entities and utilizing a more challenging pre-training strategy in domain-adaptive pre-training are beneficial for the NER domain adaptation, and our proposed method can consistently outperform existing cross-domain NER baselines. Nevertheless, experiments also illustrate the challenge of this cross-domain NER task. We hope that our dataset and baselines will catalyze research in the NER domain adaptation area. 
The code and data are available at this https URL.", "which Dataset name ?", "CrossNER", 367.0, 375.0], ["Abstract Background Named entity recognition (NER) is an important first step for text mining the biomedical literature. Evaluating the performance of biomedical NER systems is impossible without a standardized test corpus. The annotation of such a corpus for gene/protein name NER is a difficult process due to the complexity of gene/protein names. We describe the construction and annotation of GENETAG, a corpus of 20K MEDLINE\u00ae sentences for gene/protein NER. 15K GENETAG sentences were used for the BioCreAtIvE Task 1A Competition. Results To ensure heterogeneity of the corpus, MEDLINE sentences were first scored for term similarity to documents with known gene names, and 10K high- and 10K low-scoring sentences were chosen at random. The original 20K sentences were run through a gene/protein name tagger, and the results were modified manually to reflect a wide definition of gene/protein names subject to a specificity constraint, a rule that required the tagged entities to refer to specific entities. Each sentence in GENETAG was annotated with acceptable alternatives to the gene/protein names it contained, allowing for partial matching with semantic constraints. Semantic constraints are rules requiring the tagged entity to contain its true meaning in the sentence context. Application of these constraints results in a more meaningful measure of the performance of an NER system than unrestricted partial matching. Conclusion The annotation of GENETAG required intricate manual judgments by annotators which hindered tagging consistency. The data were pre-segmented into words, to provide indices supporting comparison of system responses to the \"gold standard\". However, character-based indices would have been more robust than word-based indices. GENETAG Train, Test and Round1 data and ancillary programs are freely available at ftp://ftp.ncbi.nlm.nih.gov/pub/tanabe/GENETAG.tar.gz. A newer version of GENETAG-05 will be released later this year.", "which Dataset name ?", "GENETAG", 397.0, 404.0], ["In the third shared task of the Computational Approaches to Linguistic Code-Switching (CALCS) workshop, we focus on Named Entity Recognition (NER) on code-switched social-media data. We divide the shared task into two competitions based on the English-Spanish (ENG-SPA) and Modern Standard Arabic-Egyptian (MSA-EGY) language pairs. We use Twitter data and 9 entity types to establish a new dataset for code-switched NER benchmarks. In addition to the CS phenomenon, the diversity of the entities and the social media challenges make the task considerably hard to process. As a result, the best scores of the competitions are 63.76% and 71.61% for ENG-SPA and MSA-EGY, respectively. We present the scores of 9 participants and discuss the most common challenges among submissions.", "which Dataset name ?", "ENG-SPA", 261.0, 268.0], ["We introduce a multi-task setup of identifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks and develop a unified framework called SciIE with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. 
We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.", "which Concept types ?", "Task", 21.0, 25.0], ["This paper introduces the ACL Reference Dataset for Terminology Extraction and Classification, version 2.0 (ACL RD-TEC 2.0). The ACL RD-TEC 2.0 has been developed with the aim of providing a benchmark for the evaluation of term and entity recognition tasks based on specialised text from the computational linguistics domain. This release of the corpus consists of 300 abstracts from articles in the ACL Anthology Reference Corpus, published between 1978\u20132006. In these abstracts, terms (i.e., single or multi-word lexical units with a specialised meaning) are manually annotated. In addition to their boundaries in running text, annotated terms are classified into one of the seven categories method, tool, language resource (LR), LR product, model, measures and measurements, and other. To assess the quality of the annotations and to determine the difficulty of this annotation task, more than 171 of the abstracts are annotated twice, independently, by each of the two annotators. In total, 6,818 terms are identified and annotated in more than 1300 sentences, resulting in a specialised vocabulary made of 3,318 lexical forms, mapped to 3,471 concepts. We explain the development of the annotation guidelines and discuss some of the challenges we encountered in this annotation task.", "which Concept types ?", "Measures and Measurements", 751.0, 776.0], ["While the fast-paced inception of novel tasks and new datasets helps foster active research in a community towards interesting directions, keeping track of the abundance of research activity in different areas on different datasets is likely to become increasingly difficult. The community could greatly benefit from an automatic system able to summarize scientific results, e.g., in the form of a leaderboard. In this paper we build two datasets and develop a framework (TDMS-IE) aimed at automatically extracting task, dataset, metric and score from NLP papers, towards the automatic construction of leaderboards. Experiments show that our model outperforms several baselines by a large margin. Our model is a first step towards automatic leaderboard construction, e.g., in the NLP domain.", "which Concept types ?", "Task", 515.0, 519.0], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). 
This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Concept types ?", "Mutant", 375.0, 381.0], ["While the fast-paced inception of novel tasks and new datasets helps foster active research in a community towards interesting directions, keeping track of the abundance of research activity in different areas on different datasets is likely to become increasingly difficult. The community could greatly benefit from an automatic system able to summarize scientific results, e.g., in the form of a leaderboard. In this paper we build two datasets and develop a framework (TDMS-IE) aimed at automatically extracting task, dataset, metric and score from NLP papers, towards the automatic construction of leaderboards. Experiments show that our model outperforms several baselines by a large margin. Our model is a first step towards automatic leaderboard construction, e.g., in the NLP domain.", "which Concept types ?", "Dataset", 521.0, 528.0], ["We present the design, preparation, results and analysis of the Cancer Genetics (CG) event extraction task, a main task of the BioNLP Shared Task (ST) 2013. The CG task is an information extraction task targeting the recognition of events in text, represented as structured n-ary associations of given physical entities. In addition to addressing the cancer domain, the CG task is differentiated from previous event extraction tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple levels of biological organization, ranging from the molecular through the cellular and organ levels up to whole organisms. Final test set submissions were accepted from six teams. The highest-performing system achieved an F-score of 55.4%. This level of performance is broadly comparable with the state of the art for established molecular-level extraction tasks, demonstrating that event extraction resources and methods generalize well to higher levels of biological organization and are applicable to the analysis of scientific texts on cancer. The CG task continues as an open challenge to all interested parties, with tools and resources available from http://2013.bionlp-st.org/.", "which Concept types ?", "Organism", NaN, NaN], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Concept types ?", "ExperimentalMethod", 300.0, 318.0], ["This paper presents the preparation, resources, results and analysis of the Epigenetics and Post-translational Modifications (EPI) task, a main task of the BioNLP Shared Task 2011. 
The task concerns the extraction of detailed representations of 14 protein and DNA modification events, the catalysis of these reactions, and the identification of instances of negated or speculatively stated event instances. Seven teams submitted final results to the EPI task in the shared task, with the highest-performing system achieving 53% F-score in the full task and 69% F-score in the extraction of a simplified set of core event arguments.", "which Concept types ?", "Protein", 248.0, 255.0], ["MOTIVATION The MEDLINE database of biomedical abstracts contains scientific knowledge about thousands of interacting genes and proteins. Automated text processing can aid in the comprehension and synthesis of this valuable information. The fundamental task of identifying gene and protein names is a necessary first step towards making full use of the information encoded in biomedical text. This remains a challenging task due to the irregularities and ambiguities in gene and protein nomenclature. We propose to approach the detection of gene and protein names in scientific abstracts as part-of-speech tagging, the most basic form of linguistic corpus annotation. RESULTS We present a method for tagging gene and protein names in biomedical text using a combination of statistical and knowledge-based strategies. This method incorporates automatically generated rules from a transformation-based part-of-speech tagger, and manually generated rules from morphological clues, low frequency trigrams, indicator terms, suffixes and part-of-speech information. Results of an experiment on a test corpus of 56K MEDLINE documents demonstrate that our method to extract gene and protein names can be applied to large sets of MEDLINE abstracts, without the need for special conditions or human experts to predetermine relevant subsets. AVAILABILITY The programs are available on request from the authors.", "which Concept types ?", "Protein", 281.0, 288.0], ["The Clinical E-Science Framework (CLEF) project is building a framework for the capture, integration and presentation of clinical information: for clinical research, evidence-based health care and genotype-meets-phenotype informatics. A significant portion of the information required by such a framework originates as text, even in EHR-savvy organizations. CLEF uses Information Extraction (IE) to make this unstructured information available. An important part of IE is the identification of semantic entities and relationships. Typical approaches require human annotated documents to provide both evaluation standards and material for system development. CLEF has a corpus of clinical narratives, histopathology reports and imaging reports from 20 thousand patients. We describe the selection of a subset of this corpus for manual annotation of clinical entities and relationships. We describe an annotation methodology and report encouraging initial results of inter-annotator agreement. Comparisons are made between different text sub-genres, and between annotators with different skills.", "which Concept types ?", "Result", NaN, NaN], ["The mentions of human health perturbations such as the diseases and adverse effects denote a special entity class in the biomedical literature. They help in understanding the underlying risk factors and develop a preventive rationale. The recognition of these named entities in texts through dictionary-based approaches relies on the availability of appropriate terminological resources. 
Although few resources are publicly available, not all are suitable for the text mining needs. Therefore, this work provides an overview of the well-known resources with respect to human diseases and adverse effects such as the MeSH, MedDRA, ICD-10, SNOMED CT, and UMLS. Individual dictionaries are generated from these resources and their performance in recognizing the named entities is evaluated over a manually annotated corpus. In addition, the steps for curating the dictionaries, rule-based acronym disambiguation and their impact on the dictionary performance is discussed. The results show that the MedDRA and UMLS achieve the best recall. Besides this, MedDRA provides an additional benefit of achieving a higher precision. The combination of search results of all the dictionaries achieve a considerably high recall. The corpus is available on http://www.scai.fraunhofer.de/disease-ae-corpus.html", "which Concept types ?", "Adverse Effects", 68.0, 83.0], ["Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci (Software Mentions in Science), a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.", "which Concept types ?", "Software", 16.0, 24.0], ["This paper presents the fourth edition of the Bacteria Biotope task at BioNLP Open Shared Tasks 2019. The task focuses on the extraction of the locations and phenotypes of microorganisms from PubMed abstracts and full-text excerpts, and the characterization of these entities with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on biodiversity for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, and the challenge organization. 
We also provide an analysis of the results obtained by participants, and inspect the evolution of the results since the last edition in 2016.", "which Concept types ?", "Phenotype", NaN, NaN], ["The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records presented three tasks: a concept extraction task focused on the extraction of medical concepts from patient reports; an assertion classification task focused on assigning assertion types for medical problem concepts; and a relation classification task focused on assigning relation types that hold between medical problems, tests, and treatments. i2b2 and the VA provided an annotated reference standard corpus for the three tasks. Using this reference standard, 22 systems were developed for concept extraction, 21 for assertion classification, and 16 for relation classification. These systems showed that machine learning approaches could be augmented with rule-based systems to determine concepts, assertions, and relations. Depending on the task, the rule-based systems can either provide input for machine learning or post-process the output of machine learning. Ensembles of classifiers, information from unlabeled data, and external knowledge sources can help when the training data are inadequate.", "which Concept types ?", "Test", NaN, NaN], ["A considerable effort has been made to extract biological and chemical entities, as well as their relationships, from the scientific literature, either manually through traditional literature curation or by using information extraction and text mining technologies. Medicinal chemistry patents contain a wealth of information, for instance to uncover potential biomarkers that might play a role in cancer treatment and prognosis. However, current biomedical annotation databases do not cover such information, partly due to limitations of publicly available biomedical patent mining software. As part of the BioCreative V CHEMDNER patents track, we present the results of the first named entity recognition (NER) assignment carried out to detect mentions of chemical compounds and genes/proteins in running patent text. More specifically, this task aimed to evaluate the performance of automatic name recognition strategies capable of isolating chemical names and gene and gene product mentions from surrounding text within patent titles and abstracts. A total of 22 unique teams submitted results for at least one of the three CHEMDNER subtasks. The first subtask, called the CEMP (chemical entity mention in patents) task, focused on the detection of chemical named entity mentions in patents, requesting teams to return the start and end indices corresponding to all the chemical entities found in a given record. A total of 21 teams submitted 93 runs, for this subtask. The top performing team reached an f-measure of 0.89 with a precision of 0.87 and a recall of 0.91. The CPD (chemical passage detection) task required the classification of patent titles and abstracts whether they do or do not contain chemical compound mentions. Nine teams returned predictions for this task (40 runs). The top run in terms of Matthew\u2019s correlation coefficient (MCC) had a score of 0.88, the highest sensitivity.", "which Concept types ?", "gene", 964.0, 968.0], ["We present a method for characterizing a research work in terms of its focus, domain of application, and techniques used. 
We show how tracing these aspects over time provides a novel measure of the influence of research communities on each other. We extract these characteristics by matching semantic extraction patterns, learned using bootstrapping, to the dependency trees of sentences in an article\u2019s", "which Concept types ?", "Technique", NaN, NaN], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Concept types ?", "Fragment", 320.0, 328.0], ["Abstract Background The biological research literature is a major repository of knowledge. As the amount of literature increases, it will get harder to find the information of interest on a particular topic. There has been an increasing amount of work on text mining this literature, but comparing this work is hard because of a lack of standards for making comparisons. To address this, we worked with colleagues at the Protein Design Group, CNB-CSIC, Madrid to develop BioCreAtIvE (Critical Assessment for Information Extraction in Biology), an open common evaluation of systems on a number of biological text mining tasks. We report here on task 1A, which deals with finding mentions of genes and related entities in text. \"Finding mentions\" is a basic task, which can be used as a building block for other text mining tasks. The task makes use of data and evaluation software provided by the (US) National Center for Biotechnology Information (NCBI). Results 15 teams took part in task 1A. A number of teams achieved scores over 80% F-measure (balanced precision and recall). The teams that tried to use their task 1A systems to help on other BioCreAtIvE tasks reported mixed results. Conclusion The 80% plus F-measure results are good, but still somewhat lag the best scores achieved in some other domains such as newswire, due in part to the complexity and length of gene names, compared to person or organization names in newswire.", "which Concept types ?", "Gene", 1373.0, 1377.0], ["We describe the SemEval task of extracting keyphrases and relations between them from scientific documents, which is crucial for understanding which publications describe which processes, tasks and materials. Although this was a new task, we had a total of 26 submissions across 3 evaluation scenarios. We expect the task and the findings reported in this paper to be relevant for researchers working on understanding scientific content, as well as the broader knowledge base population and information extraction communities.", "which Concept types ?", "Task", 24.0, 28.0], ["The mentions of human health perturbations such as the diseases and adverse effects denote a special entity class in the biomedical literature. 
They help in understanding the underlying risk factors and develop a preventive rationale. The recognition of these named entities in texts through dictionary-based approaches relies on the availability of appropriate terminological resources. Although few resources are publicly available, not all are suitable for the text mining needs. Therefore, this work provides an overview of the well known resources with respect to human diseases and adverse effects such as the MeSH, MedDRA, ICD-10, SNOMED CT, and UMLS. Individual dictionaries are generated from these resources and their performance in recognizing the named entities is evaluated over a manually annotated corpus. In addition, the steps for curating the dictionaries, rule-based acronym disambiguation and their impact on the dictionary performance is discussed. The results show that the MedDRA and UMLS achieve the best recall. Besides this, MedDRA provides an additional benefit of achieving a higher precision. The combination of search results of all the dictionaries achieve a considerably high recall. The corpus is available on http://www.scai.fraunhofer.de/disease-ae-corpus.html", "which Concept types ?", "Disease", 1273.0, 1280.0], ["Although it has become common to assess publications and researchers by means of their citation count (e.g., using the h-index), measuring the impact of scientific methods and datasets (e.g., using an \u201ch-index for datasets\u201d) has been performed only to a limited extent. This is not surprising because the usage information of methods and datasets is typically not explicitly provided by the authors, but hidden in a publication\u2019s text. In this paper, we propose an approach to identifying methods and datasets in texts that have actually been used by the authors. Our approach first recognizes datasets and methods in the text by means of a domain-specific named entity recognition method with minimal human interaction. It then classifies these mentions into used vs. non-used based on the textual contexts. The obtained labels are aggregated on the document level and integrated into the Microsoft Academic Knowledge Graph modeling publications\u2019 metadata. In experiments based on the Microsoft Academic Graph, we show that both method and dataset mentions can be identified and correctly classified with respect to their usage to a high degree. Overall, our approach facilitates method and dataset recommendation, enhanced paper recommendation, and scientific impact quantification. It can be extended in such a way that it can identify mentions of any entity type (e.g., task).", "which Concept types ?", "Dataset", 1041.0, 1048.0], ["We introduce a multi-task setup of identifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks and develop a unified framework called SciIE with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.", "which Concept types ?", "Task", 21.0, 25.0], ["One of the biomedical entity types of relevance for medicine or biosciences are chemical compounds and drugs. 
The correct detection of these entities is critical for other text mining applications building on them, such as adverse drug-reaction detection, medication-related fake news or drug-target extraction. Although a significant effort was made to detect mentions of drugs/chemicals in English texts, so far only very limited attempts were made to recognize them in medical documents in other languages. Taking into account the growing amount of medical publications and clinical records written in Spanish, we have organized the first shared task on detecting drug and chemical entities in Spanish medical documents. Additionally, we included a clinical concept-indexing sub-track asking teams to return SNOMED-CT identifiers related to drugs/chemicals for a collection of documents. For this task, named PharmaCoNER, we generated annotation guidelines together with a corpus of 1,000 manually annotated clinical case studies. A total of 22 teams participated in the sub-track 1 (77 system runs), and 7 teams in the sub-track 2 (19 system runs). Top scoring teams used sophisticated deep learning approaches yielding very competitive results with F-measures above 0.91. These results indicate that there is a real interest in promoting biomedical text mining efforts beyond English. We foresee that the PharmaCoNER annotation guidelines, corpus and participant systems will foster the development of new resources for clinical and biomedical text mining systems of Spanish medical data.", "which Concept types ?", "drug", 228.0, 232.0], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Concept types ?", "Fusion", 330.0, 336.0], ["While the fast-paced inception of novel tasks and new datasets helps foster active research in a community towards interesting directions, keeping track of the abundance of research activity in different areas on different datasets is likely to become increasingly difficult. The community could greatly benefit from an automatic system able to summarize scientific results, e.g., in the form of a leaderboard. In this paper we build two datasets and develop a framework (TDMS-IE) aimed at automatically extracting task, dataset, metric and score from NLP papers, towards the automatic construction of leaderboards. Experiments show that our model outperforms several baselines by a large margin. Our model is a first step towards automatic leaderboard construction, e.g., in the NLP domain.", "which Concept types ?", "Score", 541.0, 546.0], ["This paper presents the Bacteria Biotope task of the BioNLP Shared Task 2016, which follows the previous 2013 and 2011 editions. 
The task focuses on the extraction of the locations (biotopes and geographical places) of bacteria from PubMed abstracts and the characterization of bacteria and their associated habitats with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on bacteria habitats for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, the challenge organization, and the evaluation metrics. We also provide an analysis of the results obtained by participants.", "which Concept types ?", "Geographical", 195.0, 207.0], ["We present a method for characterizing a research work in terms of its focus, domain of application, and techniques used. We show how tracing these aspects over time provides a novel measure of the influence of research communities on each other. We extract these characteristics by matching semantic extraction patterns, learned using bootstrapping, to the dependency trees of sentences in an article\u2019s", "which Concept types ?", "Domain", 78.0, 84.0], ["This paper presents the Bacteria Biotope task of the BioNLP Shared Task 2016, which follows the previous 2013 and 2011 editions. The task focuses on the extraction of the locations (biotopes and geographical places) of bacteria from PubMed abstracts and the characterization of bacteria and their associated habitats with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on bacteria habitats for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, the challenge organization, and the evaluation metrics. We also provide an analysis of the results obtained by participants.", "which Concept types ?", "Bacteria", 24.0, 32.0], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Concept types ?", "Modification", 351.0, 363.0], ["Abstract Background Named entity recognition (NER) is an important first step for text mining the biomedical literature. Evaluating the performance of biomedical NER systems is impossible without a standardized test corpus. The annotation of such a corpus for gene/protein name NER is a difficult process due to the complexity of gene/protein names. We describe the construction and annotation of GENETAG, a corpus of 20K MEDLINE\u00ae sentences for gene/protein NER. 15K GENETAG sentences were used for the BioCreAtIvE Task 1A Competition. 
Results To ensure heterogeneity of the corpus, MEDLINE sentences were first scored for term similarity to documents with known gene names, and 10K high- and 10K low-scoring sentences were chosen at random. The original 20K sentences were run through a gene/protein name tagger, and the results were modified manually to reflect a wide definition of gene/protein names subject to a specificity constraint, a rule that required the tagged entities to refer to specific entities. Each sentence in GENETAG was annotated with acceptable alternatives to the gene/protein names it contained, allowing for partial matching with semantic constraints. Semantic constraints are rules requiring the tagged entity to contain its true meaning in the sentence context. Application of these constraints results in a more meaningful measure of the performance of an NER system than unrestricted partial matching. Conclusion The annotation of GENETAG required intricate manual judgments by annotators which hindered tagging consistency. The data were pre-segmented into words, to provide indices supporting comparison of system responses to the \"gold standard\". However, character-based indices would have been more robust than word-based indices. GENETAG Train, Test and Round1 data and ancillary programs are freely available at ftp://ftp.ncbi.nlm.nih.gov/pub/tanabe/GENETAG.tar.gz. A newer version of GENETAG-05 will be released later this year.", "which Concept types ?", "Gene", 260.0, 264.0], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Concept types ?", "Protein", 383.0, 390.0], ["Despite significant progress in natural language processing, machine learning models require substantial expert-annotated training data to perform well in tasks such as named entity recognition (NER) and entity relations extraction. Furthermore, NER is often more complicated when working with scientific text. For example, in polymer science, chemical structure may be encoded using nonstandard naming conventions, the same concept can be expressed using many different terms (synonymy), and authors may refer to polymers with ad-hoc labels. These challenges, which are not unique to polymer science, make it difficult to generate training data, as specialized skills are needed to label text correctly. We have previously designed polyNER, a semi-automated system for efficient identification of scientific entities in text. PolyNER applies word embedding models to generate entity-rich corpora for productive expert labeling, and then uses the resulting labeled data to bootstrap a context-based classifier. PolyNER facilitates a labeling process that is otherwise tedious and expensive. 
Here, we use active learning to efficiently obtain more annotations from experts and improve performance. Our approach requires just five hours of expert time to achieve discrimination capacity comparable to that of a state-of-the-art chemical NER toolkit.", "which Concept types ?", "Polymers", 513.0, 521.0], ["BioC is a simple XML format for text, annotations and relations, and was developed to achieve interoperability for biomedical text processing. Following the success of BioC in BioCreative IV, the BioCreative V BioC track addressed a collaborative task to build an assistant system for BioGRID curation. In this paper, we describe the framework of the collaborative BioC task and discuss our findings based on the user survey. This track consisted of eight subtasks including gene/protein/organism named entity recognition, protein\u2013protein/genetic interaction passage identification and annotation visualization. Using BioC as their data-sharing and communication medium, nine teams, world-wide, participated and contributed either new methods or improvements of existing tools to address different subtasks of the BioC track. Results from different teams were shared in BioC and made available to other teams as they addressed different subtasks of the track. In the end, all submitted runs were merged using a machine learning classifier to produce an optimized output. The biocurator assistant system was evaluated by four BioGRID curators in terms of practical usability. The curators\u2019 feedback was overall positive and highlighted the user-friendly design and the convenient gene/protein curation tool based on text mining. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-1-bioc/", "which Concept types ?", "protein", 480.0, 487.0], ["A crucial step toward the goal of automatic extraction of propositional information from natural language text is the identification of semantic relations between constituents in sentences. We examine the problem of distinguishing among seven relation types that can occur between the entities \"treatment\" and \"disease\" in bioscience text, and the problem of identifying such entities. We compare five generative graphical models and a neural network, using lexical, syntactic, and semantic features, finding that the latter help achieve high classification accuracy.", "which Concept types ?", "Disease", 311.0, 318.0], ["Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). 
The best systems achieved an F-score of 86.46% for the DNER task\u2014a result that approaches the human inter-annotator agreement (0.8875)\u2014and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system\u2019s ability to return real-time results: the average response time for each team\u2019s DNER and CID web service systems were 5.6 and 9.3 s, respectively. Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/", "which Concept types ?", "diseases", 29.0, 37.0], ["With the information overload in genome-related field, there is an increasing need for natural language processing technology to extract information from literature and various attempts of information extraction using NLP has been being made. We are developing the necessary resources including domain ontology and annotated corpus from research abstracts in MEDLINE database (GENIA corpus). We are building the ontology and the corpus simultaneously, using each other. In this paper we report on our new corpus, its ontological basis, annotation scheme, and statistics of annotated objects. We also describe the tools used for corpus annotation and management.", "which Concept types ?", "Source", NaN, NaN], ["The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records presented three tasks: a concept extraction task focused on the extraction of medical concepts from patient reports; an assertion classification task focused on assigning assertion types for medical problem concepts; and a relation classification task focused on assigning relation types that hold between medical problems, tests, and treatments. i2b2 and the VA provided an annotated reference standard corpus for the three tasks. Using this reference standard, 22 systems were developed for concept extraction, 21 for assertion classification, and 16 for relation classification. These systems showed that machine learning approaches could be augmented with rule-based systems to determine concepts, assertions, and relations. Depending on the task, the rule-based systems can either provide input for machine learning or post-process the output of machine learning. Ensembles of classifiers, information from unlabeled data, and external knowledge sources can help when the training data are inadequate.", "which Concept types ?", "Treatment", NaN, NaN], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. 
Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Concept types ?", "CellLine", 238.0, 246.0], ["We describe the SemEval task of extracting keyphrases and relations between them from scientific documents, which is crucial for understanding which publications describe which processes, tasks and materials. Although this was a new task, we had a total of 26 submissions across 3 evaluation scenarios. We expect the task and the findings reported in this paper to be relevant for researchers working on understanding scientific content, as well as the broader knowledge base population and information extraction communities.", "which Concept types ?", "Process", NaN, NaN], ["This paper studies the importance of identifying and categorizing scientific concepts as a way to achieve a deeper understanding of the research literature of a scientific community. To reach this goal, we propose an unsupervised bootstrapping algorithm for identifying and categorizing mentions of concepts. We then propose a new clustering algorithm that uses citations' context as a way to cluster the extracted mentions into coherent concepts. Our evaluation of the algorithms against gold standards shows significant improvement over state-of-the-art results. More importantly, we analyze the computational linguistic literature using the proposed algorithms and show four different ways to summarize and understand the research community which are difficult to obtain using existing techniques.", "which Concept types ?", "Technique", NaN, NaN], ["Extracting information from full documents is an important problem in many domains, but most previous work focuses on identifying relationships within a sentence or a paragraph. It is challenging to create a large-scale information extraction (IE) dataset at the document level since it requires an understanding of the whole document to annotate entities and their document-level relationships that usually span beyond sentences or even sections. In this paper, we introduce SciREX, a document-level IE dataset that encompasses multiple IE tasks, including salient entity identification and document-level N-ary relation identification from scientific articles. We annotate our dataset by integrating automatic and human annotations, leveraging existing scientific knowledge resources. We develop a neural model as a strong baseline that extends previous state-of-the-art IE models to document-level IE. Analyzing the model performance shows a significant gap between human performance and current baselines, inviting the community to use our dataset as a challenge to develop document-level IE models. 
Our data and code are publicly available at https://github.com/allenai/SciREX.", "which Concept types ?", "Dataset", 246.0, 253.0], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Concept types ?", "Complex", 248.0, 255.0], ["A considerable effort has been made to extract biological and chemical entities, as well as their relationships, from the scientific literature, either manually through traditional literature curation or by using information extraction and text mining technologies. Medicinal chemistry patents contain a wealth of information, for instance to uncover potential biomarkers that might play a role in cancer treatment and prognosis. However, current biomedical annotation databases do not cover such information, partly due to limitations of publicly available biomedical patent mining software. As part of the BioCreative V CHEMDNER patents track, we present the results of the first named entity recognition (NER) assignment carried out to detect mentions of chemical compounds and genes/proteins in running patent text. More specifically, this task aimed to evaluate the performance of automatic name recognition strategies capable of isolating chemical names and gene and gene product mentions from surrounding text within patent titles and abstracts. A total of 22 unique teams submitted results for at least one of the three CHEMDNER subtasks. The first subtask, called the CEMP (chemical entity mention in patents) task, focused on the detection of chemical named entity mentions in patents, requesting teams to return the start and end indices corresponding to all the chemical entities found in a given record. A total of 21 teams submitted 93 runs for this subtask. The top performing team reached an f-measure of 0.89 with a precision of 0.87 and a recall of 0.91. The CPD (chemical passage detection) task required the classification of patent titles and abstracts as to whether they do or do not contain chemical compound mentions. Nine teams returned predictions for this task (40 runs). The top run in terms of Matthew\u2019s correlation coefficient (MCC) had a score of 0.88, the highest sensitivity.", "which Concept types ?", "protein", NaN, NaN], ["We present a database of annotated biomedical text corpora merged into a portable data structure with uniform conventions. MedTag combines three corpora, MedPost, ABGene and GENETAG, within a common relational database data model. The GENETAG corpus has been modified to reflect new definitions of genes and proteins. The MedPost corpus has been updated to include 1,000 additional sentences from the clinical medicine domain. 
All data have been updated with original MEDLINE text excerpts, PubMed identifiers, and tokenization independence to facilitate data accuracy, consistency and usability. The data are available in flat files along with software to facilitate loading the data into a relational SQL database from ftp://ftp.ncbi.nlm.nih.gov/pub/lsmith/MedTag/medtag.tar.gz.", "which Concept types ?", "Gene", NaN, NaN], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Concept types ?", "DrugCompound", 286.0, 298.0], ["Natural language processing (NLP) is widely applied in biological domains to retrieve information from publications. Systems to address numerous applications exist, such as biomedical named entity recognition (BNER), named entity normalization (NEN) and protein-protein interaction extraction (PPIE). High-quality datasets can assist the development of robust and reliable systems; however, due to the endless applications and evolving techniques, the annotations of benchmark datasets may become outdated and inappropriate. In this study, we first review commonly used BNER datasets and their potential annotation problems such as inconsistency and low portability. Then, we introduce a revised version of the JNLPBA dataset that solves potential problems in the original and use state-of-the-art named entity recognition systems to evaluate its portability to different kinds of biomedical literature, including protein-protein interaction and biology events. Lastly, we introduce an ensembled biomedical entity dataset (EBED) by extending the revised JNLPBA dataset with PubMed Central full-text paragraphs, figure captions and patent abstracts. This EBED is a multi-task dataset that covers annotations including gene, disease and chemical entities. In total, it contains 85000 entity mentions, 25000 entity mentions with database identifiers and 5000 attribute tags. To demonstrate the usage of the EBED, we review the BNER track from the AI CUP Biomedical Paper Analysis challenge. Availability: The revised JNLPBA dataset is available at https://iasl-btm.iis.sinica.edu.tw/BNER/Content/Revised_JNLPBA.zip. The EBED dataset is available at https://iasl-btm.iis.sinica.edu.tw/BNER/Content/AICUP_EBED_dataset.rar. Contact: Email: thtsai@g.ncu.edu.tw, Tel. 886-3-4227151 ext. 35203, Fax: 886-3-422-2681 Email: hsu@iis.sinica.edu.tw, Tel. 886-2-2788-3799 ext. 
2211, Fax: 886-2-2782-4814 Supplementary information: Supplementary data are available at Briefings in Bioinformatics online.", "which Concept types ?", "Protein", 254.0, 261.0], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Concept types ?", "GOMOP", 338.0, 343.0], ["MOTIVATION The MEDLINE database of biomedical abstracts contains scientific knowledge about thousands of interacting genes and proteins. Automated text processing can aid in the comprehension and synthesis of this valuable information. The fundamental task of identifying gene and protein names is a necessary first step towards making full use of the information encoded in biomedical text. This remains a challenging task due to the irregularities and ambiguities in gene and protein nomenclature. We propose to approach the detection of gene and protein names in scientific abstracts as part-of-speech tagging, the most basic form of linguistic corpus annotation. RESULTS We present a method for tagging gene and protein names in biomedical text using a combination of statistical and knowledge-based strategies. This method incorporates automatically generated rules from a transformation-based part-of-speech tagger, and manually generated rules from morphological clues, low frequency trigrams, indicator terms, suffixes and part-of-speech information. Results of an experiment on a test corpus of 56K MEDLINE documents demonstrate that our method to extract gene and protein names can be applied to large sets of MEDLINE abstracts, without the need for special conditions or human experts to predetermine relevant subsets. AVAILABILITY The programs are available on request from the authors.", "which Concept types ?", "Gene", 272.0, 276.0], ["Abstract Background Named entity recognition (NER) is an important first step for text mining the biomedical literature. Evaluating the performance of biomedical NER systems is impossible without a standardized test corpus. The annotation of such a corpus for gene/protein name NER is a difficult process due to the complexity of gene/protein names. We describe the construction and annotation of GENETAG, a corpus of 20K MEDLINE \u00ae sentences for gene/protein NER. 15K GENETAG sentences were used for the BioCreAtIvE Task 1A Competition. Results To ensure heterogeneity of the corpus, MEDLINE sentences were first scored for term similarity to documents with known gene names, and 10K high- and 10K low-scoring sentences were chosen at random. 
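The GENETAG entry in progress here describes scoring MEDLINE sentences for term similarity to documents with known gene names before sampling high- and low-scoring sentences. A hedged sketch of that selection idea using tf-idf cosine similarity; the reference documents and candidate sentences below are toy stand-ins, not the original corpus-construction code.

```python
# Score candidate sentences by their maximum tf-idf cosine similarity to a set
# of reference documents that are known to contain gene names, then rank them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

gene_docs = ["BRCA1 mutations increase cancer risk", "expression of TP53 in tumour cells"]
candidates = ["the p53 protein regulates apoptosis", "patients were enrolled in the study"]

vec = TfidfVectorizer().fit(gene_docs + candidates)
scores = cosine_similarity(vec.transform(candidates), vec.transform(gene_docs)).max(axis=1)
for sentence, score in sorted(zip(candidates, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {sentence}")
```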
The original 20K sentences were run through a gene/protein name tagger, and the results were modified manually to reflect a wide definition of gene/protein names subject to a specificity constraint, a rule that required the tagged entities to refer to specific entities. Each sentence in GENETAG was annotated with acceptable alternatives to the gene/protein names it contained, allowing for partial matching with semantic constraints. Semantic constraints are rules requiring the tagged entity to contain its true meaning in the sentence context. Application of these constraints results in a more meaningful measure of the performance of an NER system than unrestricted partial matching. Conclusion The annotation of GENETAG required intricate manual judgments by annotators, which hindered tagging consistency. The data were pre-segmented into words to provide indices supporting comparison of system responses to the "gold standard". However, character-based indices would have been more robust than word-based indices. GENETAG Train, Test and Round1 data and ancillary programs are freely available at ftp://ftp.ncbi.nlm.nih.gov/pub/tanabe/GENETAG.tar.gz. A newer version, GENETAG-05, will be released later this year.", "which Concept types ?", "Protein", 265.0, 272.0], ["Although it has become common to assess publications and researchers by means of their citation count (e.g., using the h-index), measuring the impact of scientific methods and datasets (e.g., using an \u201ch-index for datasets\u201d) has been performed only to a limited extent. This is not surprising because the usage information of methods and datasets is typically not explicitly provided by the authors, but hidden in a publication\u2019s text. In this paper, we propose an approach to identifying methods and datasets in texts that have actually been used by the authors. Our approach first recognizes datasets and methods in the text by means of a domain-specific named entity recognition method with minimal human interaction. It then classifies these mentions into used vs. non-used based on the textual contexts. The obtained labels are aggregated on the document level and integrated into the Microsoft Academic Knowledge Graph modeling publications\u2019 metadata. In experiments based on the Microsoft Academic Graph, we show that both method and dataset mentions can be identified and correctly classified with respect to their usage to a high degree. Overall, our approach facilitates method and dataset recommendation, enhanced paper recommendation, and scientific impact quantification. It can be extended in such a way that it can identify mentions of any entity type (e.g., task).", "which Concept types ?", "Method", 682.0, 688.0], ["The active gene annotation corpus (AGAC) was developed to support knowledge discovery for drug repurposing. Based on the corpus, the AGAC track of the BioNLP Open Shared Tasks 2019 was organized, to facilitate cross-disciplinary collaboration across BioNLP and Pharmacoinformatics communities, for drug repurposing. The AGAC track consists of three subtasks: 1) named entity recognition, 2) thematic relation extraction, and 3) loss of function (LOF) / gain of function (GOF) topic classification. The AGAC track was participated in by five teams, whose performance is compared and analyzed. 
The results revealed substantial room for improvement in the design of the task, which we analyzed in terms of \u201cimbalanced data\u201d, \u201cselective annotation\u201d and \u201clatent topic annotation\u201d.", "which Concept types ?", "Gene", 11.0, 15.0], ["The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records presented three tasks: a concept extraction task focused on the extraction of medical concepts from patient reports; an assertion classification task focused on assigning assertion types for medical problem concepts; and a relation classification task focused on assigning relation types that hold between medical problems, tests, and treatments. i2b2 and the VA provided an annotated reference standard corpus for the three tasks. Using this reference standard, 22 systems were developed for concept extraction, 21 for assertion classification, and 16 for relation classification. These systems showed that machine learning approaches could be augmented with rule-based systems to determine concepts, assertions, and relations. Depending on the task, the rule-based systems can either provide input for machine learning or post-process the output of machine learning. Ensembles of classifiers, information from unlabeled data, and external knowledge sources can help when the training data are inadequate.", "which Concept types ?", "Problem", 288.0, 295.0], ["Chemical compounds like small signal molecules or other biologically active chemical substances are an important entity class in life science publications and patents. The recognition of these named entities relies on appropriate dictionary resources as well as on training and evaluation corpora. In this work we give an overview of publicly available chemical information resources with respect to chemical terminology. The coverage, amount of synonyms, and especially the inclusion of SMILES or InChI are considered. Normalization of different chemical names to a unique structure is only possible with these structure representations. In addition, the generation and annotation of training and testing corpora is presented. We describe a small corpus for the evaluation of dictionaries containing chemical entities as well as a training and test corpus for the recognition of IUPAC and IUPAC-like names, which cannot be fully enumerated in dictionaries. Corpora can be found at http://www.scai.fraunhofer.de/chem-corpora.html", "which classes ?", "IUPAC", 877.0, 882.0], ["We present two related tasks of the BioNLP Shared Tasks 2011: Bacteria Gene Renaming (Rename) and Bacteria Gene Interactions (GI). We detail the objectives, the corpus specification, the evaluation metrics, and we summarize the participants' results. Both tasks are based on PubMed scientific literature abstracts: the Rename task aims at extracting gene name synonyms, and the GI task aims at extracting genic interaction events, mainly about gene transcriptional regulation in bacteria.", "which data source ?", "PubMed", 268.0, 274.0], ["Abstract The text-mining services for kinome curation track, part of BioCreative VI, proposed a competition to assess the effectiveness of text mining to perform literature triage. The track has exploited an unpublished curated data set from the neXtProt database. This data set contained comprehensive annotations for 300 human protein kinases. 
For a given protein and a given curation axis [diseases or gene ontology (GO) biological processes], participants\u2019 systems had to identify and rank relevant articles in a collection of 5.2 M MEDLINE citations (task 1) or 530 000 full-text articles (task 2). Explored strategies comprised named-entity recognition and machine-learning frameworks. For the latter approach, participants developed methods to derive a set of negative instances, as the databases typically do not store articles that were judged as irrelevant by curators. The supervised approaches proposed by the participating groups achieved significant improvements compared to the baseline established in a previous study and compared to a basic PubMed search.", "which data source ?", "neXtProt database", 246.0, 263.0], ["Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meals. However, some of them need human help for calorie estimation, and even if estimation is automatic, food categories are often limited or images from multiple viewpoints are required. Estimating food calories from a food photo with practical accuracy therefore remains an unsolved problem. In this paper, we propose estimating food calories from a food photo by simultaneously learning food calories, categories, ingredients and cooking directions using deep learning. Since food calories generally correlate strongly with food categories, ingredients and cooking directions, we expect that training on them simultaneously boosts performance compared to independent single-task training. To this end, we use a multi-task CNN [1]. In addition, we construct two datasets: one of calorie-annotated recipes collected from Japanese recipe sites on the Web, and one collected from an American recipe site. In our experiments, we trained multi-task and single-task CNNs. The multi-task CNN achieved better performance on both food category estimation and food calorie estimation than the single-task CNNs. For the Japanese recipe dataset, introducing the multi-task CNN improved the correlation coefficient by 0.039, while for the American recipe dataset it rose by 0.090 compared to the single-task CNN.", "which data source ?", "American recipe site", 1110.0, 1130.0], ["In order to deal with heterogeneous knowledge in the medical field, this paper proposes a method which can learn a heavy-weighted medical ontology based on medical glossaries and Web resources. Firstly, terms and taxonomic relations are extracted based on disease and drug glossaries and a light-weighted ontology is constructed. Secondly, non-taxonomic relations are automatically learned from Web resources with linguistic patterns, and the two ontologies (disease and drug) are expanded from the light-weighted level towards the heavy-weighted level. At last, the disease ontology and drug ontology are integrated to create a practical medical ontology. Experiments show that this method can integrate and expand medical terms with taxonomic and different kinds of non-taxonomic relations. Our experiments show that the performance is promising.", "which data source ?", "Web resources", 179.0, 192.0], ["In order to deal with heterogeneous knowledge in the medical field, this paper proposes a method which can learn a heavy-weighted medical ontology based on medical glossaries and Web resources. 
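The medical-ontology entries (the full abstract above and the repeat that begins here) describe learning non-taxonomic relations from Web text with linguistic patterns. A minimal regex-based sketch of that idea, with a single hypothetical "treats" pattern and toy sentences; real systems would use many patterns and syntactic analysis.

```python
import re

# One hypothetical lexico-syntactic pattern for a non-taxonomic "treats" relation.
PATTERN = re.compile(r"(?P<drug>\w+) (?:is used to treat|treats) (?P<disease>\w+)", re.I)

sentences = [
    "Metformin is used to treat diabetes in most guidelines.",
    "Aspirin treats headache effectively.",
]
for s in sentences:
    m = PATTERN.search(s)
    if m:
        print((m["drug"], "treats", m["disease"]))
# -> ('Metformin', 'treats', 'diabetes') and ('Aspirin', 'treats', 'headache')
```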
Firstly, terms and taxonomic relations are extracted based on disease and drug glossaries and a light-weighted ontology is constructed. Secondly, non-taxonomic relations are automatically learned from Web resources with linguistic patterns, and the two ontologies (disease and drug) are expanded from the light-weighted level towards the heavy-weighted level. At last, the disease ontology and drug ontology are integrated to create a practical medical ontology. Experiments show that this method can integrate and expand medical terms with taxonomic and different kinds of non-taxonomic relations. Our experiments show that the performance is promising.", "which data source ?", "Drug glossaries", 268.0, 283.0], ["The BioCreative NLM-Chem track calls for a community effort to fine-tune automated recognition of chemical names in the biomedical literature. Chemical names are one of the most searched biomedical entities in PubMed and \u2013 as highlighted during the COVID-19 pandemic \u2013 their identification may significantly advance research in multiple biomedical subfields. While previous community challenges focused on identifying chemical names mentioned in titles and abstracts, the full text contains valuable additional detail. We organized the BioCreative NLM-Chem track to call for a community effort to address automated chemical entity recognition in full-text articles. The track consisted of two tasks: 1) the Chemical Identification task, and 2) the Chemical Indexing prediction task. For the Chemical Identification task, participants were expected to predict with high accuracy all chemicals mentioned in recently published full-text articles, both span (i.e., named entity recognition) and normalization (i.e., entity linking) using MeSH. For the Chemical Indexing task, participants identified which chemicals should be indexed as topic terms in the NLM article indexing, i.e., appear in the listing of MeSH terms for the document. This manuscript summarizes the BioCreative NLM-Chem track. We received a total of 88 submissions from 17 teams worldwide. The highest performance achieved for the Chemical Identification task was 0.8672 f-score (0.8759 precision, 0.8587 recall) for strict NER performance and 0.8136 f-score (0.8621 precision, 0.7702 recall) for strict normalization performance. The highest performance achieved for the Chemical Indexing task was 0.4825 f-score (0.4397 precision, 0.5344 recall). The NLM-Chem track dataset and other challenge materials are publicly available at https://ftp.ncbi.nlm.nih.gov/pub/lu/BC7-NLM-Chem-track/. This community challenge demonstrated that 1) current advances in deep learning technologies can be utilized to further improve automated prediction accuracy, and 2) the Chemical Indexing task is substantially more challenging. We look forward to further development of biomedical text mining methods to respond to the rapid growth of biomedical literature. Keywords\u2014 biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; text mining; chemical entity recognition; chemical indexing", "which data source ?", "PubMed", 206.0, 212.0], ["By 27 February 2020, the outbreak of coronavirus disease 2019 (COVID\u201019) caused 82 623 confirmed cases and 2858 deaths globally, more than severe acute respiratory syndrome (SARS) (8273 cases, 775 deaths) and Middle East respiratory syndrome (MERS) (1139 cases, 431 deaths) caused in 2003 and 2013, respectively. 
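The COVID-19 entry that begins above quotes 82 623 confirmed cases and 2858 deaths; the 3.46% fatality rate cited in its continuation is simply the crude ratio of the two, as this quick check shows:

```python
# Crude case fatality rate implied by the counts in the entry above
# (deaths / confirmed cases; ignores right-censoring of unresolved cases).
deaths, confirmed = 2858, 82623
print(f"{deaths / confirmed:.2%}")  # 3.46%
```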
COVID\u201019 has spread to 46 countries internationally. The total fatality rate of COVID\u201019 is estimated at 3.46% so far, based on published data from the Chinese Center for Disease Control and Prevention (China CDC). The average incubation period of COVID\u201019 is around 6.4 days, ranging from 0 to 24 days. The basic reproductive number (R0) of COVID\u201019 ranges from 2 to 3.5 at the early phase regardless of different prediction models, which is higher than SARS and MERS. A study from China CDC showed that the majority of patients (80.9%) were considered asymptomatic or to have mild pneumonia, but released large amounts of virus at the early phase of infection, which posed enormous challenges for containing the spread of COVID\u201019. Nosocomial transmission was another severe problem. A total of 3019 health workers were infected by 12 February 2020, which accounted for 3.83% of the total number of infections, and placed an extreme burden on the health system, especially in Wuhan. Limited epidemiological and clinical data suggest that the disease spectrum of COVID\u201019 may differ from SARS or MERS. We summarize the latest literature on genetic, epidemiological, and clinical features of COVID\u201019 in comparison to SARS and MERS and emphasize special measures on diagnosis and potential interventions. This review will improve our understanding of the unique features of COVID\u201019 and enhance our control measures in the future.", "which data source ?", "China CDC", 511.0, 520.0], ["One of the main obstacles to wide use of ontologies is their high building cost. In order to reduce this effort, reuse of existing Knowledge Organization Systems (KOSs), and in particular thesauri, is a valuable and much cheaper alternative to building ontologies from scratch. In the literature, tools to support such reuse and conversion of thesauri, as well as re-engineering patterns, already exist. However, few of these tools rely on a form of semi-automatic reasoning on the structure of the thesaurus being converted. Furthermore, the patterns proposed in the literature have not been updated to reflect the new ISO 25964 standard on thesauri. This paper introduces a new application framework aimed at converting thesauri into OWL ontologies, differing from existing approaches in taking into account ISO 25964-compliant thesauri and in applying completely automatic conversion rules.", "which data source ?", "iso 25964", 610.0, 619.0], ["Abstract Background Biology-focused databases and software define bioinformatics and their use is central to computational biology. In such a complex and dynamic field, it is of interest to understand what resources are available, which are used, how much they are used, and for what they are used. While scholarly literature surveys can provide some insights, large-scale computer-based approaches to identify mentions of bioinformatics databases and software from primary literature would automate systematic cataloguing, facilitate the monitoring of usage, and provide the foundations for the recovery of computational methods for analysing biological data, with the long-term aim of identifying best/common practice in different areas of biology. Results We have developed bioNerDS, a named entity recogniser for the recovery of bioinformatics databases and software from primary literature. We identify such entities with an F-measure ranging from 63% to 91% at the mention level and 63-78% at the document level, depending on the corpus. 
Not attaining a higher F-measure is mostly due to high ambiguity in resource naming, which is compounded by the on-going introduction of new resources. To demonstrate the software, we applied bioNerDS to full-text articles from BMC Bioinformatics and Genome Biology. General mention patterns reflect the remit of these journals, highlighting BMC Bioinformatics\u2019s emphasis on new tools and Genome Biology\u2019s greater emphasis on data analysis. The data also illustrate some shifts in resource usage: for example, the past decade has seen R and the Gene Ontology join BLAST and GenBank as the main components in bioinformatics processing. Conclusions We demonstrate the feasibility of automatically identifying resource names on a large scale from the scientific literature and show that the generated data can be used for exploration of bioinformatics database and software usage. For example, our results help to investigate the rate of change in resource usage and corroborate the suspicion that a vast majority of resources are created, but rarely (if ever) used thereafter. bioNerDS is available at http://bionerds.sourceforge.net/.", "which data source ?", "Genome Biology", 1291.0, 1305.0], ["Existing taxonomies are valuable input for creating ontologies, because they reflect some degree of community consensus and contain, readily available, a wealth of concept definitions plus a hierarchy. However, the transformation of such taxonomies into useful ontologies is not as straightforward as it appears, because simply taking the hierarchy of concepts, which was originally developed for some external purpose other than ontology engineering, as the subsumption hierarchy using rdfs:subClassOf can yield useless ontologies. In this paper, we (1) illustrate the problem by analyzing OWL and RDF-S ontologies derived from UNSPSC (a products and services taxonomy), (2) detail how the interpretation and representation of the original taxonomic relationship is an important modeling decision when deriving ontologies from existing taxonomies, (3) propose a novel \u201cgen/tax\u201d approach to capture the original semantics of taxonomies in OWL, based on the split of each category in the taxonomy into two concepts, a generic concept and a taxonomy concept, and (4) show the usefulness of this approach by transforming eCl@ss into a fully-fledged products and services ontology.", "which data source ?", "eCl@ss", 1118.0, 1124.0], ["Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). 
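Returning to the gen/tax entry above: it describes splitting each taxonomy category into a generic concept and a taxonomy concept when deriving OWL from taxonomies such as UNSPSC or eCl@ss. A minimal rdflib sketch of that split under stated assumptions (the namespace and the single toy category are hypothetical):

```python
from rdflib import Graph, Namespace, OWL, RDF, RDFS

EX = Namespace("http://example.org/gentax#")  # hypothetical namespace
g = Graph()

category = "Printer"  # toy category from a products-and-services taxonomy
generic = EX[f"{category}Generic"]      # things that *are* printers
taxonomic = EX[f"{category}Taxonomic"]  # things filed *under* 'Printer'

for cls in (generic, taxonomic):
    g.add((cls, RDF.type, OWL.Class))
# Every generic instance also belongs to the taxonomy concept, not vice versa.
g.add((generic, RDFS.subClassOf, taxonomic))
print(g.serialize(format="turtle"))
```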
The best systems achieved an F-score of 86.46% for the DNER task\u2014a result that approaches the human inter-annotator agreement (0.8875)\u2014and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system\u2019s ability to return real-time results: the average response times for each team\u2019s DNER and CID web service systems were 5.6 and 9.3 s, respectively. Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/", "which data source ?", "PubMed", 955.0, 961.0], ["The construction of complex ontologies can be facilitated by adapting existing vocabularies. There is little clarity and in fact little consensus as to what modifications of vocabularies are necessary in order to re-engineer them into ontologies. In this paper we present a method that provides clear steps to follow when re-engineering a thesaurus. The method makes use of top-level ontologies and was derived from the structural differences between thesauri and ontologies as well as from best practices in modeling, some of which have been advocated in the biomedical domain. We illustrate each step of our method with examples from a re-engineering case study about agricultural fertilizers based on the AGROVOC thesaurus. Our method makes clear that re-engineering thesauri requires far more than just a syntactic conversion into a formal language or other easily automatable steps. The method can not only be used for re-engineering thesauri, but also summarizes steps for building ontologies in general, and can hence be adapted for the re-engineering of other types of vocabularies or terminologies.", "which data source ?", "AGROVOC", 713.0, 720.0], ["Abstract Background Biology-focused databases and software define bioinformatics and their use is central to computational biology. In such a complex and dynamic field, it is of interest to understand what resources are available, which are used, how much they are used, and for what they are used. While scholarly literature surveys can provide some insights, large-scale computer-based approaches to identify mentions of bioinformatics databases and software from primary literature would automate systematic cataloguing, facilitate the monitoring of usage, and provide the foundations for the recovery of computational methods for analysing biological data, with the long-term aim of identifying best/common practice in different areas of biology. Results We have developed bioNerDS, a named entity recogniser for the recovery of bioinformatics databases and software from primary literature. We identify such entities with an F-measure ranging from 63% to 91% at the mention level and 63-78% at the document level, depending on the corpus. Not attaining a higher F-measure is mostly due to high ambiguity in resource naming, which is compounded by the on-going introduction of new resources. 
To demonstrate the software, we applied bioNerDS to full-text articles from BMC Bioinformatics and Genome Biology. General mention patterns reflect the remit of these journals, highlighting BMC Bioinformatics\u2019s emphasis on new tools and Genome Biology\u2019s greater emphasis on data analysis. The data also illustrate some shifts in resource usage: for example, the past decade has seen R and the Gene Ontology join BLAST and GenBank as the main components in bioinformatics processing. Conclusions We demonstrate the feasibility of automatically identifying resource names on a large scale from the scientific literature and show that the generated data can be used for exploration of bioinformatics database and software usage. For example, our results help to investigate the rate of change in resource usage and corroborate the suspicion that a vast majority of resources are created, but rarely (if ever) used thereafter. bioNerDS is available at http://bionerds.sourceforge.net/.", "which data source ?", "BMC Bioinformatics", 1268.0, 1286.0], ["Abstract The Precision Medicine Initiative is a multicenter effort aiming at formulating personalized treatments leveraging on individual patient data (clinical, genome sequence and functional genomic data) together with the information in large knowledge bases (KBs) that integrate genome annotation, disease association studies, electronic health records and other data types. The biomedical literature provides a rich foundation for populating these KBs, reporting genetic and molecular interactions that provide the scaffold for the cellular regulatory systems and detailing the influence of genetic variants in these interactions. The goal of BioCreative VI Precision Medicine Track was to extract this particular type of information and was organized in two tasks: (i) document triage task, focused on identifying scientific literature containing experimentally verified protein\u2013protein interactions (PPIs) affected by genetic mutations and (ii) relation extraction task, focused on extracting the affected interactions (protein pairs). To assist system developers and task participants, a large-scale corpus of PubMed documents was manually annotated for this task. Ten teams worldwide contributed 22 distinct text-mining models for the document triage task, and six teams worldwide contributed 14 different text-mining systems for the relation extraction task. When comparing the text-mining system predictions with human annotations, for the triage task, the best F-score was 69.06%, the best precision was 62.89%, the best recall was 98.0% and the best average precision was 72.5%. For the relation extraction task, when taking homologous genes into account, the best F-score was 37.73%, the best precision was 46.5% and the best recall was 54.1%. Submitted systems explored a wide range of methods, from traditional rule-based, statistical and machine learning systems to state-of-the-art deep learning methods. Given the level of participation and the individual team results, we find the precision medicine track to be successful in engaging the text-mining research community. In the meantime, the track produced a manually annotated corpus of 5509 PubMed documents developed by BioGRID curators and relevant for precision medicine. The data set is freely available to the community, and the specific interactions have been integrated into the BioGRID data set. 
In addition, this challenge provided the first results of automatically identifying PubMed articles that describe PPIs affected by mutations, as well as extracting the affected relations from those articles. Still, much progress is needed for computer-assisted precision medicine text mining to become mainstream. Future work should focus on addressing the remaining technical challenges and incorporating the practical benefits of text-mining tools into real-world precision medicine information-related curation.", "which data source ?", "PubMed", 1118.0, 1124.0], ["Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meals. However, some of them need human help for calorie estimation, and even if estimation is automatic, food categories are often limited or images from multiple viewpoints are required. Estimating food calories from a food photo with practical accuracy therefore remains an unsolved problem. In this paper, we propose estimating food calories from a food photo by simultaneously learning food calories, categories, ingredients and cooking directions using deep learning. Since food calories generally correlate strongly with food categories, ingredients and cooking directions, we expect that training on them simultaneously boosts performance compared to independent single-task training. To this end, we use a multi-task CNN [1]. In addition, we construct two datasets: one of calorie-annotated recipes collected from Japanese recipe sites on the Web, and one collected from an American recipe site. In our experiments, we trained multi-task and single-task CNNs. The multi-task CNN achieved better performance on both food category estimation and food calorie estimation than the single-task CNNs. For the Japanese recipe dataset, introducing the multi-task CNN improved the correlation coefficient by 0.039, while for the American recipe dataset it rose by 0.090 compared to the single-task CNN.", "which data source ?", "Japanese recipe sites", 1045.0, 1066.0], ["Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci (Software Mentions in Science), a gold standard knowledge graph of software mentions in scientific articles. It contains high-quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. 
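The SoMeSci entry just described annotates software mentions together with relation labels such as version, developer and URL, plus mention and software types. A hedged sketch of what one such annotation record might look like; the field names are hypothetical illustrations, not SoMeSci's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class SoftwareMention:
    # Hypothetical record: a span in an article plus attribute relations
    # attached to the mention, in the spirit of the corpus described above.
    article: str
    start: int
    end: int
    text: str
    mention_type: str   # e.g. "usage" or "creation"
    software_type: str  # e.g. "application", "plugin", "programming environment"
    relations: dict = field(default_factory=dict)  # e.g. {"version": "3.8"}

m = SoftwareMention("PMC0000000", 40, 46, "Python", "usage", "programming environment",
                    {"version": "3.8", "developer": "Python Software Foundation"})
print(m)
```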
Finally, we sketch potential use cases and provide baseline results.", "which data source ?", "PubMed Central", 652.0, 666.0], ["Existing taxonomies are valuable input for creating ontologies, because they reflect some degree of community consensus and contain, readily available, a wealth of concept definitions plus a hierarchy. However, the transformation of such taxonomies into useful ontologies is not as straightforward as it appears, because simply taking the hierarchy of concepts, which was originally developed for some external purpose other than ontology engineering, as the subsumption hierarchy using rdfs:subClassOf can yield useless ontologies. In this paper, we (1) illustrate the problem by analyzing OWL and RDF-S ontologies derived from UNSPSC (a products and services taxonomy), (2) detail how the interpretation and representation of the original taxonomic relationship is an important modeling decision when deriving ontologies from existing taxonomies, (3) propose a novel \u201cgen/tax\u201d approach to capture the original semantics of taxonomies in OWL, based on the split of each category in the taxonomy into two concepts, a generic concept and a taxonomy concept, and (4) show the usefulness of this approach by transforming eCl@ss into a fully-fledged products and services ontology.", "which data source ?", "UNSPSC", 629.0, 635.0], ["A linguistically annotated corpus based on texts in the biomedical domain has been constructed to tune natural language processing (NLP) tools for biotextmining. As the focus of information extraction is shifting from "nominal" information such as named entities to "verbal" information such as the function and interaction of substances, the application of parsers has become one of the key technologies, and thus a corpus annotated for the syntactic structure of sentences is in demand. A subset of the GENIA corpus consisting of 500 MEDLINE abstracts has been annotated for syntactic structure in an XML-based format based on the Penn Treebank II (PTB) scheme. An inter-annotator agreement test indicated that the writing style rather than the contents of the research abstracts is the source of the difficulty in tree annotation, and that annotation can be done stably by linguists without much knowledge of biology, given appropriate guidelines regarding linguistic phenomena particular to scientific texts.", "which Data formats ?", "PTB", 627.0, 630.0], ["Much attention has been paid to translating isolated chemical names into forms such as connection tables, but less effort has been expended in identifying substance names in running text to make them available for processing. The requirement for automatic name identification becomes a more urgent priority today, not least in light of the inherent importance of patents and the increasing complexity of newly synthesized substances and, with these, the need for error-free processing of information from patent and other documents. The elaboration of a methodology for isolating substance names in the text of English-language patents is described here, using, in part, the SGML (Standard Generalized Markup Language) of the patent text as an aid to this process. Evaluation of the procedures, which are still at an early stage of development, demonstrates that even simple methods can achieve very high degrees of success.", "which Data formats ?", "SGML (Standard Generalized Markup Language) ", NaN, NaN], ["Knowledge base question answering (KBQA) is an important task in Natural Language Processing. 
Existing approaches face significant challenges including complex question understanding, necessity for reasoning, and lack of large end-to-end training datasets. In this work, we propose Neuro-Symbolic Question Answering (NSQA), a modular KBQA system that leverages (1) Abstract Meaning Representation (AMR) parses for task-independent question understanding; (2) a simple yet effective graph transformation approach to convert AMR parses into candidate logical queries that are aligned to the KB; (3) a pipeline-based approach which integrates multiple, reusable modules that are trained specifically for their individual tasks (semantic parser, entity and relationship linkers, and neuro-symbolic reasoner) and do not require end-to-end training data. NSQA achieves state-of-the-art performance on two prominent KBQA datasets based on DBpedia (QALD-9 and LC-QuAD 1.0). Furthermore, our analysis emphasizes that AMR is a powerful tool for KBQA systems.", "which On evaluation dataset ?", "QALD-9", 942.0, 948.0], ["Representation learning is a critical ingredient for natural language processing systems. Recent Transformer language models like BERT learn powerful textual representations, but these models are targeted towards token- and sentence-level training objectives and do not leverage information on inter-document relatedness, which limits their document-level representation power. For applications on scientific documents, such as classification and recommendation, accurate embeddings of documents are a necessity. We propose SPECTER, a new method to generate document-level embedding of scientific papers based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph. Unlike existing pretrained language models, Specter can be easily applied to downstream applications without task-specific fine-tuning. Additionally, to encourage further research on document-level models, we introduce SciDocs, a new evaluation benchmark consisting of seven document-level tasks ranging from citation prediction to document classification and recommendation. We show that Specter outperforms a variety of competitive baselines on the benchmark.", "which On evaluation dataset ?", "SciDocs", 945.0, 952.0], ["Knowledge base question answering (KBQA) is an important task in Natural Language Processing. Existing approaches face significant challenges including complex question understanding, necessity for reasoning, and lack of large end-to-end training datasets. In this work, we propose Neuro-Symbolic Question Answering (NSQA), a modular KBQA system that leverages (1) Abstract Meaning Representation (AMR) parses for task-independent question understanding; (2) a simple yet effective graph transformation approach to convert AMR parses into candidate logical queries that are aligned to the KB; (3) a pipeline-based approach which integrates multiple, reusable modules that are trained specifically for their individual tasks (semantic parser, entity and relationship linkers, and neuro-symbolic reasoner) and do not require end-to-end training data. NSQA achieves state-of-the-art performance on two prominent KBQA datasets based on DBpedia (QALD-9 and LC-QuAD 1.0). Furthermore, our analysis emphasizes that AMR is a powerful tool for KBQA systems.", "which On evaluation dataset ?", "LC-QuAD 1.0", 953.0, 964.0], ["This paper introduces a new viewpoint in knowledge management by introducing KM-Services as a basic concept for Knowledge Management. 
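Since the SPECTER entry above describes pretraining on citation-graph relatedness, here is a hedged PyTorch sketch of the document-level triplet objective it alludes to. The encoder is a toy stand-in (SPECTER itself uses a Transformer over title and abstract), and the token-id inputs are random placeholders for tokenized papers.

```python
import torch
import torch.nn as nn

encoder = nn.EmbeddingBag(num_embeddings=1000, embedding_dim=64)  # stand-in encoder
loss_fn = nn.TripletMarginLoss(margin=1.0)

# Toy token-id bags for a query paper, a cited (positive) paper, and an
# uncited (negative) paper; real inputs would come from a tokenizer.
query = torch.randint(0, 1000, (1, 20))
pos = torch.randint(0, 1000, (1, 20))
neg = torch.randint(0, 1000, (1, 20))

loss = loss_fn(encoder(query), encoder(pos), encoder(neg))
loss.backward()  # gradients flow into the encoder, as in citation-based pretraining
print(float(loss))
```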
This text discusses the vision of service-oriented knowledge management (KM) as a realisation approach for process-oriented knowledge management. In the following, process-oriented knowledge management as defined in the EU project PROMOTE (IST-1999-11658) is presented, and the KM-Service approach to realising process-oriented knowledge management is explained. The last part is concerned with an implementation scenario that uses Web technology to realise a service framework for a KM-system.", "which Approach name ?", "PROMOTE", 370.0, 377.0], ["The need for an effective management of knowledge is gaining increasing recognition in today's economy. To acknowledge this fact, new promising and powerful technologies have emerged from industrial and academic research. With these innovations maturing, organizations are increasingly willing to adapt such new knowledge management technologies to improve their knowledge-intensive businesses. However, the successful application in given business contexts is a complex, multidimensional challenge and a current research topic. Therefore, this contribution addresses this challenge and introduces a framework for the development of business process-supportive, technological knowledge infrastructures. While business processes represent the organizational setting for the application of knowledge management technologies, knowledge infrastructures represent a concept that can enable knowledge management in organizations. The B-KIDE Framework introduced in this work provides support for the development of knowledge infrastructures that comprise innovative knowledge management functionality and are visibly supportive of an organization's business processes. The developed B-KIDE Tool eases the application of the B-KIDE Framework for knowledge infrastructure developers. Three empirical studies that were conducted with industrial partners from heterogeneous industry sectors corroborate the relevance and viability of the introduced concepts. Copyright \u00a9 2005 John Wiley & Sons, Ltd.", "which Approach name ?", "B-KIDE", 928.0, 934.0], ["Facilitating the transfer of knowledge between knowledge workers represents one of the main challenges of knowledge management. Knowledge transfer instruments, such as the experience factory concept, represent means for facilitating knowledge transfer in organizations. As past research has shown, effectiveness of knowledge transfer instruments strongly depends on their situational context, on the stakeholders involved in knowledge transfer, and on their acceptance, motivation and goals. In this paper, we introduce an agent-oriented modeling approach for analyzing the effectiveness of knowledge transfer instruments in the light of (potentially conflicting) stakeholders' goals. We apply this intentional approach to the experience factory concept and analyze under which conditions it can fail, and how adaptations to the experience factory can be explored in a structured way.", "which Approach name ?", "Knowledge Transfer", 128.0, 146.0], ["Business architecture has become a well-known tool for business transformations. According to a recent study by Forrester, 50 percent of the companies polled claimed to have an active business architecture initiative, whereas 20 percent were planning to engage in business architecture work in the near future. However, despite the high interest in BA, there is not yet a common understanding of the main concepts. 
There is a lack of a business architecture framework that provides a complete metamodel, suggests a methodology for business architecture development and enables tool support for it. The ORG-Master framework is designed to solve this problem, using an ontology as the core of the metamodel. This paper describes the ORG-Master framework, its implementation and dissemination.", "which Approach name ?", "ORG-Master framework", 728.0, 748.0], ["The formal description of experiments for efficient analysis, annotation and sharing of results is a fundamental part of the practice of science. Ontologies are required to achieve this objective. A few subject-specific ontologies of experiments currently exist. However, despite the unity of scientific experimentation, no general ontology of experiments exists. We propose the ontology EXPO to meet this need. EXPO links the SUMO (the Suggested Upper Merged Ontology) with subject-specific ontologies of experiments by formalizing the generic concepts of experimental design, methodology and results representation. EXPO is expressed in the W3C standard ontology language OWL-DL. We demonstrate the utility of EXPO and its ability to describe different experimental domains, by applying it to two experiments: one in high-energy physics and the other in phylogenetics. The use of EXPO made the goals and structure of these experiments more explicit, revealed ambiguities, and highlighted an unexpected similarity. We conclude that EXPO is of general value in describing experiments and a step towards the formalization of science.", "which Ontology ?", "EXPO", 388.0, 392.0], ["The formal description of experiments for efficient analysis, annotation and sharing of results is a fundamental part of the practice of science. Ontologies are required to achieve this objective. A few subject-specific ontologies of experiments currently exist. However, despite the unity of scientific experimentation, no general ontology of experiments exists. We propose the ontology EXPO to meet this need. EXPO links the SUMO (the Suggested Upper Merged Ontology) with subject-specific ontologies of experiments by formalizing the generic concepts of experimental design, methodology and results representation. EXPO is expressed in the W3C standard ontology language OWL-DL. We demonstrate the utility of EXPO and its ability to describe different experimental domains, by applying it to two experiments: one in high-energy physics and the other in phylogenetics. The use of EXPO made the goals and structure of these experiments more explicit, revealed ambiguities, and highlighted an unexpected similarity. We conclude that EXPO is of general value in describing experiments and a step towards the formalization of science.", "which Ontology ?", "EXPO", 388.0, 392.0], ["The formal description of experiments for efficient analysis, annotation and sharing of results is a fundamental part of the practice of science. Ontologies are required to achieve this objective. A few subject-specific ontologies of experiments currently exist. However, despite the unity of scientific experimentation, no general ontology of experiments exists. We propose the ontology EXPO to meet this need. EXPO links the SUMO (the Suggested Upper Merged Ontology) with subject-specific ontologies of experiments by formalizing the generic concepts of experimental design, methodology and results representation. EXPO is expressed in the W3C standard ontology language OWL-DL. 
We demonstrate the utility of EXPO and its ability to describe different experimental domains by applying it to two experiments: one in high-energy physics and the other in phylogenetics. The use of EXPO made the goals and structure of these experiments more explicit, revealed ambiguities, and highlighted an unexpected similarity. We conclude that EXPO is of general value in describing experiments and a step towards the formalization of science.", "which Ontology ?", "EXPO", 388.0, 392.0], ["In this paper we introduce the Publishing Workflow Ontology (PWO), i.e., an OWL 2 DL ontology for the description of workflows that is particularly suitable for formalising typical publishing processes such as the publication of articles in journals. We support the presentation with a discussion of all the ontology design patterns that have been reused for modelling the main characteristics of publishing workflows. In addition, we present two possible applications of PWO in the publishing and legislative domains.", "which Ontology ?", "PWO", 63.0, 66.0], ["In this paper we introduce the Publishing Workflow Ontology (PWO), i.e., an OWL 2 DL ontology for the description of workflows that is particularly suitable for formalising typical publishing processes such as the publication of articles in journals. We support the presentation with a discussion of all the ontology design patterns that have been reused for modelling the main characteristics of publishing workflows. In addition, we present two possible applications of PWO in the publishing and legislative domains.", "which Ontology ?", "PWO", 63.0, 66.0], ["The availability in machine-readable form of descriptions of the structure of documents, as well as of the document discourse (e.g. the scientific discourse within scholarly articles), is crucial for facilitating semantic publishing and the overall comprehension of documents by both users and machines. In this paper we introduce DoCO, the Document Components Ontology, an OWL 2 DL ontology that provides a general-purpose structured vocabulary of document elements to describe both structural and rhetorical document components in RDF. In addition to the formal description of the ontology, this paper showcases its utility in practice in a variety of our own applications and other activities of the Semantic Publishing community that rely on DoCO to annotate and retrieve document components of scholarly articles.", "which Ontology ?", "DoCO", 331.0, 335.0], ["In this paper we introduce the Publishing Workflow Ontology (PWO), i.e., an OWL 2 DL ontology for the description of workflows that is particularly suitable for formalising typical publishing processes such as the publication of articles in journals. We support the presentation with a discussion of all the ontology design patterns that have been reused for modelling the main characteristics of publishing workflows. In addition, we present two possible applications of PWO in the publishing and legislative domains.", "which Ontology ?", "PWO", 63.0, 66.0], ["This work explores hypernetworks: an approach of using one network, also known as a hypernetwork, to generate the weights for another network. We apply hypernetworks to generate adaptive weights for recurrent networks. In this case, hypernetworks can be viewed as a relaxed form of weight-sharing across layers. In our implementation, hypernetworks are trained jointly with the main network in an end-to-end fashion. 
Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks.", "which has model ?", "Hypernetworks", 19.0, 32.0], ["We introduce BilBOWA (Bilingual Bag-of-Words without Alignments), a simple and computationally-efficient model for learning bilingual distributed representations of words which can scale to large monolingual datasets and does not require word-aligned parallel training data. Instead it trains directly on monolingual data and extracts a bilingual signal from a smaller set of raw-text sentence-aligned data. This is achieved using a novel sampled bag-of-words cross-lingual objective, which is used to regularize two noise-contrastive language models for efficient cross-lingual feature learning. We show that bilingual embeddings learned using the proposed model outperform state-of-the-art methods on a cross-lingual document classification task as well as a lexical translation task on WMT11 data.", "which has model ?", "BilBOWA", 13.0, 20.0], ["Even in the absence of any explicit semantic annotation, vast collections of audio recordings provide valuable information for learning the categorical structure of sounds. We consider several class-agnostic semantic constraints that apply to unlabeled nonspeech audio: (i) noise and translations in time do not change the underlying sound category, (ii) a mixture of two sound events inherits the categories of the constituents, and (iii) the categories of events in close temporal proximity are likely to be the same or related. Without labels to ground them, these constraints are incompatible with classification loss functions. However, they may still be leveraged to identify geometric inequalities needed for triplet loss-based training of convolutional neural networks. The result is low-dimensional embeddings of the input spectrograms that recover 41% and 84% of the performance of their fully-supervised counterparts when applied to downstream query-by-example sound retrieval and sound event classification tasks, respectively. Moreover, in limited-supervision settings, our unsupervised embeddings double the state-of-the-art classification performance.", "which has model ?", "Triplet", 716.0, 723.0], ["We propose RUDDER, a novel reinforcement learning approach for delayed rewards in finite Markov decision processes (MDPs). In MDPs the Q-values are equal to the expected immediate reward plus the expected future rewards. The latter are related to bias problems in temporal difference (TD) learning and to high variance problems in Monte Carlo (MC) learning. Both problems are even more severe when rewards are delayed. RUDDER aims at making the expected future rewards zero, which simplifies Q-value estimation to computing the mean of the immediate reward. We propose the following two new concepts to push the expected future rewards toward zero. (i) Reward redistribution that leads to return-equivalent decision processes with the same optimal policies and, when optimal, zero expected future rewards. (ii) Return decomposition via contribution analysis which transforms the reinforcement learning task into a regression task at which deep learning excels. 
On artificial tasks with delayed rewards, RUDDER is significantly faster than MC and exponentially faster than Monte Carlo Tree Search (MCTS), TD({\\lambda}), and reward shaping approaches. At Atari games, RUDDER on top of a Proximal Policy Optimization (PPO) baseline improves the scores, which is most prominent at games with delayed rewards. Source code is available at \\url{this https URL} and demonstration videos at \\url{this https URL}.", "which has model ?", "RUDDER", 11.0, 17.0], ["The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG. Here we propose a new model, QA-GNN, which addresses the above challenges through two key innovations: (i) relevance scoring, where we use LMs to estimate the importance of KG nodes relative to the given QA context, and (ii) joint reasoning, where we connect the QA context and KG to form a joint graph, and mutually update their representations through graph-based message passing. We evaluate QA-GNN on the CommonsenseQA and OpenBookQA datasets, and show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning, e.g., correctly handling negation in questions.", "which has model ?", "QA-GNN", 345.0, 351.0], ["We present CURL: Contrastive Unsupervised Representations for Reinforcement Learning. CURL extracts high-level features from raw pixels using contrastive learning and performs off-policy control on top of the extracted features. CURL outperforms prior pixel-based methods, both model-based and model-free, on complex tasks in the DeepMind Control Suite and Atari Games showing 1.9x and 1.2x performance gains at the 100K environment and interaction steps benchmarks respectively. On the DeepMind Control Suite, CURL is the first image-based algorithm to nearly match the sample-efficiency of methods that use state-based features. Our code is open-sourced and available at this https URL.", "which has model ?", "CURL", 11.0, 15.0], ["We propose a new algorithm, Mean Actor-Critic (MAC), for discrete-action continuous-state reinforcement learning. MAC is a policy gradient algorithm that uses the agent's explicit representation of all action values to estimate the gradient of the policy, rather than using only the actions that were actually executed. We prove that this approach reduces variance in the policy gradient estimate relative to traditional actor-critic methods. We show empirical results on two control domains and on six Atari games, where MAC is competitive with state-of-the-art policy search algorithms.", "which has model ?", "MAC", 47.0, 50.0], ["This paper introduces a new neural structure called FusionNet, which extends existing attention approaches from three perspectives. First, it puts forward a novel concept of \"history of word\" to characterize attention information from the lowest word-level embedding up to the highest semantic-level representation. Second, it introduces an improved attention scoring function that better utilizes the \"history of word\" concept. Third, it proposes a fully-aware multi-level attention mechanism to capture the complete information in one text (such as a question) and exploit it in its counterpart (such as context or passage) layer by layer. 
We apply FusionNet to the Stanford Question Answering Dataset (SQuAD) and it achieves the first position for both single and ensemble models on the official SQuAD leaderboard at the time of writing (Oct. 4th, 2017). Meanwhile, we verify the generalization of FusionNet with two adversarial SQuAD datasets and it sets up the new state-of-the-art on both datasets: on AddSent, FusionNet increases the best F1 metric from 46.6% to 51.4%; on AddOneSent, FusionNet boosts the best F1 metric from 56.0% to 60.7%.", "which has model ?", "FusionNet", 52.0, 61.0], ["Episodic control provides a highly sample-efficient method for reinforcement learning while imposing high memory and computational requirements. This work proposes a simple heuristic for reducing these requirements, and an application to Model-Free Episodic Control (MFEC) is presented. Experiments on Atari games show that this heuristic successfully reduces MFEC computational demands while producing no significant loss of performance when conservative choices of hyperparameters are used. Consequently, episodic control becomes a more feasible option when dealing with reinforcement learning tasks.", "which has model ?", "MFEC", 268.0, 272.0], ["Biological systems understand the world by simultaneously processing high-dimensional inputs from modalities as diverse as vision, audition, touch, proprioception, etc. The perception models used in deep learning, on the other hand, are designed for individual modalities, often relying on domain-specific assumptions such as the local grid structures exploited by virtually all existing vision models. These priors introduce helpful inductive biases, but also lock models to individual modalities. In this paper we introduce the Perceiver \u2013 a model that builds upon Transformers and hence makes few architectural assumptions about the relationship between its inputs, but that also scales to hundreds of thousands of inputs, like ConvNets. The model leverages an asymmetric attention mechanism to iteratively distill inputs into a tight latent bottleneck, allowing it to scale to handle very large inputs. We show that this architecture performs competitively or beyond strong, specialized models on classification tasks across various modalities: images, point clouds, audio, video and video+audio. The Perceiver obtains performance comparable to ResNet-50 on ImageNet without convolutions and by directly attending to 50,000 pixels. It also surpasses state-of-the-art results for all modalities in AudioSet.", "which has model ?", "Perceiver", 528.0, 537.0], ["As the most successful variant and improvement for Trust Region Policy Optimization (TRPO), proximal policy optimization (PPO) has been widely applied across various domains with several advantages: efficient data utilization, easy implementation, and good parallelism. In this paper, a first-order gradient reinforcement learning algorithm called Policy Optimization with Penalized Point Probability Distance (POP3D), which is a lower bound to the square of total variance divergence, is proposed as another powerful variant. Firstly, we talk about the shortcomings of several commonly used algorithms, by which our method is partly motivated. Secondly, we address these shortcomings by applying POP3D. Thirdly, we dive into its mechanism from the perspective of solution manifold. Finally, we make quantitative comparisons among several state-of-the-art algorithms based on common benchmarks. 
Simulation results show that POP3D is highly competitive compared with PPO. Besides, our code is released at this https URL.", "which has model ?", "POP3D", 411.0, 416.0], ["We propose an unsupervised method for sentence summarization using only language modeling. The approach employs two language models, one that is generic (i.e. pretrained), and the other that is specific to the target domain. We show that by using a product-of-experts criterion these are enough for maintaining continuous contextual matching while maintaining output fluency. Experiments on both abstractive and extractive sentence summarization data sets show promising results of our method without being exposed to any paired data.", "which has model ?", "Contextual Match", NaN, NaN], ["Multilayer transformer networks consist of interleaved self-attention and feedforward sublayers. Could ordering the sublayers in a different pattern lead to better performance? We generate randomly ordered transformers and train them with the language modeling objective. We observe that some of these models are able to achieve better performance than the interleaved baseline, and that those successful variants tend to have more self-attention at the bottom and more feedforward sublayers at the top. We propose a new transformer pattern that adheres to this property, the sandwich transformer, and show that it improves perplexity on multiple word-level and character-level language modeling benchmarks, at no cost in parameters, memory, or training time. However, the sandwich reordering pattern does not guarantee performance gains across every task, as we demonstrate on machine translation models. Instead, we suggest that further exploration of task-specific sublayer reorderings is needed in order to unlock additional gains.", "which has model ?", "Sandwich Transformer", 576.0, 596.0], ["We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network predicting features of the observations given by a fixed randomly initialized neural network. We also introduce a method to flexibly combine intrinsic and extrinsic rewards. We find that the random network distillation (RND) bonus combined with this increased flexibility enables significant progress on several hard exploration Atari games. In particular we establish state of the art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods. To the best of our knowledge, this is the first method that achieves better than average human performance on this game without using demonstrations or having access to the underlying state of the game, and occasionally completes the first level.", "which has model ?", "RND", 419.0, 422.0], ["We examine the capabilities of a unified, multi-task framework for three information extraction tasks: named entity recognition, relation extraction, and event extraction. Our framework (called DyGIE++) accomplishes all tasks by enumerating, refining, and scoring text spans designed to capture local (within-sentence) and global (cross-sentence) context. Our framework achieves state-of-the-art results across all tasks, on four datasets from a variety of domains. We perform experiments comparing different techniques to construct span representations. 
Contextualized embeddings like BERT perform well at capturing relationships among entities in the same or adjacent sentences, while dynamic span graph updates model long-range cross-sentence relationships. For instance, propagating span representations via predicted coreference links can enable the model to disambiguate challenging entity mentions. Our code is publicly available at https://github.com/dwadden/dygiepp and can be easily adapted for new tasks or datasets.", "which has model ?", "DYGIE++", NaN, NaN], ["Joint extraction refers to extracting triples, composed of entities and relations, simultaneously from the text with a single model. However, most existing methods fail to extract all triples accurately and efficiently from sentences with the overlapping issue, i.e., the same entity is included in multiple triples. In this paper, we propose a novel scheme called Bidirectional Tree Tagging (BiTT) to label overlapping triples in text. In BiTT, the triples with the same relation category in a sentence are specially represented as two binary trees, each of which is converted into a word-level tag sequence to label each word. Based on the BiTT scheme, we develop an end-to-end extraction framework to predict the BiTT tags and further extract triples efficiently. We adopt Bi-LSTM and BERT as the encoder in our framework, respectively, and obtain promising results on public English as well as Chinese datasets.", "which has model ?", "BiTT", 389.0, 393.0], ["In this paper, we present a modular robotic system to tackle the problem of generating and performing antipodal robotic grasps for unknown objects from the n-channel image of the scene. We propose a novel Generative Residual Convolutional Neural Network (GR-ConvNet) model that can generate robust antipodal grasps from n-channel input at real-time speeds (\u223c20ms). We evaluate the proposed model architecture on standard datasets and a diverse set of household objects. We achieved state-of-the-art accuracy of 97.7% and 94.6% on Cornell and Jacquard grasping datasets, respectively. We also demonstrate a grasp success rate of 95.4% and 93% on household and adversarial objects, respectively, using a 7 DoF robotic arm.", "which has model ?", "GR-ConvNet", 255.0, 265.0], ["We present trellis networks, a new architecture for sequence modeling. On the one hand, a trellis network is a temporal convolutional network with special structure, characterized by weight tying across depth and direct injection of the input into deep layers. On the other hand, we show that truncated recurrent networks are equivalent to trellis networks with special sparsity structure in their weight matrices. Thus trellis networks with general weight matrices generalize truncated recurrent networks. We leverage these connections to design high-performing trellis networks that absorb structural and algorithmic elements from both recurrent and convolutional models. Experiments demonstrate that trellis networks outperform the current state-of-the-art methods on a variety of challenging benchmarks, including word-level language modeling and character-level language modeling tasks, and stress tests designed to evaluate long-term memory retention. The code is available at this https URL.", "which has model ?", "Trellis Network", 90.0, 105.0], ["Efficient exploration in complex environments remains a major challenge for reinforcement learning. 
We propose bootstrapped DQN, a simple algorithm that explores in a computationally and statistically efficient manner through use of randomized value functions. Unlike dithering strategies such as epsilon-greedy exploration, bootstrapped DQN carries out temporally-extended (or deep) exploration; this can lead to exponentially faster learning. We demonstrate these benefits in complex stochastic MDPs and in the large-scale Arcade Learning Environment. Bootstrapped DQN substantially improves learning times and performance across most Atari games.", "which has model ?", "Bootstrapped DQN", 111.0, 127.0], ["Recurrent Neural Networks have long been the dominating choice for sequence modeling. However, they severely suffer from two issues: they are impotent in capturing very long-term dependencies and unable to parallelize the sequential computation procedure. Therefore, many non-recurrent sequence models that are built on convolution and attention operations have been proposed recently. Notably, models with multi-head attention such as Transformer have demonstrated extreme effectiveness in capturing long-term dependencies in a variety of sequence modeling tasks. Despite their success, however, these models lack necessary components to model local structures in sequences and heavily rely on position embeddings that have limited effects and require a considerable amount of design effort. In this paper, we propose the R-Transformer which enjoys the advantages of both RNNs and the multi-head attention mechanism while avoiding their respective drawbacks. The proposed model can effectively capture both local structures and global long-term dependencies in sequences without any use of position embeddings. We evaluate R-Transformer through extensive experiments with data from a wide range of domains and the empirical results show that R-Transformer outperforms the state-of-the-art methods by a large margin in most of the tasks. We have made the code publicly available at \\url{this https URL}.", "which has model ?", "R-Transformer", 814.0, 827.0], ["Recurrent Neural Networks have long been the dominating choice for sequence modeling. However, they severely suffer from two issues: they are impotent in capturing very long-term dependencies and unable to parallelize the sequential computation procedure. Therefore, many non-recurrent sequence models that are built on convolution and attention operations have been proposed recently. Notably, models with multi-head attention such as Transformer have demonstrated extreme effectiveness in capturing long-term dependencies in a variety of sequence modeling tasks. Despite their success, however, these models lack necessary components to model local structures in sequences and heavily rely on position embeddings that have limited effects and require a considerable amount of design effort. In this paper, we propose the R-Transformer which enjoys the advantages of both RNNs and the multi-head attention mechanism while avoiding their respective drawbacks. The proposed model can effectively capture both local structures and global long-term dependencies in sequences without any use of position embeddings. We evaluate R-Transformer through extensive experiments with data from a wide range of domains and the empirical results show that R-Transformer outperforms the state-of-the-art methods by a large margin in most of the tasks. 
We have made the code publicly available at \\url{this https URL}.", "which has model ?", "Transformer", 426.0, 437.0], ["Previous machine comprehension (MC) datasets are either too small to train end-to-end deep learning models, or not difficult enough to evaluate the ability of current MC techniques. The newly released SQuAD dataset alleviates these limitations, and gives us a chance to develop more realistic MC models. Based on this dataset, we propose a Multi-Perspective Context Matching (MPCM) model, which is an end-to-end system that directly predicts the answer beginning and ending points in a passage. Our model first adjusts each word-embedding vector in the passage by multiplying a relevancy weight computed against the question. Then, we encode the question and weighted passage by using bi-directional LSTMs. For each point in the passage, our model matches the context of this point against the encoded question from multiple perspectives and produces a matching vector. Given those matched vectors, we employ another bi-directional LSTM to aggregate all the information and predict the beginning and ending points. Experimental results on the test set of SQuAD show that our model achieves a competitive result on the leaderboard.", "which has model ?", "MPCM", 376.0, 380.0], ["We present in this paper a new architecture, named Convolutional vision Transformer (CvT), that improves Vision Transformer (ViT) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. This is accomplished through two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs) to the ViT architecture (i.e. shift, scale, and distortion invariance) while maintaining the merits of Transformers (i.e. dynamic attention, global context, and better generalization). We validate CvT by conducting extensive experiments, showing that this approach achieves state-of-the-art performance over other Vision Transformers and ResNets on ImageNet-1k, with fewer parameters and lower FLOPs. In addition, performance gains are maintained when pretrained on larger datasets (e.g. ImageNet-22k) and fine-tuned to downstream tasks. Pretrained on ImageNet-22k, our CvT-W24 obtains a top-1 accuracy of 87.7% on the ImageNet-1k val set. Finally, our results show that the positional encoding, a crucial component in existing Vision Transformers, can be safely removed in our model, simplifying the design for higher resolution vision tasks. Code will be released at https://github.com/microsoft/CvT.", "which has model ?", "CvT-W24", 1097.0, 1104.0], ["Neural machine translation has recently achieved impressive results, while using little in the way of external linguistic information. In this paper we show that the strong learning capability of neural MT models does not make linguistic features redundant; they can be easily incorporated to provide further improvements in performance. We generalize the embedding layer of the encoder in the attentional encoder--decoder architecture to support the inclusion of arbitrary features, in addition to the baseline word feature. We add morphological features, part-of-speech tags, and syntactic dependency labels as input features to English->German and English->Romanian neural machine translation systems. 
In experiments on WMT16 training and test sets, we find that linguistic input features improve model quality according to three metrics: perplexity, BLEU and CHRF3. An open-source implementation of our neural MT system is available, as are sample files and configurations.", "which has model ?", "Linguistic Input Features", 766.0, 791.0], ["Multi-task learning (MTL) is an effective method for learning related tasks, but designing MTL models necessitates deciding which and how many parameters should be task-specific, as opposed to shared between tasks. We investigate this issue for the problem of jointly learning named entity recognition (NER) and relation extraction (RE) and propose a novel neural architecture that allows for deeper task-specificity than does prior work. In particular, we introduce additional task-specific bidirectional RNN layers for both the NER and RE tasks and tune the number of shared and task-specific layers separately for different datasets. We achieve state-of-the-art (SOTA) results for both tasks on the ADE dataset; on the CoNLL04 dataset, we achieve SOTA results on the NER task and competitive results on the RE task while using an order of magnitude fewer trainable parameters than the current SOTA architecture. An ablation study confirms the importance of the additional task-specific layers for achieving these results. Our work suggests that previous solutions to joint NER and RE undervalue task-specificity and demonstrates the importance of correctly balancing the number of shared and task-specific parameters for MTL approaches in general.", "which has model ?", "Deeper", 393.0, 399.0], ["Tracking progress in machine learning has become increasingly difficult with the recent explosion in the number of papers. In this paper, we present AxCell, an automatic machine learning pipeline for extracting results from papers. AxCell uses several novel components, including a table segmentation subtask, to learn relevant structural knowledge that aids extraction. When compared with existing methods, our approach significantly improves the state of the art for results extraction. We also release a structured, annotated dataset for training models for results extraction, and a dataset for evaluating the performance of models on this task. Lastly, we show that the viability of our approach enables it to be used for semi-automated results extraction in production, suggesting our improvements make this task practically viable for the first time. Code is available on GitHub.", "which has model ?", "AxCell", 149.0, 155.0], ["Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SciBERT, a pretrained language model based on BERT (Devlin et al., 2018) to address the lack of high-quality, large-scale labeled scientific data. SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks. 
The code and pretrained models are available at https://github.com/allenai/scibert/.", "which has model ?", "SciBERT", 117.0, 124.0], ["We present the Compressive Transformer, an attentive sequence model which compresses past memories for long-range sequence learning. We find the Compressive Transformer obtains state-of-the-art language modelling results in the WikiText-103 and Enwik8 benchmarks, achieving 17.1 ppl and 0.97 bpc, respectively. We also find it can model high-frequency speech effectively and can be used as a memory mechanism for RL, demonstrated on an object matching task. To promote the domain of long-range sequence learning, we propose a new open-vocabulary language modelling benchmark derived from books, PG-19.", "which has model ?", "Compressive Transformer", 15.0, 38.0], ["Can a computer determine a piano player\u2019s skill level? Is it preferable to base this assessment on visual analysis of the player\u2019s performance or should we trust our ears over our eyes? Since current convolutional neural networks (CNNs) have difficulty processing long videos, how can shorter clips be sampled to best reflect the player\u2019s skill level? In this work, we collect and release a first-of-its-kind dataset for multimodal skill assessment focusing on assessing a piano player\u2019s skill level, answer the asked questions, initiate work in automated evaluation of piano playing skills and provide baselines for future work. The dataset can be accessed from: https://github.com/ParitoshParmar/Piano-Skills-Assessment.", "which has model ?", "Video", 269.0, 274.0], ["We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and other recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 3.5 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also replicate other pretraining schemes within the BART framework, to understand their effect on end-task performance.", "which has model ?", "BART", 11.0, 15.0], ["Knowledge graph embedding is an important task and it will benefit many downstream applications. Currently, deep neural network based methods achieve state-of-the-art performance. However, most of these existing methods are very complex and need much time for training and inference. To address this issue, we propose a simple but effective atrous convolution based knowledge graph embedding method. Compared with existing state-of-the-art methods, our method has the following main characteristics. First, it effectively increases feature interactions by using atrous convolutions. 
Second, to address the issue of forgetting original information and the vanishing/exploding gradient issue, it uses the residual learning method. Third, it has a simpler structure but much higher parameter efficiency. We evaluate our method on six benchmark datasets with different evaluation metrics. Extensive experiments show that our model is very effective. On these diverse datasets, it achieves better results than the compared state-of-the-art methods on most evaluation metrics. The source code of our model can be found at https://github.com/neukg/AcrE.", "which has model ?", "AcrE", 1137.0, 1141.0], ["The Tsetlin Machine (TM) is an interpretable mechanism for pattern recognition that constructs conjunctive clauses from data. The clauses capture frequent patterns with high discriminating power, providing increasing expression power with each additional clause. However, the resulting accuracy gain comes at the cost of linear growth in computation time and memory usage. In this paper, we present the Weighted Tsetlin Machine (WTM), which reduces computation time and memory usage by weighting the clauses. Real-valued weighting allows one clause to replace multiple, and supports fine-tuning the impact of each clause. Our novel scheme simultaneously learns both the composition of the clauses and their weights. Furthermore, we increase training efficiency by replacing $k$ Bernoulli trials of success probability $p$ with a uniform sample of average size $p k$, the size drawn from a binomial distribution. In our empirical evaluation, the WTM achieved the same accuracy as the TM on MNIST, IMDb, and Connect-4, requiring only $1/4$, $1/3$, and $1/50$ of the clauses, respectively. With the same number of clauses, the WTM outperformed the TM, obtaining peak test accuracies of $98.63\\%$, $90.37\\%$, and $87.91\\%$, respectively. Finally, our novel sampling scheme reduced sample generation time by a factor of $7$.", "which has model ?", "Weighted Tsetlin Machine", 403.0, 427.0], ["Detecting and recognizing text in natural scene images is a challenging, yet not completely solved task. In recent years several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper we present SEE, a step towards semi-supervised neural networks for scene text detection and recognition, that can be optimized end-to-end. Most existing works consist of multiple deep neural networks and several pre-processing steps. In contrast to this, we propose to use a single deep neural network that learns to detect and recognize text from natural images in a semi-supervised way. SEE is a network that integrates and jointly learns a spatial transformer network, which can learn to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content. We introduce the idea behind our novel approach and show its feasibility by performing a range of experiments on standard benchmark datasets, where we achieve competitive results.", "which has model ?", "SEE", 276.0, 279.0], ["Deep Convolutional Neural Network (DCNN) and Transformer have achieved remarkable successes in image recognition. However, their performance in fine-grained image recognition still falls short of actual needs. This paper proposes a Sequence Random Network (SRN) to enhance the performance of DCNN. The output of a DCNN is a one-dimensional feature. 
This one-dimensional feature abstractly represents image information, but it does not express the detailed information of the image well. To address this issue, we use the proposed SRN, which is composed of BiLSTM and several Tanh-Dropout blocks (called BiLSTM-TDN), to further process DCNN one-dimensional features for highlighting the detailed information of the image. After the feature transformation by BiLSTM-TDN, the recognition performance is greatly improved. We conducted experiments on six fine-grained image datasets. Except for FGVC-Aircraft, the accuracy of the proposed methods on the other datasets exceeded 99%. Experimental results show that BiLSTM-TDN is far superior to the existing state-of-the-art methods. In addition to DCNN, BiLSTM-TDN can also be extended to other models, such as Transformer.", "which has model ?", "BiLSTM-TDN", 618.0, 628.0], ["Current state-of-the-art relation extraction methods typically rely on a set of lexical, syntactic, and semantic features, explicitly computed in a pre-processing step. Training feature extraction models requires additional annotated language resources, which severely restricts the applicability and portability of relation extraction to novel languages. Similarly, pre-processing introduces an additional source of error. To address these limitations, we introduce TRE, a Transformer for Relation Extraction, extending the OpenAI Generative Pre-trained Transformer [Radford et al., 2018]. Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform the relation classification and combines it with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions. TRE allows us to learn implicit linguistic features solely from plain text corpora by unsupervised pre-training, before fine-tuning the learned language representations on the relation extraction task. TRE obtains a new state-of-the-art result on the TACRED and SemEval 2010 Task 8 datasets, achieving a test F1 of 67.4 and 87.1, respectively. Furthermore, we observe a significant increase in sample efficiency. With only 20% of the training examples, TRE matches the performance of our baselines and our model trained from scratch on 100% of the TACRED dataset. We open-source our trained models, experiments, and source code.", "which has model ?", "TRE", 467.0, 470.0], ["While modern machine translation has relied on large parallel corpora, a recent line of work has managed to train Neural Machine Translation (NMT) systems from monolingual corpora only (Artetxe et al., 2018c; Lample et al., 2018). Despite the potential of this approach for low-resource settings, existing systems are far behind their supervised counterparts, limiting their practical interest. In this paper, we propose an alternative approach based on phrase-based Statistical Machine Translation (SMT) that significantly closes the gap with supervised systems. Our method profits from the modular architecture of SMT: we first induce a phrase table from monolingual corpora through cross-lingual embedding mappings, combine it with an n-gram language model, and fine-tune hyperparameters through an unsupervised MERT variant. 
In addition, iterative backtranslation improves results further, yielding, for instance, 14.08 and 26.22 BLEU points in WMT 2014 English-German and English-French, respectively, an improvement of more than 7-10 BLEU points over previous unsupervised systems, and closing the gap with supervised SMT (Moses trained on Europarl) down to 2-5 BLEU points. Our implementation is available at https://github.com/artetxem/monoses.", "which has model ?", "SMT", 500.0, 503.0], ["Purpose: To develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel. Methods: Chitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231). Results: These studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties: a fourfold increase in cellular uptake and a 1000-fold reduction in the IC50 of PTX. Conclusion: These advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.", "which keywords ?", "cancer", 668.0, 674.0], ["In recent years, the development of recommender systems has attracted increased interest in several domains, especially in e-learning. Massive Open Online Courses have brought a revolution. However, deficiency in support and personalization in this context drives learners to lose their motivation and leave the learning process. To overcome this problem, we focus on adapting learning activities to learners' needs using a recommender system. This paper attempts to provide an introduction to different recommender systems for e-learning settings, as well as to present our proposed recommender system for massive learning activities in order to provide learners with suitable learning activities to follow the learning process and maintain their motivation. We propose a hybrid knowledge-based recommender system based on ontology for recommendation of e-learning activities to learners in the context of MOOCs. In the proposed recommendation approach, ontology is used to model and represent the knowledge about the domain model, learners and learning activities.", "which keywords ?", "personalization", 225.0, 240.0], ["Open educational resources are currently becoming increasingly available from a multitude of sources and are consequently annotated in many diverse ways. Interoperability concerns that naturally arise can often be resolved through the semantification of metadata descriptions, while at the same time strengthening the knowledge value of resources. 
SKOS can be a solid linking point offering a standard vocabulary for thematic descriptions by referencing semantic thesauri. We propose the enhancement and maintenance of educational resources\u2019 metadata in the form of learning object ontologies and introduce the notion of a learning object ontology repository that can help towards their publication, discovery and reuse. At the same time, linking to thesauri datasets and contextualized sources interrelates learning objects with linked data and exposes them to the Web of Data. We build a set of extensions and workflows on top of contemporary ontology management tools, such as WebProt\u00e9g\u00e9, that can make it suitable as a learning object ontology repository. The proposed approach and implementation can help libraries and universities in discovering, managing and incorporating open educational resources and enhancing current curricula. ", "which keywords ?", "linked data", 840.0, 851.0], ["Aldehyde dehydrogenase 2 deficiency (ALDH2*2) causes facial flushing in response to alcohol consumption in approximately 560 million East Asians. A recent meta-analysis demonstrated a potential link between the ALDH2*2 mutation and Alzheimer\u2019s Disease (AD). Other studies have identified chronic alcohol consumption as a risk factor for AD. In the present study, we show that fibroblasts of an AD patient that also has an ALDH2*2 mutation or overexpression of ALDH2*2 in fibroblasts derived from AD patients harboring the ApoE \u03b54 allele exhibited increased aldehydic load, oxidative stress, and increased mitochondrial dysfunction relative to healthy subjects, and exposure to ethanol exacerbated these dysfunctions. In an in vivo model, daily exposure of WT mice to ethanol for 11 weeks resulted in mitochondrial dysfunction, oxidative stress and increased aldehyde levels in their brains, and these pathologies were greater in ALDH2*2/*2 (homozygous) mice. Following chronic ethanol exposure, the levels of the AD-associated protein, amyloid-\u03b2, and neuroinflammation were higher in the brains of the ALDH2*2/*2 mice relative to WT. Cultured primary cortical neurons of ALDH2*2/*2 mice showed increased sensitivity to ethanol, and there was a greater activation of their primary astrocytes relative to the responses of neurons or astrocytes from the WT mice. Importantly, an activator of ALDH2 and ALDH2*2, Alda-1, blunted the ethanol-induced increases in A\u03b2 and the neuroinflammation in vitro and in vivo. These data indicate that impairment in the metabolism of aldehydes, and specifically ethanol-derived acetaldehyde, is a contributor to AD-associated pathology and highlights the likely risk of alcohol consumption in the general population, especially in East Asians who carry the ALDH2*2 mutation.", "which keywords ?", "Neuroinflammation", 1046.0, 1063.0], ["Herein, a novel electrochemical glucose biosensor based on glucose oxidase (GOx) immobilized on a surface containing platinum nanoparticles (PtNPs) electrodeposited on poly(Azure A) (PAA) previously electropolymerized on activated screen-printed carbon electrodes (GOx-PtNPs-PAA-aSPCEs) is reported. The resulting electrochemical biosensor was validated towards glucose oxidation in real samples and the further electrochemical measurement associated with the generated H2O2. The electrochemical biosensor showed an excellent sensitivity (42.7 \u03bcA mM\u22121 cm\u22122), limit of detection (7.6 \u03bcM), linear range (20 \u03bcM\u20132.3 mM), and good selectivity towards glucose determination. 
Furthermore, and most importantly, the detection of glucose was performed at a low potential (0.2 V vs. Ag). The high performance of the electrochemical biosensor was explained through surface exploration using field emission SEM, XPS, and impedance measurements. The electrochemical biosensor was successfully applied to glucose quantification in several real samples (commercial juices and a plant cell culture medium), exhibiting a high accuracy when compared with a classical spectrophotometric method. This electrochemical biosensor can be easily prepared and opens up a promising alternative for the development of new sensitive glucose sensors.", "which keywords ?", "glucose oxidase", 59.0, 74.0], ["A novel sensitivity-enhanced intrinsic fiber Fabry-Perot interferometer (IFFPI) high temperature sensor based on a hollow-core photonic crystal fiber (HC-PCF) and a modified Vernier effect is proposed and experimentally demonstrated. The all-fiber IFFPIs are easily constructed by splicing one end of the HC-PCF to a leading single mode fiber (SMF) and applying an arc at the other end of the HC-PCF to form a pure silica tip. The modified Vernier effect is formed by three beams of light reflected from the SMF-PCF splicing joint, and the two air/glass interfaces on the ends of the collapsed HC-PCF tip, respectively. In this work, the Vernier effect was applied to high temperature sensing up to 1200\u00b0C for the first time, and the experimental results exhibit good stability and repeatability. The temperature sensitivity, measured from the spectrum envelope, is 14 to 57 times higher than that of other configurations using similar HC-PCFs without the Vernier effect. The proposed sensor has the advantages of high sensitivity, good stability, compactness, ease of fabrication, and has potential application in practical high-temperature measurements.", "which keywords ?", "High temperature sensor", 80.0, 103.0], ["Realizing the theoretical limiting power conversion efficiency (PCE) in perovskite solar cells requires a better understanding and control over the fundamental loss processes occurring in the bulk of the perovskite layer and at the internal semiconductor interfaces in devices. One of the main challenges is to eliminate the presence of charge recombination centres throughout the film, which have been observed to be most densely located at regions near the grain boundaries. Here, we introduce aluminium acetylacetonate to the perovskite precursor solution, which improves the crystal quality by reducing the microstrain in the polycrystalline film. At the same time, we achieve a reduction in the non-radiative recombination rate, a remarkable improvement in the photoluminescence quantum efficiency (PLQE) and a reduction in the electronic disorder deduced from an Urbach energy of only 12.6 meV in complete devices. As a result, we demonstrate a PCE of 19.1% with negligible hysteresis in planar heterojunction solar cells comprising all-organic p- and n-type charge collection layers. 
Our work shows that an additional level of control of perovskite thin film quality is possible via impurity cation doping, and further demonstrates the continuing importance of improving the electronic quality of the perovskite absorber and the nature of the heterojunctions to further improve the solar cell performance.", "which keywords ?", "Solar Cells", 83.0, 94.0], ["Purpose: To develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel. Methods: Chitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231). Results: These studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties: a fourfold increase in cellular uptake and a 1000-fold reduction in the IC50 of PTX. Conclusion: These advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.", "which keywords ?", "GMO", 108.0, 111.0], ["A high-sensitivity fiber-optic strain sensor, based on the Vernier effect and separated Fabry\u2013Perot interferometers (FPIs), is proposed and experimentally demonstrated. One air-cavity FPI is used as a sensing FPI (SFPI) and another is used as a matched FPI (MFPI) to generate the Vernier effect. The two FPIs are connected by a fiber link but separated by a long section of single-mode fiber (SMF). The SFPI is fabricated by splicing a section of microfiber between two SMFs with a large lateral offset, and the MFPI is formed by a section of hollow-core fiber sandwiched between two SMFs. By using the Vernier effect, the strain sensitivity of the proposed sensor reaches $\\text{1.15 nm/}\\mu\\varepsilon$, which is the highest strain sensitivity of an FPI-based sensor reported so far. Owing to the separated structure of the proposed sensor, the MFPI can be isolated from the SFPI and the detection environment. Therefore, the MFPI is not affected by external physical quantities (such as strain and temperature) and thus has a very low temperature cross-sensitivity. The experimental results show that a low-temperature cross-sensitivity of $\\text{0.056 }\\mu \\varepsilon /^{\\circ}\\text{C}$ can be obtained with the proposed sensor. With its advantages of simple fabrication, high strain sensitivity, and low-temperature cross-sensitivity, the proposed sensor has great application prospects in several fields.", "which keywords ?", "Vernier effect", 59.0, 73.0], ["The simultaneous doping effect of Gadolinium (Gd) and Lithium (Li) on a zinc oxide (ZnO) thin\u2010film transistor (TFT) fabricated by spray pyrolysis using a ZrOx gate insulator is reported. 
Li doping in ZnO increases mobility significantly, whereas the presence of Gd improves the stability of the device. The Gd ratio in ZnO is varied from 0% to 20% and the Li ratio from 0% to 10%. The optimized ZnO TFT with codoping of 5% Li and 10% Gd exhibits a linear mobility of 25.87 cm2 V\u22121 s\u22121, a subthreshold swing of 204 mV dec\u22121, an on/off current ratio of \u224810^8, and zero hysteresis voltage. The enhancement of both mobility and stability is due to an increase in grain size by Li incorporation and a decrease of defect states by Gd doping. The negligible threshold voltage shift (\u2206VTH) under gate bias and zero hysteresis are due to the reduced defects in an oxide semiconductor and decreased traps at the LiGdZnO/ZrOx interface. Li doping can balance the reduction of the carrier concentration by Gd doping, which improves the mobility and stability of the ZnO TFT. Therefore, the LiGdZnO TFT shows excellent electrical performance with high stability.", "which keywords ?", "Spray Pyrolysis", 117.0, 132.0], ["Inverted perovskite solar cells (PSCs) have been becoming more and more attractive, owing to their easy fabrication and suppressed hysteresis, while the ion diffusion between the metallic electrode and the perovskite layer limits the long-term stability of devices. In this work, we employed a novel polyethylenimine (PEI) modified cross-stacked superaligned carbon nanotube (CSCNT) film in the inverted planar PSCs configured as FTO/NiOx/methylammonium lead tri-iodide (MAPbI3)/[6,6]-phenyl C61-butyric acid methyl ester (PCBM)/CSCNT:PEI. By modifying CSCNT with a certain concentration of PEI (0.5 wt %), suitable energy level alignment and promoted interfacial charge transfer have been achieved, leading to a significant enhancement in the photovoltaic performance. As a result, a champion power conversion efficiency (PCE) of \u223c11% was obtained with a Voc of 0.95 V, a Jsc of 18.7 mA cm\u22122, an FF of 0.61, as well as negligible hysteresis. Moreover, CSCNT:PEI based inverted PSCs show superior durability in comparison to the standard silver based devices, retaining over 85% of the initial PCE after 500 h of aging under various conditions, including long-term air exposure, thermal, and humid treatment. This work opens up a new avenue of facilely modified carbon electrodes for highly stable and hysteresis-suppressed PSCs.", "which keywords ?", "Polyethylenimine", 291.0, 307.0], ["Purpose: To develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel. Methods: Chitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231). Results: These studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. 
The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX.ConclusionThese advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.", "which keywords ?", "MDA-MB-231", 682.0, 692.0], ["In recent times, polymer-based flexible pressure sensors have been attracting a lot of attention because of their various applications. A highly sensitive and flexible sensor is suggested, capable of being attached to the human body, based on a three-dimensional dielectric elastomeric structure of polydimethylsiloxane (PDMS) and microsphere composite. This sensor has maximal porosity due to macropores created by sacrificial layer grains and micropores generated by microspheres pre-mixed with PDMS, allowing it to operate at a wider pressure range (~150 kPa) while maintaining a sensitivity (of 0.124 kPa\u22121 in a range of 0~15 kPa) better than in previous studies. The maximized pores can cause deformation in the structure, allowing for the detection of small changes in pressure. In addition to exhibiting a fast rise time (~167 ms) and fall time (~117 ms), as well as excellent reproducibility, the fabricated pressure sensor exhibits reliability in its response to repeated mechanical stimuli (2.5 kPa, 1000 cycles). As an application, we develop a wearable device for monitoring repeated tiny motions, such as the pulse on the human neck and swallowing at the Adam\u2019s apple. This sensory device is also used to detect movements in the index finger and to monitor an insole system in real-time.", "which keywords ?", "Pressure sensor", 924.0, 939.0], ["We study localized dissipative structures in a generalized Lugiato-Lefever equation, exhibiting normal group-velocity dispersion and anomalous quartic group-velocity dispersion. In the conservative system, this parameter-regime has proven to enable generalized dispersion Kerr solitons. Here, we demonstrate via numerical simulations that our dissipative system also exhibits equivalent localized states, including special molecule-like two-color bound states recently reported. We investigate their generation, characterize the observed steady-state solution, and analyze their propagation dynamics under perturbations.", "which keywords ?", "Lugiato-Lefever equation", 59.0, 83.0], ["A hybrid cascaded configuration consisting of a fiber Sagnac interferometer (FSI) and a Fabry-Perot interferometer (FPI) was proposed and experimentally demonstrated to enhance the temperature intensity by the Vernier-effect. The FSI, which consists of a certain length of Panda fiber, is for temperature sensing, while the FPI acts as a filter due to its temperature insensitivity. The two interferometers have almost the same free spectral range, with the spectral envelope of the cascaded sensor shifting much more than the single FSI. Experimental results show that the temperature sensitivity is enhanced from \u22121.4 nm/\u00b0C (single FSI) to \u221229.0 (cascaded configuration). 
The enhancement factor is 20.7, which is basically consistent with theoretical analysis (19.9).", "which keywords ?", "Sensitivity", 586.0, 597.0], ["E-learning recommender systems are gaining significance nowadays due to its ability to enhance the learning experience by providing tailor-made services based on learner preferences. A Personalized Learning Environment (PLE) that automatically adapts to learner characteristics such as learning styles and knowledge level can recommend appropriate learning resources that would favor the learning process and improve learning outcomes. The pure cold-start problem is a relevant issue in PLEs, which arises due to the lack of prior information about the new learner in the PLE to create appropriate recommendations. This article introduces a semantic framework based on ontology to address the pure cold-start problem in content recommenders. The ontology encapsulates the domain knowledge about the learners as well as Learning Objects (LOs). The semantic model that we built has been experimented with different combinations of the key learner parameters such as learning style, knowledge level, and background knowledge. The proposed framework utilizes these parameters to build natural learner groups from the learner ontology using SPARQL queries. The ontology holds 480 learners\u2019 data, 468 annotated learning objects with 5,600 learner ratings. A multivariate k-means clustering algorithm, an unsupervised machine learning technique for grouping similar data, is used to evaluate the learner similarity computation accuracy. The learner satisfaction achieved with the proposed model is measured based on the ratings given by the 40 participants of the experiments. From the evaluation perspective, it is evident that 79% of the learners are satisfied with the recommendations generated by the proposed model in pure cold-start condition.", "which keywords ?", "ontology", 669.0, 677.0], ["Recently, pure transformer-based models have shown great potentials for vision tasks such as image classification and detection. However, the design of transformer networks is challenging. It has been observed that the depth, embedding dimension, and number of heads can largely affect the performance of vision transformers. Previous models configure these dimensions based upon manual crafting. In this work, we propose a new one-shot architecture search framework, namely AutoFormer, dedicated to vision transformer search. AutoFormer entangles the weights of different blocks in the same layers during supernet training. Benefiting from the strategy, the trained supernet allows thousands of subnets to be very well-trained. Specifically, the performance of these subnets with weights inherited from the supernet is comparable to those retrained from scratch. Besides, the searched models, which we refer to AutoFormers, surpass the recent state-of-the-arts such as ViT and DeiT. In particular, AutoFormer-tiny/small/base achieve 74.7%/81.7%/82.4% top-1 accuracy on ImageNet with 5.7M/22.9M/53.7M parameters, respectively. Lastly, we verify the transferability of AutoFormer by providing the performance on downstream benchmarks and distillation experiments. Code and models are available at https://github.com/microsoft/Cream.", "which keywords ?", "One-Shot", 428.0, 436.0], ["Abstract New resonant emission of dispersive waves by oscillating solitary structures in optical fiber cavities is considered analytically and numerically. 
The pulse propagation is described in the framework of the Lugiato-Lefever equation when a Hopf-bifurcation can result in the formation of oscillating dissipative solitons. The resonance condition for the radiation of the dissipative oscillating solitons is derived and it is demonstrated that the predicted resonances match the spectral lines observed in numerical simulations perfectly. The complex recoil of the radiation on the soliton dynamics is discussed. The reported effect can have importance for the generation of frequency combs in nonlinear microring resonators.", "which keywords ?", "Oscillating dissipative solitons", 295.0, 327.0], ["In recent years, the development of electronic skin and smart wearable body sensors has put forward high requirements for flexible pressure sensors with high sensitivity and large linear measuring range. However it turns out to be difficult to increase both of them simultaneously. In this paper, a flexible capacitive pressure sensor based on porous carbon conductive paste-PDMS composite is reported, the sensitivity and the linear measuring range of which were developed using multiple methods including adjusting the stiffness of the dielectric layer material, fabricating micro-structure and increasing dielectric permittivity of dielectric layer. The capacitive pressure sensor reported here has a relatively high sensitivity of 1.1 kPa-1 and a large linear measuring range of 10 kPa, making the product of the sensitivity and linear measuring range is 11, which is higher than that of the most reported capacitive pressure sensor to our best knowledge. The sensor has a detection of limit of 4 Pa, response time of 60 ms and great stability. Some potential applications of the sensor were demonstrated such as arterial pulse wave measuring and breathe measuring, which shows a promising candidate for wearable biomedical devices. In addition, a pressure sensor array based on the material was also fabricated and it could identify objects in the shape of different letters clearly, which shows a promising application in the future electronic skins.", "which keywords ?", "Pressure sensor", 319.0, 334.0], ["In this paper, we investigated the performance of an n-type tin-oxide (SnOx) thin film transistor (TFT) by experiments and simulation. The fabricated SnOx TFT device by oxygen plasma treatment on the channel exhibited n-type conduction with an on/off current ratio of 4.4x104, a high field-effect mobility of 18.5 cm2/V.s and a threshold swing of 405 mV/decade, which could be attributed to the excess reacted oxygen incorporated to the channel to form the oxygen-rich n-type SnOx. Furthermore, a TCAD simulation based on the n-type SnOx TFT device was performed by fitting the experimental data to investigate the effect of the channel traps on the device performance, indicating that performance enhancements were further achieved by suppressing the density of channel traps. In addition, the n-type SnOx TFT device exhibited high stability upon illumination with visible light. The results show that the n-type SnOx TFT device by channel plasma processing has considerable potential for next-generation high-performance display application.", "which keywords ?", "TCAD", 497.0, 501.0], ["Radioresistant hypoxic cells may contribute to the failure of radiation therapy in controlling certain tumors. Some studies have suggested the radiosensitizing effect of paclitaxel. 
The poly(D,L-lactide-co-glycolide)(PLGA) nanoparticles containing paclitaxel were prepared by o/w emulsification-solvent evaporation method. The physicochemical characteristics of the nanoparticles (i.e. encapsulation efficiency, particle size distribution, morphology, in vitro release) were studied. The morphology of the two human tumor cell lines: a carcinoma cervicis (HeLa) and a hepatoma (HepG2), treated with paclitaxel-loaded nanoparticles was photomicrographed. Flow cytometry was used to quantify the number of the tumor cells held in the G2/M phase of the cell cycle. The cellular uptake of nanoparticles was evaluated by transmission electronic microscopy. Cell viability was determined by the ability of single cell to form colonies in vitro. The prepared nanoparticles were spherical in shape with size between 200nm and 800nm. The encapsulation efficiency was 85.5\uff05. The release behaviour of paclitaxel from the nanoparticles exhibited a biphasic pattern characterised by a fast initial release during the first 24 h, followed by a slower and continuous release. Co-culture of the two tumor cell lines with paclitaxel-loaded nanoparticles demonstrated that the cell morphology was changed and the released paclitaxel retained its bioactivity to block cells in the G2/M phase. The cellular uptake of nanoparticles was observed. The free paclitaxel and paclitaxel-loaded nanoparticles effectively sensitized hypoxic HeLa and HepG2 cells to radiation. Under this experimental condition, the radiosensitization of paclitaxel-loaded nanoparticles was more significant than that of free paclitaxel.Keywords: Paclitaxel\uff1bDrug delivery\uff1bNanoparticle\uff1bRadiotherapy\uff1bHypoxia\uff1bHuman tumor cells\uff1bcellular uptake", "which keywords ?", "Drug delivery", 1811.0, 1824.0], ["The Vernier effect of two cascaded in-fiber Mach-Zehnder interferometers (MZIs) based on a spherical-shaped structure has been investigated. The envelope based on the Vernier effect is actually formed by a frequency component of the superimposed spectrum, and the frequency value is determined by the subtraction between the optical path differences of two cascaded MZIs. A method based on band-pass filtering is put forward to extract the envelope efficiently; strain and curvature measurements are carried out to verify the validity of the method. The results show that the strain and curvature sensitivities are enhanced to -8.47 pm/\u03bc\u03b5 and -33.70 nm/m-1 with magnification factors of 5.4 and -5.4, respectively. The detection limit of the sensors with the Vernier effect is also discussed.", "which keywords ?", "interferometers", 57.0, 72.0], ["Abstract Despite rapid progress, most of the educational technologies today lack a strong instructional design knowledge basis leading to questionable quality of instruction. In addition, a major challenge is to customize these educational technologies for a wide range of customizable instructional designs. Ontologies are one of the pertinent mechanisms to represent instructional design in the literature. However, existing approaches do not support modeling of flexible instructional designs. To address this problem, in this paper, we propose an ontology based framework for systematic modeling of different aspects of instructional design knowledge based on domain patterns. As part of the framework, we present ontologies for modeling goals , instructional processes and instructional material . 
We demonstrate the ontology framework by presenting instances of the ontology for the large scale case study of adult literacy in India (287 million learners spread across 22 Indian Languages), which requires creation of hundreds of similar but varied e Learning Systems based on flexible instructional designs. The implemented framework is available at http://rice.iiit.ac.in and is transferred to National Literacy Mission Authority of Government of India . The proposed framework could be potentially used for modeling instructional design knowledge for school education, vocational skills and beyond.", "which keywords ?", "instructional design", 90.0, 110.0], ["In this paper, we investigated the performance of an n-type tin-oxide (SnOx) thin film transistor (TFT) by experiments and simulation. The fabricated SnOx TFT device by oxygen plasma treatment on the channel exhibited n-type conduction with an on/off current ratio of 4.4x104, a high field-effect mobility of 18.5 cm2/V.s and a threshold swing of 405 mV/decade, which could be attributed to the excess reacted oxygen incorporated to the channel to form the oxygen-rich n-type SnOx. Furthermore, a TCAD simulation based on the n-type SnOx TFT device was performed by fitting the experimental data to investigate the effect of the channel traps on the device performance, indicating that performance enhancements were further achieved by suppressing the density of channel traps. In addition, the n-type SnOx TFT device exhibited high stability upon illumination with visible light. The results show that the n-type SnOx TFT device by channel plasma processing has considerable potential for next-generation high-performance display application.", "which keywords ?", "Thin film transistor (TFT)", NaN, NaN], ["An electrically conductive ultralow percolation threshold of 0.1 wt% graphene was observed in the thermoplastic polyurethane (TPU) nanocomposites. The homogeneously dispersed graphene effectively enhanced the mechanical properties of TPU significantly at a low graphene loading of 0.2 wt%. These nanocomposites were subjected to cyclic loading to investigate the influences of graphene loading, strain amplitude and strain rate on the strain sensing performances. The two dimensional graphene and the flexible TPU matrix were found to endow these nanocomposites with a wide range of strain sensitivity (gauge factor ranging from 0.78 for TPU with 0.6 wt% graphene at the strain rate of 0.1 min\u22121 to 17.7 for TPU with 0.2 wt% graphene at the strain rate of 0.3 min\u22121) and good sensing stability for different strain patterns. In addition, these nanocomposites demonstrated good recoverability and reproducibility after stabilization by cyclic loading. An analytical model based on tunneling theory was used to simulate the resistance response to strain under different strain rates. The change in the number of conductive pathways and tunneling distance under strain was responsible for the observed resistance-strain behaviors. This study provides guidelines for the fabrication of graphene based polymer strain sensors.", "which keywords ?", "Nanocomposites", 131.0, 145.0], ["Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. 
Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179\u00b122 nm were produced. Res was loaded with high EE of 73\u00b10.9% and DL content of 6.2\u00b10.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.", "which keywords ?", "Breast cancer", 13.0, 26.0], ["A new variant of the classic pulsed laser deposition (PLD) process is introduced as a room-temperature dry process for the growth and stoichiometry control of hybrid perovskite films through the use of nonstoichiometric single target ablation and off-axis growth. Mixed halide hybrid perovskite films nominally represented by CH3NH3PbI3\u2013xAx (A = Cl or F) are also grown and are shown to reveal interesting trends in the optical properties and photoresponse. Growth of good quality lead-free CH3NH3SnI3 films is also demonstrated, and the corresponding optical properties are presented. Finally, perovskite solar cells fabricated at room temperature (which makes the process adaptable to flexible substrates) are shown to yield a conversion efficiency of about 7.7%.", "which keywords ?", "Deposition", 42.0, 52.0], ["The aim of this work is to develop a smart flexible sensor adapted to textile structures, able to measure their strain deformations. The sensors are \u201csmart\u201d because of their capacity to adapt to the specific mechanical properties of textile structures that are lightweight, highly flexible, stretchable, elastic, etc. Because of these properties, textile structures are continuously in movement and easily deformed, even under very low stresses. It is therefore important that the integration of a sensor does not modify their general behavior. The material used for the sensor is based on a thermoplastic elastomer (Evoprene)/carbon black nanoparticle composite, and presents general mechanical properties strongly compatible with the textile substrate. Two preparation techniques are investigated: the conventional melt-mixing process, and the solvent process which is found to be more adapted for this particular application. 
The preparation procedure is fully described, namely the optimization of the process in terms of filler concentration in which the percolation theory aspects have to be considered. The sensor is then integrated on a thin, lightweight Nylon fabric, and the electromechanical characterization is performed to demonstrate the adaptability and the correct functioning of the sensor as a strain gauge on the fabric. A normalized relative resistance is defined in order to characterize the electrical response of the sensor. Finally, the influence of environmental factors, such as temperature and atmospheric humidity, on the sensor performance is investigated. The results show that the sensor's electrical resistance is particularly affected by humidity. This behavior is discussed in terms of the sensitivity of the carbon black filler particles to the presence of water.", "which keywords ?", "carbon black", 627.0, 639.0], ["Highly sensitive, transparent, and durable pressure sensors are fabricated using sea-urchin-shaped metal nanoparticles and insulating polyurethane elastomer. The pressure sensors exhibit outstanding sensitivity (2.46 kPa-1 ), superior optical transmittance (84.8% at 550 nm), fast response/relaxation time (30 ms), and excellent operational durability. In addition, the pressure sensors successfully detect minute movements of human muscles.", "which keywords ?", "metal nanoparticles", 99.0, 118.0], ["A highly sensitive fiber temperature sensor based on in-line Mach-Zehnder interferometers (MZIs) and Vernier effect was proposed and experimentally demonstrated. The MZI was fabricated by splicing a section of hollow core fiber between two pieces of multimode fiber. The temperature sensitivity obtained by extracting envelope dip shift of the superimposed spectrum reaches to 528.5 pm/\u00b0C in the range of 0 \u00b0C\u2013100 \u00b0C, which is 17.5 times as high as that without enhanced by the Vernier effect. The experimental sensitivity amplification factor is close to the theoretical predication (18.3 times).The proposed sensitivity enhancement system employs parallel connecting to implement the Vernier effect, which possesses the advantages of easy fabrication and high flexibility.", "which keywords ?", "Fiber temperature sensor", 19.0, 43.0], ["Ni-MOF (metal-organic framework)/Ni/NiO/carbon frame nanocomposite was formed by combing Ni and NiO nanoparticles and a C frame with Ni-MOF using an efficient one-step calcination method. The morphology and structure of Ni-MOF/Ni/NiO/C nanocomposite were characterized by transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), X-ray diffraction (XRD), and energy disperse spectroscopy (EDS) mapping. Ni-MOF/Ni/NiO/C nanocomposites were immobilized onto glassy carbon electrodes (GCEs) with Nafion film to construct high-performance nonenzymatic glucose and H2O2 electrochemical sensors. Cyclic voltammetric (CV) study showed Ni-MOF/Ni/NiO/C nanocomposite displayed better electrocatalytic activity toward glucose oxidation as compared to Ni-MOF. Amperometric study indicated the glucose sensor displayed high performance, offering a low detection limit (0.8 \u03bcM), a high sensitivity of 367.45 mA M-1 cm-2, and a wide linear range (from 4 to 5664 \u03bcM). Importantly, good reproducibility, long-time stability, and excellent selectivity were obtained within the as-fabricated glucose sensor. 
Furthermore, the constructed high-performance sensor was utilized to monitor the glucose levels in human serum, and satisfactory results were obtained. It demonstrated the Ni-MOF/Ni/NiO/C nanocomposite can be used as a good electrochemical sensing material in practical biological applications.", "which keywords ?", "Glucose sensor", 804.0, 818.0], ["Background: Paclitaxel (PTX) is one of the most important and effective anticancer drugs for the treatment of human cancer. However, its low solubility and severe adverse effects limited clinical use. To overcome this limitation, nanotechnology has been used to overcome tumors due to its excellent antimicrobial activity. Objective: This study was to demonstrate the anticancer properties of functionalization silver nanoparticles loaded with paclitaxel (Ag@PTX) induced A549 cells apoptosis through ROS-mediated signaling pathways. Methods: The Ag@PTX nanoparticles were charged with a zeta potential of about -17 mv and characterized around 2 nm with a narrow size distribution. Results: Ag@PTX significantly decreased the viability of A549 cells and possessed selectivity between cancer and normal cells. Ag@PTX induced A549 cells apoptosis was confirmed by nuclear condensation, DNA fragmentation, and activation of caspase-3. Furthermore, Ag@PTX enhanced the anti-cancer activity of A549 cells through ROS-mediated p53 and AKT signalling pathways. Finally, in a xenograft nude mice model, Ag@PTX suppressed the growth of tumors. Conclusion: Our findings suggest that Ag@PTX may be a candidate as a chemopreventive agent and could be a highly efficient way to achieve anticancer synergism for human cancers.", "which keywords ?", "Silver nanoparticles", 411.0, 431.0], ["Inverted perovskite solar cells (PSCs) have been becoming more and more attractive, owing to their easy-fabrication and suppressed hysteresis, while the ion diffusion between metallic electrode and perovskite layer limit the long-term stability of devices. In this work, we employed a novel polyethylenimine (PEI) modified cross-stacked superaligned carbon nanotube (CSCNT) film in the inverted planar PSCs configurated FTO/NiO x/methylammonium lead tri-iodide (MAPbI3)/6, 6-phenyl C61-butyric acid methyl ester (PCBM)/CSCNT:PEI. By modifying CSCNT with a certain concentration of PEI (0.5 wt %), suitable energy level alignment and promoted interfacial charge transfer have been achieved, leading to a significant enhancement in the photovoltaic performance. As a result, a champion power conversion efficiency (PCE) of \u223c11% was obtained with a Voc of 0.95 V, a Jsc of 18.7 mA cm-2, a FF of 0.61 as well as negligible hysteresis. Moreover, CSCNT:PEI based inverted PSCs show superior durability in comparison to the standard silver based devices, remaining over 85% of the initial PCE after 500 h aging under various conditions, including long-term air exposure, thermal, and humid treatment. This work opens up a new avenue of facile modified carbon electrodes for highly stable and hysteresis suppressed PSCs.", "which keywords ?", "Stability", 235.0, 244.0], ["SnO2 nanowire gas sensors have been fabricated on Cd\u2212Au comb-shaped interdigitating electrodes using thermal evaporation of the mixed powders of SnO2 and active carbon. The self-assembly grown sensors have excellent performance in sensor response to hydrogen concentration in the range of 10 to 1000 ppm. This high response is attributed to the large portion of undercoordinated atoms on the surface of the SnO2 nanowires. 
The influence of the Debye length of the nanowires and the gap between electrodes in the gas sensor response is examined and discussed.", "which keywords ?", "Hydrogen", 250.0, 258.0], ["Bladder cancer (BC) is a very common cancer. Nonmuscle-invasive bladder cancer (NMIBC) is the most common type of bladder cancer. After postoperative tumor resection, chemotherapy intravesical instillation is recommended as a standard treatment to significantly reduce recurrences. Nanomedicine-mediated delivery of a chemotherapeutic agent targeting cancer could provide a solution to obtain longer residence time and high bioavailability of an anticancer drug. The approach described here provides a nanomedicine with sustained and prolonged delivery of paclitaxel and enhanced therapy of intravesical bladder cancer, which is paclitaxel/chitosan (PTX/CS) nanosupensions (NSs). The positively charged PTX/CS NSs exhibited a rod-shaped morphology with a mean diameter about 200 nm. They have good dispersivity in water without any protective agents, and the positively charged properties make them easy to be adsorbed on the inner mucosa of the bladder through electrostatic adsorption. PTX/CS NSs also had a high drug loading capacity and can maintain sustained release of paclitaxel which could be prolonged over 10 days. Cell experiments in vitro demonstrated that PTX/CS NSs had good biocompatibility and effective bladder cancer cell proliferation inhibition. The significant anticancer efficacy against intravesical bladder cancer was verified by an in situ bladder cancer model. The paclitaxel/chitosan nanosupensions could provide sustained delivery of chemotherapeutic agents with significant anticancer efficacy against intravesical bladder cancer.", "which keywords ?", "intravesical instillation", 180.0, 205.0], ["CdTe-based solar cells exhibiting 19% power conversion efficiency were produced using widely available thermal evaporation deposition of the absorber layers on SnO2-coated glass with or without a t...", "which keywords ?", "Layers", 150.0, 156.0], ["Background: Paclitaxel (PTX) is one of the most important and effective anticancer drugs for the treatment of human cancer. However, its low solubility and severe adverse effects limited clinical use. To overcome this limitation, nanotechnology has been used to overcome tumors due to its excellent antimicrobial activity. Objective: This study was to demonstrate the anticancer properties of functionalization silver nanoparticles loaded with paclitaxel (Ag@PTX) induced A549 cells apoptosis through ROS-mediated signaling pathways. Methods: The Ag@PTX nanoparticles were charged with a zeta potential of about -17 mv and characterized around 2 nm with a narrow size distribution. Results: Ag@PTX significantly decreased the viability of A549 cells and possessed selectivity between cancer and normal cells. Ag@PTX induced A549 cells apoptosis was confirmed by nuclear condensation, DNA fragmentation, and activation of caspase-3. Furthermore, Ag@PTX enhanced the anti-cancer activity of A549 cells through ROS-mediated p53 and AKT signalling pathways. Finally, in a xenograft nude mice model, Ag@PTX suppressed the growth of tumors. 
Conclusion: Our findings suggest that Ag@PTX may be a candidate as a chemopreventive agent and could be a highly efficient way to achieve anticancer synergism for human cancers.", "which keywords ?", "Cancer", 116.0, 122.0], ["We have proposed and experimentally demonstrated an ultrasensitive fiber-optic temperature sensor based on two cascaded Fabry\u2013Perot interferometers (FPIs). Vernier effect that significantly improves the sensitivity is generated due to the slight cavity length difference of the sensing and reference FPI. The sensing FPI is composed of a cleaved fiber end-face and UV-cured adhesive while the reference FPI is fabricated by splicing SMF with hollow core fiber. Temperature sensitivity of the sensing FPI is much higher than the reference FPI, which means that the reference FPI need not to be thermally isolated. By curve fitting method, three different temperature sensitivities of 33.07, \u221258.60, and 67.35 nm/\u00b0C have been experimentally demonstrated with different cavity lengths ratio of the sensing and reference FPI, which can be flexibly adjusted to meet different application demands. The proposed probe-type ultrahigh sensitivity temperature sensor is compact and cost effective, which can be applied to special fields, such as biochemical engineering, medical treatment, and nuclear test.", "which keywords ?", "Vernier effect", 156.0, 170.0], ["Herein, we incorporated dual biotemplates, i.e., cellulose nanocrystals (CNC) and apoferritin, into electrospinning solution to achieve three distinct benefits, i.e., (i) facile synthesis of a WO3 nanotube by utilizing the self-agglomerating nature of CNC in the core of as-spun nanofibers, (ii) effective sensitization by partial phase transition from WO3 to Na2W4O13 induced by interaction between sodium-doped CNC and WO3 during calcination, and (iii) uniform functionalization with monodispersive apoferritin-derived Pt catalytic nanoparticles (2.22 \u00b1 0.42 nm). Interestingly, the sensitization effect of Na2W4O13 on WO3 resulted in highly selective H2S sensing characteristics against seven different interfering molecules. Furthermore, synergistic effects with a bioinspired Pt catalyst induced a remarkably enhanced H2S response ( Rair/ Rgas = 203.5), unparalleled selectivity ( Rair/ Rgas < 1.3 for the interfering molecules), and rapid response (<10 s)/recovery (<30 s) time at 1 ppm of H2S under 95% relative humidity level. This work paves the way for a new class of cosensitization routes to overcome critical shortcomings of SMO-based chemical sensors, thus providing a potential platform for diagnosis of halitosis.", "which keywords ?", " WO3 nanotube", 192.0, 205.0], ["This paper reports on the experimental and theoretical characterization of RF microelectromechanical systems (MEMS) switches for high-power applications. First, we investigate the problem of self-actuation due to high RF power and we demonstrate switches that do not self-actuate or catastrophically fail with a measured RF power of up to 5.5 W. Second, the problem of switch stiction to the down state as a function of the applied RF power is also theoretically and experimentally studied. Finally, a novel switch design with a top electrode is introduced and its advantages related to RF power-handling capabilities are presented. By applying this technology, we demonstrate hot-switching measurements with a maximum power of 0.8 W. 
Our results, backed by theory and measurements, illustrate that careful design can significantly improve the power-handling capabilities of RF MEMS switches.", "which keywords ?", "RF MEMS switches", 875.0, 891.0], ["In this study, temperature-dependent electrical properties of n-type Ga-doped ZnO thin film / p-type Si nanowire heterojunction diodes were reported. Metal-assisted chemical etching (MACE) process was performed to fabricate Si nanowires. Ga-doped ZnO films were then deposited onto nanowires through chemical bath deposition (CBD) technique to build three-dimensional nanowire-based heterojunction diodes. Fabricated devices revealed significant diode characteristics in the temperature range of 220 - 360 K. Electrical measurements shown that diodes had a well-defined rectifying behavior with a good rectification ratio of 103 \u00b13 V at room temperature. Ideality factor (n) were changed from 2.2 to 1.2 with increasing temperature.In this study, temperature-dependent electrical properties of n-type Ga-doped ZnO thin film / p-type Si nanowire heterojunction diodes were reported. Metal-assisted chemical etching (MACE) process was performed to fabricate Si nanowires. Ga-doped ZnO films were then deposited onto nanowires through chemical bath deposition (CBD) technique to build three-dimensional nanowire-based heterojunction diodes. Fabricated devices revealed significant diode characteristics in the temperature range of 220 - 360 K. Electrical measurements shown that diodes had a well-defined rectifying behavior with a good rectification ratio of 103 \u00b13 V at room temperature. Ideality factor (n) were changed from 2.2 to 1.2 with increasing temperature.", "which keywords ?", "Si nanowires", 224.0, 236.0], ["Abstract Aldehyde dehydrogenase 2 deficiency (ALDH2*2) causes facial flushing in response to alcohol consumption in approximately 560 million East Asians. Recent meta-analysis demonstrated the potential link between ALDH2*2 mutation and Alzheimer\u2019s Disease (AD). Other studies have linked chronic alcohol consumption as a risk factor for AD. In the present study, we show that fibroblasts of an AD patient that also has an ALDH2*2 mutation or overexpression of ALDH2*2 in fibroblasts derived from AD patients harboring ApoE \u03b54 allele exhibited increased aldehydic load, oxidative stress, and increased mitochondrial dysfunction relative to healthy subjects and exposure to ethanol exacerbated these dysfunctions. In an in vivo model, daily exposure of WT mice to ethanol for 11 weeks resulted in mitochondrial dysfunction, oxidative stress and increased aldehyde levels in their brains and these pathologies were greater in ALDH2*2/*2 (homozygous) mice. Following chronic ethanol exposure, the levels of the AD-associated protein, amyloid-\u03b2, and neuroinflammation were higher in the brains of the ALDH2*2/*2 mice relative to WT. Cultured primary cortical neurons of ALDH2*2/*2 mice showed increased sensitivity to ethanol and there was a greater activation of their primary astrocytes relative to the responses of neurons or astrocytes from the WT mice. Importantly, an activator of ALDH2 and ALDH2*2, Alda-1, blunted the ethanol-induced increases in A\u03b2, and the neuroinflammation in vitro and in vivo. 
These data indicate that impairment in the metabolism of aldehydes, and specifically ethanol-derived acetaldehyde, is a contributor to AD associated pathology and highlights the likely risk of alcohol consumption in the general population and especially in East Asians that carry ALDH2*2 mutation.", "which keywords ?", "Alda-1", 1402.0, 1408.0], ["Breast cancer is a major form of cancer, with a high mortality rate in women. It is crucial to achieve more efficient and safe anticancer drugs. Recent developments in medical nanotechnology have resulted in novel advances in cancer drug delivery. Cisplatin, doxorubicin, and 5-fluorouracil are three important anti-cancer drugs which have poor water-solubility. In this study, we used cisplatin, doxorubicin, and 5-fluorouracil-loaded polycaprolactone-polyethylene glycol (PCL-PEG) nanoparticles to improve the stability and solubility of molecules in drug delivery systems. The nanoparticles were prepared by a double emulsion method and characterized with Fourier Transform Infrared (FTIR) spectroscopy and Hydrogen-1 nuclear magnetic resonance (1HNMR). Cells were treated with equal concentrations of cisplatin, doxorubicin and 5-fluorouracil-loaded PCL-PEG nanoparticles, and free cisplatin, doxorubicin and 5-fluorouracil. The 3-[4,5-dimethylthiazol-2yl]-2,5-diphenyl tetrazolium bromide (MTT) assay confirmed that cisplatin, doxorubicin, and 5-fluorouracil-loaded PCL-PEG nanoparticles enhanced cytotoxicity and drug delivery in T47D and MCF7 breast cancer cells. However, the IC50 value of doxorubicin was lower than the IC50 values of both cisplatin and 5-fluorouracil, where the difference was statistically considered significant (p\u02c20.05). However, the IC50 value of all drugs on T47D were lower than those on MCF7.", "which keywords ?", "Breast cancer", 0.0, 13.0], ["Recently, pure transformer-based models have shown great potentials for vision tasks such as image classification and detection. However, the design of transformer networks is challenging. It has been observed that the depth, embedding dimension, and number of heads can largely affect the performance of vision transformers. Previous models configure these dimensions based upon manual crafting. In this work, we propose a new one-shot architecture search framework, namely AutoFormer, dedicated to vision transformer search. AutoFormer entangles the weights of different blocks in the same layers during supernet training. Benefiting from the strategy, the trained supernet allows thousands of subnets to be very well-trained. Specifically, the performance of these subnets with weights inherited from the supernet is comparable to those retrained from scratch. Besides, the searched models, which we refer to AutoFormers, surpass the recent state-of-the-arts such as ViT and DeiT. In particular, AutoFormer-tiny/small/base achieve 74.7%/81.7%/82.4% top-1 accuracy on ImageNet with 5.7M/22.9M/53.7M parameters, respectively. Lastly, we verify the transferability of AutoFormer by providing the performance on downstream benchmarks and distillation experiments. Code and models are available at https://github.com/microsoft/Cream.", "which keywords ?", "Vision Transformer", 500.0, 518.0], ["In this paper, a novel sensitivity amplification method for fiber-optic in-line Mach-Zehnder interferometer (MZI) sensors has been proposed and demonstrated. The sensitivity magnification is achieved through a modified Vernier-effect. 
Two cascaded in-line MZIs based on offset splicing of single mode fiber (SMF) have been used to verify the effect of sensitivity amplification. Vernier-effect is generated due to the small free spectral range (FSR) difference between the cascaded in-line MZIs. Frequency component corresponding to the envelope of the superimposed spectrum is extracted to take Inverse Fast Fourier Transform (IFFT). Thus we can obtain the envelope precisely from the messy superimposed spectrum. Experimental results show that a maximum sensitivity amplification factor of nearly 9 is realized. The proposed sensitivity amplification method is universal for the vast majority of in-line MZIs.", "which keywords ?", "Interferometer", 93.0, 107.0], ["A highly sensitive fiber temperature sensor based on in-line Mach-Zehnder interferometers (MZIs) and Vernier effect was proposed and experimentally demonstrated. The MZI was fabricated by splicing a section of hollow core fiber between two pieces of multimode fiber. The temperature sensitivity obtained by extracting envelope dip shift of the superimposed spectrum reaches to 528.5 pm/\u00b0C in the range of 0 \u00b0C\u2013100 \u00b0C, which is 17.5 times as high as that without enhanced by the Vernier effect. The experimental sensitivity amplification factor is close to the theoretical predication (18.3 times).The proposed sensitivity enhancement system employs parallel connecting to implement the Vernier effect, which possesses the advantages of easy fabrication and high flexibility.", "which keywords ?", "Mach-Zehnder Interferometer", NaN, NaN], ["Background: Paclitaxel (PTX) is one of the most important and effective anticancer drugs for the treatment of human cancer. However, its low solubility and severe adverse effects limited clinical use. To overcome this limitation, nanotechnology has been used to overcome tumors due to its excellent antimicrobial activity. Objective: This study was to demonstrate the anticancer properties of functionalization silver nanoparticles loaded with paclitaxel (Ag@PTX) induced A549 cells apoptosis through ROS-mediated signaling pathways. Methods: The Ag@PTX nanoparticles were charged with a zeta potential of about -17 mv and characterized around 2 nm with a narrow size distribution. Results: Ag@PTX significantly decreased the viability of A549 cells and possessed selectivity between cancer and normal cells. Ag@PTX induced A549 cells apoptosis was confirmed by nuclear condensation, DNA fragmentation, and activation of caspase-3. Furthermore, Ag@PTX enhanced the anti-cancer activity of A549 cells through ROS-mediated p53 and AKT signalling pathways. Finally, in a xenograft nude mice model, Ag@PTX suppressed the growth of tumors. Conclusion: Our findings suggest that Ag@PTX may be a candidate as a chemopreventive agent and could be a highly efficient way to achieve anticancer synergism for human cancers.", "which keywords ?", "Paclitaxel", 12.0, 22.0], ["Mixed tin (Sn)-lead (Pb) perovskites with high Sn content exhibit low bandgaps suitable for fabricating the bottom cell of perovskite-based tandem solar cells. In this work, we report on the fabrication of efficient mixed Sn-Pb perovskite solar cells using precursors combining formamidinium tin iodide (FASnI3) and methylammonium lead iodide (MAPbI3). 
The best-performing cell fabricated using a (FASnI3)0.6(MAPbI3)0.4 absorber with an absorption edge of \u223c1.2 eV achieved a power conversion efficiency (PCE) of 15.08 (15.00)% with an open-circuit voltage of 0.795 (0.799) V, a short-circuit current density of 26.86(26.82) mA/cm(2), and a fill factor of 70.6(70.0)% when measured under forward (reverse) voltage scan. The average PCE of 50 cells we have fabricated is 14.39 \u00b1 0.33%, indicating good reproducibility.", "which keywords ?", "Solar Cells", 147.0, 158.0], ["Breast cancer is a major form of cancer, with a high mortality rate in women. It is crucial to achieve more efficient and safe anticancer drugs. Recent developments in medical nanotechnology have resulted in novel advances in cancer drug delivery. Cisplatin, doxorubicin, and 5-fluorouracil are three important anti-cancer drugs which have poor water-solubility. In this study, we used cisplatin, doxorubicin, and 5-fluorouracil-loaded polycaprolactone-polyethylene glycol (PCL-PEG) nanoparticles to improve the stability and solubility of molecules in drug delivery systems. The nanoparticles were prepared by a double emulsion method and characterized with Fourier Transform Infrared (FTIR) spectroscopy and Hydrogen-1 nuclear magnetic resonance (1HNMR). Cells were treated with equal concentrations of cisplatin, doxorubicin and 5-fluorouracil-loaded PCL-PEG nanoparticles, and free cisplatin, doxorubicin and 5-fluorouracil. The 3-[4,5-dimethylthiazol-2yl]-2,5-diphenyl tetrazolium bromide (MTT) assay confirmed that cisplatin, doxorubicin, and 5-fluorouracil-loaded PCL-PEG nanoparticles enhanced cytotoxicity and drug delivery in T47D and MCF7 breast cancer cells. However, the IC50 value of doxorubicin was lower than the IC50 values of both cisplatin and 5-fluorouracil, where the difference was statistically considered significant (p\u02c20.05). However, the IC50 value of all drugs on T47D were lower than those on MCF7.", "which keywords ?", "Doxorubicin", 259.0, 270.0], ["Despite significant progress in challenging problems across various domains, applying state-of-the-art deep reinforcement learning (RL) algorithms remains challenging due to their sensitivity to the choice of hyperparameters. This sensitivity can partly be attributed to the non-stationarity of the RL problem, potentially requiring different hyperparameter settings at various stages of the learning process. Additionally, in the RL setting, hyperparameter optimization (HPO) requires a large number of environment interactions, hindering the transfer of the successes in RL to real-world applications. In this work, we tackle the issues of sample-efficient and dynamic HPO in RL. We propose a population-based automated RL (AutoRL) framework to meta-optimize arbitrary off-policy RL algorithms. In this framework, we optimize the hyperparameters and also the neural architecture while simultaneously training the agent. By sharing the collected experience across the population, we substantially increase the sample efficiency of the meta-optimization. 
We demonstrate the capabilities of our sample-efficient AutoRL approach in a case study with the popular TD3 algorithm in the MuJoCo benchmark suite, where we reduce the number of environment interactions needed for meta-optimization by up to an order of magnitude compared to population-based training.", "which keywords ?", "Off-Policy RL", 771.0, 784.0], ["Recently, the $\\beta $ -Ga2O3-based solar-blind ultraviolet photodetector has attracted intensive attention due to its wide application prospects. Photodetector arrays can act as an imaging detector and also improve the detecting sensitivity by series or parallel of detector cells. In this letter, the highly integrated metal-semiconductor-metal structured photodetector arrays of $32\\times 32$ , $16\\times 16$ , $8\\times 8$ , and $4\\times 4$ have been designed and fabricated for the first time. Herein, we present a 4-1 photodetector cell chosen from a $4\\times 4$ photodetector array as an example to demonstrate the performance. The photo responsivity is $8.926 \\times 10^{-1}$ A/W @ 250 nm at a 10-V bias voltage, corresponding to a quantum efficiency of 444%. All of the photodetector cells exhibit the solar-blind ultraviolet photoelectric characteristic and the consistent photo responsivity with a standard deviation of 12.1%. The outcome of the study offers an efficient route toward the development of high-performance and low-cost DUV photodetector arrays.", "which keywords ?", "Metal-semiconductor-metal structure", NaN, NaN], ["RF microelectromechanical systems (MEMS) capacitive switches for two different dielectrics, aluminum nitride (AlN) and silicon nitride (Si3N4), are presented. The switches have been characterized and compared in terms of DC and RF performance (5-40 GHz). Switches based on AlN have higher down-state capacitance for similar dielectric thicknesses and provide better isolation and smaller insertion losses compared to Si3N4 switches. Experiments were carried out on RF MEMS switches with stiffening bars to prevent membrane deformation due to residual stress and with different spring and meander-type anchor designs. For a ~300-nm dielectric thickness, an air gap of 2.3 \u03bcm and identical spring-type designs, the AlN switches systematically show an improvement in the isolation by more than -12 dB (-35.8 dB versus -23.7 dB) and a better insertion loss (-0.68 dB versus -0.90 dB) at 40 GHz compared to Si3N4. DC measurements show small leakage current densities for both dielectrics (<;10-8 A/cm2 at 1 MV/cm). However, the resulting leakage current for AlN devices is ten times higher than for Si3N4 when applying a larger electric field. The fabricated switches were also stressed by applying different voltages in air and vacuum, and dielectric charging effects were investigated. AlN switches eliminate the residual or injected charge faster than the Si3N4 devices do.", "which keywords ?", "aluminum nitride", 92.0, 108.0], ["CdTe-based solar cells exhibiting 19% power conversion efficiency were produced using widely available thermal evaporation deposition of the absorber layers on SnO2-coated glass with or without a t...", "which keywords ?", "Deposition", 123.0, 133.0], ["The construction of a continuous conductive network with a low percolation threshold plays a key role in fabricating a high performance strain sensor. Herein, a highly stretchable and sensitive strain sensor based on binary rubber blend/graphene was fabricated by a simple and effective assembly approach. 
A novel double-interconnected network composed of compactly continuous graphene conductive networks was designed and constructed using the composites, thereby resulting in an ultralow percolation threshold of 0.3 vol%, approximately 12-fold lower than that of the conventional graphene-based composites with a homogeneously dispersed morphology (4.0 vol%). Near the percolation threshold, the sensors could be stretched in excess of 100% applied strain, and exhibited a high stretchability, sensitivity (gauge factor \u223c82.5) and good reproducibility (\u223c300 cycles) of up to 100% strain under cyclic tensile tests. The proposed strategy provides a novel effective approach for constructing a double-interconnected conductive network using polymer composites, and is very competitive for developing and designing high performance strain sensors.", "which keywords ?", "Graphene ", 237.0, 246.0], [" Open educational resources are currently becoming increasingly available from a multitude of sources and are consequently annotated in many diverse ways. Interoperability concerns that naturally arise can often be resolved through the semantification of metadata descriptions, while at the same time strengthening the knowledge value of resources. SKOS can be a solid linking point offering a standard vocabulary for thematic descriptions, by referencing semantic thesauri. We propose the enhancement and maintenance of educational resources\u2019 metadata in the form of learning object ontologies and introduce the notion of a learning object ontology repository that can help towards their publication, discovery and reuse. At the same time, linking to thesauri datasets and contextualized sources interrelates learning objects with linked data and exposes them to the Web of Data. We build a set of extensions and workflows on top of contemporary ontology management tools, such as WebProt\u00e9g\u00e9, that can make it suitable as a learning object ontology repository. The proposed approach and implementation can help libraries and universities in discovering, managing and incorporating open educational resources and enhancing current curricula. ", "which keywords ?", "ontology", 649.0, 657.0], ["Herein, a novel electrochemical glucose biosensor based on glucose oxidase (GOx) immobilized on a surface containing platinum nanoparticles (PtNPs) electrodeposited on poly(Azure A) (PAA) previously electropolymerized on activated screen-printed carbon electrodes (GOx-PtNPs-PAA-aSPCEs) is reported. The resulting electrochemical biosensor was validated towards glucose oxidation in real samples and further electrochemical measurement associated with the generated H2O2. The electrochemical biosensor showed an excellent sensitivity (42.7 \u03bcA mM\u22121 cm\u22122), limit of detection (7.6 \u03bcM), linear range (20 \u03bcM\u20132.3 mM), and good selectivity towards glucose determination. Furthermore, and most importantly, the detection of glucose was performed at a low potential (0.2 V vs. Ag). The high performance of the electrochemical biosensor was explained through surface exploration using field emission SEM, XPS, and impedance measurements. The electrochemical biosensor was successfully applied to glucose quantification in several real samples (commercial juices and a plant cell culture medium), exhibiting a high accuracy when compared with a classical spectrophotometric method. 
This electrochemical biosensor can be easily prepared and opens up a good alternative in the development of new sensitive glucose sensors.", "which keywords ?", "glucose", 32.0, 39.0], ["RF microelectromechanical systems (MEMS) capacitive switches for two different dielectrics, aluminum nitride (AlN) and silicon nitride (Si3N4), are presented. The switches have been characterized and compared in terms of DC and RF performance (5-40 GHz). Switches based on AlN have higher down-state capacitance for similar dielectric thicknesses and provide better isolation and smaller insertion losses compared to Si3N4 switches. Experiments were carried out on RF MEMS switches with stiffening bars to prevent membrane deformation due to residual stress and with different spring and meander-type anchor designs. For a ~300-nm dielectric thickness, an air gap of 2.3 \u03bcm and identical spring-type designs, the AlN switches systematically show an improvement in the isolation by more than -12 dB (-35.8 dB versus -23.7 dB) and a better insertion loss (-0.68 dB versus -0.90 dB) at 40 GHz compared to Si3N4. DC measurements show small leakage current densities for both dielectrics (<;10-8 A/cm2 at 1 MV/cm). However, the resulting leakage current for AlN devices is ten times higher than for Si3N4 when applying a larger electric field. The fabricated switches were also stressed by applying different voltages in air and vacuum, and dielectric charging effects were investigated. AlN switches eliminate the residual or injected charge faster than the Si3N4 devices do.", "which keywords ?", "RF MEMS", 465.0, 472.0], ["In this letter, the AC performance and influence of bending on flexible IGZO thin-film transistors, exhibiting a maximum oscillation frequency (maximum power gain frequency) ${f}_{\\textsf {max}}$ beyond 300 MHz, are presented. Self-alignment was used to realize TFTs with channel length down to 0.5 $\\mu \\text{m}$ . The layout of these TFTs was optimized for good AC performance. Besides the channel dimensions, this includes ground-signal-ground contact pads. The AC performance of these short channel devices was evaluated by measuring their two port scattering parameters. These measurements were used to extract the unity gain power frequency from the maximum stable gain and the unilateral gain. The two complimentary definitions result in ${f}_{\\textsf {max}}$ values of (304 \u00b1 12) and (398 \u00b1 53) MHz, respectively. Furthermore, the transistor performance is not significantly altered by mechanical strain. Here, ${f}_{\\textsf {max}}$ reduces by 3.6% when a TFT is bent to a tensile radius of 3.5 mm.", "which keywords ?", "Gain", 158.0, 162.0], ["Abstract Despite rapid progress, most of the educational technologies today lack a strong instructional design knowledge basis leading to questionable quality of instruction. In addition, a major challenge is to customize these educational technologies for a wide range of customizable instructional designs. Ontologies are one of the pertinent mechanisms to represent instructional design in the literature. However, existing approaches do not support modeling of flexible instructional designs. To address this problem, in this paper, we propose an ontology based framework for systematic modeling of different aspects of instructional design knowledge based on domain patterns. As part of the framework, we present ontologies for modeling goals , instructional processes and instructional material . 
We demonstrate the ontology framework by presenting instances of the ontology for the large scale case study of adult literacy in India (287 million learners spread across 22 Indian Languages), which requires creation of hundreds of similar but varied e Learning Systems based on flexible instructional designs. The implemented framework is available at http://rice.iiit.ac.in and is transferred to National Literacy Mission Authority of Government of India . The proposed framework could be potentially used for modeling instructional design knowledge for school education, vocational skills and beyond.", "which keywords ?", "scale", 895.0, 900.0], ["Mixed tin (Sn)-lead (Pb) perovskites with high Sn content exhibit low bandgaps suitable for fabricating the bottom cell of perovskite-based tandem solar cells. In this work, we report on the fabrication of efficient mixed Sn-Pb perovskite solar cells using precursors combining formamidinium tin iodide (FASnI3) and methylammonium lead iodide (MAPbI3). The best-performing cell fabricated using a (FASnI3)0.6(MAPbI3)0.4 absorber with an absorption edge of \u223c1.2 eV achieved a power conversion efficiency (PCE) of 15.08 (15.00)% with an open-circuit voltage of 0.795 (0.799) V, a short-circuit current density of 26.86(26.82) mA/cm(2), and a fill factor of 70.6(70.0)% when measured under forward (reverse) voltage scan. The average PCE of 50 cells we have fabricated is 14.39 \u00b1 0.33%, indicating good reproducibility.", "which keywords ?", "Power Conversion Efficiency", 475.0, 502.0], ["SnO2 nanowire gas sensors have been fabricated on Cd\u2212Au comb-shaped interdigitating electrodes using thermal evaporation of the mixed powders of SnO2 and active carbon. The self-assembly grown sensors have excellent performance in sensor response to hydrogen concentration in the range of 10 to 1000 ppm. This high response is attributed to the large portion of undercoordinated atoms on the surface of the SnO2 nanowires. The influence of the Debye length of the nanowires and the gap between electrodes in the gas sensor response is examined and discussed.", "which keywords ?", "Sensors", 18.0, 25.0], ["This paper, the first of two parts, presents an electromagnetic model for membrane microelectromechanical systems (MEMS) shunt switches for microwave/millimeter-wave applications. The up-state capacitance can be accurately modeled using three-dimensional static solvers, and full-wave solvers are used to predict the current distribution and inductance of the switch. The loss in the up-state position is equivalent to the coplanar waveguide line loss and is 0.01-0.02 dB at 10-30 GHz for a 2-/spl mu/m-thick Au MEMS shunt switch. It is seen that the capacitance, inductance, and series resistance can be accurately extracted from DC-40 GHz S-parameter measurements. It is also shown that dramatic increase in the down-state isolation (20/sup +/ dB) can be achieved with the choice of the correct LC series resonant frequency of the switch. In part 2 of this paper, the equivalent capacitor-inductor-resistor model is used in the design of tuned high isolation switches at 10 and 30 GHz.", "which keywords ?", "Capacitance", 193.0, 204.0], ["Radioresistant hypoxic cells may contribute to the failure of radiation therapy in controlling certain tumors. Some studies have suggested the radiosensitizing effect of paclitaxel. The poly(D,L-lactide-co-glycolide)(PLGA) nanoparticles containing paclitaxel were prepared by o/w emulsification-solvent evaporation method. 
The physicochemical characteristics of the nanoparticles (i.e. encapsulation efficiency, particle size distribution, morphology, in vitro release) were studied. The morphology of the two human tumor cell lines: a carcinoma cervicis (HeLa) and a hepatoma (HepG2), treated with paclitaxel-loaded nanoparticles was photomicrographed. Flow cytometry was used to quantify the number of the tumor cells held in the G2/M phase of the cell cycle. The cellular uptake of nanoparticles was evaluated by transmission electronic microscopy. Cell viability was determined by the ability of single cell to form colonies in vitro. The prepared nanoparticles were spherical in shape with size between 200nm and 800nm. The encapsulation efficiency was 85.5\uff05. The release behaviour of paclitaxel from the nanoparticles exhibited a biphasic pattern characterised by a fast initial release during the first 24 h, followed by a slower and continuous release. Co-culture of the two tumor cell lines with paclitaxel-loaded nanoparticles demonstrated that the cell morphology was changed and the released paclitaxel retained its bioactivity to block cells in the G2/M phase. The cellular uptake of nanoparticles was observed. The free paclitaxel and paclitaxel-loaded nanoparticles effectively sensitized hypoxic HeLa and HepG2 cells to radiation. Under this experimental condition, the radiosensitization of paclitaxel-loaded nanoparticles was more significant than that of free paclitaxel.Keywords: Paclitaxel\uff1bDrug delivery\uff1bNanoparticle\uff1bRadiotherapy\uff1bHypoxia\uff1bHuman tumor cells\uff1bcellular uptake", "which keywords ?", "Human tumor cells", 1859.0, 1876.0], ["Flexible large area electronics promise to enable new devices such as rollable displays and electronic skins. Radio frequency (RF) applications demand circuits operating in the megahertz regime, which is hard to achieve for electronics fabricated on amorphous and temperature sensitive plastic substrates. Here, we present self-aligned amorphous indium-gallium-zinc oxide-based thin-film transistors (TFTs) fabricated on free-standing plastic foil using fabrication temperatures . Self-alignment by backside illumination between gate and source/drain electrodes was used to realize flexible transistors with a channel length of 0.5 \u03bcm and reduced parasitic capacities. The flexible TFTs exhibit a transit frequency of 135 MHz when operated at 2 V. The device performance is maintained when the TFTs are bent to a tensile radius of 3.5 mm, which makes this technology suitable for flexible RFID tags and AM radios.", "which keywords ?", "Fabrication", 454.0, 465.0], ["In this report, we demonstrate high spectral responsivity (SR) solar blind deep ultraviolet (UV) \u03b2-Ga2O3 metal-semiconductor-metal (MSM) photodetectors grown by the mist chemical-vapor deposition (Mist-CVD) method. The \u03b2-Ga2O3 thin film was grown on c-plane sapphire substrates, and the fabricated MSM PDs with Al contacts in an interdigitated geometry were found to exhibit peak SR>150A/W for the incident light wavelength of 254 nm at a bias of 20 V. The devices exhibited very low dark current, about 14 pA at 20 V, and showed sharp transients with a photo-to-dark current ratio>105. The corresponding external quantum efficiency is over 7 \u00d7 104%. 
The excellent deep UV \u03b2-Ga2O3 photodetectors will enable significant advancements for the next-generation photodetection applications.", "which keywords ?", "CVD", 202.0, 205.0], ["Device modeling of CH3NH3PbI3\u2212xCl3 perovskite-based solar cells was performed. The perovskite solar cells employ a similar structure with inorganic semiconductor solar cells, such as Cu(In,Ga)Se2, and the exciton in the perovskite is Wannier-type. We, therefore, applied one-dimensional device simulator widely used in the Cu(In,Ga)Se2 solar cells. A high open-circuit voltage of 1.0 V reported experimentally was successfully reproduced in the simulation, and also other solar cell parameters well consistent with real devices were obtained. In addition, the effect of carrier diffusion length of the absorber and interface defect densities at front and back sides and the optimum thickness of the absorber were analyzed. The results revealed that the diffusion length experimentally reported is long enough for high efficiency, and the defect density at the front interface is critical for high efficiency. Also, the optimum absorber thickness well consistent with the thickness range of real devices was derived.", "which keywords ?", "Solar Cells", 52.0, 63.0], ["Radioresistant hypoxic cells may contribute to the failure of radiation therapy in controlling certain tumors. Some studies have suggested the radiosensitizing effect of paclitaxel. The poly(D,L-lactide-co-glycolide)(PLGA) nanoparticles containing paclitaxel were prepared by o/w emulsification-solvent evaporation method. The physicochemical characteristics of the nanoparticles (i.e. encapsulation efficiency, particle size distribution, morphology, in vitro release) were studied. The morphology of the two human tumor cell lines: a carcinoma cervicis (HeLa) and a hepatoma (HepG2), treated with paclitaxel-loaded nanoparticles was photomicrographed. Flow cytometry was used to quantify the number of the tumor cells held in the G2/M phase of the cell cycle. The cellular uptake of nanoparticles was evaluated by transmission electronic microscopy. Cell viability was determined by the ability of single cell to form colonies in vitro. The prepared nanoparticles were spherical in shape with size between 200nm and 800nm. The encapsulation efficiency was 85.5\uff05. The release behaviour of paclitaxel from the nanoparticles exhibited a biphasic pattern characterised by a fast initial release during the first 24 h, followed by a slower and continuous release. Co-culture of the two tumor cell lines with paclitaxel-loaded nanoparticles demonstrated that the cell morphology was changed and the released paclitaxel retained its bioactivity to block cells in the G2/M phase. The cellular uptake of nanoparticles was observed. The free paclitaxel and paclitaxel-loaded nanoparticles effectively sensitized hypoxic HeLa and HepG2 cells to radiation. Under this experimental condition, the radiosensitization of paclitaxel-loaded nanoparticles was more significant than that of free paclitaxel.Keywords: Paclitaxel\uff1bDrug delivery\uff1bNanoparticle\uff1bRadiotherapy\uff1bHypoxia\uff1bHuman tumor cells\uff1bcellular uptake", "which keywords ?", "Paclitaxel", 170.0, 180.0], ["This paper presents the design, fabrication and measurements of a novel vertical electrostatic RF MEMS switch which utilizes the lateral thermal buckle-beam actuator design in order to reduce the switch sensitivity to thermal stresses. 
The effect of biaxial and stress gradients are taken into consideration, and the buckle-beam designs show minimal sensitivity to these stresses. Several switches with 4,8, and 12 suspension beams are presented. All the switches demonstrate a low sensitivity to temperature, and the variation in the pull-in voltage is ~ -50 mV/\u00b0C from 25-125\u00b0C. The change in the up-state capacitance for the same temperature range is <; \u00b1 3%. The switches also exhibit excellent RF and mechanical performances, and a capacitance ratio of ~ 20-23 (C\u03c5. = 85-115 fF, Cd = 1.7-2.6 pF) with Q > 150 at 10 GHz in the up-state position is reported. The mechanical resonant frequencies and quality factors are f\u03bf = 60-160 kHz and Qm = 2.3-4.5, respectively. The measured switching and release times are ~ 2-5 \u03bcs and ~ 5-6.5 \u03bcs, respectively. Power handling measurements show good stability with ~ 4 W of incident power at 10 GHz.", "which keywords ?", "Switches", 389.0, 397.0], ["Programs offered by academic institutions in higher education need to meet specific standards that are established by the appropriate accreditation bodies. Curriculum mapping is an important part of the curriculum management process that is used to document the expected learning outcomes, ensure quality, and align programs and courses with industry standards. Semantic web languages can be used to express and share common agreement about the vocabularies used in the domain under study. In this paper, we present an approach based on ontology for curriculum mapping in higher education. Our proposed approach is focused on the creation of a core curriculum ontology that can support effective knowledge representation and knowledge discovery. The research work presents the case of ontology reuse through the extension of the curriculum ontology to support the creation of micro-credentials. We also present a conceptual framework for knowledge discovery to support various business use case scenarios based on ontology inferencing and querying operations.", "which keywords ?", "curriculum mapping", 156.0, 174.0], ["Breast cancer is a major form of cancer, with a high mortality rate in women. It is crucial to achieve more efficient and safe anticancer drugs. Recent developments in medical nanotechnology have resulted in novel advances in cancer drug delivery. Cisplatin, doxorubicin, and 5-fluorouracil are three important anti-cancer drugs which have poor water-solubility. In this study, we used cisplatin, doxorubicin, and 5-fluorouracil-loaded polycaprolactone-polyethylene glycol (PCL-PEG) nanoparticles to improve the stability and solubility of molecules in drug delivery systems. The nanoparticles were prepared by a double emulsion method and characterized with Fourier Transform Infrared (FTIR) spectroscopy and Hydrogen-1 nuclear magnetic resonance (1HNMR). Cells were treated with equal concentrations of cisplatin, doxorubicin and 5-fluorouracil-loaded PCL-PEG nanoparticles, and free cisplatin, doxorubicin and 5-fluorouracil. The 3-[4,5-dimethylthiazol-2yl]-2,5-diphenyl tetrazolium bromide (MTT) assay confirmed that cisplatin, doxorubicin, and 5-fluorouracil-loaded PCL-PEG nanoparticles enhanced cytotoxicity and drug delivery in T47D and MCF7 breast cancer cells. However, the IC50 value of doxorubicin was lower than the IC50 values of both cisplatin and 5-fluorouracil, where the difference was statistically considered significant (p\u02c20.05). 
However, the IC50 value of all drugs on T47D were lower than those on MCF7.", "which keywords ?", "Cisplatin", 248.0, 257.0], ["Breast cancer is a major form of cancer, with a high mortality rate in women. It is crucial to achieve more efficient and safe anticancer drugs. Recent developments in medical nanotechnology have resulted in novel advances in cancer drug delivery. Cisplatin, doxorubicin, and 5-fluorouracil are three important anti-cancer drugs which have poor water-solubility. In this study, we used cisplatin, doxorubicin, and 5-fluorouracil-loaded polycaprolactone-polyethylene glycol (PCL-PEG) nanoparticles to improve the stability and solubility of molecules in drug delivery systems. The nanoparticles were prepared by a double emulsion method and characterized with Fourier Transform Infrared (FTIR) spectroscopy and Hydrogen-1 nuclear magnetic resonance (1HNMR). Cells were treated with equal concentrations of cisplatin, doxorubicin and 5-fluorouracil-loaded PCL-PEG nanoparticles, and free cisplatin, doxorubicin and 5-fluorouracil. The 3-[4,5-dimethylthiazol-2yl]-2,5-diphenyl tetrazolium bromide (MTT) assay confirmed that cisplatin, doxorubicin, and 5-fluorouracil-loaded PCL-PEG nanoparticles enhanced cytotoxicity and drug delivery in T47D and MCF7 breast cancer cells. However, the IC50 value of doxorubicin was lower than the IC50 values of both cisplatin and 5-fluorouracil, where the difference was statistically considered significant (p\u02c20.05). However, the IC50 value of all drugs on T47D were lower than those on MCF7.", "which keywords ?", "anti-cancer drugs", 311.0, 328.0], ["In this letter, we report a flexible Indium-Gallium-Zinc-Oxide quasi-vertical thin-film transistor (QVTFT) with 300-nm channel length, fabricated on a free-standing polyimide foil, using a low-temperature process <;150 \u00b0C. A bilayer lift-off process is used to structure a spacing layer with a tilted sidewall and the drain contact on top of the source electrode. The resulting quasi-vertical profile ensures a good coverage of the successive device layers. The fabricated flexible QVTFT exhibits an ON/OFF current ratio of 104, a threshold voltage of 1.5 V, a maximum transconductance of 0.73 \u03bcS \u03bcm-1, and a total gate capacitance of 76 nF \u03bcm-1. From S-parameter measurements, we extracted a transit frequency of 1.5 MHz. Furthermore, the flexible QVTFT is fully operational when bent to a tensile radius of 5 mm.", "which keywords ?", "Capacitance", 620.0, 631.0], ["Compositional engineering of recently arising methylammonium (MA) lead (Pb) halide based perovskites is an essential approach for finding better perovskite compositions to resolve still remaining issues of toxic Pb, long-term instability, etc. In this work, we carried out crystallographic, morphological, optical, and photovoltaic characterization of compositional MASn0.6Pb0.4I3-xBrx by gradually introducing bromine (Br) into parental Pb-Sn binary perovskite (MASn0.6Pb0.4I3) to elucidate its function in Sn-rich (Sn:Pb = 6:4) perovskites. We found significant advances in crystallinity and dense coverage of the perovskite films by inserting the Br into Sn-rich perovskite lattice. Furthermore, light-intensity-dependent open circuit voltage (Voc) measurement revealed much suppressed trap-assisted recombination for a proper Br-added (x = 0.4) device. 
These contributed to attaining the unprecedented power conversion efficiency of 12.1% and Voc of 0.78 V, which are, to the best of our knowledge, the highest performance in the Sn-rich (\u226560%) perovskite solar cells reported so far. In addition, impressive enhancement of photocurrent-output stability and little hysteresis were found, which paves the way for the development of environmentally benign (Pb reduction), stable monolithic tandem cells using the developed low band gap (1.24-1.26 eV) MASn0.6Pb0.4I3-xBrx with suggested composition (x = 0.2-0.4).", "which keywords ?", "Bromine", 411.0, 418.0], ["Inverted perovskite solar cells (PSCs) have been becoming more and more attractive, owing to their easy-fabrication and suppressed hysteresis, while the ion diffusion between metallic electrode and perovskite layer limit the long-term stability of devices. In this work, we employed a novel polyethylenimine (PEI) modified cross-stacked superaligned carbon nanotube (CSCNT) film in the inverted planar PSCs configurated FTO/NiO x/methylammonium lead tri-iodide (MAPbI3)/6, 6-phenyl C61-butyric acid methyl ester (PCBM)/CSCNT:PEI. By modifying CSCNT with a certain concentration of PEI (0.5 wt %), suitable energy level alignment and promoted interfacial charge transfer have been achieved, leading to a significant enhancement in the photovoltaic performance. As a result, a champion power conversion efficiency (PCE) of \u223c11% was obtained with a Voc of 0.95 V, a Jsc of 18.7 mA cm-2, a FF of 0.61 as well as negligible hysteresis. Moreover, CSCNT:PEI based inverted PSCs show superior durability in comparison to the standard silver based devices, remaining over 85% of the initial PCE after 500 h aging under various conditions, including long-term air exposure, thermal, and humid treatment. This work opens up a new avenue of facile modified carbon electrodes for highly stable and hysteresis suppressed PSCs.", "which keywords ?", "Power conversion efficiency", 784.0, 811.0], ["Herein, we incorporated dual biotemplates, i.e., cellulose nanocrystals (CNC) and apoferritin, into electrospinning solution to achieve three distinct benefits, i.e., (i) facile synthesis of a WO3 nanotube by utilizing the self-agglomerating nature of CNC in the core of as-spun nanofibers, (ii) effective sensitization by partial phase transition from WO3 to Na2W4O13 induced by interaction between sodium-doped CNC and WO3 during calcination, and (iii) uniform functionalization with monodispersive apoferritin-derived Pt catalytic nanoparticles (2.22 \u00b1 0.42 nm). Interestingly, the sensitization effect of Na2W4O13 on WO3 resulted in highly selective H2S sensing characteristics against seven different interfering molecules. Furthermore, synergistic effects with a bioinspired Pt catalyst induced a remarkably enhanced H2S response ( Rair/ Rgas = 203.5), unparalleled selectivity ( Rair/ Rgas < 1.3 for the interfering molecules), and rapid response (<10 s)/recovery (<30 s) time at 1 ppm of H2S under 95% relative humidity level. This work paves the way for a new class of cosensitization routes to overcome critical shortcomings of SMO-based chemical sensors, thus providing a potential platform for diagnosis of halitosis.", "which keywords ?", "biotemplates", 29.0, 41.0], ["In recent years, the development of recommender systems has attracted increased interest in several domains, especially in e-learning. Massive Open Online Courses have brought a revolution. 
However, deficiency in support and personalization in this context drive learners to lose their motivation and leave the learning process. To overcome this problem we focus on adapting learning activities to learners' needs using a recommender system.This paper attempts to provide an introduction to different recommender systems for e-learning settings, as well as to present our proposed recommender system for massive learning activities in order to provide learners with the suitable learning activities to follow the learning process and maintain their motivation. We propose a hybrid knowledge-based recommender system based on ontology for recommendation of e-learning activities to learners in the context of MOOCs. In the proposed recommendation approach, ontology is used to model and represent the knowledge about the domain model, learners and learning activities.", "which keywords ?", "learning activities", 375.0, 394.0], ["Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of Hyperparameter Optimization (HPO) more important than ever. Here, we combine the advantages of the popular bandit-based HPO method Hyperband (HB) and the evolutionary search approach of Differential Evolution (DE) to yield a new HPO method which we call DEHB. Comprehensive results on a very broad range of HPO problems, as well as a wide range of tabular benchmarks from neural architecture search, demonstrate that DEHB achieves strong performance far more robustly than all previous HPO methods we are aware of, especially for high-dimensional problems with discrete input dimensions. For example, DEHB is up to 1000x faster than random search. It is also efficient in computational time, conceptually simple and easy to implement, positioning it well to become a new default HPO method.", "which keywords ?", "DEHB", 385.0, 389.0], ["Herein, we incorporated dual biotemplates, i.e., cellulose nanocrystals (CNC) and apoferritin, into electrospinning solution to achieve three distinct benefits, i.e., (i) facile synthesis of a WO3 nanotube by utilizing the self-agglomerating nature of CNC in the core of as-spun nanofibers, (ii) effective sensitization by partial phase transition from WO3 to Na2W4O13 induced by interaction between sodium-doped CNC and WO3 during calcination, and (iii) uniform functionalization with monodispersive apoferritin-derived Pt catalytic nanoparticles (2.22 \u00b1 0.42 nm). Interestingly, the sensitization effect of Na2W4O13 on WO3 resulted in highly selective H2S sensing characteristics against seven different interfering molecules. Furthermore, synergistic effects with a bioinspired Pt catalyst induced a remarkably enhanced H2S response ( Rair/ Rgas = 203.5), unparalleled selectivity ( Rair/ Rgas < 1.3 for the interfering molecules), and rapid response (<10 s)/recovery (<30 s) time at 1 ppm of H2S under 95% relative humidity level. This work paves the way for a new class of cosensitization routes to overcome critical shortcomings of SMO-based chemical sensors, thus providing a potential platform for diagnosis of halitosis.", "which keywords ?", " chemical sensor", NaN, NaN], ["An electrically conductive ultralow percolation threshold of 0.1 wt% graphene was observed in the thermoplastic polyurethane (TPU) nanocomposites. The homogeneously dispersed graphene effectively enhanced the mechanical properties of TPU significantly at a low graphene loading of 0.2 wt%. 
These nanocomposites were subjected to cyclic loading to investigate the influences of graphene loading, strain amplitude and strain rate on the strain sensing performances. The two dimensional graphene and the flexible TPU matrix were found to endow these nanocomposites with a wide range of strain sensitivity (gauge factor ranging from 0.78 for TPU with 0.6 wt% graphene at the strain rate of 0.1 min\u22121 to 17.7 for TPU with 0.2 wt% graphene at the strain rate of 0.3 min\u22121) and good sensing stability for different strain patterns. In addition, these nanocomposites demonstrated good recoverability and reproducibility after stabilization by cyclic loading. An analytical model based on tunneling theory was used to simulate the resistance response to strain under different strain rates. The change in the number of conductive pathways and tunneling distance under strain was responsible for the observed resistance-strain behaviors. This study provides guidelines for the fabrication of graphene based polymer strain sensors.", "which keywords ?", "Strain Sensors", 1305.0, 1319.0], ["The construction of a continuous conductive network with a low percolation threshold plays a key role in fabricating a high performance strain sensor. Herein, a highly stretchable and sensitive strain sensor based on binary rubber blend/graphene was fabricated by a simple and effective assembly approach. A novel double-interconnected network composed of compactly continuous graphene conductive networks was designed and constructed using the composites, thereby resulting in an ultralow percolation threshold of 0.3 vol%, approximately 12-fold lower than that of the conventional graphene-based composites with a homogeneously dispersed morphology (4.0 vol%). Near the percolation threshold, the sensors could be stretched in excess of 100% applied strain, and exhibited a high stretchability, sensitivity (gauge factor \u223c82.5) and good reproducibility (\u223c300 cycles) of up to 100% strain under cyclic tensile tests. The proposed strategy provides a novel effective approach for constructing a double-interconnected conductive network using polymer composites, and is very competitive for developing and designing high performance strain sensors.", "which keywords ?", "Strain Sensor", 136.0, 149.0], ["This paper reports on the experimental and theoretical characterization of RF microelectromechanical systems (MEMS) switches for high-power applications. First, we investigate the problem of self-actuation due to high RF power and we demonstrate switches that do not self-actuate or catastrophically fail with a measured RF power of up to 5.5 W. Second, the problem of switch stiction to the down state as a function of the applied RF power is also theoretically and experimentally studied. Finally, a novel switch design with a top electrode is introduced and its advantages related to RF power-handling capabilities are presented. By applying this technology, we demonstrate hot-switching measurements with a maximum power of 0.8 W. Our results, backed by theory and measurements, illustrate that careful design can significantly improve the power-handling capabilities of RF MEMS switches.", "which keywords ?", "Self-actuation", 191.0, 205.0], ["A new technique for the fabrication of radio frequency (RF) microelectromechanical systems (MEMS) shunt switches in recessed coplaner waveguide (CPW) configuration on glass substrates is presented. Membranes with low spring constant are used for reducing the pull-in voltage. 
A layer of silicon dioxide is deposited on glass wafer and is used to form the recess, which partially defines the gap between the membrane and signal line. Positive photoresist S1813 is used as a sacrificial layer and gold as the membrane material. The membranes are released with the help of Pirhana solution and finally rinsed in low surface tension liquid to avoid stiction during release. Switches with 500 \u00b5m long two-meander membranes show very high isolation of greater than 40 dB at their resonant frequency of 61 GHz and pull-in voltage less than 15 V, while switches with 700 \u00b5m long six-strip membranes show isolation greater than 30 dB at the frequency of 65 GHz and pull-in voltage less than 10 V. Both types of switches show insertion loss less than 0.65 dB up to 65 GHz.", "which keywords ?", "Switches", 104.0, 112.0], ["RF microelectromechanical systems (MEMS) capacitive switches for two different dielectrics, aluminum nitride (AlN) and silicon nitride (Si3N4), are presented. The switches have been characterized and compared in terms of DC and RF performance (5-40 GHz). Switches based on AlN have higher down-state capacitance for similar dielectric thicknesses and provide better isolation and smaller insertion losses compared to Si3N4 switches. Experiments were carried out on RF MEMS switches with stiffening bars to prevent membrane deformation due to residual stress and with different spring and meander-type anchor designs. For a ~300-nm dielectric thickness, an air gap of 2.3 \u03bcm and identical spring-type designs, the AlN switches systematically show an improvement in the isolation by more than -12 dB (-35.8 dB versus -23.7 dB) and a better insertion loss (-0.68 dB versus -0.90 dB) at 40 GHz compared to Si3N4. DC measurements show small leakage current densities for both dielectrics (<;10-8 A/cm2 at 1 MV/cm). However, the resulting leakage current for AlN devices is ten times higher than for Si3N4 when applying a larger electric field. The fabricated switches were also stressed by applying different voltages in air and vacuum, and dielectric charging effects were investigated. AlN switches eliminate the residual or injected charge faster than the Si3N4 devices do.", "which keywords ?", "silicon nitride", 119.0, 134.0], ["In this study, temperature-dependent electrical properties of n-type Ga-doped ZnO thin film / p-type Si nanowire heterojunction diodes were reported. Metal-assisted chemical etching (MACE) process was performed to fabricate Si nanowires. Ga-doped ZnO films were then deposited onto nanowires through chemical bath deposition (CBD) technique to build three-dimensional nanowire-based heterojunction diodes. Fabricated devices revealed significant diode characteristics in the temperature range of 220 - 360 K. Electrical measurements shown that diodes had a well-defined rectifying behavior with a good rectification ratio of 103 \u00b13 V at room temperature. Ideality factor (n) were changed from 2.2 to 1.2 with increasing temperature.", "which keywords ?", "Ga-doped ZnO films", 238.0, 256.0], ["An ultrasensitive refractive index (RI) sensor based on enhanced Vernier effect is proposed, which consists of two cascaded fiber core-offset pairs. One pair functions as a Mach-Zehnder interferometer (MZI), the other with larger core offset as a low-finesse Fabry-Perot interferometer (FPI). In traditional Vernier-effect based sensors, an interferometer insensitive to environment change is used as sensing reference. Here in the proposed sensor, interference fringes of the MZI and the FPI shift to opposite directions as ambient RI varies, and to the same direction as surrounding temperature changes. Thus, the envelope of superimposed fringe manifests enhanced Vernier effect for RI sensing while reduced Vernier effect for temperature change. As a result, an ultra-high RI sensitivity of -87261.06 nm/RIU is obtained near the RI of 1.33 with good linearity, while the temperature sensitivity is as low as 204.7 pm/ \u00b0C. The proposed structure is robust and of low cost. Furthermore, the proposed scheme of enhanced Vernier effect provides a new perspective and idea in other sensing field. \u00a9 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement", "which keywords ?", "Vernier effect", 65.0, 79.0], ["In this paper, a liquid-based micro thermal convective accelerometer (MTCA) is optimized by the Rayleigh number (Ra) based compact model and fabricated using the $0.35\\mu $ m CMOS MEMS technology. To achieve water-proof performance, the conformal Parylene C coating was adopted as the isolation layer with the accelerated life-testing results of a 9-year-lifetime for liquid-based MTCA. Then, the device performance was characterized considering sensitivity, response time, and noise. Both the theoretical and experimental results demonstrated that fluid with a larger Ra number can provide better performance for the MTCA. More significantly, Ra based model showed its advantage to make a more accurate prediction than the simple linear model to select suitable fluid to enhance the sensitivity and balance the linear range of the device. Accordingly, an alcohol-based MTCA was achieved with a two-order-of magnitude increase in sensitivity (43.8 mV/g) and one-order-of-magnitude decrease in the limit of detection (LOD) ( $61.9~\\mu \\text{g}$ ) compared with the air-based MTCA. [2021-0092]", "which keywords ?", "MEMS", 270.0, 274.0], ["Recently, pure transformer-based models have shown great potentials for vision tasks such as image classification and detection. However, the design of transformer networks is challenging. It has been observed that the depth, embedding dimension, and number of heads can largely affect the performance of vision transformers. Previous models configure these dimensions based upon manual crafting. In this work, we propose a new one-shot architecture search framework, namely AutoFormer, dedicated to vision transformer search. AutoFormer entangles the weights of different blocks in the same layers during supernet training. Benefiting from the strategy, the trained supernet allows thousands of subnets to be very well-trained. Specifically, the performance of these subnets with weights inherited from the supernet is comparable to those retrained from scratch. 
Besides, the searched models, which we refer to AutoFormers, surpass the recent state-of-the-arts such as ViT and DeiT. In particular, AutoFormer-tiny/small/base achieve 74.7%/81.7%/82.4% top-1 accuracy on ImageNet with 5.7M/22.9M/53.7M parameters, respectively. Lastly, we verify the transferability of AutoFormer by providing the performance on downstream benchmarks and distillation experiments. Code and models are available at https://github.com/microsoft/Cream.", "which keywords ?", "ImageNet", 1070.0, 1078.0], ["Mechanically flexible vertical-channel-structured thin-film transistors (VTFTs) with a channel length of 200 nm were fabricated on 1.2 \u03bcm thick colorless polyimide (CPI) substrates. All layers comp...", "which keywords ?", "Layers", 186.0, 192.0], ["Two organolead halide perovskite nanocrystals, CH(3)NH(3)PbBr(3) and CH(3)NH(3)PbI(3), were found to efficiently sensitize TiO(2) for visible-light conversion in photoelectrochemical cells. When self-assembled on mesoporous TiO(2) films, the nanocrystalline perovskites exhibit strong band-gap absorptions as semiconductors. The CH(3)NH(3)PbI(3)-based photocell with spectral sensitivity of up to 800 nm yielded a solar energy conversion efficiency of 3.8%. The CH(3)NH(3)PbBr(3)-based cell showed a high photovoltage of 0.96 V with an external quantum conversion efficiency of 65%.", "which keywords ?", "Perovskites", 258.0, 269.0], ["Device modeling of CH3NH3PbI3\u2212xCl3 perovskite-based solar cells was performed. The perovskite solar cells employ a similar structure with inorganic semiconductor solar cells, such as Cu(In,Ga)Se2, and the exciton in the perovskite is Wannier-type. We, therefore, applied one-dimensional device simulator widely used in the Cu(In,Ga)Se2 solar cells. A high open-circuit voltage of 1.0 V reported experimentally was successfully reproduced in the simulation, and also other solar cell parameters well consistent with real devices were obtained. In addition, the effect of carrier diffusion length of the absorber and interface defect densities at front and back sides and the optimum thickness of the absorber were analyzed. The results revealed that the diffusion length experimentally reported is long enough for high efficiency, and the defect density at the front interface is critical for high efficiency. Also, the optimum absorber thickness well consistent with the thickness range of real devices was derived.", "which keywords ?", "Inorganic Semiconductor", 138.0, 161.0], ["Abstract With the increased dependence on online learning platforms and educational resource repositories, a unified representation of digital learning resources becomes essential to support a dynamic and multi-source learning experience. We introduce the EduCOR ontology, an educational, career-oriented ontology that provides a foundation for representing online learning resources for personalised learning systems. The ontology is designed to enable learning material repositories to offer learning path recommendations, which correspond to the user\u2019s learning goals and preferences, academic and psychological parameters, and labour-market skills. We present the multiple patterns that compose the EduCOR ontology, highlighting its cross-domain applicability and integrability with other ontologies. A demonstration of the proposed ontology on the real-life learning platform eDoer is discussed as a use case. We evaluate the EduCOR ontology using both gold standard and task-based approaches. 
The comparison of EduCOR to three gold schemata, and its application in two use-cases, shows its coverage and adaptability to multiple OER repositories, which allows generating user-centric and labour-market oriented recommendations. Resource : https://tibonto.github.io/educor/.", "which keywords ?", "Education", NaN, NaN], ["Current approaches to RDF graph indexing suffer from weak data locality, i.e., information regarding a piece of data appears in multiple locations, spanning multiple data structures. Weak data locality negatively impacts storage and query processing costs. Towards stronger data locality, we propose a Three-way Triple Tree (TripleT) secondary memory indexing technique to facilitate flexible and efficient join evaluation on RDF data. The novelty of TripleT is that the index is built over the atoms occurring in the data set, rather than at a coarser granularity, such as whole triples occurring in the data set; and, the atoms are indexed regardless of the roles (i.e., subjects, predicates, or objects) they play in the triples of the data set. We show through extensive empirical evaluation that TripleT exhibits multiple orders of magnitude improvement over the state-of-the-art, in terms of both storage and query processing costs.", "which keywords ?", "Weak data Locality", 53.0, 71.0], ["A novel parallel structured fiber-optic Fabry-Perot interferometer (FPI) based on Vernier-effect is theoretically proposed and experimentally demonstrated for ultrasensitive strain measurement. This proposed sensor consists of open-cavity and closed-cavity fiber-optic FPI, both of which are connected in parallel via a 3 dB coupler. The open-cavity is implemented for sensing, while the closed-cavity for reference. Experimental results show that the proposed parallel structured fiber-optic FPI can provide an ultra-high strain sensitivity of -43.2 pm/\u03bc\u03b5, which is 4.6 times higher than that of a single open-cavity FPI. Furthermore, the sensor is simple in fabrication, robust in structure, and stable in measurement. Finally, the parallel structured fiber-optic FPI scheme proposed in this paper can also be applied to other sensing field, and provide a new perspective idea for high sensitivity sensing.", "which keywords ?", "Strain", 174.0, 180.0], ["Abstract Despite rapid progress, most of the educational technologies today lack a strong instructional design knowledge basis leading to questionable quality of instruction. In addition, a major challenge is to customize these educational technologies for a wide range of customizable instructional designs. Ontologies are one of the pertinent mechanisms to represent instructional design in the literature. However, existing approaches do not support modeling of flexible instructional designs. To address this problem, in this paper, we propose an ontology based framework for systematic modeling of different aspects of instructional design knowledge based on domain patterns. As part of the framework, we present ontologies for modeling goals , instructional processes and instructional material . We demonstrate the ontology framework by presenting instances of the ontology for the large scale case study of adult literacy in India (287 million learners spread across 22 Indian Languages), which requires creation of hundreds of similar but varied e Learning Systems based on flexible instructional designs. 
The implemented framework is available at http://rice.iiit.ac.in and is transferred to National Literacy Mission Authority of Government of India . The proposed framework could be potentially used for modeling instructional design knowledge for school education, vocational skills and beyond.", "which keywords ?", "ontology", 551.0, 559.0], ["E-learning recommender systems are gaining significance nowadays due to its ability to enhance the learning experience by providing tailor-made services based on learner preferences. A Personalized Learning Environment (PLE) that automatically adapts to learner characteristics such as learning styles and knowledge level can recommend appropriate learning resources that would favor the learning process and improve learning outcomes. The pure cold-start problem is a relevant issue in PLEs, which arises due to the lack of prior information about the new learner in the PLE to create appropriate recommendations. This article introduces a semantic framework based on ontology to address the pure cold-start problem in content recommenders. The ontology encapsulates the domain knowledge about the learners as well as Learning Objects (LOs). The semantic model that we built has been experimented with different combinations of the key learner parameters such as learning style, knowledge level, and background knowledge. The proposed framework utilizes these parameters to build natural learner groups from the learner ontology using SPARQL queries. The ontology holds 480 learners\u2019 data, 468 annotated learning objects with 5,600 learner ratings. A multivariate k-means clustering algorithm, an unsupervised machine learning technique for grouping similar data, is used to evaluate the learner similarity computation accuracy. The learner satisfaction achieved with the proposed model is measured based on the ratings given by the 40 participants of the experiments. From the evaluation perspective, it is evident that 79% of the learners are satisfied with the recommendations generated by the proposed model in pure cold-start condition.", "which keywords ?", "pure cold-start problem", 440.0, 463.0], ["Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of Hyperparameter Optimization (HPO) more important than ever. Here, we combine the advantages of the popular bandit-based HPO method Hyperband (HB) and the evolutionary search approach of Differential Evolution (DE) to yield a new HPO method which we call DEHB. Comprehensive results on a very broad range of HPO problems, as well as a wide range of tabular benchmarks from neural architecture search, demonstrate that DEHB achieves strong performance far more robustly than all previous HPO methods we are aware of, especially for high-dimensional problems with discrete input dimensions. For example, DEHB is up to 1000x faster than random search. It is also efficient in computational time, conceptually simple and easy to implement, positioning it well to become a new default HPO method.", "which keywords ?", "evolutionary search", 285.0, 304.0], ["A novel and highly sensitive nonenzymatic glucose biosensor was developed by nucleating colloidal silver nanoparticles (AgNPs) on MoS2. The facile fabrication method, high reproducibility (97.5%) and stability indicates a promising capability for large-scale manufacturing. 
Additionally, the excellent sensitivity (9044.6 \u03bcA\u00b7mM\u22121\u00b7cm\u22122), low detection limit (0.03 \u03bcM), appropriate linear range of 0.1\u20131000 \u03bcM, and high selectivity suggests that this biosensor has a great potential to be applied for noninvasive glucose detection in human body fluids, such as sweat and saliva.", "which keywords ?", "colloidal silver nanoparticle", NaN, NaN], ["A highly sensitive fiber temperature sensor based on in-line Mach-Zehnder interferometers (MZIs) and Vernier effect was proposed and experimentally demonstrated. The MZI was fabricated by splicing a section of hollow core fiber between two pieces of multimode fiber. The temperature sensitivity obtained by extracting envelope dip shift of the superimposed spectrum reaches to 528.5 pm/\u00b0C in the range of 0 \u00b0C\u2013100 \u00b0C, which is 17.5 times as high as that without enhanced by the Vernier effect. The experimental sensitivity amplification factor is close to the theoretical predication (18.3 times).The proposed sensitivity enhancement system employs parallel connecting to implement the Vernier effect, which possesses the advantages of easy fabrication and high flexibility.", "which keywords ?", "sensitivity enhancement", 610.0, 633.0], ["A novel sensitivity-enhanced intrinsic fiber Fabry-Perot interferometer (IFFPI) high temperature sensor based on a hollow- core photonic crystal fiber (HC-PCF) and modified Vernier effect is proposed and experimentally demonstrated. The all fiber IFFPIs are easily constructed by splicing one end of the HC-PCF to a leading single mode fiber (SMF) and applying an arc at the other end of the HC-PCF to form a pure silica tip. The modified Vernier effect is formed by three beams of lights reflected from the SMF-PCF splicing joint, and the two air/glass interfaces on the ends of the collapsed HC-PCF tip, respectively. Vernier effect was first applied to high temperature sensing up to 1200\u00b0C, in this work, and the experimental results exhibit good stability and repeatability. The temperature sensitivity, measured from the spectrum envelope, is 14 to 57 times higher than that of other configurations using similar HC-PCFs without the Vernier effect. The proposed sensor has the advantages of high sensitivity, good stability, compactness, ease of fabrication, and has potential application in practical high-temperature measurements.", "which keywords ?", "Vernier effect", 173.0, 187.0], ["Two organolead halide perovskite nanocrystals, CH(3)NH(3)PbBr(3) and CH(3)NH(3)PbI(3), were found to efficiently sensitize TiO(2) for visible-light conversion in photoelectrochemical cells. When self-assembled on mesoporous TiO(2) films, the nanocrystalline perovskites exhibit strong band-gap absorptions as semiconductors. The CH(3)NH(3)PbI(3)-based photocell with spectral sensitivity of up to 800 nm yielded a solar energy conversion efficiency of 3.8%. The CH(3)NH(3)PbBr(3)-based cell showed a high photovoltage of 0.96 V with an external quantum conversion efficiency of 65%.", "which keywords ?", "Electrochemical Cells", NaN, NaN], ["E-learning recommender systems are gaining significance nowadays due to its ability to enhance the learning experience by providing tailor-made services based on learner preferences. A Personalized Learning Environment (PLE) that automatically adapts to learner characteristics such as learning styles and knowledge level can recommend appropriate learning resources that would favor the learning process and improve learning outcomes. 
The pure cold-start problem is a relevant issue in PLEs, which arises due to the lack of prior information about the new learner in the PLE to create appropriate recommendations. This article introduces a semantic framework based on ontology to address the pure cold-start problem in content recommenders. The ontology encapsulates the domain knowledge about the learners as well as Learning Objects (LOs). The semantic model that we built has been experimented with different combinations of the key learner parameters such as learning style, knowledge level, and background knowledge. The proposed framework utilizes these parameters to build natural learner groups from the learner ontology using SPARQL queries. The ontology holds 480 learners\u2019 data, 468 annotated learning objects with 5,600 learner ratings. A multivariate k-means clustering algorithm, an unsupervised machine learning technique for grouping similar data, is used to evaluate the learner similarity computation accuracy. The learner satisfaction achieved with the proposed model is measured based on the ratings given by the 40 participants of the experiments. From the evaluation perspective, it is evident that 79% of the learners are satisfied with the recommendations generated by the proposed model in pure cold-start condition.", "which keywords ?", "learning objects", 819.0, 835.0], ["The development of multidrug resistance (due to drug efflux by P-glycoproteins) is a major drawback with the use of paclitaxel (PTX) in the treatment of cancer. The rationale behind this study is to prepare PTX nanoparticles (NPs) for the reversal of multidrug resistance based on the fact that PTX loaded into NPs is not recognized by P-glycoproteins and hence is not effluxed out of the cell. Also, the intracellular penetration of the NPs could be enhanced by anchoring transferrin (Tf) on the PTX-PLGA-NPs. PTX-loaded PLGA NPs (PTX-PLGA-NPs), Pluronic\u00aeP85-coated PLGA NPs (P85-PTX-PLGA-NPs), and Tf-anchored PLGA NPs (Tf-PTX-PLGA-NPs) were prepared and evaluted for cytotoxicity and intracellular uptake using C6 rat glioma cell line. A significant increase in cytotoxicity was observed in the order of Tf-PTX-PLGA-NPs > P85-PTX-PLGA-NPs > PTX-PLGA-NPs in comparison to drug solution. In vivo biodistribution on male Sprague\u2013Dawley rats bearing C6 glioma (subcutaneous) showed higher tumor PTX concentrations in animals administered with PTX-NPs compared to drug solution.", "which keywords ?", "multidrug resistance", 19.0, 39.0], ["The simultaneous doping effect of Gadolinium (Gd) and Lithium (Li) on zinc oxide (ZnO) thin\u2010film transistor (TFT) by spray pyrolysis using a ZrOx gate insulator is reported. Li doping in ZnO increases mobility significantly, whereas the presence of Gd improves the stability of the device. The Gd ratio in ZnO is varied from 0% to 20% and the Li ratio from 0% to 10%. The optimized ZnO TFT with codoping of 5% Li and 10% Gd exhibits the linear mobility of 25.87 cm2 V\u22121 s\u22121, the subthreshold swing of 204 mV dec\u22121, on/off current ratio of \u2248108, and zero hysteresis voltage. The enhancement of both mobility and stability is due to an increase in grain size by Li incorporation and decrease of defect states by Gd doping. The negligible threshold voltage shift (\u2206VTH) under gate bias and zero hysteresis are due to the reduced defects in an oxide semiconductor and decreased traps at the LiGdZnO/ZrOx interface. 
Li doping can balance the reduction of the carrier concentration by Gd doping, which improves the mobility and stability of the ZnO TFT. Therefore, LiGdZnO TFT shows excellent electrical performance with high stability.", "which keywords ?", "Zinc Oxide", 70.0, 80.0], ["This paper presents the design, fabrication and measurements of a novel vertical electrostatic RF MEMS switch which utilizes the lateral thermal buckle-beam actuator design in order to reduce the switch sensitivity to thermal stresses. The effect of biaxial and stress gradients are taken into consideration, and the buckle-beam designs show minimal sensitivity to these stresses. Several switches with 4,8, and 12 suspension beams are presented. All the switches demonstrate a low sensitivity to temperature, and the variation in the pull-in voltage is ~ -50 mV/\u00b0C from 25-125\u00b0C. The change in the up-state capacitance for the same temperature range is <; \u00b1 3%. The switches also exhibit excellent RF and mechanical performances, and a capacitance ratio of ~ 20-23 (C\u03c5. = 85-115 fF, Cd = 1.7-2.6 pF) with Q > 150 at 10 GHz in the up-state position is reported. The mechanical resonant frequencies and quality factors are f\u03bf = 60-160 kHz and Qm = 2.3-4.5, respectively. The measured switching and release times are ~ 2-5 \u03bcs and ~ 5-6.5 \u03bcs, respectively. Power handling measurements show good stability with ~ 4 W of incident power at 10 GHz.", "which keywords ?", "Electrostatic", 81.0, 94.0], ["In recent years, the development of recommender systems has attracted increased interest in several domains, especially in e-learning. Massive Open Online Courses have brought a revolution. However, deficiency in support and personalization in this context drive learners to lose their motivation and leave the learning process. To overcome this problem we focus on adapting learning activities to learners' needs using a recommender system.This paper attempts to provide an introduction to different recommender systems for e-learning settings, as well as to present our proposed recommender system for massive learning activities in order to provide learners with the suitable learning activities to follow the learning process and maintain their motivation. We propose a hybrid knowledge-based recommender system based on ontology for recommendation of e-learning activities to learners in the context of MOOCs. In the proposed recommendation approach, ontology is used to model and represent the knowledge about the domain model, learners and learning activities.", "which keywords ?", "e-learning", 123.0, 133.0], ["Herein, a novel electrochemical glucose biosensor based on glucose oxidase (GOx) immobilized on a surface containing platinum nanoparticles (PtNPs) electrodeposited on poly(Azure A) (PAA) previously electropolymerized on activated screen-printed carbon electrodes (GOx-PtNPs-PAA-aSPCEs) is reported. The resulting electrochemical biosensor was validated towards glucose oxidation in real samples and further electrochemical measurement associated with the generated H2O2. The electrochemical biosensor showed an excellent sensitivity (42.7 \u03bcA mM\u22121 cm\u22122), limit of detection (7.6 \u03bcM), linear range (20 \u03bcM\u20132.3 mM), and good selectivity towards glucose determination. Furthermore, and most importantly, the detection of glucose was performed at a low potential (0.2 V vs. Ag). 
The high performance of the electrochemical biosensor was explained through surface exploration using field emission SEM, XPS, and impedance measurements. The electrochemical biosensor was successfully applied to glucose quantification in several real samples (commercial juices and a plant cell culture medium), exhibiting a high accuracy when compared with a classical spectrophotometric method. This electrochemical biosensor can be easily prepared and opens up a good alternative in the development of new sensitive glucose sensors.", "which keywords ?", "platinum nanoparticles", 117.0, 139.0], ["Sensitivity of the sensor is of great importance in practical applications of wearable electronics or smart robotics. In the present study, a capacitive sensor enhanced by a tilted micropillar array-structured dielectric layer is developed. Because the tilted micropillars undergo bending deformation rather than compression deformation, the distance between the electrodes is easier to change, even discarding the contribution of the air gap at the interface of the structured dielectric layer and the electrode, thus resulting in high pressure sensitivity (0.42 kPa-1) and very small detection limit (1 Pa). In addition, eliminating the presence of uncertain air gap, the dielectric layer is strongly bonded with the electrode, which makes the structure robust and endows the sensor with high stability and reliable capacitance response. These characteristics allow the device to remain in normal use without the need for repair or replacement despite mechanical damage. Moreover, the proposed sensor can be tailored to any size and shape, which is further demonstrated in wearable application. This work provides a new strategy for sensors that are required to be sensitive and reliable in actual applications.", "which keywords ?", "Capacitive", 142.0, 152.0], ["Compositional engineering of recently arising methylammonium (MA) lead (Pb) halide based perovskites is an essential approach for finding better perovskite compositions to resolve still remaining issues of toxic Pb, long-term instability, etc. In this work, we carried out crystallographic, morphological, optical, and photovoltaic characterization of compositional MASn0.6Pb0.4I3-xBrx by gradually introducing bromine (Br) into parental Pb-Sn binary perovskite (MASn0.6Pb0.4I3) to elucidate its function in Sn-rich (Sn:Pb = 6:4) perovskites. We found significant advances in crystallinity and dense coverage of the perovskite films by inserting the Br into Sn-rich perovskite lattice. Furthermore, light-intensity-dependent open circuit voltage (Voc) measurement revealed much suppressed trap-assisted recombination for a proper Br-added (x = 0.4) device. These contributed to attaining the unprecedented power conversion efficiency of 12.1% and Voc of 0.78 V, which are, to the best of our knowledge, the highest performance in the Sn-rich (\u226560%) perovskite solar cells reported so far. In addition, impressive enhancement of photocurrent-output stability and little hysteresis were found, which paves the way for the development of environmentally benign (Pb reduction), stable monolithic tandem cells using the developed low band gap (1.24-1.26 eV) MASn0.6Pb0.4I3-xBrx with suggested composition (x = 0.2-0.4).", "which keywords ?", "Solar Cells", 1060.0, 1071.0], ["Abstract Despite rapid progress, most of the educational technologies today lack a strong instructional design knowledge basis leading to questionable quality of instruction. 
In addition, a major challenge is to customize these educational technologies for a wide range of customizable instructional designs. Ontologies are one of the pertinent mechanisms to represent instructional design in the literature. However, existing approaches do not support modeling of flexible instructional designs. To address this problem, in this paper, we propose an ontology based framework for systematic modeling of different aspects of instructional design knowledge based on domain patterns. As part of the framework, we present ontologies for modeling goals, instructional processes and instructional material. We demonstrate the ontology framework by presenting instances of the ontology for the large scale case study of adult literacy in India (287 million learners spread across 22 Indian Languages), which requires creation of hundreds of similar but varied eLearning Systems based on flexible instructional designs. The implemented framework is available at http://rice.iiit.ac.in and is transferred to National Literacy Mission Authority of Government of India. The proposed framework could be potentially used for modeling instructional design knowledge for school education, vocational skills and beyond.", "which keywords ?", "educational technologies", 45.0, 69.0], ["SnO2 nanowire gas sensors have been fabricated on Cd\u2212Au comb-shaped interdigitating electrodes using thermal evaporation of the mixed powders of SnO2 and active carbon. The self-assembly grown sensors have excellent performance in sensor response to hydrogen concentration in the range of 10 to 1000 ppm. This high response is attributed to the large portion of undercoordinated atoms on the surface of the SnO2 nanowires. The influence of the Debye length of the nanowires and the gap between electrodes in the gas sensor response is examined and discussed.", "which keywords ?", "Electrodes", 84.0, 94.0], [" Open educational resources are currently becoming increasingly available from a multitude of sources and are consequently annotated in many diverse ways. Interoperability concerns that naturally arise can often be resolved through the semantification of metadata descriptions, while at the same time strengthening the knowledge value of resources. SKOS can be a solid linking point offering a standard vocabulary for thematic descriptions, by referencing semantic thesauri. We propose the enhancement and maintenance of educational resources\u2019 metadata in the form of learning object ontologies and introduce the notion of a learning object ontology repository that can help towards their publication, discovery and reuse. At the same time, linking to thesauri datasets and contextualized sources interrelates learning objects with linked data and exposes them to the Web of Data. We build a set of extensions and workflows on top of contemporary ontology management tools, such as WebProt\u00e9g\u00e9, that can make it suitable as a learning object ontology repository. The proposed approach and implementation can help libraries and universities in discovering, managing and incorporating open educational resources and enhancing current curricula. ", "which keywords ?", "learning objects", 818.0, 834.0], ["In this work, pure and IIIA element doped ZnO thin films were grown on p-type silicon (Si) with (100) orientated surface by sol-gel method, and were characterized for comparing their electrical characteristics. 
The heterojunction parameters were obtained from the current-voltage (I-V) and capacitance-voltage (C-V) characteristics at room temperature. The ideality factor (n), saturation current (Io) and junction resistance of ZnO/p-Si heterojunction for both pure and doped (with Al or In) cases were determined by using different methods at room ambient. Other electrical parameters such as Fermi energy level (EF), barrier height (\u03a6B), acceptor concentration (Na), built-in potential (\u03a6i) and voltage dependence of surface states (Nss) profile were obtained from the C-V measurements. The results reveal that doping ZnO with IIIA (Al or In) elements to fabricate n-ZnO/p-Si heterojunction can result in high performance diode characteristics.", "which keywords ?", "Heterojunction parameters", 215.0, 240.0], ["E-learning recommender systems are gaining significance nowadays due to their ability to enhance the learning experience by providing tailor-made services based on learner preferences. A Personalized Learning Environment (PLE) that automatically adapts to learner characteristics such as learning styles and knowledge level can recommend appropriate learning resources that would favor the learning process and improve learning outcomes. The pure cold-start problem is a relevant issue in PLEs, which arises due to the lack of prior information about the new learner in the PLE to create appropriate recommendations. This article introduces a semantic framework based on ontology to address the pure cold-start problem in content recommenders. The ontology encapsulates the domain knowledge about the learners as well as Learning Objects (LOs). The semantic model that we built has been experimented with different combinations of the key learner parameters such as learning style, knowledge level, and background knowledge. The proposed framework utilizes these parameters to build natural learner groups from the learner ontology using SPARQL queries. The ontology holds 480 learners\u2019 data, 468 annotated learning objects with 5,600 learner ratings. A multivariate k-means clustering algorithm, an unsupervised machine learning technique for grouping similar data, is used to evaluate the learner similarity computation accuracy. The learner satisfaction achieved with the proposed model is measured based on the ratings given by the 40 participants of the experiments. From the evaluation perspective, it is evident that 79% of the learners are satisfied with the recommendations generated by the proposed model in pure cold-start condition.", "which keywords ?", "content recommenders", 720.0, 740.0], ["The development of multidrug resistance (due to drug efflux by P-glycoproteins) is a major drawback with the use of paclitaxel (PTX) in the treatment of cancer. The rationale behind this study is to prepare PTX nanoparticles (NPs) for the reversal of multidrug resistance based on the fact that PTX loaded into NPs is not recognized by P-glycoproteins and hence is not effluxed out of the cell. Also, the intracellular penetration of the NPs could be enhanced by anchoring transferrin (Tf) on the PTX-PLGA-NPs. PTX-loaded PLGA NPs (PTX-PLGA-NPs), Pluronic\u00aeP85-coated PLGA NPs (P85-PTX-PLGA-NPs), and Tf-anchored PLGA NPs (Tf-PTX-PLGA-NPs) were prepared and evaluated for cytotoxicity and intracellular uptake using C6 rat glioma cell line. A significant increase in cytotoxicity was observed in the order of Tf-PTX-PLGA-NPs > P85-PTX-PLGA-NPs > PTX-PLGA-NPs in comparison to drug solution. 
In vivo biodistribution on male Sprague\u2013Dawley rats bearing C6 glioma (subcutaneous) showed higher tumor PTX concentrations in animals administered with PTX-NPs compared to drug solution.", "which keywords ?", "Paclitaxel", 116.0, 126.0], ["The effects of gallium doping into indium\u2013zinc\u2013tin oxide (IZTO) thin film transistors (TFTs) and Ar/O2 plasma treatment on the performance of a\u2010IZTO TFT are reported. The Ga doping ratio is varied from 0 to 20%, and it is found that 10% gallium doping in a\u2010IZTO TFT results in a saturation mobility (\u00b5sat) of 11.80 cm2 V\u22121 s\u22121, a threshold voltage (Vth) of 0.17 V, subthreshold swing (SS) of 94 mV dec\u22121, and on/off current ratio (Ion/Ioff) of 1.21 \u00d7 107. Additionally, the performance of 10% Ga\u2010doped IZTO TFT can be further improved by Ar/O2 plasma treatment. It is found that 30 s plasma treatment gives the best TFT performances such as \u00b5sat of 30.60 cm2 V\u22121 s\u22121, Vth of 0.12 V, SS of 92 mV dec\u22121, and Ion/Ioff ratio of 7.90 \u00d7 107. The bias\u2010stability of 10% Ga\u2010doped IZTO TFT is also improved by 30 s plasma treatment. The enhancement of the TFT performance appears to be due to the reduction in the oxygen vacancy and \u2212OH concentrations.", "which keywords ?", "Transistors", 74.0, 85.0], ["
We introduce a solution-processed copper tin sulfide (CTS) thin film to realize high-performance thin-film transistors (TFT) by optimizing the CTS precursor solution concentration.
", "which keywords ?", "Transistors", 113.0, 124.0], ["This paper reports on the experimental and theoretical characterization of RF microelectromechanical systems (MEMS) switches for high-power applications. First, we investigate the problem of self-actuation due to high RF power and we demonstrate switches that do not self-actuate or catastrophically fail with a measured RF power of up to 5.5 W. Second, the problem of switch stiction to the down state as a function of the applied RF power is also theoretically and experimentally studied. Finally, a novel switch design with a top electrode is introduced and its advantages related to RF power-handling capabilities are presented. By applying this technology, we demonstrate hot-switching measurements with a maximum power of 0.8 W. Our results, backed by theory and measurements, illustrate that careful design can significantly improve the power-handling capabilities of RF MEMS switches.", "which keywords ?", "High-power applications", 129.0, 152.0], ["Compositional engineering of recently arising methylammonium (MA) lead (Pb) halide based perovskites is an essential approach for finding better perovskite compositions to resolve still remaining issues of toxic Pb, long-term instability, etc. In this work, we carried out crystallographic, morphological, optical, and photovoltaic characterization of compositional MASn0.6Pb0.4I3-xBrx by gradually introducing bromine (Br) into parental Pb-Sn binary perovskite (MASn0.6Pb0.4I3) to elucidate its function in Sn-rich (Sn:Pb = 6:4) perovskites. We found significant advances in crystallinity and dense coverage of the perovskite films by inserting the Br into Sn-rich perovskite lattice. Furthermore, light-intensity-dependent open circuit voltage (Voc) measurement revealed much suppressed trap-assisted recombination for a proper Br-added (x = 0.4) device. These contributed to attaining the unprecedented power conversion efficiency of 12.1% and Voc of 0.78 V, which are, to the best of our knowledge, the highest performance in the Sn-rich (\u226560%) perovskite solar cells reported so far. In addition, impressive enhancement of photocurrent-output stability and little hysteresis were found, which paves the way for the development of environmentally benign (Pb reduction), stable monolithic tandem cells using the developed low band gap (1.24-1.26 eV) MASn0.6Pb0.4I3-xBrx with suggested composition (x = 0.2-0.4).", "which keywords ?", "Sn-rich", 508.0, 515.0], ["Radioresistant hypoxic cells may contribute to the failure of radiation therapy in controlling certain tumors. Some studies have suggested the radiosensitizing effect of paclitaxel. The poly(D,L-lactide-co-glycolide)(PLGA) nanoparticles containing paclitaxel were prepared by o/w emulsification-solvent evaporation method. The physicochemical characteristics of the nanoparticles (i.e. encapsulation efficiency, particle size distribution, morphology, in vitro release) were studied. The morphology of the two human tumor cell lines: a carcinoma cervicis (HeLa) and a hepatoma (HepG2), treated with paclitaxel-loaded nanoparticles was photomicrographed. Flow cytometry was used to quantify the number of the tumor cells held in the G2/M phase of the cell cycle. The cellular uptake of nanoparticles was evaluated by transmission electronic microscopy. Cell viability was determined by the ability of single cell to form colonies in vitro. The prepared nanoparticles were spherical in shape with size between 200nm and 800nm. The encapsulation efficiency was 85.5\uff05. 
The release behaviour of paclitaxel from the nanoparticles exhibited a biphasic pattern characterised by a fast initial release during the first 24 h, followed by a slower and continuous release. Co-culture of the two tumor cell lines with paclitaxel-loaded nanoparticles demonstrated that the cell morphology was changed and the released paclitaxel retained its bioactivity to block cells in the G2/M phase. The cellular uptake of nanoparticles was observed. The free paclitaxel and paclitaxel-loaded nanoparticles effectively sensitized hypoxic HeLa and HepG2 cells to radiation. Under this experimental condition, the radiosensitization of paclitaxel-loaded nanoparticles was more significant than that of free paclitaxel. Keywords: Paclitaxel\uff1bDrug delivery\uff1bNanoparticle\uff1bRadiotherapy\uff1bHypoxia\uff1bHuman tumor cells\uff1bcellular uptake", "which keywords ?", "Cellular uptake", 766.0, 781.0], ["Abstract New resonant emission of dispersive waves by oscillating solitary structures in optical fiber cavities is considered analytically and numerically. The pulse propagation is described in the framework of the Lugiato-Lefever equation when a Hopf-bifurcation can result in the formation of oscillating dissipative solitons. The resonance condition for the radiation of the dissipative oscillating solitons is derived and it is demonstrated that the predicted resonances match the spectral lines observed in numerical simulations perfectly. The complex recoil of the radiation on the soliton dynamics is discussed. The reported effect can have importance for the generation of frequency combs in nonlinear microring resonators.", "which keywords ?", "Microring resonators", 710.0, 730.0], ["Lightweight, stretchable, and wearable strain sensors have recently been widely studied for the development of health monitoring systems, human-machine interfaces, and wearable devices. Herein, highly stretchable polymer elastomer-wrapped carbon nanocomposite piezoresistive core-sheath fibers are successfully prepared using a facile and scalable one-step coaxial wet-spinning assembly approach. The carbon nanotube-polymeric composite core of the stretchable fiber is surrounded by an insulating sheath, similar to conventional cables, and shows excellent electrical conductivity with a low percolation threshold (0.74 vol %). The core-sheath elastic fibers are used as wearable strain sensors, exhibiting ultra-high stretchability (above 300%), excellent stability (>10 000 cycles), fast response, low hysteresis, and good washability. Furthermore, the piezoresistive core-sheath fiber possesses bending-insensitiveness and negligible torsion-sensitive properties, and the strain sensing performance of piezoresistive fibers maintains a high degree of stability under harsh conditions. On the basis of this high level of performance, the fiber-shaped strain sensor can accurately detect both subtle and large-scale human movements by embedding it in gloves and garments or by directly attaching it to the skin. The current results indicate that the proposed stretchable strain sensor has many potential applications in health monitoring, human-machine interfaces, soft robotics, and wearable electronics.", "which keywords ?", "wet-spinning ", 365.0, 378.0], ["This paper presents a new RF MEMS tunable capacitor based on the zipper principle and with interdigitated RF and actuation electrodes. The electrode configuration prevents dielectric charging under high actuation voltages. 
It also increases the capacitance ratio and the tunable analog range. The effect of the residual stress on the capacitance tunability is also investigated. Two devices with different interdigital RF and actuation electrodes are fabricated on an alumina substrate and result in a capacitance ratio around 3.0 (Cmin = 70\u201390 fF, Cmax = 240\u2013270 fF) and with a Q > 100 at 3 GHz. This design can be used in wideband tunable filters and matching networks.", "which keywords ?", "RF MEMS", 26.0, 33.0], ["Abstract With the increased dependence on online learning platforms and educational resource repositories, a unified representation of digital learning resources becomes essential to support a dynamic and multi-source learning experience. We introduce the EduCOR ontology, an educational, career-oriented ontology that provides a foundation for representing online learning resources for personalised learning systems. The ontology is designed to enable learning material repositories to offer learning path recommendations, which correspond to the user\u2019s learning goals and preferences, academic and psychological parameters, and labour-market skills. We present the multiple patterns that compose the EduCOR ontology, highlighting its cross-domain applicability and integrability with other ontologies. A demonstration of the proposed ontology on the real-life learning platform eDoer is discussed as a use case. We evaluate the EduCOR ontology using both gold standard and task-based approaches. The comparison of EduCOR to three gold schemata, and its application in two use-cases, shows its coverage and adaptability to multiple OER repositories, which allows generating user-centric and labour-market oriented recommendations. Resource: https://tibonto.github.io/educor/.", "which keywords ?", "OER", 1134.0, 1137.0], ["One of the efficient techniques to enhance the sensitivity of optical fiber sensor is to utilize Vernier effect. However, the complex system structure, precisely controlled device fabrication, or expensive materials required for implementing the technique creates the difficulties for practical applications. Here, we propose a highly sensitive optical fiber strain sensor based on two cascaded Fabry\u2013Perot interferometers and Vernier effect. Of the two interferometers, one is for sensing and the other for referencing, and they are formed by two pairs of in-fiber reflection mirrors fabricated by femtosecond laser pulse illumination to induce refractive-index-modified area in the fiber core. A relatively large distance between the two Fabry\u2013Perot interferometers needs to be used to ensure the independent operation of the two interferometers. The fabrication of the device is simple, and the cavity's length can be precisely controlled by a computer-controlled three-dimensional micromachining platform. Moreover, as the device is based on the inner structure inside the optical fiber, good robustness of the device can be guaranteed. The experimental results obtained show that the strain sensitivity of the device is \u223c28.11 pm/\u03bc\u03f5, while the temperature sensitivity achieved is \u223c278.48 pm/\u00b0C.", "which keywords ?", "Strain", 359.0, 365.0], ["Highly sensitive, transparent, and durable pressure sensors are fabricated using sea-urchin-shaped metal nanoparticles and insulating polyurethane elastomer. 
The pressure sensors exhibit outstanding sensitivity (2.46 kPa-1 ), superior optical transmittance (84.8% at 550 nm), fast response/relaxation time (30 ms), and excellent operational durability. In addition, the pressure sensors successfully detect minute movements of human muscles.", "which keywords ?", "pressure sensors", 43.0, 59.0], ["Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of Hyperparameter Optimization (HPO) more important than ever. Here, we combine the advantages of the popular bandit-based HPO method Hyperband (HB) and the evolutionary search approach of Differential Evolution (DE) to yield a new HPO method which we call DEHB. Comprehensive results on a very broad range of HPO problems, as well as a wide range of tabular benchmarks from neural architecture search, demonstrate that DEHB achieves strong performance far more robustly than all previous HPO methods we are aware of, especially for high-dimensional problems with discrete input dimensions. For example, DEHB is up to 1000x faster than random search. It is also efficient in computational time, conceptually simple and easy to implement, positioning it well to become a new default HPO method.", "which keywords ?", "hyperparameter optimization", 131.0, 158.0], ["Poor delivery of insoluble anticancer drugs has so far precluded their clinical application. In this study, we developed a tumor-targeting delivery system for insoluble drug (paclitaxel, PTX) by PEGylated O-carboxymethyl-chitosan (CMC) nanoparticles grafted with cyclic Arg-Gly-Asp (RGD) peptide. To improve the loading efficiency (LE), we combined O/W/O double emulsion method with temperature-programmed solidification technique and controlled PTX within the matrix network as in situ nanocrystallite form. Furthermore, these CMC nanoparticles were PEGylated, which could reduce recognition by the reticuloendothelial system (RES) and prolong the circulation time in blood. In addition, further graft of cyclic RGD peptide at the terminal of PEG chain endowed these nanoparticles with higher affinity to in vitro Lewis lung carcinoma (LLC) cells and in vivo tumor tissue. These outstanding properties enabled as-designed nanodevice to exhibit a greater tumor growth inhibition effect and much lower side effects over the commercial formulation Taxol.", "which keywords ?", "Arg-Gly-Asp (RGD)", NaN, NaN], ["A high-sensitivity fiber-optic strain sensor, based on the Vernier effect and separated Fabry\u2013Perot interferometers (FPIs), is proposed and experimentally demonstrated. One air-cavity FPI is used as a sensing FPI (SFPI) and another is used as a matched FPI (MFPI) to generate the Vernier effect. The two FPIs are connected by a fiber link but separated by a long section of single-mode fiber (SMF). The SFPI is fabricated by splicing a section of microfiber between two SMFs with a large lateral offset, and the MFPI is formed by a section of hollow-core fiber sandwiched between two SMFs. By using the Vernier effect, the strain sensitivity of the proposed sensor reaches $\\text{1.15 nm/}\\mu \\varepsilon $, which is the highest strain sensitivity of an FPI-based sensor reported so far. Owing to the separated structure of the proposed sensor, the MFPI can be isolated from the SFPI and the detection environment. Therefore, the MFPI is not affected by external physical quantities (such as strain and temperature) and thus has a very low temperature cross-sensitivity. 
The experimental results show that a low-temperature cross-sensitivity of $\\text{0.056 } \\mu \\varepsilon /^ {\\circ }{\\text{C}}$ can be obtained with the proposed sensor. With its advantages of simple fabrication, high strain sensitivity, and low-temperature cross-sensitivity, the proposed sensor has great application prospects in several fields.", "which keywords ?", "interferometers", 100.0, 115.0], ["Abstract Objective: Paclitaxel (PTX)-loaded polymer (Poly(lactic-co-glycolic acid), PLGA)-based nanoformulation was developed with the objective of formulating cremophor EL-free nanoformulation intended for intravenous use. Significance: The polymeric PTX nanoparticles free from the cremophor EL will help in eliminating the shortcomings of the existing delivery system as cremophor EL causes serious allergic reactions to the subjects after intravenous use. Methods and results: Paclitaxel-loaded nanoparticles were formulated by nanoprecipitation method. The diminutive nanoparticles (143.2 nm) with uniform size throughout (polydispersity index, 0.115) and high entrapment efficiency (95.34%) were obtained by employing the Box\u2013Behnken design for the optimization of the formulation with the aid of desirability approach-based numerical optimization technique. Optimized levels for each factor viz. polymer concentration (X1), amount of organic solvent (X2), and surfactant concentration (X3) were 0.23%, 5 ml %, and 1.13%, respectively. The results of the hemocompatibility studies confirmed the safety of PLGA-based nanoparticles for intravenous administration. Pharmacokinetic evaluations confirmed the longer retention of PTX in systemic circulation. Conclusion: In a nutshell, the developed polymeric nanoparticle formulation of PTX precludes the inadequacy of existing PTX formulation and can be considered as superior alternative carrier system of the same.", "which keywords ?", "optimization", 755.0, 767.0], ["Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of Hyperparameter Optimization (HPO) more important than ever. Here, we combine the advantages of the popular bandit-based HPO method Hyperband (HB) and the evolutionary search approach of Differential Evolution (DE) to yield a new HPO method which we call DEHB. Comprehensive results on a very broad range of HPO problems, as well as a wide range of tabular benchmarks from neural architecture search, demonstrate that DEHB achieves strong performance far more robustly than all previous HPO methods we are aware of, especially for high-dimensional problems with discrete input dimensions. For example, DEHB is up to 1000x faster than random search. It is also efficient in computational time, conceptually simple and easy to implement, positioning it well to become a new default HPO method.", "which keywords ?", "Machine Learning", 7.0, 23.0], ["Bladder cancer (BC) is a very common cancer. Nonmuscle-invasive bladder cancer (NMIBC) is the most common type of bladder cancer. After postoperative tumor resection, chemotherapy intravesical instillation is recommended as a standard treatment to significantly reduce recurrences. Nanomedicine-mediated delivery of a chemotherapeutic agent targeting cancer could provide a solution to obtain longer residence time and high bioavailability of an anticancer drug. 
The approach described here provides a nanomedicine with sustained and prolonged delivery of paclitaxel and enhanced therapy of intravesical bladder cancer, which is paclitaxel/chitosan (PTX/CS) nanosuspensions (NSs). The positively charged PTX/CS NSs exhibited a rod-shaped morphology with a mean diameter of about 200 nm. They have good dispersivity in water without any protective agents, and the positively charged properties make them easy to be adsorbed on the inner mucosa of the bladder through electrostatic adsorption. PTX/CS NSs also had a high drug loading capacity and can maintain sustained release of paclitaxel which could be prolonged over 10 days. Cell experiments in vitro demonstrated that PTX/CS NSs had good biocompatibility and effective bladder cancer cell proliferation inhibition. The significant anticancer efficacy against intravesical bladder cancer was verified by an in situ bladder cancer model. The paclitaxel/chitosan nanosuspensions could provide sustained delivery of chemotherapeutic agents with significant anticancer efficacy against intravesical bladder cancer.", "which keywords ?", "Paclitaxel", 556.0, 566.0], ["This paper studies the effect of surface roughness on up-state and down-state capacitances of microelectromechanical systems (MEMS) capacitive switches. When the root-mean-square (RMS) roughness is 10 nm, the up-state capacitance is approximately 9% higher than the theoretical value. When the metal bridge is driven down, the normalized contact area between the metal bridge and the surface of the dielectric layer is less than 1% if the RMS roughness is larger than 2 nm. Therefore, the down-state capacitance is actually determined by the non-contact part of the metal bridge. The normalized isolation is only 62% for RMS roughness of 10 nm when the hold-down voltage is 30 V. The analysis also shows that the down-state capacitance and the isolation increase with the hold-down voltage. The normalized isolation increases from 58% to 65% when the hold-down voltage increases from 10 V to 60 V for RMS roughness of 10 nm.", "which keywords ?", "Surface roughness", 33.0, 50.0], ["This paper, the first of two parts, presents an electromagnetic model for membrane microelectromechanical systems (MEMS) shunt switches for microwave/millimeter-wave applications. The up-state capacitance can be accurately modeled using three-dimensional static solvers, and full-wave solvers are used to predict the current distribution and inductance of the switch. The loss in the up-state position is equivalent to the coplanar waveguide line loss and is 0.01-0.02 dB at 10-30 GHz for a 2-\u03bcm-thick Au MEMS shunt switch. It is seen that the capacitance, inductance, and series resistance can be accurately extracted from DC-40 GHz S-parameter measurements. It is also shown that a dramatic increase in the down-state isolation (20+ dB) can be achieved with the choice of the correct LC series resonant frequency of the switch. In part 2 of this paper, the equivalent capacitor-inductor-resistor model is used in the design of tuned high isolation switches at 10 and 30 GHz.", "which keywords ?", "Switches", 127.0, 135.0], ["A capacitance-voltage (C-V) model is developed for RF MEMS switches at up-state and down-state. The transient capacitance response of the RF MEMS switches at different switch states was measured for different humidity levels. 
By using the C-V model as well as the voltage shift dependence on trapped charges, the transient trapped charges at different switch states and humidity levels are obtained. Charging models at different switch states are explored in detail. It is shown that the injected charges increase linearly with humidity levels and the internal polarization increases with increasing humidity at downstate. The speed of charge injection at 80% relative humidity (RH) is about ten times faster than that at 20% RH. A measurement of pull-in voltage shifts by C-V sweep cycles at 20% and 80% RH gives reasonable evidence. The present model is useful to understand the pull-in voltage shift of the RF MEMS switch.", "which keywords ?", "Capacitance", 2.0, 13.0], ["One of the efficient techniques to enhance the sensitivity of optical fiber sensor is to utilize Vernier effect. However, the complex system structure, precisely controlled device fabrication, or expensive materials required for implementing the technique creates the difficulties for practical applications. Here, we propose a highly sensitive optical fiber strain sensor based on two cascaded Fabry\u2013Perot interferometers and Vernier effect. Of the two interferometers, one is for sensing and the other for referencing, and they are formed by two pairs of in-fiber reflection mirrors fabricated by femtosecond laser pulse illumination to induce refractive-index-modified area in the fiber core. A relatively large distance between the two Fabry\u2013Perot interferometers needs to be used to ensure the independent operation of the two interferometers. The fabrication of the device is simple, and the cavity's length can be precisely controlled by a computer-controlled three-dimensional micromachining platform. Moreover, as the device is based on the inner structure inside the optical fiber, good robustness of the device can be guaranteed. The experimental results obtained show that the strain sensitivity of the device is \u223c28.11 pm/\u03bc\u03f5, while the temperature sensitivity achieved is \u223c278.48 pm/\u00b0C.", "which keywords ?", "Sensitivity", 47.0, 58.0], ["Ni-MOF (metal-organic framework)/Ni/NiO/carbon frame nanocomposite was formed by combining Ni and NiO nanoparticles and a C frame with Ni-MOF using an efficient one-step calcination method. The morphology and structure of Ni-MOF/Ni/NiO/C nanocomposite were characterized by transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), X-ray diffraction (XRD), and energy dispersive spectroscopy (EDS) mapping. Ni-MOF/Ni/NiO/C nanocomposites were immobilized onto glassy carbon electrodes (GCEs) with Nafion film to construct high-performance nonenzymatic glucose and H2O2 electrochemical sensors. Cyclic voltammetric (CV) study showed Ni-MOF/Ni/NiO/C nanocomposite displayed better electrocatalytic activity toward glucose oxidation as compared to Ni-MOF. Amperometric study indicated the glucose sensor displayed high performance, offering a low detection limit (0.8 \u03bcM), a high sensitivity of 367.45 mA M-1 cm-2, and a wide linear range (from 4 to 5664 \u03bcM). Importantly, good reproducibility, long-time stability, and excellent selectivity were obtained within the as-fabricated glucose sensor. Furthermore, the constructed high-performance sensor was utilized to monitor the glucose levels in human serum, and satisfactory results were obtained. 
It demonstrated that the Ni-MOF/Ni/NiO/C nanocomposite can be used as a good electrochemical sensing material in practical biological applications.", "which keywords ?", "Metal-organic", 8.0, 21.0], ["An ultrasensitive refractive index (RI) sensor based on enhanced Vernier effect is proposed, which consists of two cascaded fiber core-offset pairs. One pair functions as a Mach-Zehnder interferometer (MZI), the other with larger core offset as a low-finesse Fabry-Perot interferometer (FPI). In traditional Vernier-effect based sensors, an interferometer insensitive to environment change is used as a sensing reference. Here in the proposed sensor, interference fringes of the MZI and the FPI shift to opposite directions as ambient RI varies, and to the same direction as surrounding temperature changes. Thus, the envelope of superimposed fringe manifests enhanced Vernier effect for RI sensing while reduced Vernier effect for temperature change. As a result, an ultra-high RI sensitivity of -87261.06 nm/RIU is obtained near the RI of 1.33 with good linearity, while the temperature sensitivity is as low as 204.7 pm/\u00b0C. The proposed structure is robust and of low cost. Furthermore, the proposed scheme of enhanced Vernier effect provides a new perspective and idea in other sensing fields. \u00a9 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement", "which keywords ?", "refractive index", 18.0, 34.0], ["Herein, we incorporated dual biotemplates, i.e., cellulose nanocrystals (CNC) and apoferritin, into electrospinning solution to achieve three distinct benefits, i.e., (i) facile synthesis of a WO3 nanotube by utilizing the self-agglomerating nature of CNC in the core of as-spun nanofibers, (ii) effective sensitization by partial phase transition from WO3 to Na2W4O13 induced by interaction between sodium-doped CNC and WO3 during calcination, and (iii) uniform functionalization with monodispersive apoferritin-derived Pt catalytic nanoparticles (2.22 \u00b1 0.42 nm). Interestingly, the sensitization effect of Na2W4O13 on WO3 resulted in highly selective H2S sensing characteristics against seven different interfering molecules. Furthermore, synergistic effects with a bioinspired Pt catalyst induced a remarkably enhanced H2S response (Rair/Rgas = 203.5), unparalleled selectivity (Rair/Rgas < 1.3 for the interfering molecules), and rapid response (<10 s)/recovery (<30 s) time at 1 ppm of H2S under 95% relative humidity level. This work paves the way for a new class of cosensitization routes to overcome critical shortcomings of SMO-based chemical sensors, thus providing a potential platform for diagnosis of halitosis.", "which keywords ?", " cellulose nanocrystals", NaN, NaN], ["The low power conversion efficiency (PCE) of tin\u2010based hybrid perovskite solar cells (HPSCs) is mainly attributed to the high background carrier density due to a high density of intrinsic defects such as Sn vacancies and oxidized species (Sn4+) that characterize Sn\u2010based HPSCs. Herein, this study reports on the successful reduction of the background carrier density by more than one order of magnitude by depositing near\u2010single\u2010crystalline formamidinium tin iodide (FASnI3) films with the orthorhombic a\u2010axis in the out\u2010of\u2010plane direction. 
Using these highly crystalline films, obtained by mixing a very small amount (0.08 m) of layered (2D) Sn perovskite with 0.92 m (3D) FASnI3, for the first time a PCE as high as 9.0% in a planar p\u2013i\u2013n device structure is achieved. These devices display negligible hysteresis and light soaking, as they benefit from very low trap\u2010assisted recombination, low shunt losses, and more efficient charge collection. This represents a 50% improvement in PCE compared to the best reference cell based on a pure FASnI3 film using SnF2 as a reducing agent. Moreover, the 2D/3D\u2010based HPSCs show considerably improved stability due to the enhanced robustness of the perovskite film compared to the reference cell.", "which keywords ?", "Solar Cells", 73.0, 84.0], ["We designed a cylinder-type fiber-optic Vernier probe based on cascaded Fabry-Perot interferometers (FPIs) in this paper. It is fabricated by inserting a short single-mode fiber (SMF) column into a large-aperture hollow-core fiber (LA-HCF) with an internal diameter of 150 \u00b5m, which structures a length adjusted air microcavity with the lead-in SMF inserted into the LA-HCF from the other end. The length of the SMF column is 537.9 \u00b5m. By adjusting the distance between the SMF column and the lead-in SMF, the spectral change is displayed intuitively, and the Vernier spectra are recorded and analyzed. In sensitivity analysis, the probe is encapsulated in the medical needle by ultraviolet glue as a small body thermometer when the length of the air microcavity is 715.5 \u00b5m. The experiment shows that the sensitivity of the Vernier envelope is 12.55 times higher than that of the high-frequency comb. This design can effectively reduce the preparation difficulty of the optical fiber Vernier sensor based on cascaded FPIs, and can expand the applied fields by using different fibers and materials.", "which keywords ?", "Fabry-Perot interferometer", NaN, NaN], ["This paper presents the effect of thermal annealing on \u03b2\u2010Ga2O3 thin film solar\u2010blind (SB) photodetector (PD) synthesized on c\u2010plane sapphire substrates by low pressure chemical vapor deposition (LPCVD). The thin films were synthesized using high purity gallium (Ga) and oxygen (O2) as source precursors. The annealing was performed ex situ under the oxygen atmosphere, which helped to reduce oxygen or oxygen\u2010related vacancies in the thin film. Metal/semiconductor/metal (MSM) type photodetectors were fabricated using both the as\u2010grown and annealed films. The PDs fabricated on the annealed films had lower dark current, higher photoresponse and improved rejection ratio (R250/R370 and R250/R405) compared to the ones fabricated on the as\u2010grown films. These improved PD performances are due to the significant reduction of the photo\u2010generated carriers trapped by oxygen or oxygen\u2010related vacancies.", "which keywords ?", "photodetector", 90.0, 103.0], ["Lead free perovskite solar cells based on a CsSnI3 light absorber with a spectral response from 950 nm are demonstrated. The high photocurrents noted in the system are a consequence of SnF2 addition which reduces defect concentrations and hence the background charge carrier density.", "which keywords ?", "Solar Cells", 21.0, 32.0], ["The aim of this work is to develop a smart flexible sensor adapted to textile structures, able to measure their strain deformations. 
The sensors are \u201csmart\u201d because of their capacity to adapt to the specific mechanical properties of textile structures that are lightweight, highly flexible, stretchable, elastic, etc. Because of these properties, textile structures are continuously in movement and easily deformed, even under very low stresses. It is therefore important that the integration of a sensor does not modify their general behavior. The material used for the sensor is based on a thermoplastic elastomer (Evoprene)/carbon black nanoparticle composite, and presents general mechanical properties strongly compatible with the textile substrate. Two preparation techniques are investigated: the conventional melt-mixing process, and the solvent process which is found to be more adapted for this particular application. The preparation procedure is fully described, namely the optimization of the process in terms of filler concentration in which the percolation theory aspects have to be considered. The sensor is then integrated on a thin, lightweight Nylon fabric, and the electromechanical characterization is performed to demonstrate the adaptability and the correct functioning of the sensor as a strain gauge on the fabric. A normalized relative resistance is defined in order to characterize the electrical response of the sensor. Finally, the influence of environmental factors, such as temperature and atmospheric humidity, on the sensor performance is investigated. The results show that the sensor's electrical resistance is particularly affected by humidity. This behavior is discussed in terms of the sensitivity of the carbon black filler particles to the presence of water.", "which keywords ?", "flexible sensor", 43.0, 58.0], ["Abstract With the increased dependence on online learning platforms and educational resource repositories, a unified representation of digital learning resources becomes essential to support a dynamic and multi-source learning experience. We introduce the EduCOR ontology, an educational, career-oriented ontology that provides a foundation for representing online learning resources for personalised learning systems. The ontology is designed to enable learning material repositories to offer learning path recommendations, which correspond to the user\u2019s learning goals and preferences, academic and psychological parameters, and labour-market skills. We present the multiple patterns that compose the EduCOR ontology, highlighting its cross-domain applicability and integrability with other ontologies. A demonstration of the proposed ontology on the real-life learning platform eDoer is discussed as a use case. We evaluate the EduCOR ontology using both gold standard and task-based approaches. The comparison of EduCOR to three gold schemata, and its application in two use-cases, shows its coverage and adaptability to multiple OER repositories, which allows generating user-centric and labour-market oriented recommendations. Resource : https://tibonto.github.io/educor/.", "which keywords ?", "ontology", 263.0, 271.0], ["This paper presents a new RF MEMS tunable capacitor based on the zipper principle and with interdigitated RF and actuation electrodes. The electrode configuration prevents dielectric charging under high actuation voltages. It also increases the capacitance ratio and the tunable analog range. The effect of the residual stress on the capacitance tunability is also investigated. 
Two devices with different interdigital RF and actuation electrodes are fabricated on an alumina substrate and result in a capacitance ratio around 3.0 (Cmin = 70\u201390 fF, Cmax = 240\u2013270 fF) and with a Q > 100 at 3 GHz. This design can be used in wideband tunable filters and matching networks.", "which keywords ?", "Electrodes", 123.0, 133.0], ["The development of p-type metal-oxide semiconductors (MOSs) is of increasing interest for applications in next-generation optoelectronic devices, display backplane, and low-power-consumption complementary MOS circuits. Here, we report the high performance of solution-processed, p-channel copper-tin-sulfide-gallium oxide (CTSGO) thin-film transistors (TFTs) using UV/O3 exposure. Hall effect measurement confirmed the p-type conduction of CTSGO with Hall mobility of 6.02 \u00b1 0.50 cm2 V-1 s-1. The p-channel CTSGO TFT using UV/O3 treatment exhibited the field-effect mobility (\u03bcFE) of 1.75 \u00b1 0.15 cm2 V-1 s-1 and an on/off current ratio (ION/IOFF) of \u223c104 at a low operating voltage of -5 V. The significant enhancement in the device performance is due to the good p-type CTSGO material, smooth surface morphology, and fewer interfacial traps between the semiconductor and the Al2O3 gate insulator. Therefore, the p-channel CTSGO TFT can be applied for CMOS TFT circuits for next-generation displays.", "which keywords ?", "Semiconductors", 38.0, 52.0], ["In recent times, polymer-based flexible pressure sensors have been attracting a lot of attention because of their various applications. A highly sensitive and flexible sensor is suggested, capable of being attached to the human body, based on a three-dimensional dielectric elastomeric structure of polydimethylsiloxane (PDMS) and microsphere composite. This sensor has maximal porosity due to macropores created by sacrificial layer grains and micropores generated by microspheres pre-mixed with PDMS, allowing it to operate at a wider pressure range (~150 kPa) while maintaining a sensitivity (of 0.124 kPa\u22121 in a range of 0~15 kPa) better than in previous studies. The maximized pores can cause deformation in the structure, allowing for the detection of small changes in pressure. In addition to exhibiting a fast rise time (~167 ms) and fall time (~117 ms), as well as excellent reproducibility, the fabricated pressure sensor exhibits reliability in its response to repeated mechanical stimuli (2.5 kPa, 1000 cycles). As an application, we develop a wearable device for monitoring repeated tiny motions, such as the pulse on the human neck and swallowing at the Adam\u2019s apple. This sensory device is also used to detect movements in the index finger and to monitor an insole system in real-time.", "which keywords ?", "Microspheres", 477.0, 489.0], ["We study localized dissipative structures in a generalized Lugiato-Lefever equation, exhibiting normal group-velocity dispersion and anomalous quartic group-velocity dispersion. In the conservative system, this parameter-regime has proven to enable generalized dispersion Kerr solitons. Here, we demonstrate via numerical simulations that our dissipative system also exhibits equivalent localized states, including special molecule-like two-color bound states recently reported. 
We investigate their generation, characterize the observed steady-state solution, and analyze their propagation dynamics under perturbations.", "which keywords ?", "Localized dissipative structures", 9.0, 41.0], ["The application of thinner cadmium sulfide (CdS) window layer is a feasible approach to improve the performance of cadmium telluride (CdTe) thin film solar cells. However, the reduction of compactness and continuity of thinner CdS always deteriorates the device performance. In this work, transparent Al2O3 films with different thicknesses, deposited by using atomic layer deposition (ALD), were utilized as buffer layers between the front electrode transparent conductive oxide (TCO) and CdS layers to solve this problem, and then, thin-film solar cells with a structure of TCO/Al2O3/CdS/CdTe/BC/Ni were fabricated. The characteristics of the ALD-Al2O3 films were studied by UV\u2013visible transmittance spectrum, Raman spectroscopy, and atomic force microscopy (AFM). The light and dark J\u2013V performances of solar cells were also measured by specific instrumentations. The transmittance measurement conducted on the TCO/Al2O3 films verified that the transmittance of TCO/Al2O3 were comparable to that of single TCO layer, meaning that no extra absorption loss occurred when Al2O3 buffer layers were introduced into cells. Furthermore, due to the advantages of the ALD method, the ALD-Al2O3 buffer layers formed an extremely continuous and uniform coverage on the substrates to effectively fill and block the tiny leakage channels in CdS/CdTe polycrystalline films and improve the characteristics of the interface between TCO and CdS. However, as the thickness of alumina increased, the negative effects of cells were gradually exposed, especially the increase of the series resistance (Rs) and the more serious \u201croll-over\u201d phenomenon. Finally, the cell conversion efficiency (\u03b7) of more than 13.0% accompanied by optimized uniformity performances was successfully achieved corresponding to the 10 nm thick ALD-Al2O3 thin film.", "which keywords ?", "atomic layer deposition", 368.0, 391.0], ["High-performance p-type oxide thin film transistors (TFTs) have great potential for many semiconductor applications. However, these devices typically suffer from low hole mobility and high off-state currents. We fabricated p-type TFTs with a phase-pure polycrystalline Cu2O semiconductor channel grown by atomic layer deposition (ALD). The TFT switching characteristics were improved by applying a thin ALD Al2O3 passivation layer on the Cu2O channel, followed by vacuum annealing at 300 \u00b0C. Detailed characterization by transmission electron microscopy-energy dispersive X-ray analysis and X-ray photoelectron spectroscopy shows that the surface of Cu2O is reduced following Al2O3 deposition and indicates the formation of a 1-2 nm thick CuAlO2 interfacial layer. This, together with field-effect passivation caused by the high negative fixed charge of the ALD Al2O3, leads to an improvement in the TFT performance by reducing the density of deep trap states as well as by reducing the accumulation of electrons in the semiconducting layer in the device off-state.", "which keywords ?", "Transistors", 40.0, 51.0], ["A highly sensitive fiber temperature sensor based on in-line Mach-Zehnder interferometers (MZIs) and Vernier effect was proposed and experimentally demonstrated. The MZI was fabricated by splicing a section of hollow core fiber between two pieces of multimode fiber. 
The temperature sensitivity obtained by extracting envelope dip shift of the superimposed spectrum reaches 528.5 pm/\u00b0C in the range of 0 \u00b0C\u2013100 \u00b0C, which is 17.5 times as high as that without enhancement by the Vernier effect. The experimental sensitivity amplification factor is close to the theoretical prediction (18.3 times). The proposed sensitivity enhancement system employs parallel connection to implement the Vernier effect, which possesses the advantages of easy fabrication and high flexibility.", "which keywords ?", "Vernier effect", 101.0, 115.0], ["Purpose: To develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel. Methods: Chitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231). Results: These studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX. Conclusion: These advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.", "which keywords ?", "Paclitaxel", 174.0, 184.0], ["Highly robust poly\u2010Si thin\u2010film transistor (TFT) on polyimide (PI) substrate using blue laser annealing (BLA) of amorphous silicon (a\u2010Si) for lateral crystallization is demonstrated. Its foldability is compared with the conventional excimer laser annealing (ELA) poly\u2010Si TFT on PI used for foldable displays exhibiting field\u2010effect mobility of 85 cm2 (V s)\u22121. The BLA poly\u2010Si TFT on PI exhibits the field\u2010effect mobility, threshold voltage (VTH), and subthreshold swing of 153 cm2 (V s)\u22121, \u22122.7 V, and 0.2 V dec\u22121, respectively. The most important finding is the excellent foldability of BLA TFT compared with the ELA poly\u2010Si TFTs on PI substrates. The VTH shift of BLA poly\u2010Si TFT is \u22480.1 V, which is much smaller than that (\u22482 V) of ELA TFT on PI upon 30 000 cycle folding. The defects are generated at the grain boundary region of ELA poly\u2010Si during folding. However, BLA poly\u2010Si has no protrusion in the poly\u2010Si channel and thus no defect generation during folding. This leads to excellent foldability of BLA poly\u2010Si on PI substrate.", "which keywords ?", "Amorphous Si", NaN, NaN], ["This paper, the first of two parts, presents an electromagnetic model for membrane microelectromechanical systems (MEMS) shunt switches for microwave/millimeter-wave applications. 
The up-state capacitance can be accurately modeled using three-dimensional static solvers, and full-wave solvers are used to predict the current distribution and inductance of the switch. The loss in the up-state position is equivalent to the coplanar waveguide line loss and is 0.01-0.02 dB at 10-30 GHz for a 2-\u03bcm-thick Au MEMS shunt switch. It is seen that the capacitance, inductance, and series resistance can be accurately extracted from DC-40 GHz S-parameter measurements. It is also shown that a dramatic increase in the down-state isolation (20+ dB) can be achieved with the choice of the correct LC series resonant frequency of the switch. In part 2 of this paper, the equivalent capacitor-inductor-resistor model is used in the design of tuned high isolation switches at 10 and 30 GHz.", "which keywords ?", "Microelectromechanical systems", 83.0, 113.0], ["Radioresistant hypoxic cells may contribute to the failure of radiation therapy in controlling certain tumors. Some studies have suggested the radiosensitizing effect of paclitaxel. The poly(D,L-lactide-co-glycolide)(PLGA) nanoparticles containing paclitaxel were prepared by o/w emulsification-solvent evaporation method. The physicochemical characteristics of the nanoparticles (i.e. encapsulation efficiency, particle size distribution, morphology, in vitro release) were studied. The morphology of the two human tumor cell lines: a carcinoma cervicis (HeLa) and a hepatoma (HepG2), treated with paclitaxel-loaded nanoparticles was photomicrographed. Flow cytometry was used to quantify the number of the tumor cells held in the G2/M phase of the cell cycle. The cellular uptake of nanoparticles was evaluated by transmission electronic microscopy. Cell viability was determined by the ability of single cell to form colonies in vitro. The prepared nanoparticles were spherical in shape with size between 200nm and 800nm. The encapsulation efficiency was 85.5\uff05. The release behaviour of paclitaxel from the nanoparticles exhibited a biphasic pattern characterised by a fast initial release during the first 24 h, followed by a slower and continuous release. Co-culture of the two tumor cell lines with paclitaxel-loaded nanoparticles demonstrated that the cell morphology was changed and the released paclitaxel retained its bioactivity to block cells in the G2/M phase. The cellular uptake of nanoparticles was observed. The free paclitaxel and paclitaxel-loaded nanoparticles effectively sensitized hypoxic HeLa and HepG2 cells to radiation. Under this experimental condition, the radiosensitization of paclitaxel-loaded nanoparticles was more significant than that of free paclitaxel. Keywords: Paclitaxel\uff1bDrug delivery\uff1bNanoparticle\uff1bRadiotherapy\uff1bHypoxia\uff1bHuman tumor cells\uff1bcellular uptake", "which keywords ?", "Radiotherapy", 1838.0, 1850.0], ["Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of Hyperparameter Optimization (HPO) more important than ever. Here, we combine the advantages of the popular bandit-based HPO method Hyperband (HB) and the evolutionary search approach of Differential Evolution (DE) to yield a new HPO method which we call DEHB. 
Comprehensive results on a very broad range of HPO problems, as well as a wide range of tabular benchmarks from neural architecture search, demonstrate that DEHB achieves strong performance far more robustly than all previous HPO methods we are aware of, especially for high-dimensional problems with discrete input dimensions. For example, DEHB is up to 1000x faster than random search. It is also efficient in computational time, conceptually simple and easy to implement, positioning it well to become a new default HPO method.", "which keywords ?", "Hyperband", 262.0, 271.0], ["Inverted perovskite solar cells (PSCs) have been becoming more and more attractive, owing to their easy-fabrication and suppressed hysteresis, while the ion diffusion between metallic electrode and perovskite layer limits the long-term stability of devices. In this work, we employed a novel polyethylenimine (PEI) modified cross-stacked superaligned carbon nanotube (CSCNT) film in the inverted planar PSCs configured as FTO/NiOx/methylammonium lead tri-iodide (MAPbI3)/6,6-phenyl C61-butyric acid methyl ester (PCBM)/CSCNT:PEI. By modifying CSCNT with a certain concentration of PEI (0.5 wt %), suitable energy level alignment and promoted interfacial charge transfer have been achieved, leading to a significant enhancement in the photovoltaic performance. As a result, a champion power conversion efficiency (PCE) of \u223c11% was obtained with a Voc of 0.95 V, a Jsc of 18.7 mA cm-2, a FF of 0.61 as well as negligible hysteresis. Moreover, CSCNT:PEI based inverted PSCs show superior durability in comparison to the standard silver based devices, remaining over 85% of the initial PCE after 500 h aging under various conditions, including long-term air exposure, thermal, and humid treatment. This work opens up a new avenue of facile modified carbon electrodes for highly stable and hysteresis suppressed PSCs.", "which keywords ?", "Electrodes", 1252.0, 1262.0], ["This paper presents the effect of thermal annealing on \u03b2\u2010Ga2O3 thin film solar\u2010blind (SB) photodetector (PD) synthesized on c\u2010plane sapphire substrates by low pressure chemical vapor deposition (LPCVD). The thin films were synthesized using high purity gallium (Ga) and oxygen (O2) as source precursors. The annealing was performed ex situ under the oxygen atmosphere, which helped to reduce oxygen or oxygen\u2010related vacancies in the thin film. Metal/semiconductor/metal (MSM) type photodetectors were fabricated using both the as\u2010grown and annealed films. The PDs fabricated on the annealed films had lower dark current, higher photoresponse and improved rejection ratio (R250/R370 and R250/R405) compared to the ones fabricated on the as\u2010grown films. These improved PD performances are due to the significant reduction of the photo\u2010generated carriers trapped by oxygen or oxygen\u2010related vacancies.", "which keywords ?", "thin film", 63.0, 72.0], ["Query optimization in RDF Stores is a challenging problem as SPARQL queries typically contain many more joins than equivalent relational plans, and hence lead to a large join order search space. In such cases, cost-based query optimization often is not possible. One practical reason for this is that statistics typically are missing in web-scale settings such as the Linked Open Datasets (LOD). 
The more profound reason is that due to the absence of schematic structure in RDF, join-hit ratio estimation requires complicated forms of correlated join statistics; and currently there are no methods to identify the relevant correlations beforehand. For this reason, the use of good heuristics is essential in SPARQL query optimization, even in the case that they are partially used with cost-based statistics (i.e., hybrid query optimization). In this paper we describe a set of useful heuristics for SPARQL query optimizers. We present these in the context of a new Heuristic SPARQL Planner (HSP) that is capable of exploiting the syntactic and the structural variations of the triple patterns in a SPARQL query in order to choose an execution plan without the need of any cost model. For this, we define the variable graph and we show a reduction of the SPARQL query optimization problem to the maximum weight independent set problem. We implemented our planner on top of the MonetDB open source column-store and evaluated its effectiveness against the state-of-the-art RDF-3X engine as well as comparing the plan quality with a relational (SQL) equivalent of the benchmarks.", "which Algorithm ?", "HSP", 986.0, 989.0], ["MOTIVATION Array Comparative Genomic Hybridization (CGH) can reveal chromosomal aberrations in the genomic DNA. These amplifications and deletions at the DNA level are important in the pathogenesis of cancer and other diseases. While a large number of approaches have been proposed for analyzing the large array CGH datasets, the relative merits of these methods in practice are not clear. RESULTS We compare 11 different algorithms for analyzing array CGH data. These include both segment detection methods and smoothing methods, based on diverse techniques such as mixture models, Hidden Markov Models, maximum likelihood, regression, wavelets and genetic algorithms. We compute the Receiver Operating Characteristic (ROC) curves using simulated data to quantify sensitivity and specificity for various levels of signal-to-noise ratio and different sizes of abnormalities. We also characterize their performance on chromosomal regions of interest in a real dataset obtained from patients with Glioblastoma Multiforme. While comparisons of this type are difficult due to possibly sub-optimal choice of parameters in the methods, they nevertheless reveal general characteristics that are helpful to the biological investigator.", "which Algorithm ?", "Wavelet", NaN, NaN], ["Determination of copy number variants (CNVs) inferred in genome wide single nucleotide polymorphism arrays has shown increasing utility in genetic variant disease associations. Several CNV detection methods are available, but differences in CNV call thresholds and characteristics exist. We evaluated the relative performance of seven methods: circular binary segmentation, CNVFinder, cnvPartition, gain and loss of DNA, Nexus algorithms, PennCNV and QuantiSNP. Tested data included real and simulated Illumina HumHap 550 data from the Singapore cohort study of the risk factors for Myopia (SCORM) and simulated data from Affymetrix 6.0 and platform-independent distributions. The normalized singleton ratio (NSR) is proposed as a metric for parameter optimization before enacting full analysis. We used 10 SCORM samples for optimizing parameter settings for each method and then evaluated method performance at optimal parameters using 100 SCORM samples. 
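The HSP entry above reduces SPARQL plan selection to the maximum weight independent set problem on a variable graph. A minimal greedy heuristic for that problem, with an invented example graph (not HSP's actual cost-free ranking rules):

```python
# Greedy heuristic for maximum weight independent set (MWIS), the
# problem HSP reduces SPARQL query optimization to. Example only.
def greedy_mwis(weights, edges):
    neighbors = {v: set() for v in weights}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    chosen, banned = set(), set()
    # Repeatedly take the best weight/(degree + 1) vertex still allowed.
    for v in sorted(weights, key=lambda v: -weights[v] / (len(neighbors[v]) + 1)):
        if v not in banned:
            chosen.add(v)
            banned |= neighbors[v] | {v}
    return chosen

weights = {"?x": 3.0, "?y": 2.0, "?z": 2.5}   # hypothetical query variables
edges = [("?x", "?y"), ("?y", "?z")]          # shared-pattern conflicts
print(greedy_mwis(weights, edges))            # -> {'?x', '?z'}
```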
The statistical power, false positive rates, and receiver operating characteristic (ROC) curve residuals were evaluated by simulation studies. Optimal parameters, as determined by NSR and ROC curve residuals, were consistent across datasets. QuantiSNP outperformed other methods based on ROC curve residuals over most datasets. Nexus Rank and SNPRank have low specificity and high power. Nexus Rank calls oversized CNVs. PennCNV detects one of the fewest numbers of CNVs.", "which Algorithm ?", "Nexus", 421.0, 426.0], ["Background The detection of copy number variants (CNVs) and the results of CNV-disease association studies rely on how CNVs are defined, and because array-based technologies can only infer CNVs, CNV-calling algorithms can produce vastly different findings. Several authors have noted the large-scale variability between CNV-detection methods, as well as the substantial false positive and false negative rates associated with those methods. In this study, we use variations of four common algorithms for CNV detection (PennCNV, QuantiSNP, HMMSeg, and cnvPartition) and two definitions of overlap (any overlap and an overlap of at least 40% of the smaller CNV) to illustrate the effects of varying algorithms and definitions of overlap on CNV discovery. Methodology and Principal Findings We used a 56 K Illumina genotyping array enriched for CNV regions to generate hybridization intensities and allele frequencies for 48 Caucasian schizophrenia cases and 48 age-, ethnicity-, and gender-matched control subjects. No algorithm found a difference in CNV burden between the two groups. However, the total number of CNVs called ranged from 102 to 3,765 across algorithms. The mean CNV size ranged from 46 kb to 787 kb, and the average number of CNVs per subject ranged from 1 to 39. The number of novel CNVs not previously reported in normal subjects ranged from 0 to 212. Conclusions and Significance Motivated by the availability of multiple publicly available genome-wide SNP arrays, investigators are conducting numerous analyses to identify putative additional CNVs in complex genetic disorders. However, the number of CNVs identified in array-based studies, and whether these CNVs are novel or valid, will depend on the algorithm(s) used. Thus, given the variety of methods used, there will be many false positives and false negatives. Both guidelines for the identification of CNVs inferred from high-density arrays and the establishment of a gold standard for validation of CNVs are needed.", "which Algorithm ?", "PennCNV", 519.0, 526.0], ["A variety of platforms, such as micro-unmanned vehicles, are limited in the amount of computational hardware they can support due to weight and power constraints. An efficient stereo vision algorithm implemented on an FPGA would be able to minimize payload and power consumption in microunmanned vehicles, while providing 3D information and still leaving computational resources available for other processing tasks. This work presents a hardware design of the efficient profile shape matching stereo vision algorithm. Hardware resource usage is presented for the targeted micro-UV platform, Helio-copter, which uses the Xilinx Virtex 4 FX60 FPGA. Less than a fifth of the resources on this FPGA were used to produce dense disparity maps for image sizes up to 450 \u00d7 375, with the ability to scale up easily by increasing BRAM usage. A comparison is given of accuracy, speed performance, and resource usage of a census transform-based stereo vision FPGA implementation by Jin et al. 
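For reference, the census transform named in the comparison above (Jin et al.) reduces stereo matching to Hamming distances between local bit strings. A small NumPy sketch under an assumed 3x3 window and toy images:

```python
# Census transform and Hamming matching cost (sketch only; real
# implementations handle borders instead of wrapping with np.roll).
import numpy as np

def census(img, r=1):
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            # One bit per neighbor: is the neighbor darker than the center?
            out = (out << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return out

def hamming_cost(c1, c2):
    # Popcount of XOR = number of differing census bits.
    x = c1 ^ c2
    return np.array([bin(v).count("1") for v in x.ravel()]).reshape(x.shape)

left = np.random.default_rng(1).integers(0, 255, (8, 8))
right = np.roll(left, 2, axis=1)                      # toy 2-pixel disparity
cost = hamming_cost(census(left), np.roll(census(right), -2, axis=1))
```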
Results show that the profile shape matching algorithm is an efficient real-time stereo vision algorithm for hardware implementation on resource-limited systems such as microunmanned vehicles.", "which Algorithm ?", "Profile shape matching", 479.0, 501.0], ["High\u2010throughput single nucleotide polymorphism (SNP)\u2010array technologies allow to investigate copy number variants (CNVs) in genome\u2010wide scans and specific calling algorithms have been developed to determine CNV location and copy number. We report the results of a reliability analysis comparing data from 96 pairs of samples processed with CNVpartition, PennCNV, and QuantiSNP for Infinium Illumina Human 1Million probe chip data. We also performed a validity assessment with multiplex ligation\u2010dependent probe amplification (MLPA) as a reference standard. The number of CNVs per individual varied according to the calling algorithm. Higher numbers of CNVs were detected in saliva than in blood DNA samples regardless of the algorithm used. All algorithms presented low agreement with mean Kappa Index (KI) <66. PennCNV was the most reliable algorithm (KIw=98.96) when assessing the number of copies. The agreement observed in detecting CNV was higher in blood than in saliva samples. When comparing to MLPA, all algorithms identified poorly known copy aberrations (sensitivity = 0.19\u20130.28). In contrast, specificity was very high (0.97\u20130.99). Once a CNV was detected, the number of copies was truly assessed (sensitivity >0.62). Our results indicate that the current calling algorithms should be improved for high performance CNV analysis in genome\u2010wide scans. Further refinement is required to assess CNVs as risk factors in complex diseases. Hum Mutat 32:1\u201310, 2011. \u00a9 2011 Wiley\u2010Liss, Inc.", "which Algorithm ?", "PennCNV", 354.0, 361.0], ["Determination of copy number variants (CNVs) inferred in genome wide single nucleotide polymorphism arrays has shown increasing utility in genetic variant disease associations. Several CNV detection methods are available, but differences in CNV call thresholds and characteristics exist. We evaluated the relative performance of seven methods: circular binary segmentation, CNVFinder, cnvPartition, gain and loss of DNA, Nexus algorithms, PennCNV and QuantiSNP. Tested data included real and simulated Illumina HumHap 550 data from the Singapore cohort study of the risk factors for Myopia (SCORM) and simulated data from Affymetrix 6.0 and platform-independent distributions. The normalized singleton ratio (NSR) is proposed as a metric for parameter optimization before enacting full analysis. We used 10 SCORM samples for optimizing parameter settings for each method and then evaluated method performance at optimal parameters using 100 SCORM samples. The statistical power, false positive rates, and receiver operating characteristic (ROC) curve residuals were evaluated by simulation studies. Optimal parameters, as determined by NSR and ROC curve residuals, were consistent across datasets. QuantiSNP outperformed other methods based on ROC curve residuals over most datasets. Nexus Rank and SNPRank have low specificity and high power. Nexus Rank calls oversized CNVs. PennCNV detects one of the fewest numbers of CNVs.", "which Algorithm ?", "cnvPartition", 385.0, 397.0], ["SymStereo is a new algorithm used for stereo estimation. Instead of measuring photo-similarity, it proposes novel cost functions that measure symmetry for evaluating the likelihood of two pixels being a match. 
In this work we propose a parallel approach of the LogN matching cost variant of SymStereo capable of processing pairs of images in real-time for depth estimation. The power of the graphics processing units utilized allows exploring more efficiently the bank of log-Gabor wavelets developed to analyze symmetry, in the spectral domain. We analyze tradeoffs and propose different parameterizations of the signal processing algorithm to accommodate image size, dimension of the filter bank, number of wavelets and also the number of disparities that controls the space density of the estimation, and still process up to 53 frames per second (fps) for images with size 288 \u00d7 384 and up to 3 fps for 768 \u00d7 1024 images.", "which Algorithm ?", "SymStereo", 0.0, 9.0], ["High\u2010throughput single nucleotide polymorphism (SNP)\u2010array technologies allow to investigate copy number variants (CNVs) in genome\u2010wide scans and specific calling algorithms have been developed to determine CNV location and copy number. We report the results of a reliability analysis comparing data from 96 pairs of samples processed with CNVpartition, PennCNV, and QuantiSNP for Infinium Illumina Human 1Million probe chip data. We also performed a validity assessment with multiplex ligation\u2010dependent probe amplification (MLPA) as a reference standard. The number of CNVs per individual varied according to the calling algorithm. Higher numbers of CNVs were detected in saliva than in blood DNA samples regardless of the algorithm used. All algorithms presented low agreement with mean Kappa Index (KI) <66. PennCNV was the most reliable algorithm (KIw=98.96) when assessing the number of copies. The agreement observed in detecting CNV was higher in blood than in saliva samples. When comparing to MLPA, all algorithms identified poorly known copy aberrations (sensitivity = 0.19\u20130.28). In contrast, specificity was very high (0.97\u20130.99). Once a CNV was detected, the number of copies was truly assessed (sensitivity >0.62). Our results indicate that the current calling algorithms should be improved for high performance CNV analysis in genome\u2010wide scans. Further refinement is required to assess CNVs as risk factors in complex diseases. Hum Mutat 32:1\u201310, 2011. \u00a9 2011 Wiley\u2010Liss, Inc.", "which Algorithm ?", "and QuantiSNP", 363.0, 376.0], ["Background The detection of copy number variants (CNVs) and the results of CNV-disease association studies rely on how CNVs are defined, and because array-based technologies can only infer CNVs, CNV-calling algorithms can produce vastly different findings. Several authors have noted the large-scale variability between CNV-detection methods, as well as the substantial false positive and false negative rates associated with those methods. In this study, we use variations of four common algorithms for CNV detection (PennCNV, QuantiSNP, HMMSeg, and cnvPartition) and two definitions of overlap (any overlap and an overlap of at least 40% of the smaller CNV) to illustrate the effects of varying algorithms and definitions of overlap on CNV discovery. Methodology and Principal Findings We used a 56 K Illumina genotyping array enriched for CNV regions to generate hybridization intensities and allele frequencies for 48 Caucasian schizophrenia cases and 48 age-, ethnicity-, and gender-matched control subjects. No algorithm found a difference in CNV burden between the two groups. However, the total number of CNVs called ranged from 102 to 3,765 across algorithms. 
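The SymStereo entry above builds its symmetry analysis on a bank of log-Gabor wavelets explored in the spectral domain. A sketch of the standard 1-D log-Gabor frequency response (the center frequencies and bandwidth ratio below are illustrative, not the paper's):

```python
# 1-D log-Gabor frequency response, the building block of a wavelet
# bank of the kind the SymStereo entry analyzes (sketch only).
import numpy as np

def log_gabor(freqs, f0, sigma_ratio=0.65):
    g = np.zeros_like(freqs)
    pos = freqs > 0                      # the filter is undefined at f = 0
    g[pos] = np.exp(-(np.log(freqs[pos] / f0) ** 2)
                    / (2 * np.log(sigma_ratio) ** 2))
    return g

freqs = np.fft.rfftfreq(256)             # normalized frequency axis
bank = [log_gabor(freqs, f0) for f0 in (0.05, 0.1, 0.2, 0.4)]
```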
The mean CNV size ranged from 46 kb to 787 kb, and the average number of CNVs per subject ranged from 1 to 39. The number of novel CNVs not previously reported in normal subjects ranged from 0 to 212. Conclusions and Significance Motivated by the availability of multiple publicly available genome-wide SNP arrays, investigators are conducting numerous analyses to identify putative additional CNVs in complex genetic disorders. However, the number of CNVs identified in array-based studies, and whether these CNVs are novel or valid, will depend on the algorithm(s) used. Thus, given the variety of methods used, there will be many false positives and false negatives. Both guidelines for the identification of CNVs inferred from high-density arrays and the establishment of a gold standard for validation of CNVs are needed.", "which Algorithm ?", "HMMSeg", 539.0, 545.0], ["Background The detection of copy number variants (CNVs) and the results of CNV-disease association studies rely on how CNVs are defined, and because array-based technologies can only infer CNVs, CNV-calling algorithms can produce vastly different findings. Several authors have noted the large-scale variability between CNV-detection methods, as well as the substantial false positive and false negative rates associated with those methods. In this study, we use variations of four common algorithms for CNV detection (PennCNV, QuantiSNP, HMMSeg, and cnvPartition) and two definitions of overlap (any overlap and an overlap of at least 40% of the smaller CNV) to illustrate the effects of varying algorithms and definitions of overlap on CNV discovery. Methodology and Principal Findings We used a 56 K Illumina genotyping array enriched for CNV regions to generate hybridization intensities and allele frequencies for 48 Caucasian schizophrenia cases and 48 age-, ethnicity-, and gender-matched control subjects. No algorithm found a difference in CNV burden between the two groups. However, the total number of CNVs called ranged from 102 to 3,765 across algorithms. The mean CNV size ranged from 46 kb to 787 kb, and the average number of CNVs per subject ranged from 1 to 39. The number of novel CNVs not previously reported in normal subjects ranged from 0 to 212. Conclusions and Significance Motivated by the availability of multiple publicly available genome-wide SNP arrays, investigators are conducting numerous analyses to identify putative additional CNVs in complex genetic disorders. However, the number of CNVs identified in array-based studies, and whether these CNVs are novel or valid, will depend on the algorithm(s) used. Thus, given the variety of methods used, there will be many false positives and false negatives. Both guidelines for the identification of CNVs inferred from high-density arrays and the establishment of a gold standard for validation of CNVs are needed.", "which Algorithm ?", "QuantiSNP", 528.0, 537.0], ["Determination of copy number variants (CNVs) inferred in genome wide single nucleotide polymorphism arrays has shown increasing utility in genetic variant disease associations. Several CNV detection methods are available, but differences in CNV call thresholds and characteristics exist. We evaluated the relative performance of seven methods: circular binary segmentation, CNVFinder, cnvPartition, gain and loss of DNA, Nexus algorithms, PennCNV and QuantiSNP. 
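The Background entry above compares CNV calls under two overlap definitions (any overlap, or at least 40% of the smaller CNV). A minimal sketch of those two criteria on toy intervals:

```python
# The two overlap definitions used for calling two CNV intervals
# concordant in the entry above (coordinates are toy values).
def overlap_bp(a, b):
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def concordant(a, b, min_frac=0.0):
    ov = overlap_bp(a, b)
    if min_frac == 0.0:
        return ov > 0                      # "any overlap"
    smaller = min(a[1] - a[0], b[1] - b[0])
    return ov >= min_frac * smaller        # fraction of the smaller CNV

cnv_1, cnv_2 = (100_000, 150_000), (140_000, 200_000)
print(concordant(cnv_1, cnv_2))                  # True  (any overlap)
print(concordant(cnv_1, cnv_2, min_frac=0.4))    # False (10 kb < 40% of 50 kb)
```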
Tested data included real and simulated Illumina HumHap 550 data from the Singapore cohort study of the risk factors for Myopia (SCORM) and simulated data from Affymetrix 6.0 and platform-independent distributions. The normalized singleton ratio (NSR) is proposed as a metric for parameter optimization before enacting full analysis. We used 10 SCORM samples for optimizing parameter settings for each method and then evaluated method performance at optimal parameters using 100 SCORM samples. The statistical power, false positive rates, and receiver operating characteristic (ROC) curve residuals were evaluated by simulation studies. Optimal parameters, as determined by NSR and ROC curve residuals, were consistent across datasets. QuantiSNP outperformed other methods based on ROC curve residuals over most datasets. Nexus Rank and SNPRank have low specificity and high power. Nexus Rank calls oversized CNVs. PennCNV detects one of the fewest numbers of CNVs.", "which Algorithm ?", "cnvFinder", 374.0, 383.0], ["Vision is becoming more and more common in applications such as localization, autonomous navigation, path finding and many other computer vision applications. This paper presents an improved technique for feature matching in the stereo images captured by the autonomous vehicle. The Scale Invariant Feature Transform (SIFT) algorithm is used to extract distinctive invariant features from images but this algorithm has a high complexity and a long computational time. In order to reduce the computation time, this paper proposes a SIFT improvement technique based on a Self-Organizing Map (SOM) to perform the matching procedure more efficiently for feature matching problems. Experimental results on real stereo images show that the proposed algorithm performs feature group matching with lower computation time than the original SIFT algorithm. The results showing improvement over the original SIFT are validated through matching examples between different pairs of stereo images. The proposed algorithm can be applied to stereo vision based autonomous vehicle navigation for obstacle avoidance, as well as many other feature matching and computer vision applications.", "which Algorithm ?", "SIFT", 318.0, 322.0], ["High\u2010throughput single nucleotide polymorphism (SNP)\u2010array technologies allow to investigate copy number variants (CNVs) in genome\u2010wide scans and specific calling algorithms have been developed to determine CNV location and copy number. We report the results of a reliability analysis comparing data from 96 pairs of samples processed with CNVpartition, PennCNV, and QuantiSNP for Infinium Illumina Human 1Million probe chip data. We also performed a validity assessment with multiplex ligation\u2010dependent probe amplification (MLPA) as a reference standard. The number of CNVs per individual varied according to the calling algorithm. Higher numbers of CNVs were detected in saliva than in blood DNA samples regardless of the algorithm used. All algorithms presented low agreement with mean Kappa Index (KI) <66. PennCNV was the most reliable algorithm (KIw=98.96) when assessing the number of copies. The agreement observed in detecting CNV was higher in blood than in saliva samples. When comparing to MLPA, all algorithms identified poorly known copy aberrations (sensitivity = 0.19\u20130.28). In contrast, specificity was very high (0.97\u20130.99). Once a CNV was detected, the number of copies was truly assessed (sensitivity >0.62). 
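The SIFT entry above speeds up matching by first grouping descriptors with a self-organizing map. A minimal 1-D SOM sketch with invented sizes and learning rates (not the paper's configuration):

```python
# Minimal 1-D self-organizing map for grouping feature descriptors
# before matching (sketch; data are random stand-ins for SIFT vectors).
import numpy as np

def train_som(data, n_units=16, epochs=20, lr0=0.5, radius0=4.0):
    rng = np.random.default_rng(0)
    w = data[rng.choice(len(data), n_units)]          # init from samples
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        radius = max(1.0, radius0 * (1 - t / epochs))
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))  # best unit
            d = np.abs(np.arange(n_units) - bmu)            # grid distance
            h = np.exp(-(d ** 2) / (2 * radius ** 2))       # neighborhood
            w += lr * h[:, None] * (x - w)
    return w

descriptors = np.random.default_rng(1).random((200, 128))
units = train_som(descriptors)
# Matching can then be restricted to descriptors mapped to the same unit.
```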
Our results indicate that the current calling algorithms should be improved for high performance CNV analysis in genome\u2010wide scans. Further refinement is required to assess CNVs as risk factors in complex diseases. Hum Mutat 32:1\u201310, 2011. \u00a9 2011 Wiley\u2010Liss, Inc.", "which Algorithm ?", "cnvPartition", 340.0, 352.0], ["Several computer programs are available for detecting copy number variants (CNVs) using genome-wide SNP arrays. We evaluated the performance of four CNV detection software suites\u2014Birdsuite, Partek, HelixTree, and PennCNV-Affy\u2014in the identification of both rare and common CNVs. Each program's performance was assessed in two ways. The first was its recovery rate, i.e., its ability to call 893 CNVs previously identified in eight HapMap samples by paired-end sequencing of whole-genome fosmid clones, and 51,440 CNVs identified by array Comparative Genome Hybridization (aCGH) followed by validation procedures, in 90 HapMap CEU samples. The second evaluation was program performance calling rare and common CNVs in the Bipolar Genome Study (BiGS) data set (1001 bipolar cases and 1033 controls, all of European ancestry) as measured by the Affymetrix SNP 6.0 array. Accuracy in calling rare CNVs was assessed by positive predictive value, based on the proportion of rare CNVs validated by quantitative real-time PCR (qPCR), while accuracy in calling common CNVs was assessed by false positive/false negative rates based on qPCR validation results from a subset of common CNVs. Birdsuite recovered the highest percentages of known HapMap CNVs containing >20 markers in two reference CNV datasets. The recovery rate increased with decreased CNV frequency. In the tested rare CNV data, Birdsuite and Partek had higher positive predictive values than the other software suites. In a test of three common CNVs in the BiGS dataset, Birdsuite's call was 98.8% consistent with qPCR quantification in one CNV region, but the other two regions showed an unacceptable degree of accuracy. We found relatively poor consistency between the two \u201cgold standards,\u201d the sequence data of Kidd et al., and aCGH data of Conrad et al. Algorithms for calling CNVs, especially common ones, need substantial improvement, and a \u201cgold standard\u201d for detection of CNVs remains to be established.", "which Algorithm ?", "Birdsuite", 179.0, 188.0], ["An algorithm is described that allows log(n) processors to sort n records in just over 2n write cycles, together with suitable hardware to support the algorithm. The algorithm is a parallel version of the straight merge sort. The passes of the merge sort are run overlapped, with each pass supported by a separate processor. The intermediate files of a serial merge sort are replaced by first-in first-out queues. The processors and queues may be implemented in conventional solid logic technology or in bubble technology. A hybrid technology is also appropriate.", "which Algorithm ?", "Merge sort", 215.0, 225.0], ["Transforming natural language questions into formal queries is an integral task in Question Answering (QA) systems. QA systems built on knowledge graphs like DBpedia require a step after natural language processing for linking words, specifically including named entities and relations, to their corresponding entities in a knowledge graph. To achieve this task, several approaches rely on background knowledge bases containing semantically-typed relations, e.g., PATTY, for an extra disambiguation step. 
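The merge-sort entry above overlaps the passes of a straight merge sort, with FIFO queues replacing intermediate files. A sequential Python sketch of that pipeline structure (one pass per conceptual processor; the real design runs the passes concurrently in hardware):

```python
# Pipelined bottom-up merge sort: each pass doubles the run length and
# passes communicate through FIFO queues of runs (simulated serially).
from collections import deque
from heapq import merge

def merge_pass(runs):
    out = deque()
    while runs:
        a = runs.popleft()
        b = runs.popleft() if runs else []
        out.append(list(merge(a, b)))     # merge two sorted runs
    return out

def pipelined_merge_sort(records):
    runs = deque([r] for r in records)    # initial runs of length 1
    while len(runs) > 1:
        runs = merge_pass(runs)           # one pass = one pipeline stage
    return runs[0] if runs else []

print(pipelined_merge_sort([5, 3, 8, 1, 9, 2, 7]))   # [1, 2, 3, 5, 7, 8, 9]
```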
Two major factors may affect the performance of relation linking approaches whenever background knowledge bases are accessed: a) limited availability of such semantic knowledge sources, and b) lack of a systematic approach on how to maximize the benefits of the collected knowledge. We tackle this problem and devise SIBKB, a semantic-based index able to capture knowledge encoded on background knowledge bases like PATTY. SIBKB represents a background knowledge base as a bipartite, dynamic index over the relation patterns included in the knowledge base. Moreover, we develop a relation linking component able to exploit SIBKB features. The benefits of SIBKB are empirically studied on existing QA benchmarks and the observed results suggest that SIBKB is able to enhance the accuracy of relation linking by up to three times.", "which presents ?", "SIBKB", 823.0, 828.0], ["Query optimization in RDF Stores is a challenging problem as SPARQL queries typically contain many more joins than equivalent relational plans, and hence lead to a large join order search space. In such cases, cost-based query optimization often is not possible. One practical reason for this is that statistics typically are missing in web-scale settings such as the Linked Open Datasets (LOD). The more profound reason is that due to the absence of schematic structure in RDF, join-hit ratio estimation requires complicated forms of correlated join statistics; and currently there are no methods to identify the relevant correlations beforehand. For this reason, the use of good heuristics is essential in SPARQL query optimization, even in the case that they are partially used with cost-based statistics (i.e., hybrid query optimization). In this paper we describe a set of useful heuristics for SPARQL query optimizers. We present these in the context of a new Heuristic SPARQL Planner (HSP) that is capable of exploiting the syntactic and the structural variations of the triple patterns in a SPARQL query in order to choose an execution plan without the need of any cost model. For this, we define the variable graph and we show a reduction of the SPARQL query optimization problem to the maximum weight independent set problem. We implemented our planner on top of the MonetDB open source column-store and evaluated its effectiveness against the state-of-the-art RDF-3X engine as well as comparing the plan quality with a relational (SQL) equivalent of the benchmarks.", "which Has implementation ?", "Heuristic SPARQL Planner (HSP)", NaN, NaN], ["Deep Learning has achieved state-of-the-art performance in medical imaging. However, these methods for disease detection focus exclusively on improving the accuracy of classification or predictions without quantifying uncertainty in a decision. Knowing how much confidence there is in a computer-based medical diagnosis is essential for gaining clinicians' trust in the technology and therefore improving treatment. Today, the 2019 Coronavirus (SARS-CoV-2) infections are a major healthcare challenge around the world. Detecting COVID-19 in X-ray images is crucial for diagnosis, assessment and treatment. However, handling diagnostic uncertainty in the report is a challenging and yet inevitable task for radiologists. 
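The SIBKB entry above indexes a background knowledge base as a bipartite structure between textual relation patterns and KB relations. A hypothetical sketch of such an index; the pattern-relation pairs and the scoring are invented for illustration:

```python
# Hypothetical bipartite index in the spirit of SIBKB: patterns on one
# side, KB relations on the other, consulted in both directions.
from collections import defaultdict

pattern_to_relation = defaultdict(set)
relation_to_pattern = defaultdict(set)

def add_link(pattern, relation):
    pattern_to_relation[pattern].add(relation)
    relation_to_pattern[relation].add(pattern)

add_link("was born in", "dbo:birthPlace")     # illustrative pairs only
add_link("comes from", "dbo:birthPlace")
add_link("wrote", "dbo:author")

def link_relation(phrase):
    # Rank candidate KB relations by how many indexed patterns match.
    scores = defaultdict(int)
    for pattern, relations in pattern_to_relation.items():
        if pattern in phrase:
            for rel in relations:
                scores[rel] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(link_relation("Ada Lovelace was born in London"))  # ['dbo:birthPlace']
```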
In this paper, we investigate how drop-weights based Bayesian Convolutional Neural Networks (BCNN) can estimate uncertainty in a Deep Learning solution to improve the diagnostic performance of the human-machine team using a publicly available COVID-19 chest X-ray dataset, and show that the uncertainty in prediction is highly correlated with the accuracy of prediction. We believe that the availability of an uncertainty-aware deep learning solution will enable a wider adoption of Artificial Intelligence (AI) in a clinical setting.", "which Has implementation ?", "Bayesian Convolutional Neural Network", NaN, NaN], ["Static analysis is a fundamental task in query optimization. In this paper we study static analysis and optimization techniques for SPARQL, which is the standard language for querying Semantic Web data. Of particular interest for us is the optionality feature in SPARQL. It is crucial in Semantic Web data management, where data sources are inherently incomplete and the user is usually interested in partial answers to queries. This feature is one of the most complicated constructors in SPARQL and also the one that makes this language depart from classical query languages such as relational conjunctive queries. We focus on the class of well-designed SPARQL queries, which has been proposed in the literature as a fragment of the language with good properties regarding query evaluation. We first propose a tree representation for SPARQL queries, called pattern trees, which captures the class of well-designed SPARQL graph patterns and which can be considered as a query execution plan. Among other results, we propose several transformation rules for pattern trees, a simple normal form, and study equivalence and containment. We also study the enumeration and counting problems for this class of queries.", "which Has implementation ?", "Pattern trees", 858.0, 871.0], ["This Editorial describes the rationale, focus, scope and technology behind the newly launched, open access, innovative Food Modelling Journal (FMJ). The Journal is designed to publish those outputs of the research cycle that usually precede the publication of the research article, but have their own value and re-usability potential. Such outputs are methods, models, software and data. The Food Modelling Journal is launched by the AGINFRA+ community and is integrated with the AGINFRA+ Virtual Research Environment (VRE) to facilitate and streamline the authoring, peer review and publication of the manuscripts via the ARPHA Publishing Platform.", "which Has implementation ?", "Food Modelling Journal", 127.0, 149.0], ["Query optimization in RDF Stores is a challenging problem as SPARQL queries typically contain many more joins than equivalent relational plans, and hence lead to a large join order search space. In such cases, cost-based query optimization often is not possible. One practical reason for this is that statistics typically are missing in web-scale settings such as the Linked Open Datasets (LOD). The more profound reason is that due to the absence of schematic structure in RDF, join-hit ratio estimation requires complicated forms of correlated join statistics; and currently there are no methods to identify the relevant correlations beforehand. For this reason, the use of good heuristics is essential in SPARQL query optimization, even in the case that they are partially used with cost-based statistics (i.e., hybrid query optimization). In this paper we describe a set of useful heuristics for SPARQL query optimizers. 
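The BCNN entry above estimates uncertainty by keeping dropout ("drop-weights") active at test time. A NumPy sketch of Monte Carlo dropout with random stand-in weights, not a trained COVID-19 model:

```python
# Monte Carlo dropout sketch: T stochastic forward passes, with the
# spread of the predictions read as predictive uncertainty.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(64, 32)), rng.normal(size=(32, 2))  # stand-ins

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def stochastic_forward(x, p_drop=0.5):
    h = np.maximum(0, x @ W1)
    h = h * (rng.random(h.shape) > p_drop) / (1 - p_drop)  # dropout stays on
    return softmax(h @ W2)

x = rng.normal(size=(1, 64))                       # stand-in image features
probs = np.stack([stochastic_forward(x) for _ in range(100)])
mean = probs.mean(axis=0)
entropy = -(mean * np.log(mean + 1e-12)).sum()     # predictive uncertainty
```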
We present these in the context of a new Heuristic SPARQL Planner (HSP) that is capable of exploiting the syntactic and the structural variations of the triple patterns in a SPARQL query in order to choose an execution plan without the need of any cost model. For this, we define the variable graph and we show a reduction of the SPARQL query optimization problem to the maximum weight independent set problem. We implemented our planner on top of the MonetDB open source column-store and evaluated its effectiveness against the state-of-the-art RDF-3X engine as well as comparing the plan quality with a relational (SQL) equivalent of the benchmarks.", "which Has implementation ?", "MonetDB", 1371.0, 1378.0], ["This study investigates the distribution of cadmium and lead concentrations in the outcrop rock samples collected from Abakaliki anticlinorium in the Southern Benue Trough, Nigeria. The outcrop rock samples from seven sampling locations were air-dried for seventy-two hours, homogenized by grinding and passed through a < 63 micron mesh sieve. The ground and homogenized rock samples were pulverized and analyzed for cadmium and lead using an X-Ray Fluorescence Spectrometer. The concentrations of heavy metals in the outcrop rock samples ranged from < 0.10 \u2013 7.95 mg kg\u20131 for cadmium (Cd) and < 1.00 \u2013 4966.00 mg kg\u20131 for lead (Pb). Apart from an anomalous concentration measured in Afikpo Shale (Middle Segment), the results obtained revealed that rock samples from all the sampling locations yielded cadmium concentrations of < 0.10 mg kg\u20131 and the measured concentrations were below the average crustal abundance of 0.50 mg kg\u20131. Although a background concentration of <1.00 \u00b1 0.02 mg kg\u20131 was measured in Abakaliki Shale, rock samples from all the sampling locations revealed anomalous lead concentrations above the average crustal abundance of 30 mg kg\u20131. The results obtained reveal important contributions towards the understanding of heavy metal distribution patterns and provide baseline data that can be used for potential identification of areas at risk associated with natural sources of heavy metals contamination in the region. The use of outcrop rocks provides a cost-effective approach for monitoring regional heavy metal contamination associated with dissolution and/or weathering of rocks or parent materials. Evaluation of heavy metals may be effectively used in large scale regional pollution monitoring of soil, groundwater, atmospheric and marine environment. Therefore, monitoring of heavy metal concentrations in soils, groundwater and atmospheric environment is imperative in order to prevent bioaccumulation in various ecological receptors.", "which Has implementation ?", "Evaluation of heavy metals may be effectively used in large scale regional pollution monitoring of soil, groundwater, atmospheric and marine environment.", NaN, NaN], ["Two modalities are often used to convey information in a complementary and beneficial manner, e.g., in online news, videos, educational resources, or scientific publications. The automatic understanding of semantic correlations between text and associated images as well as their interplay has great potential for enhanced multimodal web search and recommender systems. However, automatic understanding of multimodal information is still an unsolved research problem. 
Recent approaches such as image captioning focus on precisely describing visual content and translating it to text, but typically address neither semantic interpretations nor the specific role or purpose of an image-text constellation. In this paper, we go beyond previous work and investigate, inspired by research in visual communication, useful semantic image-text relations for multimodal information retrieval. We derive a categorization of eight semantic image-text classes (e.g., \"illustration\" or \"anchorage\") and show how they can systematically be characterized by a set of three metrics: cross-modal mutual information, semantic correlation, and the status relation of image and text. Furthermore, we present a deep learning system to predict these classes by utilizing multimodal embeddings. To obtain a sufficiently large amount of training data, we have automatically collected and augmented data from a variety of datasets and web resources, which enables future research on this topic. Experimental results on a demanding test set demonstrate the feasibility of the approach.", "which Has implementation ?", "Multimodal Embedding", NaN, NaN], ["Most Semantic Web applications rely on querying graphs, typically by using SPARQL with a triple store. Increasingly, applications also analyze properties of the graph structure to compute statistical inferences. The current Semantic Web infrastructure, however, does not efficiently support such operations. This forces developers to extract the relevant data for external statistical post-processing. In this paper we propose to rethink query execution in a triple store as a highly parallelized asynchronous graph exploration on an active index data structure. This approach also allows to integrate SPARQL-querying with the sampling of graph properties. To evaluate this architecture we implemented Random Walk TripleRush, which is built on a distributed graph processing system. Our evaluations show that this architecture enables both competitive graph querying, as well as the ability to execute various types of random walks with restarts that sample interesting graph properties. Thanks to the asynchronous architecture, first results are sometimes returned in a fraction of the full execution time. We also evaluate the scalability and show that the architecture supports fast query-times on a dataset with more than a billion triples.", "which Has implementation ?", "Triple Store", 89.0, 101.0], ["Most Semantic Web applications rely on querying graphs, typically by using SPARQL with a triple store. Increasingly, applications also analyze properties of the graph structure to compute statistical inferences. The current Semantic Web infrastructure, however, does not efficiently support such operations. This forces developers to extract the relevant data for external statistical post-processing. In this paper we propose to rethink query execution in a triple store as a highly parallelized asynchronous graph exploration on an active index data structure. This approach also allows to integrate SPARQL-querying with the sampling of graph properties. To evaluate this architecture we implemented Random Walk TripleRush, which is built on a distributed graph processing system. Our evaluations show that this architecture enables both competitive graph querying, as well as the ability to execute various types of random walks with restarts that sample interesting graph properties. 
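The image-text entry above scores pairs with metrics such as cross-modal mutual information. A minimal sketch of the kind of embedding similarity such metrics build on; the vectors are random stand-ins, not the paper's multimodal embeddings:

```python
# Cosine similarity between an image embedding and a text embedding,
# a simple proxy for how much visual and textual content overlap.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
img_emb, txt_emb = rng.normal(size=300), rng.normal(size=300)
shared_content = cosine(img_emb, txt_emb)   # higher -> more shared content
```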
Thanks to the asynchronous architecture, first results are sometimes returned in a fraction of the full execution time. We also evaluate the scalability and show that the architecture supports fast query times on a dataset with more than a billion triples.", "which Has implementation ?", "Random Walks", 919.0, 931.0], ["A fundamental problem related to RDF query processing is selectivity estimation, which is crucial to query optimization for determining a join order of RDF triple patterns. In this paper we focus on selectivity estimation for SPARQL graph patterns. The previous work adopts the join uniformity assumption when estimating the joined triple patterns. This assumption would lead to highly inaccurate estimations in the cases where properties in SPARQL graph patterns are correlated. We take into account the dependencies among properties in SPARQL graph patterns and propose a more accurate estimation model. Since star and chain query patterns are common in SPARQL graph patterns, we first focus on these two basic patterns and propose to use a Bayesian network and a chain histogram, respectively, for estimating their selectivity. Then, for estimating the selectivity of an arbitrary SPARQL graph pattern, we design algorithms for maximally using the precomputed statistics of the star paths and chain paths. The experiments show that our method outperforms existing approaches in accuracy.", "which Has implementation ?", "Bayesian Network", 749.0, 765.0], ["Research data publishing is today widely regarded as crucial for reproducibility, proper assessment of scientific results, and as a way for researchers to get proper credit for sharing their data. However, several challenges need to be solved to fully realize its potential, one of them being the development of a global standard for links between research data and literature. Current linking solutions are mostly based on bilateral, ad hoc agreements between publishers and data centers. These operate in silos so that content cannot be readily combined to deliver a network graph connecting research data and literature in a comprehensive and reliable way. The Research Data Alliance (RDA) Publishing Data Services Working Group (PDS-WG) aims to address this issue of fragmentation by bringing together different stakeholders to agree on a common infrastructure for sharing links between datasets and literature. The paper aims to discuss these issues. This paper presents the synergic effort of the RDA PDS-WG and the OpenAIRE infrastructure toward enabling a common infrastructure for exchanging data-literature links by realizing and operating the Data-Literature Interlinking (DLI) Service. The DLI Service populates and provides access to a graph of data set-literature links (at the time of writing close to five million, and growing) collected from a variety of major data centers, publishers, and research organizations. To achieve its objectives, the Service proposes an interoperable exchange data model and format, based on which it collects and publishes links, thereby offering the opportunity to validate such a common approach on real-case scenarios, with real providers and consumers. 
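The selectivity entry above contrasts the join uniformity assumption with statistics that capture property correlations. A toy sketch of the two estimates for a two-property star pattern; all counts are invented, not real RDF statistics:

```python
# Selectivity of a star pattern over two properties: the uniform
# independence estimate vs. one using joint (correlated) counts.
n_subjects = 10_000
count = {"rdf:type": 9_000, "foaf:name": 6_000}          # per-property counts
joint_count = {("rdf:type", "foaf:name"): 5_800}         # co-occurring subjects

def independent_estimate(p1, p2):
    return (count[p1] / n_subjects) * (count[p2] / n_subjects)

def correlated_estimate(p1, p2):
    return joint_count[(p1, p2)] / n_subjects

print(independent_estimate("rdf:type", "foaf:name"))     # 0.54
print(correlated_estimate("rdf:type", "foaf:name"))      # 0.58
```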
Feedback of these actors will drive continuous refinement of both the data model and exchange format, supporting the further development of the Service to become an essential part of a universal, open, cross-platform, cross-discipline solution for collecting and sharing data set-literature links. This realization of the DLI Service is the first technical, cross-community, and collaborative effort in the direction of establishing a common infrastructure for facilitating the exchange of data set-literature links. As a result of its operation and underlying community effort, a new activity, named Scholix, has been initiated involving technological-level stakeholders such as DataCite and CrossRef.", "which Has implementation ?", "Data-Literature Interlinking (DLI) Service", NaN, NaN], ["Hajj is one of the largest mass gatherings where Muslims from all over the world gather in Makkah each year for pilgrimage. A mass assembly of such scale bears a huge risk of disaster either natural or man-made. In the past few years, thousands of casualties have occurred while performing different Hajj rituals, especially during the Circumambulation of Kaba (Tawaf) due to stampede or chaos. During such calamitous situations, an appropriate evacuation strategy can help resolve the problem and mitigate further risk of casualties. It is however a daunting research problem to identify an optimal course of action based on several constraints. Modeling and analyzing such a problem of real-time and spatially explicit complexity requires a microscale crowd simulation and analysis framework, which not only allows the modeler to express the spatial dimensions and features of the environment in real scale, but also provides modalities to capture complex crowd behaviors. In this paper, we propose an Agent-based Crowd Simulation & Analysis framework that incorporates the use of the Anylogic Pedestrian library and integrates/interoperates the Anylogic simulation environment with external modules for optimization and analysis. It hence provides a runtime environment for analyzing complex situations, e.g., emergency evacuation strategies. The key features of the proposed framework include: (i) Ability to model large crowds in a spatially explicit environment at real scale; (ii) Simulation of complex crowd behavior such as emergency evacuation; (iii) Interoperability of optimization and analysis modules with simulation runtime for evaluating evacuation strategies. We present a case study of a Hajj scenario as a proof of concept and a test bed for identifying and evaluating optimal strategies for crowd evacuation", "which Has implementation ?", "Crowd Simulation and Analysis Framework", NaN, NaN], ["This study investigates the distribution of cadmium and lead concentrations in the outcrop rock samples collected from Abakaliki anticlinorium in the Southern Benue Trough, Nigeria. The outcrop rock samples from seven sampling locations were air-dried for seventy-two hours, homogenized by grinding and passed through a < 63 micron mesh sieve. The ground and homogenized rock samples were pulverized and analyzed for cadmium and lead using an X-Ray Fluorescence Spectrometer. The concentrations of heavy metals in the outcrop rock samples ranged from < 0.10 \u2013 7.95 mg kg\u20131 for cadmium (Cd) and < 1.00 \u2013 4966.00 mg kg\u20131 for lead (Pb). 
Apart from an anomalous concentration measured in Afikpo Shale (Middle Segment), the results obtained revealed that rock samples from all the sampling locations yielded cadmium concentrations of < 0.10 mg kg\u20131 and the measured concentrations were below the average crustal abundance of 0.50 mg kg\u20131. Although a background concentration of <1.00 \u00b1 0.02 mg kg\u20131 was measured in Abakaliki Shale, rock samples from all the sampling locations revealed anomalous lead concentrations above the average crustal abundance of 30 mg kg\u20131. The results obtained reveal important contributions towards the understanding of heavy metal distribution patterns and provide baseline data that can be used for potential identification of areas at risk associated with natural sources of heavy metals contamination in the region. The use of outcrop rocks provides a cost-effective approach for monitoring regional heavy metal contamination associated with dissolution and/or weathering of rocks or parent materials. Evaluation of heavy metals may be effectively used in large scale regional pollution monitoring of soil, groundwater, atmospheric and marine environment. Therefore, monitoring of heavy metal concentrations in soils, groundwater and atmospheric environment is imperative in order to prevent bioaccumulation in various ecological receptors.", "which Has implementation ?", "The use of outcrop rocks provides a cost-effective approach for monitoring regional heavy metal contamination associated with dissolution and/or weathering of rocks or parent materials. ", 1426.0, 1612.0], ["ASTER is the Advanced Spaceborne Thermal Emission and Reflection Radiometer, a multispectral sensor, which measures reflected and emitted electromagnetic radiation of the earth's surface in 14 bands. The present study aims to delineate different rock types in the Sittampundi Anorthositic Complex (SAC), Tamil Nadu using Visible (VIS), near-infrared (NIR) and short wave infrared (SWIR) reflectance data from the nine ASTER bands. We used different band ratios and band combinations in the VNIR and SWIR regions for discriminating lithological boundaries. SAC is also considered a lunar highland analog rock. Anorthosite is a plagioclase-rich igneous rock with subordinate amounts of pyroxenes, olivine and other minerals. A methodology has been applied to correct the crosstalk effect and to convert radiance to reflectance. Principal Component Analysis (PCA) has been realized on the 9 ASTER bands in order to reduce the redundant information in highly correlated bands. PCA-derived FCC results support the validation and demarcation of the lithological boundaries defined on the previous geological map. The image-derived spectral profiles for anorthosite are compared with the ASTER resampled laboratory spectra, JHU spectral library spectra and Apollo 14 lunar anorthosites spectra. The Spectral Angle Mapping imaging spectroscopy technique has been applied to classify the ASTER image of the study area, and it was found that ASTER remote sensing data are a powerful tool for mapping terrestrial anorthositic regions; a similar process could be applied to map planetary surfaces (e.g., the Moon).", "which Data used ?", "ASTER ", 0.0, 6.0], ["In the present study, we have attempted the delineation of limestone using different spectral mapping algorithms in ASTER data. Each spectral mapping algorithm derives a limestone exposure map independently. 
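The Sittampundi entry above classifies with Spectral Angle Mapping. A minimal Spectral Angle Mapper sketch; the reference spectra below are made up, not library spectra:

```python
# Spectral Angle Mapper: the angle between a pixel spectrum and each
# reference spectrum, with the smallest angle winning.
import numpy as np

def spectral_angle(pixel, ref):
    cos = pixel @ ref / (np.linalg.norm(pixel) * np.linalg.norm(ref))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(pixel, references):
    angles = {name: spectral_angle(pixel, r) for name, r in references.items()}
    return min(angles, key=angles.get)

references = {"anorthosite": np.array([0.30, 0.42, 0.50, 0.47]),
              "phyllite":    np.array([0.10, 0.15, 0.18, 0.20])}
print(sam_classify(np.array([0.28, 0.40, 0.49, 0.45]), references))
```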
Although these spectral maps are broadly similar to each other, they are also different at places in terms of spatial disposition of limestone pixels. Therefore, an attempt is made to integrate the results of these spectral maps to derive an integrated map using the minimum noise fraction (MNF) method. The first MNF image is the result of two cascaded principal component methods suitable for preserving complementary information derived from each spectral map. While implementing MNF, noise or non-coherent pixels occurring within a homogeneous patch of limestone are removed first using the shift difference method, before attempting principal component analysis on input spectral maps for deriving a composite spectral map of limestone exposures. The limestone exposure map is further validated based on spectral data and ancillary geological data.", "which Data used ?", "ASTER ", 116.0, 122.0], ["The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a 14-band multispectral instrument on board the Earth Observing System (EOS), TERRA. The three bands between 0.52 and 0.86 \u03bcm and the six bands from 1.60 to 2.43 \u03bcm, which have 15- and 30-m spatial resolution, respectively, were selected primarily for making remote mineralogical determinations. The Cuprite, Nevada, mining district comprises two hydrothermal alteration centers where Tertiary volcanic rocks have been hydrothermally altered mainly to bleached silicified rocks and opalized rocks, with a marginal zone of limonitic argillized rocks. Country rocks are mainly Cambrian phyllitic siltstone and limestone. Evaluation of an ASTER image of the Cuprite district shows that spectral reflectance differences in the nine bands in the 0.52 to 2.43 \u03bcm region provide a basis for identifying and mapping mineralogical components which characterize the main hydrothermal alteration zones: opal is the spectrally dominant mineral in the silicified zone; whereas alunite and kaolinite are dominant in the opalized zone. In addition, the distribution of unaltered country rocks was mapped because of the presence of spectrally dominant muscovite in the siltstone and calcite in limestone, and the tuffaceous rocks and playa deposits were distinguishable due to their relatively flat spectra and weak absorption features at 2.33 and 2.20 \u03bcm, respectively. An Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) image of the study area was processed using a methodology similar to that used with the ASTER data. Comparison of the ASTER and AVIRIS results shows that the results are generally similar, but the higher spectral resolution of AVIRIS (224 bands) permits identification of more individual minerals, including certain polymorphs. However, ASTER has recorded images of more than 90 percent of the Earth\u2019s land surface with less than 20 percent cloud cover, and these data are available at nominal or no cost. Landsat TM images have a similar spatial resolution to ASTER images, but TM has fewer bands, which limits its usefulness for making mineral determinations.", "which Data used ?", "ASTER ", 717.0, 723.0], ["The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a research facility instrument launched on NASA's Terra spacecraft in December 1999. Spectral indices, a kind of orthogonal transformation in the five-dimensional space formed by the five ASTER short-wave-infrared (SWIR) bands, were proposed for discrimination and mapping of surface rock types. 
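The limestone entry above derives its integrated map with an MNF built from a shift-difference noise estimate followed by PCA (the "two cascaded principal component methods"). A NumPy sketch of that two-step construction on stand-in spectral maps:

```python
# MNF sketch: estimate noise by shift differences, whiten by the noise
# covariance, then apply PCA to the whitened data (toy inputs).
import numpy as np

rng = np.random.default_rng(0)
maps = rng.random((4, 100, 100))                   # stacked spectral maps
X = maps.reshape(4, -1).T                          # pixels x bands

# Shift-difference noise estimate along one image axis.
noise = (maps[:, :, 1:] - maps[:, :, :-1]).reshape(4, -1).T / np.sqrt(2)
Cn = np.cov(noise, rowvar=False)                   # noise covariance

evals, evecs = np.linalg.eigh(Cn)
whiten = evecs / np.sqrt(evals)                    # noise-whitening transform
Xw = (X - X.mean(0)) @ whiten                      # first "PC" step

_, _, Vt = np.linalg.svd(Xw - Xw.mean(0), full_matrices=False)
mnf1 = (Xw @ Vt[0]).reshape(100, 100)              # first MNF component
```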
These include Alunite Index, Kaolinite Index, Calcite Index, and Montmorillonite Index, and can be calculated by a linear combination of the reflectance values of the five SWIR bands. The transform coefficients were determined so as to direct the transform axes to the average spectral pattern of the typical minerals. The spectral indices were applied to the simulated ASTER dataset of Cuprite, Nevada, USA, after converting its digital numbers to surface reflectance. The resultant spectral index images were useful for lithologic mapping and were easy to interpret geologically. An advantage of this method is that we can use the pre-determined transform coefficients, as long as image data are converted to surface reflectance.", "which Data used ?", "ASTER ", 266.0, 272.0], ["The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a 14-band multispectral instrument on board the Earth Observing System (EOS), TERRA. The three bands between 0.52 and 0.86 \u03bcm and the six bands from 1.60 to 2.43 \u03bcm, which have 15- and 30-m spatial resolution, respectively, were selected primarily for making remote mineralogical determinations. The Cuprite, Nevada, mining district comprises two hydrothermal alteration centers where Tertiary volcanic rocks have been hydrothermally altered mainly to bleached silicified rocks and opalized rocks, with a marginal zone of limonitic argillized rocks. Country rocks are mainly Cambrian phyllitic siltstone and limestone. Evaluation of an ASTER image of the Cuprite district shows that spectral reflectance differences in the nine bands in the 0.52 to 2.43 \u03bcm region provide a basis for identifying and mapping mineralogical components which characterize the main hydrothermal alteration zones: opal is the spectrally dominant mineral in the silicified zone; whereas alunite and kaolinite are dominant in the opalized zone. In addition, the distribution of unaltered country rocks was mapped because of the presence of spectrally dominant muscovite in the siltstone and calcite in limestone, and the tuffaceous rocks and playa deposits were distinguishable due to their relatively flat spectra and weak absorption features at 2.33 and 2.20 \u03bcm, respectively. An Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) image of the study area was processed using a methodology similar to that used with the ASTER data. Comparison of the ASTER and AVIRIS results shows that the results are generally similar, but the higher spectral resolution of AVIRIS (224 bands) permits identification of more individual minerals, including certain polymorphs. However, ASTER has recorded images of more than 90 percent of the Earth\u2019s land surface with less than 20 percent cloud cover, and these data are available at nominal or no cost. Landsat TM images have a similar spatial resolution to ASTER images, but TM has fewer bands, which limits its usefulness for making mineral determinations.", "which Data used ?", "AVIRIS", 1490.0, 1496.0], ["Hyperspectral remote sensing has been widely used in mineral identification using the particularly useful short-wave infrared (SWIR) wavelengths (1.0 to 2.5 \u03bcm). Current mineral mapping methods are easily limited by the sensor\u2019s radiometric sensitivity and atmospheric effects. Therefore, a simple mineral mapping algorithm (SMMA) based on the combined application of multitype diagnostic SWIR absorption features for hyperspectral data is proposed. 
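A spectral index in the sense of the entry above is a fixed linear combination of the five ASTER SWIR-band reflectances. A sketch with hypothetical coefficients, not the published Alunite/Kaolinite/Calcite/Montmorillonite weights:

```python
# One spectral index = dot product of a fixed coefficient vector with
# the five SWIR-band reflectances of a pixel (coefficients invented).
import numpy as np

weights = np.array([0.2, -0.6, 0.6, -0.4, 0.2])          # hypothetical index
reflectance = np.array([0.31, 0.28, 0.35, 0.30, 0.29])   # ASTER bands 5-9
index_value = float(weights @ reflectance)               # one value per pixel
```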
A total of nine absorption features are calculated from the airborne visible/infrared imaging spectrometer data, the Hyperion hyperspectral data, and the ground reference spectra collected from the United States Geological Survey (USGS) spectral library. Based on spectral analysis and statistics, a mineral mapping decision-tree model for the Cuprite mining district in Nevada, USA, is constructed. Then, the SMMA algorithm is used to perform mineral mapping experiments. The mineral map from the USGS (USGS map) in the Cuprite area is selected for validation purposes. Results showed that the SMMA algorithm is able to identify most minerals with high coincidence with USGS map results. Compared with Hyperion data (overall accuracy=74.54%), AVIRIS data showed overall better mineral mapping results (overall accuracy=94.82%) owing to AVIRIS's higher signal-to-noise ratio and spatial resolution.", "which Data used ?", "AVIRIS", 1226.0, 1232.0], ["Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well documented. The launch of the National Aeronautics and Space Administration Earth Observing 1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4-2.5-\u03bcm range with 242 spectral bands at approximately 10-nm spectral resolution and 30-m spatial resolution. 
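The SMMA entry above keys its decision tree on diagnostic SWIR absorption features. A sketch of one such feature measure, band depth after straight-line continuum removal; the spectrum values are invented:

```python
# Band depth of an absorption feature after straight-line continuum
# removal between two shoulder wavelengths (toy spectrum).
import numpy as np

wl = np.array([2.10, 2.16, 2.20, 2.26, 2.33])      # micrometres
refl = np.array([0.42, 0.38, 0.30, 0.37, 0.41])    # absorption near 2.20

def band_depth(wl, refl, left, right, center):
    i_l, i_r, i_c = (int(np.argmin(np.abs(wl - w))) for w in (left, right, center))
    # Straight-line continuum between the two shoulders.
    t = (wl[i_c] - wl[i_l]) / (wl[i_r] - wl[i_l])
    continuum = refl[i_l] + t * (refl[i_r] - refl[i_l])
    return 1.0 - refl[i_c] / continuum

print(band_depth(wl, refl, 2.10, 2.33, 2.20))      # ~0.28, a deep feature
```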
The analysis results demonstrate that spaceborne hyperspectral sensors can produce useful mineralogic information, but also indicate that SNR improvements are required for future spaceborne sensors to allow the same level of mapping that is currently possible from airborne sensors such as AVIRIS.", "which Data used ?", "AVIRIS", 1593.0, 1599.0], ["Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well documented. The launch of the National Aeronautics and Space Administration Earth Observing 1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4-2.5-\u03bcm range with 242 spectral bands at approximately 10-nm spectral resolution and 30-m spatial resolution. Analytical Imaging and Geophysics LLC and the Commonwealth Scientific and Industrial Research Organisation have been involved in efforts to evaluate, validate, and demonstrate Hyperion's utility for geologic mapping in a variety of sites in the United States and around the world. Initial results over several sites with established ground truth and years of airborne hyperspectral data show that Hyperion data from the shortwave infrared spectrometer can be used to produce useful geologic (mineralogic) information. Minerals mapped include carbonates, chlorite, epidote, kaolinite, alunite, buddingtonite, muscovite, hydrothermal silica, and zeolite. Hyperion data collected under optimum conditions (summer season, bright targets, well-exposed geology) indicate that Hyperion data meet prelaunch specifications and allow subtle distinctions such as determining the difference between calcite and dolomite and mapping solid solution differences in micas caused by substitution in octahedral molecular sites. Comparison of airborne hyperspectral data [from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)] to the Hyperion data establishes that Hyperion provides similar basic mineralogic information, with the principal limitation being limited mapping of fine spectral detail under less-than-optimum acquisition conditions (winter season, dark targets) based on lower signal-to-noise ratios. Case histories demonstrate the analysis methodologies and level of information available from the Hyperion data. They also show the viability of Hyperion as a means of extending hyperspectral mineral mapping to areas not accessible to aircraft sensors. The analysis results demonstrate that spaceborne hyperspectral sensors can produce useful mineralogic information, but also indicate that SNR improvements are required for future spaceborne sensors to allow the same level of mapping that is currently possible from airborne sensors such as AVIRIS.", "which Data used ?", "Hyperion", 227.0, 235.0], ["Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well documented. The launch of the National Aeronautics and Space Administration Earth Observing 1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4-2.5-\u03bcm range with 242 spectral bands at approximately 10-nm spectral resolution and 30-m spatial resolution.
Analytical Imaging and Geophysics LLC and the Commonwealth Scientific and Industrial Research Organisation have been involved in efforts to evaluate, validate, and demonstrate Hyperion's utility for geologic mapping in a variety of sites in the United States and around the world. Initial results over several sites with established ground truth and years of airborne hyperspectral data show that Hyperion data from the shortwave infrared spectrometer can be used to produce useful geologic (mineralogic) information. Minerals mapped include carbonates, chlorite, epidote, kaolinite, alunite, buddingtonite, muscovite, hydrothermal silica, and zeolite. Hyperion data collected under optimum conditions (summer season, bright targets, well-exposed geology) indicate that Hyperion data meet prelaunch specifications and allow subtle distinctions such as determining the difference between calcite and dolomite and mapping solid solution differences in micas caused by substitution in octahedral molecular sites. Comparison of airborne hyperspectral data [from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)] to the Hyperion data establishes that Hyperion provides similar basic mineralogic information, with the principal limitation being limited mapping of fine spectral detail under less-than-optimum acquisition conditions (winter season, dark targets) based on lower signal-to-noise ratios. Case histories demonstrate the analysis methodologies and level of information available from the Hyperion data. They also show the viability of Hyperion as a means of extending hyperspectral mineral mapping to areas not accessible to aircraft sensors. The analysis results demonstrate that spaceborne hyperspectral sensors can produce useful mineralogic information, but also indicate that SNR improvements are required for future spaceborne sensors to allow the same level of mapping that is currently possible from airborne sensors such as AVIRIS.", "which Data used ?", "Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)", NaN, NaN], ["The Naipa and Muiane mines are located on the Nampula complex, a stratigraphic tectonic subdivision of the Mozambique Belt, in the Alto Ligonha region. The pegmatites are of the Li-Cs-Ta type and intrude a chlorite phyllite and gneisses with amphibole and biotite. The mines are still active. The main objective of this work was to analyze the pegmatite\u2019s spectral behavior considering ASTER and Landsat 8 OLI data. An ASTER image from 27/05/2005 and a Landsat 8 OLI image from 02/02/2018 were considered. The data were radiometrically calibrated and then atmospherically corrected with the Dark Object Subtraction algorithm available in the Semi-Automatic Classification Plugin accessible in QGIS software. In the field, samples were collected from lepidolite waste piles in the Naipa and Muiane mines. A spectroradiometer was used in order to analyze the spectral behavior of several pegmatite samples collected in the field in Alto Ligonha (Naipa and Muiane mines). In addition, QGIS software was also used for the spectral mapping of the hypothetical hydrothermal alterations associated with occurrences of basic metals, beryl gemstones, tourmalines, columbite-tantalites, and lithium minerals. A supervised classification algorithm, Spectral Angle Mapper, was employed for the data processing, and the overall accuracy achieved was 80%. The integration of ASTER and Landsat 8 OLI data has proved very useful for pegmatite mapping.
From the results obtained, we can conclude that: (i) the combination of ASTER and Landsat 8 OLI data allows us to obtain more information about mineral composition than just one sensor, i.e., these two sensors are complementary; (ii) the alteration spots identified in the mines area are composed of clay minerals. In the future, more data and other image processing algorithms can be applied in order to identify the different lithium minerals, such as spodumene, petalite, amblygonite and lepidolite.", "which Other datasets ?", "ASTER ", 383.0, 389.0], ["Abstract. The integration of Landsat 8 OLI and ASTER data is an efficient tool for interpreting lead\u2013zinc mineralization in the Huoshaoyun Pb\u2013Zn mining region located in the west Kunlun mountains, an area of high altitude and very rugged terrain, where traditional geological work becomes limited and time-consuming. This task was accomplished by using band ratios (BRs), principal component analysis, and spectral matched filtering (SMF) methods. It is concluded that some BR color composites and principal components of each image contain useful information for lithological mapping. The SMF technique is useful for detecting lead\u2013zinc mineralization zones, and the results could be verified by handheld portable X-ray fluorescence analysis. Therefore, the proposed methodology shows the strong potential of Landsat 8 OLI and ASTER data in lithological mapping and lead\u2013zinc mineralization zone extraction in carbonate strata.", "which Other datasets ?", "ASTER ", 47.0, 53.0], ["Abstract. Lithological mapping is a fundamental step in various mineral prospecting studies because it forms the basis of the interpretation and validation of retrieved results. Therefore, this study exploited the multispectral Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Landsat 8 Operational Land Imager (OLI) data in order to map lithological units in the Bas Dr\u00e2a inlier in the Moroccan Anti-Atlas. This task was completed by using principal component analysis (PCA), band ratios (BR), and support vector machine (SVM) classification. The overall accuracy and kappa coefficient of the SVM classification, based on ground truth, together with the results of PCA and BR, show an excellent correlation with the existing geological map of the study area. Consequently, the methodology proposed demonstrates a high potential of ASTER and Landsat 8 OLI data in discriminating lithological units.", "which Other datasets ?", "ASTER ", 838.0, 844.0], ["ABSTRACT The present study exploits high-resolution hyperspectral imagery acquired by the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) sensor from the Hutti-Maski gold deposit area, India, to map hydrothermal alteration minerals. The study area is a volcanic-dominated late Archean greenstone belt that hosts major gold mineralization in the Eastern Dharwar Craton of southern India. The study encompasses pre-processing, spectral and spatial image reduction using Minimum Noise Fraction (MNF) and Fast Pixel Purity Index (FPPI), followed by endmember extraction using the n-dimensional visualizer and the United States Geological Survey (USGS) mineral spectral library. Image-derived endmembers such as goethite, chlorite, chlorite at the mine site (chlorite mixed with mined materials), kaolinite, and muscovite were subsequently used in spectral mapping methods such as Spectral Angle Mapper (SAM), Spectral Information Divergence (SID) and its hybrid, i.e. SIDSAMtan.
A spectral similarity matrix of the target and non-target-based method has been proposed to find the possible optimum threshold needed to obtain a mineral map using spectral mapping methods. Relative Spectral Discrimination Power (RSDPW) and Confusion Matrix (CM) have been used to evaluate the performance of SAM, SID, and SIDSAMtan. The RSDPW and CM illustrate that SIDSAMtan benefits from the unique characteristics of SAM and SID to achieve better discrimination capability. The Overall Accuracy (OA) and kappa coefficient (\u04a1) of SAM, SID, and SIDSAMtan were computed using 900 random validation points, yielding 90% (OA) and 0.88 (\u04a1), 91.4% and 0.90, and 94.4% and 0.93, respectively. The obtained mineral map demonstrates that the northern portion of the area mainly consists of muscovite, whereas the southern part is marked by chlorite, goethite, muscovite and kaolinite, indicating the propylitic alteration. Most of these minerals are associated with altered metavolcanic rocks and migmatite.", "which yields ?", "SIDSAM", NaN, NaN], ["ABSTRACT The present study exploits high-resolution hyperspectral imagery acquired by the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) sensor from the Hutti-Maski gold deposit area, India, to map hydrothermal alteration minerals. The study area is a volcanic-dominated late Archean greenstone belt that hosts major gold mineralization in the Eastern Dharwar Craton of southern India. The study encompasses pre-processing, spectral and spatial image reduction using Minimum Noise Fraction (MNF) and Fast Pixel Purity Index (FPPI), followed by endmember extraction using the n-dimensional visualizer and the United States Geological Survey (USGS) mineral spectral library. Image-derived endmembers such as goethite, chlorite, chlorite at the mine site (chlorite mixed with mined materials), kaolinite, and muscovite were subsequently used in spectral mapping methods such as Spectral Angle Mapper (SAM), Spectral Information Divergence (SID) and its hybrid, i.e. SIDSAMtan. A spectral similarity matrix of the target and non-target-based method has been proposed to find the possible optimum threshold needed to obtain a mineral map using spectral mapping methods. Relative Spectral Discrimination Power (RSDPW) and Confusion Matrix (CM) have been used to evaluate the performance of SAM, SID, and SIDSAMtan. The RSDPW and CM illustrate that SIDSAMtan benefits from the unique characteristics of SAM and SID to achieve better discrimination capability. The Overall Accuracy (OA) and kappa coefficient (\u04a1) of SAM, SID, and SIDSAMtan were computed using 900 random validation points, yielding 90% (OA) and 0.88 (\u04a1), 91.4% and 0.90, and 94.4% and 0.93, respectively. The obtained mineral map demonstrates that the northern portion of the area mainly consists of muscovite, whereas the southern part is marked by chlorite, goethite, muscovite and kaolinite, indicating the propylitic alteration. Most of these minerals are associated with altered metavolcanic rocks and migmatite.", "which yields ?", " Confusion Matrix", 1235.0, 1252.0], ["ABSTRACT The present study exploits high-resolution hyperspectral imagery acquired by the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) sensor from the Hutti-Maski gold deposit area, India, to map hydrothermal alteration minerals. The study area is a volcanic-dominated late Archean greenstone belt that hosts major gold mineralization in the Eastern Dharwar Craton of southern India.
The study encompasses pre-processing, spectral and spatial image reduction using Minimum Noise Fraction (MNF) and Fast Pixel Purity Index (FPPI), followed by endmember extraction using the n-dimensional visualizer and the United States Geological Survey (USGS) mineral spectral library. Image-derived endmembers such as goethite, chlorite, chlorite at the mine site (chlorite mixed with mined materials), kaolinite, and muscovite were subsequently used in spectral mapping methods such as Spectral Angle Mapper (SAM), Spectral Information Divergence (SID) and its hybrid, i.e. SIDSAMtan. A spectral similarity matrix of the target and non-target-based method has been proposed to find the possible optimum threshold needed to obtain a mineral map using spectral mapping methods. Relative Spectral Discrimination Power (RSDPW) and Confusion Matrix (CM) have been used to evaluate the performance of SAM, SID, and SIDSAMtan. The RSDPW and CM illustrate that SIDSAMtan benefits from the unique characteristics of SAM and SID to achieve better discrimination capability. The Overall Accuracy (OA) and kappa coefficient (\u04a1) of SAM, SID, and SIDSAMtan were computed using 900 random validation points, yielding 90% (OA) and 0.88 (\u04a1), 91.4% and 0.90, and 94.4% and 0.93, respectively. The obtained mineral map demonstrates that the northern portion of the area mainly consists of muscovite, whereas the southern part is marked by chlorite, goethite, muscovite and kaolinite, indicating the propylitic alteration. Most of these minerals are associated with altered metavolcanic rocks and migmatite.", "which yields ?", " Spectral Information Divergence (SID)", NaN, NaN], ["Advanced techniques using high resolution hyperspectral remote sensing data have recently evolved as an emerging tool with potential to aid mineral exploration. In this study, pertinently, five mosaicked scenes of Airborne Visible InfraRed Imaging Spectrometer-Next Generation (AVIRIS-NG) hyperspectral data of southeastern parts of the Aravalli Fold belt in Jahazpur area, Rajasthan, were processed. The exposed Proterozoic rocks in this area are of immense economic and scientific interest because of the richness of poly-metallic mineral resources and their unique metallogenesis. Analysis of high resolution multispectral satellite images reveals that there are many prominent lineaments which acted as potential conduits of hydrothermal fluid emanation, some of which resulted in altering the country rock. This study takes cues from studying those altered minerals to enrich our knowledge base on mineralized zones. In this imaging spectroscopic study we have identified different hydrothermally altered minerals consisting of hydroxyl, carbonate and iron-bearing species. Spectral signatures (image based) of minerals such as kaosmec, talc, kaolinite, dolomite, and montmorillonite were derived in the SWIR (short wave infrared) region, while iron-bearing minerals such as goethite and limonite were identified in the VNIR (visible and near infrared) region of the electromagnetic spectrum. Validation of the target minerals was done by subsequent ground truthing and X-ray diffractogram (XRD) analysis. The altered end members were further mapped by Spectral Angle Mapper (SAM) and Adaptive Coherence Estimator (ACE) techniques to detect target minerals. Accuracy assessment was reported to be 86.82% and 77.75% for SAM and ACE, respectively.
This study confirms that the AVIRIS-NG hyperspectral data provides a better solution for the identification of endmember minerals.", "which yields ?", "Accuracy Assessment", 1655.0, 1674.0], ["ABSTRACT The present study exploits high-resolution hyperspectral imagery acquired by the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) sensor from the Hutti-Maski gold deposit area, India, to map hydrothermal alteration minerals. The study area is a volcanic-dominated late Archean greenstone belt that hosts major gold mineralization in the Eastern Dharwar Craton of southern India. The study encompasses pre-processing, spectral and spatial image reduction using Minimum Noise Fraction (MNF) and Fast Pixel Purity Index (FPPI), followed by endmember extraction using the n-dimensional visualizer and the United States Geological Survey (USGS) mineral spectral library. Image-derived endmembers such as goethite, chlorite, chlorite at the mine site (chlorite mixed with mined materials), kaolinite, and muscovite were subsequently used in spectral mapping methods such as Spectral Angle Mapper (SAM), Spectral Information Divergence (SID) and its hybrid, i.e. SIDSAMtan. A spectral similarity matrix of the target and non-target-based method has been proposed to find the possible optimum threshold needed to obtain a mineral map using spectral mapping methods. Relative Spectral Discrimination Power (RSDPW) and Confusion Matrix (CM) have been used to evaluate the performance of SAM, SID, and SIDSAMtan. The RSDPW and CM illustrate that SIDSAMtan benefits from the unique characteristics of SAM and SID to achieve better discrimination capability. The Overall Accuracy (OA) and kappa coefficient (\u04a1) of SAM, SID, and SIDSAMtan were computed using 900 random validation points, yielding 90% (OA) and 0.88 (\u04a1), 91.4% and 0.90, and 94.4% and 0.93, respectively. The obtained mineral map demonstrates that the northern portion of the area mainly consists of muscovite, whereas the southern part is marked by chlorite, goethite, muscovite and kaolinite, indicating the propylitic alteration. Most of these minerals are associated with altered metavolcanic rocks and migmatite.", "which yields ?", "Fast Pixel Purity Index (FPPI)", NaN, NaN], ["ABSTRACT The present study exploits high-resolution hyperspectral imagery acquired by the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) sensor from the Hutti-Maski gold deposit area, India, to map hydrothermal alteration minerals. The study area is a volcanic-dominated late Archean greenstone belt that hosts major gold mineralization in the Eastern Dharwar Craton of southern India. The study encompasses pre-processing, spectral and spatial image reduction using Minimum Noise Fraction (MNF) and Fast Pixel Purity Index (FPPI), followed by endmember extraction using the n-dimensional visualizer and the United States Geological Survey (USGS) mineral spectral library. Image-derived endmembers such as goethite, chlorite, chlorite at the mine site (chlorite mixed with mined materials), kaolinite, and muscovite were subsequently used in spectral mapping methods such as Spectral Angle Mapper (SAM), Spectral Information Divergence (SID) and its hybrid, i.e. SIDSAMtan. A spectral similarity matrix of the target and non-target-based method has been proposed to find the possible optimum threshold needed to obtain a mineral map using spectral mapping methods.
Relative Spectral Discrimination Power (RSDPW) and Confusion Matrix (CM) have been used to evaluate the performance of SAM, SID, and SIDSAMtan. The RSDPW and CM illustrate that SIDSAMtan benefits from the unique characteristics of SAM and SID to achieve better discrimination capability. The Overall Accuracy (OA) and kappa coefficient (\u04a1) of SAM, SID, and SIDSAMtan were computed using 900 random validation points, yielding 90% (OA) and 0.88 (\u04a1), 91.4% and 0.90, and 94.4% and 0.93, respectively. The obtained mineral map demonstrates that the northern portion of the area mainly consists of muscovite, whereas the southern part is marked by chlorite, goethite, muscovite and kaolinite, indicating the propylitic alteration. Most of these minerals are associated with altered metavolcanic rocks and migmatite.", "which yields ?", "Relative Spectral Discrimination Power (RSDPW)", NaN, NaN], ["ABSTRACT The present study exploits high-resolution hyperspectral imagery acquired by the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) sensor from the Hutti-Maski gold deposit area, India, to map hydrothermal alteration minerals. The study area is a volcanic-dominated late Archean greenstone belt that hosts major gold mineralization in the Eastern Dharwar Craton of southern India. The study encompasses pre-processing, spectral and spatial image reduction using Minimum Noise Fraction (MNF) and Fast Pixel Purity Index (FPPI), followed by endmember extraction using the n-dimensional visualizer and the United States Geological Survey (USGS) mineral spectral library. Image-derived endmembers such as goethite, chlorite, chlorite at the mine site (chlorite mixed with mined materials), kaolinite, and muscovite were subsequently used in spectral mapping methods such as Spectral Angle Mapper (SAM), Spectral Information Divergence (SID) and its hybrid, i.e. SIDSAMtan. A spectral similarity matrix of the target and non-target-based method has been proposed to find the possible optimum threshold needed to obtain a mineral map using spectral mapping methods. Relative Spectral Discrimination Power (RSDPW) and Confusion Matrix (CM) have been used to evaluate the performance of SAM, SID, and SIDSAMtan. The RSDPW and CM illustrate that SIDSAMtan benefits from the unique characteristics of SAM and SID to achieve better discrimination capability. The Overall Accuracy (OA) and kappa coefficient (\u04a1) of SAM, SID, and SIDSAMtan were computed using 900 random validation points, yielding 90% (OA) and 0.88 (\u04a1), 91.4% and 0.90, and 94.4% and 0.93, respectively. The obtained mineral map demonstrates that the northern portion of the area mainly consists of muscovite, whereas the southern part is marked by chlorite, goethite, muscovite and kaolinite, indicating the propylitic alteration. Most of these minerals are associated with altered metavolcanic rocks and migmatite.", "which yields ?", "Spectral Angle Mapper (SAM)", NaN, NaN], ["Advanced techniques using high resolution hyperspectral remote sensing data have recently evolved as an emerging tool with potential to aid mineral exploration. In this study, pertinently, five mosaicked scenes of Airborne Visible InfraRed Imaging Spectrometer-Next Generation (AVIRIS-NG) hyperspectral data of southeastern parts of the Aravalli Fold belt in Jahazpur area, Rajasthan, were processed.
The exposed Proterozoic rocks in this area are of immense economic and scientific interest because of the richness of poly-metallic mineral resources and their unique metallogenesis. Analysis of high resolution multispectral satellite images reveals that there are many prominent lineaments which acted as potential conduits of hydrothermal fluid emanation, some of which resulted in altering the country rock. This study takes cues from studying those altered minerals to enrich our knowledge base on mineralized zones. In this imaging spectroscopic study we have identified different hydrothermally altered minerals consisting of hydroxyl, carbonate and iron-bearing species. Spectral signatures (image based) of minerals such as kaosmec, talc, kaolinite, dolomite, and montmorillonite were derived in the SWIR (short wave infrared) region, while iron-bearing minerals such as goethite and limonite were identified in the VNIR (visible and near infrared) region of the electromagnetic spectrum. Validation of the target minerals was done by subsequent ground truthing and X-ray diffractogram (XRD) analysis. The altered end members were further mapped by Spectral Angle Mapper (SAM) and Adaptive Coherence Estimator (ACE) techniques to detect target minerals. Accuracy assessment was reported to be 86.82% and 77.75% for SAM and ACE, respectively. This study confirms that the AVIRIS-NG hyperspectral data provides a better solution for the identification of endmember minerals.", "which yields ?", "Spectral Angle Mapper (SAM)", NaN, NaN], ["ABSTRACT The present study exploits high-resolution hyperspectral imagery acquired by the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) sensor from the Hutti-Maski gold deposit area, India, to map hydrothermal alteration minerals. The study area is a volcanic-dominated late Archean greenstone belt that hosts major gold mineralization in the Eastern Dharwar Craton of southern India. The study encompasses pre-processing, spectral and spatial image reduction using Minimum Noise Fraction (MNF) and Fast Pixel Purity Index (FPPI), followed by endmember extraction using the n-dimensional visualizer and the United States Geological Survey (USGS) mineral spectral library. Image-derived endmembers such as goethite, chlorite, chlorite at the mine site (chlorite mixed with mined materials), kaolinite, and muscovite were subsequently used in spectral mapping methods such as Spectral Angle Mapper (SAM), Spectral Information Divergence (SID) and its hybrid, i.e. SIDSAMtan. A spectral similarity matrix of the target and non-target-based method has been proposed to find the possible optimum threshold needed to obtain a mineral map using spectral mapping methods. Relative Spectral Discrimination Power (RSDPW) and Confusion Matrix (CM) have been used to evaluate the performance of SAM, SID, and SIDSAMtan. The RSDPW and CM illustrate that SIDSAMtan benefits from the unique characteristics of SAM and SID to achieve better discrimination capability. The Overall Accuracy (OA) and kappa coefficient (\u04a1) of SAM, SID, and SIDSAMtan were computed using 900 random validation points, yielding 90% (OA) and 0.88 (\u04a1), 91.4% and 0.90, and 94.4% and 0.93, respectively. The obtained mineral map demonstrates that the northern portion of the area mainly consists of muscovite, whereas the southern part is marked by chlorite, goethite, muscovite and kaolinite, indicating the propylitic alteration.
Most of these minerals are associated with altered metavolcanic rocks and migmatite.", "which yields ?", "Spectral Similarity Matrix", 998.0, 1024.0], ["Wetlands mapping using multispectral imagery from Landsat multispectral scanner (MSS) and thematic mapper (TM) and Syst\u00e8me pour l'observation de la Terre (SPOT) does not in general provide high classification accuracies because of poor spectral and spatial resolutions. This study tests the feasibility of using high-resolution hyperspectral imagery to map wetlands in Iowa with two nontraditional classification techniques: the spectral angle mapper (SAM) method and a new nonparametric object-oriented (OO) classification. The software programs used were ENVI and eCognition. Accuracies of these classified images were assessed by using the information collected through a field survey with a global positioning system and high-resolution color infrared images. Wetlands were identified more accurately with the OO method (overall accuracy 92.3%) than with SAM (63.53%). This paper also discusses the limitations of these classification techniques for wetlands, as well as discussing future directions for study.", "which Softwares ?", " eCognition", 565.0, 576.0], ["ABSTRACT Accurate and up-to-date built-up area mapping is of great importance to the science community, decision-makers, and society. Therefore, satellite-based built-up area (BUA) extraction at medium resolution with supervised classification has been widely carried out. However, the spectral confusion between BUA and bare land (BL) is the primary hindering factor for accurate BUA mapping over large regions. Here we propose a new methodology for efficient BUA extraction using multi-sensor data under the Google Earth Engine cloud computing platform. The proposed method mainly employs intra-annual satellite imagery for water and vegetation masks, and a random-forest machine learning classifier combined with auxiliary data to discriminate between BUA and BL. First, a vegetation mask and water mask are generated using NDVI (normalized difference vegetation index) max in vegetation growth periods and the annual water-occurrence frequency. Second, to accurately extract BUA from unmasked pixels, consisting of BUA and BL, random-forest-based classification is conducted using multi-sensor features, including temperature, night-time light, backscattering, topography, optical spectra, and NDVI time-series metrics. This approach is applied in Zhejiang Province, China, and an overall accuracy of 92.5% is obtained, which is 3.4% higher than classification with spectral data only. For large-scale BUA mapping, it is feasible to enhance the performance of BUA mapping with multi-temporal and multi-sensor data, which takes full advantage of datasets available in Google Earth Engine.", "which Softwares ?", "Google Earth Engine", 511.0, 530.0], ["In this study, we test the potential of two different classification algorithms, namely the spectral angle mapper (SAM) and an object-based classifier, for mapping the land use/cover characteristics using Hyperion imagery. We chose a study region that represents a typical Mediterranean setting in terms of landscape structure, composition and heterogeneous land cover classes. Accuracy assessment of the land cover classes was performed based on the error matrix statistics. Validation points were derived from visual interpretation of multispectral high resolution QuickBird-2 satellite imagery.
Results from both classifiers yielded more than 70% classification accuracy. However, the object-based classification clearly outperformed SAM by 7.91% in overall accuracy (OA) and achieved a relatively high kappa coefficient. Similar results were observed in the classification of the individual classes. Our results highlight the potential of hyperspectral remote sensing data as well as the object-based classification approach for mapping heterogeneous land use/cover in a typical Mediterranean setting.", "which Datasets ?", "Quickbird-2", 565.0, 576.0], ["Hyperspectral technology is useful for urban studies due to its capability in examining detailed spectral characteristics of urban materials. This study aims to develop a spectral library of urban materials and demonstrate its application in remote sensing analysis of an urban environment. Field measurements were conducted by using an ASD FieldSpec 3 Spectroradiometer with a wavelength range from 350 to 2500 nm. The spectral reflectance curves of urban materials were interpreted and analyzed. A collection of 22 spectra was compiled into a spectral library. The spectral library was put to practical use by utilizing the reference spectra for WorldView-2 satellite image classification, which demonstrates the usability of such infrastructure to facilitate further progress of remote sensing applications in Malaysia.", "which Sensors ?", "WorldView-2", 649.0, 660.0], ["Hyperspectral technology is useful for urban studies due to its capability in examining detailed spectral characteristics of urban materials. This study aims to develop a spectral library of urban materials and demonstrate its application in remote sensing analysis of an urban environment. Field measurements were conducted by using an ASD FieldSpec 3 Spectroradiometer with a wavelength range from 350 to 2500 nm. The spectral reflectance curves of urban materials were interpreted and analyzed. A collection of 22 spectra was compiled into a spectral library. The spectral library was put to practical use by utilizing the reference spectra for WorldView-2 satellite image classification, which demonstrates the usability of such infrastructure to facilitate further progress of remote sensing applications in Malaysia.", "which Sensors ?", "ASD FieldSpec 3 Spectroradiometer", 334.0, 367.0], ["Biodiversity research in tropical ecosystems, popularized as the most biodiverse habitats on Earth, often neglects invertebrates, yet invertebrates represent the bulk of local species richness. Insect communities in particular remain strongly impeded by both Linnaean and Wallacean shortfalls, and identifying species often remains a formidable challenge inhibiting the use of these organisms as indicators for ecological and conservation studies. Here we use DNA barcoding as an alternative to the traditional taxonomic approach for characterizing and comparing the diversity of moth communities in two different ecosystems in Gabon. Though sampling remains very incomplete, as evidenced by the high proportion (59%) of species represented by singletons, our results reveal an outstanding diversity. With about 3500 specimens sequenced and representing 1385 BINs (Barcode Index Numbers, used as a proxy for species) in 23 families, the diversity of moths in the two sites sampled is higher than the current number of species listed for the entire country, highlighting the huge gap in biodiversity knowledge for this country.
Both seasonal and spatial turnovers are strikingly high (18.3% of BINs shared between seasons, and 13.3% between sites) and draw attention to the need to account for these when running regional surveys. Our results also highlight the richness and singularity of savannah environments and emphasize the status of Central African ecosystems as hotspots of biodiversity.", "which higher number estimated species (Method) ?", "BINs", 857.0, 861.0], ["This study reports the assembly of a DNA barcode reference library for species in the lepidopteran superfamily Noctuoidea from Canada and the USA. Based on the analysis of 69,378 specimens, the library provides coverage for 97.3% of the noctuoid fauna (3565 of 3664 species). In addition to verifying the strong performance of DNA barcodes in the discrimination of these species, the results indicate close congruence between the number of species analyzed (3565) and the number of sequence clusters (3816) recognized by the Barcode Index Number (BIN) system. Distributional patterns across 12 North American ecoregions are examined for the 3251 species that have GPS data, while BIN analysis is used to quantify overlap between the noctuoid faunas of North America and other zoogeographic regions. This analysis reveals that 90% of North American noctuoids are endemic and that just 7.5% and 1.8% of BINs are shared with the Neotropics and with the Palearctic, respectively. One third (29) of the latter species are recent introductions and, as expected, they possess low intraspecific divergences.", "which higher number estimated species (Method) ?", "BINs", 900.0, 904.0], ["Background With about 1,000 species in the Neotropics, the Eumaeini (Theclinae) are one of the most diverse butterfly tribes. Correct morphology-based identifications are challenging in many genera due to relatively small interspecific differences in wing patterns. Geographic infraspecific variation is sometimes more substantial than variation between species. In this paper we present a large DNA barcode dataset of South American Lycaenidae. We analyze how well DNA barcode BINs match morphologically delimited species. Methods We compare morphology-based species identifications with the clustering of molecular operational taxonomic units (MOTUs) delimited by the RESL algorithm in BOLD, which assigns Barcode Index Numbers (BINs). We examine intra- and interspecific divergences for genera represented by at least four morphospecies. We discuss the existence of local barcode gaps in a genus-by-genus analysis. We also note differences in the percentage of species with barcode gaps in groups of lowland and high mountain genera. Results We identified 2,213 specimens and obtained 1,839 sequences of 512 species in 90 genera. Overall, the mean intraspecific divergence value of CO1 sequences was 1.20%, while the mean interspecific divergence between nearest congeneric neighbors was 4.89%, demonstrating the presence of a barcode gap. However, the gap seemed to disappear from the entire set when comparing the maximum intraspecific distance (8.40%) with the minimum interspecific distance (0.40%). Clear barcode gaps are present in many genera but absent in others. From the set of specimens that yielded COI fragment lengths of at least 650 bp, 75% of the a priori morphology-based identifications were unambiguously assigned to a single Barcode Index Number (BIN). However, after a taxonomic a posteriori review, the percentage of matched identifications rose to 85%.
BIN splitting was observed for 17% of the species and BIN sharing for 9%. We found that genera that contain primarily lowland species show higher percentages of local barcode gaps and congruence between BINs and morphology than genera that contain exclusively high montane species. The divergence values to the nearest neighbors were significantly lower in high Andean species, while the intra-specific divergence values were significantly lower in the lowland species. These results raise questions regarding the causes of observed low inter- and high intraspecific genetic variation. We discuss incomplete lineage sorting and hybridization as the most likely causes of this phenomenon, as the montane species concerned are relatively young and hybridization is probable. The release of our data set represents an essential baseline for a reference library for biological assessment studies of butterflies in megadiverse countries using modern high-throughput technologies and highlights the necessity of taxonomic revisions for various genera combining both molecular and morphological data.", "which higher number estimated species (Method) ?", "BINs", 479.0, 483.0], ["This study summarizes results of a DNA barcoding campaign on German Diptera, involving analysis of 45,040 specimens. The resultant DNA barcode library includes records for 2,453 named species comprising a total of 5,200 barcode index numbers (BINs), including 2,700 COI haplotype clusters without species\u2010level assignment, so-called \u201cdark taxa.\u201d Overall, 88 out of 117 families (75%) recorded from Germany were covered, representing more than 50% of the 9,544 known species of German Diptera. Until now, most of these families, especially the most diverse, have been taxonomically inaccessible. By contrast, within a few years this study provided an intermediate taxonomic system for half of the German Dipteran fauna, which will provide a useful foundation for subsequent detailed, integrative taxonomic studies. Using DNA extracts derived from bulk collections made by Malaise traps, we further demonstrate that species delineation using BINs and operational taxonomic units (OTUs) constitutes an effective method for biodiversity studies using DNA metabarcoding. As the reference libraries continue to grow, and gaps in the species catalogue are filled, BIN lists assembled by metabarcoding will provide greater taxonomic resolution. The present study has three main goals: (a) to provide a DNA barcode library for 5,200 BINs of Diptera; (b) to demonstrate, based on the example of bulk extractions from a Malaise trap experiment, that DNA barcode clusters, labelled with globally unique identifiers (such as OTUs and/or BINs), provide a pragmatic, accurate solution to the \u201ctaxonomic impediment\u201d; and (c) to demonstrate that interim names based on BINs and OTUs obtained through metabarcoding provide an effective method for studies on species\u2010rich groups that are usually neglected in biodiversity research projects because of their unresolved taxonomy.", "which higher number estimated species (Method) ?", "BINs", 243.0, 247.0], ["For the first time, a nearly complete barcode library for European Gelechiidae is provided. DNA barcode sequences (COI gene - cytochrome c oxidase 1) from 751 out of 865 nominal species, belonging to 105 genera, were successfully recovered.
A total of 741 species represented by specimens with sequences \u2265 500bp and an additional ten species represented by specimens with shorter sequences were used to produce 53 NJ trees. Intraspecific barcode divergence averaged only 0.54%, whereas the distance to the Nearest-Neighbour species averaged 5.58%. Of these, 710 species possessed unique DNA barcodes, but 31 species could not be reliably discriminated because of barcode sharing or partial barcode overlap. Species discrimination based on the Barcode Index Number (BIN) system was successful for 668 out of 723 species, which clustered into a minimum of one and a maximum of 22 unique BINs. Fifty-five species shared a BIN with up to four species, and identification from DNA barcode data is uncertain. Finally, 65 clusters with a unique BIN remained unidentified to species level. These putative taxa, as well as 114 nominal species with more than one BIN, suggest the presence of considerable cryptic diversity, cases which should be examined in future revisionary studies.", "which higher number estimated species (Method) ?", "BINs", 861.0, 865.0], ["Although members of the crambid subfamily Pyraustinae are frequently important crop pests, their identification is often difficult because many species lack conspicuous diagnostic morphological characters. DNA barcoding employs sequence diversity in a short standardized gene region to facilitate specimen identifications and species discovery. This study provides a DNA barcode reference library for North American pyraustines based upon the analysis of 1589 sequences recovered from 137 nominal species, 87% of the fauna. Data from 125 species were barcode compliant (>500bp, <1% n), and 99 of these taxa formed a distinct cluster that was assigned to a single BIN. The other 26 species were assigned to 56 BINs, reflecting frequent cases of deep intraspecific sequence divergence and a few instances of barcode sharing, creating a total of 155 BINs. Two systems for OTU designation, ABGD and BIN, were examined to check the correspondence between current taxonomy and sequence clusters. The BIN system performed better than ABGD in delimiting closely related species, while OTU counts with ABGD were influenced by the value employed for relative gap width. Different species with low or no interspecific divergence may represent cases of unrecognized synonymy, whereas those with high intraspecific divergence require further taxonomic scrutiny as they may involve cryptic diversity. The barcode library developed in this study will also help to advance understanding of relationships among species of Pyraustinae.", "which higher number estimated species (Method) ?", "BINs", 709.0, 713.0], ["Abstract The DNA barcode reference library for Lepidoptera holds much promise as a tool for taxonomic research and for providing the reliable identifications needed for conservation assessment programs. We gathered sequences for the barcode region of the mitochondrial cytochrome c oxidase subunit I gene from 160 of the 176 nominal species of Erebidae moths (Insecta: Lepidoptera) known from the Iberian Peninsula. These results arise from a research project constructing a DNA barcode library for the insect species of Spain. New records for 271 specimens (122 species) are coupled with preexisting data for 38 species from the Iberian fauna. Mean interspecific distance was 12.1%, while the mean nearest neighbour divergence was 6.4%.
All 160 species possessed diagnostic barcode sequences, but one pair of congeneric taxa (Eublemma rosea and Eublemma rietzi) was assigned to the same BIN. As well, intraspecific sequence divergences higher than 1.5% were detected in four species, which likely represent species complexes. This study reinforces the effectiveness of DNA barcoding as a tool for monitoring biodiversity in particular geographical areas and the strong correspondence between sequence clusters delineated by BINs and species recognized through detailed taxonomic analysis.", "which lower number estimated species (Method) ?", "BINs", 1231.0, 1235.0], ["This study provides a first, comprehensive, diagnostic use of DNA barcodes for the Canadian fauna of noctuoids or \u201cowlet\u201d moths (Lepidoptera: Noctuoidea) based on vouchered records for 1,541 species (99.1% species coverage), and more than 30,000 sequences. When viewed from a Canada-wide perspective, DNA barcodes unambiguously discriminate 90% of the noctuoid species recognized through prior taxonomic study, and resolution reaches 95.6% when considered at a provincial scale. Barcode sharing is concentrated in certain lineages, with 54% of the cases involving 1.8% of the genera. Deep intraspecific divergence exists in 7.7% of the species, but further studies are required to clarify whether these cases reflect an overlooked species complex or phylogeographic variation in a single species. Non-native species possess higher Nearest-Neighbour (NN) distances than native taxa, whereas generalist feeders have lower NN distances than those with more specialized feeding habits. We found high concordance between taxonomic names and sequence clusters delineated by the Barcode Index Number (BIN) system, with 1,082 species (70%) assigned to a unique BIN. The cases of discordance involve both BIN mergers and BIN splits, with 38 species falling into both categories, most likely reflecting bidirectional introgression. One fifth of the species are involved in a BIN merger, reflecting the presence of 158 species sharing their barcode sequence with at least one other taxon, and 189 species with low, but diagnostic COI divergence. A very few cases (13) involved species whose members fell into both categories. Most of the remaining 140 species show a split into two or three BINs per species, while Virbia ferruginosa was divided into 16. The overall results confirm that DNA barcodes are effective for the identification of Canadian noctuoids. This study also affirms that BINs are a strong proxy for species, providing a pathway for a rapid, accurate estimation of animal diversity.", "which lower number estimated species (Method) ?", "BINs", 1676.0, 1680.0], ["Although members of the crambid subfamily Pyraustinae are frequently important crop pests, their identification is often difficult because many species lack conspicuous diagnostic morphological characters. DNA barcoding employs sequence diversity in a short standardized gene region to facilitate specimen identifications and species discovery. This study provides a DNA barcode reference library for North American pyraustines based upon the analysis of 1589 sequences recovered from 137 nominal species, 87% of the fauna. Data from 125 species were barcode compliant (>500bp, <1% n), and 99 of these taxa formed a distinct cluster that was assigned to a single BIN.
The other 26 species were assigned to 56 BINs, reflecting frequent cases of deep intraspecific sequence divergence and a few instances of barcode sharing, creating a total of 155 BINs. Two systems for OTU designation, ABGD and BIN, were examined to check the correspondence between current taxonomy and sequence clusters. The BIN system performed better than ABGD in delimiting closely related species, while OTU counts with ABGD were influenced by the value employed for relative gap width. Different species with low or no interspecific divergence may represent cases of unrecognized synonymy, whereas those with high intraspecific divergence require further taxonomic scrutiny as they may involve cryptic diversity. The barcode library developed in this study will also help to advance understanding of relationships among species of Pyraustinae.", "which lower number estimated species (Method) ?", "current taxonomy", 950.0, 966.0], ["Biodiversity research in tropical ecosystems, popularized as the most biodiverse habitats on Earth, often neglects invertebrates, yet invertebrates represent the bulk of local species richness. Insect communities in particular remain strongly impeded by both Linnaean and Wallacean shortfalls, and identifying species often remains a formidable challenge inhibiting the use of these organisms as indicators for ecological and conservation studies. Here we use DNA barcoding as an alternative to the traditional taxonomic approach for characterizing and comparing the diversity of moth communities in two different ecosystems in Gabon. Though sampling remains very incomplete, as evidenced by the high proportion (59%) of species represented by singletons, our results reveal an outstanding diversity. With about 3500 specimens sequenced and representing 1385 BINs (Barcode Index Numbers, used as a proxy for species) in 23 families, the diversity of moths in the two sites sampled is higher than the current number of species listed for the entire country, highlighting the huge gap in biodiversity knowledge for this country. Both seasonal and spatial turnovers are strikingly high (18.3% of BINs shared between seasons, and 13.3% between sites) and draw attention to the need to account for these when running regional surveys. Our results also highlight the richness and singularity of savannah environments and emphasize the status of Central African ecosystems as hotspots of biodiversity.", "which No. of estimated species (Method) ?", "BINs", 857.0, 861.0], ["This study reports the assembly of a DNA barcode reference library for species in the lepidopteran superfamily Noctuoidea from Canada and the USA. Based on the analysis of 69,378 specimens, the library provides coverage for 97.3% of the noctuoid fauna (3565 of 3664 species). In addition to verifying the strong performance of DNA barcodes in the discrimination of these species, the results indicate close congruence between the number of species analyzed (3565) and the number of sequence clusters (3816) recognized by the Barcode Index Number (BIN) system. Distributional patterns across 12 North American ecoregions are examined for the 3251 species that have GPS data, while BIN analysis is used to quantify overlap between the noctuoid faunas of North America and other zoogeographic regions. This analysis reveals that 90% of North American noctuoids are endemic and that just 7.5% and 1.8% of BINs are shared with the Neotropics and with the Palearctic, respectively.
One third (29) of the latter species are recent introductions and, as expected, they possess low intraspecific divergences.", "which No. of estimated species (Method) ?", "BINs", 900.0, 904.0], ["Background With about 1,000 species in the Neotropics, the Eumaeini (Theclinae) are one of the most diverse butterfly tribes. Correct morphology-based identifications are challenging in many genera due to relatively small interspecific differences in wing patterns. Geographic infraspecific variation is sometimes more substantial than variation between species. In this paper we present a large DNA barcode dataset of South American Lycaenidae. We analyze how well DNA barcode BINs match morphologically delimited species. Methods We compare morphology-based species identifications with the clustering of molecular operational taxonomic units (MOTUs) delimited by the RESL algorithm in BOLD, which assigns Barcode Index Numbers (BINs). We examine intra- and interspecific divergences for genera represented by at least four morphospecies. We discuss the existence of local barcode gaps in a genus-by-genus analysis. We also note differences in the percentage of species with barcode gaps in groups of lowland and high mountain genera. Results We identified 2,213 specimens and obtained 1,839 sequences of 512 species in 90 genera. Overall, the mean intraspecific divergence value of CO1 sequences was 1.20%, while the mean interspecific divergence between nearest congeneric neighbors was 4.89%, demonstrating the presence of a barcode gap. However, the gap seemed to disappear from the entire set when comparing the maximum intraspecific distance (8.40%) with the minimum interspecific distance (0.40%). Clear barcode gaps are present in many genera but absent in others. From the set of specimens that yielded COI fragment lengths of at least 650 bp, 75% of the a priori morphology-based identifications were unambiguously assigned to a single Barcode Index Number (BIN). However, after a taxonomic a posteriori review, the percentage of matched identifications rose to 85%. BIN splitting was observed for 17% of the species and BIN sharing for 9%. We found that genera that contain primarily lowland species show higher percentages of local barcode gaps and congruence between BINs and morphology than genera that contain exclusively high montane species. The divergence values to the nearest neighbors were significantly lower in high Andean species, while the intra-specific divergence values were significantly lower in the lowland species. These results raise questions regarding the causes of observed low inter- and high intraspecific genetic variation. We discuss incomplete lineage sorting and hybridization as the most likely causes of this phenomenon, as the montane species concerned are relatively young and hybridization is probable. The release of our data set represents an essential baseline for a reference library for biological assessment studies of butterflies in megadiverse countries using modern high-throughput technologies and highlights the necessity of taxonomic revisions for various genera combining both molecular and morphological data.", "which No. of estimated species (Method) ?", "BINs", 479.0, 483.0], ["Abstract The DNA barcode reference library for Lepidoptera holds much promise as a tool for taxonomic research and for providing the reliable identifications needed for conservation assessment programs.
We gathered sequences for the barcode region of the mitochondrial cytochrome c oxidase subunit I gene from 160 of the 176 nominal species of Erebidae moths (Insecta: Lepidoptera) known from the Iberian Peninsula. These results arise from a research project constructing a DNA barcode library for the insect species of Spain. New records for 271 specimens (122 species) are coupled with preexisting data for 38 species from the Iberian fauna. Mean interspecific distance was 12.1%, while the mean nearest neighbour divergence was 6.4%. All 160 species possessed diagnostic barcode sequences, but one pair of congeneric taxa (Eublemma rosea and Eublemma rietzi) was assigned to the same BIN. As well, intraspecific sequence divergences higher than 1.5% were detected in four species, which likely represent species complexes. This study reinforces the effectiveness of DNA barcoding as a tool for monitoring biodiversity in particular geographical areas and the strong correspondence between sequence clusters delineated by BINs and species recognized through detailed taxonomic analysis.", "which No. of estimated species (Method) ?", "BINs", 1231.0, 1235.0], ["Although members of the crambid subfamily Pyraustinae are frequently important crop pests, their identification is often difficult because many species lack conspicuous diagnostic morphological characters. DNA barcoding employs sequence diversity in a short standardized gene region to facilitate specimen identifications and species discovery. This study provides a DNA barcode reference library for North American pyraustines based upon the analysis of 1589 sequences recovered from 137 nominal species, 87% of the fauna. Data from 125 species were barcode compliant (>500bp, <1% n), and 99 of these taxa formed a distinct cluster that was assigned to a single BIN. The other 26 species were assigned to 56 BINs, reflecting frequent cases of deep intraspecific sequence divergence and a few instances of barcode sharing, creating a total of 155 BINs. Two systems for OTU designation, ABGD and BIN, were examined to check the correspondence between current taxonomy and sequence clusters. The BIN system performed better than ABGD in delimiting closely related species, while OTU counts with ABGD were influenced by the value employed for relative gap width. Different species with low or no interspecific divergence may represent cases of unrecognized synonymy, whereas those with high intraspecific divergence require further taxonomic scrutiny as they may involve cryptic diversity. The barcode library developed in this study will also help to advance understanding of relationships among species of Pyraustinae.", "which No. of estimated species (Method) ?", "BINs", 709.0, 713.0], ["We compared community composition, density, and species richness of herbivorous insects on the introduced plant Solidago altissima L. (Asteraceae) and the related native species Solidago virgaurea L. in Japan. We found large differences in community composition on the two Solidago species. Five hemipteran sap feeders were found only on S. altissima. Two of them, the aphid Uroleucon nigrotuberculatum Olive (Hemiptera: Aphididae) and the scale insect Parasaissetia nigra Nietner (Hemiptera: Coccidae), were exotic species, accounting for 62% of the total individuals on S. altissima. These exotic sap feeders mostly determined the difference in community composition on the two plant species. In contrast, the herbivore community on S.
virgaurea consisted predominantly of five native insects: two lepidopteran leaf chewers and three dipteran leaf miners. Overall species richness did not differ between the plants because the increased species richness of sap feeders was offset by the decreased richness of leaf chewers and leaf miners on S. altissima. The overall density of herbivorous insects was higher on S. altissima than on S. virgaurea, because of the high density of the two exotic sap feeding species on S. altissima. We discuss the importance of analyzing community composition in terms of feeding guilds of insect herbivores for understanding how communities of insect herbivores are organized on introduced plants in novel habitats.", "which Sub-hypothesis ?", "ANI", NaN, NaN], ["Nonnative, invasive plant species often increase in growth, abundance, or habitat distribution in their introduced ranges. The enemy-release hypothesis, proposed to account for these changes, posits that herbivores and pathogens (natural enemies) limit growth or survival of plants in native areas, that natural enemies have less impact in the introduced than in the native range, and that the release from natural-enemy regulation in areas of introduction accounts in part for observed changes in plant abundance. We tested experimentally the enemy-release hypothesis with the invasive neotropical shrub Clidemia hirta (L.) D. Don (Melastomataceae). Clidemia hirta does not occur in forest in its native range but is a vigorous invader of tropical forest in its introduced range. Therefore, we tested the specific prediction that release from natural enemies has contributed to its expanded habitat distribution. We planted C. hirta into understory and open habitats where it is native (Costa Rica) and where it has been introduced (Hawaii) and applied pesticides to examine the effects of fungal pathogen and insect herbivore exclusion. In understory sites in Costa Rica, C. hirta survival increased by 12% if sprayed with insecticide, 19% with fungicide, and 41% with both insecticide and fungicide compared to control plants sprayed only with water. Exclusion of natural enemies had no effect on survival in open sites in Costa Rica or in either habitat in Hawaii. Fungicide application promoted relative growth rates of plants that survived to the end of the experiment in both habitats of Costa Rica but not in Hawaii, suggesting that fungal pathogens only limit growth of C. hirta where it is native. Galls, stem borers, weevils, and leaf rollers were prevalent in Costa Rica but absent in Hawaii. In addition, the standing percentage of leaf area missing on plants in the control (water only) treatment was five times greater on plants in Costa Rica than in Hawaii and did not differ between habitats. The results from this study suggest that significant effects of herbivores and fungal pathogens may be limited to particular habitats. For Clidemia hirta, its absence from forest understory in its native range likely results in part from the strong pressures of natural enemies. Its invasion into Hawaiian forests is apparently aided by a release from these herbivores and pathogens.", "which Sub-hypothesis ?", "HAD", 1385.0, 1388.0], ["It is commonly assumed that invasive plants grow more vigorously in their introduced than in their native range, which is then attributed to release from natural enemies or to microevolutionary changes, or both. However, few studies have tested this assumption by comparing the performance of invasive species in their native vs. 
introduced ranges. Here, we studied abundance, growth, reproduction, and herbivory in 10 native Chinese and 10 invasive German populations of the invasive shrub Buddleja davidii (Scrophulariaceae; butterfly bush). We found strong evidence for increased plant vigour in the introduced range: plants in invasive populations were significantly taller and had thicker stems, larger inflorescences, and heavier seeds than plants in native populations. These differences in plant performance could not be explained by a more benign climate in the introduced range. Since leaf herbivory was substantially reduced in invasive populations, our data rather suggest that escape from natural enemies, associated with increased plant growth and reproduction, contributes to the invasion success of B. davidii in Central Europe.", "which Sub-hypothesis ?", "HAD", 682.0, 685.0], ["A central question in ecology concerns how some exotic plants that occur at low densities in their native range are able to attain much higher densities where they are introduced. This question has remained unresolved in part due to a lack of experiments that assess factors that affect the population growth or abundance of plants in both ranges. We tested two hypotheses for exotic plant success: escape from specialist insect herbivores and a greater response to disturbance in the introduced range. Within three introduced populations in Montana, USA, and three native populations in Germany, we experimentally manipulated insect herbivore pressure and created small-scale disturbances to determine how these factors affect the performance of houndstongue (Cynoglossum officinale), a widespread exotic in western North America. Herbivores reduced plant size and fecundity in the native range but had little effect on plant performance in the introduced range. Small-scale experimental disturbances enhanced seedling recruitment in both ranges, but subsequent seedling survival was more positively affected by disturbance in the introduced range. We combined these experimental results with demographic data from each population to parameterize integral projection population models to assess how enemy escape and disturbance might differentially influence C. officinale in each range. Model results suggest that escape from specialist insects would lead to only slight increases in the growth rate (lambda) of introduced populations. In contrast, the larger response to disturbance in the introduced vs. native range had much greater positive effects on lambda. These results together suggest that, at least in the regions where the experiments were performed, the differences in response to small disturbances by C. officinale contribute more to higher abundance in the introduced range compared to at home. Despite the challenges of conducting experiments on a wide biogeographic scale and the logistical constraints of adequately sampling populations within a range, this approach is a critical step forward to understanding the success of exotic plants.", "which Sub-hypothesis ?", "HAD", 900.0, 903.0], ["To shed light on the process of how exotic species become invasive, it is necessary to study them both in their native and non‐native ranges. Our intent was to measure differences in herbivory, plant growth and the impact on other species in Fallopia japonica in its native and non‐native ranges. We performed a cross‐range full descriptive field study in Japan (native range) and France (non‐native range). 
We assessed DNA ploidy levels, the presence of phytophagous enemies, the amount of leaf damage, several growth parameters and the co‐occurrence of Fallopia japonica with other plant species of herbaceous communities. Invasive Fallopia japonica plants were all octoploid, a ploidy level we did not encounter in the native range, where plants were all tetraploid. Octoploids in France harboured far fewer phytophagous enemies, suffered much lower levels of herbivory, grew larger and had a much stronger impact on plant communities than tetraploid conspecifics in the native range in Japan. Our data confirm that Fallopia japonica performs better – plant vigour and dominance in the herbaceous community – in its non‐native than its native range. Because we could not find octoploids in the native range, we cannot separate the effects of differences in ploidy from other biogeographic factors. To go further, common garden experiments would now be needed to disentangle the proper role of each factor, taking into account the ploidy levels of plants in their native and non‐native ranges. Synthesis. As the process by which invasive plants successfully invade ecosystems in their non‐native range is probably multifactorial in most cases, examining several components – plant growth, herbivory load, impact on recipient systems – of plant invasions through biogeographic comparisons is important. Our study contributes towards filling this gap in the research, and it is hoped that this method will spread in invasion ecology, making such an approach more common.", "which Sub-hypothesis ?", "HAD", 890.0, 893.0], ["The Enemies Hypothesis predicts that alien plants have a competitive advantage over native plants because they are often introduced with few herbivores or diseases. To investigate this hypothesis, we transplanted seedlings of the invasive alien tree, Sapium sebiferum (Chinese tallow tree), and an ecologically similar native tree, Celtis laevigata (hackberry), into mesic forest, floodplain forest, and coastal prairie sites in east Texas and manipulated foliar fungal diseases and insect herbivores with fungicidal and insecticidal sprays. As predicted by the Enemies Hypothesis, insect herbivores caused significantly greater damage to untreated Celtis seedlings than to untreated Sapium seedlings. However, contrary to predictions, suppression of insect herbivores caused significantly greater increases in survivorship and growth of Sapium seedlings compared to Celtis seedlings. Regressions suggested that Sapium seedlings compensate for damage in the first year but that this greatly increases the risk of mortality in subsequent years. Fungal diseases had no effects on seedling survival or growth. The Recruitment Limitation Hypothesis predicts that the local abundance of a species will depend more on local seed input than on competitive ability at that location. To investigate this hypothesis, we added seeds of Celtis and Sapium on and off of artificial soil disturbances at all three sites. Adding seeds increased the density of Celtis seedlings and sometimes Sapium seedlings, with soil disturbance only affecting density of Celtis. 
Together the results of these experiments suggest that the success of Sapium may depend on high rates of seed input into these ecosystems and high growth potential, as well as performance advantages of seedlings caused by low rates of herbivory.", "which Sub-hypothesis ?", "P AN", NaN, NaN], ["Invasive species are a threat to ecosystems worldwide, especially oceanic islands. Predicting the invasive potential of introduced species remains difficult, and only a few studies have found traits correlated to invasiveness. We produced a molecular phylogenetic dataset and an ecological trait database for the entire Azorean flora and find that the phylogenetic nearest neighbour distance (PNND), a measure of evolutionary relatedness, is significantly correlated with invasiveness. We show that introduced plant species are more likely to become invasive in the absence of closely related species in the native flora of the Azores, verifying Darwin's 'naturalization hypothesis'. In addition, we find that some ecological traits (especially life form and seed size) also have predictive power on invasive success in the Azores. Therefore, we suggest a combination of PNND with ecological trait values as a universal predictor of invasiveness that takes into account characteristics of both introduced species and receiving ecosystem.", "which Measure of species relationship ?", "PNND", 394.0, 398.0], ["Highly c-axis oriented aluminum nitride (AlN) films were successfully deposited on flexible Hastelloy tapes by middle-frequency magnetron sputtering. The microstructure and piezoelectric properties of the AlN films were investigated. The results show that the AlN films deposited directly on the bare Hastelloy substrate have a rough surface with a root mean square (RMS) roughness of 32.43 nm, and the full width at half maximum (FWHM) of the AlN (0002) peak is 12.5∘. However, the AlN films deposited on the Hastelloy substrate with Y2O3 buffer layer show a smooth surface with an RMS roughness of 5.46 nm, and the FWHM of the AlN (0002) peak is only 3.7∘. The piezoelectric coefficient d33 of the AlN films deposited on the Y2O3/Hastelloy substrate is more than three times that of the AlN films deposited on the bare Hastelloy substrate. The prepared highly c-axis oriented AlN films can be used to develop high-temperature flexible SAW sensors.", "which substrate ?", "Y2O3/Hastelloy", 714.0, 728.0], ["We report a novel synthesis of nanoparticle Pd-Cu catalysts, containing only trace amounts of Pd, for selective hydrogenation reactions. Pd-Cu nanoparticles were designed based on model single atom alloy (SAA) surfaces, in which individual, isolated Pd atoms act as sites for hydrogen uptake, dissociation, and spillover onto the surrounding Cu surface. Pd-Cu nanoparticles were prepared by addition of trace amounts of Pd (0.18 atomic (at)%) to Cu nanoparticles supported on Al2O3 by galvanic replacement (GR). The catalytic performance of the resulting materials for the partial hydrogenation of phenylacetylene was investigated at ambient temperature in a batch reactor under a head pressure of hydrogen (6.9 bar). The bimetallic Pd-Cu nanoparticles have over an order of magnitude higher activity for phenylacetylene hydrogenation when compared to their monometallic Cu counterpart, while maintaining a high selectivity to styrene over many hours at high conversion. 
Greater than 94% selectivity to styrene is observed at all times, which is a marked improvement when compared to monometallic Pd catalysts with the same Pd loading, at the same total conversion. X-ray photoelectron spectroscopy and UV-visible spectroscopy measurements confirm the complete uptake and alloying of Pd with Cu by GR. Scanning tunneling microscopy and thermal desorption spectroscopy of model SAA surfaces confirmed the feasibility of hydrogen spillover onto an otherwise inert Cu surface. These model studies addressed a wide range of Pd concentrations related to the bimetallic nanoparticles.", "which substrate ?", "phenylacetylene", 598.0, 613.0], ["Aqueous acidic ozone (O3)-containing solutions are increasingly used for silicon treatment in the photovoltaic and semiconductor industries. We studied the behavior of aqueous hydrofluoric acid (HF)-containing solutions (i.e., HF–O3, HF–H2SO4–O3, and HF–HCl–O3 mixtures) toward boron-doped solar-grade (100) silicon wafers. The solubility of O3 and etching rates at 20 °C were investigated. The mixtures were analyzed for the potential oxidizing species by UV–vis and Raman spectroscopy. Concentrations of O3 (aq), O3 (g), and Cl2 (aq) were determined by titrimetric volumetric analysis. F–, Cl–, and SO42– ion contents were determined by ion chromatography. Model experiments were performed to investigate the oxidation of H-terminated silicon surfaces by H2O–O2, H2O–O3, H2O–H2SO4–O3, and H2O–HCl–O3 mixtures. The oxidation was monitored by diffuse reflection infrared Fourier transformation (DRIFT) spectroscopy. The resulting surfaces were examined by scanning electron microscopy (SEM) and X-ray photoelectron spectrosc...", "which substrate ?", "Silicon", 73.0, 80.0], ["In this report, we demonstrate high spectral responsivity (SR) solar blind deep ultraviolet (UV) β-Ga2O3 metal-semiconductor-metal (MSM) photodetectors grown by the mist chemical-vapor deposition (Mist-CVD) method. The β-Ga2O3 thin film was grown on c-plane sapphire substrates, and the fabricated MSM PDs with Al contacts in an interdigitated geometry were found to exhibit peak SR > 150 A/W for the incident light wavelength of 254 nm at a bias of 20 V. The devices exhibited very low dark current, about 14 pA at 20 V, and showed sharp transients with a photo-to-dark current ratio > 10^5. The corresponding external quantum efficiency is over 7 × 10^4%. The excellent deep UV β-Ga2O3 photodetectors will enable significant advancements for the next-generation photodetection applications.", "which substrate ?", "c-plane sapphire", 250.0, 266.0], ["Highly robust poly‐Si thin‐film transistor (TFT) on polyimide (PI) substrate using blue laser annealing (BLA) of amorphous silicon (a‐Si) for lateral crystallization is demonstrated. Its foldability is compared with the conventional excimer laser annealing (ELA) poly‐Si TFT on PI used for foldable displays exhibiting field‐effect mobility of 85 cm2 (V s)−1. The BLA poly‐Si TFT on PI exhibits a field‐effect mobility, threshold voltage (VTH), and subthreshold swing of 153 cm2 (V s)−1, −2.7 V, and 0.2 V dec−1, respectively. The most important finding is the excellent foldability of BLA TFT compared with the ELA poly‐Si TFTs on PI substrates. The VTH shift of BLA poly‐Si TFT is ≈0.1 V, which is much smaller than that (≈2 V) of ELA TFT on PI upon 30 000 cycle folding. 
The defects are generated at the grain boundary region of ELA poly‐Si during folding. However, BLA poly‐Si has no protrusion in the poly‐Si channel and thus no defect generation during folding. This leads to excellent foldability of BLA poly‐Si on PI substrate.", "which substrate ?", "Polyimide (PI) ", NaN, NaN], ["The first cross-coupling of acylated phenol derivatives has been achieved. In the presence of an air-stable Ni(II) complex, readily accessible aryl pivalates participate in the Suzuki-Miyaura coupling with arylboronic acids. The process is tolerant of considerable variation in each of the cross-coupling components. In addition, a one-pot acylation/cross-coupling sequence has been developed. The potential to utilize an aryl pivalate as a directing group has also been demonstrated, along with the ability to sequentially cross-couple an aryl bromide followed by an aryl pivalate, using palladium and nickel catalysis, respectively.", "which substrate ?", "Aryl pivalate", 422.0, 435.0], ["The Ni(0)-catalyzed cross-coupling of alkenyl methyl ethers with boronic esters is described. Several types of alkenyl methyl ethers can be coupled with a wide range of boronic esters to give the stilbene derivatives.", "which substrate ?", "Boronic ester", NaN, NaN], ["A simple and inexpensive method for growing Ga2O3 using GaAs wafers is demonstrated. Si-doped GaAs wafers are heated to 1050 °C in a horizontal tube furnace in both argon and air ambients in order to convert their surfaces to β-Ga2O3. The β-Ga2O3 films are characterized using scanning electron micrographs, energy-dispersive X-ray spectroscopy, and X-ray diffraction. They are also used to fabricate solar blind photodetectors. The devices, which had nanotextured surfaces, exhibited a high sensitivity to ultraviolet (UV) illumination due in part to large surface areas. Furthermore, the films have coherent interfaces with the substrate, which leads to a robust device with high resistance to thermo-mechanical stress. The photoconductance of the β-Ga2O3 films is found to increase by more than three orders of magnitude under 270 nm ultraviolet illumination with respect to the dark current. The fabricated device shows a responsivity of ∼292 mA/W at this wavelength.", "which substrate ?", "GaAs", 78.0, 82.0], ["Here, we report a catalytic, light-driven method for the redox-neutral depolymerization of native lignin biomass at ambient temperature. This transformation proceeds via a proton-coupled electron-transfer (PCET) activation of an alcohol O–H bond to generate a key alkoxy radical intermediate, which then facilitates the β-scission of a vicinal C–C bond. Notably, this single-step depolymerization is driven solely by visible-light irradiation, requires no stoichiometric chemical reagents, and produces no stoichiometric waste. This method exhibits good efficiency and excellent selectivity for the activation and fragmentation of the β-O-4 linkage in the polymer backbone, even in the presence of numerous other PCET-active functional groups. The feasibility of this protocol in enabling the cleavage of the β-1 linkage in model lignin dimers was also demonstrated. 
These results provide further evidence that visible-light photocatalysis can serve as a viable method for the direct conversion of lignin biomass into va...", "which substrate ?", "Native lignin", 91.0, 104.0], ["Lignin, which is a highly cross-linked and irregular biopolymer, is nature's most abundant source of aromatic compounds and constitutes an attractive renewable resource for the production of aromatic commodity chemicals. Herein, we demonstrate a practical and operationally simple two-step degradation approach involving Pd-catalyzed aerobic oxidation and visible-light photoredox-catalyzed reductive fragmentation for the chemoselective cleavage of the β-O-4 linkage—the predominant linkage in lignin—for the generation of lower-molecular-weight aromatic building blocks. The developed strategy affords the β-O-4 bond cleaved products with high chemoselectivity and in high yields, is amenable to continuous flow processing, operates at ambient temperature and pressure, and is moisture- and oxygen-tolerant.", "which substrate ?", "Lignin", 0.0, 6.0], ["The first cross-coupling of acylated phenol derivatives has been achieved. In the presence of an air-stable Ni(II) complex, readily accessible aryl pivalates participate in the Suzuki-Miyaura coupling with arylboronic acids. The process is tolerant of considerable variation in each of the cross-coupling components. In addition, a one-pot acylation/cross-coupling sequence has been developed. The potential to utilize an aryl pivalate as a directing group has also been demonstrated, along with the ability to sequentially cross-couple an aryl bromide followed by an aryl pivalate, using palladium and nickel catalysis, respectively.", "which substrate ?", "Boronic acid", NaN, NaN], ["A catalyst that cleaves aryl-oxygen bonds but not carbon-carbon bonds may help improve lignin processing. Selective hydrogenolysis of the aromatic carbon-oxygen (C-O) bonds in aryl ethers is an unsolved synthetic problem important for the generation of fuels and chemical feedstocks from biomass and for the liquefaction of coal. Currently, the hydrogenolysis of aromatic C-O bonds requires heterogeneous catalysts that operate at high temperature and pressure and lead to a mixture of products from competing hydrogenolysis of aliphatic C-O bonds and hydrogenation of the arene. Here, we report hydrogenolyses of aromatic C-O bonds in alkyl aryl and diaryl ethers that form exclusively arenes and alcohols. This process is catalyzed by a soluble nickel carbene complex under just 1 bar of hydrogen at temperatures of 80 to 120°C; the relative reactivity of ether substrates scales as Ar-OAr>>Ar-OMe>ArCH2-OMe (Ar, Aryl; Me, Methyl). Hydrogenolysis of lignin model compounds highlights the potential of this approach for the conversion of refractory aryl ether biopolymers to hydrocarbons.", "which substrate ?", "Aryl ether", 1049.0, 1059.0], ["Controlled synthesis of a hybrid nanomaterial based on titanium oxide and single-layer graphene (SLG) using atomic layer deposition (ALD) is reported here. The morphology and crystallinity of the oxide layer on SLG can be tuned mainly with the deposition temperature, achieving either a uniform amorphous layer at 60 °C or ∼2 nm individual nanocrystals on the SLG at 200 °C after only 20 ALD cycles. 
A continuous and uniform amorphous layer formed on the SLG after 180 cycles at 60 °C can be converted to a polycrystalline layer containing domains of anatase TiO2 after a postdeposition annealing at 400 °C under vacuum. Using aberration-corrected transmission electron microscopy (AC-TEM), characterization of the structure and chemistry was performed on an atomic scale and provided insight into understanding the nucleation and growth. AC-TEM imaging and electron energy loss spectroscopy revealed that rocksalt TiO nanocrystals were occasionally formed at the early stage of nucleation after only 20 ALD cycles. Understanding and controlling nucleation and growth of the hybrid nanomaterial are crucial to achieving novel properties and enhanced performance for a wide range of applications that exploit the synergetic functionalities of the ensemble.", "which substrate ?", "Single-Layer Graphene", 74.0, 95.0], ["Abstract Cleavage of C–O bonds in lignin can afford renewable aryl sources for fine chemicals. However, the high bond energies of these C–O bonds, especially the 4-O-5-type diaryl ether C–O bonds (~314 kJ/mol), make the cleavage very challenging. Here, we report visible-light photoredox-catalyzed C–O bond cleavage of diaryl ethers by acidolysis with an aryl carboxylic acid and a subsequent one-pot hydrolysis. Two molecules of phenols are obtained from one molecule of diaryl ether at room temperature. The aryl carboxylic acid used for the acidolysis can be recovered. The key to success of the acidolysis is merging visible-light photoredox catalysis using an acridinium photocatalyst and Lewis acid catalysis using Cu(TMHD)2. Preliminary mechanistic studies indicate that the catalytic cycle occurs via a rare selective electrophilic attack of the generated aryl carboxylic radical on the electron-rich aryl ring of the diphenyl ether. This transformation is applied to a gram-scale reaction and the model of 4-O-5 lignin linkages.", "which substrate ?", "Aryl carboxylic acid", 361.0, 381.0], ["Beech lignin was oxidatively cleaved in ionic liquids to give phenols, unsaturated propylaromatics, and aromatic aldehydes. A multiparallel batch reactor system was used to screen different ionic liquids and metal catalysts. Mn(NO3)2 in 1-ethyl-3-methylimidazolium trifluoromethanesulfonate [EMIM][CF3SO3] proved to be the most effective reaction system. A larger scale batch reaction with this system in a 300 mL autoclave (11 g lignin starting material) resulted in a maximum conversion of 66.3 % (24 h at 100 °C, 84×10^5 Pa air). By adjusting the reaction conditions and catalyst loading, the selectivity of the process could be shifted from syringaldehyde as the predominant product to 2,6-dimethoxy-1,4-benzoquinone (DMBQ). Surprisingly, the latter could be isolated as a pure substance in 11.5 wt % overall yield by a simple extraction/crystallization process.", "which substrate ?", "Beech lignin", 0.0, 12.0], ["CdTe-based solar cells exhibiting 19% power conversion efficiency were produced using widely available thermal evaporation deposition of the absorber layers on SnO2-coated glass with or without a t...", "which substrate ?", "SnO2-coated glass", 160.0, 177.0], ["An activated carbon supported α-molybdenum carbide catalyst (α-MoC1−x/AC) showed remarkable activity in the selective deoxygenation of guaiacol to substituted mono-phenols in low carbon number alcohol solvents.
", "which substrate ?", "guaiacol", 149.0, 157.0], ["Pd/Al2O3 catalysts coated with various thiolate self-assembled monolayers (SAMs) were used to direct the partial hydrogenation of 18-carbon polyunsaturated fatty acids, yielding a product stream enriched in monounsaturated fatty acids (with low saturated fatty acid content), a favorable result for increasing the oxidative stability of biodiesel. The uncoated Pd/Al2O3 catalyst quickly saturated all fatty acid reactants under hydrogenation conditions, but the addition of alkanethiol SAMs markedly increased the reaction selectivity to the monounsaturated product oleic acid to a level of 80\u201390%, even at conversions >70%. This effect, which is attributed to steric effects between the SAMs and reactants, was consistent with the relative consumption rates of linoleic and oleic acid using alkanethiol-coated and uncoated Pd/Al2O3 catalysts. With an uncoated Pd/Al2O3 catalyst, each fatty acid, regardless of its degree of saturation had a reaction rate of \u223c0.2 mol reactant consumed per mole of surface palladium per ...", "which substrate ?", "18-carbon polyunsaturated fatty acids", 130.0, 167.0], ["This study investigated atmospheric hydrodeoxygenation (HDO) of guaiacol over Ni2P-supported catalysts. Alumina, zirconia, and silica served as the supports of Ni2P catalysts. The physicochemical properties of these catalysts were surveyed by N2 physisorption, X-ray diffraction (XRD), CO chemisorption, H2 temperature-programmed reduction (H2-TPR), H2 temperature-programmed desorption (H2-TPD), and NH3 temperature-programmed desorption (NH3-TPD). The catalytic performance of these catalysts was tested in a continuous fixed-bed system. This paper proposes a plausible network of atmospheric guaiacol HDO, containing demethoxylation (DMO), demethylation (DME), direct deoxygenation (DDO), hydrogenation (HYD), transalkylation, and methylation. Pseudo-first-order kinetics analysis shows that the intrinsic activity declined in the following order: Ni2P/ZrO2 > Ni2P/Al2O3 > Ni2P/SiO2. Product selectivity at zero guaiacol conversion indicates that Ni2P/SiO2 promotes DMO and DDO routes, whereas Ni2P/ZrO2 and Ni2P/Al2O...", "which substrate ?", "guaiacol", 64.0, 72.0], ["Herein, a novel electrochemical glucose biosensor based on glucose oxidase (GOx) immobilized on a surface containing platinum nanoparticles (PtNPs) electrodeposited on poly(Azure A) (PAA) previously electropolymerized on activated screen-printed carbon electrodes (GOx-PtNPs-PAA-aSPCEs) is reported. The resulting electrochemical biosensor was validated towards glucose oxidation in real samples and further electrochemical measurement associated with the generated H2O2. The electrochemical biosensor showed an excellent sensitivity (42.7 \u03bcA mM\u22121 cm\u22122), limit of detection (7.6 \u03bcM), linear range (20 \u03bcM\u20132.3 mM), and good selectivity towards glucose determination. Furthermore, and most importantly, the detection of glucose was performed at a low potential (0.2 V vs. Ag). The high performance of the electrochemical biosensor was explained through surface exploration using field emission SEM, XPS, and impedance measurements. The electrochemical biosensor was successfully applied to glucose quantification in several real samples (commercial juices and a plant cell culture medium), exhibiting a high accuracy when compared with a classical spectrophotometric method. 
This electrochemical biosensor can be easily prepared and offers a good alternative for the development of new sensitive glucose sensors.", "which substrate ?", "activated screen-printed carbon", 221.0, 252.0], ["We present a one-dimensional (1D) theoretical model for the design analysis of a micro thermal convective accelerometer (MTCA). Systematic design analysis was conducted on the sensor performance, covering the sensor output, sensitivity, and power consumption. The sensor output was further normalized as a function of normalized input acceleration in terms of the Rayleigh number Ra (the product of the Grashof number Gr and the Prandtl number Pr) for different fluids. A critical Rayleigh number (Rac = 3,000) is found, for the first time, to determine the boundary between the linear and nonlinear response regimes of the MTCA. Based on the proposed 1D model, key parameters, including the location of the detectors, sensor length, thin film thickness, cavity height, heater temperature, and fluid types, were optimized to improve sensor performance. Accordingly, a CMOS-compatible MTCA was designed and fabricated based on the theoretical analysis, which showed a high sensitivity of 1,289 mV/g. Therefore, this efficient 1D model, one million times faster than CFD simulation, can be a promising tool for system-level CMOS MEMS design.", "which Sensitivity (mV/g) ?", "1,289", 1316.0, 1321.0], ["OASIS3.2–5 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations shows a warming drift of 0.35 °C in ACCESS1.0 and 0.04 °C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.", "which has name ?", "ACCESS1.0", 416.0, 425.0], ["OASIS3.2–5 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). 
This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations shows a warming drift of 0.35 °C in ACCESS1.0 and 0.04 °C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.", "which has name ?", "ACCESS1.0 ", 416.0, 426.0], ["OASIS3.2–5 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations shows a warming drift of 0.35 °C in ACCESS1.0 and 0.04 °C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.", "which has name ?", "ACCESS1.3", 430.0, 439.0], ["Abstract. The core version of the Norwegian Climate Center's Earth System Model, named NorESM1-M, is presented. The NorESM family of models is based on the Community Climate System Model version 4 (CCSM4) of the University Corporation for Atmospheric Research, but differs from the latter by, in particular, an isopycnic coordinate ocean model and advanced chemistry–aerosol–cloud–radiation interaction schemes. 
NorESM1-M has a horizontal resolution of approximately 2° for the atmosphere and land components and 1° for the ocean and ice components. NorESM is also available in a lower resolution version (NorESM1-L) and a version that includes prognostic biogeochemical cycling (NorESM1-ME). The latter two model configurations are not part of this paper. Here, a first-order assessment of the model stability, the mean model state and the internal variability based on the model experiments made available to CMIP5 is presented. Further analysis of the model performance is provided in an accompanying paper (Iversen et al., 2013), presenting the corresponding climate response and scenario projections made with NorESM1-M.", "which has name ?", "NORESM1-M", 87.0, 96.0], ["Abstract— This study serves as a proof‐of‐concept for the technique of using visible‐near infrared (VNIR), short‐wavelength infrared (SWIR), and thermal infrared (TIR) spectroscopic observations to map impact‐exposed subsurface lithologies and stratigraphy on Earth or Mars. The topmost layer, three subsurface layers and undisturbed outcrops of the target sequence exposed just 10 km to the northeast of the 23 km diameter Haughton impact structure (Devon Island, Nunavut, Canada) were mapped as distinct spectral units using Landsat 7 ETM+ (VNIR/SWIR) and ASTER (VNIR/SWIR/TIR) multispectral images. Spectral mapping was accomplished by using standard image contrast‐stretching algorithms. Both spectral matching and deconvolution algorithms were applied to image‐derived ASTER TIR emissivity spectra using spectra from a library of laboratory‐measured spectra of minerals (Arizona State University) and whole‐rocks (Ward's). These identifications were made without the use of a priori knowledge from the field (i.e., a “blind” analysis). The results from this analysis suggest a sequence of dolomitic rock (in the crater rim), limestone (wall), gypsum‐rich carbonate (floor), and limestone again (central uplift). These matched compositions agree with the lithologic units and the pre‐impact stratigraphic sequence as mapped during recent field studies of the Haughton impact structure by Osinski et al. (2005a). Further confirmation of the identity of image‐derived spectra was obtained by matching these spectra with laboratory‐measured spectra of samples collected from Haughton. The results from the “blind” remote sensing methods used here suggest that these techniques can also be used to understand subsurface lithologies on Mars, where ground truth knowledge may not be generally available.", "which has dataset ?", " ASTER ", 773.0, 780.0], ["Merging hyperspectral data from optical and thermal ranges allows a wider variety of minerals to be mapped and thus allows lithology to be mapped in a more complex way. In contrast, in most of the studies that have taken advantage of the data from the visible (VIS), near-infrared (NIR), shortwave infrared (SWIR) and longwave infrared (LWIR) spectral ranges, these different spectral ranges were analysed and interpreted separately. This limits the complexity of the final interpretation. In this study a presentation is made of how multiple absorption features, which are directly linked to the mineral composition and are present throughout the VIS, NIR, SWIR and LWIR ranges, can be automatically derived and, moreover, how these new datasets can be successfully used for mineral/lithology mapping. 
The biggest advantage of this approach is that it overcomes the issue of prior definition of endmembers, which is a requested routine employed in all widely used spectral mapping techniques. In this study, two different airborne image datasets were analysed, HyMap (VIS/NIR/SWIR image data) and Airborne Hyperspectral Scanner (AHS, LWIR image data). Both datasets were acquired over the Sokolov lignite open-cast mines in the Czech Republic. It is further demonstrated that even in this case, when the absorption feature information derived from multispectral LWIR data is integrated with the absorption feature information derived from hyperspectral VIS/NIR/SWIR data, an important improvement in terms of more complex mineral mapping is achieved.", "which has dataset ?", "HyMap", 1062.0, 1067.0], ["Urban performance currently depends not only on a city's endowment of hard infrastructure (physical capital), but also, and increasingly so, on the availability and quality of knowledge communication and social infrastructure (human and social capital). The latter form of capital is decisive for urban competitiveness. Against this background, the concept of the \u201csmart city\u201d has recently been introduced as a strategic device to encompass modern urban production factors in a common framework and, in particular, to highlight the importance of Information and Communication Technologies (ICTs) in the last 20 years for enhancing the competitive profile of a city. The present paper aims to shed light on the often elusive definition of the concept of the \u201csmart city.\u201d We provide a focused and operational definition of this construct and present consistent evidence on the geography of smart cities in the EU27. Our statistical and graphical analyses exploit in depth, for the first time to our knowledge, the most recent version of the Urban Audit data set in order to analyze the factors determining the performance of smart cities. We find that the presence of a creative class, the quality of and dedicated attention to the urban environment, the level of education, and the accessibility to and use of ICTs for public administration are all positively correlated with urban wealth. This result prompts the formulation of a new strategic agenda for European cities that will allow them to achieve sustainable urban development and a better urban landscape.", "which has dataset ?", "Urban Audit data set ", 1040.0, 1061.0], ["Abstract. Spectroscopy plays a vital role in the identification and characterization of minerals on terrestrial and planetary surfaces. We review the three different spectroscopic techniques for characterizing minerals on the Earth and lunar surfaces separately. Seven sedimentary and metamorphic terrestrial rock samples were analyzed with three field-based spectrometers, i.e., Raman, Fourier transform infrared (FTIR), and visible to near infrared and shortwave infrared (Vis\u2013NIR\u2013SWIR) spectrometers. Similarly, a review of work done by previous researchers on lunar rock samples was also carried out for their Raman, Vis\u2013NIR\u2013SWIR, and thermal (mid-infrared) spectral responses. 
It has been found in both cases that spectral information such as Si-O-Si stretching (polymorphs) in Raman spectra, identification of impurities, Christiansen and Reststrahlen band center variation in mid-infrared spectra, location of elemental substitution, the content of iron, and shifting of the band center of diagnostic absorption features at 1 and 2 μm in reflectance spectra contributes to the characterization and identification of terrestrial and lunar minerals. We show that quartz can be better characterized by considering silica polymorphs from Raman spectra, emission features in the range of 8 to 14 μm in FTIR spectra, and reflectance absorption features from Vis–NIR–SWIR spectra. KREEP materials from Apollo 12 and 14 samples are also better characterized using integrated spectroscopic studies. Integrated spectral responses facilitate comprehensive characterization and better identification of minerals. We suggest that Raman spectroscopy and visible and NIR-thermal spectroscopy are the best techniques to explore the Earth's and lunar mineralogy.", "which Minerals identified (Lunar rock samples) ?", "KREEP", 1392.0, 1397.0], ["LodLive project, http://en.lodlive.it/, provides a demonstration of the use of Linked Data standard (RDF, SPARQL) to browse RDF resources. The application aims to spread linked data principles with a simple and friendly interface and reusable techniques. In this report we present an overview of the potential of LodLive, mentioning tools and methodologies that were used to create it.", "which System ?", "Lodlive ", 0.0, 8.0], ["Gephi is open source software for graph and network analysis. It uses a 3D render engine to display large networks in real-time and to speed up the exploration. A flexible and multi-task architecture brings new possibilities to work with complex data sets and produce valuable visual results. We present several key features of Gephi in the context of interactive exploration and interpretation of networks. It provides easy and broad access to network data and allows for spatializing, filtering, navigating, manipulating and clustering. Finally, by presenting dynamic features of Gephi, we highlight key aspects of dynamic network visualization.", "which System ?", "Gephi ", 0.0, 6.0], ["The need to visualize large social networks is growing as hardware capabilities make analyzing large networks feasible and many new data sets become available. Unfortunately, the visualizations in existing systems do not satisfactorily resolve the basic dilemma of being readable both for the global structure of the network and also for detailed analysis of local communities. To address this problem, we present NodeTrix, a hybrid representation for networks that combines the advantages of two traditional representations: node-link diagrams are used to show the global structure of a network, while arbitrary portions of the network can be shown as adjacency matrices to better support the analysis of communities. A key contribution is a set of interaction techniques. These allow analysts to create a NodeTrix visualization by dragging selections to and from node-link and matrix forms, and to flexibly manipulate the NodeTrix representation to explore the dataset and create meaningful summary visualizations of their findings. 
Finally, we present a case study applying NodeTrix to the analysis of the InfoVis 2004 coauthorship dataset to illustrate the capabilities of NodeTrix as both an exploration tool and an effective means of communicating results.", "which System ?", "NodeTrix ", 807.0, 816.0], ["In this paper we propose LODeX, a tool that produces a representative summary of a Linked Open Data (LOD) source starting from scratch, thus supporting users in exploring and understanding the contents of a dataset. The tool takes as input the URL of a SPARQL endpoint and launches a set of predefined SPARQL queries; from the results of the queries it generates a visual summary of the source. The summary reports statistical and structural information of the LOD dataset and it can be browsed to focus on particular classes or to explore their properties and their use. LODeX was tested on the 137 public SPARQL endpoints contained in Data Hub (formerly CKAN), one of the main Open Data catalogues. The statistical and structural information extraction was successfully performed on 107 sources; among these, the most significant ones are included in the online version of the tool.", "which System ?", "LODeX ", 572.0, 578.0], ["We present Paged Graph Visualization (PGV), a new semi-autonomous tool for RDF data exploration and visualization. PGV consists of two main components: a) the "PGV explorer" and b) the "RDF pager" module utilizing BRAHMS, our high performance main-memory RDF storage system. Unlike existing graph visualization techniques which attempt to display the entire graph and then filter out irrelevant data, PGV begins with a small graph and provides the tools to incrementally explore and visualize relevant data of very large RDF ontologies. We implemented several techniques to visualize and explore hot spots in the graph, i.e. nodes with large numbers of immediate neighbors. In response to the user-controlled, semantics-driven direction of the exploration, the PGV explorer obtains the necessary sub-graphs from the RDF pager and enables their incremental visualization leaving the previously laid out sub-graphs intact. We outline the problem of visualizing large RDF data sets, discuss our interface and its implementation, and through a controlled experiment we show the benefits of PGV.", "which System ?", "PGV ", 115.0, 119.0], ["Recently, the amount of semantic data available in the Web has increased dramatically. The potential of this vast amount of data is enormous but in most cases it is difficult for users to explore and use this data, especially for those without experience with Semantic Web technologies. Applying information visualization techniques to the Semantic Web helps users to easily explore large amounts of data and interact with them. In this article we devise a formal Linked Data Visualization Model (LDVM), which allows one to dynamically connect data with visualizations. We report on our implementation of the LDVM comprising a library of generic visualizations that enable both users and data analysts to get an overview on, visualize and explore the Data Web and perform detailed analyses on Linked Data.", "which System ?", "LDVM ", 608.0, 613.0], ["The success of Open Data initiatives has increased the amount of data available on the Web. Unfortunately, most of this data is only available in raw tabular form, which makes analysis and reuse quite difficult for non-experts. 
Linked Data principles allow for a more sophisticated approach by making explicit both the structure and semantics of the data. However, from the end-user viewpoint, they continue to be monolithic files that are completely opaque or difficult to explore through tedious semantic queries. Our objective is to help the user grasp what kinds of entities are in the dataset, how they are interrelated, what their main properties and values are, etc. Rhizomer is a tool for data publishing whose interface provides a set of components borrowed from Information Architecture (IA) that facilitate awareness of the dataset at hand. It automatically generates navigation menus and facets based on the kinds of things in the dataset and how they are described through metadata properties and values. Moreover, motivated by recent tests with end-users, it also provides the possibility to pivot among the faceted views created for each class of resources in the dataset.", "which System ?", "Rhizomer ", 675.0, 684.0], ["A wealth of information has recently become available as browsable RDF data on the Web, but the selection of client applications to interact with this Linked Data remains limited. We show how to browse Linked Data with Fenfire, a Free and Open Source Software RDF browser and editor that employs a graph view and focuses on an engaging and interactive browsing experience. This sets Fenfire apart from previous table- and outline-based Linked Data browsers.", "which System ?", "Fenfire ", 383.0, 391.0], ["Purpose: This paper introduces the Research Articles in Simplified HTML (or RASH), which is a Web-first format for writing HTML-based scholarly papers; it is accompanied by the RASH Framework, a set of tools for interacting with RASH-based articles. The paper also presents an evaluation that involved authors and reviewers of RASH articles submitted to the SAVE-SD 2015 and SAVE-SD 2016 workshops. Design: RASH has been developed aiming to: be easy to learn and use; share scholarly documents (and embedded semantic annotations) through the Web; support its adoption within the existing publishing workflow. Findings: The evaluation study confirmed that RASH is ready to be adopted in workshops, conferences, and journals and can be quickly learnt by researchers who are familiar with HTML. Research Limitations: The evaluation study also highlighted some issues in the adoption of RASH, and in general of HTML formats, especially by less technically savvy users. 
Moreover, additional tools are needed, e.g., for enabling additional conversions from/to existing formats such as OpenXML. Practical Implications: RASH (and its Framework) is another step towards enabling the definition of formal representations of the meaning of the content of an article, facilitating its automatic discovery, enabling its linking to semantically related articles, providing access to data within the article in actionable form, and allowing integration of data between papers. Social Implications: RASH addresses the intrinsic needs related to the various users of a scholarly article: researchers (focussing on its content), readers (experiencing new ways for browsing it), citizen scientists (reusing available data formally defined within it through semantic annotations), publishers (using the advantages of new technologies as envisioned by the Semantic Publishing movement). Value: RASH helps authors to focus on the organisation of their texts, supports them in the task of semantically enriching the content of articles, and leaves all the issues about validation, visualisation, conversion, and semantic data extraction to the various tools developed within its Framework.", "which Semantic representation ?", "RASH", 117.0, 121.0], ["Abstract While the Web was designed as a decentralised environment, individual authors still lack the ability to conveniently author and publish documents, and to engage in social interactions with documents of others in a truly decentralised fashion. We present dokieli, a fully decentralised, browser-based authoring and annotation platform with built-in support for social interactions, through which people retain ownership of and sovereignty over their data. The resulting “living” documents are interoperable and independent of dokieli since they follow standards and best practices, such as HTML+RDFa for a fine-grained semantic structure, Linked Data Platform for personal data storage, and Linked Data Notifications for updates. This article describes dokieli's architecture and implementation, demonstrating advanced document authoring and interaction without a single point of control. Such an environment provides the right technological conditions for independent publication of scientific articles, news, and other works that benefit from diverse voices and open interactions. To experience the described features please open this document in your Web browser under its canonical URI: http://csarven.ca/dokieli-rww.", "which Semantic representation ?", "Dokie.li", 361.0, 369.0], ["As the amount of scholarly communication increases, it is increasingly difficult for specific core scientific statements to be found, connected and curated. Additionally, the redundancy of these statements in multiple fora makes it difficult to determine attribution, quality and provenance. To tackle these challenges, the Concept Web Alliance has promoted the notion of nanopublications (core scientific statements with associated context). In this document, we present a model of nanopublications along with a Named Graph/RDF serialization of the model. Importantly, the serialization is defined completely using already existing community-developed technologies. 
Finally, we discuss the importance of aggregating nanopublications and the role that the Concept Wiki plays in facilitating it.", "which Semantic representation ?", "Nanopublications", 372.0, 388.0], ["Abstract Background Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections and the resulting disease, coronavirus disease 2019 (Covid-19), have spread to millions of persons worldwide. Multiple vaccine candidates are under development, but no vaccine is currently available. Interim safety and immunogenicity data about the vaccine candidate BNT162b1 in younger adults have been reported previously from trials in Germany and the United States. Methods In an ongoing, placebo-controlled, observer-blinded, dose-escalation, phase 1 trial conducted in the United States, we randomly assigned healthy adults 18 to 55 years of age and those 65 to 85 years of age to receive either placebo or one of two lipid nanoparticle–formulated, nucleoside-modified RNA vaccine candidates: BNT162b1, which encodes a secreted trimerized SARS-CoV-2 receptor–binding domain; or BNT162b2, which encodes a membrane-anchored SARS-CoV-2 full-length spike, stabilized in the prefusion conformation. The primary outcome was safety (e.g., local and systemic reactions and adverse events); immunogenicity was a secondary outcome. Trial groups were defined according to vaccine candidate, age of the participants, and vaccine dose level (10 μg, 20 μg, 30 μg, and 100 μg). In all groups but one, participants received two doses, with a 21-day interval between doses; in one group (100 μg of BNT162b1), participants received one dose. Results A total of 195 participants underwent randomization. In each of 13 groups of 15 participants, 12 participants received vaccine and 3 received placebo. BNT162b2 was associated with a lower incidence and severity of systemic reactions than BNT162b1, particularly in older adults. In both younger and older adults, the two vaccine candidates elicited similar dose-dependent SARS-CoV-2–neutralizing geometric mean titers, which were similar to or higher than the geometric mean titer of a panel of SARS-CoV-2 convalescent serum samples. Conclusions The safety and immunogenicity data from this U.S. phase 1 trial of two vaccine candidates in younger and older adults, added to earlier interim safety and immunogenicity data regarding BNT162b1 in younger adults from trials in Germany and the United States, support the selection of BNT162b2 for advancement to a pivotal phase 2–3 safety and efficacy evaluation. (Funded by BioNTech and Pfizer; ClinicalTrials.gov number, NCT04368728.)", "which Vaccine Name ?", "BNT162b2", 876.0, 884.0], ["There is an urgent need for vaccines to counter the COVID-19 pandemic due to infections with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Evidence from convalescent sera and preclinical studies has identified the viral Spike (S) protein as a key antigenic target for protective immune responses. We have applied an mRNA-based technology platform, RNActive, to develop CVnCoV, which contains sequence-optimized mRNA coding for a stabilized form of S protein encapsulated in lipid nanoparticles (LNP). Following demonstration of protective immune responses against SARS-CoV-2 in animal models, we performed a dose-escalation phase 1 study in healthy 18-60 year-old volunteers. This interim analysis shows that two doses of CVnCoV ranging from 2 μg to 12 μg per dose, administered 28 days apart, were safe. 
No vaccine-related serious adverse events were reported. There were dose-dependent increases in frequency and severity of solicited systemic adverse events, and to a lesser extent of local reactions, but the majority were mild or moderate and transient in duration. Immune responses, when measured as IgG antibodies against S protein or its receptor-binding domain (RBD) by ELISA, and as SARS-CoV-2-virus neutralizing antibodies measured by micro-neutralization, displayed dose-dependent increases. Median titers measured in these assays two weeks after the second 12 \u03bcg dose were comparable to the median titers observed in convalescent sera from COVID-19 patients. Seroconversion (defined as a 4-fold increase over baseline titer) of virus neutralizing antibodies two weeks after the second vaccination occurred in all participants who received 12 \u03bcg doses. Preliminary results in the subset of subjects who were enrolled with known SARS-CoV-2 seropositivity at baseline show that CVnCoV is also safe and well tolerated in this population, and is able to boost the pre-existing immune response even at low dose levels. Based on these results, the 12 \u03bcg dose is selected for further clinical investigation, including a phase 2b/3 study that will investigate the efficacy, safety, and immunogenicity of the candidate vaccine CVnCoV.", "which Vaccine Name ?", "CVnCoV", 383.0, 389.0], ["Abstract Background The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) emerged in late 2019 and spread globally, prompting an international effort to accelerate development of a vaccine. The candidate vaccine mRNA-1273 encodes the stabilized prefusion SARS-CoV-2 spike protein. Methods We conducted a phase 1, dose-escalation, open-label trial including 45 healthy adults, 18 to 55 years of age, who received two vaccinations, 28 days apart, with mRNA-1273 in a dose of 25 \u03bcg, 100 \u03bcg, or 250 \u03bcg. There were 15 participants in each dose group. Results After the first vaccination, antibody responses were higher with higher dose (day 29 enzyme-linked immunosorbent assay anti\u2013S-2P antibody geometric mean titer [GMT], 40,227 in the 25-\u03bcg group, 109,209 in the 100-\u03bcg group, and 213,526 in the 250-\u03bcg group). After the second vaccination, the titers increased (day 57 GMT, 299,751, 782,719, and 1,192,154, respectively). After the second vaccination, serum-neutralizing activity was detected by two methods in all participants evaluated, with values generally similar to those in the upper half of the distribution of a panel of control convalescent serum specimens. Solicited adverse events that occurred in more than half the participants included fatigue, chills, headache, myalgia, and pain at the injection site. Systemic adverse events were more common after the second vaccination, particularly with the highest dose, and three participants (21%) in the 250-\u03bcg dose group reported one or more severe adverse events. Conclusions The mRNA-1273 vaccine induced anti\u2013SARS-CoV-2 immune responses in all participants, and no trial-limiting safety concerns were identified. These findings support further development of this vaccine. (Funded by the National Institute of Allergy and Infectious Diseases and others; mRNA-1273 ClinicalTrials.gov number, NCT04283461).", "which Vaccine Name ?", "mRNA-1273", 223.0, 232.0], ["This study is a comparison of AUPress with three other traditional (non-open access) Canadian university presses. 
The analysis is based on the rankings that are correlated with book sales on Amazon.com and Amazon.ca. Statistical methods include the sampling of the sales ranking of randomly selected books from each press. The results of one-way ANOVA analyses show that there is no significant difference in the ranking of printed books sold by AUPress in comparison with traditional university presses. However, AUPress can demonstrate a significantly larger readership for its books as evidenced by the number of downloads of the open electronic versions.", "which statistical_methods ?", "ANOVA", 346.0, 351.0], ["Agriculture is fundamental to achieving nutrition goals; it provides the food, energy, and nutrients essential for human health and well-being. This paper has examined crop diversity and dietary diversity in six villages using the ICRISAT Village Level Studies (VLS) data from the Telangana and Maharashtra states of India. The study has used the data of cultivating households for constructing the crop diversity index while dietary diversity data is from the special purpose nutritional surveys conducted by ICRISAT in the six villages. The study has revealed that the cropping pattern is not uniform across the six study villages with dominance of mono cropping in Telangana villages and of mixed cropping in Maharashtra villages. The analysis has indicated a positive and significant correlation between crop diversity and household dietary diversity at the bivariate level. In a multiple linear regression model, controlling for the other covariates, crop diversity has not shown a significant association with household dietary diversity. However, other covariates have shown strong association with dietary diversity. The regression results have revealed that households which cultivated at least one food crop in a single cropping year have a significant and positive relationship with dietary diversity. From the study it can be inferred that crop diversity alone does not affect the household dietary diversity in the semi-arid tropics. Enhancing the evidence base and future research, especially in the fragile environment of semi-arid tropics, is highly recommended.", "which statistical_methods ?", "Multiple linear regression model", 882.0, 914.0], ["Background: Recent literature, largely from Africa, shows mixed effects of own-production on diet diversity. However, the role of own-production, relative to markets, in influencing food consumption becomes more pronounced as market integration increases. Objective: This paper investigates the relative importance of two factors - production diversity and household market integration - for the intake of a nutritious diet by women and households in rural India. Methods: Data analysis is based on primary data from an extensive agriculture-nutrition survey of 3600 Indian households that was collected in 2017. Dietary diversity scores are constructed for women and households based on 24-hour and 7-day recall periods. Household market integration is measured as monthly household expenditure on key non-staple food groups. We measure production diversity in two ways - field-level and on-farm production diversity - in order to account for the cereal-centric rice-wheat cropping system found in our study locations. The analysis is based on Ordinary Least Squares regressions where we control for a variety of village, household, and individual level covariates that affect food consumption, and village fixed effects. 
Robustness checks are done by way of using Poisson regression specifications and a 7-day recall period. Results: Conventional measures of field-level production diversity, like the number of crops or food groups grown, have no significant association with diet diversity. In contrast, it is on-farm production diversity (the field-level cultivation of pulses and on-farm livestock management, and kitchen gardens in the longer run) that is significantly associated with improved dietary diversity scores, thus suggesting the importance of non-staples in improving both individual and household dietary diversity. Furthermore, market purchases of non-staples like pulses and dairy products are associated with a significantly higher dietary diversity. Other significant determinants of dietary diversity include women\u2019s literacy and awareness of nutrition. These results mostly remain robust to changes in the recall period of the diet diversity measure and the nature of the empirical specification. Conclusions: This study contributes to the scarce empirical evidence related to diets in India. Additionally, our results indicate some key intervention areas - promoting livestock rearing, strengthening households\u2019 market integration (for purchase of non-staples) and increasing women\u2019s awareness about nutrition. These are more impactful than raising production diversity. ", "which statistical_methods ?", "Ordinary least squares regression", NaN, NaN], ["Instruments play an essential role in creating research data. Given the importance of instruments and associated metadata to the assessment of data quality and data reuse, globally unique, persistent and resolvable identification of instruments is crucial. The Research Data Alliance Working Group Persistent Identification of Instruments (PIDINST) developed a community-driven solution for persistent identification of instruments which we present and discuss in this paper. Based on an analysis of 10 use cases, PIDINST developed a metadata schema and prototyped schema implementation with DataCite and ePIC as representative persistent identifier infrastructures and with HZB (Helmholtz-Zentrum Berlin f\u00fcr Materialien und Energie) and BODC (British Oceanographic Data Centre) as representative institutional instrument providers. These implementations demonstrate the viability of the proposed solution in practice. Moving forward, PIDINST will further catalyse adoption and consolidate the schema by addressing new stakeholder requirements.", "which Used by ?", "BODC", 738.0, 742.0], ["Instruments play an essential role in creating research data. Given the importance of instruments and associated metadata to the assessment of data quality and data reuse, globally unique, persistent and resolvable identification of instruments is crucial. The Research Data Alliance Working Group Persistent Identification of Instruments (PIDINST) developed a community-driven solution for persistent identification of instruments which we present and discuss in this paper. Based on an analysis of 10 use cases, PIDINST developed a metadata schema and prototyped schema implementation with DataCite and ePIC as representative persistent identifier infrastructures and with HZB (Helmholtz-Zentrum Berlin f\u00fcr Materialien und Energie) and BODC (British Oceanographic Data Centre) as representative institutional instrument providers. These implementations demonstrate the viability of the proposed solution in practice. 
Moving forward, PIDINST will further catalyse adoption and consolidate the schema by addressing new stakeholder requirements.", "which Used by ?", "HZB", 675.0, 678.0], ["Abstract Background Emerging public health threats often originate in resource-limited countries. In recognition of this fact, the World Health Organization issued revised International Health Regulations in 2005, which call for significantly increased reporting and response capabilities for all signatory nations. Electronic biosurveillance systems can improve the timeliness of public health data collection, aid in the early detection of and response to disease outbreaks, and enhance situational awareness. Methods As components of its Suite for Automated Global bioSurveillance (SAGES) program, the Johns Hopkins University Applied Physics Laboratory developed two open-source, electronic biosurveillance systems for use in resource-limited settings. OpenESSENCE provides web-based data entry, analysis, and reporting. ESSENCE Desktop Edition provides similar capabilities for settings without internet access. Both systems may be configured to collect data using locally available cell phone technologies. Results ESSENCE Desktop Edition has been deployed for two years in the Republic of the Philippines. Local health clinics have rapidly adopted the new technology to provide daily reporting, thus eliminating the two-to-three week data lag of the previous paper-based system. Conclusions OpenESSENCE and ESSENCE Desktop Edition are two open-source software products with the capability of significantly improving disease surveillance in a wide range of resource-limited settings. 
These products, and other emerging surveillance technologies, can assist resource-limited countries' compliance with the revised International Health Regulations.", "which Epidemiological surveillance software ?", "ESSENCE Desktop", 825.0, 840.0], ["Background The Electronic Surveillance System for the Early Notification of Community-Based Epidemics (ESSENCE) is a secure web-based tool that enables health care practitioners to monitor health indicators of public health importance for the detection and tracking of disease outbreaks, consequences of severe weather, and other events of concern. The ESSENCE concept began in an internally funded project at the Johns Hopkins University Applied Physics Laboratory, advanced with funding from the State of Maryland, and broadened in 1999 as a collaboration with the Walter Reed Army Institute for Research. Versions of the system have been further developed by Johns Hopkins University Applied Physics Laboratory in multiple military and civilian programs for the timely detection and tracking of health threats. Objective This study aims to describe the components and development of a biosurveillance system increasingly coordinating all-hazards health surveillance and infectious disease monitoring among large and small health departments, to list the key features and lessons learned in the growth of this system, and to describe the range of initiatives and accomplishments of local epidemiologists using it. Methods The features of ESSENCE include spatial and temporal statistical alerting, custom querying, user-defined alert notifications, geographical mapping, remote data capture, and event communications. To expedite visualization, configurable and interactive modes of data stratification and filtering, graphical and tabular customization, user preference management, and sharing features allow users to query data and view geographic representations, time series and data details pages, and reports. These features allow ESSENCE users to gather and organize the resulting wealth of information into a coherent view of population health status and communicate findings among users. Results The resulting broad utility, applicability, and adaptability of this system led to the adoption of ESSENCE by the Centers for Disease Control and Prevention, numerous state and local health departments, and the Department of Defense, both nationally and globally. The open-source version of Suite for Automated Global Electronic bioSurveillance is available for global, resource-limited settings. Resourceful users of the US National Syndromic Surveillance Program ESSENCE have applied it to the surveillance of infectious diseases, severe weather and natural disaster events, mass gatherings, chronic diseases and mental health, and injury and substance abuse. Conclusions With emerging high-consequence communicable diseases and other health conditions, the continued user requirement\u2013driven enhancements of ESSENCE demonstrate an adaptable disease surveillance capability focused on the everyday needs of public health. 
The challenge of a live system for widely distributed users with multiple different data sources and high throughput requirements has driven a novel, evolving architecture design.", "which Epidemiological surveillance software ?", "ESSENCE", 103.0, 110.0], ["The rapid pace of the coronavirus disease 2019 (COVID-19) pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) presents challenges to the robust collection of population-scale data to address this global health crisis. We established the COronavirus Pandemic Epidemiology (COPE) Consortium to unite scientists with expertise in big data research and epidemiology to develop the COVID Symptom Study, previously known as the COVID Symptom Tracker, mobile application. This application\u2014which offers data on risk factors, predictive symptoms, clinical outcomes, and geographical hotspots\u2014was launched in the United Kingdom on 24 March 2020 and the United States on 29 March 2020 and has garnered more than 2.8 million users as of 2 May 2020. Our initiative offers a proof of concept for the repurposing of existing approaches to enable rapidly scalable epidemiologic data collection and analysis, which is critical for a data-driven response to this public health challenge.", "which Epidemiological surveillance software ?", "COVID Symptom Tracker", 458.0, 479.0], ["Background The Electronic Surveillance System for the Early Notification of Community-Based Epidemics (ESSENCE) is a secure web-based tool that enables health care practitioners to monitor health indicators of public health importance for the detection and tracking of disease outbreaks, consequences of severe weather, and other events of concern. The ESSENCE concept began in an internally funded project at the Johns Hopkins University Applied Physics Laboratory, advanced with funding from the State of Maryland, and broadened in 1999 as a collaboration with the Walter Reed Army Institute for Research. Versions of the system have been further developed by Johns Hopkins University Applied Physics Laboratory in multiple military and civilian programs for the timely detection and tracking of health threats. Objective This study aims to describe the components and development of a biosurveillance system increasingly coordinating all-hazards health surveillance and infectious disease monitoring among large and small health departments, to list the key features and lessons learned in the growth of this system, and to describe the range of initiatives and accomplishments of local epidemiologists using it. Methods The features of ESSENCE include spatial and temporal statistical alerting, custom querying, user-defined alert notifications, geographical mapping, remote data capture, and event communications. To expedite visualization, configurable and interactive modes of data stratification and filtering, graphical and tabular customization, user preference management, and sharing features allow users to query data and view geographic representations, time series and data details pages, and reports. These features allow ESSENCE users to gather and organize the resulting wealth of information into a coherent view of population health status and communicate findings among users. Results The resulting broad utility, applicability, and adaptability of this system led to the adoption of ESSENCE by the Centers for Disease Control and Prevention, numerous state and local health departments, and the Department of Defense, both nationally and globally. 
The open-source version of Suite for Automated Global Electronic bioSurveillance is available for global, resource-limited settings. Resourceful users of the US National Syndromic Surveillance Program ESSENCE have applied it to the surveillance of infectious diseases, severe weather and natural disaster events, mass gatherings, chronic diseases and mental health, and injury and substance abuse. Conclusions With emerging high-consequence communicable diseases and other health conditions, the continued user requirement\u2013driven enhancements of ESSENCE demonstrate an adaptable disease surveillance capability focused on the everyday needs of public health. The challenge of a live system for widely distributed users with multiple different data sources and high throughput requirements has driven a novel, evolving architecture design.", "which Epidemiological surveillance software ?", "Web-based tool", 124.0, 138.0], ["SUMMARY The surveillance of Clostridium difficile (CD) in Denmark consists of laboratory-based data from Departments of Clinical Microbiology (DCMs) sent to the National Registry of Enteric Pathogens (NREP). We validated a new surveillance system for CD based on the Danish Microbiology Database (MiBa). MiBa automatically collects microbiological test results from all Danish DCMs. We built an algorithm to identify positive test results for CD recorded in MiBa. A CD case was defined as a person with a positive culture for CD or PCR detection of toxin A and/or B and/or binary toxin. We compared CD cases identified through the MiBa-based surveillance with those reported to NREP and locally in five DCMs representing different Danish regions. During 2010\u20132014, NREP reported 13 896 CD cases, and the MiBa-based surveillance 21 252 CD cases. There was a 99\u00b79% concordance between the local datasets and the MiBa-based surveillance. Surveillance based on MiBa was superior to the current surveillance system, and the findings show that the number of CD cases in Denmark hitherto has been under-reported. There were only minor differences between local data and the MiBa-based surveillance, showing the completeness and validity of CD data in MiBa. This nationwide electronic system can greatly strengthen surveillance and research in various applications.", "which Epidemiological surveillance software ?", "MiBa", 297.0, 301.0], ["ABSTRACT Background: Tuberculosis (TB) surveillance data are crucial to the effectiveness of National TB Control Programs. In South Africa, few surveillance system evaluations have been undertaken to provide a rigorous assessment of the platform from which the national and district health systems draw data to inform programs and policies. Objective: Evaluate the attributes of Eden District\u2019s TB surveillance system, Western Cape Province, South Africa. Methods: Data quality, sensitivity and positive predictive value were assessed using secondary data from 40,033 TB cases entered in Eden District\u2019s ETR.Net from 2007 to 2013, and 79 purposively selected TB Blue Cards (TBCs), a medical patient file and source document for data entered into ETR.Net. Simplicity, flexibility, acceptability, stability and usefulness of the ETR.Net were assessed qualitatively through interviews with TB nurses, health information officers, sub-district and district coordinators involved in TB surveillance. 
Results: TB surveillance system stakeholders report that Eden District\u2019s ETR.Net system was simple, acceptable, flexible and stable, and achieves its objective of informing TB control program, policies and activities. Data were less complete in the ETR.Net (66\u2013100%) than in the TBCs (76\u2013100%), and concordant for most variables except pre-treatment smear results, antiretroviral therapy (ART) and treatment outcome. The sensitivity of recorded variables in ETR.Net was 98% for gender, 97% for patient category, 93% for ART, 92% for treatment outcome and 90% for pre-treatment smear grading. Conclusions: Our results reveal that the system provides useful information to guide TB control program activities in Eden District. However, urgent attention is needed to address gaps in clinical recording on the TBC and data capturing into the ETR.Net system. We recommend continuous training and support of TB personnel involved with TB care, management and surveillance on TB data recording into the TBCs and ETR.Net as well as the implementation of a well-structured quality control and assurance system.", "which Epidemiological surveillance software ?", "ETR.net", 605.0, 612.0], ["An ongoing project explores the extent to which artificial intelligence (AI), specifically in the areas of natural language processing and semantic reasoning, can be exploited to facilitate the studies of science by deploying software agents equipped with natural language understanding capabilities to read scholarly publications on the web. The knowledge extracted by these AI agents is organized into a heterogeneous graph, called Microsoft Academic Graph (MAG), where the nodes and the edges represent the entities engaging in scholarly communications and the relationships among them, respectively. The frequently updated data set and a few software tools central to the underlying AI components are distributed under an open data license for research and commercial applications. This paper describes the design, schema, and technical and business motivations behind MAG and elaborates how MAG can be used in analytics, search, and recommendation scenarios. How AI plays an important role in avoiding various biases and human-induced errors in other data sets and how the technologies can be further improved in the future are also discussed.", "which Database ?", "MAG", 469.0, 472.0], ["SUMMARY The MIPS mammalian protein-protein interaction database (MPPI) is a new resource of high-quality experimental protein interaction data in mammals. The content is based on published experimental evidence that has been processed by human expert curators. We provide the full dataset for download and a flexible and powerful web interface for users with various requirements.", "which Database ?", "MIPS", 12.0, 16.0], ["Publishing studies using standardized, machine-readable formats will enable machines to perform meta-analyses on-demand. To build a semantically-enhanced technology that embodies these functions, we developed the Cooperation Databank (CoDa) \u2013 a databank that contains 2,641 studies on human cooperation (1958-2017) conducted in 78 countries involving 356,680 participants. Experts annotated these studies for 312 variables, including the quantitative results (13,959 effect sizes). We designed an ontology that defines and relates concepts in cooperation research and that can represent the relationships between individual study results. We have created a research platform that, based on the dataset, enables users to retrieve studies that test the relation of variables with cooperation, visualize these study results, and perform (1) meta-analyses, (2) meta-regressions, (3) estimates of publication bias, and (4) statistical power analyses for future studies. We leveraged the dataset with visualization tools that allow users to explore the ontology of concepts in cooperation research and to plot a citation network of the history of studies. CoDa offers a vision of how publishing studies in a machine-readable format can establish institutions and tools that improve scientific practices and knowledge.", "which Database ?", "CoDa", 236.0, 240.0], ["OpenAIRE is the European Union initiative for an Open Access Infrastructure for Research in support of open scholarly communication and access to the research output of European funded projects and open access content from a network of institutional and disciplinary repositories. This article outlines the curation activities conducted in the OpenAIRE infrastructure, which employs a multi-level, multi-targeted approach: the publication and implementation of interoperability guidelines to assist in the local data curation processes, the data curation due to the integration of heterogeneous sources supporting different types of data, the inference of links to accomplish the publication research contextualization and data enrichment, and the end-user metadata curation that allows users to edit the attributes and provide links among the entities.", "which Database ?", "OpenAIRE", 0.0, 8.0], ["Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.", "which Database ?", "OpenBiodiv", 468.0, 478.0], ["In recent years, the development of recommender systems has attracted increased interest in several domains, especially in e-learning. Massive Open Online Courses have brought a revolution. However, deficiency in support and personalization in this context drives learners to lose their motivation and leave the learning process. To overcome this problem we focus on adapting learning activities to learners' needs using a recommender system. This paper attempts to provide an introduction to different recommender systems for e-learning settings, as well as to present our proposed recommender system for massive learning activities in order to provide learners with the suitable learning activities to follow the learning process and maintain their motivation. We propose a hybrid knowledge-based recommender system based on ontology for recommendation of e-learning activities to learners in the context of MOOCs. 
In the proposed recommendation approach, an ontology is used to model and represent the knowledge about the domain model, learners and learning activities.", "which Development in ?", "OWL", NaN, NaN], ["Abstract Summary The COVID-19 crisis has elicited a global response by the scientific community that has led to a burst of publications on the pathophysiology of the virus. However, without coordinated efforts to organize this knowledge, it can remain hidden away from individual research groups. By extracting and formalizing this knowledge in a structured and computable form, as in the form of a knowledge graph, researchers can readily reason and analyze this information on a much larger scale. Here, we present the COVID-19 Knowledge Graph, an expansive cause-and-effect network constructed from scientific literature on the new coronavirus that aims to provide a comprehensive view of its pathophysiology. To make this resource available to the research community and facilitate its exploration and analysis, we also implemented a web application and released the KG in multiple standard formats. Availability and implementation The COVID-19 Knowledge Graph is publicly available under CC-0 license at https://github.com/covid19kg and https://bikmi.covid19-knowledgespace.de. Supplementary information Supplementary data are available at Bioinformatics online.", "which Domain ?", "COVID-19", 21.0, 29.0], ["Abstract Biological dinitrogen (N2) fixation exerts an important control on oceanic primary production by providing a bioavailable form of nitrogen (such as ammonium) to photosynthetic microorganisms. N2 fixation is dominant in nutrient-poor and warm surface waters. The Bay of Bengal is one such region where no measurements of phototrophic N2 fixation rates exist. The surface water of the Bay of Bengal is generally nitrate-poor and warm due to prevailing stratification and thus, could favour N2 fixation. We commenced the first N2 fixation study in the photic zone of the Bay of Bengal using a 15N2 gas tracer incubation experiment during summer monsoon 2018. We collected seawater samples from four depths (covering the mixed layer depth of up to 75 m) at eight stations. N2 fixation rates varied from 4 to 75 \u03bcmol N m\u22122 d\u22121. The contribution of N2 fixation to primary production was negligible (<1%). However, the upper bound of observed N2 fixation rates is higher than the rates measured in other oceanic regimes, such as the Eastern Tropical South Pacific, the Tropical Northwest Atlantic, and the Equatorial and Southern Indian Ocean.", "which Domain ?", "Ocean", 1153.0, 1158.0], ["Abstract In a humanitarian response, leaders are often tasked with making large numbers of decisions, many of which have significant consequences, in situations of urgency and uncertainty. These conditions have an impact on the decision-maker (causing stress, for example) and subsequently on how decisions get made. Evaluations of humanitarian action suggest that decision-making is an area of weakness in many operations. There are examples of important decisions being missed and of decision-making processes that are slow and ad hoc. As part of a research process to address these challenges, this article considers literature from the humanitarian and emergency management sectors that relates to decision-making. 
It outlines what the literature tells us about the nature of the decisions that leaders at the country level are taking during humanitarian operations, and the circumstances under which these decisions are taken. It then considers the potential application of two different types of decision-making process in these contexts: rational/analytical decision-making and naturalistic decision-making. The article concludes with broad hypotheses that can be drawn from the literature and with the recommendation that these be further tested by academics with an interest in the topic.", "which Domain ?", "Humanitarian response", 14.0, 35.0], ["SUMMARY The MIPS mammalian protein-protein interaction database (MPPI) is a new resource of high-quality experimental protein interaction data in mammals. The content is based on published experimental evidence that has been processed by human expert curators. We provide the full dataset for download and a flexible and powerful web interface for users with various requirements.", "which Domain ?", "Protein-protein interaction", 27.0, 54.0], ["Publishing studies using standardized, machine-readable formats will enable machines to perform meta-analyses on-demand. To build a semantically-enhanced technology that embodies these functions, we developed the Cooperation Databank (CoDa) \u2013 a databank that contains 2,641 studies on human cooperation (1958-2017) conducted in 78 countries involving 356,680 participants. Experts annotated these studies for 312 variables, including the quantitative results (13,959 effect sizes). We designed an ontology that defines and relates concepts in cooperation research and that can represent the relationships between individual study results. We have created a research platform that, based on the dataset, enables users to retrieve studies that test the relation of variables with cooperation, visualize these study results, and perform (1) meta-analyses, (2) meta-regressions, (3) estimates of publication bias, and (4) statistical power analyses for future studies. We leveraged the dataset with visualization tools that allow users to explore the ontology of concepts in cooperation research and to plot a citation network of the history of studies. CoDa offers a vision of how publishing studies in a machine-readable format can establish institutions and tools that improve scientific practices and knowledge.", "which Domain ?", "Human cooperation", 285.0, 302.0], ["In the past decade, much effort has been put into the visual representation of ontologies. However, present visualization strategies are not equipped to handle complex ontologies with many relations, leading to visual clutter and inefficient use of space. In this paper, we propose GLOW, a method for ontology visualization based on Hierarchical Edge Bundles. Hierarchical Edge Bundles is a new visually attractive technique for displaying relations in hierarchical data, such as concept structures formed by 'subclass-of' and 'type-of' relations. We have developed a visualization library based on OWL API, as well as a plug-in for Prot\u00e9g\u00e9, a well-known ontology editor. The displayed adjacency relations can be selected from an ontology using a set of common configurations, allowing for intuitive discovery of information. Our evaluation demonstrates that the GLOW visualization provides better visual clarity, and displays relations and complex ontologies better than the existing Prot\u00e9g\u00e9 visualization plug-in Jambalaya.", "which Domain ?", "ontology", 301.0, 309.0], ["Data sharing and reuse are crucial to enhance scientific progress and maximize return of investments in science. Although attitudes are increasingly favorable, data reuse remains difficult due to lack of infrastructures, standards, and policies. The FAIR (findable, accessible, interoperable, reusable) principles aim to provide recommendations to increase data reuse. Because of the broad interpretation of the FAIR principles, maturity indicators are necessary to determine the FAIRness of a dataset. In this work, we propose a reproducible computational workflow to assess data FAIRness in the life sciences. Our implementation follows principles and guidelines recommended by the maturity indicator authoring group and integrates concepts from the literature. In addition, we propose a FAIR balloon plot to summarize and compare dataset FAIRness. We evaluated the feasibility of our method on three real use cases where researchers looked for six datasets to answer their scientific questions. We retrieved information from repositories (ArrayExpress, Gene Expression Omnibus, eNanoMapper, caNanoLab, NanoCommons and ChEMBL), a registry of repositories, and a searchable resource (Google Dataset Search) via application program interfaces (APIs) wherever possible. With our analysis, we found that the six datasets met the majority of the criteria defined by the maturity indicators, and we showed areas where improvements can easily be reached. We suggest that use of standard schema for metadata and the presence of specific attributes in registries of repositories could increase FAIRness of datasets.", "which Domain ?", "Life Sciences", 597.0, 610.0], ["Research on visualizing Semantic Web data has yielded many tools that rely on information visualization techniques to better support the user in understanding and editing these data. Most tools structure the visualization according to the concept definitions and interrelations that constitute the ontology's vocabulary. Instances are often treated as somewhat peripheral information, when considered at all. These instances, which populate ontologies, represent an essential part of any knowledge base. Understanding instance-level data might be easier for users because of their higher concreteness, but instances will often be orders of magnitude more numerous than the concept definitions that give them machine-processable meaning. 
As such, the visualization of instance-level data poses different but real challenges. The authors present a visualization technique designed to enable users to visualize large instance sets and the relations that connect them. This visualization uses both node-link and adjacency matrix representations of graphs to visualize different parts of the data depending on their semantic and local structural properties. The technique was originally devised for simple social network visualization. The authors extend it to handle the richer and more complex graph structures of populated ontologies, exploiting ontological knowledge to drive the layout of, and navigation in, the representation embedded in a smooth zoomable environment.", "which Domain ?", "ontology", 298.0, 306.0], ["The growing maturity of Natural Language Processing (NLP) techniques and resources is dramatically changing the landscape of many application domains which are dependent on the analysis of unstructured data at scale. The finance domain, with its reliance on the interpretation of multiple unstructured and structured data sources and its demand for fast and comprehensive decision making, is already emerging as a primary ground for the experimentation of NLP, Web Mining and Information Retrieval (IR) techniques for the automatic analysis of financial news and opinions online. This challenge focuses on advancing the state-of-the-art of aspect-based sentiment analysis and opinion-based Question Answering for the financial domain.", "which Domain ?", "financial domain", 716.0, 732.0], ["Instruments play an essential role in creating research data. Given the importance of instruments and associated metadata to the assessment of data quality and data reuse, globally unique, persistent and resolvable identification of instruments is crucial. The Research Data Alliance Working Group Persistent Identification of Instruments (PIDINST) developed a community-driven solution for persistent identification of instruments which we present and discuss in this paper. Based on an analysis of 10 use cases, PIDINST developed a metadata schema and prototyped schema implementation with DataCite and ePIC as representative persistent identifier infrastructures and with HZB (Helmholtz-Zentrum Berlin f\u00fcr Materialien und Energie) and BODC (British Oceanographic Data Centre) as representative institutional instrument providers. These implementations demonstrate the viability of the proposed solution in practice. Moving forward, PIDINST will further catalyse adoption and consolidate the schema by addressing new stakeholder requirements.", "which uses identifier system ?", "ePIC", 605.0, 609.0], ["The Open Researcher & Contributor ID (ORCID) registry presents a unique opportunity to solve the problem of author name ambiguity. At its core the value of the ORCID registry is that it crosses disciplines, organizations, and countries, linking ORCID with both existing identifier schemes as well as publications and other research activities. By supporting linkages across multiple datasets \u2013 clinical trials, publications, patents, datasets \u2013 such a registry becomes a switchboard for researchers and publishers alike in managing the dissemination of research findings. We describe use cases for embedding ORCID identifiers in manuscript submission workflows, prior work searches, manuscript citations, and repository deposition. 
We make recommendations for storing and displaying ORCID identifiers in publication metadata, with CrossRef integration as a specific example. Finally, we provide an overview of ORCID membership and integration tools and resources.", "which uses identifier system ?", "ORCID Identifiers", 608.0, 625.0], ["Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci (Software Mentions in Science), a gold standard knowledge graph of software mentions in scientific articles. It contains high-quality annotations (IRR: \u03ba=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.", "which Relation types ?", "URL", 818.0, 821.0], ["Considering recent progress in NLP, deep learning techniques and biomedical language models, there is a pressing need to generate annotated resources and comparable evaluation scenarios that enable the development of advanced biomedical relation extraction systems that extract interactions between drugs/chemical entities and genes, proteins or miRNAs. 
Building on the results and experience of the CHEMDNER, CHEMDNER patents and ChemProt tracks, we have posed the DrugProt track at BioCreative VII. The DrugProt track focused on the evaluation of automatic systems able to extract 13 different types of drug-gene/protein relations of importance to understand gene regulatory and pharmacological mechanisms. The DrugProt track addressed regulatory associations (direct/indirect, activator/inhibitor relations), certain types of binding associations (antagonist and agonist relations) as well as metabolic associations (substrate or product relations). To promote development of novel tools and offer a comparative evaluation scenario, we have released 61,775 manually annotated gene mentions, 65,561 chemical and drug mentions and a total of 24,526 relationships manually labeled by domain experts. A total of 30 teams submitted results for the DrugProt main track, while 9 teams submitted results for the large-scale text mining subtrack that required processing of over 2.3 million records. Teams obtained very competitive results, with predictions reaching f-measures of over 0.92 for some relation types (antagonist) and f-measures across all relation types close to 0.8. INTRODUCTION Among the most relevant biological and pharmacological relation types are those that involve (a) chemical compounds and drugs as well as (b) gene products including genes, proteins, miRNAs. A variety of associations between chemicals and genes/proteins are described in the biomedical literature, and there is a growing interest in facilitating a more systematic extraction of these relations from the literature, either for manual database curation initiatives or to generate large knowledge graphs of importance for drug discovery, drug repurposing, building regulatory or interaction networks or to characterize off-target interactions of drugs that might be of importance to better understand adverse drug reactions. At BioCreative VI, the ChemProt track tried to promote the development of novel systems between chemicals and genes for groups of biologically related association types (ChemProt track relation groups or CPRs). Although the obtained results did have a considerable impact in the development and evaluation of new biomedical relation extraction systems, a limitation of grouping more specific relation types into broader groups was the difficulty of directly exploiting the results for database curation efforts and biomedical knowledge graph mining application scenarios. The considerable interest in the integration of chemical and biomedical data for drug-discovery purposes, together with the ongoing curation of relationships between biological and chemical entities from scientific publications and patents due to the recent COVID-19 pandemic, motivated the DrugProt track of BioCreative VII, which proposed using more granular relation types. In order to facilitate the development of more granular relation extraction systems, large manually annotated corpora are needed. Those corpora should include high-quality manually labelled entity mentions together with exhaustive relation annotations generated by domain experts. TRACK AND CORPUS DESCRIPTION Corpus description To carry out the DrugProt track at BioCreative VII, we have released a large manually labelled corpus including annotations of mentions of chemical compounds and drugs as well as genes, proteins and miRNAs. 
Domain experts with experience in biomedical literature annotation and database curation annotated all abstracts by hand using the BRAT annotation interface. The manual labeling of chemicals and genes was done in separate steps and by different experts to avoid introducing biases during the text annotation process. The manual tagging of entity mentions of chemicals and drugs as well as genes, proteins and miRNAs was done following a carefully designed annotation process and in line with publicly released annotation guidelines. Gene/protein entity mentions were manually mapped to their corresponding biological database identifiers whenever possible and classified as either normalizable to databases (tag: GENE-Y) or non-normalizable mentions (GENE-N). Teams that participated in the DrugProt track were only provided with this classification of gene mentions and not the actual database identifier to avoid usage of external knowledge bases for producing their predictions. The corpus construction process required first annotating exhaustively all chemical and gene mentions (phase 1). Afterwards, the relation annotation phase followed (phase 2), where relationships between these two types of entities had to be labeled according to publicly available annotation guidelines. Thus, to facilitate the annotation of chemical-protein interactions, the DrugProt track organizers constructed very granular relation annotation rules described in a 33-page annotation guidelines document. These guidelines were refined during an iterative process based on the annotation of sample documents. The guidelines provided the basic details of the chemical-protein interaction annotation task and the conventions that had to be followed during the corpus construction process. They incorporated suggestions made by curators as well as observations of annotation inconsistencies encountered when comparing results from different human curators. In brief, DrugProt interactions covered direct interactions (when a physical contact existed between a chemical/drug and a gene/protein) as well as indirect regulatory interactions that alter either the function or the quantity of the gene/gene product. The aim of the iterative manual annotation cycle was to improve the quality and consistency of the guidelines. During the planning of the guidelines, some rules had to be reformulated to make them more explicit and clear and additional rules were added wherever necessary to better cover the practical annotation scenario and for being more complete. The manual annotation task basically consisted of manually labeling or marking the interactions through a customized BRAT web interface, given the article abstracts as content. Figure 1 summarizes the DrugProt relation types included in the annotation guidelines. Fig. 1. Overview of the DrugProt relation type hierarchy. The corpus annotation carried out for the DrugProt track was exhaustive for all the types of interactions previously specified. This implied that mentions of other kinds of relationships between chemicals and genes (e.g. phenotypic and biological responses) were not manually labelled. Moreover, the DrugProt relations are directed in the sense that only relations of \u201cwhat a chemical does to a gene/protein\u201d (chemical \u2192 gene/protein direction) were annotated, and not vice versa. To establish an easy-to-understand relation nomenclature and avoid redundant class definitions, we reviewed several chemical repositories that included chemical \u2013 biology information. 
We revised DrugBank, the Therapeutic Targets Database (TTD) and ChEMBL, assay normalization ontologies (BAO) and previously existing formalizations for the annotation of relationships: the Biological Expression Language (BEL), curation guidelines for transcription regulation interactions (DNA-binding transcription factor \u2013 target gene interaction) and SIGNOR, a database of causal relationships between biological entities. Each of these resources inspired the definition of the subclasses DIRECT REGULATOR (e.g. DrugBank, ChEMBL, BAO and SIGNOR) and the INDIRECT REGULATOR (e.g. BEL, curation guidelines for transcription regulation interactions and SIGNOR). For example, DrugBank relationships for drugs included a total of 22 definitions, some of them overlapping with CHEMPROT subclasses (e.g. \u201cInhibitor\u201d, \u201cAntagonist\u201d, \u201cAgonist\u201d,...), some of them being regarded as highly specific for the purpose of this task (e.g. \u201cintercalation\u201d, \u201ccross-linking/alkylation\u201d) or referring to biological roles (e.g. \u201cAntibody\u201d, \u201cIncorporation into and Destabilization\u201d) and others, partially overlapping between them (e.g. \u201cBinder\u201d and \u201cLigand\u201d), that were merged into a single class. Concerning indirect regulatory aspects, the five classes of causal relationships between a subject and an object term defined by BEL (\u201cdecreases\u201d, \u201cdirectlyDecreases\u201d, \u201cincreases\u201d, \u201cdirectlyIncreases\u201d and \u201ccausesNoChange\u201d) were highly inspiring. Subclass definitions of pharmacological modes of action were defined according to the IUPHAR/BPS Guide to Pharmacology in 2016. For the DrugProt track a very granular chemical-protein relation annotation was carried out, with the aim to cover most of the relations that are of importance from a biochemical and pharmacological/biomedical perspective. Nevertheless, for the DrugProt track only a total of 13 relation types were used, keeping those that had enough training instances/examples and sufficient manual annotation consistency. The final list of relation types used for this shared task was: INDIRECT-DOWNREGULATOR, INDIRECT-UPREGULATOR, DIRECT-REGULATOR, ACTIVATOR, INHIBITOR, AGONIST, ANTAGONIST, AGONIST-ACTIVATOR, AGONIST-INHIBITOR, PRODUCT-OF, SUBSTRATE, SUBSTRATE_PRODUCT-OF or PART-OF. The DrugProt corpus was split randomly into training, development and test sets. We also included a background and large-scale background collection of records that were automatically annotated with drugs/chemicals and genes/proteins/miRNAs using an entity tagger trained on the manual DrugProt entity mentions. The background collections were merged with the test set to be able to get team predictions also for these records. Table 1 shows a su", "which Relation types ?", "Substrate", 920.0, 929.0], ["Software contributions to academic research are relatively invisible, especially to the formalized scholarly reputation system based on bibliometrics. In this article, we introduce a gold\u2010standard dataset of software mentions from the manual annotation of 4,971 academic PDFs in biomedicine and economics. The dataset is intended to be used for automatic extraction of software mentions from PDF-format research publications by supervised learning at scale. We provide a description of the dataset and an extended discussion of its creation process, including improved text conversion of academic PDFs. 
Finally, we reflect on our challenges and lessons learned during the dataset creation, in the hope of encouraging more discussion about creating datasets for machine learning use.", "which Relation types ?", "Version", NaN, NaN], ["Considering recent progress in NLP, deep learning techniques and biomedical language models, there is a pressing need to generate annotated resources and comparable evaluation scenarios that enable the development of advanced biomedical relation extraction systems that extract interactions between drugs/chemical entities and genes, proteins or miRNAs. Building on the results and experience of the CHEMDNER, CHEMDNER patents and ChemProt tracks, we have posed the DrugProt track at BioCreative VII. The DrugProt track focused on the evaluation of automatic systems able to extract 13 different types of drug-genes/protein relations of importance to understand gene regulatory and pharmacological mechanisms. The DrugProt track addressed regulatory associations (direct/indirect, activator/inhibitor relations), certain types of binding associations (antagonist and agonist relations) as well as metabolic associations (substrate or product relations). To promote development of novel tools and offer a comparative evaluation scenario, we have released 61,775 manually annotated gene mentions, 65,561 chemical and drug mentions and a total of 24,526 relationships manually labeled by domain experts. A total of 30 teams submitted results for the DrugProt main track, while 9 teams submitted results for the large-scale text mining subtrack that required processing of over 2.3 million records. Teams obtained very competitive results, with predictions reaching f-measures of over 0.92 for some relation types (antagonist) and f-measures across all relation types close to 0.8. INTRODUCTION Among the most relevant biological and pharmacological relation types are those that involve (a) chemical compounds and drugs as well as (b) gene products including genes, proteins, miRNAs. A variety of associations between chemicals and genes/proteins are described in the biomedical literature, and there is a growing interest in facilitating a more systematic extraction of these relations from the literature, either for manual database curation initiatives or to generate large knowledge graphs of importance for drug discovery, drug repurposing, building regulatory or interaction networks or to characterize off-target interactions of drugs that might be of importance to better understand adverse drug reactions. At BioCreative VI, the ChemProt track tried to promote the development of novel systems between chemicals and genes for groups of biologically related association types (ChemProt track relation groups or CPRs). Although the obtained results did have a considerable impact on the development and evaluation of new biomedical relation extraction systems, a limitation of grouping more specific relation types into broader groups was the difficulty of directly exploiting the results for database curation efforts and biomedical knowledge graph mining application scenarios. The considerable interest in the integration of chemical and biomedical data for drug-discovery purposes, together with the ongoing curation of relationships between biological and chemical entities from scientific publications and patents due to the recent COVID-19 pandemic, motivated the DrugProt track of BioCreative VII, which proposed using more granular relation types. 
In order to facilitate the development of more granular relation extraction systems, large manually annotated corpora are needed. Those corpora should include high-quality manually labeled entity mentions together with exhaustive relation annotations generated by domain experts. TRACK AND CORPUS DESCRIPTION Corpus description To carry out the DrugProt track at BioCreative VII, we have released a large manually labelled corpus including annotations of mentions of chemical compounds and drugs as well as genes, proteins and miRNAs. Domain experts with experience in biomedical literature annotation and database curation annotated by hand all abstracts using the BRAT annotation interface. The manual labeling of chemicals and genes was done in separate steps and by different experts to avoid introducing biases during the text annotation process. The manual tagging of entity mentions of chemicals and drugs as well as genes, proteins and miRNAs was done following a carefully designed annotation process and in line with publicly released annotation guidelines. Gene/protein entity mentions were manually mapped to their corresponding biological database identifiers whenever possible and classified as either normalizable to databases (tag: GENE-Y) or non-normalizable mentions (GENE-N). Teams that participated in the DrugProt track were only provided with this classification of gene mentions and not the actual database identifier to avoid usage of external knowledge bases for producing their predictions. The corpus construction process required first annotating exhaustively all chemical and gene mentions (phase 1). Afterwards, the relation annotation phase followed (phase 2), where relationships between these two types of entities had to be labeled according to publicly available annotation guidelines. Thus, to facilitate the annotation of chemical-protein interactions, the DrugProt track organizers constructed very granular relation annotation rules described in a 33-page annotation guidelines document. These guidelines were refined during an iterative process based on the annotation of sample documents. The guidelines provided the basic details of the chemical-protein interaction annotation task and the conventions that had to be followed during the corpus construction process. They incorporated suggestions made by curators as well as observations of annotation inconsistencies encountered when comparing results from different human curators. In brief, DrugProt interactions covered direct interactions (when a physical contact existed between a chemical/drug and a gene/protein) as well as indirect regulatory interactions that alter either the function or the quantity of the gene/gene product. The aim of the iterative manual annotation cycle was to improve the quality and consistency of the guidelines. During the planning of the guidelines some rules had to be reformulated to make them more explicit and clear, and additional rules were added wherever necessary to better cover the practical annotation scenario and to be more complete. The manual annotation task basically consisted of labeling or marking manually, through a customized BRAT web interface, the interactions given the article abstracts as content. Figure 1 summarizes the DrugProt relation types included in the annotation guidelines. Fig. 1. Overview of the DrugProt relation type hierarchy. The corpus annotation carried out for the DrugProt track was exhaustive for all the types of interactions previously specified. 
This implied that mentions of other kinds of relationships between chemicals and genes (e.g. phenotypic and biological responses) were not manually labelled. Moreover, the DrugProt relations are directed in the sense that only relations of \u201cwhat a chemical does to a gene/protein\u201d (chemical \u2192 gene/protein direction) were annotated, and not vice versa. To establish an easy-to-understand relation nomenclature and avoid redundant class definitions, we reviewed several chemical repositories that included chemical \u2013 biology information. We examined DrugBank, the Therapeutic Targets Database (TTD) and ChEMBL, assay normalization ontologies (BAO) and previously existing formalizations for the annotation of relationships: the Biological Expression Language (BEL), curation guidelines for transcription regulation interactions (DNA-binding transcription factor \u2013 target gene interaction) and SIGNOR, a database of causal relationships between biological entities. Each of these resources inspired the definition of the subclasses DIRECT REGULATOR (e.g. DrugBank, ChEMBL, BAO and SIGNOR) and INDIRECT REGULATOR (e.g. BEL, curation guidelines for transcription regulation interactions and SIGNOR). For example, DrugBank relationships for drugs included a total of 22 definitions, some of them overlapping with CHEMPROT subclasses (e.g. \u201cInhibitor\u201d, \u201cAntagonist\u201d, \u201cAgonist\u201d,...), some of them being regarded as highly specific for the purpose of this task (e.g. \u201cintercalation\u201d, \u201ccross-linking/alkylation\u201d) or referring to biological roles (e.g. \u201cAntibody\u201d, \u201cIncorporation into and Destabilization\u201d) and others, partially overlapping with each other (e.g. \u201cBinder\u201d and \u201cLigand\u201d), that were merged into a single class. Concerning indirect regulatory aspects, the five classes of causal relationships between a subject and an object term defined by BEL (\u201cdecreases\u201d, \u201cdirectlyDecreases\u201d, \u201cincreases\u201d, \u201cdirectlyIncreases\u201d and \u201ccausesNoChange\u201d) were highly inspiring. Subclass definitions of pharmacological modes of action were defined according to the IUPHAR/BPS Guide to Pharmacology in 2016. For the DrugProt track a very granular chemical-protein relation annotation was carried out, with the aim of covering most of the relations that are of importance from a biochemical and pharmacological/biomedical perspective. Nevertheless, for the DrugProt track only a total of 13 relation types were used, keeping those that had enough training instances/examples and sufficient manual annotation consistency. The final list of relation types used for this shared task was: INDIRECT-DOWNREGULATOR, INDIRECT-UPREGULATOR, DIRECT-REGULATOR, ACTIVATOR, INHIBITOR, AGONIST, ANTAGONIST, AGONIST-ACTIVATOR, AGONIST-INHIBITOR, PRODUCT-OF, SUBSTRATE, SUBSTRATE_PRODUCT-OF or PART-OF. The DrugProt corpus was split randomly into training, development and test sets. We also included a background and a large-scale background collection of records that were automatically annotated with drugs/chemicals and genes/proteins/miRNAs using an entity tagger trained on the manual DrugProt entity mentions. The background collections were merged with the test set to be able to get team predictions also for these records. 
Table 1 shows a su", "which Relation types ?", "Agonist", 866.0, 873.0], ["Science across all disciplines has become increasingly data-driven, leading to additional needs with respect to software for collecting, processing and analysing data. Thus, transparency about software used as part of the scientific process is crucial to understanding the provenance of individual research data and insights, is a prerequisite for reproducibility and can enable macro-analysis of the evolution of scientific methods over time. However, missing rigor in software citation practices renders the automated detection and disambiguation of software mentions a challenging problem. In this work, we provide a large-scale analysis of software usage and citation practices facilitated through an unprecedented knowledge graph of software mentions and affiliated metadata generated through supervised information extraction models trained on a unique gold standard corpus and applied to more than 3 million scientific articles. Our information extraction approach distinguishes different types of software and mentions, disambiguates mentions and outperforms the state-of-the-art significantly, leading to the most comprehensive corpus of 11.8 M software mentions that are described through a knowledge graph consisting of more than 300 M triples. Our analysis provides insights into the evolution of software usage and citation patterns across various fields, ranks of journals, and impact of publications. While, to the best of our knowledge, this is the most comprehensive analysis of software use and citation to date, all data and models are shared publicly to facilitate further research into scientific use and citation of software.", "which Relation types ?", "Citation", 472.0, 480.0], ["Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci (Software Mentions in Science), a gold standard knowledge graph of software mentions in scientific articles. It contains high-quality annotations (IRR: \u03ba=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.", "which Relation types ?", "Version", 792.0, 799.0], ["This paper describes the first task on semantic relation extraction and classification in scientific paper abstracts at SemEval 2018. The challenge focuses on domain-specific semantic relations and includes three different subtasks. The subtasks were designed so as to compare and quantify the effect of different pre-processing steps on the relation classification results. 
We expect the task to be relevant for a broad range of researchers working on extracting specialized knowledge from domain corpora, for example but not limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants, with 158 submissions across different scenarios.", "which Relation types ?", "Result", NaN, NaN], ["Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task\u2014a result that approaches the human inter-annotator agreement (0.8875)\u2014and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system\u2019s ability to return real-time results: the average response times for each team\u2019s DNER and CID web service systems were 5.6 and 9.3 s, respectively. Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/", "which Relation types ?", "Chemical-disease relation", 303.0, 328.0], ["We present two related tasks of the BioNLP Shared Tasks 2011: Bacteria Gene Renaming (Rename) and Bacteria Gene Interactions (GI). We detail the objectives, the corpus specification, the evaluation metrics, and we summarize the participants' results. Both tasks draw on PubMed scientific literature abstracts; the Rename task aims at extracting gene name synonyms, and the GI task aims at extracting genic interaction events, mainly about gene transcriptional regulations in bacteria.", "which Relation types ?", "Interaction", 404.0, 415.0], ["Considering recent progress in NLP, deep learning techniques and biomedical language models, there is a pressing need to generate annotated resources and comparable evaluation scenarios that enable the development of advanced biomedical relation extraction systems that extract interactions between drugs/chemical entities and genes, proteins or miRNAs. 
Building on the results and experience of the CHEMDNER, CHEMDNER patents and ChemProt tracks, we have posed the DrugProt track at BioCreative VII. The DrugProt track focused on the evaluation of automatic systems able to extract 13 different types of drug-genes/protein relations of importance to understand gene regulatory and pharmacological mechanisms. The DrugProt track addressed regulatory associations (direct/indirect, activator/inhibitor relations), certain types of binding associations (antagonist and agonist relations) as well as metabolic associations (substrate or product relations). To promote development of novel tools and offer a comparative evaluation scenario, we have released 61,775 manually annotated gene mentions, 65,561 chemical and drug mentions and a total of 24,526 relationships manually labeled by domain experts. A total of 30 teams submitted results for the DrugProt main track, while 9 teams submitted results for the large-scale text mining subtrack that required processing of over 2.3 million records. Teams obtained very competitive results, with predictions reaching f-measures of over 0.92 for some relation types (antagonist) and f-measures across all relation types close to 0.8. INTRODUCTION Among the most relevant biological and pharmacological relation types are those that involve (a) chemical compounds and drugs as well as (b) gene products including genes, proteins, miRNAs. A variety of associations between chemicals and genes/proteins are described in the biomedical literature, and there is a growing interest in facilitating a more systematic extraction of these relations from the literature, either for manual database curation initiatives or to generate large knowledge graphs of importance for drug discovery, drug repurposing, building regulatory or interaction networks or to characterize off-target interactions of drugs that might be of importance to better understand adverse drug reactions. At BioCreative VI, the ChemProt track tried to promote the development of novel systems between chemicals and genes for groups of biologically related association types (ChemProt track relation groups or CPRs). Although the obtained results did have a considerable impact on the development and evaluation of new biomedical relation extraction systems, a limitation of grouping more specific relation types into broader groups was the difficulty of directly exploiting the results for database curation efforts and biomedical knowledge graph mining application scenarios. The considerable interest in the integration of chemical and biomedical data for drug-discovery purposes, together with the ongoing curation of relationships between biological and chemical entities from scientific publications and patents due to the recent COVID-19 pandemic, motivated the DrugProt track of BioCreative VII, which proposed using more granular relation types. In order to facilitate the development of more granular relation extraction systems, large manually annotated corpora are needed. Those corpora should include high-quality manually labeled entity mentions together with exhaustive relation annotations generated by domain experts. TRACK AND CORPUS DESCRIPTION Corpus description To carry out the DrugProt track at BioCreative VII, we have released a large manually labelled corpus including annotations of mentions of chemical compounds and drugs as well as genes, proteins and miRNAs. 
Domain experts with experience in biomedical literature annotation and database curation annotated by hand all abstracts using the BRAT annotation interface. The manual labeling of chemicals and genes was done in separate steps and by different experts to avoid introducing biases during the text annotation process. The manual tagging of entity mentions of chemicals and drugs as well as genes, proteins and miRNAs was done following a carefully designed annotation process and in line with publicly released annotation guidelines. Gene/protein entity mentions were manually mapped to their corresponding biological database identifiers whenever possible and classified as either normalizable to databases (tag: GENE-Y) or non-normalizable mentions (GENE-N). Teams that participated in the DrugProt track were only provided with this classification of gene mentions and not the actual database identifier to avoid usage of external knowledge bases for producing their predictions. The corpus construction process required first annotating exhaustively all chemical and gene mentions (phase 1). Afterwards, the relation annotation phase followed (phase 2), where relationships between these two types of entities had to be labeled according to publicly available annotation guidelines. Thus, to facilitate the annotation of chemical-protein interactions, the DrugProt track organizers constructed very granular relation annotation rules described in a 33-page annotation guidelines document. These guidelines were refined during an iterative process based on the annotation of sample documents. The guidelines provided the basic details of the chemical-protein interaction annotation task and the conventions that had to be followed during the corpus construction process. They incorporated suggestions made by curators as well as observations of annotation inconsistencies encountered when comparing results from different human curators. In brief, DrugProt interactions covered direct interactions (when a physical contact existed between a chemical/drug and a gene/protein) as well as indirect regulatory interactions that alter either the function or the quantity of the gene/gene product. The aim of the iterative manual annotation cycle was to improve the quality and consistency of the guidelines. During the planning of the guidelines some rules had to be reformulated to make them more explicit and clear, and additional rules were added wherever necessary to better cover the practical annotation scenario and to be more complete. The manual annotation task basically consisted of labeling or marking manually, through a customized BRAT web interface, the interactions given the article abstracts as content. Figure 1 summarizes the DrugProt relation types included in the annotation guidelines. Fig. 1. Overview of the DrugProt relation type hierarchy. The corpus annotation carried out for the DrugProt track was exhaustive for all the types of interactions previously specified. This implied that mentions of other kinds of relationships between chemicals and genes (e.g. phenotypic and biological responses) were not manually labelled. Moreover, the DrugProt relations are directed in the sense that only relations of \u201cwhat a chemical does to a gene/protein\u201d (chemical \u2192 gene/protein direction) were annotated, and not vice versa. To establish an easy-to-understand relation nomenclature and avoid redundant class definitions, we reviewed several chemical repositories that included chemical \u2013 biology information. 
We examined DrugBank, the Therapeutic Targets Database (TTD) and ChEMBL, assay normalization ontologies (BAO) and previously existing formalizations for the annotation of relationships: the Biological Expression Language (BEL), curation guidelines for transcription regulation interactions (DNA-binding transcription factor \u2013 target gene interaction) and SIGNOR, a database of causal relationships between biological entities. Each of these resources inspired the definition of the subclasses DIRECT REGULATOR (e.g. DrugBank, ChEMBL, BAO and SIGNOR) and INDIRECT REGULATOR (e.g. BEL, curation guidelines for transcription regulation interactions and SIGNOR). For example, DrugBank relationships for drugs included a total of 22 definitions, some of them overlapping with CHEMPROT subclasses (e.g. \u201cInhibitor\u201d, \u201cAntagonist\u201d, \u201cAgonist\u201d,...), some of them being regarded as highly specific for the purpose of this task (e.g. \u201cintercalation\u201d, \u201ccross-linking/alkylation\u201d) or referring to biological roles (e.g. \u201cAntibody\u201d, \u201cIncorporation into and Destabilization\u201d) and others, partially overlapping with each other (e.g. \u201cBinder\u201d and \u201cLigand\u201d), that were merged into a single class. Concerning indirect regulatory aspects, the five classes of causal relationships between a subject and an object term defined by BEL (\u201cdecreases\u201d, \u201cdirectlyDecreases\u201d, \u201cincreases\u201d, \u201cdirectlyIncreases\u201d and \u201ccausesNoChange\u201d) were highly inspiring. Subclass definitions of pharmacological modes of action were defined according to the IUPHAR/BPS Guide to Pharmacology in 2016. For the DrugProt track a very granular chemical-protein relation annotation was carried out, with the aim of covering most of the relations that are of importance from a biochemical and pharmacological/biomedical perspective. Nevertheless, for the DrugProt track only a total of 13 relation types were used, keeping those that had enough training instances/examples and sufficient manual annotation consistency. The final list of relation types used for this shared task was: INDIRECT-DOWNREGULATOR, INDIRECT-UPREGULATOR, DIRECT-REGULATOR, ACTIVATOR, INHIBITOR, AGONIST, ANTAGONIST, AGONIST-ACTIVATOR, AGONIST-INHIBITOR, PRODUCT-OF, SUBSTRATE, SUBSTRATE_PRODUCT-OF or PART-OF. The DrugProt corpus was split randomly into training, development and test sets. We also included a background and a large-scale background collection of records that were automatically annotated with drugs/chemicals and genes/proteins/miRNAs using an entity tagger trained on the manual DrugProt entity mentions. The background collections were merged with the test set to be able to get team predictions also for these records. Table 1 shows a su", "which Relation types ?", "Direct Regulator", 7803.0, 7819.0], ["We present two related tasks of the BioNLP Shared Tasks 2011: Bacteria Gene Renaming (Rename) and Bacteria Gene Interactions (GI). We detail the objectives, the corpus specification, the evaluation metrics, and we summarize the participants' results. 
Both tasks draw on PubMed scientific literature abstracts; the Rename task aims at extracting gene name synonyms, and the GI task aims at extracting genic interaction events, mainly about gene transcriptional regulations in bacteria.", "which Relation types ?", "Rename", 86.0, 92.0], ["Based on the Environmental Kuznets Curve (EKC) hypothesis, this paper uses panel cointegration techniques to investigate the short- and long-run relationship between CO2 emissions, gross domestic product (GDP), renewable energy consumption and international trade for a panel of 24 sub-Saharan African countries over the period 1980\u20132010. Short-run Granger causality results reveal that there is a bidirectional causality between emissions and economic growth; bidirectional causality between emissions and real exports; unidirectional causality from real imports to emissions; and unidirectional causality runs from trade (exports or imports) to renewable energy consumption. There is an indirect short-run causality running from emissions to renewable energy and an indirect short-run causality from GDP to renewable energy. In the long run, the error correction term is statistically significant for emissions, renewable energy consumption and trade. The long-run estimates suggest that the inverted U-shaped EKC hypothesis is not supported for these countries; exports have a positive impact on CO2 emissions, whereas imports have a negative impact on CO2 emissions. As a policy recommendation, sub-Saharan African countries should expand their trade exchanges particularly with developed countries and try to maximize their benefit from technology transfer occurring when importing capital goods, as this may increase their renewable energy consumption and reduce CO2 emissions.", "which Type of data ?", "Panel", 102.0, 107.0], ["The present study examines whether the Race to the Bottom and Revised EKC scenarios presented by Dasgupta and others (2002) are, with regard to the analytical framework of the Environmental Kuznets Curve (EKC), applicable in Asia to representative environmental indices, such as sulphur emissions and carbon emissions. To carry out this study, a generalized method of moments (GMM) estimation was performed, using panel data of 19 economies for the period 1950-2009. The main findings of the analysis on the validity of the EKC indicate that sulphur emissions follow the expected inverted U-shape pattern, while carbon emissions tend to increase in line with per capita income in the observed range. As for the Race to the Bottom and Revised EKC scenarios, the latter was verified in sulphur emissions, as their EKC trajectories represent a linkage of the later development of the economy with the lower level of emissions, while the former was not present in either sulphur or carbon emissions.", "which Type of data ?", "Panel", 409.0, 414.0], ["This paper investigates the relationship between CO2 emission, real GDP, energy consumption, urbanization and trade openness for 10 selected Central and Eastern European Countries (CEECs), including Albania, Bulgaria, Croatia, Czech Republic, Macedonia, Hungary, Poland, Romania, Slovak Republic and Slovenia for the period of 1991\u20132011. The results show that the environmental Kuznets curve (EKC) hypothesis holds for these countries. The fully modified ordinary least squares (FMOLS) results reveal that a 1% increase in energy consumption leads to a 1.0863% increase in CO2 emissions. 
Results of the panel Vector Error Correction Model (VECM) Granger causality method show that there are bidirectional causal relationships between CO2 emissions and real GDP, and between energy consumption and real GDP as well.", "which Type of data ?", "Panel", 636.0, 641.0], ["This paper examines the relationship between per capita income and a wide range of environmental indicators using cross-country panel sets. The manner in which this has been done overcomes several of the weaknesses associated with the estimation of environmental Kuznets curves (EKCs) outlined by Stern et al. (1996). Results suggest that meaningful EKCs exist only for local air pollutants whilst indicators with a more global, or indirect, impact either increase monotonically with income or else have predicted turning points at high per capita income levels with large standard errors \u2013 unless they have been subjected to a multilateral policy initiative. Two other findings are also made: that concentrations of local pollutants in urban areas peak at a lower per capita income level than total emissions per capita; and that transport-generated local air pollutants peak at a higher per capita income level than total emissions per capita. Given these findings, suggestions are made regarding the necessary future direction of environmental policy.", "which Type of data ?", "Panel", 128.0, 133.0], ["This study aims to examine the relationship between income and environmental degradation in West Africa and ascertain the validity of the EKC hypothesis in the region. The study adopted a panel data approach for fifteen West African countries for the period 1980-2012. The available results from our estimation procedure confirmed the EKC theory in the region. At early development stages, pollution rises with income; after reaching a turning point, pollution dwindles with increasing income, as indicated by the significant inverse relation between income and environmental degradation. Consequently, literacy level and sound institutional arrangements were found to contribute significantly to mitigating the extent of environmental degradation. Among the notable recommendations are the need for awareness campaigns on environmental abatement and adaptation strategies, the strengthening of institutions to discourage the production and dumping of pollution-emitting commodities, and the encouragement of adopting cleaner technologies.", "which Type of data ?", "Panel", 184.0, 189.0], ["Abstract Semantic embedding of knowledge graphs has been widely studied and used for prediction and statistical analysis tasks across various domains such as Natural Language Processing and the Semantic Web. However, less attention has been paid to developing robust methods for embedding OWL (Web Ontology Language) ontologies, which contain richer semantic information than plain knowledge graphs, and have been widely adopted in domains such as bioinformatics. In this paper, we propose a random walk and word embedding based ontology embedding method named , which encodes the semantics of an OWL ontology by taking into account its graph structure, lexical information and logical constructors. Our empirical evaluation with three real-world datasets suggests that benefits from these three different aspects of an ontology in class membership prediction and class subsumption prediction tasks. 
Furthermore, often significantly outperforms the state-of-the-art methods in our experiments.", "which Type of data ?", "OWL", 289.0, 292.0], ["This article applies the dynamic panel generalized method of moments technique to reexamine the environmental Kuznets curve (EKC) hypothesis for carbon dioxide (CO2) emissions and asks two critical questions: \"Does the global data set fit the EKC hypothesis?\" and \"Do different income levels or regions influence the results of the EKC?\" We find evidence of the EKC hypothesis for CO2 emissions in a global data set, middle-income, and American and European countries, but not in other income levels and regions. Thus, the hypothesis that one size fits all cannot be supported for the EKC, and even more importantly, results, robustness checking, and implications emerge. Copyright 2009 Agricultural and Applied Economics Association", "which Type of data ?", "Panel", 33.0, 38.0], ["Purpose \u2013 The purpose of this paper is to examine the relationship among environmental pollution, economic growth and energy consumption per capita in the case of Pakistan. The per capita carbon dioxide (CO2) emission is used as the environmental indicator, the commercial energy use per capita as the energy consumption indicator, and the per capita gross domestic product (GDP) as the economic indicator. Design/methodology/approach \u2013 The investigation is made on the basis of the environmental Kuznets curve (EKC), using time series data from 1971 to 2006, by applying different econometric tools like ADF unit root, Johansen co\u2010integration, VECM and Granger causality tests. Findings \u2013 The Granger causality test shows that there is a long-term relationship between these three indicators, with bidirectional causality between per capita CO2 emission and per capita energy consumption. A monotonically increasing curve between GDP and CO2 emission has been found for the sample period, rejecting the EKC relationship, i...", "which Type of data ?", "Time series", 524.0, 535.0], ["In the last few years, several studies have found an inverted-U relationship between per capita income and environmental degradation. This relationship, known as the environmental Kuznets curve (EKC), suggests that environmental degradation increases in the early stages of growth, but it eventually decreases as income exceeds a threshold level. However, this paper investigates the relationship between per capita CO2 emissions, economic growth and trade liberalization based on econometric techniques of unit root testing, co-integration and a panel data set during the period 1960-1996 for BRICS countries. Data properties were analyzed to determine their stationarity using the LLC, IPS, ADF and PP unit root tests, which indicated that the series are I(1). We find a cointegration relationship between per capita CO2 emissions, economic growth and trade liberalization by applying the Kao panel cointegration test. The evidence indicates that in the long run trade liberalization has a significant positive impact on CO2 emissions, and the impact of trade liberalization on emissions growth depends on the level of income. Our findings suggest that there is a quadratic relationship between real GDP and CO2 emissions for the region as a whole. The estimated long-run coefficients of real GDP and its square satisfy the EKC hypothesis in all of the studied countries. Our estimation shows that the inflection point, or optimal point, of real GDP per capita is about 5269.4 dollars. 
The results show that on average, sample countries are on the positive side of the inverted U curve. The turning points are very low in some cases and very high in other cases, hence providing poor evidence in support of the EKC hypothesis. Thus, our findings suggest that all BRICS countries need to sacrifice economic growth to decrease their emission levels.", "which Type of data ?", "Panel", 541.0, 546.0], ["Abstract Based on the Environmental Kuznets Curve theory, the authors choose provincial panel data of China in 1990\u20132007 and adopt panel unit root and co-integration testing methods to study whether there is an Environmental Kuznets Curve for China\u2019s carbon emissions. The research results show that carbon emissions per capita of the eastern region and the central region of China fit into the Environmental Kuznets Curve, but that of the western region does not. On this basis, the authors carry out scenario analysis on the occurrence time of the inflection point of carbon emissions per capita of different regions, and describe a specific time path.", "which Type of data ?", "Panel", 88.0, 93.0], ["Previous studies show that the environmental quality and economic growth can be represented by the inverted U curve called the Environmental Kuznets Curve (EKC). In this study, we conduct empirical analyses on detecting the existence of the EKC using five common pollutant emissions (i.e. CO2, SO2, BOD, SPM10, and GHG) as proxies for environmental quality. The data span the years 1961 to 2009 and cover 40 countries. We seek to investigate if the EKC hypothesis holds in two groups of economies, i.e. developed versus developing economies. Applying a panel data approach, our results show that the EKC does not hold in all countries. We also detect the existence of a U shape and an increasing trend in other cases. The results reveal that CO2 and SPM10 are good proxies for environmental pollutants and they can be explained well by GDP. Also, it is observed that the developed countries have higher turning points than the developing countries. Higher economic growth may lead to different impacts on environmental quality in different economies.", "which Type of data ?", "Panel", 550.0, 555.0], ["Learning low-dimensional embeddings of knowledge graphs is a powerful approach used to predict unobserved or missing edges between entities. However, an open challenge in this area is developing techniques that can go beyond simple edge prediction and handle more complex logical queries, which might involve multiple unobserved edges, entities, and variables. For instance, given an incomplete biological knowledge graph, we might want to predict \"what drugs are likely to target proteins involved with both diseases X and Y?\" -- a query that requires reasoning about all possible proteins that might interact with diseases X and Y. Here we introduce a framework to efficiently make predictions about conjunctive logical queries -- a flexible but tractable subset of first-order logic -- on incomplete knowledge graphs. In our approach, we embed graph nodes in a low-dimensional space and represent logical operators as learned geometric operations (e.g., translation, rotation) in this embedding space. By performing logical operations within a low-dimensional embedding space, our approach achieves a time complexity that is linear in the number of query variables, compared to the exponential complexity required by a naive enumeration-based approach. 
We demonstrate the utility of this framework in two application studies on real-world datasets with millions of relations: predicting logical relationships in a network of drug-gene-disease interactions and in a graph-based representation of social interactions derived from a popular web forum.", "which Type of data ?", "Graph", 416.0, 421.0], ["It has been forecasted by many economists that in the next couple of decades the BRICS economies are going to experience unprecedented economic growth. This massive economic growth would definitely have a detrimental impact on the environment, since these economies, like others, would extract their environmental and natural resources on a larger scale in the process of their economic growth. Therefore, maintaining environmental quality while growing has become a major challenge for these economies. However, the proponents of the Environmental Kuznets Curve (EKC) Hypothesis, an inverted U-shaped relationship between income and emissions per capita, suggest that BRICS economies need not bother too much about environmental quality while growing, because growth would eventually take care of the environment once a certain level of per capita income is achieved. Against this backdrop, the present study makes an attempt to estimate an EKC-type relationship, if any, between income and emissions in the context of the BRICS countries for the period 1997 to 2011. Therefore, the study first adopts a fixed effect (FE) panel data model to control for time-constant country-specific effects, and then uses a Generalized Method of Moments (GMM) approach for dynamic panel data to address the endogeneity of the income variable and the dynamism in emissions per capita. Apart from income, we also include variables related to financial sector development and energy utilization to explain emissions. The fixed effect model shows a significant EKC-type relation between income and emissions, supporting the previous literature. However, GMM estimates for the dynamic panel model show the relationship between income and emissions is actually U-shaped, with the turning point being out of sample. This out-of-sample turning point indicates that emissions have been growing monotonically with growth in income. Factors like net energy imports and the share of industrial output in GDP are found to be significant and to have a detrimental impact on the environment in the dynamic panel model. However, these variables are found to be insignificant in the FE model. Capital account convertibility shows a significant negative impact on the environment irrespective of the model used. The monotonically increasing relationship between income and emissions suggests the BRICS economies must adopt efficiency-oriented action plans so that they can grow without putting much pressure on the environment. These findings can have important policy implications, as BRICS countries mainly depend on these factors for their growth but at the same time these factors can pose a serious threat to the environment.", "which Type of data ?", "Panel", 1102.0, 1107.0], ["Purpose \u2013 The purpose of this paper is to analyse the implication of trade on carbon emissions in a panel of eight highly trading Southeast and East Asian countries, namely, China, Indonesia, South Korea, Malaysia, Hong Kong, The Philippines, Singapore and Thailand. Design/methodology/approach \u2013 The analysis relies on the standard quadratic environmental Kuznets curve (EKC) extended to include energy consumption and international trade. 
A battery of panel unit root and co-integration tests is applied to establish the variables\u2019 stochastic properties and their long-run relations. Then, the specified EKC is estimated using the panel dynamic ordinary least square (OLS) estimation technique. Findings \u2013 The panel co-integration statistics verify the validity of the extended EKC for the countries under study. Estimation of the long-run EKC via the dynamic OLS estimation method reveals the environmentally degrading effects of trade in these countries, especially in ASEAN plus South Korea and Hong Kong. Pra...", "which Type of data ?", "Panel", 100.0, 105.0], ["The aim of this paper is to investigate the existence of an environmental Kuznets curve (EKC) in an open economy like Tunisia using annual time series data for the period of 1971-2010. The ARDL bounds testing approach to cointegration is applied to test the long-run relationship in the presence of structural breaks, and a vector error correction model (VECM) to detect the causality among the variables. The robustness of the causality analysis has been tested by applying the innovative accounting approach (IAA). The findings of this paper confirmed the long-run relationship between economic growth, energy consumption, trade openness and CO2 emissions in the Tunisian economy. The results also indicated the existence of the EKC, confirmed by the VECM and IAA approaches. The study has significant policy implications for curtailing energy pollutants by implementing environment-friendly regulations to sustain the economic development in Tunisia.", "which Type of data ?", "Time series", 136.0, 147.0], ["This article investigates the Environmental Kuznets Curves (EKC) for CO2 emissions in a panel of 109 countries during the period 1959 to 2001. The length of the series makes the application of a heterogeneous estimator suitable from an econometric point of view. The results, based on the hierarchical Bayes estimator, show that different EKC dynamics are associated with the different sub-samples of countries considered. On average, more industrialized countries show evidence of an EKC in quadratic specifications, which nevertheless are probably evolving into an N-shape based on their cubic specification. Nevertheless, it is worth noting that the EU, and not the Umbrella Group led by the US, has been driving currently observed EKC-like shapes. The latter is associated with monotonic income\u2013CO2 dynamics. The EU shows a clear EKC shape. Evidence for less-developed countries consistently shows that CO2 emissions rise positively with income, though there are some signs of an EKC. Analyses of future performance, nevertheless, favour quadratic specifications, thus supporting EKC evidence for wealthier countries and non-EKC shapes for industrializing regions.", "which Type of data ?", "Panel", 88.0, 93.0], ["A fused hexacyclic electron acceptor, IHIC, based on the strong electron\u2010donating group dithienocyclopentathieno[3,2\u2010b]thiophene flanked by the strong electron\u2010withdrawing group 1,1\u2010dicyanomethylene\u20103\u2010indanone, is designed, synthesized, and applied in semitransparent organic solar cells (ST\u2010OSCs). IHIC exhibits strong near\u2010infrared absorption with extinction coefficients of up to 1.6 \u00d7 10\u2075 M\u207b\u00b9 cm\u207b\u00b9, a narrow optical bandgap of 1.38 eV, and a high electron mobility of 2.4 \u00d7 10\u207b\u00b3 cm\u00b2 V\u207b\u00b9 s\u207b\u00b9. 
The ST\u2010OSCs based on blends of a narrow\u2010bandgap polymer donor PTB7\u2010Th and the narrow\u2010bandgap IHIC acceptor exhibit a champion power conversion efficiency of 9.77% with an average visible transmittance of 36% and excellent device stability; this efficiency is much higher than that of any single\u2010junction and tandem ST\u2010OSCs reported in the literature.", "which Acceptor ?", "IHIC", 38.0, 42.0], ["Molecular acceptors are promising alternatives to fullerenes (e.g., PC61/71BM) in the fabrication of high-efficiency bulk-heterojunction (BHJ) solar cells. While solution-processed polymer\u2013fullerene BHJ devices have recently met the 10% efficiency threshold, molecular acceptors have yet to prove comparably efficient with polymer donors. At this point in time, it is important to forge a better understanding of the design parameters that directly impact small-molecule (SM) acceptor performance in BHJ solar cells. In this report, we show that 2-(benzo[c][1,2,5]thiadiazol-4-ylmethylene)malononitrile (BM)-terminated SM acceptors can achieve efficiencies as high as 5.3% in BHJ solar cells with the polymer donor PCE10. Through systematic device optimization and characterization studies, we find that the nonfullerene analogues (FBM, CBM, and CDTBM) all perform comparably well, independent of the molecular structure and electronics of the \u03c0-bridge that links the two electron-deficient BM end groups. With estimated...", "which Acceptor ?", "CDTBM", 846.0, 851.0], ["A new acceptor\u2013donor\u2013acceptor-structured nonfullerene acceptor, 2,2\u2032-((2Z,2\u2032Z)-(((4,4,9,9-tetrakis(4-hexylphenyl)-4,9-dihydro-s-indaceno[1,2-b:5,6-b\u2032]dithiophene-2,7-diyl)bis(4-((2-ethylhexyl)oxy)thiophene-4,3-diyl))bis(methanylylidene))bis(5,6-difluoro-3-oxo-2,3-dihydro-1H-indene-2,1-diylidene))dimalononitrile (i-IEICO-4F), is designed and synthesized via main-chain substituting position modification of 2-(5,6-difluoro-3-oxo-2,3-dihydro-1H-indene-2,1-diylidene)dimalononitrile. Unlike its planar analogue IEICO-4F with strong absorption in the near-infrared region, i-IEICO-4F exhibits a twisted main-chain configuration, resulting in a 164 nm blue shift and leading to complementary absorption with the wide-bandgap polymer (J52). A high solution molar extinction coefficient of 2.41 \u00d7 10\u2075 M\u207b\u00b9 cm\u207b\u00b9 and a sufficiently high energy of charge-transfer excitons of 1.15 eV in a J52:i-IEICO-4F blend were observed, in comparison with those of 2.26 \u00d7 10\u2075 M\u207b\u00b9 cm\u207b\u00b9 and 1.08 eV for IEICO-4F. A power conversion efficiency of...", "which Acceptor ?", "i-IEICO-4F", 314.0, 324.0], ["With an indenoindene core, a new thieno[3,4\u2010b]thiophene\u2010based small\u2010molecule electron acceptor, 2,2\u2032\u2010((2Z,2\u2032Z)\u2010((6,6\u2032\u2010(5,5,10,10\u2010tetrakis(2\u2010ethylhexyl)\u20105,10\u2010dihydroindeno[2,1\u2010a]indene\u20102,7\u2010diyl)bis(2\u2010octylthieno[3,4\u2010b]thiophene\u20106,4\u2010diyl))bis(methanylylidene))bis(5,6\u2010difluoro\u20103\u2010oxo\u20102,3\u2010dihydro\u20101H\u2010indene\u20102,1\u2010diylidene))dimalononitrile (NITI), is successfully designed and synthesized. Compared with 12\u2010\u03c0\u2010electron fluorene, a carbon\u2010bridged biphenylene with axial symmetry, indenoindene, a carbon\u2010bridged E\u2010stilbene with centrosymmetry, shows elongated \u03c0\u2010conjugation with 14 \u03c0\u2010electrons and one more sp\u00b3 carbon bridge, which may increase the tunability of electronic structure and film morphology. 
Despite its twisted molecular framework, NITI shows a low optical bandgap of 1.49 eV in thin film and a high molar extinction coefficient of 1.90 \u00d7 10\u2075 M\u207b\u00b9 cm\u207b\u00b9 in solution. By matching NITI with a large\u2010bandgap polymer donor, an extraordinary power conversion efficiency of 12.74% is achieved, which is among the best performances so far reported for fullerene\u2010free organic photovoltaics and is inspiring for the design of new electron acceptors.", "which Acceptor ?", "NITI", 335.0, 339.0], ["Low-bandgap polymers/molecules are an interesting family of semiconductor materials, and have enabled many recent exciting breakthroughs in the field of organic electronics, especially for organic photovoltaics (OPVs). Here, such a low-bandgap (1.43 eV) non-fullerene electron acceptor (BT-IC) bearing a fused 7-heterocyclic ring with an absorption edge extending to the near-infrared (NIR) region was specially designed and synthesized. Benefiting from its NIR light harvesting, high-performance OPVs were fabricated with medium-bandgap polymers (J61 and J71) as donors, showing power conversion efficiencies of 9.6% with J61 and 10.5% with J71, along with extremely low energy loss (0.56 eV for J61 and 0.53 eV for J71). Interestingly, femtosecond transient absorption spectroscopy studies on both systems show that efficient charge generation was observed despite the fact that the highest occupied molecular orbital (HOMO)\u2013HOMO offset (\u0394EH) in the blends was as low as 0.10 eV, suggesting that such a small \u0394EH is not a crucial limitation in realizing high performance of NIR non-fullerene based OPVs. Our results indicated that BT-IC is an interesting NIR non-fullerene acceptor with great potential application in tandem/multi-junction, semitransparent, and ternary blend solar cells.", "which Acceptor ?", "BT-IC", 287.0, 292.0], ["A side\u2010chain conjugation strategy in the design of nonfullerene electron acceptors is proposed, with the design and synthesis of a side\u2010chain\u2010conjugated acceptor (ITIC2) based on a 4,8\u2010bis(5\u2010(2\u2010ethylhexyl)thiophen\u20102\u2010yl)benzo[1,2\u2010b:4,5\u2010b\u2032]di(cyclopenta\u2010dithiophene) electron\u2010donating core and 1,1\u2010dicyanomethylene\u20103\u2010indanone electron\u2010withdrawing end groups. ITIC2 with the conjugated side chains exhibits an absorption peak at 714 nm, which redshifts 12 nm relative to ITIC1. The absorption extinction coefficient of ITIC2 is 2.7 \u00d7 10\u2075 M\u207b\u00b9 cm\u207b\u00b9, higher than that of ITIC1 (1.5 \u00d7 10\u2075 M\u207b\u00b9 cm\u207b\u00b9). ITIC2 exhibits slightly higher highest occupied molecular orbital (HOMO) (\u22125.43 eV) and lowest unoccupied molecular orbital (LUMO) (\u22123.80 eV) energy levels relative to ITIC1 (HOMO: \u22125.48 eV; LUMO: \u22123.84 eV), and higher electron mobility (1.3 \u00d7 10\u207b\u00b3 cm\u00b2 V\u207b\u00b9 s\u207b\u00b9) than that of ITIC1 (9.6 \u00d7 10\u207b\u2074 cm\u00b2 V\u207b\u00b9 s\u207b\u00b9). The power conversion efficiency of ITIC2\u2010based organic solar cells is 11.0%, much higher than that of ITIC1\u2010based control devices (8.54%). 
Our results demonstrate that side\u2010chain conjugation can tune energy levels, enhance absorption and electron mobility, and ultimately enhance the photovoltaic performance of nonfullerene acceptors.", "which Acceptor ?", "ITIC2", 163.0, 168.0], ["A simple small molecule acceptor named DICTF, with fluorene as the central block and 2-(2,3-dihydro-3-oxo-1H-inden-1-ylidene)propanedinitrile as the end-capping groups, has been designed for fullerene-free organic solar cells. The new molecule was synthesized from widely available and inexpensive commercial materials in only three steps with a high overall yield of \u223c60%. Fullerene-free organic solar cells with DICTF as the acceptor material provide a high PCE of 7.93%.", "which Acceptor ?", "DICTF", 39.0, 44.0], ["A novel small molecule, FBR, bearing 3-ethylrhodanine flanking groups was synthesized as a nonfullerene electron acceptor for solution-processed bulk heterojunction organic photovoltaics (OPV). A straightforward synthesis route was employed, offering the potential for large-scale preparation of this material. Inverted OPV devices employing poly(3-hexylthiophene) (P3HT) as the donor polymer and FBR as the acceptor gave power conversion efficiencies (PCE) up to 4.1%. Transient and steady-state optical spectroscopies indicated efficient, ultrafast charge generation and efficient photocurrent generation from both donor and acceptor. Ultrafast transient absorption spectroscopy was used to investigate polaron generation efficiency as well as recombination dynamics. It was determined that the P3HT:FBR blend is highly intermixed, leading to increased charge generation relative to comparative devices with P3HT:PC60BM, but also faster recombination due to a nonideal morphology in which, in contrast to P3HT:PC60BM devices, the acceptor does not aggregate enough to create appropriate percolation pathways that prevent fast nongeminate recombination. Despite this nonoptimal morphology, the P3HT:FBR devices exhibit better performance than P3HT:PC60BM devices, used as control, demonstrating that this acceptor shows great promise for further optimization.", "which Acceptor ?", "FBR", 24.0, 27.0], ["Naphtho[1,2\u2010b:5,6\u2010b\u2032]dithiophene is extended to a fused octacyclic building block, which is end-capped by the strong electron\u2010withdrawing 2\u2010(5,6\u2010difluoro\u20103\u2010oxo\u20102,3\u2010dihydro\u20101H\u2010inden\u20101\u2010ylidene)malononitrile to yield a fused\u2010ring electron acceptor (IOIC2) for organic solar cells (OSCs). Relative to naphthalene\u2010based IHIC2, naphthodithiophene\u2010based IOIC2 with a larger \u03c0\u2010conjugation and a stronger electron\u2010donating core shows a higher lowest unoccupied molecular orbital energy level (IOIC2: \u22123.78 eV vs IHIC2: \u22123.86 eV), broader absorption with a smaller optical bandgap (IOIC2: 1.55 eV vs IHIC2: 1.66 eV), and a higher electron mobility (IOIC2: 1.0 \u00d7 10\u207b\u00b3 cm\u00b2 V\u207b\u00b9 s\u207b\u00b9 vs IHIC2: 5.0 \u00d7 10\u207b\u2074 cm\u00b2 V\u207b\u00b9 s\u207b\u00b9). Thus, IOIC2\u2010based OSCs show higher values in open\u2010circuit voltage, short\u2010circuit current density, and fill factor, and thereby much higher power conversion efficiency (PCE) values than those of the IHIC2\u2010based counterpart. In particular, as\u2010cast OSCs based on FTAZ:IOIC2 yield PCEs of up to 11.2%, higher than that of the control devices based on FTAZ:IHIC2 (7.45%). 
Furthermore, by using 0.2% 1,8\u2010diiodooctane as the processing additive, a PCE of 12.3% is achieved from the FTAZ:IOIC2\u2010based devices, higher than that of the FTAZ:IHIC2\u2010based devices (7.31%). These results indicate that incorporating extended conjugation into the electron\u2010donating fused\u2010ring units in nonfullerene acceptors is a promising strategy for designing high\u2010performance electron acceptors.", "which Acceptor ?", "IOIC2", 242.0, 247.0], ["Molecular acceptors are promising alternatives to fullerenes (e.g., PC61/71BM) in the fabrication of high-efficiency bulk-heterojunction (BHJ) solar cells. While solution-processed polymer\u2013fullerene BHJ devices have recently met the 10% efficiency threshold, molecular acceptors have yet to prove comparably efficient with polymer donors. At this point in time, it is important to forge a better understanding of the design parameters that directly impact small-molecule (SM) acceptor performance in BHJ solar cells. In this report, we show that 2-(benzo[c][1,2,5]thiadiazol-4-ylmethylene)malononitrile (BM)-terminated SM acceptors can achieve efficiencies as high as 5.3% in BHJ solar cells with the polymer donor PCE10. Through systematic device optimization and characterization studies, we find that the nonfullerene analogues (FBM, CBM, and CDTBM) all perform comparably well, independent of the molecular structure and electronics of the \u03c0-bridge that links the two electron-deficient BM end groups. With estimated...", "which Acceptor ?", "FBM", 832.0, 835.0], ["Molecular acceptors are promising alternatives to fullerenes (e.g., PC61/71BM) in the fabrication of high-efficiency bulk-heterojunction (BHJ) solar cells. While solution-processed polymer\u2013fullerene BHJ devices have recently met the 10% efficiency threshold, molecular acceptors have yet to prove comparably efficient with polymer donors. At this point in time, it is important to forge a better understanding of the design parameters that directly impact small-molecule (SM) acceptor performance in BHJ solar cells. In this report, we show that 2-(benzo[c][1,2,5]thiadiazol-4-ylmethylene)malononitrile (BM)-terminated SM acceptors can achieve efficiencies as high as 5.3% in BHJ solar cells with the polymer donor PCE10. Through systematic device optimization and characterization studies, we find that the nonfullerene analogues (FBM, CBM, and CDTBM) all perform comparably well, independent of the molecular structure and electronics of the \u03c0-bridge that links the two electron-deficient BM end groups. With estimated...", "which Acceptor ?", "CBM", 837.0, 840.0], ["Three novel non-fullerene small molecular acceptors ITOIC, ITOIC-F, and ITOIC-2F were designed and synthesized with easy chemistry. The concept of supramolecular chemistry was successfully used in the molecular design, which includes noncovalently conformational locking (via intrasupramolecular interaction) to enhance the planarity of backbone and electrostatic interaction (intersupramolecular interaction) to enhance the \u03c0\u2013\u03c0 stacking of terminal groups. Fluorination can further strengthen the intersupramolecular electrostatic interaction of terminal groups. As expected, the designed acceptors exhibited excellent device performance when blended with polymer donor PBDB-T. 
In comparison with the parent acceptor molecule DC-IDT2T reported in the literature with a power conversion efficiency (PCE) of 3.93%, ITOIC with a planar structure exhibited a PCE of 8.87% and ITOIC-2F with a planar structure and enhanced electrostatic interaction showed a quite impressive PCE of 12.17%. Our result demonstrates the import...", "which Acceptor ?", "ITOIC-2F", 72.0, 80.0], ["
In this work, we present a non-fullerene electron acceptor bearing a fused five-heterocyclic ring containing selenium atoms, denoted as IDSe-T-IC, for fullerene-free polymer solar cells (PSCs).
", "which Acceptor ?", "IDSe-T-IC", 139.0, 148.0], ["Naphtho[1,2\u2010b:5,6\u2010b\u2032]dithiophene is extended to a fused octacyclic building block, which is end capped by strong electron\u2010withdrawing 2\u2010(5,6\u2010difluoro\u20103\u2010oxo\u20102,3\u2010dihydro\u20101H\u2010inden\u20101\u2010ylidene)malononitrile to yield a fused\u2010ring electron acceptor (IOIC2) for organic solar cells (OSCs). Relative to naphthalene\u2010based IHIC2, naphthodithiophene\u2010based IOIC2 with a larger \u03c0\u2010conjugation and a stronger electron\u2010donating core shows a higher lowest unoccupied molecular orbital energy level (IOIC2: \u22123.78 eV vs IHIC2: \u22123.86 eV), broader absorption with a smaller optical bandgap (IOIC2: 1.55 eV vs IHIC2: 1.66 eV), and a higher electron mobility (IOIC2: 1.0 \u00d7 10\u22123 cm2 V\u22121 s\u22121 vs IHIC2: 5.0 \u00d7 10\u22124 cm2 V\u22121 s\u22121). Thus, IOIC2\u2010based OSCs show higher values in open\u2010circuit voltage, short\u2010circuit current density, fill factor, and thereby much higher power conversion efficiency (PCE) values than those of the IHIC2\u2010based counterpart. In particular, as\u2010cast OSCs based on FTAZ: IOIC2 yield PCEs of up to 11.2%, higher than that of the control devices based on FTAZ: IHIC2 (7.45%). Furthermore, by using 0.2% 1,8\u2010diiodooctane as the processing additive, a PCE of 12.3% is achieved from the FTAZ:IOIC2\u2010based devices, higher than that of the FTAZ:IHIC2\u2010based devices (7.31%). These results indicate that incorporating extended conjugation into the electron\u2010donating fused\u2010ring units in nonfullerene acceptors is a promising strategy for designing high\u2010performance electron acceptors.", "which Donor ?", "FTAZ", 956.0, 960.0], ["A new acceptor\u2013donor\u2013acceptor-structured nonfullerene acceptor, 2,2\u2032-((2Z,2\u2032Z)-(((4,4,9,9-tetrakis(4-hexylphenyl)-4,9-dihydro-s-indaceno[1,2-b:5,6-b\u2032]dithiophene-2,7-diyl)bis(4-((2-ethylhexyl)oxy)thiophene-4,3-diyl))bis(methanylylidene))bis(5,6-difluoro-3-oxo-2,3-dihydro-1H-indene-2,1-diylidene))dimalononitrile (i-IEICO-4F), is designed and synthesized via main-chain substituting position modification of 2-(5,6-difluoro-3-oxo-2,3-dihydro-1H-indene-2,1-diylidene)dimalononitrile. Unlike its planar analogue IEICO-4F with strong absorption in the near-infrared region, i-IEICO-4F exhibits a twisted main-chain configuration, resulting in 164 nm blue shifts and leading to complementary absorption with the wide-bandgap polymer (J52). A high solution molar extinction coefficient of 2.41 \u00d7 105 M\u20131 cm\u20131, and sufficiently high energy of charge-transfer excitons of 1.15 eV in a J52:i-IEICO-4F blend were observed, in comparison with those of 2.26 \u00d7 105 M\u20131 cm\u20131 and 1.08 eV for IEICO-4F. A power conversion efficiency of...", "which Donor ?", "J52", 730.0, 733.0], ["Low bandgap n-type organic semiconductor (n-OS) ITIC has attracted great attention for the application as an acceptor with medium bandgap p-type conjugated polymer as donor in nonfullerene polymer solar cells (PSCs) because of its attractive photovoltaic performance. Here we report a modification on the molecular structure of ITIC by side-chain isomerization with meta-alkyl-phenyl substitution, m-ITIC, to further improve its photovoltaic performance. In a comparison with its isomeric counterpart ITIC with para-alkyl-phenyl substitution, m-ITIC shows a higher film absorption coefficient, a larger crystalline coherence, and higher electron mobility. 
These inherent advantages of m-ITIC resulted in a higher power conversion efficiency (PCE) of 11.77% for the nonfullerene PSCs with m-ITIC as acceptor and a medium bandgap polymer J61 as donor, which is significantly improved over that (10.57%) of the corresponding devices with ITIC as acceptor. To the best of our knowledge, the PCE of 11.77% is one of the highest values reported in the literature to date for nonfullerene PSCs. More importantly, the m-ITIC-based device shows less thickness-dependent photovoltaic behavior than ITIC-based devices in the active-layer thickness range of 80-360 nm, which is beneficial for large area device fabrication. These results indicate that m-ITIC is a promising low bandgap n-OS for the application as an acceptor in PSCs, and the side-chain isomerization could be an easy and convenient way to further improve the photovoltaic performance of the donor and acceptor materials for high efficiency PSCs.", "which Donor ?", "J61", 836.0, 839.0], ["Low-bandgap polymers/molecules are an interesting family of semiconductor materials, and have enabled many recent exciting breakthroughs in the field of organic electronics, especially for organic photovoltaics (OPVs). Here, such a low-bandgap (1.43 eV) non-fullerene electron acceptor (BT-IC) bearing a fused 7-heterocyclic ring with absorption edge extending to the near-infrared (NIR) region was specially designed and synthesized. Benefitted from its NIR light harvesting, high performance OPVs were fabricated with medium bandgap polymers (J61 and J71) as donors, showing power conversion efficiencies of 9.6% with J61 and 10.5% with J71 along with extremely low energy loss (0.56 eV for J61 and 0.53 eV for J71). Interestingly, femtosecond transient absorption spectroscopy studies on both systems show that efficient charge generation was observed despite the fact that the highest occupied molecular orbital (HOMO)\u2013HOMO offset (\u0394EH) in the blends was as low as 0.10 eV, suggesting that such a small \u0394EH is not a crucial limitation in realizing high performance of NIR non-fullerene based OPVs. Our results indicated that BT-IC is an interesting NIR non-fullerene acceptor with great potential application in tandem/multi-junction, semitransparent, and ternary blend solar cells.", "which Donor ?", "J71", 553.0, 556.0], ["Ladder-type dithienocyclopentacarbazole (DTCC) cores, which possess highly extended \u03c0-conjugated backbones and versatile modular structures for derivatization, were widely used to develop high-performance p-type polymeric semiconductors. However, an n-type DTCC-based organic semiconductor has not been reported to date. In this study, the first DTCC-based n-type organic semiconductor (DTCC\u2013IC) with a well-defined A\u2013D\u2013A backbone was designed, synthesized, and characterized, in which a DTCC derivative substituted by four p-octyloxyphenyl groups was used as the electron-donating core and two strongly electron-withdrawing 3-(dicyanomethylene)indan-1-one moieties were used as the terminal acceptors. It was found that DTCC\u2013IC has strong light-capturing ability in the range of 500\u2013720 nm and exhibits an impressively high molar absorption coefficient of 2.24 \u00d7 105 M\u22121 cm\u22121 at 669 nm owing to effective intramolecular charge transfer and a strong D\u2013A effect. Cyclic voltammetry measurements indicated that the HOMO and LUMO energy levels of DTCC\u2013IC are \u22125.50 and \u22123.87 eV, respectively. 
More importantly, a high electron mobility of 2.17 \u00d7 10\u22123 cm2 V\u22121 s\u22121 was determined by the space-charge-limited current method; this electron mobility can be comparable to that of fullerene derivative acceptors (\u03bce \u223c 10\u22123 cm2 V\u22121 s\u22121). To investigate its application potential in non-fullerene solar cells, we fabricated organic solar cells (OSCs) by blending a DTCC\u2013IC acceptor with a PTB7-Th donor under various conditions. The results suggest that the optimized device exhibits a maximum power conversion efficiency (PCE) of up to 6% and a rational high VOC of 0.95 V. These findings demonstrate that the ladder-type DTCC core is a promising building block for the development of high-mobility n-type organic semiconductors for OSCs.", "which Donor ?", "PTB7-Th", 1477.0, 1484.0], ["Two cheliform non-fullerene acceptors, DTPC-IC and DTPC-DFIC, based on a highly electron-rich core, dithienopicenocarbazole (DTPC), are synthesized, showing ultra-narrow bandgaps (as low as 1.21 eV). The two-dimensional nitrogen-containing conjugated DTPC possesses strong electron-donating capability, which induces intense intramolecular charge transfer and intermolecular \u03c0-\u03c0 stacking in derived acceptors. The solar cell based on DTPC-DFIC and a spectrally complementary polymer donor, PTB7-Th, showed a high power conversion efficiency of 10.21% and an extremely low energy loss of 0.45 eV, which is the lowest among reported efficient OSCs.", "which Donor ?", "PTB7-Th", 490.0, 497.0], ["Organic solar cells (OSCs) are a promising cost-effective alternative for utility of solar energy, and possess low-cost, light-weight, and flexibility advantages. [ 1\u20137 ] Much attention has been focused on the development of OSCs which have seen a dramatic rise in efficiency over the last decade, and the encouraging power conversion efficiency (PCE) over 9% has been achieved from bulk heterojunction (BHJ) OSCs. [ 8 ] With regard to photoactive materials, fullerenes and their derivatives, such as [6,6]-phenyl C 61 butyric acid methyl ester (PC 61 BM), have been the dominant electron-acceptor materials in BHJ OSCs, owing to their high electron mobility, large electron affinity and isotropy of charge transport. [ 9 ] However, fullerenes have a few disadvantages, such as restricted electronic tuning and weak absorption in the visible region. Furthermore, in typical BHJ system of poly(3-hexylthiophene) (P3HT):PC 61 BM, mismatching energy levels between donor and acceptor leads to energy loss and low open-circuit voltages ( V OC ). To solve these problems, novel electron acceptor materials with strong and broad absorption spectra and appropriate energy levels are necessary for OSCs. Recently, non-fullerene small molecule acceptors have been developed. [ 10 , 11 ] However, rare reports on the devices based on solution-processed non-fullerene small molecule acceptors have shown PCEs approaching or exceeding 1.5%, [ 12\u201319 ] and only one paper reported PCEs over 2%. [ 16 ]", "which Donor ?", "P3HT", 916.0, 920.0], ["A novel small molecule, FBR, bearing 3-ethylrhodanine flanking groups was synthesized as a nonfullerene electron acceptor for solution-processed bulk heterojunction organic photovoltaics (OPV). A straightforward synthesis route was employed, offering the potential for large scale preparation of this material. 
Inverted OPV devices employing poly(3-hexylthiophene) (P3HT) as the donor polymer and FBR as the acceptor gave power conversion efficiencies (PCE) up to 4.1%. Transient and steady state optical spectroscopies indicated efficient, ultrafast charge generation and efficient photocurrent generation from both donor and acceptor. Ultrafast transient absorption spectroscopy was used to investigate polaron generation efficiency as well as recombination dynamics. It was determined that the P3HT:FBR blend is highly intermixed, leading to increased charge generation relative to comparative devices with P3HT:PC60BM, but also faster recombination due to a nonideal morphology in which, in contrast to P3HT:PC60BM devices, the acceptor does not aggregate enough to create appropriate percolation pathways that prevent fast nongeminate recombination. Despite this nonoptimal morphology the P3HT:FBR devices exhibit better performance than P3HT:PC60BM devices, used as control, demonstrating that this acceptor shows great promise for further optimization.", "which Donor ?", "P3HT", 366.0, 370.0], ["Three novel non-fullerene small molecular acceptors ITOIC, ITOIC-F, and ITOIC-2F were designed and synthesized with easy chemistry. The concept of supramolecular chemistry was successfully used in the molecular design, which includes noncovalently conformational locking (via intrasupramolecular interaction) to enhance the planarity of backbone and electrostatic interaction (intersupramolecular interaction) to enhance the \u03c0\u2013\u03c0 stacking of terminal groups. Fluorination can further strengthen the intersupramolecular electrostatic interaction of terminal groups. As expected, the designed acceptors exhibited excellent device performance when blended with polymer donor PBDB-T. In comparison with the parent acceptor molecule DC-IDT2T reported in the literature with a power conversion efficiency (PCE) of 3.93%, ITOIC with a planar structure exhibited a PCE of 8.87% and ITOIC-2F with a planar structure and enhanced electrostatic interaction showed a quite impressive PCE of 12.17%. Our result demonstrates the import...", "which Donor ?", "PBDB-T", 671.0, 677.0], ["Stable bioimaging with nanomaterials in living cells has been a great challenge and of great importance for understanding intracellular events and elucidating various biological phenomena. Herein, we demonstrate that N,S co-doped carbon dots (N,S-CDs) produced by one-pot reflux treatment of C3N3S3 with ethane diamine at a relatively low temperature (80 \u00b0C) exhibit a high fluorescence quantum yield of about 30.4%, favorable biocompatibility, low-toxicity, strong resistance to photobleaching and good stability. The N,S-CDs as an effective temperature indicator exhibit good temperature-dependent fluorescence with a sensational linear response from 20 to 80 \u00b0C. In addition, the obtained N,S-CDs facilitate high selectivity detection of tetracycline (TC) with a detection limit as low as 3 \u00d7 10\u221210 M and a wide linear range from 1.39 \u00d7 10\u22125 to 1.39 \u00d7 10\u22129 M. More importantly, the N,S-CDs display an unambiguous bioimaging ability in the detection of intracellular temperature and TC with satisfactory results.", "which precursors ?", "C3N3S3", 292.0, 298.0], ["The fluorescent N-doped carbon dots (N-CDs) obtained from C3N4 emit strong blue fluorescence, which is stable with different ionic strengths and time. 
The fluorescence intensity of N-CDs decreases with the temperature increasing, while it can recover to the initial one with the temperature decreasing. It is an accurate linear response of fluorescence intensity to temperature, which may be attributed to the synergistic effect of abundant oxygen-containing functional groups and hydrogen bonds. Further experiments also demonstrate that N-CDs can serve as effective in vitro and in vivo fluorescence-based nanothermometer.", "which precursors ?", "C3N4", 58.0, 62.0], ["Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179\u00b122 nm were produced. Res was loaded with high EE of 73\u00b10.9% and DL content of 6.2\u00b10.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.", "which has cell line ?", "MDA-MB-231", 1030.0, 1040.0], ["Purpose To develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel. Methods Chitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231). Results These studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. 
The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX. Conclusion These advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.", "which has cell line ?", "MDA-MB-231", 682.0, 692.0], ["Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179\u00b122 nm were produced. Res was loaded with high EE of 73\u00b10.9% and DL content of 6.2\u00b10.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.", "which has cell line ?", "MCF-10A", 1055.0, 1062.0], ["Background: Paclitaxel (PTX) is one of the most important and effective anticancer drugs for the treatment of human cancer. However, its low solubility and severe adverse effects limited clinical use. To overcome this limitation, nanotechnology has been used to overcome tumors due to its excellent antimicrobial activity. Objective: This study was to demonstrate the anticancer properties of functionalization silver nanoparticles loaded with paclitaxel (Ag@PTX) induced A549 cells apoptosis through ROS-mediated signaling pathways. Methods: The Ag@PTX nanoparticles were charged with a zeta potential of about -17 mV and characterized around 2 nm with a narrow size distribution. Results: Ag@PTX significantly decreased the viability of A549 cells and possessed selectivity between cancer and normal cells. 
Ag@PTX induced A549 cells apoptosis was confirmed by nuclear condensation, DNA fragmentation, and activation of caspase-3. Furthermore, Ag@PTX enhanced the anti-cancer activity of A549 cells through ROS-mediated p53 and AKT signalling pathways. Finally, in a xenograft nude mice model, Ag@PTX suppressed the growth of tumors. Conclusion: Our findings suggest that Ag@PTX may be a candidate as a chemopreventive agent and could be a highly efficient way to achieve anticancer synergism for human cancers.", "which has cell line ?", "A549 cells", 472.0, 482.0], ["Nanocrystal formulation has become a viable solution for delivering poorly soluble drugs including chemotherapeutic agents. The purpose of this study was to examine cellular uptake of paclitaxel nanocrystals by confocal imaging and concentration measurement. It was found that drug nanocrystals could be internalized by KB cells at much higher concentrations than a conventional, solubilized formulation. The imaging and quantitative results suggest that nanocrystals could be directly taken up by cells as solid particles, likely via endocytosis. Moreover, it was found that polymer treatment to drug nanocrystals, such as surface coating and lattice entrapment, significantly influenced the cellular uptake. While drug molecules are in the most stable physical state, nanocrystals of a poorly soluble drug are capable of achieving concentrated intracellular presence enabling needed therapeutic effects.", "which has cell line ?", "KB cells", 320.0, 328.0], ["The folding of monomeric antigens and their subsequent assembly into higher ordered structures are crucial for robust and effective production of nanoparticle (NP) vaccines in a timely and reproducible manner. Despite significant advances in in silico design and structure-based assembly, most engineered NPs are refractory to soluble expression and fail to assemble as designed, presenting major challenges in the manufacturing process. The failure is due to a lack of understanding of the kinetic pathways and enabling technical platforms to ensure successful folding of the monomer antigens into regular assemblages. Capitalizing on a novel function of RNA as a molecular chaperone (chaperna: chaperone + RNA), we provide a robust protein-folding vehicle that may be implemented to NP assembly in bacterial hosts. The receptor-binding domain (RBD) of Middle East respiratory syndrome-coronavirus (MERS-CoV) was fused with the RNA-interaction domain (RID) and bacterioferritin, and expressed in Escherichia coli in a soluble form. Site-specific proteolytic removal of the RID prompted the assemblage of monomers into NPs, which was confirmed by electron microscopy and dynamic light scattering. The mutations that affected the RNA binding to RBD significantly increased the soluble aggregation into amorphous structures, reducing the overall yield of NPs of a defined size. This underscored the RNA-antigen interactions during NP assembly. The sera after mouse immunization effectively interfered with the binding of MERS-CoV RBD to the cellular receptor hDPP4. The results suggest that RNA-binding controls the overall kinetic network of the antigen folding pathway in favor of enhanced assemblage of NPs into highly regular and immunologically relevant conformations. The concentration of the ion Fe2+, salt, and fusion linker also contributed to the assembly in vitro, and the stability of the NPs. 
The kinetic \u201cpace-keeping\u201d role of chaperna in the super molecular assembly of antigen monomers holds promise for the development and delivery of NPs and virus-like particles as recombinant vaccines and for serological detection of viral infections.", "which Virus ?", "MERS-CoV", 900.0, 908.0], ["Middle East respiratory syndrome (MERS) coronavirus (MERS-CoV), an infectious coronavirus first reported in 2012, has a mortality rate greater than 35%. Therapeutic antibodies are key tools for preventing and treating MERS-CoV infection, but to date no such agents have been approved for treatment of this virus. Nanobodies (Nbs) are camelid heavy chain variable domains with properties distinct from those of conventional antibodies and antibody fragments. We generated two oligomeric Nbs by linking two or three monomeric Nbs (Mono-Nbs) targeting the MERS-CoV receptor-binding domain (RBD), and compared their RBD-binding affinity, RBD\u2013receptor binding inhibition, stability, and neutralizing and cross-neutralizing activity against MERS-CoV. Relative to Mono-Nb, dimeric Nb (Di-Nb) and trimeric Nb (Tri-Nb) had significantly greater ability to bind MERS-CoV RBD proteins with or without mutations in the RBD, thereby potently blocking RBD\u2013MERS-CoV receptor binding. The engineered oligomeric Nbs were very stable under extreme conditions, including low or high pH, protease (pepsin), chaotropic denaturant (urea), and high temperature. Importantly, Di-Nb and Tri-Nb exerted significantly elevated broad-spectrum neutralizing activity against at least 19 human and camel MERS-CoV strains isolated in different countries and years. Overall, the engineered Nbs could be developed into effective therapeutic agents for prevention and treatment of MERS-CoV infection.", "which Virus ?", "MERS-CoV", 53.0, 61.0], ["Worldwide outbreaks of infectious diseases necessitate the development of rapid and accurate diagnostic methods. Colorimetric assays are a representative tool to simply identify the target molecules in specimens through color changes of an indicator (e.g., nanosized metallic particle, and dye molecules). The detection method is used to confirm the presence of biomarkers visually and measure absorbance of the colored compounds at a specific wavelength. In this study, we propose a colorimetric assay based on an extended form of double-stranded DNA (dsDNA) self-assembly shielded gold nanoparticles (AuNPs) under positive electrolyte (e.g., 0.1 M MgCl2) for detection of Middle East respiratory syndrome coronavirus (MERS-CoV). This platform is able to verify the existence of viral molecules through a localized surface plasmon resonance (LSPR) shift and color changes of AuNPs in the UV\u2013vis wavelength range. We designed a pair of thiol-modified probes at either the 5\u2032 end or 3\u2032 end to organize complementary base pairs with upstream of the E protein gene (upE) and open reading frames (ORF) 1a on MERS-CoV. The dsDNA of the target and probes forms a disulfide-induced long self-assembled complex, which protects AuNPs from salt-induced aggregation and transition of optical properties. 
This colorimetric assay could discriminate down to 1 pmol/\u03bcL of 30 bp MERS-CoV and further be adapted for convenient on-site detection of other infectious diseases, especially in resource-limited settings.", "which Virus ?", "MERS-CoV", 720.0, 728.0], ["The development of simple fluorescent and colorimetric assays that enable point-of-care DNA and RNA detection has been a topic of significant research because of the utility of such assays in resource limited settings. The most common motifs utilize hybridization to a complementary detection strand coupled with a sensitive reporter molecule. Here, a paper-based colorimetric assay for DNA detection based on pyrrolidinyl peptide nucleic acid (acpcPNA)-induced nanoparticle aggregation is reported as an alternative to traditional colorimetric approaches. PNA probes are an attractive alternative to DNA and RNA probes because they are chemically and biologically stable, easily synthesized, and hybridize efficiently with the complementary DNA strands. The acpcPNA probe contains a single positive charge from the lysine at C-terminus and causes aggregation of citrate anion-stabilized silver nanoparticles (AgNPs) in the absence of complementary DNA. In the presence of target DNA, formation of the anionic DNA-acpcPNA duplex results in dispersion of the AgNPs as a result of electrostatic repulsion, giving rise to a detectable color change. Factors affecting the sensitivity and selectivity of this assay were investigated, including ionic strength, AgNP concentration, PNA concentration, and DNA strand mismatches. The method was used for screening of synthetic Middle East respiratory syndrome coronavirus (MERS-CoV), Mycobacterium tuberculosis (MTB), and human papillomavirus (HPV) DNA based on a colorimetric paper-based analytical device developed using the aforementioned principle. The oligonucleotide targets were detected by measuring the color change of AgNPs, giving detection limits of 1.53 (MERS-CoV), 1.27 (MTB), and 1.03 nM (HPV). The acpcPNA probe exhibited high selectivity for the complementary oligonucleotides over single-base-mismatch, two-base-mismatch, and noncomplementary DNA targets. The proposed paper-based colorimetric DNA sensor has potential to be an alternative approach for simple, rapid, sensitive, and selective DNA detection.", "which Virus ?", "MERS-CoV", 1414.0, 1422.0], ["Significance Middle East respiratory syndrome coronavirus (MERS-CoV) recurrently infects humans from its dromedary camel reservoir, causing severe respiratory disease with an \u223c35% fatality rate. The virus binds to the dipeptidyl peptidase 4 (DPP4) entry receptor on respiratory epithelial cells via its spike protein. We here report that the MERS-CoV spike protein selectively binds to sialic acid (Sia) and demonstrate that cell-surface sialoglycoconjugates can serve as an attachment factor. Our observations warrant further research into the role of Sia binding in the virus\u2019s host and tissue tropism and transmission, which may be influenced by the observed Sia-binding fine specificity and by differences in sialoglycomes among host species. Middle East respiratory syndrome coronavirus (MERS-CoV) targets the epithelial cells of the respiratory tract both in humans and in its natural host, the dromedary camel. Virion attachment to host cells is mediated by 20-nm-long homotrimers of spike envelope protein S. The N-terminal subunit of each S protomer, called S1, folds into four distinct domains designated S1A through S1D. 
Binding of MERS-CoV to the cell surface entry receptor dipeptidyl peptidase 4 (DPP4) occurs via S1B. We now demonstrate that in addition to DPP4, MERS-CoV binds to sialic acid (Sia). Initially demonstrated by hemagglutination assay with human erythrocytes and intact virus, MERS-CoV Sia-binding activity was assigned to S subdomain S1A. When multivalently displayed on nanoparticles, S1 or S1A bound to human erythrocytes and to human mucin in a strictly Sia-dependent fashion. Glycan array analysis revealed a preference for \u03b12,3-linked Sias over \u03b12,6-linked Sias, which correlates with the differential distribution of \u03b12,3-linked Sias and the predominant sites of MERS-CoV replication in the upper and lower respiratory tracts of camels and humans, respectively. Binding is hampered by Sia modifications such as 5-N-glycolylation and (7,)9-O-acetylation. Depletion of cell surface Sia by neuraminidase treatment inhibited MERS-CoV entry of Calu-3 human airway cells, thus providing direct evidence that virus\u2013Sia interactions may aid in virion attachment. The combined observations lead us to propose that high-specificity, low-affinity attachment of MERS-CoV to sialoglycans during the preattachment or early attachment phase may form another determinant governing the host range and tissue tropism of this zoonotic pathogen.", "which Virus ?", "MERS-CoV", 59.0, 67.0], ["MERS-CoV uses the S1B domain of its spike protein to attach to its host receptor, dipeptidyl peptidase 4 (DPP4). The tissue localization of DPP4 has been mapped in different susceptible species. On the other hand, the S1A domain, the N-terminal domain of this spike protein, preferentially binds to several glycotopes of \u03b12,3-sialic acids, the attachment factor of MERS-CoV. Here we show, using a novel method, that the S1A domain specifically binds to the nasal epithelium of dromedary camels, alveolar epithelium of humans, and intestinal epithelium of common pipistrelle bats. In contrast, it does not bind to the nasal epithelium of pigs or rabbits, nor does it bind to the intestinal epithelium of serotine bats and frugivorous bat species. This finding supports the importance of the S1A domain in MERS-CoV infection and tropism, suggests its role in transmission, and highlights its potential use as a component of novel vaccine candidates. ABSTRACT Middle East respiratory syndrome coronavirus (MERS-CoV) uses the S1B domain of its spike protein to bind to dipeptidyl peptidase 4 (DPP4), its functional receptor, and its S1A domain to bind to sialic acids. The tissue localization of DPP4 in humans, bats, camelids, pigs, and rabbits generally correlates with MERS-CoV tropism, highlighting the role of DPP4 in virus pathogenesis and transmission. However, MERS-CoV S1A does not indiscriminately bind to all \u03b12,3-sialic acids, and the species-specific binding and tissue distribution of these sialic acids in different MERS-CoV-susceptible species have not been investigated. We established a novel method to detect these sialic acids on tissue sections of various organs of different susceptible species by using nanoparticles displaying multivalent MERS-CoV S1A. We found that the nanoparticles specifically bound to the nasal epithelial cells of dromedary camels, type II pneumocytes in human lungs, and the intestinal epithelial cells of common pipistrelle bats. Desialylation by neuraminidase abolished nanoparticle binding and significantly reduced MERS-CoV infection in primary susceptible cells. 
In contrast, S1A nanoparticles did not bind to the intestinal epithelium of serotine bats and frugivorous bat species, nor did they bind to the nasal epithelium of pigs and rabbits. Both pigs and rabbits have been shown to shed less infectious virus than dromedary camels and do not transmit the virus via either contact or airborne routes. Our results depict species-specific colocalization of MERS-CoV entry and attachment receptors, which may be relevant in the transmission and pathogenesis of MERS-CoV. IMPORTANCE MERS-CoV uses the S1B domain of its spike protein to attach to its host receptor, dipeptidyl peptidase 4 (DPP4). The tissue localization of DPP4 has been mapped in different susceptible species. On the other hand, the S1A domain, the N-terminal domain of this spike protein, preferentially binds to several glycotopes of \u03b12,3-sialic acids, the attachment factor of MERS-CoV. Here we show, using a novel method, that the S1A domain specifically binds to the nasal epithelium of dromedary camels, alveolar epithelium of humans, and intestinal epithelium of common pipistrelle bats. In contrast, it does not bind to the nasal epithelium of pigs or rabbits, nor does it bind to the intestinal epithelium of serotine bats and frugivorous bat species. This finding supports the importance of the S1A domain in MERS-CoV infection and tropism, suggests its role in transmission, and highlights its potential use as a component of novel vaccine candidates.", "which Virus ?", "MERS-CoV", 0.0, 8.0], ["Therapeutic development is critical for preventing and treating continual MERS-CoV infections in humans and camels. Because of their small size, nanobodies (Nbs) have advantages as antiviral therapeutics (e.g., high expression yield and robustness for storage and transportation) and also potential limitations (e.g., low antigen-binding affinity and fast renal clearance). Here, we have developed novel Nbs that specifically target the receptor-binding domain (RBD) of MERS-CoV spike protein. They bind to a conserved site on MERS-CoV RBD with high affinity, blocking RBD's binding to MERS-CoV receptor. Through engineering a C-terminal human Fc tag, the in vivo half-life of the Nbs is significantly extended. Moreover, the Nbs can potently cross-neutralize the infections of diverse MERS-CoV strains isolated from humans and camels. The Fc-tagged Nb also completely protects humanized mice from lethal MERS-CoV challenge. Taken together, our study has discovered novel Nbs that hold promise as potent, cost-effective, and broad-spectrum anti-MERS-CoV therapeutic agents.", "which Virus ?", "ERS-CoV", NaN, NaN], ["ABSTRACT Camelid heavy-chain variable domains (VHHs) are the smallest, intact, antigen-binding units to occur in nature. VHHs possess high degrees of solubility and robustness enabling generation of multivalent constructs with increased avidity \u2013 characteristics that mark their superiority to other antibody fragments and monoclonal antibodies. Capable of effectively binding to molecular targets inaccessible to classical immunotherapeutic agents and easily produced in microbial culture, VHHs are considered promising tools for pharmaceutical biotechnology. 
With the aim to demonstrate the perspective and potential of VHHs for the development of prophylactic and therapeutic drugs to target diseases caused by bacterial and viral infections, this review article will initially describe the structural features that underlie the unique properties of VHHs and explain the methods currently used for the selection and recombinant production of pathogen-specific VHHs, and then thoroughly summarize the experimental findings of five distinct studies that employed VHHs as inhibitors of host\u2013pathogen interactions or neutralizers of infectious agents. Past and recent studies suggest the potential of camelid heavy-chain variable domains as a novel modality of immunotherapeutic drugs and a promising alternative to monoclonal antibodies. VHHs demonstrate the ability to interfere with bacterial pathogenesis by preventing adhesion to host tissue and sequestering disease-causing bacterial toxins. To protect from viral infections, VHHs may be employed as inhibitors of viral entry by binding to viral coat proteins or blocking interactions with cell-surface receptors. The implementation of VHHs as immunotherapeutic agents for infectious diseases is of considerable potential and set to contribute to public health in the near future.", "which Virus ?", "viral infections", 728.0, 744.0], ["Engineered cocrystals offer an alternative solid drug form with tailored physicochemical properties. Interestingly, although cocrystals provide many new possibilities, they also present new challenges, particularly in regard to their design and large-scale manufacture. Current literature has primarily focused on the preparation and characterization of novel cocrystals typically containing only the drug and coformer, leaving the subsequent formulation less explored. In this paper we propose, for the first time, the use of hot melt extrusion for the mechanochemical synthesis of pharmaceutical cocrystals in the presence of a meltable binder. In this approach, we examine excipients that are amenable to hot melt extrusion, forming a suspension of cocrystal particulates embedded in a pharmaceutical matrix. Using ibuprofen and isonicotinamide as a model cocrystal reagent pair, formulations extruded with a small molecular matrix carrier (xylitol) were examined to be intimate mixtures wherein the newly formed cocrystal particulates were physically suspended in a matrix. With respect to formulations extruded using polymeric carriers (Soluplus and Eudragit EPO, respectively), however, there was no evidence within PXRD patterns of either crystalline ibuprofen or the cocrystal. Importantly, it was established in this study that an appropriate carrier for a cocrystal reagent pair during HME processing should satisfy certain criteria including limited interaction with parent reagents and cocrystal product, processing temperature sufficiently lower than the onset of cocrystal Tm, low melt viscosity, and rapid solidification upon cooling.", "which Carrier for hot melt extrusion ?", "Xylitol", 944.0, 951.0], ["The objective of the present study was to investigate the effects of processing variables and formulation factors on the characteristics of hot-melt extrudates containing a copolymer (Kollidon\u00ae VA 64). Nifedipine was used as a model drug in all of the extrudates. Differential scanning calorimetry (DSC) was utilized on the physical mixtures and melts of varying drug\u2013polymer concentrations to study their miscibility. 
The drug\u2013polymer binary mixtures were studied for powder flow, drug release, and physical and chemical stabilities. The effects of moisture absorption on the content uniformity of the extrudates were also studied. Processing the materials at lower barrel temperatures (115\u2013135\u00b0C) and higher screw speeds (50\u2013100 rpm) exhibited higher post-processing drug content (~99\u2013100%). DSC and X-ray diffraction studies confirmed that melt extrusion of drug\u2013polymer mixtures led to the formation of solid dispersions. Interestingly, the extrusion process also enhanced the powder flow characteristics, which occurred irrespective of the drug load (up to 40% w/w). Moreover, the content uniformity of the extrudates, unlike the physical mixtures, was not sensitive to the amount of moisture absorbed. The extrusion conditions did not influence drug release from the extrudates; however, release was greatly affected by the drug loading. Additionally, the drug release from the physical mixture of nifedipine\u2013Kollidon\u00ae VA 64 was significantly different when compared to the corresponding extrudates (f2 = 36.70). The extrudates exhibited both physical and chemical stabilities throughout the period of study. Overall, hot-melt extrusion technology in combination with Kollidon\u00ae VA 64 produced extrudates capable of higher drug loading, with enhanced flow characteristics, and excellent stability.", "which Carrier for hot melt extrusion ?", "Kollidon\u00ae VA 64", 184.0, 199.0], ["In this study, we examine the relationship between the physical structure and dissolution behavior of olanzapine (OLZ) prepared via hot-melt extrusion in three polymers [polyvinylpyrrolidone (PVP) K30, polyvinylpyrrolidone-co-vinyl acetate (PVPVA) 6:4, and Soluplus\u00ae (SLP)]. In particular, we examine whether full amorphicity is necessary to achieve a favorable dissolution profile. Drug\u2013polymer miscibility was estimated using melting point depression and Hansen solubility parameters. Solid dispersions were characterized using differential scanning calorimetry, X-ray powder diffraction, and scanning electron microscopy. All the polymers were found to be miscible with OLZ in a decreasing order of PVP>PVPVA>SLP. At a lower extrusion temperature (160\u00b0C), PVP generated fully amorphous dispersions with OLZ, whereas the formulations with PVPVA and SLP contained 14%\u201316% crystalline OLZ. Increasing the extrusion temperature to 180\u00b0C allowed the preparation of fully amorphous systems with PVPVA and SLP. Despite these differences, the dissolution rates of these preparations were comparable, with PVP showing a lower release rate despite being fully amorphous. These findings suggested that, at least in the particular case of OLZ, the absence of crystalline material may not be critical to the dissolution performance. We suggest alternative key factors determining dissolution, particularly the dissolution behavior of the polymers themselves.", "which Carrier for hot melt extrusion ?", "Soluplus\u00ae", NaN, NaN], ["Abstract The aim of the current study is to develop amorphous solid dispersion (SD) via hot melt extrusion technology to improve the solubility of a water-insoluble compound, felodipine (FEL). The solubility was dramatically increased by preparation of amorphous SDs via hot-melt extrusion with an amphiphilic polymer, Soluplus\u00ae (SOL). FEL was found to be miscible with SOL by calculating the solubility parameters. 
The solubility of FEL within SOL was determined to be in the range of 6.2\u20139.9% (w/w). Various techniques were applied to characterize the solid-state properties of the amorphous SDs. These included Fourier Transform Infrared Spectrometry spectroscopy and Raman spectroscopy to detect the formation of hydrogen bonding between the drug and the polymer. Scanning electron microscopy was performed to study the morphology of the SDs. Among all the hot-melt extrudates, FEL was found to be molecularly dispersed within the polymer matrix for the extrudates containing 10% drug, while few small crystals were detected in the 30 and 50% extrudates. In conclusion, solubility of FEL was enhanced while a homogeneous SD was achieved for 10% drug loading.", "which Carrier for hot melt extrusion ?", "Soluplus\u00ae", NaN, NaN], ["Abstract It is very challenging to treat brain cancer because of the blood\u2013brain barrier (BBB) restricting therapeutic drug or gene to access the brain. In this research project, angiopep-2 (ANG) was used as a brain-targeted peptide for preparing multifunctional ANG-modified poly(lactic-co-glycolic acid) (PLGA) nanoparticles (NPs), which encapsulated both doxorubicin (DOX) and epidermal growth factor receptor (EGFR) siRNA, designated as ANG/PLGA/DOX/siRNA. This system could efficiently deliver DOX and siRNA into U87MG cells leading to significant cell inhibition, apoptosis and EGFR silencing in vitro. It demonstrated that this drug system was capable of penetrating the BBB in vivo, resulting in more drugs accumulation in the brain. The animal study using the brain orthotopic U87MG glioma xenograft model indicated that the ANG-targeted co-delivery of DOX and EGFR siRNA resulted in not only the prolongation of the life span of the glioma-bearing mice but also an obvious cell apoptosis in glioma tissue.", "which Surface functionalized with ?", "Angiopep-2", 179.0, 189.0], ["AIM Drug targeting to the CNS is challenging due to the presence of blood-brain barrier. We investigated chitosan (Cs) nanoparticles (NPs) as drug transporter system across the blood-brain barrier, based on mAb OX26 modified Cs. MATERIALS & METHODS Cs NPs functionalized with PEG, modified and unmodified with OX26 (Cs-PEG-OX26) were prepared and chemico-physically characterized. These NPs were administered (intraperitoneal) in mice to define their ability to reach the brain. RESULTS Brain uptake of OX26-conjugated NPs is much higher than of unmodified NPs, because: long-circulating abilities (conferred by PEG), interaction between cationic Cs and brain endothelium negative charges and OX26 TfR receptor affinity. CONCLUSION Cs-PEG-OX26 NPs are promising drug delivery system to the CNS.", "which Surface functionalized with ?", "OX26", 211.0, 215.0], ["Abstract Melanotransferrin antibody (MA) and tamoxifen (TX) were conjugated on etoposide (ETP)-entrapped solid lipid nanoparticles (ETP-SLNs) to target the blood\u2013brain barrier (BBB) and glioblastom multiforme (GBM). MA- and TX-conjugated ETP-SLNs (MA\u2013TX\u2013ETP\u2013SLNs) were used to infiltrate the BBB comprising a monolayer of human astrocyte-regulated human brain-microvascular endothelial cells (HBMECs) and to restrain the proliferation of malignant U87MG cells. TX-grafted ETP-SLNs (TX\u2013ETP\u2013SLNs) significantly enhanced the BBB permeability coefficient for ETP and raised the fluorescent intensity of calcein-AM when compared with ETP-SLNs. In addition, surface MA could increase the BBB permeability coefficient for ETP about twofold. 
The viability of HBMECs was higher than 86%, suggesting a high biocompatibility of MA\u2013TX\u2013ETP-SLNs. Moreover, the efficiency in antiproliferation against U87MG cells was in the order of MA\u2013TX\u2013ETP-SLNs > TX\u2013ETP-SLNs > ETP-SLNs > SLNs. The capability of MA\u2013TX\u2013ETP-SLNs to target HBMECs and U87MG cells during internalization was verified by immunochemical staining of expressed melanotransferrin. MA\u2013TX\u2013ETP-SLNs can be a potent pharmacotherapy to deliver ETP across the BBB to GBM.", "which Surface functionalized with ?", "Melanotransferrin antibody (MA)", NaN, NaN], ["Alzheimer's disease is a growing concern in the modern world. As the currently available medications are not very promising, there is an increased need for the fabrication of newer drugs. Curcumin is a plant derived compound which has potential activities beneficial for the treatment of Alzheimer's disease. Anti-amyloid activity and anti-oxidant activity of curcumin is highly beneficial for the treatment of Alzheimer's disease. The insolubility of curcumin in water restricts its use to a great extend, which can be overcome by the synthesis of curcumin nanoparticles. In our work, we have successfully synthesized water-soluble PLGA coated- curcumin nanoparticles and characterized it using different techniques. As drug targeting to diseases of cerebral origin are difficult due to the stringency of blood-brain barrier, we have coupled the nanoparticle with Tet-1 peptide, which has the affinity to neurons and possess retrograde transportation properties. Our results suggest that curcumin encapsulated-PLGA nanoparticles are able to destroy amyloid aggregates, exhibit anti-oxidative property and are non-cytotoxic. The encapsulation of the curcumin in PLGA does not destroy its inherent properties and so, the PLGA-curcumin nanoparticles can be used as a drug with multiple functions in treating Alzheimer's disease proving it to be a potential therapeutic tool against this dreaded disease.", "which Surface functionalized with ?", "Tet-1 peptide", 865.0, 878.0], ["A brain drug delivery system for glioma chemotherapy based on transferrin-conjugated biodegradable polymersomes, Tf-PO-DOX, was made and evaluated with doxorubicin (DOX) as a model drug. Biodegradable polymersomes (PO) loaded with doxorubicin (DOX) were prepared by the nanoprecipitation method (PO-DOX) and then conjugated with transferrin (Tf) to yield Tf-PO-DOX with an average diameter of 107 nm and surface Tf molecule number per polymersome of approximately 35. Compared with PO-DOX and free DOX, Tf-PO-DOX demonstrated the strongest cytotoxicity against C6 glioma cells and the greatest intracellular delivery. It was shown in pharmacokinetic and brain distribution experiments that Tf-PO significantly enhanced brain delivery of DOX, especially the delivery of DOX into brain tumor cells. Pharmacodynamics results revealed a significant reduction of tumor volume and a significant increase of median survival time in the group of Tf-PO-DOX compared with those in saline control animals, animals treated with PO-DOX, and free DOX solution. By terminal deoxynucleotidyl transferase-mediated dUTP nick-end-labeling, Tf-PO-DOX could extensively make tumor cell apoptosis. 
These results indicated that Tf-PO-DOX could significantly enhance the intracellular delivery of DOX in glioma and the chemotherapeutic effect of DOX for glioma rats.", "which Surface functionalized with ?", "Transferrin (Tf)", NaN, NaN], ["Alzheimer's disease (AD) is the most common form of dementia, characterized by the formation of extracellular senile plaques and neuronal loss caused by amyloid \u03b2 (A\u03b2) aggregates in the brains of AD patients. Conventional strategies failed to treat AD in clinical trials, partly due to the poor solubility, low bioavailability and ineffectiveness of the tested drugs to cross the blood-brain barrier (BBB). Moreover, AD is a complex, multifactorial neurodegenerative disease; one-target strategies may be insufficient to prevent the processes of AD. Here, we designed novel kind of poly(lactide-co-glycolic acid) (PLGA) nanoparticles by loading with A\u03b2 generation inhibitor S1 (PQVGHL peptide) and curcumin to target the detrimental factors in AD development and by conjugating with brain targeting peptide CRT (cyclic CRTIGPSVC peptide), an iron-mimic peptide that targets transferrin receptor (TfR), to improve BBB penetration. The average particle size of drug-loaded PLGA nanoparticles and CRT-conjugated PLGA nanoparticles were 128.6 nm and 139.8 nm, respectively. The results of Y-maze and new object recognition test demonstrated that our PLGA nanoparticles significantly improved the spatial memory and recognition in transgenic AD mice. Moreover, PLGA nanoparticles remarkably decreased the level of A\u03b2, reactive oxygen species (ROS), TNF-\u03b1 and IL-6, and enhanced the activities of super oxide dismutase (SOD) and synapse numbers in the AD mouse brains. Compared with other PLGA nanoparticles, CRT peptide modified-PLGA nanoparticles co-delivering S1 and curcumin exhibited most beneficial effect on the treatment of AD mice, suggesting that conjugated CRT peptide, and encapsulated S1 and curcumin exerted their corresponding functions for the treatment.", "which Surface functionalized with ?", "Brain targeting peptide CRT (cyclic CRTIGPSVC peptide)", NaN, NaN], ["PURPOSE This study aimed to: (1) determine the relative efficiencies of topical and systemic absorption of drugs delivered by eyedrops to the anterior and posterior segments of the eye; (2) establish whether dexamethasone-cyclodextrin eyedrops deliver significant levels of drug to the retina and vitreous in the rabbit eye, and (3) compare systemic absorption following topical application to the eye versus intranasal or intravenous delivery. METHODS In order to distinguish between topical and systemic absorption in the eye, we applied 0.5% dexamethasone-cyclodextrin eyedrops to one (study) eye of rabbits and not to the contralateral (control) eye. Drug levels were measured in each eye. The study eye showed the result of the combination of topical and systemic absorption, whereas the control eye showed the result of systemic absorption only. Systemic absorption was also examined after intranasal and intravenous administration of the same dose of dexamethasone. RESULTS In the aqueous humour dexamethasone levels were 170 +/- 76 ng/g (mean +/- standard deviation) in the study eye and 6 +/- 2 ng/g in the control eye. Similar ratios were seen in the iris and ciliary body. In the retina the dexamethasone level was 33 +/- 7 ng/g in the study eye and 14 +/- 3 ng/g in the control eye. Similar ratios were seen in the vitreous humour. 
Systemic absorption was similar from ocular, intranasal and intravenous administration. CONCLUSIONS Absorption after topical application dominates in the anterior segment. Topical absorption also plays a significant role in delivering dexamethasone to the posterior segment of the rabbit eye. In medication administered to the retina, 40% of the drug reaches the retina via the systemic route and 60% via topical penetration. Dexamethasone-cyclodextrin eyedrops deliver a significant amount of drug to the rabbit retina.", "which Uses drug ?", "Dexamethasone", 208.0, 221.0], ["Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179\u00b122 nm were produced. Res was loaded with high EE of 73\u00b10.9% and DL content of 6.2\u00b10.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.", "which Uses drug ?", "Resveratrol", 316.0, 327.0], ["Abstract It is very challenging to treat brain cancer because of the blood\u2013brain barrier (BBB) restricting therapeutic drug or gene to access the brain. In this research project, angiopep-2 (ANG) was used as a brain-targeted peptide for preparing multifunctional ANG-modified poly(lactic-co-glycolic acid) (PLGA) nanoparticles (NPs), which encapsulated both doxorubicin (DOX) and epidermal growth factor receptor (EGFR) siRNA, designated as ANG/PLGA/DOX/siRNA. This system could efficiently deliver DOX and siRNA into U87MG cells leading to significant cell inhibition, apoptosis and EGFR silencing in vitro. It demonstrated that this drug system was capable of penetrating the BBB in vivo, resulting in more drugs accumulation in the brain. 
The animal study using the brain orthotopic U87MG glioma xenograft model indicated that the ANG-targeted co-delivery of DOX and EGFR siRNA resulted in not only the prolongation of the life span of the glioma-bearing mice but also an obvious cell apoptosis in glioma tissue.", "which Uses drug ?", "Doxorubicin", 358.0, 369.0], ["Abstract Galantamine hydrobromide, a promising acetylcholinesterase inhibitor is reported to be associated with cholinergic side effects. Its poor brain penetration results in lower bioavailability to the target site. With an aim to overcome these limitations, solid\u2013lipid nanoparticulate formulation of galantamine hydrobromide was developed employing biodegradable and biocompatible components. The selected galantamine hydrobromide-loaded solid\u2013lipid nanoparticles offered nanocolloidal with size lower than 100 nm and maximum drug entrapment 83.42 \u00b1 0.63%. In vitro drug release from these spherical drug-loaded nanoparticles was observed to be greater than 90% for a period of 24 h in controlled manner. In vivo evaluations demonstrated significant memory restoration capability in cognitive deficit rats in comparison with naive drug. The developed carriers offered approximately twice bioavailability to that of plain drug. Hence, the galantamine hydrobromide-loaded solid\u2013lipid nanoparticles can be a promising vehicle for safe and effective delivery especially in disease like Alzheimer\u2019s.", "which Uses drug ?", "Galantamine", 9.0, 20.0], ["Effectiveness of CNS-acting drugs depends on the localization, targeting, and capacity to be transported through the blood\u2013brain barrier (BBB) which can be achieved by designing brain-targeting delivery vectors. Hence, the objective of this study was to screen the formulation and process variables affecting the performance of sertraline (Ser-HCl)-loaded pegylated and glycosylated liposomes. The prepared vectors were characterized for Ser-HCl entrapment, size, surface charge, release behavior, and in vitro transport through the BBB. Furthermore, the compatibility among liposomal components was assessed using SEM, FTIR, and DSC analysis. Through a thorough screening study, enhancement of Ser-HCl entrapment, nanosized liposomes with low skewness, maximized stability, and controlled drug leakage were attained. The solid-state characterization revealed remarkable interaction between Ser-HCl and the charging agent to determine drug entrapment and leakage. Moreover, results of liposomal transport through mouse brain endothelial polyoma cells demonstrated greater capacity of the proposed glycosylated liposomes to target the cerebellar due to its higher density of GLUT1 and higher glucose utilization. This transport capacity was confirmed by the inhibiting action of both cytochalasin B and phenobarbital. Using C6 glioma cells model, flow cytometry, time-lapse live cell imaging, and in vivo NIR fluorescence imaging demonstrated that optimized glycosylated liposomes can be transported through the BBB by classical endocytosis, as well as by specific transcytosis. 
In conclusion, the current study proposed a thorough screening of important formulation and process variabilities affecting brain-targeting liposomes for further scale-up processes.", "which Uses drug ?", "Sertraline", 328.0, 338.0], ["Purpose: To develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel. Methods: Chitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231). Results: These studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX. Conclusion: These advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.", "which Uses drug ?", "Paclitaxel", 174.0, 184.0], ["Poor delivery of insoluble anticancer drugs has so far precluded their clinical application. In this study, we developed a tumor-targeting delivery system for insoluble drug (paclitaxel, PTX) by PEGylated O-carboxymethyl-chitosan (CMC) nanoparticles grafted with cyclic Arg-Gly-Asp (RGD) peptide. To improve the loading efficiency (LE), we combined O/W/O double emulsion method with temperature-programmed solidification technique and controlled PTX within the matrix network as in situ nanocrystallite form. Furthermore, these CMC nanoparticles were PEGylated, which could reduce recognition by the reticuloendothelial system (RES) and prolong the circulation time in blood. In addition, further graft of cyclic RGD peptide at the terminal of PEG chain endowed these nanoparticles with higher affinity to in vitro Lewis lung carcinoma (LLC) cells and in vivo tumor tissue. These outstanding properties enabled as-designed nanodevice to exhibit a greater tumor growth inhibition effect and much lower side effects over the commercial formulation Taxol.", "which Uses drug ?", "Paclitaxel", 175.0, 185.0], ["\nBackground:\n Oral administrations of microparticles (MPs) and nanoparticles (NPs) have\nbeen widely employed as therapeutic approaches for the treatment of ulcerative colitis (UC). However,\nno previous study has comparatively investigated the therapeutic efficacies of MPs and NPs.\n\n\nMethods:\n In this study, curcumin (CUR)-loaded MPs (CUR-MPs) and CUR-loaded NPs (CUR-NPs)\nwere prepared using a single water-in-oil emulsion solvent evaporation technique. 
Their therapeutic\noutcomes against UC were further comparatively studied.\n\n\nResults:\n The resultant spherical MPs and NPs exhibited slightly negative zeta-potential with average\nparticle diameters of approximately 1.7 \u00b5m and 270 nm, respectively. It was found that NPs exhibited\na much higher CUR release rate than MPs within the same period of investigation. In vivo experiments\ndemonstrated that oral administration of CUR-MPs and CUR-NPs reduced the symptoms\nof inflammation in a UC mouse model induced by dextran sulfate sodium. Importantly, CUR-NPs\nshowed much better therapeutic outcomes in alleviating UC compared with CUR-MPs.\n\n\nConclusion:\n NPs can improve the anti-inflammatory activity of CUR by enhancing the drug release\nand cellular uptake efficiency, in comparison with MPs. Thus, they could be exploited as a promising\noral drug delivery system for effective UC treatment.\n", "which Uses drug ?", "Curcumin", 415.0, 423.0], ["Abstract Context: Glioma is a common malignant brain tumor originating in the central nervous system. Efficient delivery of therapeutic agents to the cells and tissues is a difficult challenge. Co-delivery of anticancer drugs into the cancer cells or tissues by multifunctional nanocarriers may provide a new paradigm in cancer treatment. Objective: In this study, solid lipid nanoparticles (SLNs) and nanostructured lipid carriers (NLCs) were constructed for co-delivery of vincristine (VCR) and temozolomide (TMZ) to develop the synergetic therapeutic action of the two drugs. The antitumor effects of these two systems were compared to provide a better choice for gliomatosis cerebri treatment. Methods: VCR- and TMZ-loaded SLNs (VT-SLNs) and NLCs (VT-NLCs) were formulated. Their particle size, zeta potential, drug encapsulation efficiency (EE) and drug loading capacity were evaluated. The single TMZ-loaded SLNs and NLCs were also prepared as contrast. Anti-tumor efficacies of the two kinds of carriers were evaluated on U87 malignant glioma cells and mice bearing malignant glioma model. Results: Significantly better glioma inhibition was observed on NLCs formulations than SLNs, and dual drugs displayed the highest antitumor efficacy in vivo and in vitro than all the other formulations used. Conclusion: VT-NLCs can deliver VCR and TMZ into U87MG cells more efficiently, and inhibition efficacy is higher than VT-SLNs. This dual drugs-loaded NLCs could be an outstanding drug delivery system to achieve excellent therapeutic efficiency for the treatment of malignant gliomatosis cerebri.", "which Uses drug ?", "Vincristine", 475.0, 486.0], ["Although in vitro-in vivo correlations (IVIVCs) are commonly pursued for modified-release products, there are limited reports of successful IVIVCs for immediate-release (IR) formulations. This manuscript details the development of a Multiple Level C IVIVC for the amorphous solid dispersion formulation of suvorexant, a BCS class II compound, and its application to establishing dissolution specifications and in-process controls. Four different 40 mg batches were manufactured at different tablet hardnesses to produce distinct dissolution profiles. These batches were evaluated in a relative bioavailability clinical study in healthy volunteers. Although no differences were observed for the total exposure (AUC) of the different batches, a clear relationship between dissolution and Cmax was observed. 
A validated Multiple Level C IVIVC against Cmax was developed for the 10, 15, 20, 30, and 45 min dissolution time points and the tablet disintegration time. The relationship established between tablet tensile strength and dissolution was subsequently used to inform suitable tablet hardness ranges within acceptable Cmax limits. This is the first published report for a validated Multiple Level C IVIVC for an IR solid dispersion formulation demonstrating how this approach can facilitate Quality by Design in formulation development and help toward clinically relevant specifications and in-process controls.", "which Uses drug ?", "Suvorexant", 306.0, 316.0], ["Bioimaging and therapeutic agents accumulated in ectopic tumors following intravenous administration of hybrid nanocrystals to tumor-bearing mice. Solid, nanosized paclitaxel crystals physically incorporated fluorescent molecules throughout the crystal lattice and retained fluorescent properties in the solid state. Hybrid nanocrystals were significantly localized in solid tumors and remained in the tumor for several days. An anticancer effect is expected of these hybrid nanocrystals.", "which Uses drug ?", "Paclitaxel", 164.0, 174.0], ["Engineered cocrystals offer an alternative solid drug form with tailored physicochemical properties. Interestingly, although cocrystals provide many new possibilities, they also present new challenges, particularly in regard to their design and large-scale manufacture. Current literature has primarily focused on the preparation and characterization of novel cocrystals typically containing only the drug and coformer, leaving the subsequent formulation less explored. In this paper we propose, for the first time, the use of hot melt extrusion for the mechanochemical synthesis of pharmaceutical cocrystals in the presence of a meltable binder. In this approach, we examine excipients that are amenable to hot melt extrusion, forming a suspension of cocrystal particulates embedded in a pharmaceutical matrix. Using ibuprofen and isonicotinamide as a model cocrystal reagent pair, formulations extruded with a small molecular matrix carrier (xylitol) were examined to be intimate mixtures wherein the newly formed cocrystal particulates were physically suspended in a matrix. With respect to formulations extruded using polymeric carriers (Soluplus and Eudragit EPO, respectively), however, there was no evidence within PXRD patterns of either crystalline ibuprofen or the cocrystal. Importantly, it was established in this study that an appropriate carrier for a cocrystal reagent pair during HME processing should satisfy certain criteria including limited interaction with parent reagents and cocrystal product, processing temperature sufficiently lower than the onset of cocrystal Tm, low melt viscosity, and rapid solidification upon cooling.", "which Uses drug ?", "isonicotinamide", 832.0, 847.0], ["A carrier-free method for delivery of a hydrophobic drug in its pure form, using nanocrystals (nanosized crystals), is proposed. To demonstrate this technique, nanocrystals of a hydrophobic photosensitizing anticancer drug, 2-devinyl-2-(1-hexyloxyethyl)pyropheophorbide (HPPH), have been synthesized using the reprecipitation method. The resulting drug nanocrystals were monodispersed and stable in aqueous dispersion, without the necessity of an additional stabilizer (surfactant). As shown by confocal microscopy, these pure drug nanocrystals were taken up by the cancer cells with high avidity. 
Though the fluorescence and photodynamic activity of the drug were substantially quenched in the form of nanocrystals in aqueous suspension, both these characteristics were recovered under in vitro and in vivo conditions. This recovery of drug activity and fluorescence is possibly due to the interaction of nanocrystals with serum albumin, resulting in conversion of the drug nanocrystals into the molecular form. This was confirmed by demonstrating similar recovery in presence of fetal bovine serum (FBS) or bovine serum albumin (BSA). Under similar treatment conditions, the HPPH in nanocrystal form or in 1% Tween-80/water formulation showed comparable in vitro and in vivo efficacy.", "which Uses drug ?", "2-devinyl-2-(1-hexyloxyethyl)pyropheophorbide (HPPH)", NaN, NaN], ["Abstract The aim of the current study is to develop amorphous solid dispersion (SD) via hot melt extrusion technology to improve the solubility of a water-insoluble compound, felodipine (FEL). The solubility was dramatically increased by preparation of amorphous SDs via hot-melt extrusion with an amphiphilic polymer, Soluplus\u00ae (SOL). FEL was found to be miscible with SOL by calculating the solubility parameters. The solubility of FEL within SOL was determined to be in the range of 6.2\u20139.9% (w/w). Various techniques were applied to characterize the solid-state properties of the amorphous SDs. These included Fourier Transform Infrared spectroscopy and Raman spectroscopy to detect the formation of hydrogen bonding between the drug and the polymer. Scanning electron microscopy was performed to study the morphology of the SDs. Among all the hot-melt extrudates, FEL was found to be molecularly dispersed within the polymer matrix for the extrudates containing 10% drug, while few small crystals were detected in the 30 and 50% extrudates. In conclusion, solubility of FEL was enhanced while a homogeneous SD was achieved for 10% drug loading.", "which Uses drug ?", "Felodipine", 175.0, 185.0], ["Bladder cancer (BC) is a very common cancer. Nonmuscle-invasive bladder cancer (NMIBC) is the most common type of bladder cancer. After postoperative tumor resection, chemotherapy intravesical instillation is recommended as a standard treatment to significantly reduce recurrences. Nanomedicine-mediated delivery of a chemotherapeutic agent targeting cancer could provide a solution to obtain longer residence time and high bioavailability of an anticancer drug. The approach described here provides a nanomedicine with sustained and prolonged delivery of paclitaxel and enhanced therapy of intravesical bladder cancer, which is paclitaxel/chitosan (PTX/CS) nanosuspensions (NSs). The positively charged PTX/CS NSs exhibited a rod-shaped morphology with a mean diameter about 200 nm. They have good dispersivity in water without any protective agents, and the positively charged properties make them easy to be adsorbed on the inner mucosa of the bladder through electrostatic adsorption. PTX/CS NSs also had a high drug loading capacity and can maintain sustained release of paclitaxel which could be prolonged over 10 days. Cell experiments in vitro demonstrated that PTX/CS NSs had good biocompatibility and effective bladder cancer cell proliferation inhibition. The significant anticancer efficacy against intravesical bladder cancer was verified by an in situ bladder cancer model. 
The paclitaxel/chitosan nanosuspensions could provide sustained delivery of chemotherapeutic agents with significant anticancer efficacy against intravesical bladder cancer.", "which Uses drug ?", "Paclitaxel", 556.0, 566.0], ["Abstract Melanotransferrin antibody (MA) and tamoxifen (TX) were conjugated on etoposide (ETP)-entrapped solid lipid nanoparticles (ETP-SLNs) to target the blood\u2013brain barrier (BBB) and glioblastoma multiforme (GBM). MA- and TX-conjugated ETP-SLNs (MA\u2013TX\u2013ETP\u2013SLNs) were used to infiltrate the BBB comprising a monolayer of human astrocyte-regulated human brain-microvascular endothelial cells (HBMECs) and to restrain the proliferation of malignant U87MG cells. TX-grafted ETP-SLNs (TX\u2013ETP\u2013SLNs) significantly enhanced the BBB permeability coefficient for ETP and raised the fluorescent intensity of calcein-AM when compared with ETP-SLNs. In addition, surface MA could increase the BBB permeability coefficient for ETP about twofold. The viability of HBMECs was higher than 86%, suggesting a high biocompatibility of MA\u2013TX\u2013ETP-SLNs. Moreover, the efficiency in antiproliferation against U87MG cells was in the order of MA\u2013TX\u2013ETP-SLNs > TX\u2013ETP-SLNs > ETP-SLNs > SLNs. The capability of MA\u2013TX\u2013ETP-SLNs to target HBMECs and U87MG cells during internalization was verified by immunochemical staining of expressed melanotransferrin. MA\u2013TX\u2013ETP-SLNs can be a potent pharmacotherapy to deliver ETP across the BBB to GBM.", "which Uses drug ?", "Etoposide", 79.0, 88.0], ["Engineered cocrystals offer an alternative solid drug form with tailored physicochemical properties. Interestingly, although cocrystals provide many new possibilities, they also present new challenges, particularly in regard to their design and large-scale manufacture. Current literature has primarily focused on the preparation and characterization of novel cocrystals typically containing only the drug and coformer, leaving the subsequent formulation less explored. In this paper we propose, for the first time, the use of hot melt extrusion for the mechanochemical synthesis of pharmaceutical cocrystals in the presence of a meltable binder. In this approach, we examine excipients that are amenable to hot melt extrusion, forming a suspension of cocrystal particulates embedded in a pharmaceutical matrix. Using ibuprofen and isonicotinamide as a model cocrystal reagent pair, formulations extruded with a small molecular matrix carrier (xylitol) were examined to be intimate mixtures wherein the newly formed cocrystal particulates were physically suspended in a matrix. With respect to formulations extruded using polymeric carriers (Soluplus and Eudragit EPO, respectively), however, there was no evidence within PXRD patterns of either crystalline ibuprofen or the cocrystal. Importantly, it was established in this study that an appropriate carrier for a cocrystal reagent pair during HME processing should satisfy certain criteria including limited interaction with parent reagents and cocrystal product, processing temperature sufficiently lower than the onset of cocrystal Tm, low melt viscosity, and rapid solidification upon cooling.", "which Uses drug ?", "Ibuprofen", 818.0, 827.0], ["Abstract Background: Delivery of drugs to brain is a subtle task in the therapy of many severe neurological disorders. Solid lipid nanoparticles (SLN) easily diffuse the blood\u2013brain barrier (BBB) due to their lipophilic nature. 
Furthermore, ligand conjugation on SLN surface enhances the targeting efficiency. Lactoferrin (Lf) conjugated SLN system is first time attempted for effective brain targeting in this study. Purpose: Preparation of Lf-modified docetaxel (DTX)-loaded SLN for proficient delivery of DTX to brain. Methods: DTX-loaded SLN were prepared using emulsification and solvent evaporation method and conjugation of Lf on SLN surface (C-SLN) was attained through carbodiimide chemistry. These lipidic nanoparticles were evaluated by DLS, AFM, FTIR, XRD techniques and in vitro release studies. Colloidal stability study was performed in biologically simulated environment (normal saline and serum). These lipidic nanoparticles were further evaluated for its targeting mechanism for uptake in brain tumour cells and brain via receptor saturation studies and distribution studies in brain, respectively. Results: Particle size of lipidic nanoparticles was found to be optimum. Surface morphology (zeta potential, AFM) and surface chemistry (FTIR) confirmed conjugation of Lf on SLN surface. Cytotoxicity studies revealed augmented apoptotic activity of C-SLN than SLN and DTX. Enhanced cytotoxicity was demonstrated by receptor saturation and uptake studies. Brain concentration of DTX was elevated significantly with C-SLN than marketed formulation. Conclusions: It is evident from the cytotoxicity, uptake that SLN has potential to deliver drug to brain than marketed formulation but conjugating Lf on SLN surface (C-SLN) further increased the targeting potential for brain tumour. Moreover, brain distribution studies corroborated the use of C-SLN as a viable vehicle to target drug to brain. Hence, C-SLN was demonstrated to be a promising DTX delivery system to brain as it possessed remarkable biocompatibility, stability and efficacy than other reported delivery systems.", "which Uses drug ?", "Docetaxel", 453.0, 462.0], ["This study aimed to improve skin permeation and deposition of psoralen by using ethosomes and to investigate real-time drug release in the deep skin in rats. We used a uniform design method to evaluate the effects of different ethosome formulations on entrapment efficiency and drug skin deposition. Using in vitro and in vivo methods, we investigated skin penetration and release from psoralen-loaded ethosomes in comparison with an ethanol tincture. In in vitro studies, the use of ethosomes was associated with a 6.56-fold greater skin deposition of psoralen than that achieved with the use of the tincture. In vivo skin microdialysis showed that the peak concentration and area under the curve of psoralen from ethosomes were approximately 3.37 and 2.34 times higher, respectively, than those of psoralen from the tincture. Moreover, it revealed that the percutaneous permeability of ethosomes was greater when applied to the abdomen than when applied to the chest or scapulas. Enhanced permeation and skin deposition of psoralen delivered by ethosomes may help reduce toxicity and improve the efficacy of long-term psoralen treatment.", "which Uses drug ?", "Psoralen", 62.0, 70.0], ["In this work we describe the development and characterization of a new formulation of insulin (INS). Insulin was complexed with cyclodextrins (CD) in order to improve its solubility and stability being available as a dry powder, after encapsulation into poly (D,L-lactic-co-glycolic acid) (PLGA) microspheres. The complex INS : CD was encapsulated into microspheres in order to obtain particles with an average diameter between 2 and 6 microm. 
This system was able to induce significant reduction of the plasma glucose level in two rodent models, normal mice and diabetic rats, after intratracheal administration.", "which Uses drug ?", "Insulin", 86.0, 93.0], ["Cyclodextrins are cylindrical oligosaccharides with a lipophilic central cavity and hydrophilic outer surface. They can form water-soluble complexes with lipophilic drugs, which 'hide' in the cavity. Cyclodextrins can be used to form aqueous eye drop solutions with lipophilic drugs, such as steroids and some carbonic anhydrase inhibitors. The cyclodextrins increase the water solubility of the drug, enhance drug absorption into the eye, improve aqueous stability and reduce local irritation. Cyclodextrins are useful excipients in eye drop formulations of various drugs, including steroids of any kind, carbonic anhydrase inhibitors, pilocarpine, cyclosporins, etc. Their use in ophthalmology has already begun and is likely to expand the selection of drugs available as eye drops. In this paper we review the properties of cyclodextrins and their application in eye drop formulations, of which their use in the formulation of dexamethasone eye drops is an example. Cyclodextrins have been used to formulate eye drops containing corticosteroids, such as dexamethasone, with levels of concentration and ocular absorption which, according to human and animal studies, are many times those seen with presently available formulations. Cyclodextrin-based dexamethasone eye drops are well tolerated in the eye and seem to provide a higher degree of bioavailability and clinical efficiency than the steroid eye drop formulations presently available. Such formulations offer the possibility of once per day application of corticosteroid eye drops after eye surgery, and more intensive topical steroid treatment in severe inflammation. While cyclodextrins have been known for more than a century, their use in ophthalmology is just starting. Cyclodextrins are useful excipients in eye drop formulations for a variety of lipophilic drugs. They will facilitate eye drop formulations for drugs that otherwise might not be available for topical use, while improving absorption and stability and decreasing local irritation.", "which Uses drug ?", "Dexamethasone", 930.0, 943.0], ["A brain drug delivery system for glioma chemotherapy based on transferrin-conjugated biodegradable polymersomes, Tf-PO-DOX, was made and evaluated with doxorubicin (DOX) as a model drug. Biodegradable polymersomes (PO) loaded with doxorubicin (DOX) were prepared by the nanoprecipitation method (PO-DOX) and then conjugated with transferrin (Tf) to yield Tf-PO-DOX with an average diameter of 107 nm and surface Tf molecule number per polymersome of approximately 35. Compared with PO-DOX and free DOX, Tf-PO-DOX demonstrated the strongest cytotoxicity against C6 glioma cells and the greatest intracellular delivery. It was shown in pharmacokinetic and brain distribution experiments that Tf-PO significantly enhanced brain delivery of DOX, especially the delivery of DOX into brain tumor cells. Pharmacodynamics results revealed a significant reduction of tumor volume and a significant increase of median survival time in the group of Tf-PO-DOX compared with those in saline control animals, animals treated with PO-DOX, and free DOX solution. By terminal deoxynucleotidyl transferase-mediated dUTP nick-end-labeling, Tf-PO-DOX could extensively make tumor cell apoptosis. 
These results indicated that Tf-PO-DOX could significantly enhance the intracellular delivery of DOX in glioma and the chemotherapeutic effect of DOX for glioma rats.", "which Uses drug ?", "Doxorubicin", 152.0, 163.0], ["Abstract The influence of hydroxypropyl \u03b2-cyclodextrin (HP\u03b2CD) on the corneal permeation of pilocarpine nitrate was investigated by an in vitro permeability study using isolated rabbit cornea. Pupillary-response pattern to pilocarpine nitrate with and without HP\u03b2CD was examined in rabbit eye. Corneal permeation of pilocarpine nitrate was found to be four times higher after adding HP\u03b2CD into the formulation. The reduction of pupil diameter (miosis) by pilocarpine nitrate was significantly increased as a result of HP\u03b2CD addition into the simple aqueous solution of the active substance. The highest miotic response was obtained with the formulation prepared in a vehicle of Carbopol\u00ae 940. It is suggested that ocular bioavailability of pilocarpine nitrate could be improved by the addition of HP\u03b2CD.", "which Uses drug ?", "Pilocarpine", 92.0, 103.0], ["Alzheimer's disease (AD) is the most common form of dementia, characterized by the formation of extracellular senile plaques and neuronal loss caused by amyloid \u03b2 (A\u03b2) aggregates in the brains of AD patients. Conventional strategies failed to treat AD in clinical trials, partly due to the poor solubility, low bioavailability and ineffectiveness of the tested drugs to cross the blood-brain barrier (BBB). Moreover, AD is a complex, multifactorial neurodegenerative disease; one-target strategies may be insufficient to prevent the processes of AD. Here, we designed a novel kind of poly(lactide-co-glycolic acid) (PLGA) nanoparticles by loading with A\u03b2 generation inhibitor S1 (PQVGHL peptide) and curcumin to target the detrimental factors in AD development and by conjugating with brain targeting peptide CRT (cyclic CRTIGPSVC peptide), an iron-mimic peptide that targets transferrin receptor (TfR), to improve BBB penetration. The average particle size of drug-loaded PLGA nanoparticles and CRT-conjugated PLGA nanoparticles were 128.6 nm and 139.8 nm, respectively. The results of Y-maze and new object recognition test demonstrated that our PLGA nanoparticles significantly improved the spatial memory and recognition in transgenic AD mice. Moreover, PLGA nanoparticles remarkably decreased the level of A\u03b2, reactive oxygen species (ROS), TNF-\u03b1 and IL-6, and enhanced the activities of superoxide dismutase (SOD) and synapse numbers in the AD mouse brains. Compared with other PLGA nanoparticles, CRT peptide modified-PLGA nanoparticles co-delivering S1 and curcumin exhibited most beneficial effect on the treatment of AD mice, suggesting that conjugated CRT peptide, and encapsulated S1 and curcumin exerted their corresponding functions for the treatment.", "which Uses drug ?", "Curcumin", 698.0, 706.0], ["The development of multidrug resistance (due to drug efflux by P-glycoproteins) is a major drawback with the use of paclitaxel (PTX) in the treatment of cancer. The rationale behind this study is to prepare PTX nanoparticles (NPs) for the reversal of multidrug resistance based on the fact that PTX loaded into NPs is not recognized by P-glycoproteins and hence is not effluxed out of the cell. Also, the intracellular penetration of the NPs could be enhanced by anchoring transferrin (Tf) on the PTX-PLGA-NPs. 
PTX-loaded PLGA NPs (PTX-PLGA-NPs), Pluronic\u00aeP85-coated PLGA NPs (P85-PTX-PLGA-NPs), and Tf-anchored PLGA NPs (Tf-PTX-PLGA-NPs) were prepared and evaluated for cytotoxicity and intracellular uptake using C6 rat glioma cell line. A significant increase in cytotoxicity was observed in the order of Tf-PTX-PLGA-NPs > P85-PTX-PLGA-NPs > PTX-PLGA-NPs in comparison to drug solution. In vivo biodistribution on male Sprague\u2013Dawley rats bearing C6 glioma (subcutaneous) showed higher tumor PTX concentrations in animals administered with PTX-NPs compared to drug solution.", "which Uses drug ?", "Paclitaxel", 116.0, 126.0], ["ABSTRACT Background: Actually, no drugs provide therapeutic benefit to approximately one-third of depressed patients. Depression is predicted to become the first global disease by 2030. So, new therapeutic interventions are imperative. Research design and methods: Venlafaxine-loaded poly(lactic-co-glycolic acid) (PLGA) nanoparticles (NPs) were surface functionalized with two ligands against transferrin receptor to enhance access to brain. An in vitro blood\u2013brain barrier model using hCMEC/D3 cell line was developed to evaluate permeability. In vivo biodistribution studies were performed using C57/bl6 mice. Particles were administered intranasal and main organs were analyzed. Results: Particles were obtained as a lyophilized powder easily to re-suspend. Internalization and permeability studies showed the following cell association sequence: TfRp-NPs>Tf-NPs>plain NPs. Permeability studies also showed that encapsulated VLF was not affected by P-gP pump efflux increasing its concentration in the basolateral side after 24 h. In vivo studies showed that 25% of plain NPs reach the brain after 30 min of one intranasal administration while less than 5% of functionalized NPs get the target. Conclusions: Plain NPs showed the highest ability to reach the brain vs. functionalized NPs after 30 min by intranasal administration. We suggest plain NPs probably travel via direct nose-to-brain route whereas functionalized NPs reach the brain by receptor-mediated endocytosis.", "which Uses drug ?", "Venlafaxine", 265.0, 276.0], ["The effects of chemically modified cyclodextrins on the nasal absorption of buserelin, an agonist of luteinizing hormone-releasing hormone, were investigated in anesthetized rats. Of the cyclodextrins tested, dimethyl-beta-cyclodextrin (DM-beta-CyD) was the most effective in improving the rate and extent of the nasal bioavailability of buserelin. Fluorescence spectroscopic studies indicated that the cyclodextrins formed inclusion complexes with buserelin, which may reduce the diffusibility of buserelin across the nasal epithelium and may participate in the protection of the peptide against enzymatic degradation in the nasal mucosa. Additionally, the cyclodextrins increased the permeability of the nasal mucosa, which was the primary determinant based on the multiple regression analysis of the nasal absorption enhancement of buserelin. Scanning electron microscopic observations revealed that DM-beta-CyD induced no remarkable changes in the surface morphology of the nasal mucosa at a minimal concentration necessary to achieve substantial absorption enhancement. The present results suggest that DM-beta-CyD could improve the nasal bioavailability of buserelin and is well-tolerated by the nasal mucosa of the rat.", "which Uses drug ?", "Buserelin ", 498.0, 508.0], ["Glucagon-like peptide-1 (GLP-1) receptor activation in the brain provides neuroprotection. 
Exendin-4 (Ex-4), a GLP-1 analog, has seen limited clinical usage because of its short half-life. We developed long-lasting Ex-4-loaded poly(D,L-lactide-co-glycolide) microspheres (PEx-4) and explored its neuroprotective potential against cerebral ischemia in diabetic rats. Compared with Ex-4, PEx-4 in the gradually degraded microspheres sustained higher Ex-4 levels in the plasma and cerebrospinal fluid for at least 2 weeks and improved diabetes-induced glycemia after a single subcutaneous administration (20 \u03bcg/day). Ten minutes of bilateral carotid artery occlusion (CAO) combined with hemorrhage-induced hypotension (around 30 mm Hg) significantly decreased cerebral blood flow and microcirculation in male Wistar rats subjected to streptozotocin-induced diabetes. CAO increased cortical O2\u2212 levels by chemiluminescence amplification and prefrontal cortex edema by T2-weighted magnetic resonance imaging analysis. CAO significantly increased aquaporin 4 and glial fibrillary acidic protein expression and led to cognition deficits. CAO downregulated phosphorylated Akt/endothelial nitric oxide synthase (p-Akt/p-eNOS) signaling and enhanced nuclear factor (NF)-\u03baBp65/intercellular adhesion molecule-1 (ICAM-1) expression, endoplasmic reticulum (ER) stress, and apoptosis in the cerebral cortex. PEx-4 was more effective than Ex-4 to improve CAO-induced oxidative injury and cognitive deficits. The neuroprotection provided by PEx-4 was through p-Akt/p-eNOS pathways, which suppressed CAO-enhanced NF-\u03baB/ICAM-1 signaling, ER stress, and apoptosis.", "which Uses drug ?", "exendin-4", 91.0, 100.0], ["The objective of the present study was to investigate the effects of processing variables and formulation factors on the characteristics of hot-melt extrudates containing a copolymer (Kollidon\u00ae VA 64). Nifedipine was used as a model drug in all of the extrudates. Differential scanning calorimetry (DSC) was utilized on the physical mixtures and melts of varying drug\u2013polymer concentrations to study their miscibility. The drug\u2013polymer binary mixtures were studied for powder flow, drug release, and physical and chemical stabilities. The effects of moisture absorption on the content uniformity of the extrudates were also studied. Processing the materials at lower barrel temperatures (115\u2013135\u00b0C) and higher screw speeds (50\u2013100 rpm) exhibited higher post-processing drug content (~99\u2013100%). DSC and X-ray diffraction studies confirmed that melt extrusion of drug\u2013polymer mixtures led to the formation of solid dispersions. Interestingly, the extrusion process also enhanced the powder flow characteristics, which occurred irrespective of the drug load (up to 40% w/w). Moreover, the content uniformity of the extrudates, unlike the physical mixtures, was not sensitive to the amount of moisture absorbed. The extrusion conditions did not influence drug release from the extrudates; however, release was greatly affected by the drug loading. Additionally, the drug release from the physical mixture of nifedipine\u2013Kollidon\u00ae VA 64 was significantly different when compared to the corresponding extrudates (f2 = 36.70). The extrudates exhibited both physical and chemical stabilities throughout the period of study. 
Overall, hot-melt extrusion technology in combination with Kollidon\u00ae VA 64 produced extrudates capable of higher drug loading, with enhanced flow characteristics, and excellent stability.", "which Uses drug ?", "Nifedipine", 202.0, 212.0], ["ABSTRACT Targeted airway delivery of antifungals as prophylaxis against invasive aspergillosis may lead to high lung drug concentrations while avoiding toxicities associated with systemically administered agents. We evaluated the effectiveness of aerosolizing the intravenous formulation of voriconazole as prophylaxis against invasive pulmonary aspergillosis caused by Aspergillus fumigatus in an established murine model. Inhaled voriconazole significantly improved survival and limited the extent of invasive disease, as assessed by histopathology, compared to control and amphotericin B treatments.", "which Uses drug ?", "Voriconazole", 291.0, 303.0], ["The insulin administration by pulmonary route has been investigated in the last years with good perspectives as alternative for parenteral administration. However, it has been reported that insulin absorption after pulmonary administration is limited by various factors. Moreover, in the related studies one daily injection of long-acting insulin was necessary for a correct glycemic control. To abolish the insulin injection, the present study aimed to develop a new formulation for prolonged pulmonary insulin delivery based on the encapsulation of an insulin:dimethyl-\u03b2-cyclodextrin (INS:DM-\u03b2-CD) complex into PLGA microspheres. The molar ratio of insulin/cyclodextrin in the complex was equal to 1:5. The particles were obtained by the w/o/w solvent evaporation method. The inner aqueous phase of the w/o/w multiple emulsion contained the INS:DM-\u03b2-CD complex. The characteristics of the INS:DM-\u03b2-CD complex obtained were assessed by 1H-NMR spectroscopy and Circular Dichroism study. The average diameter of the microspheres prepared, evaluated by laser diffractometry, was 2.53 \u00b1 1.8 \u00b5m and the percentage of insulin loading was 14.76 \u00b1 1.1. The hypoglycemic response after intratracheal administration (3.0 I.U. kg\u22121) of INS:DM-\u03b2-CD complex-loaded microspheres to diabetic rats indicated an efficient and prolonged release of the hormone compared with others insulin formulations essayed.", "which Uses drug ?", "Insulin", 4.0, 11.0], ["The interaction of acetazolamide with beta-cyclodextrin, (beta-CD), dimethyl-beta-cyclodextrin (DM-beta-CD) and trimethyl-beta-cyclodextrin (TM-beta-CD) was monitored spectrophotometrically. The results revealed formation of equimolar complexes. The apparent solubility of acetazolamide in water was found to increase linearly with increasing CD concentration. The effect of CDs on the permeation of acetazolamide through semi-permeable membranes and the topical delivery of acetazolamide was investigated. Maximum acetazolamide penetration was obtained when just enough CD was used to keep all acetazolamide in solution. For an acetazolamide concentration of 10 mg/ml, the optimum CD concentration appeared to be 3.5 mmol/l for beta-CD, 2.8 mmol/l for TM-beta-CD and 6.0 mmol/l for DM-beta-CD. The effect of CDs on the bioavailability of acetazolamide was assessed by measuring the intraocular pressure in rabbits. 
The results indicated that CDs have a significant influence on the biological performance of the drug leading to augmentation in its intensity of action and bioavailability as well as prolongation in its duration of action.", "which Uses drug ?", "Acetazolamide", 19.0, 32.0], ["The aim of this article is to prepare and characterize inhalable dry powders of recombinant human growth hormone (rhGH), and assess their efficacy for systemic delivery of the protein in rats. The powders were prepared by spray drying using dimethyl-beta-cyclodextrin (DMbetaCD) at different molar ratios in the initial feeds. Size exclusion chromatography was performed in order to determine protecting effect of DMbetaCD on the rhGH aggregation during spray drying. By increasing the concentration of DMbetaCD, rhGH aggregation was decreased from 9.67 (in the absence of DMbetaCD) to 0.84% (using DMbetaCD at 1000 molar ratio in the spray solution). The aerosol performance of the spray dried (SD) powders was evaluated using Andersen cascade impactor. Fine particle fraction values of 53.49%, 33.40%, and 23.23% were obtained using DMbetaCD at 10, 100, and 1000 molar ratio, respectively. In vivo studies showed the absolute bioavailability of 25.38%, 76.52%, and 63.97% after intratracheal insufflation of the powders produced after spray drying of the solutions containing DMbetaCD at 10, 100, and 1000 molar ratio, respectively in rat. In conclusion, appropriate cyclodextrin concentration was achieved considering the protein aggregation and aerosol performance of the SD powders and the systemic absorption following administration through the rat lung.", "which Uses drug ?", "Recombinant human growth hormone", 80.0, 112.0], ["Abstract It is very challenging to treat brain cancer because of the blood\u2013brain barrier (BBB) restricting therapeutic drug or gene to access the brain. In this research project, angiopep-2 (ANG) was used as a brain-targeted peptide for preparing multifunctional ANG-modified poly(lactic-co-glycolic acid) (PLGA) nanoparticles (NPs), which encapsulated both doxorubicin (DOX) and epidermal growth factor receptor (EGFR) siRNA, designated as ANG/PLGA/DOX/siRNA. This system could efficiently deliver DOX and siRNA into U87MG cells leading to significant cell inhibition, apoptosis and EGFR silencing in vitro. It demonstrated that this drug system was capable of penetrating the BBB in vivo, resulting in more drugs accumulation in the brain. The animal study using the brain orthotopic U87MG glioma xenograft model indicated that the ANG-targeted co-delivery of DOX and EGFR siRNA resulted in not only the prolongation of the life span of the glioma-bearing mice but also an obvious cell apoptosis in glioma tissue.", "which Uses drug ?", "siRNA", 420.0, 425.0], ["ABSTRACT Extracellular vesicles (EVs) hold great potential as novel systems for nucleic acid delivery due to their natural composition. Our goal was to load EVs with microRNA that are synthesized by the cells that produce the EVs. HEK293T cells were engineered to produce EVs expressing a lysosomal associated membrane, Lamp2a fusion protein. The gene encoding pre-miR-199a was inserted into an artificial intron of the Lamp2a fusion protein. The TAT peptide/HIV-1 transactivation response (TAR) RNA interacting peptide was exploited to enhance the EV loading of the pre-miR-199a containing a modified TAR RNA loop. Computational modeling demonstrated a stable interaction between the modified pre-miR-199a loop and TAT peptide. 
EMSA gel shift, recombinant Dicer processing and luciferase binding assays confirmed the binding, processing and functionality of the modified pre-miR-199a. The TAT-TAR interaction enhanced the loading of the miR-199a into EVs by 65-fold. Endogenously loaded EVs were ineffective at delivering active miR-199a-3p therapeutic to recipient SK-Hep1 cells. While the low degree of miRNA loading into EVs through this approach resulted in inefficient distribution of RNA cargo into recipient cells, the TAT TAR strategy to load miRNA into EVs may be valuable in other drug delivery approaches involving miRNA mimics or other hairpin containing RNAs.", "which Fusion protein ?", "TAT-TAR", 890.0, 897.0], ["ABSTRACT Camelid heavy-chain variable domains (VHHs) are the smallest, intact, antigen-binding units to occur in nature. VHHs possess high degrees of solubility and robustness enabling generation of multivalent constructs with increased avidity \u2013 characteristics that mark their superiority to other antibody fragments and monoclonal antibodies. Capable of effectively binding to molecular targets inaccessible to classical immunotherapeutic agents and easily produced in microbial culture, VHHs are considered promising tools for pharmaceutical biotechnology. With the aim to demonstrate the perspective and potential of VHHs for the development of prophylactic and therapeutic drugs to target diseases caused by bacterial and viral infections, this review article will initially describe the structural features that underlie the unique properties of VHHs and explain the methods currently used for the selection and recombinant production of pathogen-specific VHHs, and then thoroughly summarize the experimental findings of five distinct studies that employed VHHs as inhibitors of host\u2013pathogen interactions or neutralizers of infectious agents. Past and recent studies suggest the potential of camelid heavy-chain variable domains as a novel modality of immunotherapeutic drugs and a promising alternative to monoclonal antibodies. VHHs demonstrate the ability to interfere with bacterial pathogenesis by preventing adhesion to host tissue and sequestering disease-causing bacterial toxins. To protect from viral infections, VHHs may be employed as inhibitors of viral entry by binding to viral coat proteins or blocking interactions with cell-surface receptors. The implementation of VHHs as immunotherapeutic agents for infectious diseases is of considerable potential and set to contribute to public health in the near future.", "which Type of nanoparticles ?", "VHHs", 47.0, 51.0], ["Virus like particles (VLPs) produced by the expression of viral structural proteins can serve as versatile nanovectors or potential vaccine candidates. In this study we describe for the first time the generation of HCoV-NL63 VLPs using baculovirus system. Major structural proteins of HCoV-NL63 have been expressed in tagged or native form, and their assembly to form VLPs was evaluated. Additionally, a novel procedure for chromatography purification of HCoV-NL63 VLPs was developed. Interestingly, we show that these nanoparticles may deliver cargo and selectively transduce cells expressing the ACE2 protein such as ciliated cells of the respiratory tract. Production of a specific delivery vector is a major challenge for research concerning targeting molecules. The obtained results show that HCoV-NL63 VLPs may be efficiently produced, purified, modified and serve as a delivery platform. 
This study constitutes an important basis for further development of a promising viral vector displaying narrow tissue tropism.", "which Type of nanoparticles ?", "VLPs", 22.0, 26.0], ["The development of simple fluorescent and colorimetric assays that enable point-of-care DNA and RNA detection has been a topic of significant research because of the utility of such assays in resource limited settings. The most common motifs utilize hybridization to a complementary detection strand coupled with a sensitive reporter molecule. Here, a paper-based colorimetric assay for DNA detection based on pyrrolidinyl peptide nucleic acid (acpcPNA)-induced nanoparticle aggregation is reported as an alternative to traditional colorimetric approaches. PNA probes are an attractive alternative to DNA and RNA probes because they are chemically and biologically stable, easily synthesized, and hybridize efficiently with the complementary DNA strands. The acpcPNA probe contains a single positive charge from the lysine at C-terminus and causes aggregation of citrate anion-stabilized silver nanoparticles (AgNPs) in the absence of complementary DNA. In the presence of target DNA, formation of the anionic DNA-acpcPNA duplex results in dispersion of the AgNPs as a result of electrostatic repulsion, giving rise to a detectable color change. Factors affecting the sensitivity and selectivity of this assay were investigated, including ionic strength, AgNP concentration, PNA concentration, and DNA strand mismatches. The method was used for screening of synthetic Middle East respiratory syndrome coronavirus (MERS-CoV), Mycobacterium tuberculosis (MTB), and human papillomavirus (HPV) DNA based on a colorimetric paper-based analytical device developed using the aforementioned principle. The oligonucleotide targets were detected by measuring the color change of AgNPs, giving detection limits of 1.53 (MERS-CoV), 1.27 (MTB), and 1.03 nM (HPV). The acpcPNA probe exhibited high selectivity for the complementary oligonucleotides over single-base-mismatch, two-base-mismatch, and noncomplementary DNA targets. The proposed paper-based colorimetric DNA sensor has potential to be an alternative approach for simple, rapid, sensitive, and selective DNA detection.", "which Type of nanoparticles ?", "AgNPs", 910.0, 915.0], ["Worldwide outbreaks of infectious diseases necessitate the development of rapid and accurate diagnostic methods. Colorimetric assays are a representative tool to simply identify the target molecules in specimens through color changes of an indicator (e.g., nanosized metallic particle, and dye molecules). The detection method is used to confirm the presence of biomarkers visually and measure absorbance of the colored compounds at a specific wavelength. In this study, we propose a colorimetric assay based on an extended form of double-stranded DNA (dsDNA) self-assembly shielded gold nanoparticles (AuNPs) under positive electrolyte (e.g., 0.1 M MgCl2) for detection of Middle East respiratory syndrome coronavirus (MERS-CoV). This platform is able to verify the existence of viral molecules through a localized surface plasmon resonance (LSPR) shift and color changes of AuNPs in the UV\u2013vis wavelength range. We designed a pair of thiol-modified probes at either the 5\u2032 end or 3\u2032 end to organize complementary base pairs with upstream of the E protein gene (upE) and open reading frames (ORF) 1a on MERS-CoV. 
The dsDNA of the target and probes forms a disulfide-induced long self-assembled complex, which protects AuNPs from salt-induced aggregation and transition of optical properties. This colorimetric assay could discriminate down to 1 pmol/\u03bcL of 30 bp MERS-CoV and further be adapted for convenient on-site detection of other infectious diseases, especially in resource-limited settings.", "which Type of nanoparticles ?", "AuNPs", 603.0, 608.0], ["Infectious bronchitis virus (IBV) affects poultry respiratory, renal and reproductive systems. Currently the efficacy of available live attenuated or killed vaccines against IBV has been challenged. We designed a novel IBV vaccine alternative using a highly innovative platform called Self-Assembling Protein Nanoparticle (SAPN). In this vaccine, B cell epitopes derived from the second heptad repeat (HR2) region of IBV spike proteins were repetitively presented in its native trimeric conformation. In addition, flagellin was co-displayed in the SAPN to achieve a self-adjuvanted effect. Three groups of chickens were immunized at four weeks of age with the vaccine prototype, IBV-Flagellin-SAPN, a negative-control construct Flagellin-SAPN or a buffer control. The immunized chickens were challenged with 5 \u00d7 10^4.7 EID50 IBV M41 strain. High antibody responses were detected in chickens immunized with IBV-Flagellin-SAPN. In ex vivo proliferation tests, peripheral mononuclear cells (PBMCs) derived from IBV-Flagellin-SAPN immunized chickens had a significantly higher stimulation index than that of PBMCs from chickens receiving Flagellin-SAPN. Chickens immunized with IBV-Flagellin-SAPN had a significant reduction of tracheal virus shedding and lesser tracheal lesion scores than did negative control chickens. The data demonstrated that the IBV-Flagellin-SAPN holds promise as a vaccine for IBV.", "which Type of nanoparticles ?", "Self-Assembling Protein Nanoparticle (SAPN)", NaN, NaN], ["Abstract Aldehyde dehydrogenase 2 deficiency (ALDH2*2) causes facial flushing in response to alcohol consumption in approximately 560 million East Asians. Recent meta-analysis demonstrated the potential link between ALDH2*2 mutation and Alzheimer\u2019s Disease (AD). Other studies have linked chronic alcohol consumption as a risk factor for AD. In the present study, we show that fibroblasts of an AD patient that also has an ALDH2*2 mutation or overexpression of ALDH2*2 in fibroblasts derived from AD patients harboring ApoE \u03b54 allele exhibited increased aldehydic load, oxidative stress, and increased mitochondrial dysfunction relative to healthy subjects and exposure to ethanol exacerbated these dysfunctions. In an in vivo model, daily exposure of WT mice to ethanol for 11 weeks resulted in mitochondrial dysfunction, oxidative stress and increased aldehyde levels in their brains and these pathologies were greater in ALDH2*2/*2 (homozygous) mice. Following chronic ethanol exposure, the levels of the AD-associated protein, amyloid-\u03b2, and neuroinflammation were higher in the brains of the ALDH2*2/*2 mice relative to WT. Cultured primary cortical neurons of ALDH2*2/*2 mice showed increased sensitivity to ethanol and there was a greater activation of their primary astrocytes relative to the responses of neurons or astrocytes from the WT mice. Importantly, an activator of ALDH2 and ALDH2*2, Alda-1, blunted the ethanol-induced increases in A\u03b2, and the neuroinflammation in vitro and in vivo. 
These data indicate that impairment in the metabolism of aldehydes, and specifically ethanol-derived acetaldehyde, is a contributor to AD associated pathology and highlights the likely risk of alcohol consumption in the general population and especially in East Asians that carry ALDH2*2 mutation.", "which proteins detected by western blot ?", "ALDH2", 46.0, 51.0], ["Highly oriented single-crystal ZnO nanotube (ZNT) arrays were prepared by a two-step electrochemical/chemical process on indium-doped tin oxide (ITO) coated glass in an aqueous solution. The prepared ZNT arrays were further used as a working electrode to fabricate an enzyme-based glucose biosensor through immobilizing glucose oxidase in conjunction with a Nafion coating. The present ZNT arrays-based biosensor exhibits high sensitivity of 30.85 \u03bcA cm\u22122 mM\u22121 at an applied potential of +0.8 V vs. SCE, wide linear calibration ranges from 10 \u03bcM to 4.2 mM, and a low limit of detection (LOD) at 10 \u03bcM (measured) for sensing of glucose. The apparent Michaelis\u2212Menten constant KMapp was calculated to be 2.59 mM, indicating a higher bioactivity for the biosensor.", "which Reference Electrode ?", "ITO", 145.0, 148.0], ["Herein, we incorporated dual biotemplates, i.e., cellulose nanocrystals (CNC) and apoferritin, into electrospinning solution to achieve three distinct benefits, i.e., (i) facile synthesis of a WO3 nanotube by utilizing the self-agglomerating nature of CNC in the core of as-spun nanofibers, (ii) effective sensitization by partial phase transition from WO3 to Na2W4O13 induced by interaction between sodium-doped CNC and WO3 during calcination, and (iii) uniform functionalization with monodispersive apoferritin-derived Pt catalytic nanoparticles (2.22 \u00b1 0.42 nm). Interestingly, the sensitization effect of Na2W4O13 on WO3 resulted in highly selective H2S sensing characteristics against seven different interfering molecules. Furthermore, synergistic effects with a bioinspired Pt catalyst induced a remarkably enhanced H2S response (Rair/Rgas = 203.5), unparalleled selectivity (Rair/Rgas < 1.3 for the interfering molecules), and rapid response (<10 s)/recovery (<30 s) time at 1 ppm of H2S under 95% relative humidity level. This work paves the way for a new class of cosensitization routes to overcome critical shortcomings of SMO-based chemical sensors, thus providing a potential platform for diagnosis of halitosis.", "which Target gas ?", "H2S", 654.0, 657.0], ["Gas sensing properties of ZnO nanowires prepared via thermal chemical vapor deposition method were investigated by analyzing change in their photoluminescence (PL) spectra. The as-synthesized nanowires show two different PL peaks positioned at 380 nm and 520 nm. The 380 nm emission is ascribed to near band edge emission, and the green peak (520 nm) appears due to the oxygen vacancy defects. The intensity of the green PL signal enhances upon hydrogen gas exposure, whereas it gets quenched upon oxygen gas loading. The ZnO nanowires' sensing response values were observed as about 54% for H2 gas and 9% for O2 gas at room temperature for 50 sccm H2/O2 gas flow rate. The sensor response was also analyzed as a function of sample temperature ranging from 300 K to 400 K. A conclusion was derived from the observations that the H2/O2 gases affect the adsorbed oxygen species on the surface of ZnO nanowires. 
The adsorbed species result in the band bending and hence changes the depletion region which causes variation i...", "which Target gas ?", "Hydrogen", 445.0, 453.0], ["The unique properties of two dimensional (2D) materials make them promising candidates for chemical and biological sensing applications. However, most 2D nanomaterial sensors suffer very long recovery time due to slow molecular desorption at room temperature. Here, we report a highly sensitive molybdenum ditelluride (MoTe2) gas sensor for NO2 and NH3 detection with greatly enhanced recovery rate. The effects of gate bias on sensing performance have been systematically studied. It is found that the recovery kinetics can be effectively adjusted by biasing the sensor to different gate voltages. Under the optimum biasing potential, the MoTe2 sensor can achieve more than 90% recovery after each sensing cycle well within 10 min at room temperature. The results demonstrate the potential of MoTe2 as a promising candidate for high-performance chemical sensors. The idea of exploiting gate bias to adjust molecular desorption kinetics can be readily applied to much wider sensing platforms based on 2D nanomaterials.", "which Sensing material ?", "Molybdenum ditelluride", 295.0, 317.0], ["A novel and highly sensitive nonenzymatic glucose biosensor was developed by nucleating colloidal silver nanoparticles (AgNPs) on MoS2. The facile fabrication method, high reproducibility (97.5%) and stability indicates a promising capability for large-scale manufacturing. Additionally, the excellent sensitivity (9044.6 \u03bcA\u00b7mM\u22121\u00b7cm\u22122), low detection limit (0.03 \u03bcM), appropriate linear range of 0.1\u20131000 \u03bcM, and high selectivity suggests that this biosensor has a great potential to be applied for noninvasive glucose detection in human body fluids, such as sweat and saliva.", "which Sensing material ?", " Colloidal silver nanoparticles (AgNPs)", NaN, NaN], ["In this study, we synthesized hierarchical CuO nanoleaves in large-quantity via the hydrothermal method. We employed different techniques to characterize the morphological, structural, optical properties of the as-prepared hierarchical CuO nanoleaves sample. An electrochemical based nonenzymatic glucose biosensor was fabricated using engineered hierarchical CuO nanoleaves. The electrochemical behavior of fabricated biosensor towards glucose was analyzed with cyclic voltammetry (CV) and amperometry (i\u2013t) techniques. Owing to the high electroactive surface area, hierarchical CuO nanoleaves based nonenzymatic biosensor electrode shows enhanced electrochemical catalytic behavior for glucose electro-oxidation in 100 mM sodium hydroxide (NaOH) electrolyte. The nonenzymatic biosensor displays a high sensitivity (1467.32 \u03bcA/(mM cm2)), linear range (0.005\u20135.89 mM), and detection limit of 12 nM (S/N = 3). Moreover, biosensor displayed good selectivity, reproducibility, repeatability, and stability at room temperature over three-week storage period. Further, as-fabricated nonenzymatic glucose biosensors were employed for practical applications in human serum sample measurements. 
The obtained data were compared to the commercial biosensor, which demonstrates the practical usability of nonenzymatic glucose biosensors in real sample analysis.", "which Sensing material ?", "CuO nanoleaves", 43.0, 57.0], ["Herein, a novel electrochemical glucose biosensor based on glucose oxidase (GOx) immobilized on a surface containing platinum nanoparticles (PtNPs) electrodeposited on poly(Azure A) (PAA) previously electropolymerized on activated screen-printed carbon electrodes (GOx-PtNPs-PAA-aSPCEs) is reported. The resulting electrochemical biosensor was validated towards glucose oxidation in real samples and further electrochemical measurement associated with the generated H2O2. The electrochemical biosensor showed an excellent sensitivity (42.7 \u03bcA mM\u22121 cm\u22122), limit of detection (7.6 \u03bcM), linear range (20 \u03bcM\u20132.3 mM), and good selectivity towards glucose determination. Furthermore, and most importantly, the detection of glucose was performed at a low potential (0.2 V vs. Ag). The high performance of the electrochemical biosensor was explained through surface exploration using field emission SEM, XPS, and impedance measurements. The electrochemical biosensor was successfully applied to glucose quantification in several real samples (commercial juices and a plant cell culture medium), exhibiting a high accuracy when compared with a classical spectrophotometric method. This electrochemical biosensor can be easily prepared and opens up a good alternative in the development of new sensitive glucose sensors.", "which Sensing material ?", "Glucose oxidase (GOx) immobilized on a surface containing platinum nanoparticles (PtNPs) electrodeposited on poly(Azure A) (PAA) ", NaN, NaN], ["Nitrogen dioxide (NO2) is a gas species that plays an important role in certain industrial, farming, and healthcare sectors. However, there are still significant challenges for NO2 sensing at low detection limits, especially in the presence of other interfering gases. The NO2 selectivity of current gas-sensing technologies is significantly traded-off with their sensitivity and reversibility as well as fabrication and operating costs. In this work, we present an important progress for selective and reversible NO2 sensing by demonstrating an economical sensing platform based on the charge transfer between physisorbed NO2 gas molecules and two-dimensional (2D) tin disulfide (SnS2) flakes at low operating temperatures. The device shows high sensitivity and superior selectivity to NO2 at operating temperatures of less than 160 \u00b0C, which are well below those of chemisorptive and ion conductive NO2 sensors with much poorer selectivity. At the same time, excellent reversibility of the sensor is demonstrated, which has rarely been observed in other 2D material counterparts. Such impressive features originate from the planar morphology of 2D SnS2 as well as unique physical affinity and favorable electronic band positions of this material that facilitate the NO2 physisorption and charge transfer at parts per billion levels. The 2D SnS2-based sensor provides a real solution for low-cost and selective NO2 gas sensing.", "which Sensing material ?", "Tin disulfide", 666.0, 679.0], ["Room temperature gas sensing properties of chemically exfoliated black phosphorus (BP) to oxidizing (NO2, CO2) and reducing (NH3, H2, CO) gases in a dry air carrier have been reported. 
To study the gas sensing properties of BP, chemically exfoliated BP flakes have been drop casted on Si3N4 substrates provided with Pt comb-type interdigitated electrodes in N2 atmosphere. Scanning electron microscopy and x-ray photoelectron spectroscopy characterizations show respectively the occurrence of a mixed structure, composed of BP coarse aggregates dispersed on BP exfoliated few layer flakes bridging the electrodes, and a clear 2p doublet belonging to BP, which excludes the occurrence of surface oxidation. Room temperature electrical tests in dry air show a p-type response of multilayer BP with measured detection limits of 20 ppb and 10 ppm to NO2 and NH3 respectively. No response to CO and CO2 has been detected, while a slight but steady sensitivity to H2 has been recorded. The reported results confirm, on an experimental basis, what was previously theoretically predicted, demonstrating the promising sensing properties of exfoliated BP.", "which Sensing material ?", "Black phosphorus", 65.0, 81.0], ["The utilization of black phosphorus and its monolayer (phosphorene) and few-layers in field-effect transistors has attracted a lot of attention to this elemental two-dimensional material. Various studies on optimization of black phosphorus field-effect transistors, PN junctions, photodetectors, and other applications have been demonstrated. Although chemical sensing based on black phosphorus devices was theoretically predicted, there is still no experimental verification of such an important study of this material. In this article, we report on chemical sensing of nitrogen dioxide (NO2) using field-effect transistors based on multilayer black phosphorus. Black phosphorus sensors exhibited increased conduction upon NO2 exposure and excellent sensitivity for detection of NO2 down to 5 ppb. Moreover, when the multilayer black phosphorus field-effect transistor was exposed to NO2 concentrations of 5, 10, 20, and 40 ppb, its relative conduction change followed the Langmuir isotherm for molecules adsorbed on a surface. Additionally, on the basis of an exponential conductance change, the rate constants for adsorption and desorption of NO2 on black phosphorus were extracted for different NO2 concentrations, and they were in the range of 130-840 s. These results shed light on important electronic and sensing characteristics of black phosphorus, which can be utilized in future studies and applications.", "which Sensing material ?", "Black phosphorus", 19.0, 35.0], ["Graphene is a one atom thick carbon allotrope with all surface atoms that has attracted significant attention as a promising material as the conduction channel of a field-effect transistor and chemical field-effect transistor sensors. However, the zero bandgap of semimetal graphene still limits its application for these devices. In this work, ethanol-chemical vapor deposition (CVD) of a grown p-type semiconducting large-area monolayer graphene film was patterned into a nanomesh by the combination of nanosphere lithography and reactive ion etching and evaluated as a field-effect transistor and chemiresistor gas sensors. The resulting neck-width of the synthesized nanomesh was about \u223c20 nm and was comprised of the gap between polystyrene (PS) spheres that was formed during the reactive ion etching (RIE) process. The neck-width and the periodicities of the graphene nanomesh (GNM) could be easily controlled depending on the duration/power of the RIE and the size of the PS nanospheres. 
The fabricated GNM transistor device exhibited promising electronic properties featuring a high drive current and an I(ON)/I(OFF) ratio of about 6, significantly higher than its film counterpart. Similarly, when applied as a chemiresistor gas sensor at room temperature, the graphene nanomesh sensor showed excellent sensitivity toward NO(2) and NH(3), significantly higher than their film counterparts. The ethanol-based graphene nanomesh sensors exhibited sensitivities of about 4.32%/ppm in NO(2) and 0.71%/ppm in NH(3) with limits of detection of 15 and 160 ppb, respectively. Our demonstrated studies on controlling the neck width of the nanomesh would lead to further improvement of graphene-based transistors and sensors.", "which Sensing material ?", "Graphene", 0.0, 8.0], ["In this paper we present GATE, a framework and graphical development environment which enables users to develop and deploy language engineering components and resources in a robust fashion. The GATE architecture has enabled us not only to develop a number of successful applications for various language processing tasks (such as Information Extraction), but also to build and annotate corpora and carry out evaluations on the applications generated. The framework can be used to develop applications and resources in multiple languages, based on its thorough Unicode support.", "which Tool name ?", "GATE", 25.0, 29.0], ["The ClearTK-TimeML submission to TempEval 2013 competed in all English tasks: identifying events, identifying times, and identifying temporal relations. The system is a pipeline of machine-learning models, each with a small set of features from a simple morpho-syntactic annotation pipeline, and where temporal relations are only predicted for a small set of syntactic constructions and relation types. ClearTK-TimeML ranked 1st for temporal relation F1, time extent strict F1 and event tense accuracy.", "which Tool name ?", "ClearTK-TimeML", 4.0, 18.0], ["In this paper, we describe HeidelTime, a system for the extraction and normalization of temporal expressions. HeidelTime is a rule-based system mainly using regular expression patterns for the extraction of temporal expressions and knowledge resources as well as linguistic clues for their normalization. In the TempEval-2 challenge, HeidelTime achieved the highest F-Score (86%) for the extraction and the best results in assigning the correct value attribute, i.e., in understanding the semantics of the temporal expressions.", "which Tool name ?", "HeidelTime", 27.0, 37.0], ["We describe the design and use of the Stanford CoreNLP toolkit, an extensible pipeline that provides core natural language analysis. This toolkit is quite widely used, both in the research NLP community and also among commercial and government users of open source NLP technology. We suggest that this follows from a simple, approachable design, straightforward interfaces, the inclusion of robust and good quality analysis components, and not requiring use of a large amount of associated baggage.", "which Tool name ?", "Stanford CoreNLP toolkit", 38.0, 62.0], ["Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. 
While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and \u201cfeeling left out\u201d can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.", "which Result ?", "being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences", 1270.0, 1406.0], ["Abstract Background Adaptive game software has been successful in remediation of dyslexia. Here we describe the cognitive and algorithmic principles underlying the development of similar software for dyscalculia. Our software is based on current understanding of the cerebral representation of number and the hypotheses that dyscalculia is due to a \"core deficit\" in number sense or in the link between number sense and symbolic number representations. Methods \"The Number Race\" software trains children on an entertaining numerical comparison task, by presenting problems adapted to the performance level of the individual child. We report full mathematical specifications of the algorithm used, which relies on an internal model of the child's knowledge in a multidimensional \"learning space\" consisting of three difficulty dimensions: numerical distance, response deadline, and conceptual complexity (from non-symbolic numerosity processing to increasingly complex symbolic operations). Results The performance of the software was evaluated both by mathematical simulations and by five weeks of use by nine children with mathematical learning difficulties. The results indicate that the software adapts well to varying levels of initial knowledge and learning speeds. Feedback from children, parents and teachers was positive. A companion article [1] describes the evolution of number sense and arithmetic scores before and after training. Conclusion The software, open-source and freely available online, is designed for learning disabled children aged 5\u20138, and may also be useful for general instruction of normal preschool children. The learning algorithm reported is highly general, and may be applied in other domains.", "which Result ?", "Positive", 1320.0, 1328.0], ["This study investigated the effects of gameplaying on fifth-graders\u2019 maths performance and attitudes. One hundred twenty five fifth graders were recruited and assigned to a cooperative Teams-Games-Tournament (TGT), interpersonal competitive or no gameplaying condition. A state standards-based maths exam and an inventory on attitudes towards maths were used for the pretest and posttest. 
The students\u2019 gender, socio-economic status and prior maths ability were examined as the moderating variables and covariate. Multivariate analysis of covariance (MANCOVA) indicated that gameplaying was more effective than drills in promoting maths performance, and cooperative gameplaying was most effective for promoting positive maths attitudes regardless of students\u2019 individual differences.", "which Result ?", "Positive", 711.0, 719.0], ["Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and \u201cfeeling left out\u201d can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.", "which Result ?", "simply addressing material aspects can only be part of establishing more inclusive science communication practices", 1146.0, 1260.0], ["A report on the implementation and evaluation of an intelligent learning system; the multimedia geography tutor and game software titled Lainos World SM was localized into English, French, Spanish, German, Portuguese, Russian and Simplified Chinese. Thereafter, multilingual online surveys were setup to which High school students were globally invited via mails to schools, targeted adverts and recruitment on Facebook, Google, etc. 1125 respondents from selected nations completed both the initial and final surveys. The effect of the software on students\u2019 geographical knowledge was analyzed through pre and post achievement test scores. In general, the mean score were higher after exposure to the educational software for fifteen days and it was established that the score differences were statistically significant. This positive effect and other qualitative data show that the localized software from students\u2019 perspective is a widely acceptable and effective educational tool for learning geography in an interactive and gaming environment.", "which Result ?", "Positive", 827.0, 835.0], ["Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. 
While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and \u201cfeeling left out\u201d can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.", "which Result ?", "material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication", 770.0, 984.0], ["State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing. In this paper, we introduce a novel neutral network architecture that benefits from both word- and character-level representations automatically, by using combination of bidirectional LSTM, CNN and CRF. Our system is truly end-to-end, requiring no feature engineering or data pre-processing, thus making it applicable to a wide range of sequence labeling tasks. We evaluate our system on two data sets for two sequence labeling tasks --- Penn Treebank WSJ corpus for part-of-speech (POS) tagging and CoNLL 2003 corpus for named entity recognition (NER). We obtain state-of-the-art performance on both the two data --- 97.55\\% accuracy for POS tagging and 91.21\\% F1 for NER.", "which Result ?", "POS tagging", 807.0, 818.0], ["Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and \u201cfeeling left out\u201d can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. 
Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.", "which Result ?", " emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role", NaN, NaN], ["Accurate prediction of network paths between arbitrary hosts on the Internet is of vital importance for network operators, cloud providers, and academic researchers. We present PredictRoute, a system that predicts network paths between hosts on the Internet using historical knowledge of the data and control plane. In addition to feeding on freely available traceroutes and BGP routing tables, PredictRoute optimally explores network paths towards chosen BGP prefixes. PredictRoute's strategy for exploring network paths discovers 4X more autonomous system (AS) hops than other well-known strategies used in practice today. Using a corpus of traceroutes, PredictRoute trains probabilistic models of routing towards prefixes on the Internet to predict network paths and their likelihood. PredictRoute's AS-path predictions differ from the measured path by at most 1 hop, 75% of the time. We expose PredictRoute's path prediction capability via a REST API to facilitate its inclusion in other applications and studies. We additionally demonstrate the utility of PredictRoute in improving real-world applications for circumventing Internet censorship and preserving anonymity online.", "which Result ?", "PredictRoute's AS-path predictions differ from the measured path by at most 1 hop, 75% of the time. ", 788.0, 888.0], ["In order to improve the signal-to-noise ratio of the hyperspectral sensors and exploit the potential of satellite hyperspectral data for predicting soil properties, we took MingShui County as the study area, which the study area is approximately 1481 km2, and we selected Gaofen-5 (GF-5) satellite hyperspectral image of the study area to explore an applicable and accurate denoising method that can effectively improve the prediction accuracy of soil organic matter (SOM) content. First, fractional-order derivative (FOD) processing is performed on the original reflectance (OR) to evaluate the optimal FOD. Second, singular value decomposition (SVD), Fourier transform (FT) and discrete wavelet transform (DWT) are used to denoise the OR and optimal FOD reflectance. Third, the spectral indexes of the reflectance under different denoising methods are extracted by optimal band combination algorithm, and the input variables of different denoising methods are selected by the recursive feature elimination (RFE) algorithm. Finally, the SOM content is predicted by a random forest prediction model. The results reveal that 0.6-order reflectance describes more useful details in satellite hyperspectral data. Five spectral indexes extracted from the reflectance under different denoising methods have a strong correlation with the SOM content, which is helpful for realizing high-accuracy SOM predictions. All three denoising methods can reduce the noise in hyperspectral data, and the accuracies of the different denoising methods are ranked DWT > FT > SVD, where 0.6-order-DWT has the highest accuracy (R2 = 0.84, RMSE = 3.36 g kg\u22121, and RPIQ = 1.71). 
This paper is relatively novel, in that GF-5 satellite hyperspectral data based on different denoising methods are used to predict SOM, and the results provide a highly robust and novel method for mapping the spatial distribution of SOM content at the regional scale.", "which Result ?", "DWT > FT > SVD", 1543.0, 1557.0], ["This study is a comparison of AUPress with three other traditional (non-open access) Canadian university presses. The analysis is based on the rankings that are correlated with book sales on Amazon.com and Amazon.ca. Statistical methods include the sampling of the sales ranking of randomly selected books from each press. The results of one-way ANOVA analyses show that there is no significant difference in the ranking of printed books sold by AUPress in comparison with traditional university presses. However, AUPress, can demonstrate a significantly larger readership for its books as evidenced by the number of downloads of the open electronic versions.", "which Result ?", "no significant difference", 380.0, 405.0], ["State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing. In this paper, we introduce a novel neutral network architecture that benefits from both word- and character-level representations automatically, by using combination of bidirectional LSTM, CNN and CRF. Our system is truly end-to-end, requiring no feature engineering or data pre-processing, thus making it applicable to a wide range of sequence labeling tasks. We evaluate our system on two data sets for two sequence labeling tasks --- Penn Treebank WSJ corpus for part-of-speech (POS) tagging and CoNLL 2003 corpus for named entity recognition (NER). We obtain state-of-the-art performance on both the two data --- 97.55\\% accuracy for POS tagging and 91.21\\% F1 for NER.", "which Result ?", "NER", 716.0, 719.0], ["The BioCreative NLM-Chem track calls for a community effort to fine-tune automated recognition of chemical names in biomedical literature. Chemical names are one of the most searched biomedical entities in PubMed and \u2013 as highlighted during the COVID-19 pandemic \u2013 their identification may significantly advance research in multiple biomedical subfields. While previous community challenges focused on identifying chemical names mentioned in titles and abstracts, the full text contains valuable additional detail. We organized the BioCreative NLM-Chem track to call for a community effort to address automated chemical entity recognition in full-text articles. The track consisted of two tasks: 1) Chemical Identification task, and 2) Chemical Indexing prediction task. For the Chemical Identification task, participants were expected to predict with high accuracy all chemicals mentioned in recently published full-text articles, both span (i.e., named entity recognition) and normalization (i.e., entity linking) using MeSH. For the Chemical Indexing task, participants identified which chemicals should be indexed as topics for the article's topic terms in the NLM article and indexing, i.e., appear in the listing of MeSH terms for the document. This manuscript summarizes the BioCreative NLM-Chem track. We received a total of 88 submissions in total from 17 teams worldwide. 
The highest performance achieved for the Chemical Identification task was 0.8672 f-score (0.8759 precision, 0.8587 recall) for strict NER performance and 0.8136 f-score (0.8621 precision, 0.7702 recall) for strict normalization performance. The highest performance achieved for the Chemical Indexing task was 0.4825 f-score (0.4397 precision, 0.5344 recall). The NLM-Chem track dataset and other challenge materials are publicly available at https://ftp.ncbi.nlm.nih.gov/pub/lu/BC7-NLM-Chem-track/. This community challenge demonstrated 1) the current substantial achievements in deep learning technologies can be utilized to further improve automated prediction accuracy, and 2) the Chemical Indexing task is substantially more challenging. We look forward to further development of biomedical text mining methods to respond to the rapid growth of biomedical literature. Keywords\u2014 biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; text mining; chemical entity recognition; chemical indexing", "which Ontology used ?", "MeSH", 1022.0, 1026.0], ["Abstract Background Molecular Biology accumulated substantial amounts of data concerning functions of genes and proteins. Information relating to functional descriptions is generally extracted manually from textual data and stored in biological databases to build up annotations for large collections of gene products. Those annotation databases are crucial for the interpretation of large scale analysis approaches using bioinformatics or experimental techniques. Due to the growing accumulation of functional descriptions in biomedical literature the need for text mining tools to facilitate the extraction of such annotations is urgent. In order to make text mining tools useable in real world scenarios, for instance to assist database curators during annotation of protein function, comparisons and evaluations of different approaches on full text articles are needed. Results The Critical Assessment for Information Extraction in Biology (BioCreAtIvE) contest consists of a community wide competition aiming to evaluate different strategies for text mining tools, as applied to biomedical literature. We report on task two which addressed the automatic extraction and assignment of Gene Ontology (GO) annotations of human proteins, using full text articles. The predictions of task 2 are based on triplets of protein \u2013 GO term \u2013 article passage . The annotation-relevant text passages were returned by the participants and evaluated by expert curators of the GO annotation (GOA) team at the European Institute of Bioinformatics (EBI). Each participant could submit up to three results for each sub-task comprising task 2. In total more than 15,000 individual results were provided by the participants. The curators evaluated in addition to the annotation itself, whether the protein and the GO term were correctly predicted and traceable through the submitted text fragment. Conclusion Concepts provided by GO are currently the most extended set of terms used for annotating gene products, thus they were explored to assess how effectively text mining tools are able to extract those annotations automatically. Although the obtained results are promising, they are still far from reaching the required performance demanded by real world applications. 
Among the principal difficulties encountered to address the proposed task, were the complex nature of the GO terms and protein names (the large range of variants which are used to express proteins and especially GO terms in free text), and the lack of a standard training set. A range of very different strategies were used to tackle this task. The dataset generated in line with the BioCreative challenge is publicly available and will allow new possibilities for training information extraction methods in the domain of molecular biology.", "which Ontology used ?", "Gene Ontology (GO)", NaN, NaN], ["One of the biomedical entity types of relevance for medicine or biosciences are chemical compounds and drugs. The correct detection these entities is critical for other text mining applications building on them, such as adverse drug-reaction detection, medication-related fake news or drug-target extraction. Although a significant effort was made to detect mentions of drugs/chemicals in English texts, so far only very limited attempts were made to recognize them in medical documents in other languages. Taking into account the growing amount of medical publications and clinical records written in Spanish, we have organized the first shared task on detecting drug and chemical entities in Spanish medical documents. Additionally, we included a clinical concept-indexing sub-track asking teams to return SNOMED-CT identifiers related to drugs/chemicals for a collection of documents. For this task, named PharmaCoNER, we generated annotation guidelines together with a corpus of 1,000 manually annotated clinical case studies. A total of 22 teams participated in the sub-track 1, (77 system runs), and 7 teams in the sub-track 2 (19 system runs). Top scoring teams used sophisticated deep learning approaches yielding very competitive results with F-measures above 0.91. These results indicate that there is a real interest in promoting biomedical text mining efforts beyond English. We foresee that the PharmaCoNER annotation guidelines, corpus and participant systems will foster the development of new resources for clinical and biomedical text mining systems of Spanish medical data.", "which Ontologies used ?", "SNOMED-CT", 808.0, 817.0], ["This paper presents the Bacteria Biotope task of the BioNLP Shared Task 2016, which follows the previous 2013 and 2011 editions. The task focuses on the extraction of the locations (biotopes and geographical places) of bacteria from PubMe abstracts and the characterization of bacteria and their associated habitats with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on bacteria habitats for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, the challenge organization, and the evaluation metrics. We also provide an analysis of the results obtained by participants.", "which Ontologies used ?", "NCBI Taxonomy", 361.0, 374.0], ["This paper presents the fourth edition of the Bacteria Biotope task at BioNLP Open Shared Tasks 2019. The task focuses on the extraction of the locations and phenotypes of microorganisms from PubMed abstracts and full-text excerpts, and the characterization of these entities with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on biodiversity for fundamental research and applications in microbiology. 
The paper describes the different proposed subtasks, the corpus characteristics, and the challenge organization. We also provide an analysis of the results obtained by participants, and inspect the evolution of the results since the last edition in 2016.", "which Ontologies used ?", "NCBI Taxonomy", 321.0, 334.0], ["This paper presents the Bacteria Biotope task of the BioNLP Shared Task 2016, which follows the previous 2013 and 2011 editions. The task focuses on the extraction of the locations (biotopes and geographical places) of bacteria from PubMe abstracts and the characterization of bacteria and their associated habitats with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on bacteria habitats for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, the challenge organization, and the evaluation metrics. We also provide an analysis of the results obtained by participants.", "which Ontologies used ?", "OntoBiotope ontology", 376.0, 396.0], ["This paper presents the fourth edition of the Bacteria Biotope task at BioNLP Open Shared Tasks 2019. The task focuses on the extraction of the locations and phenotypes of microorganisms from PubMed abstracts and full-text excerpts, and the characterization of these entities with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on biodiversity for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, and the challenge organization. We also provide an analysis of the results obtained by participants, and inspect the evolution of the results since the last edition in 2016.", "which Ontologies used ?", "OntoBiotope ontology", 336.0, 356.0], ["As analysts still grapple with understanding core damage accident progression at Three Mile Island and Fukushima that caught the nuclear industry off-guard once too many times, one notices the very limited detail with which the large reactor cores of these subject reactors have been modelled in their severe accident simulation code packages. At the same time, modelling of CANDU severe accidents have largely borrowed from and suffered from the limitations of the same LWR codes (see IAEA TECDOC 1727) whose applications to PHWRs have poorly caught critical PHWR design specifics and vulnerabilities. As a result, accident management measures that have been instituted at CANDU PHWRs, while meeting the important industry objective of publically seeming to be doing something about lessons learnt from say Fukushima and showing that the reactor designs are oh so close to perfect and the off-site consequences of severe accidents happily benign. Integrated PHWR severe accident progression and consequence assessment code ROSHNI can make a significant contribution to actual, practical understanding of severe accident progression in CANDU PHWRs, improving significantly on the other PHWR specific computer codes developed three decades ago when modeling decisions were constrained by limited computing power and poor understanding of and interest in severe core damage accidents. These codes force gross simplifications in reactor core modelling and do not adequately represent all the right CANDU core details, materials, fluids, vessels or phenomena. 
But they produce results that are familiar and palatable. They do, however to their credit, also excel in their computational speed, largely because they model and compute so little and with such un-necessary simplifications. ROSHNI sheds most previous modelling simplifications and represents each of the 380 channels, 4560 bundle, 37 elements in four concentric ring, Zircaloy clad fuel geometry, materials and fluids more faithfully in a 2000 MW(Th) CANDU6 reactor. It can be used easily for other PHWRs with different number of fuel channels and bundles per each channel. Each of horizontal PHWR reactor channels with all their bundles, fuel rings, sheaths, appendages, end fittings and feeders are modelled and in detail that reflects large across core differences. While other codes model at best a few hundred core fuel entities, thermo-chemical transient behaviour of about 73,000 different fuel channel entities within the core is considered by ROSHNI simultaneously along with other 15,000 or so other flow path segments. At each location all known thermo-chemical and hydraulic phenomena are computed. With such detail, ROSHNI is able to provide information on their progressive and parallel thermo-chemical contribution to accident progression and a more realistic fission product release source term that would belie the miniscule one (100 TBq of Cs-137 or 0.15% of core inventory) used by EMOs now in Canada on recommendation of our national regulator CNSC. ROSHNI has an advanced, more CANDU specific consideration of each bundle transitioning to a solid debris behaviour in the Calandria vessel without reverting to a simplified molten corium formulation that happily ignores interaction of debris with vessel welds, further vessel failures and energetic interactions. The code is able to follow behaviour of each fuel bundle following its disassembly from the fuel channel and thus demonstrate that the gross assumption of a core collapse made in some analyses is wrong and misleading. It is able to thus demonstrate that PHWR core disassembly is not only gradual, it will be also be incomplete with a large number of low power, peripheral fuel channels never disassembling under most credible scenarios. The code is designed to grow into and use its voluminous results in a severe accident simulator for operator training. It\u2019s phenomenological models are able to examine design inadequacies / issues that affect accident progression and several simple to implement design improvements that have a profound effect on results. For example, an early pressure boundary failure due to inadequacy of heat sinks in a station blackout scenario can be examined along with the effect of improved and adequate over pressure protection. A best effort code such as ROSHNI can be instrumental in identifying the risk reduction benefits of undertaking certain design, operational and accidental management improvements for PHWRs, with some of the multi-unit ones handicapped by poor pressurizer placement and leaky containments with vulnerable materials, poor overpressure protection, ad-hoc mitigation measures and limited instrumentation common to all CANDUs. Case in point is the PSA supported design and installed number of Hydrogen recombiners that are neither for the right gas (designed mysteriously for H2 instead of D2) or its potential release quantity (they are sparse and will cause explosions). 
The paper presents ROSHNI results of simulations of a postulated station blackout scenario and sheds a light on the challenges ahead in minimizing risk from operation of these otherwise unique power reactors.", "which Nuclear reactor type ?", "CANDU", 375.0, 380.0], ["As analysts still grapple with understanding core damage accident progression at Three Mile Island and Fukushima that caught the nuclear industry off-guard once too many times, one notices the very limited detail with which the large reactor cores of these subject reactors have been modelled in their severe accident simulation code packages. At the same time, modelling of CANDU severe accidents have largely borrowed from and suffered from the limitations of the same LWR codes (see IAEA TECDOC 1727) whose applications to PHWRs have poorly caught critical PHWR design specifics and vulnerabilities. As a result, accident management measures that have been instituted at CANDU PHWRs, while meeting the important industry objective of publically seeming to be doing something about lessons learnt from say Fukushima and showing that the reactor designs are oh so close to perfect and the off-site consequences of severe accidents happily benign. Integrated PHWR severe accident progression and consequence assessment code ROSHNI can make a significant contribution to actual, practical understanding of severe accident progression in CANDU PHWRs, improving significantly on the other PHWR specific computer codes developed three decades ago when modeling decisions were constrained by limited computing power and poor understanding of and interest in severe core damage accidents. These codes force gross simplifications in reactor core modelling and do not adequately represent all the right CANDU core details, materials, fluids, vessels or phenomena. But they produce results that are familiar and palatable. They do, however to their credit, also excel in their computational speed, largely because they model and compute so little and with such un-necessary simplifications. ROSHNI sheds most previous modelling simplifications and represents each of the 380 channels, 4560 bundle, 37 elements in four concentric ring, Zircaloy clad fuel geometry, materials and fluids more faithfully in a 2000 MW(Th) CANDU6 reactor. It can be used easily for other PHWRs with different number of fuel channels and bundles per each channel. Each of horizontal PHWR reactor channels with all their bundles, fuel rings, sheaths, appendages, end fittings and feeders are modelled and in detail that reflects large across core differences. While other codes model at best a few hundred core fuel entities, thermo-chemical transient behaviour of about 73,000 different fuel channel entities within the core is considered by ROSHNI simultaneously along with other 15,000 or so other flow path segments. At each location all known thermo-chemical and hydraulic phenomena are computed. With such detail, ROSHNI is able to provide information on their progressive and parallel thermo-chemical contribution to accident progression and a more realistic fission product release source term that would belie the miniscule one (100 TBq of Cs-137 or 0.15% of core inventory) used by EMOs now in Canada on recommendation of our national regulator CNSC. 
ROSHNI has an advanced, more CANDU specific consideration of each bundle transitioning to a solid debris behaviour in the Calandria vessel without reverting to a simplified molten corium formulation that happily ignores interaction of debris with vessel welds, further vessel failures and energetic interactions. The code is able to follow behaviour of each fuel bundle following its disassembly from the fuel channel and thus demonstrate that the gross assumption of a core collapse made in some analyses is wrong and misleading. It is able to thus demonstrate that PHWR core disassembly is not only gradual, it will be also be incomplete with a large number of low power, peripheral fuel channels never disassembling under most credible scenarios. The code is designed to grow into and use its voluminous results in a severe accident simulator for operator training. It\u2019s phenomenological models are able to examine design inadequacies / issues that affect accident progression and several simple to implement design improvements that have a profound effect on results. For example, an early pressure boundary failure due to inadequacy of heat sinks in a station blackout scenario can be examined along with the effect of improved and adequate over pressure protection. A best effort code such as ROSHNI can be instrumental in identifying the risk reduction benefits of undertaking certain design, operational and accidental management improvements for PHWRs, with some of the multi-unit ones handicapped by poor pressurizer placement and leaky containments with vulnerable materials, poor overpressure protection, ad-hoc mitigation measures and limited instrumentation common to all CANDUs. Case in point is the PSA supported design and installed number of Hydrogen recombiners that are neither for the right gas (designed mysteriously for H2 instead of D2) or its potential release quantity (they are sparse and will cause explosions). The paper presents ROSHNI results of simulations of a postulated station blackout scenario and sheds a light on the challenges ahead in minimizing risk from operation of these otherwise unique power reactors.", "which Nuclear reactor type ?", "PHWR", 560.0, 564.0], ["As analysts still grapple with understanding core damage accident progression at Three Mile Island and Fukushima that caught the nuclear industry off-guard once too many times, one notices the very limited detail with which the large reactor cores of these subject reactors have been modelled in their severe accident simulation code packages. At the same time, modelling of CANDU severe accidents have largely borrowed from and suffered from the limitations of the same LWR codes (see IAEA TECDOC 1727) whose applications to PHWRs have poorly caught critical PHWR design specifics and vulnerabilities. As a result, accident management measures that have been instituted at CANDU PHWRs, while meeting the important industry objective of publically seeming to be doing something about lessons learnt from say Fukushima and showing that the reactor designs are oh so close to perfect and the off-site consequences of severe accidents happily benign. 
Integrated PHWR severe accident progression and consequence assessment code ROSHNI can make a significant contribution to actual, practical understanding of severe accident progression in CANDU PHWRs, improving significantly on the other PHWR specific computer codes developed three decades ago when modeling decisions were constrained by limited computing power and poor understanding of and interest in severe core damage accidents. These codes force gross simplifications in reactor core modelling and do not adequately represent all the right CANDU core details, materials, fluids, vessels or phenomena. But they produce results that are familiar and palatable. They do, however to their credit, also excel in their computational speed, largely because they model and compute so little and with such un-necessary simplifications. ROSHNI sheds most previous modelling simplifications and represents each of the 380 channels, 4560 bundle, 37 elements in four concentric ring, Zircaloy clad fuel geometry, materials and fluids more faithfully in a 2000 MW(Th) CANDU6 reactor. It can be used easily for other PHWRs with different number of fuel channels and bundles per each channel. Each of horizontal PHWR reactor channels with all their bundles, fuel rings, sheaths, appendages, end fittings and feeders are modelled and in detail that reflects large across core differences. While other codes model at best a few hundred core fuel entities, thermo-chemical transient behaviour of about 73,000 different fuel channel entities within the core is considered by ROSHNI simultaneously along with other 15,000 or so other flow path segments. At each location all known thermo-chemical and hydraulic phenomena are computed. With such detail, ROSHNI is able to provide information on their progressive and parallel thermo-chemical contribution to accident progression and a more realistic fission product release source term that would belie the miniscule one (100 TBq of Cs-137 or 0.15% of core inventory) used by EMOs now in Canada on recommendation of our national regulator CNSC. ROSHNI has an advanced, more CANDU specific consideration of each bundle transitioning to a solid debris behaviour in the Calandria vessel without reverting to a simplified molten corium formulation that happily ignores interaction of debris with vessel welds, further vessel failures and energetic interactions. The code is able to follow behaviour of each fuel bundle following its disassembly from the fuel channel and thus demonstrate that the gross assumption of a core collapse made in some analyses is wrong and misleading. It is able to thus demonstrate that PHWR core disassembly is not only gradual, it will be also be incomplete with a large number of low power, peripheral fuel channels never disassembling under most credible scenarios. The code is designed to grow into and use its voluminous results in a severe accident simulator for operator training. It\u2019s phenomenological models are able to examine design inadequacies / issues that affect accident progression and several simple to implement design improvements that have a profound effect on results. For example, an early pressure boundary failure due to inadequacy of heat sinks in a station blackout scenario can be examined along with the effect of improved and adequate over pressure protection. 
A best-effort code such as ROSHNI can be instrumental in identifying the risk reduction benefits of undertaking certain design, operational and accident management improvements for PHWRs, with some of the multi-unit ones handicapped by poor pressurizer placement and leaky containments with vulnerable materials, poor overpressure protection, ad-hoc mitigation measures and limited instrumentation common to all CANDUs. A case in point is the PSA-supported design and installed number of hydrogen recombiners that are suited neither to the right gas (designed mysteriously for H2 instead of D2) nor to its potential release quantity (they are sparse and will cause explosions). The paper presents ROSHNI results of simulations of a postulated station blackout scenario and sheds light on the challenges ahead in minimizing risk from operation of these otherwise unique power reactors.", "which Software Used ?", "ROSHNI", 1024.0, 1030.0], ["This study investigated atmospheric hydrodeoxygenation (HDO) of guaiacol over Ni2P-supported catalysts. Alumina, zirconia, and silica served as the supports of the Ni2P catalysts. The physicochemical properties of these catalysts were surveyed by N2 physisorption, X-ray diffraction (XRD), CO chemisorption, H2 temperature-programmed reduction (H2-TPR), H2 temperature-programmed desorption (H2-TPD), and NH3 temperature-programmed desorption (NH3-TPD). The catalytic performance of these catalysts was tested in a continuous fixed-bed system. This paper proposes a plausible network of atmospheric guaiacol HDO, containing demethoxylation (DMO), demethylation (DME), direct deoxygenation (DDO), hydrogenation (HYD), transalkylation, and methylation. Pseudo-first-order kinetics analysis shows that the intrinsic activity declined in the following order: Ni2P/ZrO2 > Ni2P/Al2O3 > Ni2P/SiO2. Product selectivity at zero guaiacol conversion indicates that Ni2P/SiO2 promotes DMO and DDO routes, whereas Ni2P/ZrO2 and Ni2P/Al2O...", "which catalyst ?", "Ni2P/SiO2", 876.0, 885.0], ["We report a facile synthesis of new core-Au/shell-CeO2 nanoparticles (Au@CeO2) using a redox-coprecipitation method, where the Au nanoparticles and the nanoporous shell of CeO2 are simultaneously formed in one step. The Au@CeO2 catalyst enables the highly selective semihydrogenation of various alkynes at ambient temperature under additive-free conditions. The core-shell structure plays a crucial role in providing the excellent selectivity for alkenes through the selective dissociation of H2 in a heterolytic manner by maximizing interfacial sites between the core-Au and the shell-CeO2.", "which catalyst ?", "Au@CeO2", 70.0, 77.0], ["Palladium nanoparticles supported on a mesoporous graphitic carbon nitride, Pd@mpg-C3N4, have been developed as an effective, heterogeneous catalyst for the liquid-phase semihydrogenation of phenylacetylene under mild conditions (303 K, atmospheric H2). Total conversion was achieved with high selectivity to styrene (higher than 94%) within 85 minutes. Moreover, the spent catalyst can be easily recovered by filtration and then reused nine times without apparent loss of selectivity. The generality of the Pd@mpg-C3N4 catalyst for partial hydrogenation of alkynes was also checked for terminal and internal alkynes, with similar performance. 
The Pd@mpg-C3N4 catalyst was proven to be of industrial interest.", "which catalyst ?", "Pd@mpg-C3N4", 76.0, 87.0], ["In recent years, hybrid nanocomposites with core\u2013shell structures have increasingly attracted enormous attention in many important research areas such as quantum dots, optical, magnetic, and electronic devices, and catalysts. In the catalytic applications of core\u2013shell materials, core-metals having magnetic properties enable easy separation of the catalysts from the reaction mixtures by a magnet. The core-metals can also affect the active shell-metals, delivering significant improvements in their activities and selectivities. However, it is difficult for core-metals to act directly as the catalytically active species because they are entirely covered by the shell. Thus, few successful designs of core\u2013shell nanocomposite catalysts having active metal species in the core have appeared to date. Recently, we have demonstrated the design of a core\u2013shell catalyst consisting of active metal nanoparticles (NPs) in the core and closely assembled oxides with nano-gaps in the shell, allowing the access of substrates to the core-metal. The shell acted as a macro ligand (shell ligand) for the core-metal and the core\u2013shell structure maximized the metal\u2013ligand interaction (ligand effect), promoting highly selective reactions. The design concept of core\u2013shell catalysts having core-metal NPs with a shell ligand is highly useful for selective organic transformations owing to the ideal structure of these catalysts for maximizing the ligand effect, leading to superior catalytic performances compared to those of conventional supported metal NPs. Semihydrogenation of alkynes is a powerful tool to synthesize (Z)-alkenes, which are important building blocks for fine chemicals such as bioactive molecules, flavors, and natural products. In this context, the Lindlar catalyst (Pd/CaCO3 treated with Pb(OAc)2) has been widely used [13]. Unfortunately, the Lindlar catalyst has serious drawbacks including the requirement of a toxic lead salt and the addition of large amounts of quinoline to suppress the over-hydrogenation of the product alkenes. Furthermore, the Lindlar catalyst has a limited substrate scope; terminal alkynes cannot be converted selectively into terminal alkenes because of the rapid over-hydrogenation of the resulting alkenes to alkanes. Aiming at the development of environmentally benign catalyst systems, a number of alternative lead-free catalysts have been reported [15]. Recently, we also developed a lead-free catalytic system for the selective semihydrogenation consisting of SiO2-supported Pd nanoparticles (PdNPs) and dimethylsulfoxide (DMSO), in which the addition of DMSO drastically suppressed the over-hydrogenation and isomerization of the alkene products even after complete consumption of the alkynes. This effect is due to the coordination of DMSO to the PdNPs. DMSO adsorbed on the surface of PdNPs inhibits the coordination of alkenes to the PdNPs, while alkynes can adsorb onto the PdNPs surface because they have a higher coordination ability than DMSO. This phenomenon inspired us to design PdNPs coordinated with a DMSO-like species in a solid matrix. If a core\u2013shell structured nanocomposite involving PdNPs encapsulated by a shell having a DMSO-like species could be constructed, it would act as an efficient and functional solid catalyst for the selective semihydrogenation of alkynes. 
Herein, we successfully synthesized core\u2013shell nanocomposites of PdNPs covered with a DMSO-like matrix on the surface of SiO2 (Pd@MPSO/SiO2). The shell, consisting of an alkyl sulfoxide network, acted as a macroligand and allowed the selective access of alkynes to the active center of the PdNPs, promoting the selective semihydrogenation of not only internal but also terminal alkynes without any additives. Moreover, these catalysts were reusable while maintaining high activity and selectivity. Pd@MPSO/SiO2 catalysts were synthesized as follows. Pd/SiO2 prepared according to our procedure [16] was stirred in n-heptane with small amounts of 3,5-di-tert-butyl-4-hydroxytoluene (BHT) and water at room temperature. Next, methyl 3-trimethoxysilylpropyl sulfoxide (MPSO) was added to the mixture and the mixture was heated. The slurry obtained was collected by filtration, washed, and dried in vacuo, affording Pd@MPSO/SiO2 as a gray powder. Altering the molar ratios of MPSO to Pd gave two kinds of catalysts: Pd@MPSO/SiO2-1 (MPSO:Pd = 7:1) and Pd@MPSO/SiO2-2 (MPSO:Pd = 100:1).", "which catalyst ?", "Pd@MPSO/SiO2", 3460.0, 3472.0], ["Catalytic bio\u2010oil upgrading to produce renewable fuels has attracted increasing attention in response to the decreasing oil reserves and the increased fuel demand worldwide. Herein, the catalytic hydrodeoxygenation (HDO) of guaiacol with carbon\u2010supported non\u2010sulfided metal catalysts was investigated. Catalytic tests were performed at 4.0 MPa and temperatures ranging from 623 to 673 K. Both Ru/C and Mo/C catalysts showed promising catalytic performance in HDO. The selectivity to benzene was 69.5 and 83.5 % at 653 K over Ru/C and 10Mo/C catalysts, respectively. Phenol, with a selectivity as high as 76.5 %, was observed mainly on 1Mo/C. However, the reaction pathway over both catalysts is different. Over the Ru/C catalyst, the O\u2013CH3 bond was cleaved to form the primary intermediate catechol, whereas only traces of catechol were detected over Mo/C catalysts. In addition, two types of active sites were detected over Mo samples after reduction in H2 at 973 K. Catalytic studies showed that the demethoxylation of guaiacol is performed over residual MoOx sites with high selectivity to phenol, whereas the consecutive HDO of phenol is performed over molybdenum carbide species, which is widely available only on the 10Mo/C sample. Different deactivation patterns were also observed over Ru/C and Mo/C catalysts.", "which catalyst ?", "Ru/C", 393.0, 397.0], ["Highly dispersed palladium nanoparticles (Pd NPs) immobilized on heteroatom-doped hierarchical porous carbon supports (N,O-carbon) with large specific surface areas are synthesized by a wet chemical reduction method. The N,O-carbon derived from naturally abundant bamboo shoots is fabricated by a tandem hydrothermal-carbonization process without the assistance of any templates, chemical activation reagents, or exogenous N or O sources in a simple and ecofriendly manner. The prepared Pd/N,O-carbon catalyst shows extremely high activity and excellent chemoselectivity for semihydrogenation of a broad range of alkynes to versatile and valuable alkenes under ambient conditions. 
The catalyst can be readily recovered for successive reuse with negligible loss in activity and selectivity, and is also applicable for practical gram-scale reactions.", "which catalyst ?", "Pd/N,O-carbon", 483.0, 496.0], ["The formation of a PdZn alloy from a 4.3% Pd/ZnO catalyst was characterized by combined in situ high-resolution X-ray diffraction (HRXRD) and X-ray absorption spectroscopy (XAS). Alloy formation started already at around 100 \u00b0C, likely at the surface, and reached the bulk with increasing temperature. The structure of the catalyst was close to the bulk value of a 1:1 PdZn alloy with an L1o structure (RPd\u2212Pd = 2.9 \u00c5, RPd\u2212Zn = 2.6 \u00c5, CNPd\u2212Zn = 8, CNPd\u2212Pd = 4) after reduction at 300 \u00b0C and above. The activity of the gas-phase hydrogenation of 1-pentyne decreased with the formation of the PdZn alloy. In contrast to Pd/SiO2, no full hydrogenation occurred over Pd/ZnO. Over time, only slight decomposition of the alloy occurred under reaction conditions.", "which catalyst ?", "Pd/ZnO", 42.0, 48.0], ["Cleavage of C\u2013O bonds in lignin can afford renewable aryl sources for fine chemicals. However, the high bond energies of these C\u2013O bonds, especially the 4-O-5-type diaryl ether C\u2013O bonds (~314 kJ/mol), make the cleavage very challenging. Here, we report visible-light photoredox-catalyzed C\u2013O bond cleavage of diaryl ethers by acidolysis with an aryl carboxylic acid and a following one-pot hydrolysis. Two molecules of phenols are obtained from one molecule of diaryl ether at room temperature. The aryl carboxylic acid used for the acidolysis can be recovered. The key to the success of the acidolysis is merging visible-light photoredox catalysis using an acridinium photocatalyst and Lewis acid catalysis using Cu(TMHD)2. Preliminary mechanistic studies indicate that the catalytic cycle occurs via a rare selective electrophilic attack of the generated aryl carboxylic radical on the electron-rich aryl ring of the diphenyl ether. This transformation is applied to a gram-scale reaction and the model of 4-O-5 lignin linkages.", "which catalyst ?", "Lewis acid", 699.0, 709.0], ["The association of heptamethine cyanine cation 1(+) with various counterions A (A = Br(-), I(-), PF(6)(-), SbF(6)(-), B(C(6)F(5))(4)(-), TRISPHAT) was realized. The six different ion pairs have been characterized by X-ray diffraction, and their absorption properties were studied in polar (DCM) and apolar (toluene) solvents. A small, hard anion (Br(-)) is able to strongly polarize the polymethine chain, resulting in the stabilization of an asymmetric dipolar-like structure in the crystal and in nondissociating solvents. On the contrary, in more polar solvents or when it is associated with a bulky soft anion (TRISPHAT or B(C(6)F(5))(4)(-)), the same cyanine dye adopts preferentially the ideal polymethine state. The solid-state and solution absorption properties of heptamethine dyes are therefore strongly correlated to the nature of the counterion.", "which Counterion ?", "TRISPHAT", 137.0, 145.0], ["Absorption spectra of Cyanine+Br- salts show a remarkable solvent dependence in non-/polar solvents, exhibiting narrow, sharp band shapes in dichloromethane but broad features in toluene; this change was attributed to ion pair association, breaking the symmetry of the cyanine, similar to the situation in the crystals (P.-A. Bouit et al., J. Am. Chem. Soc. 2010, 132, 4328). 
Our density functional theory (DFT) based quantum mechanics/molecular mechanics (QM/MM) calculations of the crystals evidence the crucial role of specific asymmetric anion positioning on the symmetry breaking. Molecular dynamics (MD) simulations prove the ion pair association in non-polar solvents. Time-dependent DFT vibronic calculations in toluene show that ion pairing, controlled by steric demands, induces symmetry breaking in the electronic ground state. This largely broadens the spectrum, in very reasonable agreement with experiment, while the principal pattern of vibrational modes is retained. The current findings allow us to establish a unified picture of symmetry breaking of polymethine dyes in fluid solution.", "which Counterion ?", "Br-", NaN, NaN], ["The association of heptamethine cyanine cation 1(+) with various counterions A (A = Br(-), I(-), PF(6)(-), SbF(6)(-), B(C(6)F(5))(4)(-), TRISPHAT) was realized. The six different ion pairs have been characterized by X-ray diffraction, and their absorption properties were studied in polar (DCM) and apolar (toluene) solvents. A small, hard anion (Br(-)) is able to strongly polarize the polymethine chain, resulting in the stabilization of an asymmetric dipolar-like structure in the crystal and in nondissociating solvents. On the contrary, in more polar solvents or when it is associated with a bulky soft anion (TRISPHAT or B(C(6)F(5))(4)(-)), the same cyanine dye adopts preferentially the ideal polymethine state. The solid-state and solution absorption properties of heptamethine dyes are therefore strongly correlated to the nature of the counterion.", "which BLA evaluation method ?", "X-Ray", 216.0, 221.0], ["Phyllosilicates have previously been detected in layered outcrops in and around the Martian outflow channel Mawrth Vallis. CRISM spectra of these outcrops exhibit features diagnostic of kaolinite, montmorillonite, and Fe/Mg\u2010rich smectites, along with crystalline ferric oxide minerals such as hematite. These minerals occur in distinct stratigraphic horizons, implying changing environmental conditions and/or a variable sediment source for these layered deposits. Similar stratigraphic sequences occur on both sides of the outflow channel and on its floor, with Al\u2010clay\u2010bearing layers typically overlying Fe/Mg\u2010clay\u2010bearing layers. This pattern, combined with layer geometries measured using topographic data from HiRISE and HRSC, suggests that the Al\u2010clay\u2010bearing horizons at Mawrth Vallis postdate the outflow channel and may represent a later sedimentary or altered pyroclastic deposit that drapes the topography.", "which Supplimentary Information ?", " HRSC", 725.0, 730.0], ["Gale Crater on Mars has a layered structure of deposits covered by the Noachian/Hesperian boundary. Mineral identification and classification in this region can provide important constraints on the environment and geological evolution of Mars. Although the Curiosity rover has provided in-situ mineralogical analysis in Gale, it is restricted to small areas. The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) aboard the Mars Reconnaissance Orbiter (MRO), with enhanced spectral resolution, can provide more information on spatial and temporal scales. In this paper, CRISM near-infrared spectral data are used to identify mineral classes and groups in the Martian Gale region. 
By using diagnostic absorption feature analysis in conjunction with spectral angle mapper (SAM), detailed mineral species are identified in the Gale region, e.g., kaolinite, chlorites, smectite, jarosite, and northupite. The clay minerals' diversity in Gale Crater suggests variation in aqueous alteration. The detection of northupite suggests that the Gale region has experienced a climate change from moist conditions with mineral dissolution to a drier climate with water evaporation. The presence of the ferric sulfate mineral jarosite, formed through the oxidation of iron sulfides in acidic environments, shows that Gale has experienced acidic, sulfur-rich conditions in its history.", "which Supplimentary Information ?", " Smectite", NaN, NaN], ["fiber bundle that carried the laser beam and returned the scattered radiation could be placed against surfaces at any desired angle by a deployment mechanism; otherwise, the instrument would need no moving parts. A modern micro-Raman spectrometer with its beam broadened (to expand the spot to 50-\u00b5m diameter) and set for low resolution (7 cm-1 in the 100-1400 cm-1 region relative to 514.5-nm excitation) was used to simulate the spectra anticipated from a rover instrument. We present spectra for lunar mineral grains, <1 mm soil fines, breccia fragments, and glasses. From frequencies of olivine peaks, we derived sufficiently precise forsterite contents to correlate the analyzed grains to known rock types, and we obtained appropriate forsterite contents from weak signals above background in soil fines and breccias. Peak positions of pyroxenes were sufficiently well determined to distinguish among orthorhombic, monoclinic, and triclinic (pyroxenoid) structures; additional information can be obtained from pyroxene spectra, but requires further laboratory calibration. Plagioclase provided sharp peaks in soil fines and most breccias even when the glass content was high.", "which Raman Stokes-shift range (cm-1) ?", "100-1400", 353.0, 361.0], ["This study uses photoresist materials in combination with several optical filters as a diagnostic to examine the relative importance of VUV-induced surface modifications for different cold atmospheric pressure plasma (CAPP) sources. The argon-fed kHz-driven ring-APPJ showed the largest ratio of VUV surface modification relative to the total modification introduced, whereas the MHz APPJ showed the largest overall surface modification. The MHz APPJ shows increased total thickness reduction and reduced VUV effect as oxygen is added to the feed gas, a condition that is often used for practical applications. We examine the influence of noble gas flow from the APPJ on the local environment. The local environment has a decisive impact on polymer modification from VUV emission as O2 readily absorbs VUV photons.", "which VUV ?", "FILTERS", 74.0, 81.0], ["A plasma jet has been developed for etching materials at atmospheric pressure and between 100 and C. Gas mixtures containing helium, oxygen and carbon tetrafluoride were passed between an outer, grounded electrode and a centre electrode, which was driven by 13.56 MHz radio frequency power at 50 to 500 W. At a flow rate of , a stable, arc-free discharge was produced. This discharge extended out through a nozzle at the end of the electrodes, forming a plasma jet. Materials placed 0.5 cm downstream from the nozzle were etched at the following maximum rates: for Kapton ( and He only), for silicon dioxide, for tantalum and for tungsten. 
Optical emission spectroscopy was used to identify the electronically excited species inside the plasma and outside in the jet effluent.", "which Unit_frequency ?", "MHz", 264.0, 267.0], ["The UV/VUV spectrum of a non\u2010thermal capillary plasma jet operating with Ar at ambient atmosphere and the temperature load of a substrate exposed to the jet have been measured. The VUV radiation is assigned to N, H, and O atomic lines along with an Ar2* excimer continuum. The absolute radiance (115\u2010200 nm) of the source has been determined. Maximum values of 880 \u03bcW/mm2sr are obtained. Substrate temperatures range between 35 \u00b0C for low powers and high gas flow conditions and 95 \u00b0C for high powers and reduced gas flow. The plasma source (13.56, 27.12 or 40.78 MHz) can be operated in Ar and in N2. The further addition of a low percentage of silicon-containing reactive admixtures has been demonstrated for thin film deposition. Several further applications related to surface modification have been successfully demonstrated. (\u00a9 2007 WILEY\u2010VCH Verlag GmbH & Co. KGaA, Weinheim)", "which Unit_frequency ?", "MHz", 564.0, 567.0], ["The vacuum ultraviolet (VUV) emissions from 115 to 200 nm from the effluent of an RF (1.2 MHz) capillary jet fed with pure argon and binary mixtures of argon and xenon or krypton (up to 20%) are analyzed. The feed gas mixture emanates into air at normal pressure. The Ar2 excimer second continuum, observed in the region of 120-135 nm, prevails in the pure Ar discharge. It decreases when small amounts (as low as 0.5%) of Xe or Kr are added. In that case, the resonant emission of Xe at 147 nm (or 124 nm for Kr, respectively) becomes dominant. The Xe2 second continuum at 172 nm appears for higher admixtures of Xe (10%). Furthermore, several N I emission lines, the O I resonance line, and the H I line appear due to ambient air. Two absorption bands (120.6 and 124.6 nm) are present in the spectra. Their origin could be unequivocally associated with O2 and O3. The radiance is determined end-on at varying axial distance in absolute units for various mixtures of Ar/Xe and Ar/Kr and compared to pure Ar. Integration over the entire VUV wavelength region provides the integrated spectral distribution. Maximum values of 2.2 mW\u00b7mm-2\u00b7sr-1 are attained in pure Ar and at a distance of 4 mm from the outlet nozzle of the discharge. By adding diminutive admixtures of Kr or Xe, the intensity and spectral distribution are effectively changed.", "which Unit_frequency ?", "MHz", 90.0, 93.0], ["The planar 13.56 MHz RF-excited low-temperature atmospheric pressure plasma jet (APPJ) investigated in this study is operated with helium feed gas and a small molecular oxygen admixture. The effluent leaving the discharge through the jet's nozzle contains very few charged particles and a high reactive oxygen species density. As its main reactive radical, essential for numerous applications, the ground state atomic oxygen density in the APPJ's effluent is measured spatially resolved with two-photon absorption laser induced fluorescence spectroscopy. The atomic oxygen density at the nozzle reaches a value of ~10(16) cm(-3). Even at several centimetres' distance, 1% of this initial atomic oxygen density can still be detected. Optical emission spectroscopy (OES) reveals the presence of short-lived excited oxygen atoms up to 10 cm distance from the jet's nozzle. 
The measured high ground state atomic oxygen density and the unaccounted-for presence of excited atomic oxygen require further investigations on a possible energy transfer from the APPJ's discharge region into the effluent: energetic vacuum ultraviolet radiation, measured by OES down to 110 nm, reaches far into the effluent, where it is presumed to be responsible for the generation of atomic oxygen.", "which Unit_frequency ?", "MHz", 17.0, 20.0], ["This study uses photoresist materials in combination with several optical filters as a diagnostic to examine the relative importance of VUV-induced surface modifications for different cold atmospheric pressure plasma (CAPP) sources. The argon-fed kHz-driven ring-APPJ showed the largest ratio of VUV surface modification relative to the total modification introduced, whereas the MHz APPJ showed the largest overall surface modification. The MHz APPJ shows increased total thickness reduction and reduced VUV effect as oxygen is added to the feed gas, a condition that is often used for practical applications. We examine the influence of noble gas flow from the APPJ on the local environment. The local environment has a decisive impact on polymer modification from VUV emission as O2 readily absorbs VUV photons.", "which Unit_frequency ?", "MHz", 380.0, 383.0], ["The efficient generation of reactive oxygen species (ROS) in cold atmospheric pressure plasma jets (APPJs) is an increasingly important topic, e.g. for the treatment of temperature-sensitive biological samples in the field of plasma medicine. A 13.56 MHz radio-frequency (rf) driven APPJ device operated with helium feed gas and small admixtures of oxygen (up to 1%), generating a homogeneous glow-mode plasma at low gas temperatures, was investigated. Absolute densities of ozone, one of the most prominent ROS, were measured across the 11 mm wide discharge channel by means of broadband absorption spectroscopy using the Hartley band centered at \u03bb = 255 nm. A two-beam setup with a reference beam in Mach\u2013Zehnder configuration is employed for improved signal-to-noise ratio, allowing high-sensitivity measurements in the investigated single-pass weak-absorbance regime. The results are correlated to gas temperature measurements, deduced from the rotational temperature of the N2 (C \u03a0u \u2192 B \u03a0g, \u03c5 = 0 \u2192 2) optical emission from introduced air impurities. The observed opposing trends of both quantities as a function of rf power input and oxygen admixture are analysed and explained in terms of a zero-dimensional plasma-chemical kinetics simulation. It is found that the gas temperature as well as the densities of O and O2(b \u03a3g) influence the absolute O3 densities when the rf power is varied.", "which Unit_frequency ?", "MHz", 251.0, 254.0], ["Two-dimensional spatially resolved absolute atomic oxygen densities are measured within an atmospheric pressure micro plasma jet and in its effluent. The plasma is operated in helium with an admixture of 0.5% of oxygen at 13.56 MHz and with a power of 1 W. Absolute atomic oxygen densities are obtained using two photon absorption laser induced fluorescence spectroscopy. 
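As an illustrative aside to the single-pass absorption measurement described in the ozone entry above: in the weak-absorbance regime the Beer\u2013Lambert law converts measured intensities directly into an absolute density. The sketch below is not from the cited paper; the cross-section is an assumed literature-scale value for the Hartley band, and the path length is simply taken as the 11 mm channel width quoted in the abstract.

```python
import math

# Hedged sketch: Beer-Lambert estimate of an absolute O3 density,
# n = ln(I0/I) / (sigma * L), for a single-pass absorption setup.
SIGMA_O3 = 1.1e-17   # cm^2, assumed Hartley-band cross-section near 255 nm
PATH_L = 1.1         # cm, taken as the 11 mm discharge channel width

def ozone_density(i_ref: float, i_transmitted: float) -> float:
    """Return the O3 number density in cm^-3 from reference and transmitted intensities."""
    return math.log(i_ref / i_transmitted) / (SIGMA_O3 * PATH_L)

# Example: a 1% dip in transmitted intensity (weak-absorbance regime)
print(f"n(O3) ~ {ozone_density(1.0, 0.99):.2e} cm^-3")
```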
The results are interpreted based on measurements of the electron dynamics by phase-resolved optical emission spectroscopy in combination with a simple model that balances the production of atomic oxygen with its losses due to chemical reactions and diffusion. Within the discharge, the atomic oxygen density builds up with a rise time of 600 \u00b5s along the gas flow and reaches a plateau of 8 \u00d7 10(15) cm(-3). In the effluent, the density decays exponentially with a decay time of 180 \u00b5s (corresponding to a decay length of 3 mm at a gas flow of 1.0 slm). It is found that both the species formation behavior and the maximum distance between the jet nozzle and substrates for possible oxygen treatments of surfaces can be controlled by adjusting the gas flow.", "which Unit_frequency ?", "MHz", 228.0, 231.0], ["Polyethylene terephthalate (PET) is the most important mass\u2010produced thermoplastic polyester used as a packaging material. Recently, thermophilic polyester hydrolases such as TfCut2 from Thermobifida fusca have emerged as promising biocatalysts for an eco\u2010friendly PET recycling process. In this study, postconsumer PET food packaging containers are treated with TfCut2 and show weight losses of more than 50% after 96 h of incubation at 70 \u00b0C. Differential scanning calorimetry analysis indicates that the high linear degradation rates observed in the first 72 h of incubation are due to the high hydrolysis susceptibility of the mobile amorphous fraction (MAF) of PET. The physical aging process of PET occurring at 70 \u00b0C is shown to gradually convert MAF to polymer microstructures with limited accessibility to enzymatic hydrolysis. Analysis of the chain\u2010length distribution of degraded PET by nuclear magnetic resonance spectroscopy reveals that MAF is rapidly hydrolyzed via a combinatorial exo\u2010 and endo\u2010type degradation mechanism, whereas the remaining PET microstructures are slowly degraded only by endo\u2010type chain scission, causing no detectable weight loss. 
Hence, efficient thermostable biocatalysts are required to overcome the competitive physical aging process for the complete degradation of postconsumer PET materials close to the glass transition temperature of PET.", "which Enzyme ?", "fusca", 200.0, 205.0], ["Face alignment is a crucial step in face recognition tasks. In particular, using landmark localization for geometric face normalization has been shown to be very effective, clearly improving the recognition results. However, no adequate databases exist that provide a sufficient number of annotated facial landmarks. The databases are either limited to frontal views, provide only a small number of annotated images or have been acquired under controlled conditions. Hence, we introduce a novel database overcoming these limitations: Annotated Facial Landmarks in the Wild (AFLW). AFLW provides a large-scale collection of images gathered from Flickr, exhibiting a large variety in face appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total 25,993 faces in 21,997 real-world images are annotated with up to 21 landmarks per image. Due to the comprehensive set of annotations AFLW is well suited to train and test algorithms for multi-view face detection, facial landmark localization and face pose estimation. Further, we offer a rich set of tools that ease the integration of other face databases and associated annotations into our joint framework.", "which Databases ?", "AFLW ", 573.0, 578.0], ["Cytogenetics is considered one of the most valuable prognostic determinants in acute myeloid leukemia (AML). However, many studies on which this assertion is based were limited by relatively small sample sizes or varying treatment approaches, leading to conflicting data regarding the prognostic implications of specific cytogenetic abnormalities. The Medical Research Council (MRC) AML 10 trial, which included children and adults up to 55 years of age, affords the opportunity not only to determine the independent prognostic significance of pretreatment cytogenetics in the context of large patient groups receiving comparable therapy, but also to address their impact on the outcome of subsequent transplantation procedures performed in first complete remission (CR). On the basis of response to induction treatment, relapse risk, and overall survival, three prognostic groups could be defined by cytogenetic abnormalities detected at presentation in comparison with the outcome of patients with a normal karyotype. AML associated with t(8;21), t(15;17) or inv(16) predicted a relatively favorable outcome, whereas in patients lacking these favorable changes, the presence of a complex karyotype, \u22125, del(5q), \u22127, or abnormalities of 3q defined a group with relatively poor prognosis. The remaining group of patients, including those with 11q23 abnormalities, +8, +21, +22, del(9q), del(7q) or other miscellaneous structural or numerical defects not encompassed by the favorable or adverse risk groups, was found to have an intermediate prognosis. The presence of additional cytogenetic abnormalities did not modify the outcome of patients with favorable cytogenetics. Subgroup analysis demonstrated that the three cytogenetically defined prognostic groups retained their predictive value in the context of secondary as well as de novo AML, within the pediatric age group and furthermore were found to be a key determinant of outcome from autologous or allogeneic bone marrow transplantation (BMT) in first CR. 
This study highlights the importance of diagnostic cytogenetics as an independent prognostic factor in AML, providing the framework for a stratified treatment approach to this disease, which has been adopted in the current MRC AML 12 trial.", "which Disease ?", "AML", 112.0, 115.0], ["The development of chromosomal abnormalities (CAs) in the Philadelphia chromosome (Ph)-negative metaphases during imatinib (IM) therapy in patients with newly diagnosed chronic myeloid leukemia (CML) has been reported only anecdotally. We assessed the frequency and significance of this phenomenon among 258 patients with newly diagnosed CML in chronic phase receiving IM. After a median follow-up of 37 months, 21 (9%) patients developed 23 CAs in Ph-negative cells; excluding -Y, this incidence was 5%. Sixteen (70%) of all CAs were observed in 2 or more metaphases. The median time from start of IM to the appearance of CAs was 18 months. The most common CAs were -Y and +8 in 9 and 3 patients, respectively. CAs were less frequent in young patients (P = .02) and those treated with high-dose IM (P = .03). In all but 3 patients, CAs were transient and disappeared after a median of 5 months. One patient developed acute myeloid leukemia (associated with -7). At last follow-up, 3 patients died from transplantation-related complications, myocardial infarction, and progressive disease, and 2 lost cytogenetic response. CAs occur in Ph-negative cells in a small percentage of patients with newly diagnosed CML treated with IM. In rare instances, these could reflect the emergence of a new malignant clone.", "which Disease ?", "CML", 196.0, 199.0], ["We have generated a large, unique database that includes morphologic, clinical, cytogenetic, and follow-up data from 2124 patients with myelodysplastic syndromes (MDSs) at 4 institutions in Austria and 4 in Germany. Cytogenetic analyses were successfully performed in 2072 (97.6%) patients, revealing clonal abnormalities in 1084 (52.3%) patients. Numeric and structural chromosomal abnormalities were documented for each patient and subdivided further according to the number of additional abnormalities. Thus, 684 different cytogenetic categories were identified. The impact of the karyotype on the natural course of the disease was studied in 1286 patients treated with supportive care only. Median survival was 53.4 months for patients with normal karyotypes (n = 612) and 8.7 months for those with complex anomalies (n = 166). A total of 13 rare abnormalities were identified with good (+1/+1q, t(1q), t(7q), del(9q), del(12p), chromosome 15 anomalies, t(17q), monosomy 21, trisomy 21, and -X), intermediate (del(11q), chromosome 19 anomalies), or poor (t(5q)) prognostic impact, respectively. The prognostic relevance of additional abnormalities varied considerably depending on the chromosomes affected. For all World Health Organization (WHO) and French-American-British (FAB) classification system subtypes, the karyotype provided additional prognostic information. Our analyses offer new insights into the prognostic significance of rare chromosomal abnormalities and specific karyotypic combinations in MDS.", "which Disease ?", "MDS", 1514.0, 1517.0], ["We describe the chromosomal abnormalities found in 104 previously untreated patients with non-Hodgkin's lymphoma (NHL) and the correlations of these abnormalities with disease characteristics. The cytogenetic method used was a 24- to 48-hour culture, followed by G-banding. Several significant associations were discovered. 
A trisomy 3 was correlated with high-grade NHL. In the patients with an immunoblastic NHL, an abnormal chromosome no. 3 or 6 was found significantly more frequently. As previously described, a t(14;18) was significantly correlated with a follicular growth pattern. Abnormalities on chromosome no. 17 were correlated with a diffuse histology and a shorter survival. A shorter survival was also correlated with a +5, +6, +18, all abnormalities on chromosome no. 5, or involvement of breakpoint 14q11-12. In a multivariate analysis, these chromosomal abnormalities appeared to be independent prognostic factors and correlated with survival more strongly than any traditional prognostic variable. Patients with a t(11;14)(q13;q32) had an elevated lactate dehydrogenase (LDH). Skin infiltration was correlated with abnormalities on 2p. Abnormalities involving breakpoints 6q11-16 were correlated with B symptoms. Patients with abnormalities involving breakpoints 3q21-25 and 13q21-24 had more frequent bulky disease. The correlations of certain clinical findings with specific chromosomal abnormalities might help unveil the pathogenetic mechanisms of NHL and tailor treatment regimens.", "which Disease ?", "NHL", 114.0, 117.0], ["Although cytogenetic abnormalities are important prognostic factors in myeloid malignancies, they are not included in current prognostic scores for primary myelofibrosis (PMF). To determine their relevance in PMF, we retrospectively examined the impact of cytogenetic abnormalities and karyotypic evolution on the outcome of 256 patients. Baseline cytogenetic status impacted significantly on survival: patients with favorable abnormalities (sole deletions in 13q or 20q, or trisomy 9 \u00b1 one other abnormality) had survivals similar to those with normal diploid karyotypes (median, 63 and 46 months, respectively), whereas patients with unfavorable abnormalities (rearrangement of chromosome 5 or 7, or \u2265 3 abnormalities) had a poor median survival of 15 months. Patients with abnormalities of chromosome 17 had a median survival of only 5 months. A model containing karyotypic abnormalities, hemoglobin, platelet count, and performance status effectively risk-stratified patients at initial evaluation. Among 73 patients assessable for clonal evolution during stable chronic phase, those who developed unfavorable or chromosome 17 abnormalities had median survivals of 18 and 9 months, respectively, suggesting the potential role of cytogenetics as a risk factor applicable at any time in the disease course. Dynamic prognostic significance of cytogenetic abnormalities in PMF should be further prospectively evaluated.", "which Disease ?", "PMF", 171.0, 174.0], ["Cytogenetic abnormalities, evaluated either by karyotype or by fluorescence in situ hybridization (FISH), are considered the most important prognostic factor in multiple myeloma (MM). However, there is no information about the prognostic impact of genomic changes detected by comparative genomic hybridization (CGH). We have analyzed the frequency and prognostic impact of genetic changes as detected by CGH and evaluated the relationship between these chromosomal imbalances and IGH translocation, analyzed by FISH, in 74 patients with newly diagnosed MM. Genomic changes were identified in 51 (69%) of the 74 MM patients. 
The most recurrent abnormalities among the cases with genomic changes were gains on chromosome regions 1q (45%), 5q (24%), 9q (24%), 11q (22%), 15q (22%), 3q (16%), and 7q (14%), while losses mainly involved chromosomes 13 (39%), 16q (18%), 6q (10%), and 8p (10%). Remarkably, the 6 patients with gains on 11q had IGH translocations. Multivariate analysis selected chromosomal losses, 11q gains, age, and type of treatment (conventional chemotherapy vs autologous transplantation) as independent parameters for predicting survival. Genomic losses retained the prognostic value irrespective of treatment approach. According to these results, losses of chromosomal material evaluated by CGH represent a powerful prognostic factor in MM patients.", "which Disease ?", "Myeloma", 170.0, 177.0], ["Acute lymphoblastic leukemia (ALL) is the most common childhood malignancy, and implementation of risk-adapted therapy has been instrumental in the dramatic improvements in clinical outcomes. A key to risk-adapted therapies includes the identification of genomic features of individual tumors, including chromosome number (for hyper- and hypodiploidy) and gene fusions, notably ETV6-RUNX1, TCF3-PBX1, and BCR-ABL1 in B-cell ALL (B-ALL). RNA-sequencing (RNA-seq) of large ALL cohorts has expanded the number of recurrent gene fusions recognized as drivers in ALL, and identification of these new entities will contribute to refining ALL risk stratification. We used RNA-seq on 126 ALL patients from our clinical service to test the utility of including RNA-seq in standard-of-care diagnostic pipelines to detect gene rearrangements and IKZF1 deletions. RNA-seq identified 86% of rearrangements detected by standard-of-care diagnostics. KMT2A (MLL) rearrangements, although usually identified, were the most commonly missed by RNA-seq as a result of low expression. RNA-seq identified rearrangements that were not detected by standard-of-care testing in 9 patients. These were found in patients who were not classifiable using standard molecular assessment. We developed an approach to detect the most common IKZF1 deletion from RNA-seq data and validated this using an RQ-PCR assay. We applied an expression classifier to identify Philadelphia chromosome-like B-ALL patients. T-ALL proved a rich source of novel gene fusions, which have clinical implications or provide insights into disease biology. Our experience shows that RNA-seq can be implemented within an individual clinical service to enhance the current molecular diagnostic risk classification of ALL.", "which Disease ?", "acute lymphoblastic leukemia", 0.0, 28.0], ["An artificial neural network (ANN) was applied successfully to predict flow boiling curves. The databases used in the analysis are from the 1960s, including 1,305 data points which cover these parameter ranges: pressure P = 100\u20131,000 kPa, mass flow rate G = 40\u2013500 kg/m2-s, inlet subcooling \u0394Tsub = 0\u201335\u00b0C, wall superheat \u0394Tw = 10\u2013300\u00b0C and heat flux Q = 20\u20138,000 kW/m2. The proposed methodology allows us to achieve accurate results; thus, it is suitable for the processing of the boiling curve data. The effects of the main parameters on flow boiling curves were analyzed using the ANN. The heat flux increases with increasing inlet subcooling for all heat transfer modes. Mass flow rate has no significant effects on nucleate boiling curves. The transition boiling and film boiling heat fluxes will increase with an increase in the mass flow rate. 
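A minimal sketch of the kind of ANN regression the boiling-curve entry above describes: a small feedforward network mapping (P, G, \u0394Tsub, \u0394Tw) to heat flux Q. The architecture, library choice, and the synthetic stand-in data (the real study used 1,305 historical points) are illustrative assumptions, not details from the paper.

```python
# Hedged sketch: feedforward ANN regression of a flow boiling curve.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(100, 1000, 500),   # pressure P, kPa
    rng.uniform(40, 500, 500),     # mass flow rate G, kg/m^2-s
    rng.uniform(0, 35, 500),       # inlet subcooling dT_sub, degC
    rng.uniform(10, 300, 500),     # wall superheat dT_wall, degC
])
y = 20 + 7980 * rng.random(500)    # placeholder heat flux Q, kW/m^2

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(scaler.transform(X), y)
print(model.predict(scaler.transform(X[:3])))  # predicted Q for three samples
```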
Pressure plays a predominant role and improves heat transfer in all boiling regions except the film boiling region. There are slight differences between the steady and the transient boiling curves in all boiling regions except the nucleate region. The transient boiling curve lies below the corresponding steady boiling curve.", "which Types ?", "ANN", 30.0, 33.0], ["Although rotating beds are good equipment for intensified separations and multiphase reactions, the fundamentals of their hydrodynamics are still unknown. Over a wide range of operating conditions, the pressure drop across an irrigated bed is significantly lower than that across a dry bed. In this regard, an approach based on artificial intelligence, that is, an artificial neural network (ANN), has been proposed for prediction of the pressure drop across rotating packed beds (RPB). The experimental data sets used as input data (280 data points) were divided into training and testing subsets. The training data set has been used to develop the ANN model, while the testing data set was used to validate the performance of the trained ANN model. The predicted pressure drop values show good agreement with the experimental values with respect to several statistical parameters (AARD% = 4.70, MSE = 2.0 \u00d7 10(-5) and R2 = 0.9994). The designed ANN model can estimate the pressure drop in the countercurrent-flow rotating packed bed even with the unexpected phenomenon of a higher pressure drop in the dry bed than in the wet bed. Also, the designed ANN model has been able to predict the pressure drop in a wet bed with good accuracy compared with the experimental data.", "which Types ?", "ANN", 378.0, 381.0], ["In this work, the utilization of neural networks in hybrid with first-principles models for modelling and control of a batch polymerization process was investigated. Following the steps of the methodology, hybrid neural network (HNN) forward models and an HNN inverse model of the process were first developed, and then the performance of the model in a direct inverse control strategy and an internal model control (IMC) strategy was investigated. For comparison purposes, the control performance of a conventional neural network and a PID controller was compared with that of the proposed HNN. The results show that the HNN is able to control the process well in both set-point tracking and disturbance rejection studies.", "which Types ?", "HNN", 227.0, 230.0], ["A novel Structure Approaching Hybrid Neural Network (SAHNN) approach to model batch reactors is presented. The Virtual Supervisor\u2212Artificial Immune Algorithm method is utilized for the training of SAHNN, especially for batch processes with partially unmeasurable state variables. SAHNN involves the use of approximate mechanistic equations to characterize unmeasured state variables. Since the main interest in batch process operation is on the end-of-batch product quality, an extended integral square error control index based on the SAHNN model is applied to track the desired temperature profile of a batch process. This approach introduces model mismatches and unmeasured disturbances into the optimal control strategy and provides a feedback channel for control. The robustness and disturbance rejection performance of the control system are thereby enhanced. 
The simulation results indicate that the SAHNN model and the model-based optimal control strategy of the batch process are effective.", "which Types ?", "SAHNN", 53.0, 58.0], ["Data fusion is an emerging technology for fusing data or information from multiple sources of the environment, through measurement and detection, to make a more accurate and reliable estimation or decision. In this Article, energy consumption data are collected from ethylene plants with high-temperature steam cracking process technology. An integrated framework for energy efficiency estimation is proposed on the basis of a data fusion strategy. A Hierarchical Variable Variance Fusion (HVVF) algorithm and a Fuzzy Analytic Hierarchy Process (FAHP) method are proposed to estimate energy efficiencies of ethylene equipment. For different equipment scales with the same process technology, the HVVF algorithm is used to estimate energy efficiency ranks among different pieces of equipment. For different technologies based on HVVF results, the FAHP method based on the approximate fuzzy eigenvector is used to get energy efficiency indices (EEI) of the overall ethylene industry. The comparisons are used to assess energy utilization...", "which Types ?", "Fuzzy", 509.0, 514.0], ["Fuzzy logic has been applied to a batch microbial fermentation described by a model with two adjustable parameters which associate product formation with the increasing and/or stationary phases of cell growth. The fermentation is inhibited by its product and, beyond a critical concentration, also by the substrate. To mimic an industrial condition, Gaussian noise was added and the resulting performance was simulated by fuzzy estimation systems. Simple rules with a few membership functions were able to portray bioreactor performance and the feedback interactions between cell growth and the concentrations of substrate and product. Through careful choices of the membership functions and the fuzzy logic, accuracies better than previously reported for ideal fermentations could be obtained, suggesting the suitability of fuzzy estimations for on-line applications.", "which Types ?", "Fuzzy", 0.0, 5.0], ["Since online measurement of the melt index (MI) of polyethylene is difficult, a virtual sensor model is desirable. However, a polyethylene process usually produces products with multiple grades. The relation between process and quality variables is highly nonlinear. Besides, a virtual sensor model of a real plant process with many inputs has to deal with collinearity and time-varying issues. A new recursive algorithm, which models a multivariable, time-varying and nonlinear system, is presented. Principal component analysis (PCA) is used to eliminate the collinearity. Fuzzy c-means (FCM) and fuzzy Takagi\u2013Sugeno (FTS) modeling are used to decompose the nonlinear system into several linear subsystems. Effectiveness of the model is demonstrated using real plant data from a polyethylene process.", "which Types ?", "Fuzzy c-means (FCM)", NaN, NaN], ["Prolonging the lifetime of wireless sensor networks has always been a determining factor when designing and deploying such networks. Clustering is one technique that can be used to extend the lifetime of sensor networks by grouping sensors together. However, there exists the hot-spot problem, which causes unbalanced energy consumption in equally formed clusters. 
In this paper, we propose UHEED, an unequal clustering algorithm which mitigates this problem, leads to a more uniform residual energy distribution in the network, and improves the network lifetime. Furthermore, from the simulation results presented, we were able to deduce the most appropriate unequal cluster size to be used.", "which Protocol ?", "UHEED", 393.0, 398.0], ["Clustering is a standard approach for achieving efficient and scalable performance in wireless sensor networks. Most of the published clustering algorithms strive to generate the minimum number of disjoint clusters. However, we argue that guaranteeing some degree of overlap among clusters can facilitate many applications, like inter-cluster routing, topology discovery and node localization, recovery from cluster head failure, etc. We formulate the overlapping multi-hop clustering problem as an extension to the k-dominating set problem. Then we propose MOCA, a randomized distributed multi-hop clustering algorithm for organizing the sensors into overlapping clusters. We validate MOCA in a simulated environment and analyze the effect of different parameters, e.g. node density and network connectivity, on its performance. The simulation results demonstrate that MOCA is scalable, introduces low overhead and produces approximately equal-sized clusters.", "which Protocol ?", "MOCA", 558.0, 562.0], ["Prolonged network lifetime, scalability, and load balancing are important requirements for many ad-hoc sensor network applications. Clustering sensor nodes is an effective technique for achieving these goals. In this work, we propose a new energy-efficient approach for clustering nodes in ad-hoc sensor networks. Based on this approach, we present a protocol, HEED (hybrid energy-efficient distributed clustering), that periodically selects cluster heads according to a hybrid of their residual energy and a secondary parameter, such as node proximity to its neighbors or node degree. HEED does not make any assumptions about the distribution or density of nodes, or about node capabilities, e.g., location-awareness. The clustering process terminates in O(1) iterations, and does not depend on the network topology or size. The protocol incurs low overhead in terms of processing cycles and messages exchanged. It also achieves fairly uniform cluster head distribution across the network. A careful selection of the secondary clustering parameter can balance load among cluster heads. Our simulation results demonstrate that HEED outperforms weight-based clustering protocols in terms of several cluster characteristics. We also apply our approach to a simple application to demonstrate its effectiveness in prolonging the network lifetime and supporting data aggregation.", "which Protocol ?", "HEED", 361.0, 365.0], ["Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. 
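For reference alongside the LEACH entry above: the randomized cluster-head rotation it mentions is commonly implemented with the election threshold from the full LEACH paper (the formula is not stated in the abstract itself). A minimal sketch:

```python
import random

# LEACH-style randomized cluster-head election. P is the desired fraction of
# cluster heads; r is the current round. Nodes that already served as cluster
# head in the current epoch of 1/P rounds are excluded (threshold 0).
def leach_threshold(P: float, r: int) -> float:
    return P / (1 - P * (r % round(1 / P)))

def elects_itself(P: float, r: int, served_this_epoch: bool) -> bool:
    if served_this_epoch:
        return False
    return random.random() < leach_threshold(P, r)

# Example: 5% desired cluster heads, round 7.
print(elects_itself(0.05, r=7, served_this_epoch=False))
```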
LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.", "which Protocol ?", "LEACH", 485.0, 490.0], ["In this study, the authors propose a mobility-based clustering (MBC) protocol for wireless sensor networks with mobile nodes. In the proposed clustering protocol, a sensor node elects itself as a cluster-head based on its residual energy and mobility. A non-cluster-head node aims at link stability with a cluster head during clustering, according to the estimated connection time. Each non-cluster-head node is allocated a timeslot for data transmission in ascending order in a time division multiple access (TDMA) schedule based on the estimated connection time. In the steady-state phase, a sensor node transmits its sensed data in its timeslot and broadcasts a join request message to join a new cluster and avoid further packet loss when it has lost or is going to lose its connection with its cluster head. Simulation results show that the MBC protocol can reduce the packet loss by 25% compared with the cluster-based routing (CBR) protocol and 50% compared with the low-energy adaptive clustering hierarchy-mobile (LEACH-mobile) protocol. Moreover, it outperforms both the CBR protocol and the LEACH-mobile protocol in terms of average energy consumption and average control overhead, and can better adapt to a highly mobile environment.", "which Protocol ?", "MBC", 64.0, 67.0], ["In order to gather information more efficiently, wireless sensor networks (WSNs) are partitioned into clusters. Most of the proposed clustering algorithms do not consider the location of the base station. This situation causes the hot-spot problem in multi-hop WSNs. Unequal clustering mechanisms, which are designed by considering the base station location, solve this problem. In this paper, we introduce a fuzzy unequal clustering algorithm (EAUCF) which aims to prolong the lifetime of WSNs. EAUCF adjusts the cluster-head radius considering the residual energy and the distance to the base station parameters of the sensor nodes. This helps decrease the intra-cluster work of the sensor nodes which are closer to the base station or have a lower battery level. We utilize fuzzy logic to handle the uncertainties in cluster-head radius estimation. We compare our algorithm with some popular algorithms in the literature, namely LEACH, CHEF and EEUC, according to First Node Dies (FND), Half of the Nodes Alive (HNA) and energy-efficiency metrics. Our simulation results show that EAUCF performs better than the other algorithms in most cases. Therefore, EAUCF is a stable and energy-efficient clustering algorithm to be utilized in any real-time WSN application.", "which Protocol ?", "EAUCF", 446.0, 451.0], ["This paper focuses on reducing the power consumption of wireless microsensor networks. Therefore, a communication protocol named LEACH (low-energy adaptive clustering hierarchy) is modified. We extend LEACH's stochastic cluster-head selection algorithm by a deterministic component. 
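The deterministic extension described in the entry just above scales the stochastic threshold by the node's residual-energy ratio; the exact expression (including a further correction for nodes that stay idle over many rounds) is given in the full paper, so the following is only a hedged sketch of the basic idea:

```python
# Hedged sketch: deterministic, energy-aware cluster-head selection, i.e. the
# stochastic LEACH threshold weighted by the residual-energy ratio of the node.
def dchs_threshold(P: float, r: int, e_residual: float, e_initial: float) -> float:
    t_leach = P / (1 - P * (r % round(1 / P)))
    return t_leach * (e_residual / e_initial)

# A node at 40% residual energy is proportionally less likely to elect itself.
print(dchs_threshold(0.05, r=7, e_residual=0.4, e_initial=1.0))
```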
Depending on the network configuration an increase in network lifetime by about 30% can be accomplished. Furthermore, we present a new approach to define the lifetime of microsensor networks using three new metrics FND (First Node Dies), HNA (Half of the Nodes Alive), and LND (Last Node Dies).", "which Protocol ?", "Deterministic", 258.0, 271.0], ["Wireless sensor networks with thousands of tiny sensor nodes are expected to find wide applicability and increasing deployment in coming years, as they enable reliable monitoring and analysis of the environment. In this paper we propose a modification to a well-known protocol for sensor networks called Low Energy Adaptive Clustering Hierarchy (LEACH). The latter is designed for sensor networks where the end-user wants to remotely monitor the environment. In such a situation, the data from the individual nodes must be sent to a central base station, often located far from the sensor network, through which the end-user can access the data. In this context our contribution is represented by building a two-level hierarchy to realize a protocol that saves energy better. Our TL-LEACH uses random rotation of local cluster base stations (primary cluster-heads and secondary cluster-heads). In this way we build, where it is possible, a two-level hierarchy. This permits to better distribute the energy load among the sensors in the network, especially when the density of the network is higher. TL-LEACH uses localized coordination to enable scalability and robustness. We evaluated the performance of our protocol with NS-2 and we observed that our protocol outperforms LEACH in terms of energy consumption and lifetime of the network.", "which Protocol ?", "TL-LEACH", 790.0, 798.0], ["In order to prolong the lifetime of wireless sensor networks, this paper presents a multihop routing protocol with unequal clustering (MRPUC). On the one hand, cluster heads deliver the data to the base station with relay to reduce energy consumption. On the other hand, MRPUC uses many measures to balance the energy of nodes. First, it selects the nodes with more residual energy as cluster heads, and clusters closer to the base station have smaller sizes to preserve some energy during intra-cluster communication for inter-cluster packet forwarding. Second, when regular nodes join clusters, they consider not only the distance to cluster heads but also the residual energy of cluster heads. Third, cluster heads choose those nodes as relay nodes, which have minimum energy consumption for forwarding and maximum residual energy to avoid dying earlier. Simulation results show that MRPUC performs much better than similar protocols.", "which Protocol ?", "MRPUC", 135.0, 140.0], ["We study the impact of heterogeneity of nodes, in terms of their energy, in wireless sensor networks that are hierarchically clustered. In these networks some of the nodes become cluster heads, aggregate the data of their cluster members and transmit it to the sink. We assume that a percentage of the population of sensor nodes is equipped with additional energy resources\u2014this is a source of heterogeneity which may result from the initial setting or as the operation of the network evolves. We also assume that the sensors are randomly (uniformly) distributed and are not mobile, the coordinates of the sink and the dimensions of the sensor field are known. We show that the behavior of such sensor networks becomes very unstable once the first node dies, especially in the presence of node heterogeneity. 
Classical clustering protocols assume that all the nodes are equipped with the same amount of energy and as a result, they can not take full advantage of the presence of node heterogeneity. We propose SEP, a heterogeneous-aware protocol to prolong the time interval before the death of the first node (which we refer to as the stability period), which is crucial for many applications where the feedback from the sensor network must be reliable. SEP is based on weighted election probabilities of each node to become cluster head according to the remaining energy in each node. We show by simulation that SEP always prolongs the stability period compared to (and that the average throughput is greater than) the one obtained using current clustering protocols. We conclude by studying the sensitivity of our SEP protocol to heterogeneity parameters capturing energy imbalance in the network. We found that SEP yields a longer stability region for higher values of extra energy brought by more powerful nodes.", "which Protocol ?", "SEP", 1010.0, 1013.0], ["We present a fast local clustering service, FLOC, that partitions a multi-hop wireless network into nonoverlapping and approximately equal-sized clusters. Each cluster has a clusterhead such that all nodes within unit distance of the clusterhead belong to the cluster but no node beyond distance m from the clusterhead belongs to the cluster. By asserting m \u2265 2, FLOC achieves locality: effects of cluster formation and faults/changes at any part of the network are contained within at most m units. By taking unit distance to be the reliable communication radius and m to be the maximum communication radius, FLOC exploits the double-band nature of the wireless radio model and achieves clustering in constant time regardless of the network size. Through simulations and experiments with actual deployments, we analyze the tradeoffs between clustering time and the quality of clustering, and suggest suitable parameters for FLOC to achieve a fast completion time without compromising the quality of the resulting clustering.", "which Protocol ?", "FLOC", 44.0, 48.0], ["Clustering provides an effective way for prolonging the lifetime of a wireless sensor network. Current clustering algorithms usually utilize two techniques, selecting cluster heads with more residual energy and rotating cluster heads periodically, to distribute the energy consumption among nodes in each cluster and extend the network lifetime. However, they rarely consider the hot spots problem in multihop wireless sensor networks. When cluster heads cooperate with each other to forward their data to the base station, the cluster heads closer to the base station are burdened with heavy relay traffic and tend to die early, leaving areas of the network uncovered and causing network partition. To address the problem, we propose an energy-efficient unequal clustering (EEUC) mechanism for periodical data gathering in wireless sensor networks. It partitions the nodes into clusters of unequal size, and clusters closer to the base station have smaller sizes than those farther away from the base station. Thus cluster heads closer to the base station can preserve some energy for the inter-cluster data forwarding. We also propose an energy-aware multihop routing protocol for the inter-cluster communication. 
Simulation results show that our unequal clustering mechanism balances the energy consumption well among all sensor nodes and achieves an obvious improvement on the network lifetime.", "which Protocol ?", "EEUC", 775.0, 779.0], ["Wireless sensor networks consist of small battery-powered devices with limited energy resources. Once deployed, the small sensor nodes are usually inaccessible to the user, and thus replacement of the energy source is not feasible. Hence, energy efficiency is a key design issue that needs to be enhanced in order to improve the life span of the network. Several network layer protocols have been proposed to improve the effective lifetime of a network with a limited energy supply. In this article we propose a centralized routing protocol called base-station controlled dynamic clustering protocol (BCDCP), which distributes the energy dissipation evenly among all sensor nodes to improve network lifetime and average energy savings. The performance of BCDCP is then compared to clustering-based schemes such as low-energy adaptive clustering hierarchy (LEACH), LEACH-centralized (LEACH-C), and power-efficient gathering in sensor information systems (PEGASIS). Simulation results show that BCDCP reduces overall energy consumption and improves network lifetime over its comparatives.", "which Protocol ?", "BCDCP", 601.0, 606.0], ["Due to the imbalance of energy consumption of nodes in wireless sensor networks (WSNs), some local nodes die prematurely, which causes the network partitions and then shortens the lifetime of the network. The phenomenon is called the \u201chot spot\u201d or \u201cenergy hole\u201d problem. For this problem, an energy-aware distributed unequal clustering protocol (EADUC) in multihop heterogeneous WSNs is proposed. Compared with the previous protocols, the cluster heads obtained by EADUC can achieve balanced energy, good distribution, and seamless coverage for all the nodes. Moreover, the complexity of time and control message is low. Simulation experiments show that EADUC can prolong the lifetime of the network significantly.", "which Protocol ?", "EADUC", 342.0, 347.0], ["Wireless sensor networks with thousands of tiny sensor nodes are expected to find wide applicability and increasing deployment in coming years, as they enable reliable monitoring and analysis of the environment. In this paper, we propose a hybrid routing protocol (APTEEN) which allows for comprehensive information retrieval. The nodes in such a network not only react to time-critical situations, but also give an overall picture of the network at periodic intervals in a very energy-efficient manner. Such a network enables the user to request past, present and future data from the network in the form of historical, one-time and persistent queries respectively. We evaluated the performance of these protocols and observed that they outperform existing protocols in terms of energy consumption and longevity of the network.", "which Protocol ?", "APTEEN", 266.0, 272.0], ["Adaptive support weight (ASW) methods represent the state of the art in local stereo matching, while the bilateral filter-based ASW method achieves outstanding performance. However, this method fails to resolve the ambiguity induced by nearby pixels at different disparities but with similar colors. 
In this paper, we introduce a novel trilateral filter (TF)-based ASW method that remedies such ambiguities by considering the possible disparity discontinuities through color discontinuity boundaries, i.e., the boundary strength between two pixels, which is measured by a local energy model. We also present a recursive TF-based ASW method whose computational complexity is O(N) for the cost aggregation step, and O(N log2(N)) for boundary detection, where N denotes the input image size. This complexity is thus independent of the support window size. The recursive TF-based method is a nonlocal cost aggregation strategy. The experimental evaluation on the Middlebury benchmark shows that the proposed method, whose average error rate is 4.95%, outperforms other local methods in terms of accuracy. Equally, the average runtime of the proposed TF-based cost aggregation is roughly 260 ms on a 3.4-GHz Intel Core i7 CPU, which is comparable with state-of-the-art efficiency.", "which Taxonomy stage: Step ?", "ASW", 25.0, 28.0], ["This paper proposes a segmentation-based approach for matching of high-resolution stereo images in real time. The approach employs direct region matching in a raster scan fashion influenced by scanline approaches, but with pixel decoupling. To enable real-time performance it is implemented as a heterogeneous system of an FPGA and a sequential processor. Additionally, the approach is designed for low resource usage in order to qualify as part of unified image processing in an embedded system.", "which Taxonomy stage: Step ?", "Segmentation", 22.0, 34.0], ["SymStereo is a new algorithm used for stereo estimation. Instead of measuring photo-similarity, it proposes novel cost functions that measure symmetry for evaluating the likelihood of two pixels being a match. In this work we propose a parallel approach of the LogN matching cost variant of SymStereo capable of processing pairs of images in real-time for depth estimation. The power of the graphics processing units utilized allows exploring more efficiently the bank of log-Gabor wavelets developed to analyze symmetry, in the spectral domain. We analyze tradeoffs and propose different parameterizations of the signal processing algorithm to accommodate image size, dimension of the filter bank, number of wavelets and also the number of disparities that controls the space density of the estimation, and still process up to 53 frames per second (fps) for images with size 288 \u00d7 384 and up to 3 fps for 768 \u00d7 1024 images.", "which Taxonomy stage: Step ?", "Symmetry", 142.0, 150.0], ["Recent growth of technology has also increased identification insecurity. Signature is a unique feature which is different for every other person, and each person can be identified using their own handwritten signature. Gender identification is one of the key features in human identification. In this paper, a feature-based gender detection method has been proposed. The proposed framework takes a handwritten signature as input. Afterwards, several features are extracted from those images. The extracted features and their values are stored as data, which is further classified using Back Propagation Neural Network (BPNN). Gender classification is done using BPNN, which is one of the most popular classifiers. The proposed system is broken into two parts. 
In the first part, several features such as roundness, skewness, kurtosis, mean, standard deviation, area, Euler number, distribution density of black pixel, entropy, equi-diameter, connected component (cc) and perimeter were taken as features. The obtained features are then divided into two categories. In the first category, the experimental feature set contains the Euler number, whereas in the second category the obtained feature set excludes the same. BPNN is used to classify both types of feature sets to recognize the gender. Our study reports an improvement of 4.7% in the gender classification system by the inclusion of Euler number as a feature.", "which Classifier ?", "BPNN", 624.0, 628.0], ["As signature continues to play a crucial part in personal identification for a number of applications including financial transactions, an efficient signature authentication system becomes more and more important. Various researches in the field of signature authentication have been dynamically pursued for many years and its extent is still being explored. Signature verification is the process which is carried out to determine whether a given signature is genuine or forged. It can be distinguished into two types such as the Online and the Offline. In this paper we presented the Offline signature verification system and extracted some new local and geometric features like QuadSurface feature, Area ratio, Distance ratio etc. For this we have taken some genuine signatures from 5 different persons and extracted the features from all of the samples after proper preprocessing steps. The training phase uses Gaussian Mixture Model (GMM) technique to obtain a reference model for each signature sample of a particular user. By computing Euclidean distance between reference signature and all the training sets of signatures, acceptance range is defined. If the Euclidean distance of a query signature is within the acceptance range then it is detected as an authenticated signature; else, a forged signature.", "which Classifier ?", "GMM", 934.0, 937.0], ["In the field of information security, the usage of biometrics is growing for user authentication. Automatic signature recognition and verification is one of the biometric techniques, which is only one of several used to verify the identity of individuals. In this paper, a foreground and background based technique is proposed for identification of scripts from bi-lingual (English/Roman and Chinese) off-line signatures. This system will identify whether a claimed signature belongs to the group of English signatures or Chinese signatures. The identification of signatures based on their script is a major contribution for multi-script signature verification. Two background information extraction techniques are used to produce the background components of the signature images. A gradient-based method was used to extract the features of the foreground as well as background components. Zernike Moment feature was also employed on signature samples. Support Vector Machine (SVM) is used as the classifier for signature identification in the proposed system. A database of 1120 (640 English+480 Chinese) signature samples was used for training and 560 (320 English+240 Chinese) signature samples were used for testing the proposed system. 
An encouraging identification accuracy of 97.70% was obtained from the experiment using the gradient feature.", "which Classifier ?", "SVM", 974.0, 977.0], ["In this paper, we present an approach based on chain code histogram features enhanced through a Laplacian of Gaussian filter for off-line signature verification. In the proposed approach, the four-directional chain code histogram of each grid on the contour of the signature image is extracted. The Laplacian of Gaussian filter is used to enhance the extracted features of each signature sample. Thus, the extracted and enhanced features of all signature samples of the off-line signature dataset constitute the knowledge base. Subsequently, the Support Vector Machine (SVM) classifier is used as the verification tool. The SVM is trained with the randomly selected training sample's features including genuine and random forgeries and tested with the remaining untrained genuine along with the skilled forge sample features to classify the tested/questioned sample as genuine or forged. Similar to the real-time scenario, in the proposed approach we have not considered the skilled forge sample to train the classifier. Extensive experimentations have been conducted to exhibit the performance of the proposed approach on the publicly available datasets, namely CEDAR, GPDS-100 and MUKOS, a regional language dataset. The state-of-the-art off-line signature verification methods are considered for comparative study to justify the feasibility of the proposed approach for off-line signature verification and to reveal its accuracy over the existing approaches.", "which Classifier ?", "SVM", 568.0, 571.0], ["In this paper, we proposed to combine the transform based approach with a dimensionality reduction technique for off-line signature verification. The proposed approach has four major phases: Preprocessing, Feature extraction, Feature reduction and Classification. In the feature extraction phase, Discrete Cosine Transform (DCT) is employed on the signature image to obtain the upper-left corner block of size m \u00d7 n as a representative feature vector. These features are subjected to Linear Discriminant Analysis (LDA) for further reduction and representing the signature with an optimal set of features. The features thus obtained from all the samples in the dataset form the knowledge base. The Support Vector Machine (SVM), a bilinear classifier, is used for classification and the performance is measured through the FAR/FRR metric. Experiments have been conducted on standard signature datasets, namely CEDAR and GPDS-160, and MUKOS, a regional language (Kannada) dataset. The comparative study is also provided with the well known approaches to exhibit the performance of the proposed approach.", "which Classifier ?", "SVM", 711.0, 714.0], ["Software reliability is defined as the probability of the failure-free operation of a software system for a specified period of time in a specified environment. Day by day software applications are growing more complex and with more emphasis on reuse. Component Based Software (CBS) applications have emerged. The focus of this paper is to provide an overview of the state of the art of Component Based Systems reliability estimation. In this paper, we discussed various approaches in terms of their scope, model, methods, technique and validation scheme. 
This comparison provides insight into determining the direction of future CBS reliability research.", "which Area of use ?", "CBS", 278.0, 281.0], ["Service-oriented architecture (SOA) is a popular paradigm for development of distributed systems by composing the functionality provided by the services exposed on the network. In effect, the services can use functionalities of other services to accomplish their own goals. Although such an architecture provides an elegant solution to simple construction of loosely coupled distributed systems, it also introduces additional concerns. One of the primary concerns in designing a SOA system is the overall system reliability. Since the building blocks are services provided by various third parties, it is often not possible to apply the well-established fault removal techniques during the development phases. Therefore, in order to reach desirable system reliability for SOA systems, the focus shifts towards fault prediction and fault tolerance techniques. In this paper an overview of existing reliability modeling techniques for SOA-based systems is given. Furthermore, we present a model for reliability estimation of a service composition using directed acyclic graphs. The model is applied to the service composition based on the orchestration model. A case study for the proposed model is presented by analyzing a simple Web Service composition scenario.", "which Area of use ?", "SOA", 31.0, 34.0], ["A well-known concept for the design and development of distributed software systems is service-orientation. In SOA, an interacting group of autonomous services realize a dynamic adaptive heterogeneous distributed system. Because of its flexibility, SOA allows an easy adaptation of new business requirements. This also makes the service-orientation idea a suitable concept for development of critical software systems. Reliability is a central parameter for developing critical software systems. SOA brings some additional requirements to the usual reliability models currently being used for standard software solutions. In order to fulfil all requirements and guarantee a certain degree of reliability, a generic reliability management model is needed for SOA-based software systems. This article defines research challenges in this area and gives an approach to solve this problem.", "which Area of use ?", "SOA", 111.0, 114.0], ["In service-oriented architecture (SOA), the entire software system consists of an interacting group of autonomous services. In order to make such a system reliable, it should exhibit guarantees for basic service, data flow, composition of services, and the complete workflow. This paper discusses the important factors of SOA and their role in the entire SOA system reliability. We focus on the factors that have the strongest effect on SOA system reliability. Based on these factors, we used a fuzzy-based approach to estimate the SOA reliability. The proposed approach is implemented on a database obtained for an SOA application, and the results obtained validate and confirm the effectiveness of the proposed fuzzy approach. Furthermore, one can make trade-off analyses between different parameters for reliability.", "which Area of use ?", "SOA", 34.0, 37.0], ["The composition of web-based services is a process that usually requires advanced programming skills and vast knowledge about specific technologies. How to carry out web service composition according to functional sufficiency and performance is widely studied. 
Non-functional characteristics like reliability and security play an important role in the selection of web services composition process. This paper provides a web service reliability model for atomic web service without structural information and the composite web service consisting of atomic web service and its redundant services. It outlines a framework based on client feedback to gather trustworthiness attributes to service registry for reliability evaluation.", "which Area of use ?", "web service", 166.0, 177.0], ["The Quality of Service for web services here mainly refers to the quality aspect of a web service. The QoS for web services is becoming increasingly important to service providers and service requesters due to increasing use of web services. With Web services providing similar functionalities, more emphasis is being placed on how to find the service that best fits the consumer's requirements. In order to find services that best meet their QoS requirements, the service consumers and/or discovery agents need to know both the QoS information for the services and the reliability of this information. In this paper first of all we implement the Reputation-Enhanced Web Services Discovery protocol. After implementation, we enhance the protocol in terms of memory used, time to discovery and response time of a given web service.", "which Area of use ?", "web service", 86.0, 97.0], ["We propose in this work a signature verification system based on decision combination of off-line signatures for managing conflict provided by the SVM classifiers. The system is basically divided into three modules: i) Radon Transform-SVM, ii) Ridgelet Transform-SVM and iii) PCR5 combination rule based on the generalized belief functions of Dezert-Smarandache theory. The proposed framework allows combining the normalized SVM outputs and uses an estimation technique based on the dissonant model of Appriou to compute the belief assignments. Decision making is performed through likelihood ratio. Experiments are conducted on the well known CEDAR database using false rejection and false acceptance criteria. The obtained results show that the proposed combination framework improves the verification accuracy compared to individual SVM classifiers.", "which Offline\nDatabase ?", "CEDAR", 644.0, 649.0], ["In the last few years, several studies have found an inverted-U relationship between per capita income and environmental degradation. This relationship, known as the environmental Kuznets curve (EKC), suggests that environmental degradation increases in the early stages of growth, but it eventually decreases as income exceeds a threshold level. However, this paper investigates the relationship between per capita CO2 emission, economic growth and trade liberalization based on econometric techniques of unit root test, co-integration and a panel data set during the period 1960-1996 for BRICS countries. Data properties were analyzed to determine their stationarity using the LLC, IPS, ADF and PP unit root tests which indicated that the series are I(1). We find a cointegration relationship between per capita CO2 emission, economic growth and trade liberalization by applying the Kao panel cointegration test. 
The evidence indicates that in the long-run trade liberalization has a positive significant impact on CO2 emissions and the impact of trade liberalization on emissions growth depends on the level of income. Our findings suggest that there is a quadratic relationship between real GDP and CO2 emissions for the region as a whole. The estimated long-run coefficients of real GDP and its square satisfy the EKC hypothesis in all of the studied countries. Our estimation shows that the inflection point, or optimal point, of real GDP per capita is about 5269.4 dollars. The results show that on average, sample countries are on the positive side of the inverted U curve. The turning points are very low in some cases and very high in other cases, hence providing poor evidence in support of the EKC hypothesis. Thus, our findings suggest that all BRICS countries need to sacrifice economic growth to decrease their emission levels.", "which Methodology ?", "Cointegration", 768.0, 781.0], ["It has been forecasted by many economists that in the next couple of decades the BRICS economies are going to experience an unprecedented economic growth. This massive economic growth would definitely have a detrimental impact on the environment since these economies, like others, would extract their environmental and natural resources on a larger scale in the process of their economic growth. Therefore, maintaining environmental quality while growing has become a major challenge for these economies. However, the proponents of Environmental Kuznets Curve (EKC) Hypothesis - an inverted U shape relationship between income and emission per capita, suggest BRICS economies need not bother too much about environmental quality while growing because growth would eventually take care of the environment once a certain level of per capita income is achieved. In this backdrop, the present study makes an attempt to estimate EKC type relationship, if any, between income and emission in the context of the BRICS countries for the period 1997 to 2011. Therefore, the study first adopts fixed effect (FE) panel data model to control time constant country specific effects, and then uses Generalized Method of Moments (GMM) approach for dynamic panel data to address endogeneity of income variable and dynamism in emission per capita. Apart from income, we also include variables related to financial sector development and energy utilization to explain emission. The fixed effect model shows a significant EKC type relation between income and emission supporting the previous literature. However, GMM estimates for the dynamic panel model show the relationship between income and emission is actually U shaped with the turning point being out of sample. This out of sample turning point indicates that emission has been growing monotonically with growth in income. Factors like net energy imports and share of industrial output in GDP are found to be significant and to have a detrimental impact on the environment in the dynamic panel model. However, these variables are found to be insignificant in the FE model. Capital account convertibility shows significant and negative impact on the environment irrespective of models used. The monotonically increasing relationship between income and emission suggests the BRICS economies must adopt some efficiency oriented action plan so that they can grow without putting much pressure on the environment. 
These findings can have important policy implications as BRICS countries are mainly depending on these factors for their growth but at the same time they can pose a serious threat to the environment.", "which Methodology ?", "GMM", 1215.0, 1218.0], ["Applying the Johansen cointegration test, this study finds that energy consumption, economic growth, capital and labour are cointegrated. However, this study detects no causality from energy consumption to economic growth using Hsiao's version of the Granger causality method with the aid of cointegration and error correction modelling. Interestingly, it is discerned that causality runs from economic growth to energy consumption both in the short run and in the long run and causality flows from capital to economic growth in the short run.", "which Methodology ?", "Cointegration", 22.0, 35.0], ["This paper investigates the relationship between CO2 emission, real GDP, energy consumption, urbanization and trade openness for 10 selected Central and Eastern European Countries (CEECs), including Albania, Bulgaria, Croatia, Czech Republic, Macedonia, Hungary, Poland, Romania, Slovak Republic and Slovenia for the period of 1991\u20132011. The results show that the environmental Kuznets curve (EKC) hypothesis holds for these countries. The fully modified ordinary least squares (FMOLS) results reveal that a 1% increase in energy consumption leads to a 1.0863% increase in CO2 emissions. Results for the existence and direction of panel Vector Error Correction Model (VECM) Granger causality method show that there is a bidirectional causal relationship between CO2 emissions - real GDP and energy consumption-real GDP as well.", "which Methodology ?", "FMOLS", 484.0, 489.0], ["We analyze the effects of monetary policy on economic activity in the proposed African monetary unions. Findings broadly show that: (1) but for financial efficiency in the EAMZ, monetary policy variables affect output neither in the short-run nor in the long-term; and (2) with the exception of financial size that impacts inflation in the EAMZ in the short-term, monetary policy variables generally have no effect on prices in the short-run. The WAMZ may not use policy instruments to offset adverse shocks to output by pursuing either an expansionary or a contractionary policy, while the EAMZ can do with the \u2018financial allocation efficiency\u2019 instrument. Policy implications are discussed.", "which Methodology ?", "VAR", NaN, NaN], ["This paper reexamines the causality between energy consumption and economic growth with both bivariate and multivariate models by applying the recently developed methods of cointegration and Hsiao's version of the Granger causality to transformed U.S. data for the period 1947-1990. The Phillips-Perron (PP) tests reveal that the original series are not stationary and, therefore, a first differencing is performed to secure stationarity. The study finds no causal linkages between energy consumption and economic growth. Energy and gross national product (GNP) each live a life of its own. The results of this article are consistent with some of the past studies that find no relationship between energy and GNP but are contrary to some other studies that find GNP unidirectionally causes energy consumption. Both the bivariate and trivariate models produce similar results. We also find that there is no causal relationship between energy consumption and industrial production. 
The United States is basically a service-oriented economy and changes in energy consumption can cause little or no changes in GNP. In other words, an implementation of energy conservation policy may not impair economic growth.", "which Methodology ?", "Granger causality", 214.0, 231.0], [" The East African Community\u2019s (EAC) economic integration has gained momentum recently, with the EAC countries aiming to adopt a single currency in 2015. This article evaluates empirically the readiness of the EAC countries for monetary union. First, structural similarity in terms of similarity of production and exports of the EAC countries is measured. Second, the symmetry of shocks is examined with structural vector auto-regression analysis (SVAR). The lack of macroeconomic convergence gives evidence against a hurried transition to a monetary union. Given the divergent macroeconomic outcomes, structural reforms, including closing infrastructure gaps and harmonizing macroeconomic policies that would raise synchronization of business cycles, need to be in place before moving to monetary union. ", "which Methodology ?", "SVAR", 455.0, 459.0], ["Purpose \u2013 A spectre is haunting embryonic African monetary zones: the EMU crisis. This paper assesses real, monetary and fiscal policy convergence within the proposed WAM and EAM zones. The introduction of common currencies in West and East Africa is facing stiff challenges in the timing of monetary convergence, the imperative of central bankers to apply common modeling and forecasting methods of monetary policy transmission, as well as the requirements of common structural and institutional characteristics among candidate states. Design/methodology/approach \u2013 In the analysis: monetary policy targets inflation and financial dynamics of depth, efficiency, activity and size; real sector policy targets economic performance in terms of GDP growth at macro and micro levels; while fiscal policy targets debt-to-GDP and deficit-to-GDP ratios. A dynamic panel GMM estimation with data from different non-overlapping intervals is employed. The implied rate of convergence and the time required to achieve full (100%) convergence are then computed from the estimations. Findings \u2013 Findings suggest an overwhelming lack of convergence: (1) initial conditions for financial development are different across countries; (2) fundamental characteristics such as common monetary policy initiatives and IMF-backed financial reform programs are implemented differently across countries; (3) there is remarkable evidence of cross-country variations in structural characteristics of macroeconomic performance; (4) institutional cross-country differences could also be responsible for the deficiency in convergence within the potential monetary zones; (5) absence of fiscal policy convergence and no potential for eliminating idiosyncratic fiscal shocks due to business cycle incoherence. Practical implications \u2013 As a policy implication, heterogeneous structural and institutional characteristics across countries are giving rise to different levels and patterns of financial intermediary development. Thus, member states should work towards harmonizing cross-country differences in structural and institutional characteristics that hamper the effectiveness of convergence in monetary, real and fiscal policies. This could be done by stringently monitoring the implementation of existing common initiatives and/or the adoption of new reform programs. 
Originality/value \u2013 It is one of the few attempts to investigate the issue of convergence within the proposed WAM and EAM unions.", "which Methodology ?", "GMM", 863.0, 866.0], ["Although facial feature detection from 2D images is a well-studied field, there is a lack of real-time methods that estimate feature points even on low quality images. Here we propose conditional regression forest for this task. While regression forests learn the relations between facial image patches and the location of feature points from the entire set of faces, conditional regression forests learn the relations conditional to global face properties. In our experiments, we use the head pose as a global property and demonstrate that conditional regression forests outperform regression forests for facial feature detection. We have evaluated the method on the challenging Labeled Faces in the Wild [20] database where close-to-human accuracy is achieved while processing images in real-time.", "which Methods ?", "Conditional regression forest", 184.0, 213.0], ["Abstract Introduction Since December 29, 2019 a pandemic of new novel coronavirus-infected pneumonia named COVID-19 has started from Wuhan, China, and has led to 254 996 confirmed cases until midday March 20, 2020. Sporadic cases have been imported worldwide; in Algeria, the first case, reported on February 25, 2020, was imported from Italy, and then the epidemic has spread to other parts of the country very quickly with 139 confirmed cases until March 21, 2020. Methods It is crucial to estimate the growth in case numbers in the early stages of the outbreak; to this end, we have implemented the Alg-COVID-19 Model which allows prediction of the incidence and the reproduction number R0 in the coming months in order to help decision makers. The Alg-COVID-19 Model's initial equation (1) estimates the cumulative cases at prediction time t using two parameters: the reproduction number R0 and the serial interval SI. Results We found R0=2.55 based on the actual incidence over the first 25 days, using the serial interval SI=4.4 and the prediction time t=26. The estimated herd immunity is HI=61%. Also, the COVID-19 incidence predicted with the Alg-COVID-19 Model fits closely the actual incidence during the first 26 days of the epidemic in Algeria (Fig. 1.A), which allows us to use it. According to the Alg-COVID-19 Model, the number of cases will exceed 5000 on the 42nd day (April 7th) and it will double to 10000 on the 46th day of the epidemic (April 11th); thus, the exponential phase will begin (Table 1; Fig. 1.B) and increase continuously until reaching a herd immunity of 61% unless serious preventive measures are considered. Discussion This model is valid only when the majority of the population is vulnerable to COVID-19 infection; however, it can be updated to fit new parameter values.", "which Methods ?", "Alg-COVID-19 Model", 593.0, 611.0], ["Opencast mining has huge effects on water pollution for several reasons. Fresh water is heavily used to process ore. Mine effluent and seepage from various mine-related areas, especially the tailing reservoir, increase water pollution immensely. Monitoring and classification of mine water bodies, which have such environmental impacts, have several research challenges. In the past, land cover classification of a mining region detects mine and non mine water bodies simultaneously. Water bodies inside surface mines have different characteristics from other water bodies. 
In this paper, a novel method has been proposed to differentiate mine and non mine water bodies over the seasons, which does not require setting a threshold value manually. Here, water body regions are detected over the entire scene by any classical water body detection algorithm. Further, each water body is treated independently, and reflectance properties of a bounding box over each water body region are analyzed. In the past, there were efforts to use clay mineral ratio (CLM) to separate mine and non mine water bodies. In this paper, it has been observed that iron oxide ratio (IO) can also separate mine and non mine water bodies. The accuracy is observed to increase if the difference of CLM and IO is used for segregation. The proposed algorithm separates these regions by taking into account seasonal variations. Means of differences of CLM and IO of each bounding box have been clustered using the K-means clustering algorithm. The automation provides precision and recall for mine and non mine water bodies as [77.83%, 76.55%] and [75.18%, 75.84%], respectively, using ground truths from high-definition Google Earth images.", "which Methods ?", " iron oxide ratio (IO)", NaN, NaN], ["We present an algorithm for simultaneous face detection, landmarks localization, pose estimation and gender recognition using deep convolutional neural networks (CNN). The proposed method, called HyperFace, fuses the intermediate layers of a deep CNN using a separate CNN followed by a multi-task learning algorithm that operates on the fused features. It exploits the synergy among the tasks which boosts up their individual performances. Additionally, we propose two variants of HyperFace: (1) HyperFace-ResNet that builds on the ResNet-101 model and achieves significant improvement in performance, and (2) Fast-HyperFace that uses a high recall fast face detector for generating region proposals to improve the speed of the algorithm. Extensive experiments show that the proposed models are able to capture both global and local information in faces and perform significantly better than many competitive algorithms for each of these four tasks.", "which Methods ?", "HyperFace ", 615.0, 625.0], ["Background: Estimating key infectious disease parameters from the COVID-19 outbreak is quintessential for modelling studies and guiding intervention strategies. Whereas different estimates for the incubation period distribution and the serial interval distribution have been reported, estimates of the generation interval for COVID-19 have not been provided. Methods: We used outbreak data from clusters in Singapore and Tianjin, China to estimate the generation interval from symptom onset data while acknowledging uncertainty about the incubation period distribution and the underlying transmission network. From those estimates we obtained the proportions of pre-symptomatic transmission and reproduction numbers. Results: The mean generation interval was 5.20 (95%CI 3.78-6.78) days for Singapore and 3.95 (95%CI 3.01-4.91) days for Tianjin, China when relying on a previously reported incubation period with mean 5.2 and SD 2.8 days. The proportion of pre-symptomatic transmission was 48% (95%CI 32-67%) for Singapore and 62% (95%CI 50-76%) for Tianjin, China. Estimates of the reproduction number based on the generation interval distribution were slightly higher than those based on the serial interval distribution. 
Conclusions: Estimating generation and serial interval distributions from outbreak data requires careful investigation of the underlying transmission network. Detailed contact tracing information is essential for correctly estimating these quantities.", "which Methods ?", "serial interval", 236.0, 251.0], ["Abstract. The integration of Landsat 8 OLI and ASTER data is an efficient tool for interpreting lead\u2013zinc mineralization in the Huoshaoyun Pb\u2013Zn mining region located in the west Kunlun mountains at high altitude and very rugged terrain, where traditional geological work becomes limited and time-consuming. This task was accomplished by using band ratios (BRs), principal component analysis, and spectral matched filtering methods. It is concluded that some BR color composites and principal components of each image contain useful information for lithological mapping. The SMF technique is useful for detecting lead\u2013zinc mineralization zones, and the results could be verified by handheld portable X-ray fluorescence analysis. Therefore, the proposed methodology shows strong potential of Landsat 8 OLI and ASTER data in lithological mapping and lead\u2013zinc mineralization zone extraction in carbonate stratum.", "which Methods ?", " Spectral Matched Filtering", 396.0, 423.0], ["The outbreak of the novel coronavirus disease, COVID-19, originating from Wuhan, China in early December, has infected more than 70,000 people in China and other countries and has caused more than 2,000 deaths. As the disease continues to spread, the biomedical community urgently began identifying effective approaches to prevent further outbreaks. Through rigorous epidemiological analysis, we characterized the fast transmission of COVID-19 with a basic reproductive number of 5.6 and proved a sole zoonotic source to originate in Wuhan. No changes in transmission have been noted across generations. By evaluating different control strategies through predictive modeling and Monte Carlo simulations, a comprehensive quarantine in hospitals and quarantine stations has been found to be the most effective approach. Government action to immediately enforce this quarantine is highly recommended.", "which Methods ?", "Monte carlo simulation", NaN, NaN], ["Human faces captured in real-world conditions present large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food). Current face landmark estimation approaches struggle under such conditions since they fail to provide a principled way of handling outliers. We propose a novel method, called Robust Cascaded Pose Regression (RCPR), which reduces exposure to outliers by detecting occlusions explicitly and using robust shape-indexed features. We show that RCPR improves on previous landmark estimation methods on three popular face datasets (LFPW, LFW and HELEN). We further explore RCPR's performance by introducing a novel face dataset focused on occlusion, composed of 1,007 faces presenting a wide range of occlusion patterns. RCPR reduces failure cases by half on all four datasets, at the same time as it detects face occlusions with an 80/40% precision/recall.", "which Methods ?", "RCPR ", 563.0, 568.0], ["Human faces captured in real-world conditions present large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food). 
Current face landmark estimation approaches struggle under such conditions since they fail to provide a principled way of handling outliers. We propose a novel method, called Robust Cascaded Pose Regression (RCPR), which reduces exposure to outliers by detecting occlusions explicitly and using robust shape-indexed features. We show that RCPR improves on previous landmark estimation methods on three popular face datasets (LFPW, LFW and HELEN). We further explore RCPR's performance by introducing a novel face dataset focused on occlusion, composed of 1,007 faces presenting a wide range of occlusion patterns. RCPR reduces failure cases by half on all four datasets, at the same time as it detects face occlusions with an 80/40% precision/recall.", "which Methods ?", "RCPR", 433.0, 437.0], ["In this paper, 2D cascaded AdaBoost, a novel classifier designing framework, is presented and applied to eye localization. By the term "2D", we mean that in our method there are two cascade classifiers in two directions: The first one is a cascade designed by bootstrapping the positive samples, and the second one, as the component classifiers of the first one, is cascaded by bootstrapping the negative samples. The advantages of the 2D structure include: (1) it greatly facilitates the classifier designing on huge-scale training sets; (2) it can easily deal with the significant variations within the positive (or negative) samples; (3) both the training and testing procedures are more efficient. The proposed structure is applied to eye localization and evaluated on four public face databases; extensive experimental results verified the effectiveness, efficiency, and robustness of the proposed method.", "which Methods ?", "2D Cascaded AdaBoost", 15.0, 35.0], ["Many computer vision problems (e.g., camera calibration, image alignment, structure from motion) are solved through a nonlinear optimization method. It is generally accepted that 2nd order descent methods are the most robust, fast and reliable approaches for nonlinear optimization of a general smooth function. However, in the context of computer vision, 2nd order descent methods have two main drawbacks: (1) The function might not be analytically differentiable and numerical approximations are impractical. (2) The Hessian might be large and not positive definite. To address these issues, this paper proposes a Supervised Descent Method (SDM) for minimizing a Non-linear Least Squares (NLS) function. During training, the SDM learns a sequence of descent directions that minimizes the mean of NLS functions sampled at different points. In testing, SDM minimizes the NLS objective using the learned descent directions without computing the Jacobian or the Hessian. We illustrate the benefits of our approach in synthetic and real examples, and show how SDM achieves state-of-the-art performance in the problem of facial feature detection. The code is available at www.humansensing.cs.cmu.edu/intraface.", "which Methods ?", "SDM ", 727.0, 731.0], ["Abstract. Lithological mapping is a fundamental step in various mineral prospecting studies because it forms the basis of the interpretation and validation of retrieved results. Therefore, this study exploited the multispectral Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Landsat 8 Operational Land Imager (OLI) data in order to map lithological units in the Bas Dr\u00e2a inlier, at the Moroccan Anti Atlas. 
This task was completed by using principal component analysis (PCA), band ratios (BR), and support vector machine (SVM) classification. Overall accuracy and the kappa coefficient of SVM based on ground truth in addition to the results of PCA and BR show an excellent correlation with the existing geological map of the study area. Consequently, the methodology proposed demonstrates a high potential of ASTER and Landsat 8 OLI data in lithological units discrimination.", "which Methods ?", "Principal Component Analysis (PCA)", NaN, NaN], ["Background: Estimating key infectious disease parameters from the COVID-19 outbreak is quintessential for modelling studies and guiding intervention strategies. Whereas different estimates for the incubation period distribution and the serial interval distribution have been reported, estimates of the generation interval for COVID-19 have not been provided. Methods: We used outbreak data from clusters in Singapore and Tianjin, China to estimate the generation interval from symptom onset data while acknowledging uncertainty about the incubation period distribution and the underlying transmission network. From those estimates we obtained the proportions of pre-symptomatic transmission and reproduction numbers. Results: The mean generation interval was 5.20 (95%CI 3.78-6.78) days for Singapore and 3.95 (95%CI 3.01-4.91) days for Tianjin, China when relying on a previously reported incubation period with mean 5.2 and SD 2.8 days. The proportion of pre-symptomatic transmission was 48% (95%CI 32-67%) for Singapore and 62% (95%CI 50-76%) for Tianjin, China. Estimates of the reproduction number based on the generation interval distribution were slightly higher than those based on the serial interval distribution. Conclusions: Estimating generation and serial interval distributions from outbreak data requires careful investigation of the underlying transmission network. Detailed contact tracing information is essential for correctly estimating these quantities.", "which Methods ?", "generation interval", 302.0, 321.0], ["Abstract. Lithological mapping is a fundamental step in various mineral prospecting studies because it forms the basis of the interpretation and validation of retrieved results. Therefore, this study exploited the multispectral Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Landsat 8 Operational Land Imager (OLI) data in order to map lithological units in the Bas Dr\u00e2a inlier, at the Moroccan Anti Atlas. This task was completed by using principal component analysis (PCA), band ratios (BR), and support vector machine (SVM) classification. Overall accuracy and the kappa coefficient of SVM based on ground truth in addition to the results of PCA and BR show an excellent correlation with the existing geological map of the study area. Consequently, the methodology proposed demonstrates a high potential of ASTER and Landsat 8 OLI data in lithological units discrimination.", "which Methods ?", " Kappa coefficient", 595.0, 613.0], ["Opencast mining has huge effects on water pollution for several reasons. Fresh water is heavily used to process ore. Mine effluent and seepage from various mine-related areas, especially the tailing reservoir, increase water pollution immensely. Monitoring and classification of mine water bodies, which have such environmental impacts, have several research challenges. In the past, land cover classification of a mining region detects mine and non mine water bodies simultaneously. 
Water bodies inside surface mines have different characteristics from other water bodies. In this paper, a novel method has been proposed to differentiate mine and non mine water bodies over the seasons, which does not require setting a threshold value manually. Here, water body regions are detected over the entire scene by any classical water body detection algorithm. Further, each water body is treated independently, and reflectance properties of a bounding box over each water body region are analyzed. In the past, there were efforts to use clay mineral ratio (CLM) to separate mine and non mine water bodies. In this paper, it has been observed that iron oxide ratio (IO) can also separate mine and non mine water bodies. The accuracy is observed to increase if the difference of CLM and IO is used for segregation. The proposed algorithm separates these regions by taking into account seasonal variations. Means of differences of CLM and IO of each bounding box have been clustered using the K-means clustering algorithm. The automation provides precision and recall for mine and non mine water bodies as [77.83%, 76.55%] and [75.18%, 75.84%], respectively, using ground truths from high-definition Google Earth images.", "which Methods ?", " Clay mineral ratio (CLM)", NaN, NaN], ["We present a novel discriminative regression based approach for the Constrained Local Models (CLMs) framework, referred to as the Discriminative Response Map Fitting (DRMF) method, which shows impressive performance in the generic face fitting scenario. The motivation behind this approach is that, unlike the holistic texture based features used in the discriminative AAM approaches, the response map can be represented by a small set of parameters and these parameters can be very efficiently used for reconstructing unseen response maps. Furthermore, we show that by adopting very simple off-the-shelf regression techniques, it is possible to learn robust functions from response maps to the shape parameter updates. The experiments, conducted on the Multi-PIE, XM2VTS and LFPW databases, show that the proposed DRMF method outperforms state-of-the-art algorithms for the task of generic face fitting. Moreover, the DRMF method is computationally very efficient and is real-time capable. The current MATLAB implementation takes 1 second per image. To facilitate future comparisons, we release the MATLAB code and the pre-trained models for research purposes.", "which Methods ?", "DRMF", 167.0, 171.0], ["This paper investigates the causal relationship between energy consumption and economic growth and energy consumption and employment in Pakistan. By applying techniques of co-integration and Hsiao's version of Granger causality, the results indicate that economic growth causes total energy consumption. Economic growth also leads to growth in petroleum consumption, while on the other hand, neither economic growth nor gas consumption affects the other. However, in the power sector it has been found that electricity consumption leads to economic growth without feedback. The implications of the study are that energy conservation policy regarding petroleum consumption would not lead to any side-effects on economic growth in Pakistan. However, an energy growth policy in the case of gas and electricity consumption should be adopted in such a way that it stimulates growth in the economy and thus expands employment opportunities. 
The relationship between energy consumption and economic growth is now well established in the literature, yet the direction of causation of this relationship remains controversial. That is, whether economic growth leads to energy consumption or that energy consumption is the engine of economic growth. The direction of causality has significant policy implications. Empirically it has been tried to find the direction of causality between energy consumption and economic activities for the developing as well as for the developed countries employing the Granger or Sims techniques. However, results are mixed. The seminal paper by Kraft and Kraft (1978), supported the unidirectional causality from GNP growth to energy consumption in the case of the United States of America for the period 1947-1974. Erol, and Yu, (1987), tested data for six industrialized countries, and found no significant causal relationship between energy consumption and GDP growth and, energy and employment. Yu, et. al. (1988), found no relationship between energy and GNP, and", "which Countries ?", "Pakistan", 136.0, 144.0], ["Using Hsiao's version of Granger causality and cointegration, this study finds that employment (EP), energy consumption (EC), Real GNP (RGNP) and capital are not cointegrated. EC is found to negatively cause EP whereas EP and RNGP are found to directly cause EC. It is also found that capital negatively Granger-causes EP while RGNP and EP are found to strongly influence EC. The findings of this study seem to suggest that a policy of energy conservation may not be detrimental to a country such as Japan. In addition, the finding that energy and capital are substitutes implies that energy conservation will promote capital formation, given output constant.", "which Countries ?", "Japan", 500.0, 505.0], ["Purpose - This study attempts to re-investigate the electricity consumption function for Malaysia through the cointegration and causality analyses over the period 1970 to 2005. Design/methodology/approach - The study employed the bounds-testing procedure for cointegration to examine the potential long-run relationship, while an autoregressive distributed lag model is used to derive the short- and long-run coefficients. The Granger causality test is applied to determine the causality direction between electricity consumption and its determinants. Findings - New evidence is found in this study: first, electricity consumption, income, foreign direct investment, and population in Malaysia are cointegrated. Second, the influx of foreign direct investment and population growth are positively related to electricity consumption in Malaysia and the Granger causality evidence indicates that electricity consumption, income, and foreign direct investment are of bilateral causality. Originality/value - The estimated multivariate electricity consumption function for Malaysia implies that Malaysia is an energy-dependent country; thus energy-saving policies may have an inverse effect on current and also future economic development in Malaysia.", "which Countries ?", "Malaysia", 89.0, 97.0], ["This study examines the processes of the monetary union of the Economic Community of West African States (ECOWAS). It takes a critical look at the convergence criteria and the various conditions under which they are to be met. Using the panel least square technique an estimate of the beta convergence was made for the period 2000-2008. 
The findings show that nearly all the explanatory variables have indirect effects on the income growth rate and that there tends to be convergence in income over time. The speed of adjustment estimated is 0.2% per year and the half-life is -346.92. Thus the economies can make up for half of the distance that separates them from their stationary state. From the findings, it was concluded that a well integrated economy could further the achievement of steady growth in these countries in the long run.", "which Countries ?", "ECOWAS", 106.0, 112.0], ["Currency convertibility and monetary integration activities of the Economic Community of West African States (ECOWAS) are directed at addressing the problems of multiple currencies and exchange rate changes that are perceived as stumbling blocks to regional integration. A real exchange rate (RER) variability model shows that ECOWAS is closer to a monetary union now than before. As expected, the implementation of structural adjustment programmes (SAPs) by various governments in the subregion has brought about a reasonable level of convergence. However, wide differences still exist between RER shocks facing CFA zone and non-CFA zone West African countries. Further convergence in economic policy and alternatives to dependence on revenues from taxes on international transactions are required for a stable region-wide monetary union in West Africa.", "which Countries ?", "ECOWAS", 110.0, 116.0], ["In this paper we aim to answer the following two questions: 1) has the Common Monetary Area in Southern Africa (henceforth CMA) ever been an optimal currency area (OCA)? 2) What are the costs and benefits of the CMA for its participating countries? In order to answer these questions, we carry out a two-step econometric exercise based on the theory of generalised purchasing power parity (G-PPP). The econometric evidence shows that the CMA (but also Botswana as a de facto member) form an OCA given the existence of common long-run trends in their bilateral real exchange rates. Second, we also test that in the case of the CMA and Botswana the smoothness of the operation of the common currency area \u2014 measured through the degree of relative price correlation \u2014 depends on a variety of factors. These factors signal both the advantages and disadvantages of joining a monetary union. On the one hand, the more open and more similarly diversified the economies are, the higher the benefits they ... Ce Document de travail s'efforce de repondre a deux questions : 1) la zone monetaire commune de l'Afrique australe (Common Monetary Area - CMA) a-t-elle vraiment reussi a devenir une zone monetaire optimale ? 2) quels sont les couts et les avantages de la CMA pour les pays participants ? Nous avons effectue un exercice econometrique en deux etapes base sur la theorie des parites de pouvoir d'achat generalisees. D'apres les resultats econometriques, la CMA (avec le Botswana comme membre de facto) est effectivement une zone monetaire optimale etant donne les evolutions communes sur le long terme de leurs taux de change bilateraux. Nous avons egalement mis en evidence que le bon fonctionnement de l'union monetaire \u2014 mesure par le degre de correlation des prix relatifs \u2014 depend de plusieurs facteurs. Ces derniers revelent a la fois les couts et les avantages de l'appartenance a une union monetaire. 
D'un cote, plus les economies sont ouvertes et diversifiees de facon comparable, plus ...", "which Countries ?", "Botswana", 452.0, 460.0], ["This paper compares different nominal anchors to promote internal and external competitiveness in the case of a fixed exchange rate regime for the future single regional currency of the Economic Community of the West African States (ECOWAS). We use counterfactual analyses and estimate a model of dependent economy for small commodity exporting countries. We consider four foreign anchor currencies: the US dollar, the euro, the yen and the yuan. Our simulations show little support for a dominant peg in the ECOWAS area if they pursue several goals: maximizing the export revenues, minimizing their variability, stabilizing them and minimizing the real exchange rate misalignments from the fundamental value.", "which Countries ?", "ECOWAS", 233.0, 239.0], ["The East African Community (EAC) has fast-tracked its plans to create a single currency for the five countries making up the region, and hopes to conclude negotiations on a monetary union protocol by the end of 2012. While the benefits of lower transactions costs from a common currency may be significant, countries will also lose the ability to use monetary policy to respond to different shocks. Evidence presented shows that the countries differ in a number of respects, facing asymmetric shocks and different production structures. Countries have had difficulty meeting convergence criteria, most seriously as concerns fiscal deficits. Preparation for monetary union will require effective institutions for macroeconomic surveillance and enforcing fiscal discipline, and euro zone experience indicates that these institutions will be difficult to design and take a considerable time to become effective. This suggests that a timetable for monetary union in the EAC should allow for a substantial initial period of institution building. In order to have some visible evidence of the commitment to monetary union, in the meantime the EAC may want to consider introducing a common basket currency in the form of notes and coin, to circulate in parallel with national currencies.", "which Countries ?", "East African Community", 4.0, 26.0], ["This paper uses the business cycle synchronization criteria of the theory of optimum currency area (OCA) to examine the feasibility of the East African Community (EAC) as a monetary union. We also investigate whether the degree of business cycle synchronization has increased after the 1999 EAC Treaty. We use an unobserved component model to measure business cycle synchronization as the proportion of structural shocks that are common across different countries, and a time-varying parameter model to examine the dynamics of synchronization over time. We find that although the degree of synchronization has increased since 2000 when the EAC Treaty came into force, the proportion of shocks that is common across different countries is still small implying weak synchronization. This evidence casts doubt on the feasibility of a monetary union for the EAC as scheduled by 2012.", "which Countries ?", "East African Community", 139.0, 161.0], ["Abstract The Paper re\u2010examined co\u2010integration and causality relationship between energy consumption and economic growth for Nigeria using data covering the period 1970 to 2005.
Unlike previous related study for Nigeria, different proxies of energy consumption (electricity demand, domestic crude oil consumption and gas utilization) were used for the estimation. It also included government activities proxied by health expenditure and monetary policy proxied by broad money supply though; emphasis was on energy consumption. Using the Johansen co\u2010integration technique, it was found that there existed a long run relationship among the series. It was also found that all the variables used for the study were I(1). Furthermore, unidirectional causality was established between electricity consumption and economic growth, domestic crude oil production and economic growth as well as between gas utilization and economic growth in Nigeria. While causality runs from electricity consumption to economic growth as well a...", "which Countries ?", "Nigeria", 126.0, 133.0], ["We examine prospects for a monetary union in the East African Community (EAC) by developing a stylized model of policymakers' decision problem that allows for uncertain benefits derived from monetary,financial and fiscal stability, and then calibrating the model for the EAC for the period 2003-2010. When policymakers properly allow for uncertainty, none of the countries wants to pursue a monetary union based on either monetary or financial stability grounds, and only Rwanda might favor it on fiscal stability grounds; we argue that robust institutional arrangements assuring substantial improvements in monetary, financial and fiscal stability are needed to compensate. (This abstract was borrowed from another version of this item.)", "which Countries ?", "East African Community", 49.0, 71.0], [" The East African Community\u2019s (EAC) economic integration has gained momentum recently, with the EAC countries aiming to adopt a single currency in 2015. This article evaluates empirically the readiness of the EAC countries for monetary union. First, structural similarity in terms of similarity of production and exports of the EAC countries is measured. Second, the symmetry of shocks is examined with structural vector auto-regression analysis (SVAR). The lack of macroeconomic convergence gives evidence against a hurried transition to a monetary union. Given the divergent macroeconomic outcomes, structural reforms, including closing infrastructure gaps and harmonizing macroeconomic policies that would raise synchronization of business cycles, need to be in place before moving to monetary union. ", "which Countries ?", "East African Community", 13.0, 35.0], ["OBJECTIVE An estimated 293,300 healthcare-associated cases of Clostridium difficile infection (CDI) occur annually in the United States. To date, research has focused on developing risk prediction models for CDI that work well across institutions. However, this one-size-fits-all approach ignores important hospital-specific factors. We focus on a generalizable method for building facility-specific models. We demonstrate the applicability of the approach using electronic health records (EHR) from the University of Michigan Hospitals (UM) and the Massachusetts General Hospital (MGH). METHODS We utilized EHR data from 191,014 adult admissions to UM and 65,718 adult admissions to MGH. We extracted patient demographics, admission details, patient history, and daily hospitalization details, resulting in 4,836 features from patients at UM and 1,837 from patients at MGH. 
We used L2 regularized logistic regression to learn the models, and we measured the discriminative performance of the models on held-out data from each hospital. RESULTS Using the UM and MGH test data, the models achieved area under the receiver operating characteristic curve (AUROC) values of 0.82 (95% confidence interval [CI], 0.80\u20130.84) and 0.75 ( 95% CI, 0.73\u20130.78), respectively. Some predictive factors were shared between the 2 models, but many of the top predictive factors differed between facilities. CONCLUSION A data-driven approach to building models for estimating daily patient risk for CDI was used to build institution-specific models at 2 large hospitals with different patient populations and EHR systems. In contrast to traditional approaches that focus on developing models that apply across hospitals, our generalizable approach yields risk-stratification models tailored to an institution. These hospital-specific models allow for earlier and more accurate identification of high-risk patients and better targeting of infection prevention strategies. Infect Control Hosp Epidemiol 2018;39:425\u2013433", "which Features ?", "EHR", 490.0, 493.0], ["The National Surgical Quality Improvement Project (NSQIP) is widely recognized as \u201cthe best in the nation\u201d surgical quality improvement resource in the United States. In particular, it rigorously defines postoperative morbidity outcomes, including surgical adverse events occurring within 30 days of surgery. Due to its manual yet expensive construction process, the NSQIP registry is of exceptionally high quality, but its high cost remains a significant bottleneck to NSQIP\u2019s wider dissemination. In this work, we propose an automated surgical adverse events detection tool, aimed at accelerating the process of extracting postoperative outcomes from medical charts. As a prototype system, we combined local EHR data with the NSQIP gold standard outcomes and developed machine learned models to retrospectively detect Surgical Site Infections (SSI), a particular family of adverse events that NSQIP extracts. The built models have high specificity (from 0.788 to 0.988) as well as very high negative predictive values (>0.98), reliably eliminating the vast majority of patients without SSI, thereby significantly reducing the NSQIP extractors\u2019 burden.", "which Features ?", "EHR", 710.0, 713.0], ["Objective. A major challenge in treating Clostridium difficile infection (CDI) is relapse. Many new therapies are being developed to help prevent this outcome. We sought to establish risk factors for relapse and determine whether fields available in an electronic health record (EHR) could be used to identify high-risk patients for targeted relapse prevention strategies. Design. Retrospective cohort study. Setting. Large clinical data warehouse at a 4-hospital healthcare organization. Participants. Data were gathered from January 2006 through October 2010. Subjects were all inpatient episodes of a positive C. difficile test where patients were available for 56 days of follow-up. Methods. Relapse was defined as another positive test between 15 and 56 days after the initial test. Multivariable regression was performed to identify factors independently associated with CDI relapse. Results. Eight hundred twenty-nine episodes met eligibility criteria, and 198 resulted in relapse (23.9%). 
In the final multivariable analysis, risk of relapse was associated with age (odds ratio [OR], 1.02 per year [95% confidence interval (CI), 1.01\u20131.03]), fluoroquinolone exposure in the 90 days before diagnosis (OR, 1.58 [95% CI, 1.11\u20132.26]), intensive care unit stay in the 30 days before diagnosis (OR, 0.47 [95% CI, 0.30\u20130.75]), cephalosporin (OR, 1.80 [95% CI, 1.19\u20132.71]), proton pump inhibitor (PPI; OR, 1.55 [95% CI, 1.05\u20132.29]), and metronidazole exposure after diagnosis (OR, 2.74 [95% CI, 1.64\u20134.60]). A prediction model tuned to ensure a 50% probability of relapse would flag 14.6% of CDI episodes. Conclusions. Data from a comprehensive EHR can be used to identify patients at high risk for CDI relapse. Major risk factors include antibiotic and PPI exposure.", "which Features ?", "clinical data", 424.0, 437.0], ["This paper presents two-stage bi-objective stochastic programming models for disaster relief operations. We consider a problem that occurs in the aftermath of a natural disaster: a transportation system for supplying disaster victims with relief goods must be established. We propose bi-objective optimization models with a monetary objective and humanitarian objective. Uncertainty in the accessibility of the road network is modeled by a discrete set of scenarios. The key features of our model are the determination of locations for intermediate depots and acquisition of vehicles. Several model variants are considered. First, the operating budget can be fixed at the first stage for all possible scenarios or determined for each scenario at the second stage. Second, the assignment of vehicles to a depot can be either fixed or free. Third, we compare a heterogeneous vehicle fleet to a homogeneous fleet. We study the impact of the variants on the solutions. The set of Pareto-optimal solutions is computed by applying the adaptive Epsilon-constraint method. We solve the deterministic equivalents of the two-stage stochastic programs using the MIP-solver CPLEX.", "which Features ?", "Several model variants", 585.0, 607.0], ["We used an electron probe microanalyzer (EPMA) to determine the migratory environmental history of the catadromous grey mullet Mugil cephalus from the Sr:Ca ratios in otoliths of 10 newly recruited juveniles collected from estuaries and 30 adults collected from estuaries, nearshore (coastal waters and bay) and offshore, in the adjacent waters off Taiwan. Mean (\u00b1SD) Sr:Ca ratios at the edges of adult otoliths increased significantly from 6.5 \u00b1 0.9 \u00d7 10^-3 in estuaries and nearshore waters to 8.9 \u00b1 1.4 \u00d7 10^-3 in offshore waters (p < 0.01), corresponding to increasing ambient salinity from estuaries and nearshore to offshore waters. The mean Sr:Ca ratios decreased significantly from the core (11.2 \u00b1 1.2 \u00d7 10^-3) to the otolith edge (6.2 \u00b1 1.4 \u00d7 10^-3) in juvenile otoliths (p < 0.001). The mullet generally spawned offshore and recruited to the estuary at the juvenile stage; therefore, these data support the use of Sr:Ca ratios in otoliths to reconstruct the past salinity history of the mullet. A life-history scan of the otolith Sr:Ca ratios indicated that the migratory environmental history of the mullet beyond the juvenile stage consists of 2 types. In Type 1 mullet, Sr:Ca ratios range between 4.0 \u00d7 10^-3 and 13.9 \u00d7 10^-3, indicating that they migrated between estuary and offshore waters but rarely entered the freshwater habitat.
In Type 2 mullet, the Sr:Ca ratios decreased to a minimum value of 0.4 \u00d7 10^-3, indicating that the mullet migrated to a freshwater habitat. Most mullet beyond the juvenile stage migrated from estuary to offshore waters, but a few mullet less than 2 yr old may have migrated into a freshwater habitat. Most mullet collected nearshore and offshore were of Type 1, while those collected from the estuaries were a mixture of Types 1 and 2. The mullet spawning stock consisted mainly of Type 1 fish. The growth rates of the mullet were similar for Types 1 and 2. The migratory patterns of the mullet were more divergent than indicated by previous reports of their catadromous behavior.", "which Analytical method ?", "EPMA", 41.0, 45.0], ["Strontium isotope and Sr/Ca ratios measured in situ by ion microprobe along radial transects of otoliths of juvenile chinook salmon (Oncorhynchus tshawytscha) vary between watersheds with contrasting geology. Otoliths from ocean-type chinook from Skagit River estuary, Washington, had prehatch regions with 87Sr/86Sr ratios of ~0.709, suggesting a maternally inherited marine signature, extensive fresh water growth zones with 87Sr/86Sr ratios similar to those of the Skagit River at ~0.705, and marine-like 87Sr/86Sr ratios near their edges. Otoliths from stream-type chinook from central Idaho had prehatch 87Sr/86Sr ratios \u22650.711, indicating that a maternal marine Sr isotopic signature is not preserved after the ~1000- to 1400-km migration from the Pacific Ocean. 87Sr/86Sr ratios in the outer portions of otoliths from these Idaho juveniles were similar to those of their respective streams (~0.708\u20130.722). For Skagit juveniles, fresh water growth was marked by small decreases in otolith Sr/Ca, with increases in ...", "which Analytical method ?", "Ion microprobe", 55.0, 69.0], ["In recent times, social media has been increasingly playing a critical role in response actions following natural catastrophes. From facilitating the recruitment of volunteers during an earthquake to supporting emotional recovery after a hurricane, social media has demonstrated its power in serving as an effective disaster response platform. Based on a case study of Thailand flooding in 2011 \u2013 one of the worst flooding disasters in more than 50 years that left the country severely impaired \u2013 this paper provides an in\u2010depth understanding on the emergent roles of social media in disaster response. Employing the perspective of boundary object, we shed light on how different boundary spanning competences of social media emerged in practice to facilitate cross\u2010boundary response actions during a disaster, with an aim to promote further research in this area. We conclude this paper with guidelines for response agencies and impacted communities to deploy social media for future disaster response.", "which Technology ?", "social media", 17.0, 29.0], ["Two weeks after the Great Tohoku earthquake followed by the devastating tsunami, we have sent open-ended questionnaires to a randomly selected sample of Twitter users and also analysed the tweets sent from the disaster-hit areas. We found that people in directly affected areas tend to tweet about their unsafe and uncertain situation while people in remote areas post messages to let their followers know that they are safe. Our analysis of the open-ended answers has revealed that unreliable retweets (RTs) on Twitter was the biggest problem the users have faced during the disaster.
Some of the solutions offered by the respondents included introducing official hash tags, limiting the number of RTs for each hash tag and adding features that allow users to trace information by maintaining anonymity.", "which Technology ?", "Twitter", 153.0, 160.0], ["Purpose \u2013 With the explosion of the Deepwater Horizon oil well in the Gulf of Mexico on April 20, 2010 and until the well was officially \u201ckilled\u201d on September 19, 2010, British Petroleum (BP) did not merely experience a crisis but a five\u2010month marathon of sustained, multi\u2010media engagement. Whereas traditional public relations theory teaches us that an organization should synchronize its messages across channels, there are no models to understand how an organization may strategically coordinate public relations messaging across traditional and social media platforms. This is especially important in the new media environment where social media (e.g. Facebook and Twitter) are increasingly being used in concert with traditional public relations tools (e.g. press releases) as a part of an organization's stakeholder engagement strategy. This paper seeks to address these issues.Design/methodology/approach \u2013 The present study is a content analysis examining all of BP's press releases (N=126), its Facebook posts (...", "which Technology ?", "social media", 549.0, 561.0], ["This study explores the role of social media in social change by analyzing Twitter data collected during the 2011 Egypt Revolution. Particular attention is paid to the notion of collective sense making, which is considered a critical aspect for the emergence of collective action for social change. We suggest that collective sense making through social media can be conceptualized as human-machine collaborative information processing that involves an interplay of signs, Twitter grammar, humans, and social technologies. We focus on the occurrences of hashtags among a high volume of tweets to study the collective sense-making phenomena of milling and keynoting. A quantitative Markov switching analysis is performed to understand how the hashtag frequencies vary over time, suggesting structural changes that depict the two phenomena. We further explore different hashtags through a qualitative content analysis and find that, although many hashtags were used as symbolic anchors to funnel online users' attention to the Egypt Revolution, other hashtags were used as part of tweet sentences to share changing situational information. We suggest that hashtags functioned as a means to collect information and maintain situational awareness during the unstable political situation of the Egypt Revolution.", "which Technology ?", "social media", 32.0, 44.0], ["Purpose \u2013 With the explosion of the Deepwater Horizon oil well in the Gulf of Mexico on April 20, 2010 and until the well was officially \u201ckilled\u201d on September 19, 2010, British Petroleum (BP) did not merely experience a crisis but a five\u2010month marathon of sustained, multi\u2010media engagement. Whereas traditional public relations theory teaches us that an organization should synchronize its messages across channels, there are no models to understand how an organization may strategically coordinate public relations messaging across traditional and social media platforms. This is especially important in the new media environment where social media (e.g. Facebook and Twitter) are increasingly being used in concert with traditional public relations tools (e.g. 
press releases) as a part of an organization's stakeholder engagement strategy. This paper seeks to address these issues.Design/methodology/approach \u2013 The present study is a content analysis examining all of BP's press releases (N=126), its Facebook posts (...", "which Technology ?", "social media", 549.0, 561.0], ["In this paper, we examine the emerging use of ICT in social phenomena such as natural disasters. Researchers have acknowledged that a community possesses the capacity to manage the challenges in crisis response on its own. However, extant IS studies focus predominantly on IS use from the crisis response agency\u2019s perspective, which undermines communities\u2019 role. By adopting an empowerment perspective, we focus on understanding how social media empowers communities during crisis response. As such, we present a qualitative case study of the 2011 Thailand flooding. Using an interpretive approach, we show how social media can empower the community from three dimensions of empowerment process (structural, psychological, and resource empowerment) to achieve collective participation, shared identification, and collaborative control in the community. We make two contributions: 1) we explore an emerging social consequence of ICT by illustrating the roles of social media in empowering communities when responding to crises, and 2) we address the literature gap in empowerment by elucidating the actualization process of empowerment that social media as a mediating structure enables.", "which Technology ?", "social media", 433.0, 445.0], ["Purpose \u2013 With the explosion of the Deepwater Horizon oil well in the Gulf of Mexico on April 20, 2010 and until the well was officially \u201ckilled\u201d on September 19, 2010, British Petroleum (BP) did not merely experience a crisis but a five\u2010month marathon of sustained, multi\u2010media engagement. Whereas traditional public relations theory teaches us that an organization should synchronize its messages across channels, there are no models to understand how an organization may strategically coordinate public relations messaging across traditional and social media platforms. This is especially important in the new media environment where social media (e.g. Facebook and Twitter) are increasingly being used in concert with traditional public relations tools (e.g. press releases) as a part of an organization's stakeholder engagement strategy. This paper seeks to address these issues.Design/methodology/approach \u2013 The present study is a content analysis examining all of BP's press releases (N=126), its Facebook posts (...", "which Technology ?", "social media", 549.0, 561.0], ["ABSTRACT This paper systematically develops a set of general and supporting design principles and specifications for a \"Dynamic Emergency Response Management Information System\" (DERMIS) by identifying design premises resulting from the use of the \"Emergency Management Information System and Reference Index\" (EMISARI) and design concepts resulting from a comprehensive literature review. Implicit in crises of varying scopes and proportions are communication and information needs that can be addressed by today's information and communication technologies. However, what is required is organizing the premises and concepts that can be mapped into a set of generic design principles in turn providing a framework for the sensible development of flexible and dynamic Emergency Response Information Systems. 
A framework is presented for the system design and development that addresses the communication and information needs of first responders as well as the decision making needs of command and control personnel. The framework also incorporates thinking about the value of insights and information from communities of geographically dispersed experts and suggests how that expertise can be brought to bear on crisis decision making. Historic experience is used to suggest nine design premises. These premises are complemented by a series of five design concepts based upon the review of pertinent and applicable research. The result is a set of eight general design principles and three supporting design considerations that are recommended to be woven into the detailed specifications of a DERMIS. The resulting DERMIS design model graphically indicates the heuristic taken by this paper and suggests that the result will be an emergency response system flexible, robust, and dynamic enough to support the communication and information needs of emergency and crisis personnel on all levels. In addition it permits the development of dynamic emergency response information systems with tailored flexibility to support and be integrated across different sizes and types of organizations. This paper provides guidelines for system analysts and designers, system engineers, first responders, communities of experts, emergency command and control personnel, and MIS/IT researchers. SECTIONS 1. Introduction 2. Historical Insights about EMISARI 3. The emergency Response Atmosphere of OEP 4. Resulting Requirements for Emergency Response and Conceptual Design Specifics 4.1 Metaphors 4.2 Roles 4.3 Notifications 4.4 Context Visibility 4.5 Hypertext 5. Generalized Design Principles 6. Supporting Design Considerations 6.1 Resource Databases and Community Collaboration 6.2 Collective Memory 6.3 Online Communities of Experts 7. Conclusions and Final Observations 8. References 1. INTRODUCTION There have been, since 9/11, considerable efforts to propose improvements in the ability to respond to emergencies. However, the vast majority of these efforts have concentrated on infrastructure improvements to aid in mitigation of the impacts of either a man-made or natural disaster. In the area of communication and information systems to support the actual ongoing reaction to a disaster situation, the vast majority of the efforts have focused on the underlying technology to reliably support survivability of the underlying networks and physical facilities (Kunreuther and LernerLam 2002; Mork 2002). The fact that there were major failures of the basic technology and loss of the command center for 48 hours in the 9/11 event has made this an understandable result. The very workable commercial paging and digital mail systems supplied immediately afterwards by commercial firms (Michaels 2001; Vatis 2002) to the emergency response workers demonstrated that the correction of underlying technology is largely a process of setting integration standards and deciding to spend the necessary funds to update antiquated systems. \u2026", "which Technology ?", "ERMIS", NaN, NaN], ["This study employs the perspective of organizational resilience to examine how information and communication technologies (ICTs) were used by organizations to aid in their recovery after Hurricane Katrina. In-depth interviews enabled longitudinal analysis of ICT use. 
Results showed that organizations enacted a variety of resilient behaviors through adaptive ICT use, including information sharing, (re)connection, and resource acquisition. Findings emphasize the transition of ICT use across different stages of recovery, including an anticipated stage. Key findings advance organizational resilience theory with an additional source of resilience, external availability. Implications and contributions to the literature of ICTs in disaster contexts and organizational resilience are discussed.", "which Technology ?", "ICT", 259.0, 262.0], ["In this paper, we examine the emerging use of ICT in social phenomena such as natural disasters. Researchers have acknowledged that a community possesses the capacity to manage the challenges in crisis response on its own. However, extant IS studies focus predominantly on IS use from the crisis response agency\u2019s perspective, which undermines communities\u2019 role. By adopting an empowerment perspective, we focus on understanding how social media empowers communities during crisis response. As such, we present a qualitative case study of the 2011 Thailand flooding. Using an interpretive approach, we show how social media can empower the community from three dimensions of empowerment process (structural, psychological, and resource empowerment) to achieve collective participation, shared identification, and collaborative control in the community. We make two contributions: 1) we explore an emerging social consequence of ICT by illustrating the roles of social media in empowering communities when responding to crises, and 2) we address the literature gap in empowerment by elucidating the actualization process of empowerment that social media as a mediating structure enables.", "which Technology ?", "social media", 433.0, 445.0], ["Through a quantitative content analysis, this study applies situational crisis communication theory (SCCT) to investigate how 13 corporate and government organizations responded to the first phase of the 2009 flu pandemic. The results indicate that government organizations emphasized providing instructing information to their primary publics such as guidelines about how to respond to the crisis. On the other hand, organizations representing corporate interests emphasized reputation management in their crisis responses, frequently adopting denial, diminish, and reinforce response strategies. In addition, both government and corporate organizations used social media more often than traditional media in responding to the crisis. Finally, the study expands SCCT's response options.", "which Technology ?", "social media", 660.0, 672.0], ["Social media are quickly becoming the channel of choice for disseminating emergency warning messages. However, relatively little data-driven research exists to inform effective message design when using these media. The present study addresses that void by examining terse health-related warning messages sent by public safety agencies over Twitter during the 2013 Boulder, CO, floods. An examination of 5,100 tweets from 52 Twitter accounts over the course of the 5-day flood period yielded several key conclusions and implications. First, public health messages posted by local emergency management leaders are most frequently retweeted by organizations in our study. Second, emergency public health messages focus primarily on drinking water in this event. 
Third, terse messages can be designed in ways that include imperative/instructional and declarative/explanatory styles of content, both of which are essential for promoting public health during crises. These findings demonstrate that even terse messages delivered via Twitter ought to provide information about the hazard event, its impact, and actionable instructions for self-protection.", "which Technology ?", "social media", 0.0, 12.0], ["Two weeks after the Great Tohoku earthquake followed by the devastating tsunami, we have sent open-ended questionnaires to a randomly selected sample of Twitter users and also analysed the tweets sent from the disaster-hit areas. We found that people in directly affected areas tend to tweet about their unsafe and uncertain situation while people in remote areas post messages to let their followers know that they are safe. Our analysis of the open-ended answers has revealed that unreliable retweets (RTs) on Twitter was the biggest problem the users have faced during the disaster. Some of the solutions offered by the respondents included introducing official hash tags, limiting the number of RTs for each hash tag and adding features that allow users to trace information by maintaining anonymity.", "which Technology ?", "Twitter", 153.0, 160.0], ["In this paper we propose an effective and efficient new Fuzzy Healthy Association Rule Mining Algorithm (FHARM) that produces more interesting and quality rules by introducing new quality measures. In this approach, edible attributes are filtered from transactional input data by projections and are then converted to Required Daily Allowance (RDA) numeric values. The averaged RDA database is then converted to a fuzzy database that contains normalized fuzzy attributes comprising different fuzzy sets. Analysis of nutritional information is then performed from the converted normalized fuzzy transactional database. The paper presents various performance tests and interestingness measures to demonstrate the effectiveness of the approach and proposes further work on evaluating our approach with other generic fuzzy association rule algorithms.", "which Algorithm name ?", "FHARM", 105.0, 110.0], ["In this paper, we extend the traditional association rule problem by allowing a weight to be associated with each item in a transaction, to reflect interest/intensity of the item within the transaction. This provides us in turn with an opportunity to associate a weight parameter with each item in the resulting association rule. We call it weighted association rule (WAR). WAR not only improves the confidence of the rules, but also provides a mechanism to do more effective target marketing by identifying or segmenting customers based on their potential degree of loyalty or volume of purchases. Our approach mines WARs by first ignoring the weight and finding the frequent itemsets (via a traditional frequent itemset discovery algorithm), and is followed by introducing the weight during the rule generation. It is shown by experimental results that our approach not only results in shorter average execution times, but also produces higher quality results than the generalization of previously known methods on quantitative association rules.", "which Algorithm name ?", "WAR", 365.0, 368.0], ["In this study, we propose a simple and novel data structure using hyper-links, H-struct, and a new mining algorithm, H-mine, which takes advantage of this data structure and dynamically adjusts links in the mining process.
A distinct feature of this method is that it has a very limited and precisely predictable main memory cost and runs very quickly in memory-based settings. Moreover, it can be scaled up to very large databases using database partitioning. When the data set becomes dense, (conditional) FP-trees can be constructed dynamically as part of the mining process. Our study shows that H-mine has an excellent performance for various kinds of data, outperforms currently available algorithms in different settings, and is highly scalable to mining large databases. This study also proposes a new data mining methodology, space-preserving mining, which may have a major impact on the future development of efficient and scalable data mining methods.", "which Algorithm name ?", "H-mine ", 600.0, 607.0], ["Frequent pattern mining discovers patterns in transaction databases based only on the relative frequency of occurrence of items without considering their utility. For many real world applications, however, utility of itemsets based on cost, profit or revenue is of importance. The utility mining problem is to find itemsets that have higher utility than a user specified minimum. Unlike itemset support in frequent pattern mining, itemset utility does not have the anti-monotone property and so efficient high utility mining poses a greater challenge. Recent research on utility mining has been based on the candidate-generation-and-test approach which is suitable for sparse data sets with short patterns, but not feasible for dense data sets or long patterns. In this paper we propose a new algorithm called CTU-Mine that mines high utility itemsets using the pattern growth approach. We have tested our algorithm on several dense data sets, compared it with the recent algorithms and the results show that our algorithm works efficiently.", "which Algorithm name ?", "CTU-Mine", 810.0, 818.0], ["Existing algorithms for utility mining are inadequate on datasets with high dimensions or long patterns. This paper proposes a hybrid method, which is composed of a row enumeration algorithm (i.e., inter-transaction) and a column enumeration algorithm (i.e., two-phase), to discover high utility itemsets from two directions: Two-phase seeks short high utility itemsets from the bottom, while inter-transaction seeks long high utility itemsets from the top. In addition, optimization technique is adopted to improve the performance of computing the intersection of transactions. Experiments on synthetic data show that the hybrid method achieves high performance in large high dimensional datasets.", "which Algorithm name ?", "Inter-transaction", 198.0, 215.0], ["We consider the problem of discovering association rules between items in a large database of sales transactions. We present two new algorithms for solving this problem that are fundamentally different from the known algorithms. Empirical evaluation shows that these algorithms outperform the known algorithms by factors ranging from three for small problems to more than an order of magnitude for large problems. We also show how the best features of the two proposed algorithms can be combined into a hybrid algorithm, called AprioriHybrid. Scale-up experiments show that AprioriHybrid scales linearly with the number of transactions.
AprioriHybrid also has excellent scale-up properties with respect to the transaction size and the number of items in the database.", "which Algorithm name ?", "AprioriHybrid", 528.0, 541.0], ["In this paper, we present a novel algorithm for mining complete frequent itemsets. This algorithm is referred to as the TM (transaction mapping) algorithm from hereon. In this algorithm, transaction ids of each itemset are mapped and compressed to continuous transaction intervals in a different space and the counting of itemsets is performed by intersecting these interval lists in a depth-first order along the lexicographic tree. When the compression coefficient becomes smaller than the average number of comparisons for intervals intersection at a certain level, the algorithm switches to transaction id intersection. We have evaluated the algorithm against two popular frequent itemset mining algorithms, FP-growth and dEclat, using a variety of data sets with short and long frequent patterns. Experimental data show that the TM algorithm outperforms these two algorithms.", "which Algorithm name ?", "Transaction", 124.0, 135.0], ["Association rule mining (ARM) identifies frequent itemsets from databases and generates association rules by considering each item in equal value. However, items are actually different in many aspects in a number of real applications, such as retail marketing, network log, etc. The difference between items makes a strong impact on the decision making in these applications. Therefore, traditional ARM cannot meet the demands arising from these applications. By considering the different values of individual items as utilities, utility mining focuses on identifying the itemsets with high utilities. As \"downward closure property\" doesn't apply to utility mining, the generation of candidate itemsets is the most costly in terms of time and memory space. In this paper, we present a Two-Phase algorithm to efficiently prune down the number of candidates and can precisely obtain the complete set of high utility itemsets. In the first phase, we propose a model that applies the \"transaction-weighted downward closure property\" on the search space to expedite the identification of candidates. In the second phase, one extra database scan is performed to identify the high utility itemsets. We also parallelize our algorithm on shared memory multi-process architecture using Common Count Partitioned Database (CCPD) strategy. We verify our algorithm by applying it to both synthetic and real databases. It performs very efficiently in terms of speed and memory cost, and shows good scalability on multiple processors, even on large databases that are difficult for existing algorithms to handle.", "which Algorithm name ?", "Two-Phase", 785.0, 794.0], ["Disparity estimation is a common task in stereo vision and usually requires a high computational effort. High resolution disparity maps are necessary to provide a good image quality on autostereoscopic displays which deliver stereo content without the need for 3D glasses. In this paper, an FPGA architecture for a disparity estimation algorithm is proposed, that is capable of processing high-definition content in real-time. 
The resulting architecture is efficient in terms of power consumption and can be easily scaled to support higher resolutions.", "which Computational platform ?", "FPGA", 291.0, 295.0], ["After a long period of drought, the water level of the Danube River has significantly dropped especially on the Romanian sector, in July-August 2015. Danube reached the lowest water level recorded in the last 12 years, causing the blockage of the ships in the sector located close to Zimnicea Harbour. The rising sand banks in the navigable channel congested the commercial traffic for a few days with more than 100 ships involved. The monitoring of the decreasing water level and the traffic jam was performed based on Sentinel-1 and Sentinel-2 free data provided by the European Space Agency and the European Commission within the Copernicus Programme. Specific processing methods (calibration, speckle filtering, geocoding, change detection, image classification, principal component analysis, etc.) were applied in order to generate useful products that the responsible authorities could benefit from. The Sentinel data yielded good results for water mask extraction and ships detection. The analysis continued after the closure of the crisis situation when the water reached the nominal level again. The results indicate that Sentinel data can be successfully used for ship traffic monitoring, building the foundation of future endeavours for a durable monitoring of the Danube River.", "which Satellite sensor ?", "Sentinel-2", 535.0, 545.0], ["The European Space Agency satellite Sentinel-2 provides multispectral images with pixel sizes down to 10 m. This high resolution allows for ship detection and recognition by determining a number of important ship parameters. We are able to show how a ship position, its heading, length and breadth can be determined down to a subpixel resolution. If the ship is moving, its velocity can also be determined from its Kelvin waves. The 13 spectrally different visual and infrared images taken using multispectral imagery (MSI) are \u201cfingerprints\u201d that allow for the recognition and identification of ships. Furthermore, the multispectral image profiles along the ship allow for discrimination between the ship, its turbulent wakes, and the Kelvin waves, such that the ship\u2019s length and breadth can be determined more accurately even when sailing. The ship\u2019s parameters are determined by using satellite imagery taken from several ships, which are then compared to known values from the automatic identification system. The agreement is on the order of the pixel resolution or better.", "which Satellite sensor ?", "Sentinel-2", 36.0, 46.0], ["In this paper, we introduce a new feature representation based on fusing local texture description of saliency map and enhanced global statistics for ship scene detection in very high-resolution remote sensing images in inland, coastal, and oceanic regions. First, two low computational complexity methods are adopted. Specifically, the Itti attention model is used to extract saliency map, from which local texture histograms are extracted by LBP with uniform pattern. Meanwhile, Gabor filters with multi-scale and multi-orientation are convolved with the input image to extract Gist, means and variances which are used to form the enhanced global statistics. Second, sliding window-based detection is applied to obtain local image patches and extract the fusion of local and global features. 
SVM with RBF kernel is then used for training and classification. Such detection manner could remove coastal and oceanic regions effectively. Moreover, the ship scene region of interest can be detected accurately. Experiments on 20 very high-resolution remote sensing images collected by Google Earth shows that the fusion feature has advantages than LBP, Saliency map-based LBP and Gist, respectively. Furthermore, desirable results can be obtained in the ship scene detection.", "which Satellite sensor ?", "Google Earth", 1082.0, 1094.0], ["Nowadays, the availability of high-resolution images taken from satellites, like Quickbird, Orbview, and others, offers the remote sensing community the possibility of monitoring and surveying vast areas of the Earth for different purposes, e.g. monitoring forest regions for ecological reasons. A particular application is the use of satellite images to survey the bottom of the seas around the Iberian peninsula which is flooded with innumerable treasures that are being plundered by specialized ships. In this paper we present a GIS-based application aimed to catalog areas of the sea with archeological interest and to monitor the risk of plundering of ships that stay within such areas during a suspicious period of time.", "which Satellite sensor ?", "QuickBird", 81.0, 90.0], ["This paper aims at the segmentation of seafaring vessels in optical satellite images, which allows an accurate length estimation. In maritime situation awareness, vessel length is an important parameter to classify a vessel. The proposed segmentation system consists of robust foreground-background separation, wake detection and ship-wake separation, simultaneous position and profile clustering and a special module for small vessel segmentation. We compared our system with a baseline implementation on 53 vessels that were observed with GeoEye-1. The results show that the relative L1 error in the length estimation is reduced from 3.9 to 0.5, which is an improvement of 87%. We learned that the wake removal is an important element for the accurate segmentation and length estimation of ships.", "which Satellite sensor ?", "GeoEye-1", 541.0, 549.0], ["In the authors' previous work, a sequence of image-processing algorithms was developed that was suitable for detecting and classifying ships from panchromatic Quickbird electro-optical satellite imagery. Presented in this paper are several new algorithms, which improve the performance and enhance the capabilities of the ship detection software, as well as an overview on how land masking is performed. Specifically, this paper describes the new algorithms for enhanced detection including for the reduction of false detects such as glint and clouds. Improved cloud detection and filtering algorithms are described as well as several texture classification algorithms are used to characterize the background statistics of the ocean texture. These detection algorithms employ both cloud and glint removal techniques, which we describe. Results comparing ship detection with and without these false detect reduction algorithms are provided. 
These are components of a larger effort to develop a low-cost solution for detecting the presence of ships from readily-available overhead commercial imagery and comparing this information against various open-source ship-registry databases to categorize contacts for follow-on analysis.", "which Satellite sensor ?", "QuickBird", 159.0, 168.0], ["This paper examines the performance of a spatiospectral template on Ikonos imagery to automatically detect small recreational boats. The spatiospectral template is utilized and then enhanced through the use of a weighted Euclidean distance metric adapted from the Mahalanobis distance metric. The aim is to assist the Canadian Coast Guard in gathering data on recreational boating for the modeling of search and rescue incidence risk. To test the detection accuracy of the enhanced spatiospectral template, a dataset was created by gathering position and attribute data for 53 recreational vessel targets purposely moored for this research within Cadboro Bay, British Columbia, Canada. The Cadboro Bay study site containing the targets was imaged using Ikonos. Overall detection accuracy was 77%. Targets were broken down into 2 categories: 1) Category A-less than 6 m in length, and Category B-more than 6 m long. The detection rate for Category B targets was 100%, while the detection rate for Category A targets was 61%. It is important to note that some Category A targets were intentionally selected for their small size to test the detection limits of the enhanced spatiospectral template. The smallest target detected was 2.2 m long and 1.1 m wide. The analysis also revealed that the ability to detect targets between 2.2 and 6 m long was diminished if the target was dark in color.", "which Satellite sensor ?", "IKONOS", 68.0, 74.0], ["Understanding the capabilities of satellite sensors with spatial and spectral characteristics similar to those of MODIS for Maritime Domain Awareness (MDA) is of importance because of the upcoming NPOES with 100 minutes revisit time carrying the MODIS-like VIIRS multispectral imaging sensor. This paper presents an experimental study of ship detection using MODIS imagery. We study the use of ship signatures such as contaminant plumes in clouds and the spectral contrast between the ship and the sea background for detection. Results show the potential and challenges for such approach in MDA.", "which Satellite sensor ?", "MODIS", 114.0, 119.0], ["In order to overcome cloud clutters and varied sizes of objects in high-resolution optical satellite images, a novel coarse-to-fine ship detection framework is proposed. Initially, a modified saliency fusion algorithm is derived to reduce cloud clutters and extract ship candidates. Then, in coarse discrimination stage, candidates are described by introducing shape feature to eliminate regions which are not conform to ship characteristics. In fine discrimination stage, candidates are represented by local descriptor-based feature encoding, and then linear SVM is used for discrimination. Experiments on 60 images (including 467 objects) collected from Microsoft Virtual Earth demonstrate the effectiveness of the proposed framework. Specifically, the fusion of visual saliency achieves 17.07% higher Precision and 7.23% higher Recall compared with those of individual one. 
Moreover, using local descriptor in fine discrimination makes Precision and F-measure further be improved by 7.23% and 1.74%, respectively.", "which Satellite sensor ?", "Microsoft Virtual Earth", 656.0, 679.0], ["Automatic ship detection from remote sensing imagery has many applications, such as maritime security, traffic surveillance, fisheries management. However, it is still a difficult task for noise and distractors. This paper is concerned with perceptual organization, which detects salient convex structures of ships from noisy images. Because the line segments of contour of ships compose a convex set, a local gradient analysis is adopted to filter out the edges which are not on the contour as preprocess. For convexity is the significant feature, we apply the salience as the prior probability to detect. Feature angle constraint helps us compute probability estimate and choose correct contour in many candidate closed line groups. Finally, the experimental results are demonstrated on the satellite imagery from Google Earth.", "which Satellite sensor ?", "Google Earth", 815.0, 827.0], ["This paper incorporates the multilevel selection (MLS) theory into the genetic algorithm. Based on this theory, a Multilevel Cooperative Genetic Algorithm (MLGA) is presented. In MLGA, a species is subdivided in a set of populations, each population is subdivided in groups, and evolution occurs at two levels so called individual and group level. A fast population dynamics occurs at individual level. At this level, selection occurs between individuals of the same group. The popular genetic operators such as mutation and crossover are applied within groups. A slow population dynamics occurs at group level. At this level, selection occurs between groups of a population. A group level operator so called colonization is applied between groups in which a group is selected as extinct, and replaced by offspring of a colonist group. We used a set of well known numerical functions in order to evaluate performance of the proposed algorithm. The results showed that the MLGA is robust, and provides an efficient way for numerical function optimization.", "which Name ?", "MLGA", 155.0, 159.0], ["The Terrain-Based Genetic Algorithm (TBGA) is a self-tuning version of the traditional Cellular Genetic Algorithm (CGA). In a TBGA, various combinations of parameter values appear in different physical locations of the population, forming a sort of terrain in which individual solutions evolve. We compare the performance of the TBGA against that of the CGA on a known suite of problems. Our results indicate that the TBGA performs better than the CGA on the test suite, with less parameter tuning, when the CGA is set to parameter values thought in prior studies to be good. While we had hoped that good solutions would cluster around the best parameter settings, this was not observed. However, we were able to use the TBGA to automatically determine better parameter settings for the CGA. The resulting CGA produced even better results than were achieved by the TBGA which found those parameter settings.", "which Name ?", "TBGA", 37.0, 41.0], ["The OPRoS (Open Platform for Robotic Service) is a platform for network based intelligent robots supported by the IT R&D program of Ministry of Knowledge Economy of KOREA. The OPRoS technology aims at establishing a component based standard software platform for the robot which enables complicated functions to be developed easily by using the standardized COTS components. 
The OPRoS provides a software component model for supporting reusability and compatibility of the robot software component in the heterogeneous communication network. In this paper, we will introduce the OPRoS component model and its background.", "which Name ?", "OPRoS", 4.0, 9.0], ["Significant amounts of knowledge in science and technology have so far not been published as Linked Open Data but are contained in the text and tables of legacy PDF publications. Making such information available as RDF would, for example, provide direct access to claims and facilitate surveys of related work. A lot of valuable tabular information that till now only existed in PDF documents would also finally become machine understandable. Instead of studying scientific literature or engineering patents for months, it would be possible to collect such input by simple SPARQL queries. The SemAnn approach enables collaborative annotation of text and tables in PDF documents, a format that is still the common denominator of publishing, thus maximising the potential user base. The resulting annotations in RDF format are available for querying through a SPARQL endpoint. To incentivise users with an immediate benefit for making the effort of annotation, SemAnn recommends related papers, taking into account the hierarchical context of annotations in a novel way. We evaluated the usability of SemAnn and the usefulness of its recommendations by analysing annotations resulting from tasks assigned to test users and by interviewing them. While the evaluation shows that even few annotations lead to a good recall, we also observed unexpected, serendipitous recommendations, which confirms the merit of our low-threshold annotation support for the crowd.", "which Name ?", "SemAnn", 594.0, 600.0], ["Tables are ubiquitous in digital libraries. In scientific documents, tables are widely used to present experimental results or statistical data in a condensed fashion. However, current search engines do not support table search. The difficulty of automatic extracting tables from un-tagged documents, the lack of a universal table metadata specification, and the limitation of the existing ranking schemes make table search problem challenging. In this paper, we describe TableSeer, a search engine for tables. TableSeer crawls digital libraries, detects tables from documents, extracts tables metadata, indexes and ranks tables, and provides a user-friendly search interface. We propose an extensive set of medium-independent metadata for tables that scientists and other users can adopt for representing table information. In addition, we devise a novel page box-cutting method to improve the performance of the table detection. Given a query, TableSeer ranks the matched tables using an innovative ranking algorithm - TableRank. TableRank rates each \u27e8query, table\u27e9 pair with a tailored vector space model and a specific term weighting scheme. Overall, TableSeer eliminates the burden of manually extract table data from digital libraries and enables users to automatically examine tables. We demonstrate the value of TableSeer with empirical studies on scientific documents.", "which Name ?", "TableSeer", 472.0, 481.0], ["Cyberbotics Ltd. develops Webots\u2122, a mobile robotics simulation software that provides you with a rapid prototyping environment for modelling, programming and simulating mobile robots. The provided robot libraries enable you to transfer your control programs to several commercially available real mobile robots. 
Webots\u2122 lets you define and modify a complete mobile robotics setup, even several different robots sharing the same environment. For each object, you can define a number of properties, such as shape, color, texture, mass, friction, etc. You can equip each robot with a large number of available sensors and actuators. You can program these robots using your favorite development environment, simulate them and optionally transfer the resulting programs onto your real robots. Webots\u2122 has been developed in collaboration with the Swiss Federal Institute of Technology in Lausanne, thoroughly tested, well documented and continuously maintained for over 7 years. It is now the main commercial product available from Cyberbotics Ltd.", "which Name ?", "Webots", 26.0, 32.0], ["The Web contains vast amounts of HTML tables. Most of these tables are used for layout purposes, but a small subset of the tables is relational, meaning that they contain structured data describing a set of entities [2]. As these relational Web tables cover a very wide range of different topics, there is a growing body of research investigating the utility of Web table data for completing cross-domain knowledge bases [6], for extending arbitrary tables with additional attributes [7, 4], as well as for translating data values [5]. The existing research shows the potentials of Web tables. However, comparing the performance of the different systems is difficult as up till now each system is evaluated using a different corpus of Web tables and as most of the corpora are owned by large search engine companies and are thus not accessible to the public. In this poster, we present a large public corpus of Web tables which contains over 233 million tables and has been extracted from the July 2015 version of the CommonCrawl. By publishing the corpus as well as all tools that we used to extract it from the crawled data, we intend to provide a common ground for evaluating Web table systems. The main difference of the corpus compared to an earlier corpus that we extracted from the 2012 version of the CommonCrawl as well as the corpus extracted by Eberius et al. [3] from the 2014 version of the CommonCrawl is that the current corpus contains a richer set of metadata for each table. This metadata includes table-specific information such as table orientation, table caption, header row, and key column, but also context information such as the text before and after the table, the title of the HTML page, as well as timestamp information that was found before and after the table. The context information can be useful for recovering the semantics of a table [7]. The timestamp information is crucial for fusing time-depended data, such as alternative population numbers for a city [8].", "which Name ?", "Web Tables", 241.0, 251.0], ["The paper investigates a new PATCHWORK model for structured population in evolutionary search, where population size may vary. This model allows control of both population diversity and selective pressure, and its operators are local in scope. Moreover, the PATCHWORK model gives a significant flexibility for introducing many additional concepts, like behavioral rules for individuals. First experiments allowed us to observe some interesting patterns which emerged during evolutionary process.", "which Name ?", "Patchwork model", 29.0, 44.0], ["The problem known as CAITO refers to the determination of an order to integrate and test classes and aspects that minimizes stubbing costs. 
Such problem is NP-hard and to solve it efficiently, search based algorithms have been used, mainly evolutionary ones. However, the problem is very complex since it involves different factors that may influence the stubbing process, such as complexity measures, contractual issues and so on. These factors are usually in conflict and different possible solutions for the problem exist. To deal properly with this problem, this work explores the use of multi-objective optimization algorithms. The paper presents results from the application of two evolutionary algorithms - NSGA-II and SPEA2 - to the CAITO problem in four real systems, implemented in AspectJ. Both multi-objective algorithms are evaluated and compared with the traditional Tarjan's algorithm and with a mono-objective genetic algorithm. Moreover, it is shown how the tester can use the found solutions, according to the test goals.", "which Algorithm(s) ?", "SPEA2", 726.0, 731.0], ["Traditionally, simulation has been used by project managers in optimising decision making. However, current simulation packages only include simulation optimisation which considers a single objective (or multiple objectives combined into a single fitness function). This paper aims to describe an approach that consists of using multiobjective optimisation techniques via simulation in order to help software project managers find the best values for initial team size and schedule estimates for a given project so that cost, time and productivity are optimised. Using a System Dynamics (SD) simulation model of a software project, the sensitivity of the output variables regarding productivity, cost and schedule using different initial team size and schedule estimations is determined. The generated data is combined with a well-known multiobjective optimisation algorithm, NSGA-II, to find optimal solutions for the output variables. The NSGA-II algorithm was able to quickly converge to a set of optimal solutions composed of multiple and conflicting variables from a medium size software project simulation model. Multiobjective optimisation and SD simulation modeling are complementary techniques that can generate the Pareto front needed by project managers for decision making. Furthermore, visual representations of such solutions are intuitive and can help project managers in their decision making process.", "which Algorithm(s) ?", "NSGA-II", 876.0, 883.0], ["This paper is concerned with the Multi-Objective Next Release Problem (MONRP), a problem in search-based requirements engineering. Previous work has considered only single objective formulations. In the multi-objective formulation, there are at least two (possibly conflicting) objectives that the software engineer wishes to optimize. It is argued that the multi-objective formulation is more realistic, since requirements engineering is characterised by the presence of many complex and conflicting demands, for which the software engineer must find a suitable balance. The paper presents the results of an empirical study into the suitability of weighted and Pareto optimal genetic algorithms, together with the NSGA-II algorithm, presenting evidence to support the claim that NSGA-II is well suited to the MONRP. 
The paper also provides benchmark data to indicate the size above which the MONRP becomes non-trivial.", "which Algorithm(s) ?", "NSGA-II", 715.0, 722.0], ["During the inter-class test, a common problem, named Class Integration and Test Order (CITO) problem, involves the determination of a test class order that minimizes stub creation effort, and consequently test costs. The approach based on Multi-Objective Evolutionary Algorithms (MOEAs) has achieved promising results because it allows the use of different factors and measures that can affect the stubbing process. Many times these factors are in conflict and usually there is no single solution for the problem. Existing works on MOEAs present some limitations. The approach was evaluated with only two coupling measures, based on the number of attributes and methods of the stubs to be created. Other MOEAs can be explored and also other coupling measures. Considering this fact, this paper investigates the performance of two evolutionary algorithms: NSGA-II and SPEA2, for the CITO problem with four coupling measures (objectives) related to: attributes, methods, number of distinct return types and distinct parameter types. An experimental study was performed with four real systems developed in Java. The obtained results point out that the MOEAs can be efficiently used to solve this problem with several objectives, achieving solutions with balanced compromise between the measures, and of minimal effort to test.", "which Algorithm(s) ?", "NSGA-II", 857.0, 864.0], ["Software testing is an important issue in software engineering. As software systems become increasingly large and complex, the problem of how to optimally allocate the limited testing resource during the testing phase has become more important, and difficult. Traditional Optimal Testing Resource Allocation Problems (OTRAPs) involve seeking an optimal allocation of a limited amount of testing resource to a number of activities with respect to some objectives (e.g., reliability, or cost). We suggest solving OTRAPs with Multi-Objective Evolutionary Algorithms (MOEAs). Specifically, we formulate OTRAPs as two types of multi-objective problems. First, we consider the reliability of the system and the testing cost as two objectives. Second, the total testing resource consumed is also taken into account as the third objective. The advantages of MOEAs over state-of-the-art single objective approaches to OTRAPs will be shown through empirical studies. Our study has revealed that a well-known MOEA, namely Nondominated Sorting Genetic Algorithm II (NSGA-II), performs well on the first problem formulation, but fails on the second one. Hence, a Harmonic Distance Based Multi-Objective Evolutionary Algorithm (HaD-MOEA) is proposed and evaluated in this paper. Comprehensive experimental studies on both parallel-series, and star-structure modular software systems have shown the superiority of HaD-MOEA over NSGA-II for OTRAPs.", "which Algorithm(s) ?", "HaD-MOEA", 1214.0, 1222.0], ["Nowadays, as the software systems become increasingly large and complex, the problem of allocating the limited testing-resource during the testing phase has become more and more difficult. In this paper, we propose to solve the testing-resource allocation problem (TRAP) using multi-objective evolutionary algorithms. Specifically, we formulate TRAP as two multi-objective problems. First, we consider the reliability of the system and the testing cost as two objectives. 
In the second formulation, the total testing-resource consumed is also taken into account as the third goal. Two multi-objective evolutionary algorithms, non-dominated sorting genetic algorithm II (NSGA2) and multi-objective differential evolution algorithms (MODE), are applied to solve the TRAP in the two scenarios. This is the first time that the TRAP is explicitly formulated and solved by multi-objective evolutionary approaches. Advantages of our approaches over the state-of-the-art single-objective approaches are demonstrated on two parallel-series modular software models.", "which Algorithm(s) ?", "MODE", 732.0, 736.0], ["Software design is a process of trading off competing objectives. If the user objective space is rich, then we should use optimizers that can fully exploit that richness. For example, this study configures software product lines (expressed as feature maps) using various search-based software engineering methods. As we increase the number of optimization objectives, we find that methods in widespread use (e.g. NSGA-II, SPEA2) perform much worse than IBEA (Indicator-Based Evolutionary Algorithm). IBEA works best since it makes most use of user preference knowledge. Hence it does better on the standard measures (hypervolume and spread) but it also generates far more products with 0% violations of domain constraints. Our conclusion is that we need to change our methods for search-based software engineering, particularly when studying complex decision spaces.", "which Algorithm(s) ?", "SPEA2", 422.0, 427.0], ["One of the first issues which has to be taken into account by software companies is to determine what should be included in the next release of their products, in such a way that the highest possible number of customers get satisfied while this entails a minimum cost for the company. This problem is known as the Next Release Problem (NRP). Since minimizing the total cost of including new features into a software package and maximizing the total satisfaction of customers are contradictory objectives, the problem has a multi-objective nature. In this work we study the NRP problem from the multi-objective point of view, paying attention to the quality of the obtained solutions, the number of solutions, the range of solutions covered by these fronts, and the number of optimal solutions obtained. Also, we evaluate the performance of two state-of-the-art multi-objective metaheuristics for solving NRP: NSGA-II and MOCell. The obtained results show that MOCell outperforms NSGA-II in terms of the range of solutions covered, while this latter is able to obtain better solutions than MOCell in large instances. Furthermore, we have observed that the optimal solutions found are composed of a high percentage of low-cost requirements and, also, the requirements that produce most satisfaction on the customers.", "which Algorithm(s) ?", "MOCell", 920.0, 926.0], ["During the inter-class test, a common problem, named Class Integration and Test Order (CITO) problem, involves the determination of a test class order that minimizes stub creation effort, and consequently test costs. The approach based on Multi-Objective Evolutionary Algorithms (MOEAs) has achieved promising results because it allows the use of different factors and measures that can affect the stubbing process. Many times these factors are in conflict and usually there is no single solution for the problem. Existing works on MOEAs present some limitations. 
The approach was evaluated with only two coupling measures, based on the number of attributes and methods of the stubs to be created. Other MOEAs can be explored and also other coupling measures. Considering this fact, this paper investigates the performance of two evolutionary algorithms: NSGA-II and SPEA2, for the CITO problem with four coupling measures (objectives) related to: attributes, methods, number of distinct return types and distinct parameter types. An experimental study was performed with four real systems developed in Java. The obtained results point out that the MOEAs can be efficiently used to solve this problem with several objectives, achieving solutions with balanced compromise between the measures, and of minimal effort to test.", "which Algorithm(s) ?", "SPEA2", 869.0, 874.0], ["One of the first issues which has to be taken into account by software companies is to determine what should be included in the next release of their products, in such a way that the highest possible number of customers get satisfied while this entails a minimum cost for the company. This problem is known as the Next Release Problem (NRP). Since minimizing the total cost of including new features into a software package and maximizing the total satisfaction of customers are contradictory objectives, the problem has a multi-objective nature. In this work we study the NRP problem from the multi-objective point of view, paying attention to the quality of the obtained solutions, the number of solutions, the range of solutions covered by these fronts, and the number of optimal solutions obtained. Also, we evaluate the performance of two state-of-the-art multi-objective metaheuristics for solving NRP: NSGA-II and MOCell. The obtained results show that MOCell outperforms NSGA-II in terms of the range of solutions covered, while this latter is able to obtain better solutions than MOCell in large instances. Furthermore, we have observed that the optimal solutions found are composed of a high percentage of low-cost requirements and, also, the requirements that produce most satisfaction on the customers.", "which Algorithm(s) ?", "NSGA-II", 908.0, 915.0], ["In the authors' previous work, a sequence of image-processing algorithms was developed that was suitable for detecting and classifying ships from panchromatic Quickbird electro-optical satellite imagery. Presented in this paper are several new algorithms, which improve the performance and enhance the capabilities of the ship detection software, as well as an overview on how land masking is performed. Specifically, this paper describes the new algorithms for enhanced detection including for the reduction of false detects such as glint and clouds. Improved cloud detection and filtering algorithms are described as well as several texture classification algorithms are used to characterize the background statistics of the ocean texture. These detection algorithms employ both cloud and glint removal techniques, which we describe. Results comparing ship detection with and without these false detect reduction algorithms are provided. 
These are components of a larger effort to develop a low-cost solution for detecting the presence of ships from readily-available overhead commercial imagery and comparing this information against various open-source ship-registry databases to categorize contacts for follow-on analysis.", "which Band ?", "PAN", NaN, NaN], ["In this letter, we propose a novel computational model for automatic ship detection in optical satellite images. The model first selects salient candidate regions across entire detection scene by using a bottom-up visual attention mechanism. Then, two complementary types of top-down cues are employed to discriminate the selected ship candidates. Specifically, in addition to the detailed appearance analysis of candidates, a neighborhood similarity-based method is further exploited to characterize their local context interactions. Furthermore, the framework of our model is designed in a multiscale and hierarchical manner which provides a plausible approximation to a visual search process and reasonably distributes the computational resources. Experiments over panchromatic SPOT5 data prove the effectiveness and computational efficiency of the proposed model.", "which Band ?", "PAN", NaN, NaN], ["In this paper, a new ship detection method is proposed after analyzing the characteristics of panchromatic remote sensing images and ship targets. Firstly, AdaBoost (Adaptive Boosting) classifiers trained by Haar features are utilized to make coarse detection of ship targets. Then LSD (Line Segment Detector) is adopted to extract the line features in target slices to make fine detection. Experimental results on a dataset of panchromatic remote sensing images with a spatial resolution of 2m show that the proposed algorithm can achieve high detection rate and low false alarm rate. Meanwhile, the algorithm can meet the needs of practical applications on DSP (Digital Signal Processor).", "which Band ?", "PAN", NaN, NaN], ["Describes a syntactic approach to deducing the logical structure of printed documents from their physical layout. Page layout is described by a two-dimensional grammar, similar to a context-free string grammar, and a chart parser is used to parse segmented page images according to the grammar. This process is part of a system which reads scanned document images and produces computer-readable text in a logical mark-up format such as SGML. The system is briefly outlined, the grammar formalism and the parsing algorithm are described in detail, and some experimental results are reported.", "which Output Representation ?", "SGML", 436.0, 440.0], ["Sentiment analysis has been a major area of interest, for which the existence of high-quality resources is crucial. In Arabic, there is a reasonable number of sentiment lexicons but with major deficiencies. The paper presents a large-scale Standard Arabic Sentiment Lexicon (SLSA) that is publicly available for free and avoids the deficiencies in the current resources. SLSA has the highest up-to-date reported coverage. The construction of SLSA is based on linking the lexicon of AraMorph with SentiWordNet along with a few heuristics and powerful back-off. SLSA shows a relative improvement of 37.8% over a state-of-the-art lexicon when tested for accuracy. 
It also outperforms it by an absolute 3.5% of F1-score when tested for sentiment analysis.", "which Lexicon ?", "SLSA ", 370.0, 375.0], ["Background: Seasonal influenza virus outbreaks cause annual epidemics, mostly during winter in temperate zone countries, especially resulting in increased morbidity and higher mortality in children. In order to conduct rapid screening for influenza in pediatric outpatient units, we developed a pediatric infection screening system with a radar respiration monitor. Methods: The system conducts influenza screening within 10 seconds based on vital signs (i.e., respiration rate monitored using a 24 GHz microwave radar; facial temperature, using a thermopile array; and heart rate, using a pulse photosensor). A support vector machine (SVM) classification method was used to discriminate influenza children from healthy children based on vital signs. To assess the classification performance of the screening system that uses the SVM, we conducted influenza screening for 70 children (i.e., 27 seasonal influenza patients (11 \u00b1 2 years) at a pediatric clinic and 43 healthy control subjects (9 \u00b1 4 years) at a pediatric dental clinic) in the winter of 2013-2014. Results: The screening system using the SVM identified 26 subjects with influenza (22 of the 27 influenza patients and 4 of the 43 healthy subjects). The system discriminated 44 subjects as healthy (5 of the 27 influenza patients and 39 of the 43 healthy subjects), with sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 81.5%, 90.7%, 84.6%, and 88.6%, respectively. Conclusion: The SVM-based screening system achieved classification results for the outpatient children based on vital signs with comparatively high NPV within 10 seconds. At pediatric clinics and hospitals, our system seems potentially useful in the first screening step for infections in the future.", "which Computational intelligence technologie ?", "SVM", 636.0, 639.0], ["This paper uses VAR models to investigate the impact of real exchange rate volatility on U.S. bilateral imports from the United Kingdom, France, Germany, Japan and Canada. The VAR systems include U.S. and foreign macro variables, and are estimated separately for each country. The major results suggest that the effect of volatility on imports is weak, although permanent shocks to volatility do have a negative impact on this measure of trade, and those effects are relatively more important over the flexible rate period. Copyright 1989 by MIT Press.", "which Countries and Estimation technique used ?", "VAR", 16.0, 19.0], ["A critical step in the process of reusing existing WSDL-specified services for building web-based applications is the discovery of potentially relevant services. However, the category-based service discovery, such as UDDI, is clearly insufficient. Semantic Web Services, augmenting Web service descriptions using Semantic Web technology, were introduced to facilitate the publication, discovery, and execution of Web services at the semantic level. Semantic matchmaker enhances the capability of UDDI service registries in the Semantic Web Services architecture by applying some matching algorithms between advertisements and requests described in OWL-S to recognize various degrees of matching for Web services. 
Based on Semantic Web Service framework, semantic matchmaker, specification matching and probabilistic matching approach, this paper proposes a fuzzy-set based semantic similarity matching algorithm for Web Service to support a more automated and veracity service discovery process in the Semantic Web Service Framework.", "which Specification Languages ?", "WSDL", 51.0, 55.0], ["OBJECTIVE To investigate the oral health of 12-year-old children of different deprivation but similar fluoridation status from South Asian and White Caucasian ethnic groups. DESIGN An epidemiological survey of 12-year-old children using BASCD criteria, with additional tooth erosion, ethnic classification and postcode data. CLINICAL SETTING Examinations were completed in schools in Leicestershire and Rutland, England, UK. Participants A random sample of 1,753 12-year-old children from all schools in the study area. MAIN OUTCOME MEASURES Caries experience was measured using the DMFT index diagnosed at the caries into dentine (D3) threshold, and tooth erosion using the index employed in the Children's Dental Health UK study reported in 1993. RESULTS The overall prevalence of caries was greater in White than Asian children, but varied at different levels of deprivation and amongst different Asian religious groups. There was a significant positive association between caries and deprivation for White children, but the reverse was true for non-Muslim Asians. White Low Deprivation children had significantly less tooth erosion, but erosion experience increased with decreasing deprivation in non-Muslim Asians. CONCLUSIONS Oral health is associated with ethnicity and linked to deprivation on an ethnic basis. The intra-Asian dental health disadvantage found in the primary dentition of Muslim children is perpetuated into the permanent dentition.", "which n ?", "1,753", 457.0, 462.0], ["Over the last couple of years, face recognition researchers have been developing new techniques. These developments are being fueled by advances in computer vision techniques, computer design, sensor design, and interest in fielding face recognition systems. Such advances hold the promise of reducing the error rate in face recognition systems by an order of magnitude over Face Recognition Vendor Test (FRVT) 2002 results. The face recognition grand challenge (FRGC) is designed to achieve this performance goal by presenting to researchers a six-experiment challenge problem along with data corpus of 50,000 images. The data consists of 3D scans and high resolution still imagery taken under controlled and uncontrolled conditions. This paper describes the challenge problem, data corpus, and presents baseline performance and preliminary results on natural statistics of facial imagery.", "which Amount of data ?", "50,000", 604.0, 610.0], ["This paper provides a working definition of what the middle-income trap is. We start by defining four income groups of GDP per capita in 1990 PPP dollars: low-income below $2,000; lower-middle-income between $2,000 and $7,250; upper-middle-income between $7,250 and $11,750; and high-income above $11,750. We then classify 124 countries for which we have consistent data for 1950\u20132010. In 2010, there were 40 low-income countries in the world, 38 lower-middle-income, 14 upper-middle-income, and 32 high-income countries. 
Then we calculate the threshold number of years for a country to be in the middle-income trap: a country that becomes lower-middle-income (i.e., that reaches $2,000 per capita income) has to attain an average growth rate of per capita income of at least 4.7 percent per annum to avoid falling into the lower-middle-income trap (i.e., to reach $7,250, the upper-middle-income threshold); and a country that becomes upper-middle-income (i.e., that reaches $7,250 per capita income) has to attain an average growth rate of per capita income of at least 3.5 percent per annum to avoid falling into the upper-middle-income trap (i.e., to reach $11,750, the high-income level threshold). Avoiding the middle-income trap is, therefore, a question of how to grow fast enough so as to cross the lower-middle-income segment in at most 28 years, and the upper-middle-income segment in at most 14 years. Finally, the paper proposes and analyzes one possible reason why some countries get stuck in the middle-income trap: the role played by the changing structure of the economy (from low-productivity activities into high-productivity activities), the types of products exported (not all products have the same consequences for growth and development), and the diversification of the economy. We compare the exports of countries in the middle-income trap with those of countries that graduated from it, across eight dimensions that capture different aspects of a country\u2019s capabilities to undergo structural transformation, and test whether they are different. Results indicate that, in general, they are different. We also compare Korea, Malaysia, and the Philippines according to the number of products that each exports with revealed comparative advantage. We find that while Korea was able to gain comparative advantage in a significant number of sophisticated products and was well connected, Malaysia and the Philippines were able to gain comparative advantage in electronics only.", "which Sample Period ?", "1950\u20132010", 375.0, 384.0], ["This paper provides a working definition of what the middle-income trap is. We start by defining four income groups of GDP per capita in 1990 PPP dollars: low-income below $2,000; lower-middle-income between $2,000 and $7,250; upper-middle-income between $7,250 and $11,750; and high-income above $11,750. We then classify 124 countries for which we have consistent data for 1950\u20132010. In 2010, there were 40 low-income countries in the world, 38 lower-middle-income, 14 upper-middle-income, and 32 high-income countries. Then we calculate the threshold number of years for a country to be in the middle-income trap: a country that becomes lower-middle-income (i.e., that reaches $2,000 per capita income) has to attain an average growth rate of per capita income of at least 4.7 percent per annum to avoid falling into the lower-middle-income trap (i.e., to reach $7,250, the upper-middle-income threshold); and a country that becomes upper-middle-income (i.e., that reaches $7,250 per capita income) has to attain an average growth rate of per capita income of at least 3.5 percent per annum to avoid falling into the upper-middle-income trap (i.e., to reach $11,750, the high-income level threshold). Avoiding the middle-income trap is, therefore, a question of how to grow fast enough so as to cross the lower-middle-income segment in at most 28 years, and the upper-middle-income segment in at most 14 years. 
Finally, the paper proposes and analyzes one possible reason why some countries get stuck in the middle-income trap: the role played by the changing structure of the economy (from low-productivity activities into high-productivity activities), the types of products exported (not all products have the same consequences for growth and development), and the diversification of the economy. We compare the exports of countries in the middle-income trap with those of countries that graduated from it, across eight dimensions that capture different aspects of a country\u2019s capabilities to undergo structural transformation, and test whether they are different. Results indicate that, in general, they are different. We also compare Korea, Malaysia, and the Philippines according to the number of products that each exports with revealed comparative advantage. We find that while Korea was able to gain comparative advantage in a significant number of sophisticated products and was well connected, Malaysia and the Philippines were able to gain comparative advantage in electronics only.", "which Time period ?", "1950\u20132010", 375.0, 384.0], ["The Corpus of Historical American English (COHA) contains 400 million words in more than 100,000 texts which date from the 1810s to the 2000s. The corpus contains texts from fiction, popular magazines, newspapers and non-fiction books, and is balanced by genre from decade to decade. It has been carefully lemmatised and tagged for part-of-speech, and uses the same architecture as the Corpus of Contemporary American English (COCA), BYU-BNC, the TIME Corpus and other corpora. COHA allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English.", "which Time period ?", "from the 1810s to the 2000s", 114.0, 141.0], ["This paper investigates the short-run and long-run causality issues between electricity consumption and economic growth in Turkey by using the co-integration and vector error-correction models with structural breaks. It employs annual data covering the period 1968\u20132005. The study also explores the causal relationship between these variables in terms of the three error-correction based Granger causality models. The empirical results are as follows: i) Both variables are nonstationary in levels and stationary in the first differences with/without structural breaks, ii) there exists a long-run relationship between variables, iii) there is unidirectional causality running from the electricity consumption to economic growth. The overall results indicate that \u201cgrowth hypothesis\u201d for electricity consumption and growth nexus holds in Turkey. Thus, energy conservation policies, such as rationing electricity consumption, may harm economic growth in Turkey.", "which Period ?", "1968\u20132005", 260.0, 269.0], ["The Implementation of Enterprise Resource Planning (ERP) systems requires huge investments while ineffective implementations of such projects are commonly observed. A considerable number of these projects have been reported to fail or take longer than it was initially planned, while previous studies show that the aim of rapid implementation of such projects has not been successful and the failure of the fundamental goals in these projects have imposed huge amounts of costs on investors. 
Some of the major consequences are the reduction in demand for such products and the introduction of further skepticism to the managers and investors of ERP systems. In this regard, it is important to understand the factors determining success or failure of ERP implementation. The aim of this paper is to study the critical success factors (CSFs) in implementing ERP systems and to develop a conceptual model which can serve as a basis for ERP project managers. These critical success factors that are called \"core critical success factors\" are extracted from 62 published papers using the content analysis and the entropy method. The proposed conceptual model has been verified in the context of five multinational companies.", "which Period ?", "Implementation", 4.0, 18.0], ["The research related to Enterprise Resource Planning (ERP) has grown over the past several years. This growing body of ERP research results in an increased need to review this extant literature with the intent of identifying gaps and thus motivate researchers to close this breach. Therefore, this research was intended to critique, synthesize and analyze both the content (e.g., topics, focus) and processes (i.e., methods) of the ERP literature, and then enumerates and discusses an agenda for future research efforts. To accomplish this, we analyzed 49 ERP articles published (1999-2004) in top Information Systems (IS) and Operations Management (OM) journals. We found an increasing level of activity during the 5-year period and a slightly biased distribution of ERP articles targeted at IS journals compared to OM. We also found several research methods either underrepresented or absent from the pool of ERP research. We identified several areas of need within the ERP literature, none more prevalent than the need to analyze ERP within the context of the supply chain. INTRODUCTION Davenport (1998) described the strengths and weaknesses of using Enterprise Resource Planning (ERP). He called attention to the growth of vendors like SAP, Baan, Oracle, and People-Soft, and defined this software as, \"...the seamless integration of all the information flowing through a company\u2014financial and accounting information, human resource information, supply chain information, and customer information.\" (Davenport, 1998). Since the time of that article, there has been a growing interest among researchers and practitioners in how organizations implement and use ERP systems (Amoako-Gyampah and Salam, 2004; Bendoly and Jacobs, 2004; Gattiker and Goodhue, 2004; Lander, Purvis, McCray and Leigh, 2004; Luo and Strong, 2004; Somers and Nelson, 2004; Zoryk-Schalla, Fransoo and de Kok, 2004). This interest is a natural continuation of trends in Information Technology (IT), such as MRP II (Olson, 2004; Teltumbde, 2000; Toh and Harding, 1999) and in business practice improvement research, such as continuous process improvement and business process reengineering (Markus and Tanis, 2000; Ng, Ip and Lee, 1999; Reijers, Limam and van der Aalst, 2003; Toh and Harding, 1999). This growing body of ERP research results in an increased need to review this extant literature with the intent of \"identifying critical knowledge gaps and thus motivate researchers to close this breach\" (Webster and Watson, 2002). Also, as noted by Scandura & Williams (2000), in order for research to advance, the methods used by researchers must periodically be evaluated to provide insights into the methods utilized and thus the areas of need. 
These two interrelated needs provide the motivation for this paper. In essence, this research critiques, synthesizes and analyzes both the content (e.g., topics, focus) and processes (i.e., methods) of the ERP literature and then enumerates and discusses an agenda for future research efforts. The remainder of the paper is organized as follows: Section 2 describes the approach to the analysis of the ERP research. Section 3 contains the results and a review of the literature. Section 4 discusses our findings and the needs relative to future ERP research efforts. Finally, section 5 summarizes the research. RESEARCH STUDY We captured the trends pertaining to (1) the number and distribution of ERP articles published in the leading journals, (2) methodologies employed in ERP research, and (3) emphasis relative to topic of ERP research. During the analysis of the ERP literature, we identified gaps and needs in the research and therefore enumerate and discuss a research agenda which allows the progression of research (Webster and Watson, 2002). In short, we sought to paint a representative landscape of the current ERP literature base in order to influence the direction of future research efforts relative to ERP. \u2026", "which Coverage ?", "1999-2004", 589.0, 598.0], ["This study provides an updated annotated bibliography of ERP publications published in the main IS conferences and journals during the period 2001-2005, categorizing them through an ERP lifecycle-based framework that is structured in phases. The first version of this bibliography was published in 2001 (Esteves and Pastor, 2001c). However, so far, we have extended the bibliography with a significant number of new publications in all the categories used in this paper. We also reviewed the categories and some incongruities were eliminated.", "which Coverage ?", "2001-2005", 142.0, 151.0], ["Pd/Al2O3 catalysts coated with various thiolate self-assembled monolayers (SAMs) were used to direct the partial hydrogenation of 18-carbon polyunsaturated fatty acids, yielding a product stream enriched in monounsaturated fatty acids (with low saturated fatty acid content), a favorable result for increasing the oxidative stability of biodiesel. The uncoated Pd/Al2O3 catalyst quickly saturated all fatty acid reactants under hydrogenation conditions, but the addition of alkanethiol SAMs markedly increased the reaction selectivity to the monounsaturated product oleic acid to a level of 80\u201390%, even at conversions >70%. This effect, which is attributed to steric effects between the SAMs and reactants, was consistent with the relative consumption rates of linoleic and oleic acid using alkanethiol-coated and uncoated Pd/Al2O3 catalysts. With an uncoated Pd/Al2O3 catalyst, each fatty acid, regardless of its degree of saturation had a reaction rate of \u223c0.2 mol reactant consumed per mole of surface palladium per ...", "which conv (%) ?", ">70", NaN, NaN], ["This study utilizes standard- and nested-EKC models to investigate the income-environment relation for Nigeria, between 1960 and 2008. The results from the standard-EKC model provides weak evidence of an inverted-U shaped relationship with turning point (T.P) around $280.84, while the nested model presents strong evidence of an N-shaped relationship between income and emissions in Nigeria, with a T.P around $237.23. 
Tests for structural breaks caused by the 1973 oil price shocks and 1986 Structural Adjustment are not rejected, implying that these factors have not significantly affected the income-environment relationship in Nigeria. Further, results from the rolling interdecadal analysis show that the observed relationship is stable and insensitive to the sample interval chosen. Overall, our findings imply that economic development is compatible with environmental improvements in Nigeria. However, tighter and concentrated environmental policy regimes will be required to ensure that the relationship is maintained around the first two strands of the N-shape", "which Shape of EKC ?", "N-shaped", 330.0, 338.0], ["This paper analyses the relationship between GDP and carbon dioxide emissions for Mauritius and vice-versa in a historical perspective. Using rigorous econometrics analysis, our results suggest that the carbon dioxide emission trajectory is closely related to the GDP time path. We show that emissions elasticity on income has been increasing over time. By estimating the EKC for the period 1975-2009, we were unable to prove the existence of a reasonable turning point and thus no EKC \u201cU\u201d shape was obtained. Our results suggest that Mauritius could not curb its carbon dioxide emissions in the last three decades. Thus, as hypothesized, the cost of degradation associated with GDP grows over time and it suggests that the economic and human activities are having increasingly negative environmental impacts on the country as compared to their economic prosperity.", "which Shape of EKC ?", "increasing", 332.0, 342.0], ["Purpose \u2013 The purpose of this paper is to examine the relationship among environmental pollution, economic growth and energy consumption per capita in the case of Pakistan. The per capita carbon dioxide (CO2) emission is used as the environmental indicator, the commercial energy use per capita as the energy consumption indicator, and the per capita gross domestic product (GDP) as the economic indicator. Design/methodology/approach \u2013 The investigation is made on the basis of the environmental Kuznets curve (EKC), using time series data from 1971 to 2006, by applying different econometric tools like ADF Unit Root Johansen Co\u2010integration VECM and Granger causality tests. Findings \u2013 The Granger causality test shows that there is a long term relationship between these three indicators, with bidirectional causality between per capita CO2 emission and per capita energy consumption. A monotonically increasing curve between GDP and CO2 emission has been found for the sample period, rejecting the EKC relationship, i...", "which Shape of EKC ?", "increasing", 903.0, 913.0], ["This paper examines the dynamic causal relationship between carbon dioxide emissions, energy consumption, economic growth, foreign trade and urbanization using time series data for the period of 1960-2009. Short-run unidirectional causalities are found from energy consumption and trade openness to carbon dioxide emissions, from trade openness to energy consumption, from carbon dioxide emissions to economic growth, and from economic growth to trade openness. The test results also support the evidence of existence of long-run relationship among the variables in the form of Equation (1) which also confirm the results of bounds and Johansen cointegration tests. 
It is found that over time higher energy consumption in Japan gives rise to more carbon dioxide emissions; as a result, the environment will be polluted more. But in respect of economic growth, trade openness and urbanization the environmental quality is found to be a normal good in the long-run.", "which Shape of EKC ?", "emissions", 75.0, 84.0], ["This paper applies the quantile fixed effects technique in exploring the CO2 environmental Kuznets curve within two groups of economic development (OECD and Non-OECD countries) and six geographical regions: West, East Europe, Latin America, East Asia, West Asia and Africa. A comparison of the findings resulting from the use of this technique with those of conventional fixed effects method reveals that the latter may depict a flawed summary of the prevailing income-emissions nexus depending on the conditional quantile examined. We also extend the Machado and Mata decomposition method to the Kuznets curve framework to explore the most important explanations for the CO2 emissions gap between OECD and Non-OECD countries. We find a statistically significant OECD-Non-OECD emissions gap and this contracts as we ascend the emissions distribution. The decomposition further reveals that there are non-income related factors working against the Non-OECD group's greening. We tentatively conclude that deliberate and systematic mitigation of current CO2 emissions in the Non-OECD group is required. JEL Classification: Q56, Q58.", "which Shape of EKC ?", "emissions", 674.0, 683.0], ["Previous studies show that the environmental quality and economic growth can be represented by the inverted U curve called Environmental Kuznets Curve (EKC). In this study, we conduct empirical analyses on detecting the existence of EKC using the five common pollutants emissions (i.e. CO2, SO2, BOD, SPM10, and GHG) as proxy for environmental quality. The data span from year 1961 to 2009 and cover 40 countries. We seek to investigate if the EKC hypothesis holds in two groups of economies, i.e. developed versus developing economies. Applying panel data approach, our results show that the EKC does not hold in all countries. We also detect the existence of U shape and increasing trend in other cases. The results reveal that CO2 and SPM10 are good data to proxy for environmental pollutant and they can be explained well by GDP. Also, it is observed that the developed countries have higher turning points than the developing countries. Higher economic growth may lead to different impacts on environmental quality in different economies.", "which Shape of EKC ?", "Developing countries", 924.0, 944.0], ["With the increase in smart devices and abundance of video contents, efficient techniques for the indexing, analysis and retrieval of videos are becoming more and more desirable. Improved indexing and automated analysis of millions of videos could be accomplished by getting videos tagged automatically. A lot of existing methods fail to precisely tag videos because of their lack of ability to capture the video context. The context in a video represents the interactions of objects in a scene and their overall meaning. In this work, we propose a novel approach that integrates the video scene ontology with CNN (Convolutional Neural Network) for improved video tagging. Our method captures the content of a video by extracting the information from individual key frames. The key frames are then fed to a CNN based deep learning model to train its parameters. 
The trained parameters are used to generate the most frequent tags. Highly frequent tags are used to summarize the input video. The proposed technique is benchmarked on the most widely used dataset of video activities, namely, UCF-101. Our method managed to achieve an overall accuracy of 99.8% with an F1-score of 96.2%.", "which has Data Source ?", "UCF-101", 1088.0, 1095.0], ["Hackathons, time-bounded events where participants write computer code and build apps, have become a popular means of socializing tech students and workers to produce \u201cinnovation\u201d despite little promise of material reward. Although they offer participants opportunities for learning new skills and face-to-face networking and set up interaction rituals that create an emotional \u201chigh,\u201d potential advantage is even greater for the events\u2019 corporate sponsors, who use them to outsource work, crowdsource innovation, and enhance their reputation. Ethnographic observations and informal interviews at seven hackathons held in New York during the course of a single school year show how the format of the event and sponsors\u2019 discursive tropes, within a dominant cultural frame reflecting the appeal of Silicon Valley, reshape unpaid and precarious work as an extraordinary opportunity, a ritual of ecstatic labor, and a collective imaginary for fictional expectations of innovation that benefits all, a powerful strategy for manufacturing workers\u2019 consent in the \u201cnew\u201d economy.", "which has Data Source ?", "seven hackathons", 606.0, 622.0], ["Entrepreneurs and start-up founders using innovation spaces and hubs often find themselves inside a filter bubble or echo chamber, where like-minded people tend to come up with similar ideas and recommend similar approaches to innovation. This trend towards homophily and a polarisation of like-mindedness is aggravated by algorithmic filtering and recommender systems embedded in mobile technology and social media platforms. Yet, genuine innovation thrives on social inclusion fostering a diversity of ideas. To escape these echo chambers, we designed and tested the Skunkworks Finder - an exploratory tool that employs social network analysis to help users discover spaces of difference and otherness in their local urban innovation ecosystem.", "which has Data Source ?", "social media", 403.0, 415.0], ["Traditional sources of information for small and rural communities have been disappearing over the past decade. A lot of the information and discussion related to such local geographic areas is now scattered across websites of numerous local organizations, individual blogs, social media and other user-generated media (YouTube, Flickr). It is important to capture this information and make it easily accessible to local citizens to facilitate citizen engagement and social interaction. Furthermore, a system that has location-based support can provide local citizens with an engaging way to interact with this information and identify the local issues most relevant to them. A location-based interface for a local geographic area enables people to identify and discuss local issues related to specific locations such as a particular street or a road construction site. We created an information aggregator, called the Virtual Town Square (VTS), to support and facilitate local discussion and interaction. We created a location-based interface for users to access the information collected by VTS. 
In this paper, we discuss focus group interviews with local citizens that motivated our design of a local news and information aggregator to facilitate civic participation. We then discuss the unique design challenges in creating such a local news aggregator and our design approach to create a local information ecosystem. We describe VTS and the initial evaluation and feedback we received from local users and through weekly meetings with community partners.", "which has Data Source ?", "social media", 275.0, 287.0], ["The application of thinner cadmium sulfide (CdS) window layer is a feasible approach to improve the performance of cadmium telluride (CdTe) thin film solar cells. However, the reduction of compactness and continuity of thinner CdS always deteriorates the device performance. In this work, transparent Al2O3 films with different thicknesses, deposited by using atomic layer deposition (ALD), were utilized as buffer layers between the front electrode transparent conductive oxide (TCO) and CdS layers to solve this problem, and then, thin-film solar cells with a structure of TCO/Al2O3/CdS/CdTe/BC/Ni were fabricated. The characteristics of the ALD-Al2O3 films were studied by UV\u2013visible transmittance spectrum, Raman spectroscopy, and atomic force microscopy (AFM). The light and dark J\u2013V performances of solar cells were also measured by specific instrumentations. The transmittance measurement conducted on the TCO/Al2O3 films verified that the transmittance of TCO/Al2O3 were comparable to that of single TCO layer, meaning that no extra absorption loss occurred when Al2O3 buffer layers were introduced into cells. Furthermore, due to the advantages of the ALD method, the ALD-Al2O3 buffer layers formed an extremely continuous and uniform coverage on the substrates to effectively fill and block the tiny leakage channels in CdS/CdTe polycrystalline films and improve the characteristics of the interface between TCO and CdS. However, as the thickness of alumina increased, the negative effects of cells were gradually exposed, especially the increase of the series resistance (Rs) and the more serious \u201croll-over\u201d phenomenon. Finally, the cell conversion efficiency (\u03b7) of more than 13.0% accompanied by optimized uniformity performances was successfully achieved corresponding to the 10 nm thick ALD-Al2O3 thin film.", "which Solar cell structure ?", "Al2O3/CdS/CdTe", 587.0, 601.0], ["After a thorough analysis of existing Internet of Things (IoT) related ontologies, in this paper we propose a solution that aims to achieve semantic interoperability among heterogeneous testbeds. Our model is framed within the EU H2020's FIESTA-IoT project, that aims to seamlessly support the federation of testbeds through the usage of semantic-based technologies. Our proposed model (ontology) takes inspiration from the well-known Noy et al. methodology for reusing and interconnecting existing ontologies. To build the ontology, we leverage a number of core concepts from various mainstream ontologies and taxonomies, such as Semantic Sensor Network (SSN), M3-lite (a lite version of M3 and also an outcome of this study), WGS84, IoT-lite, Time, and DUL. 
In addition, we also introduce a set of tools that aims to help external testbeds adapt their respective datasets to the developed ontology.", "which Ontologies which have been used as referenced ?", "DUL", 755.0, 758.0], ["During the annual hunt in a privately owned Austrian game population in fall 2019 and 2020, 64 red deer (Cervus elaphus), 5 fallow deer (Dama dama), 6 mouflon (Ovis gmelini musimon), and 95 wild boars (Sus scrofa) were shot and sampled for PCR testing. Pools of spleen, lung, and tonsillar swabs were screened for specific nucleic acids of porcine circoviruses. Wild ruminants were additionally tested for herpesviruses and pestiviruses, and wild boars were screened for pseudorabies virus (PrV) and porcine lymphotropic herpesviruses (PLHV-1-3). PCV2 was detectable in 5% (3 of 64) of red deer and 75% (71 of 95) of wild boar samples. In addition, 24 wild boar samples (25%) but none of the ruminants tested positive for PCV3 specific nucleic acids. Herpesviruses were detected in 15 (20%) ruminant samples. Sequence analyses showed the closest relationships to fallow deer herpesvirus and elk gammaherpesvirus. In wild boars, PLHV-1 was detectable in 10 (11%), PLHV-2 in 44 (46%), and PLHV-3 in 66 (69%) of animals, including 36 double and 3 triple infections. No pestiviruses were detectable in any ruminant samples, and all wild boar samples were negative in PrV-PCR. Our data demonstrate a high prevalence of PCV2 and PLHVs in an Austrian game population, confirm the presence of PCV3 in Austrian wild boars, and indicate a low risk of spillover of notifiable animal diseases into the domestic animal population.", "which Has Virus ?", "PLHV-1", 536.0, 542.0], ["During the annual hunt in a privately owned Austrian game population in fall 2019 and 2020, 64 red deer (Cervus elaphus), 5 fallow deer (Dama dama), 6 mouflon (Ovis gmelini musimon), and 95 wild boars (Sus scrofa) were shot and sampled for PCR testing. Pools of spleen, lung, and tonsillar swabs were screened for specific nucleic acids of porcine circoviruses. Wild ruminants were additionally tested for herpesviruses and pestiviruses, and wild boars were screened for pseudorabies virus (PrV) and porcine lymphotropic herpesviruses (PLHV-1-3). PCV2 was detectable in 5% (3 of 64) of red deer and 75% (71 of 95) of wild boar samples. In addition, 24 wild boar samples (25%) but none of the ruminants tested positive for PCV3 specific nucleic acids. Herpesviruses were detected in 15 (20%) ruminant samples. Sequence analyses showed the closest relationships to fallow deer herpesvirus and elk gammaherpesvirus. In wild boars, PLHV-1 was detectable in 10 (11%), PLHV-2 in 44 (46%), and PLHV-3 in 66 (69%) of animals, including 36 double and 3 triple infections. No pestiviruses were detectable in any ruminant samples, and all wild boar samples were negative in PrV-PCR. Our data demonstrate a high prevalence of PCV2 and PLHVs in an Austrian game population, confirm the presence of PCV3 in Austrian wild boars, and indicate a low risk of spillover of notifiable animal diseases into the domestic animal population.", "which Has Virus ?", "PLHV-2", 963.0, 969.0], ["The ongoing COVID-19 pandemic has stimulated a search for reservoirs and species potentially involved in back and forth transmission. Studies have postulated bats as one of the key reservoirs of coronaviruses (CoVs), and different CoVs have been detected in bats. So far, CoVs have not been found in bats in Sweden and we therefore tested whether they carry CoVs. 
In summer 2020, we sampled a total of 77 adult bats comprising 74 Myotis daubentonii, 2 Pipistrellus pygmaeus, and 1 M. mystacinus bats in southern Sweden. Blood, saliva and feces were sampled, processed and subjected to a virus next-generation sequencing target enrichment protocol. An Alphacoronavirus was detected and sequenced from feces of a M. daubentonii adult female bat. Phylogenetic analysis of the almost complete virus genome revealed a close relationship with Finnish and Danish strains. This was the first finding of a CoV in bats in Sweden, and bats may play a role in the transmission cycle of CoVs in Sweden. Focused and targeted surveillance of CoVs in bats is warranted, with consideration of potential conflicts between public health and nature conservation required as many bat species in Europe are threatened and protected.", "which Has Virus ?", "COVID-19", 12.0, 20.0], ["The ongoing COVID-19 pandemic has stimulated a search for reservoirs and species potentially involved in back and forth transmission. Studies have postulated bats as one of the key reservoirs of coronaviruses (CoVs), and different CoVs have been detected in bats. So far, CoVs have not been found in bats in Sweden and we therefore tested whether they carry CoVs. In summer 2020, we sampled a total of 77 adult bats comprising 74 Myotis daubentonii, 2 Pipistrellus pygmaeus, and 1 M. mystacinus bats in southern Sweden. Blood, saliva and feces were sampled, processed and subjected to a virus next-generation sequencing target enrichment protocol. An Alphacoronavirus was detected and sequenced from feces of a M. daubentonii adult female bat. Phylogenetic analysis of the almost complete virus genome revealed a close relationship with Finnish and Danish strains. This was the first finding of a CoV in bats in Sweden, and bats may play a role in the transmission cycle of CoVs in Sweden. Focused and targeted surveillance of CoVs in bats is warranted, with consideration of potential conflicts between public health and nature conservation required as many bat species in Europe are threatened and protected.", "which Has Virus ?", "coronavirus", NaN, NaN], ["The ecology of ebolaviruses is still poorly understood and the role of bats in outbreaks needs to be further clarified. Straw-colored fruit bats (Eidolon helvum) are the most common fruit bats in Africa and antibodies to ebolaviruses have been documented in this species. Between December 2018 and November 2019, samples were collected at approximately monthly intervals in roosting and feeding sites from 820 bats from an Eidolon helvum colony. Dried blood spots (DBS) were tested for antibodies to Zaire, Sudan, and Bundibugyo ebolaviruses. The proportion of samples reactive with GP antigens increased significantly with age from 0\u20139/220 (0\u20134.1%) in juveniles to 26\u2013158/225 (11.6\u201370.2%) in immature adults and 10\u2013225/372 (2.7\u201360.5%) in adult bats. Antibody responses were lower in lactating females. Viral RNA was not detected in 456 swab samples collected from 152 juvenile and 214 immature adult bats. Overall, our study shows that antibody levels increase in young bats suggesting that seroconversion to Ebola or related viruses occurs in older juvenile and immature adult bats. Multiple year monitoring would be needed to confirm this trend. 
Knowledge of the periods of the year with the highest risk of Ebolavirus circulation can guide the implementation of strategies to mitigate spill-over events.", "which Has Virus ?", "Ebola", 1010.0, 1015.0], ["During the annual hunt in a privately owned Austrian game population in fall 2019 and 2020, 64 red deer (Cervus elaphus), 5 fallow deer (Dama dama), 6 mouflon (Ovis gmelini musimon), and 95 wild boars (Sus scrofa) were shot and sampled for PCR testing. Pools of spleen, lung, and tonsillar swabs were screened for specific nucleic acids of porcine circoviruses. Wild ruminants were additionally tested for herpesviruses and pestiviruses, and wild boars were screened for pseudorabies virus (PrV) and porcine lymphotropic herpesviruses (PLHV-1-3). PCV2 was detectable in 5% (3 of 64) of red deer and 75% (71 of 95) of wild boar samples. In addition, 24 wild boar samples (25%) but none of the ruminants tested positive for PCV3 specific nucleic acids. Herpesviruses were detected in 15 (20%) ruminant samples. Sequence analyses showed the closest relationships to fallow deer herpesvirus and elk gammaherpesvirus. In wild boars, PLHV-1 was detectable in 10 (11%), PLHV-2 in 44 (46%), and PLHV-3 in 66 (69%) of animals, including 36 double and 3 triple infections. No pestiviruses were detectable in any ruminant samples, and all wild boar samples were negative in PrV-PCR. Our data demonstrate a high prevalence of PCV2 and PLHVs in an Austrian game population, confirm the presence of PCV3 in Austrian wild boars, and indicate a low risk of spillover of notifiable animal diseases into the domestic animal population.", "which Has Virus ?", "herpesvirus", 875.0, 886.0], ["During the annual hunt in a privately owned Austrian game population in fall 2019 and 2020, 64 red deer (Cervus elaphus), 5 fallow deer (Dama dama), 6 mouflon (Ovis gmelini musimon), and 95 wild boars (Sus scrofa) were shot and sampled for PCR testing. Pools of spleen, lung, and tonsillar swabs were screened for specific nucleic acids of porcine circoviruses. Wild ruminants were additionally tested for herpesviruses and pestiviruses, and wild boars were screened for pseudorabies virus (PrV) and porcine lymphotropic herpesviruses (PLHV-1-3). PCV2 was detectable in 5% (3 of 64) of red deer and 75% (71 of 95) of wild boar samples. In addition, 24 wild boar samples (25%) but none of the ruminants tested positive for PCV3 specific nucleic acids. Herpesviruses were detected in 15 (20%) ruminant samples. Sequence analyses showed the closest relationships to fallow deer herpesvirus and elk gammaherpesvirus. In wild boars, PLHV-1 was detectable in 10 (11%), PLHV-2 in 44 (46%), and PLHV-3 in 66 (69%) of animals, including 36 double and 3 triple infections. No pestiviruses were detectable in any ruminant samples, and all wild boar samples were negative in PrV-PCR. Our data demonstrate a high prevalence of PCV2 and PLHVs in an Austrian game population, confirm the presence of PCV3 in Austrian wild boars, and indicate a low risk of spillover of notifiable animal diseases into the domestic animal population.", "which Has Virus ?", "rabies", NaN, NaN], ["Abstract SARS-CoV-2 emerged in China at the end of 2019 and has rapidly become a pandemic with roughly 2.7 million recorded COVID-19 cases and greater than 189,000 recorded deaths by April 23rd, 2020 (www.WHO.org). There are no FDA approved antivirals or vaccines for any coronavirus, including SARS-CoV-2. 
Current treatments for COVID-19 are limited to supportive therapies and off-label use of FDA approved drugs. Rapid development and human testing of potential antivirals is greatly needed. A quick way to test compounds with potential antiviral activity is through drug repurposing. Numerous drugs are already approved for human use and subsequently there is a good understanding of their safety profiles and potential side effects, making them easier to fast-track to clinical studies in COVID-19 patients. Here, we present data on the antiviral activity of 20 FDA approved drugs against SARS-CoV-2 that also inhibit SARS-CoV and MERS-CoV. We found that 17 of these inhibit SARS-CoV-2 at a range of IC50 values at non-cytotoxic concentrations. We directly follow up with seven of these to demonstrate all are capable of inhibiting infectious SARS-CoV-2 production. Moreover, we have evaluated two of these, chloroquine and chlorpromazine, in vivo using a mouse-adapted SARS-CoV model and found both drugs protect mice from clinical disease.", "which Has participant ?", "chloroquine and chlorpromazine", 1339.0, 1369.0], ["A high-throughput screen of the NIH molecular libraries sample collection and subsequent optimization of a lead dipeptide-like series of severe acute respiratory syndrome (SARS) main protease (3CLpro) inhibitors led to the identification of probe compound ML188 (16-(R), (R)-N-(4-(tert-butyl)phenyl)-N-(2-(tert-butylamino)-2-oxo-1-(pyridin-3-yl)ethyl)furan-2-carboxamide, Pubchem CID: 46897844). Unlike the majority of reported coronavirus 3CLpro inhibitors that act via covalent modification of the enzyme, 16-(R) is a noncovalent SARS-CoV 3CLpro inhibitor with moderate MW and good enzyme and antiviral inhibitory activity. A multicomponent Ugi reaction was utilized to rapidly explore structure-activity relationships within S(1'), S(1), and S(2) enzyme binding pockets. The X-ray structure of SARS-CoV 3CLpro bound with 16-(R) was instrumental in guiding subsequent rounds of chemistry optimization. 16-(R) provides an excellent starting point for the further design and refinement of 3CLpro inhibitors that act by a noncovalent mechanism of action.", "which Has participant ?", "virus", NaN, NaN], ["Abstract SARS-CoV-2 emerged in China at the end of 2019 and has rapidly become a pandemic with roughly 2.7 million recorded COVID-19 cases and greater than 189,000 recorded deaths by April 23rd, 2020 (www.WHO.org). There are no FDA approved antivirals or vaccines for any coronavirus, including SARS-CoV-2. Current treatments for COVID-19 are limited to supportive therapies and off-label use of FDA approved drugs. Rapid development and human testing of potential antivirals is greatly needed. A quick way to test compounds with potential antiviral activity is through drug repurposing. Numerous drugs are already approved for human use and subsequently there is a good understanding of their safety profiles and potential side effects, making them easier to fast-track to clinical studies in COVID-19 patients. Here, we present data on the antiviral activity of 20 FDA approved drugs against SARS-CoV-2 that also inhibit SARS-CoV and MERS-CoV. We found that 17 of these inhibit SARS-CoV-2 at a range of IC50 values at non-cytotoxic concentrations. We directly follow up with seven of these to demonstrate all are capable of inhibiting infectious SARS-CoV-2 production. 
Moreover, we have evaluated two of these, chloroquine and chlorpromazine, in vivo using a mouse-adapted SARS-CoV model and found both drugs protect mice from clinical disease.", "which Has participant ?", "virus", NaN, NaN], ["The rising threat of pandemic viruses, such as SARS-CoV-2, requires development of new preclinical discovery platforms that can more rapidly identify therapeutics that are active in vitro and also translate in vivo. Here we show that human organ-on-a-chip (Organ Chip) microfluidic culture devices lined by highly differentiated human primary lung airway epithelium and endothelium can be used to model virus entry, replication, strain-dependent virulence, host cytokine production, and recruitment of circulating immune cells in response to infection by respiratory viruses with great pandemic potential. We provide a first demonstration of drug repurposing by using oseltamivir in influenza A virus-infected organ chip cultures and show that co-administration of the approved anticoagulant drug, nafamostat, can double oseltamivir\u2019s therapeutic time window. With the emergence of the COVID-19 pandemic, the Airway Chips were used to assess the inhibitory activities of approved drugs that showed inhibition in traditional cell culture assays only to find that most failed when tested in the Organ Chip platform. When administered in human Airway Chips under flow at a clinically relevant dose, one drug \u2013 amodiaquine - significantly inhibited infection by a pseudotyped SARS-CoV-2 virus. Proof of concept was provided by showing that amodiaquine and its active metabolite (desethylamodiaquine) also significantly reduce viral load in both direct infection and animal-to-animal transmission models of native SARS-CoV-2 infection in hamsters. These data highlight the value of Organ Chip technology as a more stringent and physiologically relevant platform for drug repurposing, and suggest that amodiaquine should be considered for future clinical testing.", "which Has participant ?", "virus", 403.0, 408.0], ["Drug repositioning is the only feasible option to immediately address the COVID-19 global challenge. We screened a panel of 48 FDA-approved drugs against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) which were preselected by an assay of SARS-CoV. We identified 24 potential antiviral drug candidates against SARS-CoV-2 infection. Some drug candidates showed very low 50% inhibitory concentrations (IC 50 s), and in particular, two FDA-approved drugs\u2014niclosamide and ciclesonide\u2014were notable in some respects.", "which Has participant ?", "virus", NaN, NaN], ["The rising threat of pandemic viruses, such as SARS-CoV-2, requires development of new preclinical discovery platforms that can more rapidly identify therapeutics that are active in vitro and also translate in vivo. Here we show that human organ-on-a-chip (Organ Chip) microfluidic culture devices lined by highly differentiated human primary lung airway epithelium and endothelium can be used to model virus entry, replication, strain-dependent virulence, host cytokine production, and recruitment of circulating immune cells in response to infection by respiratory viruses with great pandemic potential. We provide a first demonstration of drug repurposing by using oseltamivir in influenza A virus-infected organ chip cultures and show that co-administration of the approved anticoagulant drug, nafamostat, can double oseltamivir\u2019s therapeutic time window. 
With the emergence of the COVID-19 pandemic, the Airway Chips were used to assess the inhibitory activities of approved drugs that showed inhibition in traditional cell culture assays only to find that most failed when tested in the Organ Chip platform. When administered in human Airway Chips under flow at a clinically relevant dose, one drug \u2013 amodiaquine - significantly inhibited infection by a pseudotyped SARS-CoV-2 virus. Proof of concept was provided by showing that amodiaquine and its active metabolite (desethylamodiaquine) also significantly reduce viral load in both direct infection and animal-to-animal transmission models of native SARS-CoV-2 infection in hamsters. These data highlight the value of Organ Chip technology as a more stringent and physiologically relevant platform for drug repurposing, and suggest that amodiaquine should be considered for future clinical testing.", "which Has participant ?", "SARS-CoV-2", 47.0, 57.0], ["Abstract To identify potential therapeutic stop-gaps for SARS-CoV-2, we evaluated a library of 1,670 approved and reference compounds in an unbiased, cellular image-based screen for their ability to suppress the broad impacts of the SARS-CoV-2 virus on phenomic profiles of human renal cortical epithelial cells using deep learning. In our assay, remdesivir is the only antiviral tested with strong efficacy, neither chloroquine nor hydroxychloroquine have any beneficial effect in this human cell model, and a small number of compounds not currently being pursued clinically for SARS-CoV-2 have efficacy. We observed weak but beneficial class effects of \u03b2-blockers, mTOR/PI3K inhibitors and Vitamin D analogues and a mild amplification of the viral phenotype with \u03b2-agonists.", "which Has participant ?", "virus", 244.0, 249.0], ["We describe here the design, synthesis, molecular modeling, and biological evaluation of a series of small molecule, nonpeptide inhibitors of SARS-CoV PLpro. Our initial lead compound was identified via high-throughput screening of a diverse chemical library. We subsequently carried out structure-activity relationship studies and optimized the lead structure to potent inhibitors that have shown antiviral activity against SARS-CoV infected Vero E6 cells. Upon the basis of the X-ray crystal structure of inhibitor 24-bound to SARS-CoV PLpro, a drug design template was created. Our structure-based modification led to the design of a more potent inhibitor, 2 (enzyme IC(50) = 0.46 microM; antiviral EC(50) = 6 microM). Interestingly, its methylamine derivative, 49, displayed good enzyme inhibitory potency (IC(50) = 1.3 microM) and the most potent SARS antiviral activity (EC(50) = 5.2 microM) in the series. We have carried out computational docking studies and generated a predictive 3D-QSAR model for SARS-CoV PLpro inhibitors.", "which Has participant ?", "Vero E6 cells", 443.0, 456.0], ["A SEIR simulation model for the COVID-19 pandemic was developed (http://covidsim.eu) and applied to a hypothetical European country of 10 million population. Our results show which interventions potentially push the epidemic peak into the subsequent year (when vaccinations may be available) or which fail. 
Different levels of control (via contact reduction) resulted in 22% to 63% of the population sick, 0.2% to 0.6% hospitalised, and 0.07% to 0.28% dead (n=6,450 to 28,228).", "which Total Deaths ?", "6,450", 460.0, 465.0], ["A SEIR simulation model for the COVID-19 pandemic was developed (http://covidsim.eu) and applied to a hypothetical European country of 10 million population. Our results show which interventions potentially push the epidemic peak into the subsequent year (when vaccinations may be available) or which fail. Different levels of control (via contact reduction) resulted in 22% to 63% of the population sick, 0.2% to 0.6% hospitalised, and 0.07% to 0.28% dead (n=6,450 to 28,228).", "which Total Deaths ?", "28,228", 469.0, 475.0], ["Self-sustaining human-to-human transmission of the novel coronavirus (2019-nCov) is the only plausible explanation of the scale of the outbreak in Wuhan. We estimate that, on average, each case infected 2.6 (uncertainty range: 1.5-3.5) other people up to 18 January 2020, based on an analysis combining our past estimates of the size of the outbreak in Wuhan with computational modelling of potential epidemic trajectories. This implies that control measures need to block well over 60% of transmission to be effective in controlling the outbreak. It is likely, based on the experience of SARS and MERS-CoV, that the number of secondary cases caused by a case of 2019-nCoV is highly variable \u2013 with many cases causing no secondary infections, and a few causing many. Whether transmission is continuing at the same rate currently depends on the effectiveness of current control measures implemented in China and the extent to which the populations of affected areas have adopted risk-reducing behaviours. In the absence of antiviral drugs or vaccines, control relies upon the prompt detection and isolation of symptomatic cases. It is unclear at the current time whether this outbreak can be contained within China; uncertainties include the severity spectrum of the disease caused by this virus and whether cases with relatively mild symptoms are able to transmit the virus efficiently. Identification and testing of potential cases need to be as extensive as is permitted by healthcare and diagnostic testing capacity \u2013 including the identification, testing and isolation of suspected cases with only mild to moderate disease (e.g. influenza-like illness), when logistically feasible.", "which 95% Confidence interval ?", "1.5-3.5", 227.0, 234.0], ["Abstract Backgrounds An ongoing outbreak of a novel coronavirus (2019-nCoV) pneumonia hit a major city of China, Wuhan, December 2019 and subsequently reached other provinces/regions of China and countries. We present estimates of the basic reproduction number, R 0 , of 2019-nCoV in the early phase of the outbreak. Methods Accounting for the impact of the variations in disease reporting rate, we modelled the epidemic curve of 2019-nCoV cases time series, in mainland China from January 10 to January 24, 2020, through the exponential growth. With the estimated intrinsic growth rate ( \u03b3 ), we estimated R 0 by using the serial intervals (SI) of two other well-known coronavirus diseases, MERS and SARS, as approximations for the true unknown SI. Findings The early outbreak data largely follows the exponential growth. We estimated that the mean R 0 ranges from 2.24 (95%CI: 1.96-2.55) to 3.58 (95%CI: 2.89-4.39) associated with 8-fold to 2-fold increase in the reporting rate. 
We demonstrated that changes in reporting rate substantially affect estimates of R 0 . Conclusion The mean estimate of R 0 for the 2019-nCoV ranges from 2.24 to 3.58, and significantly larger than 1. Our findings indicate the potential of 2019-nCoV to cause outbreaks.", "which 95% Confidence interval ?", "1.96-2.55", 879.0, 888.0], ["Since first identified, the epidemic scale of the recently emerged novel coronavirus (2019-nCoV) in Wuhan, China, has increased rapidly, with cases arising across China and other countries and regions. using a transmission model, we estimate a basic reproductive number of 3.11 (95%CI, 2.39-4.13); 58-76% of transmissions must be prevented to stop increasing; Wuhan case ascertainment of 5.0% (3.6-7.4); 21022 (11090-33490) total infections in Wuhan 1 to 22 January.", "which 95% Confidence interval ?", "2.39-4.13", 286.0, 295.0], ["Abstract Backgrounds An ongoing outbreak of a novel coronavirus (2019-nCoV) pneumonia hit a major city of China, Wuhan, December 2019 and subsequently reached other provinces/regions of China and countries. We present estimates of the basic reproduction number, R 0 , of 2019-nCoV in the early phase of the outbreak. Methods Accounting for the impact of the variations in disease reporting rate, we modelled the epidemic curve of 2019-nCoV cases time series, in mainland China from January 10 to January 24, 2020, through the exponential growth. With the estimated intrinsic growth rate ( \u03b3 ), we estimated R 0 by using the serial intervals (SI) of two other well-known coronavirus diseases, MERS and SARS, as approximations for the true unknown SI. Findings The early outbreak data largely follows the exponential growth. We estimated that the mean R 0 ranges from 2.24 (95%CI: 1.96-2.55) to 3.58 (95%CI: 2.89-4.39) associated with 8-fold to 2-fold increase in the reporting rate. We demonstrated that changes in reporting rate substantially affect estimates of R 0 . Conclusion The mean estimate of R 0 for the 2019-nCoV ranges from 2.24 to 3.58, and significantly larger than 1. Our findings indicate the potential of 2019-nCoV to cause outbreaks.", "which 95% Confidence interval ?", "2.89-4.39", 906.0, 915.0], ["Background The 2019 novel Coronavirus (COVID-19) emerged in Wuhan, China in December 2019 and has been spreading rapidly in China. Decisions about its pandemic threat and the appropriate level of public health response depend heavily on estimates of its basic reproduction number and assessments of interventions conducted in the early stages of the epidemic. Methods We conducted a mathematical modeling study using five independent methods to assess the basic reproduction number (R0) of COVID-19, using data on confirmed cases obtained from the China National Health Commission for the period 10th January to 8th February. We analyzed the data for the period before the closure of Wuhan city (10th January to 23rd January) and the post-closure period (23rd January to 8th February) and for the whole period, to assess both the epidemic risk of the virus and the effectiveness of the closure of Wuhan city on spread of COVID-19. Findings Before the closure of Wuhan city the basic reproduction number of COVID-19 was 4.38 (95% CI: 3.63-5.13), dropping to 3.41 (95% CI: 3.16-3.65) after the closure of Wuhan city. Over the entire epidemic period COVID-19 had a basic reproduction number of 3.39 (95% CI: 3.09-3.70), indicating it has a very high transmissibility. 
Interpretation COVID-19 is a highly transmissible virus with a very high risk of epidemic outbreak once it emerges in metropolitan areas. The closure of Wuhan city was effective in reducing the severity of the epidemic, but even after closure of the city and the subsequent expansion of that closure to other parts of Hubei the virus remained extremely infectious. Emergency planners in other cities should consider this high infectiousness when considering responses to this virus.", "which 95% Confidence interval ?", "3.09-3.70", 1205.0, 1214.0], ["Background The 2019 novel Coronavirus (COVID-19) emerged in Wuhan, China in December 2019 and has been spreading rapidly in China. Decisions about its pandemic threat and the appropriate level of public health response depend heavily on estimates of its basic reproduction number and assessments of interventions conducted in the early stages of the epidemic. Methods We conducted a mathematical modeling study using five independent methods to assess the basic reproduction number (R0) of COVID-19, using data on confirmed cases obtained from the China National Health Commission for the period 10th January to 8th February. We analyzed the data for the period before the closure of Wuhan city (10th January to 23rd January) and the post-closure period (23rd January to 8th February) and for the whole period, to assess both the epidemic risk of the virus and the effectiveness of the closure of Wuhan city on spread of COVID-19. Findings Before the closure of Wuhan city the basic reproduction number of COVID-19 was 4.38 (95% CI: 3.63-5.13), dropping to 3.41 (95% CI: 3.16-3.65) after the closure of Wuhan city. Over the entire epidemic period COVID-19 had a basic reproduction number of 3.39 (95% CI: 3.09-3.70), indicating it has a very high transmissibility. Interpretation COVID-19 is a highly transmissible virus with a very high risk of epidemic outbreak once it emerges in metropolitan areas. The closure of Wuhan city was effective in reducing the severity of the epidemic, but even after closure of the city and the subsequent expansion of that closure to other parts of Hubei the virus remained extremely infectious. Emergency planners in other cities should consider this high infectiousness when considering responses to this virus.", "which 95% Confidence interval ?", "3.16-3.65", 1071.0, 1080.0], ["Background: In December 2019, an outbreak of coronavirus disease (COVID-19)was identified in Wuhan, China and, later on, detected in other parts of China. Our aim is to evaluate the effectiveness of the evolution of interventions and self-protection measures, estimate the risk of partial lifting control measures and predict the epidemic trend of the virus in mainland China excluding Hubei province based on the published data and a novel mathematical model. Methods: A novel COVID-19 transmission dynamic model incorporating the intervention measures implemented in China is proposed. We parameterize the model by using the Markov Chain Monte Carlo (MCMC) method and estimate the control reproduction number Rc, as well as the effective daily reproduction ratio Re(t), of the disease transmission in mainland China excluding Hubei province. 
Results: The estimation outcomes indicate that the control reproduction number is 3.36 (95% CI 3.20-3.64) and Re(t) has dropped below 1 since January 31st, 2020, which implies that the containment strategies implemented by the Chinese government in mainland China excluding Hubei province are indeed effective and magnificently suppressed COVID-19 transmission. Moreover, our results show that relieving personal protection too early may lead to the spread of disease for a longer time and more people would be infected, and may even cause epidemic or outbreak again. By calculating the effective reproduction ratio, we proved that the contact rate should be kept at least less than 30% of the normal level by April, 2020. Conclusions: To ensure the epidemic ending rapidly, it is necessary to maintain the current integrated restrict interventions and self-protection measures, including travel restriction, quarantine of entry, contact tracing followed by quarantine and isolation and reduction of contact, like wearing masks, etc. People should be fully aware of the real-time epidemic situation and keep sufficient personal protection until April. If all the above conditions are met, the outbreak is expected to be ended by April in mainland China apart from Hubei province.", "which 95% Confidence interval ?", "3.20-3.64", 939.0, 948.0], ["Background The 2019 novel Coronavirus (COVID-19) emerged in Wuhan, China in December 2019 and has been spreading rapidly in China. Decisions about its pandemic threat and the appropriate level of public health response depend heavily on estimates of its basic reproduction number and assessments of interventions conducted in the early stages of the epidemic. Methods We conducted a mathematical modeling study using five independent methods to assess the basic reproduction number (R0) of COVID-19, using data on confirmed cases obtained from the China National Health Commission for the period 10th January to 8th February. We analyzed the data for the period before the closure of Wuhan city (10th January to 23rd January) and the post-closure period (23rd January to 8th February) and for the whole period, to assess both the epidemic risk of the virus and the effectiveness of the closure of Wuhan city on spread of COVID-19. Findings Before the closure of Wuhan city the basic reproduction number of COVID-19 was 4.38 (95% CI: 3.63-5.13), dropping to 3.41 (95% CI: 3.16-3.65) after the closure of Wuhan city. Over the entire epidemic period COVID-19 had a basic reproduction number of 3.39 (95% CI: 3.09-3.70), indicating it has a very high transmissibility. Interpretation COVID-19 is a highly transmissible virus with a very high risk of epidemic outbreak once it emerges in metropolitan areas. The closure of Wuhan city was effective in reducing the severity of the epidemic, but even after closure of the city and the subsequent expansion of that closure to other parts of Hubei the virus remained extremely infectious. Emergency planners in other cities should consider this high infectiousness when considering responses to this virus.", "which 95% Confidence interval ?", "3.63-5.13", 1033.0, 1042.0], ["English Abstract: Background: Since the emergence of the first pneumonia cases in Wuhan, China, the novel coronavirus (2019-nCov) infection has been quickly spreading out to other provinces and neighbouring countries. 
Estimation of the basic reproduction number by means of mathematical modelling can be helpful for determining the potential and severity of an outbreak, and providing critical information for identifying the type of disease interventions and intensity. Methods: A deterministic compartmental model was devised based on the clinical progression of the disease, epidemiological status of the individuals, and the intervention measures. Findings: The estimation results based on likelihood and model analysis reveal that the control reproduction number may be as high as 6.47 (95% CI 5.71-7.23). Sensitivity analyses reveal that interventions, such as intensive contact tracing followed by quarantine and isolation, can effectively reduce the control reproduction number and transmission risk, with the effect of travel restriction of Wuhan on 2019-nCov infection in Beijing being almost equivalent to increasing quarantine by 100-thousand baseline value. Interpretation: It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities can contribute to the prevention and control of the 2019-nCov infection, and how long should be maintained. Under the most restrictive measures, the outbreak is expected to peak within two weeks (since January 23rd 2020) with significant low peak value. With travel restriction (no imported exposed individuals to Beijing), the number of infected individuals in 7 days will decrease by 91.14% in Beijing, compared with the scenario of no travel restriction. Mandarin Abstract: Background: Since the appearance of the first pneumonia cases in Wuhan, China, the novel coronavirus (2019-nCov) infection has spread rapidly to other provinces and neighbouring countries. Estimating the basic reproduction number through mathematical modelling helps determine the potential and severity of an outbreak and provides key information for identifying the type and intensity of disease interventions. Methods: A deterministic compartmental model was designed according to the clinical progression of the disease, the epidemiological status of individuals, and the intervention measures. Results: Estimates based on the likelihood function and model analysis indicate that the control reproduction number may be as high as 6.47 (95% CI 5.71-7.23). Sensitivity analyses show that interventions such as intensive contact tracing and quarantine can effectively reduce the control reproduction number and transmission risk; the effect of the Wuhan lockdown on 2019-nCov infection in Beijing is almost equivalent to increasing quarantine by a baseline value of 100 thousand. 
Interpretation: It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities can help prevent and control 2019-nCov infection, and for how long they should be maintained. Under the most restrictive measures, the outbreak is expected to peak within two weeks (from January 23rd, 2020) with a relatively low peak value. Compared with the scenario of no travel restriction, with travel restriction (i.e., no imported exposed individuals entering Beijing) the number of infected individuals in Beijing within 7 days will decrease by 91.14%.", "which 95% Confidence interval ?", "5.71-7.23", 799.0, 808.0], ["OBJECTIVE To determine whether therapeutic concentrations of levetiracetam can be achieved in cats and to establish reasonable i.v. and oral dosing intervals that would not be associated with adverse effects in cats. ANIMALS 10 healthy purpose-bred cats. PROCEDURES In a randomized crossover study, levetiracetam (20 mg/kg) was administered orally and i.v. to each cat. Blood samples were collected 0, 10, 20, and 40 minutes and 1, 1.5, 2, 3, 4, 6, 9, 12, and 24 hours after administration. 
Plasma levetiracetam concentrations were determined via high-performance liquid chromatography. RESULTS Mean \u00b1 SD peak concentration was 25.54 \u00b1 7.97 \u03bcg/mL. The mean y-intercept for i.v. administration was 37.52 \u00b1 6.79 \u03bcg/mL. Half-life (harmonic mean \u00b1 pseudo-SD) was 2.95 \u00b1 0.95 hours and 2.86 \u00b1 0.65 hours for oral and i.v. administration, respectively. Mean volume of distribution at steady state was 0.52 \u00b1 0.09 L/kg, and mean clearance was 2.0 \u00b1 0.60 mL/kg/min. Mean oral bioavailability was 102 \u00b1 39%. Plasma drug concentrations were maintained in the therapeutic range reported for humans (5 to 45 \u03bcg/mL) for at least 9 hours after administration in 7 of 10 cats. Only mild, transient hypersalivation was evident in some cats after oral administration. CONCLUSIONS AND CLINICAL RELEVANCE Levetiracetam (20 mg/kg) administered orally or i.v. to cats every 8 hours should achieve and maintain concentrations within the therapeutic range for humans. Levetiracetam administration has favorable pharmacokinetics for clinical use, was apparently tolerated well, and may be a reasonable alternative antiepileptic drug in cats.", "which Disease definitions (characterization) ?", "clear", NaN, NaN], ["Background The diagnosis of feline epilepsy of unknown cause (EUC) requires a thorough diagnostic evaluation, otherwise the prevalence of EUC could be overestimated. Hypothesis Feline EUC is a clinically defined disease entity, which differs from feline hippocampal necrosis by the absence of magnetic resonance imaging (MRI) signal alteration of the hippocampus. The objectives of this study were (1) to evaluate the prevalence of EUC in a hospital population of cats by applying well\u2010defined inclusion criteria, and (2) to describe the clinical course of EUC. Animals Eighty\u2010one cats with recurrent seizures. Methods Retrospective study\u2014medical records were reviewed for cats presented for evaluation of recurrent seizures (2005\u20132010). Inclusion criteria were a defined diagnosis based on laboratory data, and either MRI or histopathology. Final outcome was confirmed by telephone interview with the owner. Magnetic resonance images were reviewed to evaluate hippocampal morphology and signal alterations. Results Epilepsy of unknown cause was diagnosed in 22% of cats with epilepsy. Physical, neurologic, and laboratory examinations, and either 1.5 T MRI and cerebrospinal fluid analysis or postmortem examination failed to identify an underlying cause. Cats with EUC had a higher survival rate (P < .05) and seizure remission occurred frequently (44.4%). Conclusion and Clinical Importance A detailed clinical evaluation and diagnostic imaging with MRI is recommended in any cat with recurrent seizures. The prognosis of cats with normal MRI findings and a clinical diagnosis of EUC are good. Standardized imaging guidelines should be established to assess the hippocampus in cats.", "which Disease definitions (characterization) ?", "well", 481.0, 485.0], ["CASE DESCRIPTION A 10-year-old domestic shorthair cat was evaluated because of presumed seizures. CLINICAL FINDINGS The cat had intermittent mydriasis, hyperthermia, and facial twitching. Findings of MRI and CSF sample analysis were unremarkable, and results of infectious disease testing were negative. Treatment was initiated with phenobarbital, zonisamide, and levetiracetam; however, the presumed seizure activity continued. 
Results of analysis of continuous electroencephalographic recording indicated the cat had nonconvulsive status epilepticus. TREATMENT AND OUTCOME The cat was treated with phenobarbital IV (6 mg/kg [2.7 mg/lb] q 30 min during a 9-hour period; total dose, 108 mg/kg [49.1 mg/lb]); treatment was stopped when a burst-suppression electroencephalographic pattern was detected. During this high-dose phenobarbital treatment period, an endotracheal tube was placed and the cat was monitored and received fluids, hetastarch, and dopamine IV. Continuous mechanical ventilation was not required. After treatment, the cat developed unclassified cardiomyopathy, azotemia, anemia, and pneumonia. These problems resolved during a 9-month period. CLINICAL RELEVANCE Findings for the cat of this report indicated electroencephalographic evidence of nonconvulsive status epilepticus. Administration of a high total dose of phenobarbital and monitoring of treatment by use of electroencephalography were successful for resolution of the problem, and treatment sequelae resolved.", "which Pre-treatment SF (seizures/ month or year) ?", "continuous", 452.0, 462.0], ["OBJECTIVE To determine whether therapeutic concentrations of levetiracetam can be achieved in cats and to establish reasonable i.v. and oral dosing intervals that would not be associated with adverse effects in cats. ANIMALS 10 healthy purpose-bred cats. PROCEDURES In a randomized crossover study, levetiracetam (20 mg/kg) was administered orally and i.v. to each cat. Blood samples were collected 0, 10, 20, and 40 minutes and 1, 1.5, 2, 3, 4, 6, 9, 12, and 24 hours after administration. Plasma levetiracetam concentrations were determined via high-performance liquid chromatography. RESULTS Mean \u00b1 SD peak concentration was 25.54 \u00b1 7.97 \u03bcg/mL. The mean y-intercept for i.v. administration was 37.52 \u00b1 6.79 \u03bcg/mL. Half-life (harmonic mean \u00b1 pseudo-SD) was 2.95 \u00b1 0.95 hours and 2.86 \u00b1 0.65 hours for oral and i.v. administration, respectively. Mean volume of distribution at steady state was 0.52 \u00b1 0.09 L/kg, and mean clearance was 2.0 \u00b1 0.60 mL/kg/min. Mean oral bioavailability was 102 \u00b1 39%. Plasma drug concentrations were maintained in the therapeutic range reported for humans (5 to 45 \u03bcg/mL) for at least 9 hours after administration in 7 of 10 cats. Only mild, transient hypersalivation was evident in some cats after oral administration. CONCLUSIONS AND CLINICAL RELEVANCE Levetiracetam (20 mg/kg) administered orally or i.v. to cats every 8 hours should achieve and maintain concentrations within the therapeutic range for humans. Levetiracetam administration has favorable pharmacokinetics for clinical use, was apparently tolerated well, and may be a reasonable alternative antiepileptic drug in cats.", "which allocation concealment ?", "high", 547.0, 551.0], ["Phenobarbital was administered to eight healthy cats as a single intravenous dose of 10 mg/kg. Serum phenobarbital concentrations were determined using an immunoassay technique. The intravenous data were fitted to one-, two- and three-compartment models. After statistical comparison of the three models, a two-compartment model was selected. Following intravenous administration, the drug was rapidly distributed (distribution half-life = 0.046 +/- 0.007 h) with a large apparent volume of distribution (931 +/- 44.8 mL/kg). Subsequent elimination of phenobarbital from the body was slow (elimination half-life = 58.8 +/- 4.21 h). 
Three weeks later, a single oral dose of phenobarbital (10 mg/kg) was administered to the same group of cats. A one-compartment model with an input component was used to describe the results. After oral administration, the initial rapid absorption phase (absorption half-life = 0.382 +/- 0.099 h) was followed by a plateau in the serum concentration (13.5 +/- 0.148 micrograms/mL) for approximately 10 h. The half-life of the terminal elimination phase (76.1 +/- 6.96 h) was not significantly different from the half-life determined for the intravenous route. Bioavailability of the oral drug was high (F = 1.20 +/- 0.120). Based on the pharmacokinetic parameters determined in this study, phenobarbital appears to be a suitable drug for use as an anticonvulsant in the cat.", "which allocation concealment ?", "high", 1229.0, 1233.0], ["OBJECTIVE To determine whether therapeutic concentrations of levetiracetam can be achieved in cats and to establish reasonable i.v. and oral dosing intervals that would not be associated with adverse effects in cats. ANIMALS 10 healthy purpose-bred cats. PROCEDURES In a randomized crossover study, levetiracetam (20 mg/kg) was administered orally and i.v. to each cat. Blood samples were collected 0, 10, 20, and 40 minutes and 1, 1.5, 2, 3, 4, 6, 9, 12, and 24 hours after administration. Plasma levetiracetam concentrations were determined via high-performance liquid chromatography. RESULTS Mean \u00b1 SD peak concentration was 25.54 \u00b1 7.97 \u03bcg/mL. The mean y-intercept for i.v. administration was 37.52 \u00b1 6.79 \u03bcg/mL. Half-life (harmonic mean \u00b1 pseudo-SD) was 2.95 \u00b1 0.95 hours and 2.86 \u00b1 0.65 hours for oral and i.v. administration, respectively. Mean volume of distribution at steady state was 0.52 \u00b1 0.09 L/kg, and mean clearance was 2.0 \u00b1 0.60 mL/kg/min. Mean oral bioavailability was 102 \u00b1 39%. Plasma drug concentrations were maintained in the therapeutic range reported for humans (5 to 45 \u03bcg/mL) for at least 9 hours after administration in 7 of 10 cats. Only mild, transient hypersalivation was evident in some cats after oral administration. CONCLUSIONS AND CLINICAL RELEVANCE Levetiracetam (20 mg/kg) administered orally or i.v. to cats every 8 hours should achieve and maintain concentrations within the therapeutic range for humans. Levetiracetam administration has favorable pharmacokinetics for clinical use, was apparently tolerated well, and may be a reasonable alternative antiepileptic drug in cats.", "which Blinding of outcome assessment ?", "high", 547.0, 551.0], ["Phenobarbital was administered to eight healthy cats as a single intravenous dose of 10 mg/kg. Serum phenobarbital concentrations were determined using an immunoassay technique. The intravenous data were fitted to one-, two- and three-compartment models. After statistical comparison of the three models, a two-compartment model was selected. Following intravenous administration, the drug was rapidly distributed (distribution half-life = 0.046 +/- 0.007 h) with a large apparent volume of distribution (931 +/- 44.8 mL/kg). Subsequent elimination of phenobarbital from the body was slow (elimination half-life = 58.8 +/- 4.21 h). Three weeks later, a single oral dose of phenobarbital (10 mg/kg) was administered to the same group of cats. A one-compartment model with an input component was used to describe the results. 
After oral administration, the initial rapid absorption phase (absorption half-life = 0.382 +/- 0.099 h) was followed by a plateau in the serum concentration (13.5 +/- 0.148 micrograms/mL) for approximately 10 h. The half-life of the terminal elimination phase (76.1 +/- 6.96 h) was not significantly different from the half-life determined for the intravenous route. Bioavailability of the oral drug was high (F = 1.20 +/- 0.120). Based on the pharmacokinetic parameters determined in this study, phenobarbital appears to be a suitable drug for use as an anticonvulsant in the cat.", "which Blinding of outcome assessment ?", "high", 1229.0, 1233.0], ["Phenobarbital was administered to eight healthy cats as a single intravenous dose of 10 mg/kg. Serum phenobarbital concentrations were determined using an immunoassay technique. The intravenous data were fitted to one-, two- and three-compartment models. After statistical comparison of the three models, a two-compartment model was selected. Following intravenous administration, the drug was rapidly distributed (distribution half-life = 0.046 +/- 0.007 h) with a large apparent volume of distribution (931 +/- 44.8 mL/kg). Subsequent elimination of phenobarbital from the body was slow (elimination half-life = 58.8 +/- 4.21 h). Three weeks later, a single oral dose of phenobarbital (10 mg/kg) was administered to the same group of cats. A one-compartment model with an input component was used to describe the results. After oral administration, the initial rapid absorption phase (absorption half-life = 0.382 +/- 0.099 h) was followed by a plateau in the serum concentration (13.5 +/- 0.148 micrograms/mL) for approximately 10 h. The half-life of the terminal elimination phase (76.1 +/- 6.96 h) was not significantly different from the half-life determined for the intravenous route. Bioavailability of the oral drug was high (F = 1.20 +/- 0.120). Based on the pharmacokinetic parameters determined in this study, phenobarbital appears to be a suitable drug for use as an anticonvulsant in the cat.", "which Randomization ?", "high", 1229.0, 1233.0], ["OBJECTIVE To determine whether therapeutic concentrations of levetiracetam can be achieved in cats and to establish reasonable i.v. and oral dosing intervals that would not be associated with adverse effects in cats. ANIMALS 10 healthy purpose-bred cats. PROCEDURES In a randomized crossover study, levetiracetam (20 mg/kg) was administered orally and i.v. to each cat. Blood samples were collected 0, 10, 20, and 40 minutes and 1, 1.5, 2, 3, 4, 6, 9, 12, and 24 hours after administration. Plasma levetiracetam concentrations were determined via high-performance liquid chromatography. RESULTS Mean \u00b1 SD peak concentration was 25.54 \u00b1 7.97 \u03bcg/mL. The mean y-intercept for i.v. administration was 37.52 \u00b1 6.79 \u03bcg/mL. Half-life (harmonic mean \u00b1 pseudo-SD) was 2.95 \u00b1 0.95 hours and 2.86 \u00b1 0.65 hours for oral and i.v. administration, respectively. Mean volume of distribution at steady state was 0.52 \u00b1 0.09 L/kg, and mean clearance was 2.0 \u00b1 0.60 mL/kg/min. Mean oral bioavailability was 102 \u00b1 39%. Plasma drug concentrations were maintained in the therapeutic range reported for humans (5 to 45 \u03bcg/mL) for at least 9 hours after administration in 7 of 10 cats. Only mild, transient hypersalivation was evident in some cats after oral administration. CONCLUSIONS AND CLINICAL RELEVANCE Levetiracetam (20 mg/kg) administered orally or i.v. 
to cats every 8 hours should achieve and maintain concentrations within the therapeutic range for humans. Levetiracetam administration has favorable pharmacokinetics for clinical use, was apparently tolerated well, and may be a reasonable alternative antiepileptic drug in cats.", "which Selective reporting ?", "high", 547.0, 551.0], ["Phenobarbital was administered to eight healthy cats as a single intravenous dose of 10 mg/kg. Serum phenobarbital concentrations were determined using an immunoassay technique. The intravenous data were fitted to one-, two- and three-compartment models. After statistical comparison of the three models, a two-compartment model was selected. Following intravenous administration, the drug was rapidly distributed (distribution half-life = 0.046 +/- 0.007 h) with a large apparent volume of distribution (931 +/- 44.8 mL/kg). Subsequent elimination of phenobarbital from the body was slow (elimination half-life = 58.8 +/- 4.21 h). Three weeks later, a single oral dose of phenobarbital (10 mg/kg) was administered to the same group of cats. A one-compartment model with an input component was used to describe the results. After oral administration, the initial rapid absorption phase (absorption half-life = 0.382 +/- 0.099 h) was followed by a plateau in the serum concentration (13.5 +/- 0.148 micrograms/mL) for approximately 10 h. The half-life of the terminal elimination phase (76.1 +/- 6.96 h) was not significantly different from the half-life determined for the intravenous route. Bioavailability of the oral drug was high (F = 1.20 +/- 0.120). Based on the pharmacokinetic parameters determined in this study, phenobarbital appears to be a suitable drug for use as an anticonvulsant in the cat.", "which Selective reporting ?", "high", 1229.0, 1233.0], ["With the eventual goal of making zonisamide (ZNS), a relatively new antiepileptic drug, available for the treatment of epilepsy in cats, the pharmacokinetics after a single oral administration at 10 mg/kg and the toxicity after 9-week daily administration of 20 mg/kg/day of ZNS were studied in healthy cats. Pharmacokinetic parameters obtained with a single administration of ZNS at 10 mg/day were as follows: C max =13.1 \u03bcg/ml; T max =4.0 h; T 1/2 =33.0 h; areas under the curves (AUCs)=720.3 \u03bcg/mlh (values represent the medians). The study with daily administrations revealed that the toxicity of ZNS was comparatively low in cats, suggesting that it may be an available drug for cats. However, half of the cats that were administered 20 mg/kg/day daily showed adverse reactions such as anorexia, diarrhoea, vomiting, somnolence and locomotor ataxia.", "which Incomplete outcome data ?", "low", 623.0, 626.0], ["We report an evaluation of the treatment and outcome of cats with suspected primary epilepsy. Phenobarbital therapy was used alone or in combination with other anti-epileptic drugs. Outcome after treatment was evaluated mainly on the basis of number of seizures per year and categorised into four groups: seizure-free, good control (1\u20135 seizures per year), moderate control (6\u201310 seizures per year) and poor control (more than 10 seizures per year). About 40\u201350% of cases became seizure-free, 20\u201330% were considered good-to-moderately controlled and about 30% were poorly controlled depending on the year of treatment considered. The duration of seizure events after treatment decreased in 26/36 cats and was unchanged in eight cats. 
The subjective severity of seizure also decreased in 25 cats and was unchanged in nine cats. Twenty-six cats had a good quality of life, nine cats an impaired quality of life and one cat a bad quality of life. Despite being free of seizures for years, cessation of treatment may lead to recurrence of seizures in most cats.", "which Study groups ?", "moderate", 357.0, 365.0], ["In this paper, we present a tool called X2OWL that aims at building an OWL ontology from an XML datasource. This method is based on XML schema to automatically generate the ontology structure, as well as, a set of mapping bridges. The presented method also includes a refinement step that allows to clean the mapping bridges and possibly to restructure the generated ontology.", "which rule ?", "Automatic", NaN, NaN], ["One of the promises of the Semantic Web is to support applications that easily and seamlessly deal with heterogeneous data. Most data on the Web, however, is in the Extensible Markup Language (XML) format, but using XML requires applications to understand the format of each data source that they access. To achieve the benefits of the Semantic Web involves transforming XML into the Semantic Web language, OWL (Ontology Web Language), a process that generally has manual or only semi-automatic components. In this paper we present a set of patterns that enable the direct, automatic transformation from XML Schema into OWL allowing the integration of much XML data in the Semantic Web. We focus on an advanced logical representation of XML Schema components and present an implementation, including a comparison with related work.", "which rule ?", "Automatic", 485.0, 494.0], ["XML has become the de-facto standard of data exchange format in E-businesses. Although XML can support syntactic inter-operability, problems arise when data sources represented as XML documents are needed to be integrated. The reason is that XML lacks support for efficient sharing of conceptualization. The Web Ontology Language (OWL) can play an important role here as it can enable semantic inter-operability, and it supports the representation of domain knowledge using classes, properties and instances for applications. In many applications it is required to convert huge XML documents automatically to OWL ontologies, which is receiving a lot of attention. There are some existing converters for this job. Unfortunately they have serious shortcomings, e. g., they do not address the handling of characteristics like internal references, (transitive) import(s), include etc. which are commonly used in XML Schemas. To alleviate these drawbacks, we propose a new framework for mapping XML to OWL automatically. We illustrate our technique on examples to show the efficacy of our approach. We also provide the performance measures of our approach on some standard datasets. We also check the correctness of the conversion process.", "which rule ?", "Automatic", NaN, NaN], ["The eXtensible Markup Language (XML) can be used as data exchange format in different domains. It allows different parties to exchange data by providing common understanding of the basic concepts in the domain. XML covers the syntactic level, but lacks support for reasoning. Ontology can provide a semantic representation of domain knowledge which supports efficient reasoning and expressive power. One of the most popular ontology languages is the Web Ontology Language (OWL). 
It can represent domain knowledge using classes, properties, axioms and instances for the use in a distributed environment such as the World Wide Web. This paper presents a new method for automatic generation of OWL ontology from XML data sources.", "which rule ?", "Automatic", 667.0, 676.0], ["DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.", "which rule ?", "Automatic", NaN, NaN], ["Deep neural networks (DNNs) have become the gold standard for solving challenging classification problems, especially given complex sensor inputs (e.g., images and video). While DNNs are powerful, they are also brittle, and their inner workings are not fully understood by humans, leading to their use as \u201cblack-box\u201d models. DNNs often generalize poorly when provided new data sampled from slightly shifted distributions; DNNs are easily manipulated by adversarial examples; and the decision-making process of DNNs can be difficult for humans to interpret. To address these challenges, we propose integrating DNNs with external sources of semantic knowledge. Large quantities of meaningful, formalized knowledge are available in knowledge graphs and other databases, many of which are publicly obtainable. But at present, these sources are inaccessible to deep neural methods, which can only exploit patterns in the signals they are given to classify. In this work, we conduct experiments on the ADE20K dataset, using scene classification as an example task where combining DNNs with external knowledge graphs can result in more robust and explainable models. We align the atomic concepts present in ADE20K (i.e., objects) to WordNet, a hierarchically-organized lexical database. Using this knowledge graph, we expand the concept categories which can be identified in ADE20K and relate these concepts in a hierarchical manner. The neural architecture we present performs scene classification using these concepts, illuminating a path toward DNNs which can efficiently exploit high-level knowledge in place of excessive quantities of direct sensory input. 
We hypothesize and experimentally validate that incorporating background knowledge via an external knowledge graph into a deep learning-based model should improve the explainability and robustness of the model.", "which Machine Learning Model Integration ?", "external", 619.0, 627.0], ["Visual Question Answering (VQA) has attracted much attention in both computer vision and natural language processing communities, not least because it offers insight into the relationships between two important sources of information. Current datasets, and the models built upon them, have focused on questions which are answerable by direct analysis of the question and image alone. The set of such questions that require no external information to answer is interesting, but very limited. It excludes questions which require common sense, or basic factual knowledge to answer, for example. Here we introduce FVQA (Fact-based VQA), a VQA dataset which requires, and supports, much deeper reasoning. FVQA primarily contains questions that require external information to answer. We thus extend a conventional visual question answering dataset, which contains image-question-answer triplets, through additional image-question-answer-supporting fact tuples. Each supporting-fact is represented as a structural triplet, such as <Cat,CapableOf,ClimbingTrees>. We evaluate several baseline models on the FVQA dataset, and describe a novel model which is capable of reasoning about an image on the basis of supporting-facts.", "which Machine Learning Model Integration ?", "external", 426.0, 434.0], ["Deep neural networks have achieved promising results in stock trend prediction. However, most of these models have two common drawbacks, including (i) current methods are not sensitive enough to abrupt changes of stock trend, and (ii) forecasting results are not interpretable for humans. To address these two problems, we propose a novel Knowledge-Driven Temporal Convolutional Network (KDTCN) for stock trend prediction and explanation. Firstly, we extract structured events from financial news, and utilize external knowledge from knowledge graph to obtain event embeddings. Then, we combine event embeddings and price values together to forecast stock trend. We evaluate the prediction accuracy to show how knowledge-driven events work on abrupt changes. We also visualize the effect of events and linkage among events based on knowledge graph, to explain why knowledge-driven events are common sources of abrupt changes. Experiments demonstrate that KDTCN can (i) react to abrupt changes much faster and outperform state-of-the-art methods on stock datasets, as well as (ii) facilitate the explanation of prediction particularly with abrupt changes.", "which Machine Learning Model Integration ?", "external", 510.0, 518.0], ["Machine learning explanation can significantly boost machine learning's application in decision making, but the usability of current methods is limited in human-centric explanation, especially for transfer learning, an important machine learning branch that aims at utilizing knowledge from one learning domain (i.e., a pair of dataset and prediction task) to enhance prediction model training in another learning domain. In this paper , we propose an ontology-based approach for human-centric explanation of transfer learning. 
Three kinds of knowledge-based explanatory evidence, with different granularities, including general factors, particular narrators and core contexts are first proposed and then inferred with both local ontologies and external knowledge bases. The evaluation with US flight data and DB-pedia has presented their confidence and availability in explaining the transferability of feature representation in flight departure delay forecasting.", "which Machine Learning Model Integration ?", "external", 745.0, 753.0], ["Lexical Semantic Change detection, i.e., the task of identifying words that change meaning over time, is a very active research area, with applications in NLP, lexicography, and linguistics. Evaluation is currently the most pressing problem in Lexical Semantic Change detection, as no gold standards are available to the community, which hinders progress. We present the results of the first shared task that addresses this gap by providing researchers with an evaluation framework and manually annotated, high-quality datasets for English, German, Latin, and Swedish. 33 teams submitted 186 systems, which were evaluated on two subtasks.", "which Languages ?", "German", 541.0, 547.0], ["In this paper, we describe our method for detection of lexical semantic change, i.e., word sense changes over time. We examine semantic differences between specific words in two corpora, chosen from different time periods, for English, German, Latin, and Swedish. Our method was created for the SemEval 2020 Task 1: Unsupervised Lexical Semantic Change Detection. We ranked 1st in Sub-task 1: binary change detection, and 4th in Sub-task 2: ranked change detection. We present our method which is completely unsupervised and language independent. It consists of preparing a semantic vector space for each corpus, earlier and later; computing a linear transformation between earlier and later spaces, using Canonical Correlation Analysis and orthogonal transformation;and measuring the cosines between the transformed vector for the target word from the earlier corpus and the vector for the target word in the later corpus.", "which Languages ?", "German", 236.0, 242.0], ["Lexical Semantic Change detection, i.e., the task of identifying words that change meaning over time, is a very active research area, with applications in NLP, lexicography, and linguistics. Evaluation is currently the most pressing problem in Lexical Semantic Change detection, as no gold standards are available to the community, which hinders progress. We present the results of the first shared task that addresses this gap by providing researchers with an evaluation framework and manually annotated, high-quality datasets for English, German, Latin, and Swedish. 33 teams submitted 186 systems, which were evaluated on two subtasks.", "which Languages ?", "English", 532.0, 539.0], ["In this paper, we describe our method for detection of lexical semantic change, i.e., word sense changes over time. We examine semantic differences between specific words in two corpora, chosen from different time periods, for English, German, Latin, and Swedish. Our method was created for the SemEval 2020 Task 1: Unsupervised Lexical Semantic Change Detection. We ranked 1st in Sub-task 1: binary change detection, and 4th in Sub-task 2: ranked change detection. We present our method which is completely unsupervised and language independent. 
It consists of preparing a semantic vector space for each corpus, earlier and later; computing a linear transformation between earlier and later spaces, using Canonical Correlation Analysis and orthogonal transformation;and measuring the cosines between the transformed vector for the target word from the earlier corpus and the vector for the target word in the later corpus.", "which Languages ?", "English", 227.0, 234.0], ["Lexical Semantic Change detection, i.e., the task of identifying words that change meaning over time, is a very active research area, with applications in NLP, lexicography, and linguistics. Evaluation is currently the most pressing problem in Lexical Semantic Change detection, as no gold standards are available to the community, which hinders progress. We present the results of the first shared task that addresses this gap by providing researchers with an evaluation framework and manually annotated, high-quality datasets for English, German, Latin, and Swedish. 33 teams submitted 186 systems, which were evaluated on two subtasks.", "which Languages ?", "Latin", 549.0, 554.0], ["In this paper, we describe our method for detection of lexical semantic change, i.e., word sense changes over time. We examine semantic differences between specific words in two corpora, chosen from different time periods, for English, German, Latin, and Swedish. Our method was created for the SemEval 2020 Task 1: Unsupervised Lexical Semantic Change Detection. We ranked 1st in Sub-task 1: binary change detection, and 4th in Sub-task 2: ranked change detection. We present our method which is completely unsupervised and language independent. It consists of preparing a semantic vector space for each corpus, earlier and later; computing a linear transformation between earlier and later spaces, using Canonical Correlation Analysis and orthogonal transformation;and measuring the cosines between the transformed vector for the target word from the earlier corpus and the vector for the target word in the later corpus.", "which Languages ?", "Latin", 244.0, 249.0], ["Lexical Semantic Change detection, i.e., the task of identifying words that change meaning over time, is a very active research area, with applications in NLP, lexicography, and linguistics. Evaluation is currently the most pressing problem in Lexical Semantic Change detection, as no gold standards are available to the community, which hinders progress. We present the results of the first shared task that addresses this gap by providing researchers with an evaluation framework and manually annotated, high-quality datasets for English, German, Latin, and Swedish. 33 teams submitted 186 systems, which were evaluated on two subtasks.", "which Languages ?", "Swedish", 560.0, 567.0], ["In this paper, we describe our method for detection of lexical semantic change, i.e., word sense changes over time. We examine semantic differences between specific words in two corpora, chosen from different time periods, for English, German, Latin, and Swedish. Our method was created for the SemEval 2020 Task 1: Unsupervised Lexical Semantic Change Detection. We ranked 1st in Sub-task 1: binary change detection, and 4th in Sub-task 2: ranked change detection. We present our method which is completely unsupervised and language independent. 
It consists of preparing a semantic vector space for each corpus, earlier and later; computing a linear transformation between earlier and later spaces, using Canonical Correlation Analysis and orthogonal transformation;and measuring the cosines between the transformed vector for the target word from the earlier corpus and the vector for the target word in the later corpus.", "which Languages ?", "Swedish", 255.0, 262.0], ["We present LatinISE, a Latin corpus for the Sketch Engine. LatinISE consists of Latin works comprising a total of 13 million words, covering the time span from the 2 nd century B. C. to the 21 st century A. D. LatinISE is provided with rich metadata mark-up, including author, title, genre, era, date and century, as well as book, section, paragraph and line of verses. We have automatically annotated LatinISE with lemma and part-of-speech information. The annotation enables the users to search the corpus with a number of criteria, ranging from lemma, part-of-speech, context, to subcorpora defined chronologically or by genre. We also illustrate word sketches, one-page summaries of a word\u2019s corpus-based collocational behaviour. Our future plan is to produce word sketches for Latin words by adding richer morphological and syntactic annotation to the corpus.", "which Languages ?", "Latin", 23.0, 28.0], ["Accurately answering a question about a given image requires combining observations with general knowledge. While this is effortless for humans, reasoning with general knowledge remains an algorithmic challenge. To advance research in this direction a novel `fact-based' visual question answering (FVQA) task has been introduced recently along with a large set of curated facts which link two entities, i.e., two possible answers, via a relation. Given a question-image pair, deep network techniques have been employed to successively reduce the large set of facts until one of the two entities of the final remaining fact is predicted as the answer. We observe that a successive process which considers one fact at a time to form a local decision is sub-optimal. Instead, we develop an entity graph and use a graph convolutional network to `reason' about the correct answer by jointly considering all entities. We show on the challenging FVQA dataset that this leads to an improvement in accuracy of around 7% compared to the state of the art.", "which Machine Learning Input ?", "image", 46.0, 51.0], ["Applications which use human speech as an input require a speech interface with high recognition accuracy. The words or phrases in the recognised text are annotated with a machine-understandable meaning and linked to knowledge graphs for further processing by the target application. These semantic annotations of recognised words can be represented as a subject-predicate-object triples which collectively form a graph often referred to as a knowledge graph. This type of knowledge representation facilitates to use speech interfaces with any spoken input application, since the information is represented in logical, semantic form, retrieving and storing can be followed using any web standard query languages. In this work, we develop a methodology for linking speech input to knowledge graphs and study the impact of recognition errors in the overall process. We show that for a corpus with lower WER, the annotation and linking of entities to the DBpedia knowledge graph is considerable. 
DBpedia Spotlight, a tool to interlink text documents with the linked open data is used to link the speech recognition output to the DBpedia knowledge graph. Such a knowledge-based speech recognition interface is useful for applications such as question answering or spoken dialog systems.", "which Machine Learning Input ?", "text", 146.0, 150.0], ["One characteristic that sets humans apart from modern learning-based computer vision algorithms is the ability to acquire knowledge about the world and use that knowledge to reason about the visual world. Humans can learn about the characteristics of objects and the relationships that occur between them to learn a large variety of visual concepts, often with few examples. This paper investigates the use of structured prior knowledge in the form of knowledge graphs and shows that using this knowledge improves performance on image classification. We build on recent work on end-to-end learning on graphs, introducing the Graph Search Neural Network as a way of efficiently incorporating large knowledge graphs into a vision classification pipeline. We show in a number of experiments that our method outperforms standard neural network baselines for multi-label classification.", "which Machine Learning Input ?", "image", 529.0, 534.0], ["Learning Analytics by nature relies on computational information processing activities intended to extract from raw data some interesting aspects that can be used to obtain insights into the behaviours of learners, the design of learning experiences, etc. There is a large variety of computational techniques that can be employed, all with interesting properties, but it is the interpretation of their results that really forms the core of the analytics process. In this paper, we look at a specific data mining method, namely sequential pattern extraction, and we demonstrate an approach that exploits available linked open data for this interpretation task. Indeed, we show through a case study relying on data about students' enrolment in course modules how linked data can be used to provide a variety of additional dimensions through which the results of the data mining method can be explored, providing, at interpretation time, new input into the analytics process.", "which Machine Learning Input ?", "raw", 112.0, 115.0], ["We describe an architecture for building speech-enabled conversational agents, deployed as self-contained Web services, with ability to provide inference processing on very large knowledge bases and its application to voice enabled chatbots in a virtual storytelling environment. The architecture integrates inference engines, natural language pattern matching components and story-specific information extraction from RDF/XML files. Our Web interface is dynamically generated by server side agents supporting multi-modal interface components (speech and animation). Prolog refactorings of the WordNet lexical knowledge base, FrameNet and the Open Mind common sense knowledge repository are combined with internet meta-search to provide high-quality knowledge sources to our conversational agents. An example of conversational agent with speech capabilities is deployed on the Web at http://logic.csci.unt.edu:8080/wordnet_agent/frame.html. 
The agent is also accessible for live multi-user text-based chat, through a Yahoo Instant Messenger protocol adaptor, from wired or wireless devices, as the jinni_agent Yahoo IM \"handle\".", "which Machine Learning Input ?", "text", 990.0, 994.0], ["Visual Question Answering (VQA) has attracted much attention in both computer vision and natural language processing communities, not least because it offers insight into the relationships between two important sources of information. Current datasets, and the models built upon them, have focused on questions which are answerable by direct analysis of the question and image alone. The set of such questions that require no external information to answer is interesting, but very limited. It excludes questions which require common sense, or basic factual knowledge to answer, for example. Here we introduce FVQA (Fact-based VQA), a VQA dataset which requires, and supports, much deeper reasoning. FVQA primarily contains questions that require external information to answer. We thus extend a conventional visual question answering dataset, which contains image-question-answer triplets, through additional image-question-answer-supporting fact tuples. Each supporting-fact is represented as a structural triplet, such as <Cat,CapableOf,ClimbingTrees>. We evaluate several baseline models on the FVQA dataset, and describe a novel model which is capable of reasoning about an image on the basis of supporting-facts.", "which Machine Learning Input ?", "image", 371.0, 376.0], ["We describe a method for visual question answering which is capable of reasoning about contents of an image on the basis of information extracted from a large-scale knowledge base. The method not only answers natural language questions using concepts not contained in the image, but can provide an explanation of the reasoning by which it developed its answer. The method is capable of answering far more complex questions than the predominant long short-term memory-based approach, and outperforms it significantly in the testing. We also provide a dataset and a protocol by which to evaluate such methods, thus addressing one of the key issues in general visual question answering.", "which Machine Learning Input ?", "image", 102.0, 107.0], ["Open-domain question answering (QA) is an important problem in AI and NLP that is emerging as a bellwether for progress on the generalizability of AI methods and techniques. Much of the progress in open-domain QA systems has been realized through advances in information retrieval methods and corpus construction. In this paper, we focus on the recently introduced ARC Challenge dataset, which contains 2,590 multiple choice questions authored for grade-school science exams. These questions are selected to be the most challenging for current QA systems, and current state of the art performance is only slightly better than random chance. We present a system that reformulates a given question into queries that are used to retrieve supporting text from a large corpus of science-related text. Our rewriter is able to incorporate background knowledge from ConceptNet and -- in tandem with a generic textual entailment system trained on SciTail that identifies support in the retrieved results -- outperforms several strong baselines on the end-to-end QA task despite only being trained to identify essential terms in the original source question. We use a generalizable decision methodology over the retrieved evidence and answer candidates to select the best answer. 
By combining query reformulation, background knowledge, and textual entailment our system is able to outperform several strong baselines on the ARC dataset.", "which Machine Learning Input ?", "text", 746.0, 750.0], ["Computer-aided food identification and quantity estimation have caught more attention in recent years because of the growing concern of our health. The identification problem is usually defined as an image categorization or classification problem and several researches have been proposed. In this paper, we address the issues of feature descriptors in the food identification problem and introduce a preliminary approach for the quantity estimation using depth information. Sparse coding is utilized in the SIFT and Local binary pattern feature descriptors, and these features combined with gabor and color features are used to represent food items. A multi-label SVM classifier is trained for each feature, and these classifiers are combined with multi-class Adaboost algorithm. For evaluation, 50 categories of worldwide food are used, and each category contains 100 photographs from different sources, such as manually taken or from Internet web albums. An overall accuracy of 68.3% is achieved, and success at top-N candidates achieved 80.6%, 84.8%, and 90.9% accuracy accordingly when N equals 2, 3, and 5, thus making mobile application practical. The experimental results show that the proposed methods greatly improve the performance of original SIFT and LBP feature descriptors. On the other hand, for quantity estimation using depth information, a straight forward method is proposed for certain food, while transparent food ingredients such as pure water and cooked rice are temporarily excluded.", "which Annotation ?", "Label", 659.0, 664.0], ["The unified method described previously for combining high-precision nonrelativistic variational calculations with relativistic and quantum electrodynamic corrections is applied to the 1s 2 1 S 0 , 1s2s 1 S 0 , 1s2s 3 S 1 , 1s2p 1 P 1 , and 1s2p 3 P 0,1,2 staters of helium-like ions. Detailed tabulations are presented for all ions in the range 2 \u2264 Z \u2264 100 and are compared with a wide range of experimental data up to 34 Kr + . The results for 90 U + significantly alter the recent Lamb shift measurement of Munger and Gould from 70.4 \u00b1 8.3 to 71.0 \u00b1 8.3 eV, in comparison with a revised theoretical value of 74.3 \u00b1 0.4 eV. The improved agreement is due to the inclusion of higher order two-electron corrections in the present work.", "which Paper type ?", "Theoretical", 590.0, 601.0], ["BACKGROUND: Duchenne muscular dystrophy (DMD) causes progressive respiratory muscle weakness and decline in function, which can go undetected without monitoring. DMD respiratory care guidelines recommend scheduled respiratory assessments and use of respiratory assist devices. To determine the extent of adherence to these guidelines, we evaluated respiratory assessments and interventions among males with DMD in the Muscular Dystrophy Surveillance, Tracking, and Research Network (MD STARnet) from 2000 to 2011. METHODS: MD STARnet is a population-based surveillance system that identifies all individuals born during or after 1982 residing in Arizona, Colorado, Georgia, Hawaii, Iowa, and western New York with Duchenne or Becker muscular dystrophy. 
We analyzed MD STARnet respiratory care data for non-ambulatory adolescent males (12\u201317 y old) and men (\u226518 y old) with DMD, assessing whether: (1) pulmonary function was measured twice yearly; (2) awake and asleep hypoventilation testing was performed at least yearly; (3) home mechanical insufflation-exsufflation, noninvasive ventilation, and tracheostomy/ventilators were prescribed; and (4) pulmonologists provided evaluations. RESULTS: During 2000\u20132010, no more than 50% of both adolescents and men had their pulmonary function monitored twice yearly in any of the years; 67% or fewer were assessed for awake and sleep hypoventilation yearly. Although the use of mechanical insufflation-exsufflation and noninvasive ventilation is probably increasing, prior use of these devices did not prevent all tracheostomies, and at least 18 of 29 tracheostomies were performed due to acute respiratory illnesses. Fewer than 32% of adolescents and men had pulmonologist evaluations in 2010\u20132011. CONCLUSIONS: Since the 2004 publication of American Thoracic Society guidelines, there have been few changes in pulmonary clinical practice. Frequencies of respiratory assessments and assist device use among males with DMD were lower than recommended in clinical guidelines. Collaboration of respiratory therapists and pulmonologists with clinicians caring for individuals with DMD should be encouraged to ensure access to the full spectrum of in-patient and out-patient pulmonary interventions.", "which Subject Label ?", "American", 1787.0, 1795.0], ["BACKGROUND: Duchenne muscular dystrophy (DMD) causes progressive respiratory muscle weakness and decline in function, which can go undetected without monitoring. DMD respiratory care guidelines recommend scheduled respiratory assessments and use of respiratory assist devices. To determine the extent of adherence to these guidelines, we evaluated respiratory assessments and interventions among males with DMD in the Muscular Dystrophy Surveillance, Tracking, and Research Network (MD STARnet) from 2000 to 2011. METHODS: MD STARnet is a population-based surveillance system that identifies all individuals born during or after 1982 residing in Arizona, Colorado, Georgia, Hawaii, Iowa, and western New York with Duchenne or Becker muscular dystrophy. We analyzed MD STARnet respiratory care data for non-ambulatory adolescent males (12\u201317 y old) and men (\u226518 y old) with DMD, assessing whether: (1) pulmonary function was measured twice yearly; (2) awake and asleep hypoventilation testing was performed at least yearly; (3) home mechanical insufflation-exsufflation, noninvasive ventilation, and tracheostomy/ventilators were prescribed; and (4) pulmonologists provided evaluations. RESULTS: During 2000\u20132010, no more than 50% of both adolescents and men had their pulmonary function monitored twice yearly in any of the years; 67% or fewer were assessed for awake and sleep hypoventilation yearly. Although the use of mechanical insufflation-exsufflation and noninvasive ventilation is probably increasing, prior use of these devices did not prevent all tracheostomies, and at least 18 of 29 tracheostomies were performed due to acute respiratory illnesses. Fewer than 32% of adolescents and men had pulmonologist evaluations in 2010\u20132011. CONCLUSIONS: Since the 2004 publication of American Thoracic Society guidelines, there have been few changes in pulmonary clinical practice. 
Frequencies of respiratory assessments and assist device use among males with DMD were lower than recommended in clinical guidelines. Collaboration of respiratory therapists and pulmonologists with clinicians caring for individuals with DMD should be encouraged to ensure access to the full spectrum of in-patient and out-patient pulmonary interventions.", "which Subject Label ?", "Clinical", 1866.0, 1874.0], ["BACKGROUND: Duchenne muscular dystrophy (DMD) causes progressive respiratory muscle weakness and decline in function, which can go undetected without monitoring. DMD respiratory care guidelines recommend scheduled respiratory assessments and use of respiratory assist devices. To determine the extent of adherence to these guidelines, we evaluated respiratory assessments and interventions among males with DMD in the Muscular Dystrophy Surveillance, Tracking, and Research Network (MD STARnet) from 2000 to 2011. METHODS: MD STARnet is a population-based surveillance system that identifies all individuals born during or after 1982 residing in Arizona, Colorado, Georgia, Hawaii, Iowa, and western New York with Duchenne or Becker muscular dystrophy. We analyzed MD STARnet respiratory care data for non-ambulatory adolescent males (12\u201317 y old) and men (\u226518 y old) with DMD, assessing whether: (1) pulmonary function was measured twice yearly; (2) awake and asleep hypoventilation testing was performed at least yearly; (3) home mechanical insufflation-exsufflation, noninvasive ventilation, and tracheostomy/ventilators were prescribed; and (4) pulmonologists provided evaluations. RESULTS: During 2000\u20132010, no more than 50% of both adolescents and men had their pulmonary function monitored twice yearly in any of the years; 67% or fewer were assessed for awake and sleep hypoventilation yearly. Although the use of mechanical insufflation-exsufflation and noninvasive ventilation is probably increasing, prior use of these devices did not prevent all tracheostomies, and at least 18 of 29 tracheostomies were performed due to acute respiratory illnesses. Fewer than 32% of adolescents and men had pulmonologist evaluations in 2010\u20132011. CONCLUSIONS: Since the 2004 publication of American Thoracic Society guidelines, there have been few changes in pulmonary clinical practice. Frequencies of respiratory assessments and assist device use among males with DMD were lower than recommended in clinical guidelines. Collaboration of respiratory therapists and pulmonologists with clinicians caring for individuals with DMD should be encouraged to ensure access to the full spectrum of in-patient and out-patient pulmonary interventions.", "which Subject Label ?", "Access", 2157.0, 2163.0], ["BACKGROUND: Duchenne muscular dystrophy (DMD) causes progressive respiratory muscle weakness and decline in function, which can go undetected without monitoring. DMD respiratory care guidelines recommend scheduled respiratory assessments and use of respiratory assist devices. To determine the extent of adherence to these guidelines, we evaluated respiratory assessments and interventions among males with DMD in the Muscular Dystrophy Surveillance, Tracking, and Research Network (MD STARnet) from 2000 to 2011. METHODS: MD STARnet is a population-based surveillance system that identifies all individuals born during or after 1982 residing in Arizona, Colorado, Georgia, Hawaii, Iowa, and western New York with Duchenne or Becker muscular dystrophy. 
We analyzed MD STARnet respiratory care data for non-ambulatory adolescent males (12\u201317 y old) and men (\u226518 y old) with DMD, assessing whether: (1) pulmonary function was measured twice yearly; (2) awake and asleep hypoventilation testing was performed at least yearly; (3) home mechanical insufflation-exsufflation, noninvasive ventilation, and tracheostomy/ventilators were prescribed; and (4) pulmonologists provided evaluations. RESULTS: During 2000\u20132010, no more than 50% of both adolescents and men had their pulmonary function monitored twice yearly in any of the years; 67% or fewer were assessed for awake and sleep hypoventilation yearly. Although the use of mechanical insufflation-exsufflation and noninvasive ventilation is probably increasing, prior use of these devices did not prevent all tracheostomies, and at least 18 of 29 tracheostomies were performed due to acute respiratory illnesses. Fewer than 32% of adolescents and men had pulmonologist evaluations in 2010\u20132011. CONCLUSIONS: Since the 2004 publication of American Thoracic Society guidelines, there have been few changes in pulmonary clinical practice. Frequencies of respiratory assessments and assist device use among males with DMD were lower than recommended in clinical guidelines. Collaboration of respiratory therapists and pulmonologists with clinicians caring for individuals with DMD should be encouraged to ensure access to the full spectrum of in-patient and out-patient pulmonary interventions.", "which Subject Label ?", "Clinicians", 2083.0, 2093.0], ["BACKGROUND: Duchenne muscular dystrophy (DMD) causes progressive respiratory muscle weakness and decline in function, which can go undetected without monitoring. DMD respiratory care guidelines recommend scheduled respiratory assessments and use of respiratory assist devices. To determine the extent of adherence to these guidelines, we evaluated respiratory assessments and interventions among males with DMD in the Muscular Dystrophy Surveillance, Tracking, and Research Network (MD STARnet) from 2000 to 2011. METHODS: MD STARnet is a population-based surveillance system that identifies all individuals born during or after 1982 residing in Arizona, Colorado, Georgia, Hawaii, Iowa, and western New York with Duchenne or Becker muscular dystrophy. We analyzed MD STARnet respiratory care data for non-ambulatory adolescent males (12\u201317 y old) and men (\u226518 y old) with DMD, assessing whether: (1) pulmonary function was measured twice yearly; (2) awake and asleep hypoventilation testing was performed at least yearly; (3) home mechanical insufflation-exsufflation, noninvasive ventilation, and tracheostomy/ventilators were prescribed; and (4) pulmonologists provided evaluations. RESULTS: During 2000\u20132010, no more than 50% of both adolescents and men had their pulmonary function monitored twice yearly in any of the years; 67% or fewer were assessed for awake and sleep hypoventilation yearly. Although the use of mechanical insufflation-exsufflation and noninvasive ventilation is probably increasing, prior use of these devices did not prevent all tracheostomies, and at least 18 of 29 tracheostomies were performed due to acute respiratory illnesses. Fewer than 32% of adolescents and men had pulmonologist evaluations in 2010\u20132011. CONCLUSIONS: Since the 2004 publication of American Thoracic Society guidelines, there have been few changes in pulmonary clinical practice. 
Frequencies of respiratory assessments and assist device use among males with DMD were lower than recommended in clinical guidelines. Collaboration of respiratory therapists and pulmonologists with clinicians caring for individuals with DMD should be encouraged to ensure access to the full spectrum of in-patient and out-patient pulmonary interventions.", "which Subject Label ?", "Collaboration", 2019.0, 2032.0], ["BACKGROUND: Duchenne muscular dystrophy (DMD) causes progressive respiratory muscle weakness and decline in function, which can go undetected without monitoring. DMD respiratory care guidelines recommend scheduled respiratory assessments and use of respiratory assist devices. To determine the extent of adherence to these guidelines, we evaluated respiratory assessments and interventions among males with DMD in the Muscular Dystrophy Surveillance, Tracking, and Research Network (MD STARnet) from 2000 to 2011. METHODS: MD STARnet is a population-based surveillance system that identifies all individuals born during or after 1982 residing in Arizona, Colorado, Georgia, Hawaii, Iowa, and western New York with Duchenne or Becker muscular dystrophy. We analyzed MD STARnet respiratory care data for non-ambulatory adolescent males (12\u201317 y old) and men (\u226518 y old) with DMD, assessing whether: (1) pulmonary function was measured twice yearly; (2) awake and asleep hypoventilation testing was performed at least yearly; (3) home mechanical insufflation-exsufflation, noninvasive ventilation, and tracheostomy/ventilators were prescribed; and (4) pulmonologists provided evaluations. RESULTS: During 2000\u20132010, no more than 50% of both adolescents and men had their pulmonary function monitored twice yearly in any of the years; 67% or fewer were assessed for awake and sleep hypoventilation yearly. Although the use of mechanical insufflation-exsufflation and noninvasive ventilation is probably increasing, prior use of these devices did not prevent all tracheostomies, and at least 18 of 29 tracheostomies were performed due to acute respiratory illnesses. Fewer than 32% of adolescents and men had pulmonologist evaluations in 2010\u20132011. CONCLUSIONS: Since the 2004 publication of American Thoracic Society guidelines, there have been few changes in pulmonary clinical practice. Frequencies of respiratory assessments and assist device use among males with DMD were lower than recommended in clinical guidelines. Collaboration of respiratory therapists and pulmonologists with clinicians caring for individuals with DMD should be encouraged to ensure access to the full spectrum of in-patient and out-patient pulmonary interventions.", "which Subject Label ?", "Devices", 268.0, 275.0], ["BACKGROUND: Duchenne muscular dystrophy (DMD) causes progressive respiratory muscle weakness and decline in function, which can go undetected without monitoring. DMD respiratory care guidelines recommend scheduled respiratory assessments and use of respiratory assist devices. To determine the extent of adherence to these guidelines, we evaluated respiratory assessments and interventions among males with DMD in the Muscular Dystrophy Surveillance, Tracking, and Research Network (MD STARnet) from 2000 to 2011. METHODS: MD STARnet is a population-based surveillance system that identifies all individuals born during or after 1982 residing in Arizona, Colorado, Georgia, Hawaii, Iowa, and western New York with Duchenne or Becker muscular dystrophy. 
We analyzed MD STARnet respiratory care data for non-ambulatory adolescent males (12\u201317 y old) and men (\u226518 y old) with DMD, assessing whether: (1) pulmonary function was measured twice yearly; (2) awake and asleep hypoventilation testing was performed at least yearly; (3) home mechanical insufflation-exsufflation, noninvasive ventilation, and tracheostomy/ventilators were prescribed; and (4) pulmonologists provided evaluations. RESULTS: During 2000\u20132010, no more than 50% of both adolescents and men had their pulmonary function monitored twice yearly in any of the years; 67% or fewer were assessed for awake and sleep hypoventilation yearly. Although the use of mechanical insufflation-exsufflation and noninvasive ventilation is probably increasing, prior use of these devices did not prevent all tracheostomies, and at least 18 of 29 tracheostomies were performed due to acute respiratory illnesses. Fewer than 32% of adolescents and men had pulmonologist evaluations in 2010\u20132011. CONCLUSIONS: Since the 2004 publication of American Thoracic Society guidelines, there have been few changes in pulmonary clinical practice. Frequencies of respiratory assessments and assist device use among males with DMD were lower than recommended in clinical guidelines. Collaboration of respiratory therapists and pulmonologists with clinicians caring for individuals with DMD should be encouraged to ensure access to the full spectrum of in-patient and out-patient pulmonary interventions.", "which Subject Label ?", "Full spectrum", 2171.0, 2184.0], ["BACKGROUND: Duchenne muscular dystrophy (DMD) causes progressive respiratory muscle weakness and decline in function, which can go undetected without monitoring. DMD respiratory care guidelines recommend scheduled respiratory assessments and use of respiratory assist devices. To determine the extent of adherence to these guidelines, we evaluated respiratory assessments and interventions among males with DMD in the Muscular Dystrophy Surveillance, Tracking, and Research Network (MD STARnet) from 2000 to 2011. METHODS: MD STARnet is a population-based surveillance system that identifies all individuals born during or after 1982 residing in Arizona, Colorado, Georgia, Hawaii, Iowa, and western New York with Duchenne or Becker muscular dystrophy. We analyzed MD STARnet respiratory care data for non-ambulatory adolescent males (12\u201317 y old) and men (\u226518 y old) with DMD, assessing whether: (1) pulmonary function was measured twice yearly; (2) awake and asleep hypoventilation testing was performed at least yearly; (3) home mechanical insufflation-exsufflation, noninvasive ventilation, and tracheostomy/ventilators were prescribed; and (4) pulmonologists provided evaluations. RESULTS: During 2000\u20132010, no more than 50% of both adolescents and men had their pulmonary function monitored twice yearly in any of the years; 67% or fewer were assessed for awake and sleep hypoventilation yearly. Although the use of mechanical insufflation-exsufflation and noninvasive ventilation is probably increasing, prior use of these devices did not prevent all tracheostomies, and at least 18 of 29 tracheostomies were performed due to acute respiratory illnesses. Fewer than 32% of adolescents and men had pulmonologist evaluations in 2010\u20132011. CONCLUSIONS: Since the 2004 publication of American Thoracic Society guidelines, there have been few changes in pulmonary clinical practice. 
Frequencies of respiratory assessments and assist device use among males with DMD were lower than recommended in clinical guidelines. Collaboration of respiratory therapists and pulmonologists with clinicians caring for individuals with DMD should be encouraged to ensure access to the full spectrum of in-patient and out-patient pulmonary interventions.", "which Subject Label ?", "Respiratory therapist", NaN, NaN], ["N-doped TiO(2) nanoparticles modified with carbon (denoted N-TiO(2)/C) were successfully prepared by a facile one-pot hydrothermal treatment in the presence of L-lysine, which acts as a ligand to control the nanocrystal growth and as a source of nitrogen and carbon. As-prepared nanocomposites were characterized by thermogravimetric analysis (TGA), X-ray diffraction (XRD), transmission electron microscopy (TEM), high-resolution transmission electron microscopy (HRTEM), Raman spectroscopy, ultraviolet-visible (UV-vis) diffuse reflectance spectroscopy, X-ray photoelectron spectroscopy (XPS), Fourier transform infrared spectroscopy (FTIR), electron paramagnetic resonance (EPR) spectra, and N(2) adsorption-desorption analysis. The photocatalytic activities of the as-prepared photocatalysts were measured by the degradation of methyl orange (MO) under visible light irradiation at \u03bb\u2265 400 nm. The results show that N-TiO(2)/C nanocomposites increase absorption in the visible light region and exhibit a higher photocatalytic activity than pure TiO(2), commercial P25 and previously reported N-doped TiO(2) photocatalysts. We have demonstrated that the nitrogen was doped into the lattice and the carbon species were modified on the surface of the photocatalysts. N-doping narrows the band gap and C-modification enhances the visible light harvesting and accelerates the separation of the photo-generated electrons and holes. As a consequence, the photocatalytic activity is significantly improved. The molar ratio of L-lysine/TiCl(4) and the pH of the hydrothermal reaction solution are important factors affecting the photocatalytic activity of the N-TiO(2)/C; the optimum molar ratio of L-lysine/TiCl(4) is 8 and the optimum pH is ca. 4, at which the catalyst exhibits the highest reactivity. Our findings demonstrate that the as-obtained N-TiO(2)/C photocatalyst is a better and more promising candidate than well studied N-doped TiO(2) alternatives as visible light photocatalysts for potential applications in environmental purification.", "which chemical doping method ?", "hydrothermal", 118.0, 130.0], ["A novel double hydrothermal method to prepare the boron and nitrogen codoped TiO2 is developed. Two different ways have been used for the synthesis of the catalysts, one through the addition of boron followed by nitrogen, and the other through the addition of nitrogen first and then by boron. The X-ray photoelectron spectroscopy analysis indicates the synergistic effect of boron and nitrogen with the formation of Ti\u2212B\u2212N\u2212Ti and Ti\u2212N\u2212B\u2212O compounds on the surface of catalysts when nitrogen is introduced to the materials first. When the boron is added first, only Ti\u2212N\u2212B\u2212O species occurs on the surface of catalysts. The above two compounds are all thought to enhance the photocatalytic activities of codoped TiO2. Density functional theory simulations are also performed to investigate the B\u2212N synergistic effect. 
For the (101) surface, the formation of Ti−B−N−Ti structures gives rise to the localized states within the TiO2 band gap.", "which chemical doping method ?", "hydrothermal", 15.0, 27.0], ["Nanocrystalline anatase titanium dioxide powders were produced by a hydrothermal synthesis route in pure form and substituted with trivalent Ga3+ and Y3+ or pentavalent Nb5+ with the intention of creating acceptor or donor states, respectively. The electrical conductivity of each powder was measured using the powder-solution-composite (PSC) method. The conductivity increased with the addition of Nb5+ from 3 × 10-3 S/cm to 10 × 10-3 S/cm in as-prepared powders, and from 0.3 × 10-3 S/cm to 0.9 × 10-3 S/cm in heat-treated powders (520 degrees C, 1 h). In contrast, substitution with Ga3+ and Y3+ had no measurable effect on the material's conductivity. The lack of change with the addition of Ga3+ and Y3+, and relatively small increase upon Nb5+ addition is attributed to ionic compensation owing to the highly oxidizing nature of hydrothermal synthesis.", "which chemical doping method ?", "hydrothermal", 68.0, 80.0], ["Nanoparticles of titanium dioxide co-doped with nitrogen and iron (III) were first prepared using the homogeneous precipitation-hydrothermal method. The structure and properties of the co-doped samples were studied by XRD, XPS, Raman, FL, and UV-diffuse reflectance spectra. By analyzing the structures and photocatalytic activities of the undoped and nitrogen and/or Fe3+-doped TiO2 under ultraviolet and visible light irradiation, the probable mechanism of co-doped particles was investigated. It is presumed that the nitrogen and Fe3+ ion doping induced the formation of new states close to the valence band and conduction band, respectively. The co-operation of the nitrogen and Fe3+ ion leads to the much narrowing of the band gap and greatly improves the photocatalytic activity in the visible light region. Meanwhile, the co-doping can also promote the separation of the photogenerated electrons and holes to accelerate the transmission of photocurrent carrier. The photocatalyst co-doped with nitrogen and 0.5% Fe3+ sho...", "which chemical doping method ?", "hydrothermal", 128.0, 140.0], ["A set of polycrystalline TiO2 photocatalysts loaded with various ions of transition metals (Co, Cr, Cu, Fe, Mo, V, and W) were prepared by using the wet impregnation method. The samples were characterized by using some bulk and surface techniques, namely X-ray diffraction, BET specific surface area determination, scanning electron microscopy, point of zero charge determination, and femtosecond pump−probe diffuse reflectance spectroscopy (PP-DRS). The samples were employed as catalysts for 4-nitrophenol photodegradation in aqueous suspension, used as a probe reaction. The characterization results have confirmed the difficulty to find a straightforward correlation between photoactivity and single specific properties of the powders. Diffuse reflectance measurements showed a slight shift in the band gap transition to longer wavelengths and an extension of the absorption in the visible region for almost all the doped samples. 
SEM observation and EDX measurements indicated a similar morphology for all the parti...", "which chemical doping method ?", "impregnation", 153.0, 165.0], ["Nitrogen-doped TiO2 nanocatalysts with a homogeneous anatase structure were successfully synthesized through a microemulsion−hydrothermal method by using some organic compounds such as triethylamine, urea, thiourea, and hydrazine hydrate. Analysis by Raman and X-ray photoemission spectroscopy indicated that nitrogen was doped effectively and most nitrogen dopants might be present in the chemical environment of Ti−O−N and O−Ti−N. A shift of the absorption edge to a lower energy and a stronger absorption in the visible light region were observed. The results of photodegradation of the organic pollutant rhodamine B in the visible light irradiation (λ > 420 nm) suggested that the TiO2 photocatalysts after nitrogen doping were greatly improved compared with the undoped TiO2 photocatalysts and Degussa P-25; especially the nitrogen-doped TiO2 using triethylamine as the nitrogen source showed the highest photocatalytic activity, which also showed a higher efficiency for photodecomposition of 2,4-dichlorophenol. T...", "which chemical doping method ?", "microemulsion", 111.0, 124.0], ["Summaries of meetings are very important as they convey the essential content of discussions in a concise form. Both participants and non-participants are interested in the summaries of meetings to plan for their future work. Generally, it is time consuming to read and understand the whole documents. Therefore, summaries play an important role as the readers are interested in only the important context of discussions. In this work, we address the task of meeting document summarization. Automatic summarization systems on meeting conversations developed so far have been primarily extractive, resulting in unacceptable summaries that are hard to read. The extracted utterances contain disfluencies that affect the quality of the extractive summaries. To make summaries much more readable, we propose an approach to generating abstractive summaries by fusing important content from several utterances. We first separate meeting transcripts into various topic segments, and then identify the important utterances in each segment using a supervised learning approach. The important utterances are then combined together to generate a one-sentence summary. In the text generation step, the dependency parses of the utterances in each segment are combined together to create a directed graph. The most informative and well-formed sub-graph obtained by integer linear programming (ILP) is selected to generate a one-sentence summary for each topic segment. The ILP formulation reduces disfluencies by leveraging grammatical relations that are more prominent in non-conversational style of text, and therefore generates summaries that are comparable to human-written abstractive summaries. Experimental results show that our method can generate more informative summaries than the baselines. In addition, readability assessments by human judges as well as log-likelihood estimates obtained from the dependency parser show that our generated summaries are significantly readable and well-formed.", "which Summarization Type ?", "Abstractive", 830.0, 841.0], ["Automatic summarization techniques on meeting conversations developed so far have been primarily extractive, resulting in poor summaries. 
To improve this, we propose an approach to generate abstractive summaries by fusing important content from several utterances. Any meeting is generally comprised of several discussion topic segments. For each topic segment within a meeting conversation, we aim to generate a one-sentence summary from the most important utterances using an integer linear programming-based sentence fusion approach. Experimental results show that our method can generate more informative summaries than the baselines.", "which Summarization Type ?", "Abstractive", 190.0, 201.0], ["Several approaches to automatic speech summarization are discussed below, using the ICSI Meetings corpus. We contrast feature-based approaches using prosodic and lexical features with maximal marginal relevance and latent semantic analysis approaches to summarization. While the latter two techniques are borrowed directly from the field of text summarization, feature-based approaches using prosodic information are able to utilize characteristics unique to speech data. We also investigate how the summarization results might deteriorate when carried out on ASR output as opposed to manual transcripts. All of the summaries are of an extractive variety, and are compared using the software ROUGE.", "which Summarization Type ?", "Extractive", 636.0, 646.0], ["We address the challenge of generating natural language abstractive summaries for spoken meetings in a domain-independent fashion. We apply Multiple-Sequence Alignment to induce abstract generation templates that can be used for different domains. An Overgenerate-and-Rank strategy is utilized to produce and rank candidate abstracts. Experiments using in-domain and out-of-domain training on disparate corpora show that our system uniformly outperforms state-of-the-art supervised extract-based approaches. In addition, human judges rate our system summaries significantly higher than compared systems in fluency and overall quality.", "which Data Domain ?", "Different", 229.0, 238.0], ["Abstract Background Molecular Biology accumulated substantial amounts of data concerning functions of genes and proteins. Information relating to functional descriptions is generally extracted manually from textual data and stored in biological databases to build up annotations for large collections of gene products. Those annotation databases are crucial for the interpretation of large scale analysis approaches using bioinformatics or experimental techniques. Due to the growing accumulation of functional descriptions in biomedical literature the need for text mining tools to facilitate the extraction of such annotations is urgent. In order to make text mining tools useable in real world scenarios, for instance to assist database curators during annotation of protein function, comparisons and evaluations of different approaches on full text articles are needed. Results The Critical Assessment for Information Extraction in Biology (BioCreAtIvE) contest consists of a community wide competition aiming to evaluate different strategies for text mining tools, as applied to biomedical literature. We report on task two which addressed the automatic extraction and assignment of Gene Ontology (GO) annotations of human proteins, using full text articles. The predictions of task 2 are based on triplets of protein – GO term – article passage. 
The annotation-relevant text passages were returned by the participants and evaluated by expert curators of the GO annotation (GOA) team at the European Institute of Bioinformatics (EBI). Each participant could submit up to three results for each sub-task comprising task 2. In total more than 15,000 individual results were provided by the participants. The curators evaluated in addition to the annotation itself, whether the protein and the GO term were correctly predicted and traceable through the submitted text fragment. Conclusion Concepts provided by GO are currently the most extended set of terms used for annotating gene products, thus they were explored to assess how effectively text mining tools are able to extract those annotations automatically. Although the obtained results are promising, they are still far from reaching the required performance demanded by real world applications. Among the principal difficulties encountered to address the proposed task, were the complex nature of the GO terms and protein names (the large range of variants which are used to express proteins and especially GO terms in free text), and the lack of a standard training set. A range of very different strategies were used to tackle this task. The dataset generated in line with the BioCreative challenge is publicly available and will allow new possibilities for training information extraction methods in the domain of molecular biology.", "which Data Domain ?", "Molecular Biology", 20.0, 37.0], ["Abstract Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants ( arguments ) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). 
Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes.", "which Semantic roles ?", "Temporal", 1631.0, 1639.0], ["Abstract Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants ( arguments ) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes.", "which Semantic roles ?", "Condition", NaN, NaN], ["Abstract Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants ( arguments ) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. 
Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes.", "which Semantic roles ?", "Location", 1621.0, 1629.0], ["Abstract Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants ( arguments ) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. 
The corpus and annotation guidelines are freely available for academic purposes.", "which Semantic roles ?", "Manner", 1641.0, 1647.0], ["Abstract Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants ( arguments ) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes.", "which Semantic roles ?", "Purpose", NaN, NaN], ["Abstract Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants ( arguments ) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. 
Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes.", "which Semantic roles ?", "Source", NaN, NaN], ["We present a framework to synthesize character movements based on high level parameters, such that the produced movements respect the manifold of human motion, trained on a large motion capture dataset. The learned motion manifold, which is represented by the hidden units of a convolutional autoencoder, represents motion data in sparse components which can be combined to produce a wide range of complex movements. To map from high level parameters to the motion manifold, we stack a deep feedforward neural network on top of the trained autoencoder. This network is trained to produce realistic motion sequences from parameters such as a curve over the terrain that the character should follow, or a target location for punching and kicking. The feedforward control network and the motion manifold are trained independently, allowing the user to easily switch between feedforward networks according to the desired interface, without re-training the motion manifold. Once motion is generated it can be edited by performing optimization in the space of the motion manifold. This allows for imposing kinematic constraints, or transforming the style of the motion, while ensuring the edited motion remains natural. As a result, the system can produce smooth, high quality motion sequences without any manual pre-processing of the training data.", "which Activity ?", "Kicking", 736.0, 743.0], ["We present a framework to synthesize character movements based on high level parameters, such that the produced movements respect the manifold of human motion, trained on a large motion capture dataset. The learned motion manifold, which is represented by the hidden units of a convolutional autoencoder, represents motion data in sparse components which can be combined to produce a wide range of complex movements. To map from high level parameters to the motion manifold, we stack a deep feedforward neural network on top of the trained autoencoder. This network is trained to produce realistic motion sequences from parameters such as a curve over the terrain that the character should follow, or a target location for punching and kicking. The feedforward control network and the motion manifold are trained independently, allowing the user to easily switch between feedforward networks according to the desired interface, without re-training the motion manifold. 
Once motion is generated it can be edited by performing optimization in the space of the motion manifold. This allows for imposing kinematic constraints, or transforming the style of the motion, while ensuring the edited motion remains natural. As a result, the system can produce smooth, high quality motion sequences without any manual pre-processing of the training data.", "which Activity ?", "Punching", 723.0, 731.0], ["Thanks to the proliferation of academic services on the Web and the opening of educational content, today, students can access a large number of free learning resources, and interact with value-added services. In this context, Learning Analytics can be carried out on a large scale thanks to the proliferation of open practices that promote the sharing of datasets. However, the opening or sharing of data managed through platforms and educational services, without considering the protection of users' sensitive data, could cause some privacy issues. Data anonymization is a strategy that should be adopted during the lifecycle of data processing to reduce security risks. In this research, we try to characterize how much and how the anonymization techniques have been used in learning analytics proposals. From an initial exploration made in the Scopus database, we found that less than 6% of the papers focused on LA have also covered the privacy issue. Finally, through a specific case, we applied data anonymization and learning analytics to demonstrate that both techniques can be integrated, in a reliable and effective way, to support decision making in educational institutions.", "which Publication Stage ?", "Final", NaN, NaN], ["The thesis is the final step in academic formation of students. Its development may experience some difficulties that cause delays in delivery times. The internet allows students to access relevant and large amounts of information for use in the development of their theses. The internet also allows students to interact with others in order to create knowledge networks. However, exposure to too much information can produce infoxication. Therefore, there is a need to organise such information and technological resources. Through a personal learning environment (PLE), students can use current technology and online resources to develop their projects. Furthermore, by means of an ontological model, the underlying knowledge in the domain and environment can be represented in a readable format for machines. This paper presents an ontological model called PLET4Thesis, which has been designed in order to organise the process of thesis development using the elements required to create a PLE.", "which Publication Stage ?", "Final", 18.0, 23.0], ["Educational innovation is a set of ideas, processes, and strategies applied in academic centers to improve teaching and learning processes. The application of Communication and Information Technologies in the field of educational innovation has been fundamental to promote changes that lead to the improvement of administrative and academic processes. However, some studies show the deficient and not very innovative use of technologies in the Latin American region. To determine the existing gap, the authors are executing a project that tries to know the current state of the art on this topic. This study corresponds to the first step of the project. Here, the authors describe the systematic search process designed to find the doctoral theses that have been developed on educational innovation generated by ICT. 
To meet this objective, the process has three phases: (1) identification of centralized repositories of doctoral theses in Spanish, (2) evaluation of selected repositories according to specific search criteria, and (3) selecting centralized repositories where it is possible to find theses on educational innovation and ICT. The analysis of 5 of the 222 repositories found indicates that each system offers different search characteristics, and the results show that the best option is to combine the sets of results since the theses come from different institutions. Finally, considering that the abilities of users to manage information systems can be diverse, providers and administrators of repositories should enhance their search services in such a way that all users can find and use the resources published.", "which Publication Stage ?", "Final", NaN, NaN], ["Tables are ubiquitous in digital libraries. In scientific documents, tables are widely used to present experimental results or statistical data in a condensed fashion. However, current search engines do not support table search. The difficulty of automatic extracting tables from un-tagged documents, the lack of a universal table metadata specification, and the limitation of the existing ranking schemes make table search problem challenging. In this paper, we describe TableSeer, a search engine for tables. TableSeer crawls digital libraries, detects tables from documents, extracts tables metadata, indexes and ranks tables, and provides a user-friendly search interface. We propose an extensive set of medium-independent metadata for tables that scientists and other users can adopt for representing table information. In addition, we devise a novel page box-cutting method to improve the performance of the table detection. Given a query, TableSeer ranks the matched tables using an innovative ranking algorithm - TableRank. TableRank rates each ⟨query, table⟩ pair with a tailored vector space model and a specific term weighting scheme. Overall, TableSeer eliminates the burden of manually extracting table data from digital libraries and enables users to automatically examine tables. We demonstrate the value of TableSeer with empirical studies on scientific documents.", "which Method automation ?", "Automatic", 247.0, 256.0], ["In this article, we share our experiences of using digital technologies and various media to present historical narratives of a museum object collection aiming to provide an engaging experience on multiple platforms. Based on P. Joseph's article, Dawson presented multiple interpretations and historical views of the Markham car collection across various platforms using multimedia resources. Through her creative production, she explored how to use cylindrical panoramas and rich media to offer new ways of telling the controversial story of the contested heritage of a museum's veteran and vintage car collection. The production's usability was investigated involving five experts before it was published online and the general users' experience was investigated. In this article, we present an important component of findings which indicates that virtual panorama tours featuring multimedia elements could be successful in attracting new audiences and that using this type of storytelling technique can be effective in the museum sector. 
The storyteller panorama tour presented here may stimulate GLAM (galleries, libraries, archives, and museums) professionals to think of new approaches, implement new strategies or services to engage their audiences more effectively. The research may ameliorate the education of future professionals as well.", "which has stakeholder ?", "Professional", NaN, NaN], ["A diverse and changing array of digital media have been used to present heritage online. While websites have been created for online heritage outreach for nearly two decades, social media is employed increasingly to complement and in some cases replace the use of websites. These same social media are used by stakeholders as a form of participatory culture, to create communities and to discuss heritage independently of narratives offered by official institutions such as museums, memorials, and universities. With difficult or \u201cdark\u201d heritage\u2014places of memory centering on deaths, disasters, and atrocities\u2014these online representations and conversations can be deeply contested. Examining the websites and social media of difficult heritage, with an emphasis on the trans-Atlantic slave trade provides insights into the efficacy of online resources provided by official institutions, as well as the unofficial, participatory communities of stakeholders who use social media for collective memories.", "which has stakeholder ?", "communities", 369.0, 380.0], ["ABSTRACT \u2018Heritage Interpretation\u2019 has always been considered as an effective learning, communication and management tool that increases visitors\u2019 awareness of and empathy to heritage sites or artefacts. Yet the definition of \u2018digital heritage interpretation\u2019 is still wide and so far, no significant method and objective are evident within the domain of \u2018digital heritage\u2019 theory and discourse. Considering \u2018digital heritage interpretation\u2019 as a process rather than as a tool to present or communicate with end-users, this paper presents a critical application of a theoretical construct ascertained from multiple disciplines and explicates four objectives for a comprehensive interpretive process. A conceptual model is proposed and further developed into a conceptual framework with fifteen considerations. This framework is then implemented and tested on an online platform to assess its impact on end-users\u2019 interpretation level. We believe the presented interpretive framework (PrEDiC) will help heritage professionals and media designers to develop interpretive heritage project.", "which has stakeholder ?", "Visitor", NaN, NaN], ["Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. 
A metadata import workflow, based on REpresentational State Transfer services and XPath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.", "which Method ?", "omics data paper structure", 1306.0, 1332.0], ["This paper presents a heuristic column generation method for solving vehicle routing problems with a heterogeneous fleet of vehicles. The method may also solve the fleet size and composition vehicle routing problem and new best known solutions are reported for a set of classical problems. Numerical results show that the method is robust and efficient, particularly for medium and large size problem instances.", "which Method ?", "Heuristic column generation", 22.0, 49.0], ["By analysing the merits and demerits of the existing linear model for fleet planning, this paper presents an algorithm which combines the linear programming technique with that of dynamic programming to improve the solution to linear model for fleet planning. This new approach has not only the merits that the linear model for fleet planning has, but also the merit of saving computing time. The numbers of ships newly added into the fleet every year are always integers in the final optimal solution. The last feature of the solution directly meets the requirements of practical application. Both the mathematical model of the dynamic fleet planning and its algorithm are put forward in this paper. A calculating example is also given.", "which Method ?", "Linear programming", 138.0, 156.0], ["ABSTRACT On December 31, 2019, the World Health Organization was notified about a cluster of pneumonia of unknown aetiology in the city of Wuhan, China. Chinese authorities later identified a new coronavirus (2019-nCoV) as the causative agent of the outbreak. As of January 23, 2020, 655 cases have been confirmed in China and several other countries. Understanding the transmission characteristics and the potential for sustained human-to-human transmission of 2019-nCoV is critically important for coordinating current screening and containment strategies, and determining whether the outbreak constitutes a public health emergency of international concern (PHEIC). We performed stochastic simulations of early outbreak trajectories that are consistent with the epidemiological findings to date. We found the basic reproduction number, R0, to be around 2.2 (90% high density interval 1.4–3.8), indicating the potential for sustained human-to-human transmission. Transmission characteristics appear to be of a similar magnitude to severe acute respiratory syndrome-related coronavirus (SARS-CoV) and the 1918 pandemic influenza. 
These findings underline the importance of heightened screening, surveillance and control efforts, particularly at airports and other travel hubs, in order to prevent further international spread of 2019-nCoV.", "which Method ?", "Stochastic simulations of early outbreak trajectories", 681.0, 734.0], ["This paper proposes a methodology which discriminates the articles by the target authors (\u201ctrue\u201d articles) from those by other homonymous authors (\u201cfalse\u201d articles). Author name searches for 2,595 \u201csource\u201d authors in six subject fields retrieved about 629,000 articles. In order to extract true articles from the large amount of the retrieved articles, including many false ones, two filtering stages were applied. At the first stage any retrieved article was eliminated as false if either its affiliation addresses had little similarity to those of its source article or there was no citation relationship between the journal of the retrieved article and that of its source article. At the second stage, a sample of retrieved articles was subjected to manual judgment, and utilizing the judgment results, discrimination functions based on logistic regression were defined. These discrimination functions demonstrated both the recall ratio and the precision of about 95% and the accuracy (correct answer ratio) of 90\u201395%. Existence of common coauthor(s), address similarity, title words similarity, and interjournal citation relationships between the retrieved and source articles were found to be the effective discrimination predictors. Whether or not the source author was from a specific country was also one of the important predictors. Furthermore, it was shown that a retrieved article is almost certainly true if it was cited by, or cocited with, its source article. The method proposed in this study would be effective when dealing with a large number of articles whose subject fields and affiliation addresses vary widely. \u00a9 2011 Wiley Periodicals, Inc.", "which Method ?", "Logistic regression", 840.0, 859.0], ["This paper addresses a fleet-sizing problem in the context of the truck-rental industry. Specifically, trucks that vary in capacity and age are utilized over space and time to meet customer demand. Operational decisions (including demand allocation and empty truck repositioning) and tactical decisions (including asset procurements and sales) are explicitly examined in a linear programming model to determine the optimal fleet size and mix. The method uses a time-space network, common to fleet-management problems, but also includes capital cost decisions, wherein assets of different ages carry different costs, as is common to replacement analysis problems. A two-phase solution approach is developed to solve large-scale instances of the problem. Phase I allocates customer demand among assets through Benders decomposition with a demand-shifting algorithm assuring feasibility in each subproblem. Phase II uses the initial bounds and dual variables from Phase I and further improves the solution convergence without increasing computer memory requirements through the use of Lagrangian relaxation. Computational studies are presented to show the effectiveness of the approach for solving large problems within reasonable solution gaps.", "which Method ?", "Linear programming", 373.0, 391.0], ["Accurate prediction of network paths between arbitrary hosts on the Internet is of vital importance for network operators, cloud providers, and academic researchers. 
We present PredictRoute, a system that predicts network paths between hosts on the Internet using historical knowledge of the data and control plane. In addition to feeding on freely available traceroutes and BGP routing tables, PredictRoute optimally explores network paths towards chosen BGP prefixes. PredictRoute's strategy for exploring network paths discovers 4X more autonomous system (AS) hops than other well-known strategies used in practice today. Using a corpus of traceroutes, PredictRoute trains probabilistic models of routing towards prefixes on the Internet to predict network paths and their likelihood. PredictRoute's AS-path predictions differ from the measured path by at most 1 hop, 75% of the time. We expose PredictRoute's path prediction capability via a REST API to facilitate its inclusion in other applications and studies. We additionally demonstrate the utility of PredictRoute in improving real-world applications for circumventing Internet censorship and preserving anonymity online.", "which Method ?", "PredictRoute trains probabilistic models of routing towards prefixes on the Internet to predict network paths and their likelihood.", NaN, NaN], ["This paper addresses the problem of name disambiguation in the context of digital libraries that administer bibliographic citations. The problem occurs when multiple authors share a common name or when multiple name variations for an author appear in citation records. Name disambiguation is not a trivial task, and most digital libraries do not provide an efficient way to accurately identify the citation records for an author. Furthermore, lack of complete meta-data information in digital libraries hinders the development of a generic algorithm that can be applicable to any dataset. We propose a heuristic-based, unsupervised and adaptive method that also examines users' interactions in order to include users' feedback in the disambiguation process. Moreover, the method exploits important features associated with author and citation records, such as co-authors, affiliation, publication title, venue, etc., creating a multilayered hierarchical clustering algorithm which transforms itself according to the available information, and forms clusters of unambiguous records. Our experiments on a set of researchers' names considered to be highly ambiguous produced high precision and recall results, and decisively affirmed the viability of our algorithm.", "which Method ?", "Heuristic-based", 599.0, 614.0], ["Twitter, with its rising popularity as a micro-blogging website, has inevitably attracted the attention of spammers. Spammers use myriad of techniques to evade security mechanisms and post spam messages, which are either unwelcome advertisements for the victim or lure victims in to clicking malicious URLs embedded in spam tweets. In this paper, we propose several novel features capable of distinguishing spam accounts from legitimate accounts. The features analyze the behavioral and content entropy, bait-techniques, and profile vectors characterizing spammers, which are then fed into supervised learning algorithms to generate models for our tool, CATS. Using our system on two real-world Twitter data sets, we observe a 96% detection rate with about 0.8% false positive rate beating state of the art detection approach. Our analysis reveals detection of more than 90% of spammers with less than five tweets and about half of the spammers detected with only a single tweet. 
Our feature computation has low latency and resource requirement making fast detection feasible. Additionally, we cluster the unknown spammers to identify and understand the prevalent spam campaigns on Twitter.", "which Method ?", "Supervised learning algorithms", 590.0, 620.0], ["Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and \u201cfeeling left out\u201d can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.", "which Method ?", "focus groups", 671.0, 683.0], ["Wireless sensor networks consist of small battery powered devices with limited energy resources. Once deployed, the small sensor nodes are usually inaccessible to the user, and thus replacement of the energy source is not feasible. Hence, energy efficiency is a key design issue that needs to be enhanced in order to improve the life span of the network. Several network layer protocols have been proposed to improve the effective lifetime of a network with a limited energy supply. In this article we propose a centralized routing protocol called base-station controlled dynamic clustering protocol (BCDCP), which distributes the energy dissipation evenly among all sensor nodes to improve network lifetime and average energy savings. The performance of BCDCP is then compared to clustering-based schemes such as low-energy adaptive clustering hierarchy (LEACH), LEACH-centralized (LEACH-C), and power-efficient gathering in sensor information systems (PEGASIS). Simulation results show that BCDCP reduces overall energy consumption and improves network lifetime over its comparatives.", "which Method ?", "Centralized", 512.0, 523.0], ["Biomedical research is growing at such an exponential pace that scientists, researchers, and practitioners are no more able to cope with the amount of published literature in the domain. The knowledge presented in the literature needs to be systematized in such a way that claims and hypotheses can be easily found, accessed, and validated. Knowledge graphs can provide such a framework for semantic knowledge representation from literature. However, in order to build a knowledge graph, it is necessary to extract knowledge as relationships between biomedical entities and normalize both entities and relationship types. 
In this paper, we present and compare a few rule-based and machine learning-based (Naive Bayes, Random Forests as examples of traditional machine learning methods and DistilBERT and T5-based models as examples of modern deep learning transformers) methods for scalable relationship extraction from biomedical literature, and for the integration into the knowledge graphs. We examine how resilient are these various methods to unbalanced and fairly small datasets, showing that transformer-based models handle well both small datasets, due to pre-training on large C4 dataset, as well as unbalanced data. The best performing model was the DistilBERT-based model fine-tuned on balanced data, with a reported F1-score of 0.89.", "which Method ?", "machine learning method", NaN, NaN], ["Abstract Following the introduction of unprecedented “stay-at-home” national policies, the COVID-19 pandemic recently started declining in Europe. Our research aims were to characterize the changepoint in the flow of the COVID-19 epidemic in each European country and to evaluate the association of the level of social distancing with the observed decline in the national epidemics. Interrupted time series analyses were conducted in 28 European countries. Social distance index was calculated based on Google Community Mobility Reports. Changepoints were estimated by threshold regression, national findings were analyzed by Poisson regression, and the effect of social distancing in mixed effects Poisson regression model. Our findings identified the most probable changepoints in 28 European countries. Before changepoint, incidence of new COVID-19 cases grew by 24% per day on average. From the changepoint, this growth rate was reduced to 0.9%, 0.3% increase, and to 0.7% and 1.7% decrease by increasing social distancing quartiles. The beneficial effect of higher social distance quartiles (i.e., turning the increase into decline) was statistically significant for the fourth quartile. Notably, many countries in lower quartiles also achieved a flat epidemic curve. In these countries, other plausible COVID-19 containment measures could contribute to controlling the first wave of the disease. The association of social distance quartiles with viral spread could also be hindered by local bottlenecks in infection control. Our results allow for moderate optimism related to the gradual lifting of social distance measures in the general population, and call for specific attention to the protection of focal micro-societies enriching high-risk elderly subjects, including nursing homes and chronic care facilities.", "which Method ?", "Interrupted time series", 383.0, 406.0], ["Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. 
Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.", "which Method ?", "metadata import workflow", 893.0, 917.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Method ?", "proposed deep neural model", 1580.0, 1606.0], ["Abstract. The Indian Ocean (44\u00b0 S\u201330\u00b0 N) plays an important role in the global carbon cycle, yet it remains one of the most poorly sampled ocean regions. Several approaches have been used to estimate net sea\u2013air CO2 fluxes in this region: interpolated observations, ocean biogeochemical models, atmospheric and ocean inversions. 
As part of the RECCAP (REgional Carbon Cycle Assessment and Processes) project, we combine these different approaches to quantify and assess the magnitude and variability in Indian Ocean sea\u2013air CO2 fluxes between 1990 and 2009. Using all of the models and inversions, the median annual mean sea\u2013air CO2 uptake of \u22120.37 \u00b1 0.06 PgC yr\u22121 is consistent with the \u22120.24 \u00b1 0.12 PgC yr\u22121 calculated from observations. The fluxes from the southern Indian Ocean (18\u201344\u00b0 S; \u22120.43 \u00b1 0.07 PgC yr\u22121) are similar in magnitude to the annual uptake for the entire Indian Ocean. All models capture the observed pattern of fluxes in the Indian Ocean with the following exceptions: underestimation of upwelling fluxes in the northwestern region (off Oman and Somalia), overestimation in the northeastern region (Bay of Bengal) and underestimation of the CO2 sink in the subtropical convergence zone. These differences were mainly driven by lack of atmospheric CO2 data in atmospheric inversions, and poor simulation of monsoonal currents and freshwater discharge in ocean biogeochemical models. Overall, the models and inversions do capture the phase of the observed seasonality for the entire Indian Ocean but overestimate the magnitude. The predicted sea\u2013air CO2 fluxes by ocean biogeochemical models (OBGMs) respond to seasonal variability with strong phase lags with reference to climatological CO2 flux, whereas the atmospheric inversions predicted an order of magnitude higher seasonal flux than OBGMs. The simulated interannual variability by the OBGMs is weaker than that found by atmospheric inversions. Prediction of such weak interannual variability in CO2 fluxes by atmospheric inversions was mainly caused by a lack of atmospheric data in the Indian Ocean. The OBGM models suggest a small strengthening of the sink over the period 1990\u20132009 of \u22120.01 PgC decade\u22121. This is inconsistent with the observations in the southwestern Indian Ocean that shows the growth rate of oceanic pCO2 was faster than the observed atmospheric CO2 growth, a finding attributed to the trend of the Southern Annular Mode (SAM) during the 1990s.\n ", "which Method ?", "Ocean biogeochemical models", 274.0, 301.0], ["Over the past decades, a tremendous amount of research has been done on the use of machine learning for speech processing applications, especially speech recognition. However, in the past few years, research has focused on utilizing deep learning for speech-related applications. This new area of machine learning has yielded far better results when compared to others in a variety of applications including speech, and thus became a very attractive area of research. 
This paper provides a thorough examination of the different studies that have been conducted since 2006, when deep learning first arose as a new area of machine learning, for speech applications. A thorough statistical analysis is provided in this review which was conducted by extracting specific information from 174 papers published between the years 2006 and 2018. The results provided in this paper shed light on the trends of research in this area as well as bring focus to new research topics.", "which Method ?", "Machine learning", 83.0, 99.0], ["We report the first measurements of new production (15N tracer technique), the component of primary production that sustains on extraneous nutrient inputs to the euphotic zone, in the Bay of Bengal. Experiments done in two different seasons consistently show high new production (averaging around 4 mmol N m\u22122 d\u22121 during post monsoon and 5.4 mmol N m\u22122 d\u22121 during pre monsoon), validating the earlier conjecture of high new production, based on pCO2 measurements, in the Bay. Averaged over annual time scales, higher new production could cause higher rate of removal of organic carbon. This could also be one of the reasons for comparable organic carbon fluxes observed in the sediment traps of the Bay of Bengal and the eastern Arabian Sea. Thus, oceanic regions like Bay of Bengal may play a more significant role in removing the excess CO2 from the atmosphere than hitherto believed.", "which Method ?", "15N tracer", 52.0, 62.0], ["Due to the imbalance of energy consumption of nodes in wireless sensor networks (WSNs), some local nodes die prematurely, which causes the network partitions and then shortens the lifetime of the network. The phenomenon is called \u201chot spot\u201d or \u201cenergy hole\u201d problem. For this problem, an energy-aware distributed unequal clustering protocol (EADUC) in multihop heterogeneous WSNs is proposed. Compared with the previous protocols, the cluster heads obtained by EADUC can achieve balanced energy, good distribution, and seamless coverage for all the nodes. Moreover, the complexity of time and control message is low. Simulation experiments show that EADUC can prolong the lifetime of the network significantly.", "which Method ?", "Distributed", 301.0, 312.0], ["Student understanding and retention can be enhanced and improved by providing alternative learning activities and environments. Education theory recognizes the value of incorporating alternative activities (games, exercises and simulations) to stimulate student interest in the educational environment, enhance transfer of knowledge and improve learned retention with meaningful repetition. In this case study, we investigate using an online version of the television game show, \u2018Deal or No Deal\u2019, to enhance student understanding and retention by playing the game to learn expected value in an introductory statistics course, and to foster development of critical thinking skills necessary to succeed in the modern business environment. Enhancing the thinking process of problem solving using repetitive games should also improve a student's ability to follow non-mathematical problem-solving processes, which should improve the overall ability to process information and make logical decisions. 
Learning and retention are measured to evaluate the success of the students\u2019 performance.", "which Method ?", "Case Study", 399.0, 409.0], ["Prolonged network lifetime, scalability, and load balancing are important requirements for many ad-hoc sensor network applications. Clustering sensor nodes is an effective technique for achieving these goals. In this work, we propose a new energy-efficient approach for clustering nodes in ad-hoc sensor networks. Based on this approach, we present a protocol, HEED (hybrid energy-efficient distributed clustering), that periodically selects cluster heads according to a hybrid of their residual energy and a secondary parameter, such as node proximity to its neighbors or node degree. HEED does not make any assumptions about the distribution or density of nodes, or about node capabilities, e.g., location-awareness. The clustering process terminates in O(1) iterations, and does not depend on the network topology or size. The protocol incurs low overhead in terms of processing cycles and messages exchanged. It also achieves fairly uniform cluster head distribution across the network. A careful selection of the secondary clustering parameter can balance load among cluster heads. Our simulation results demonstrate that HEED outperforms weight-based clustering protocols in terms of several cluster characteristics. We also apply our approach to a simple application to demonstrate its effectiveness in prolonging the network lifetime and supporting data aggregation.", "which Method ?", "Distributed", 391.0, 402.0], ["Abstract We estimated the reproduction number of 2020 Iranian COVID-19 epidemic using two different methods: R 0 was estimated at 4.4 (95% CI, 3.9, 4.9) (generalized growth model) and 3.50 (1.28, 8.14) (epidemic doubling time) (February 19 - March 1) while the effective R was estimated at 1.55 (1.06, 2.57) (March 6-19).", "which Method ?", "generalized growth model", 154.0, 178.0], ["Vision is becoming more and more common in applications such as localization, autonomous navigation, path finding and many other computer vision applications. This paper presents an improved technique for feature matching in the stereo images captured by the autonomous vehicle. The Scale Invariant Feature Transform (SIFT) algorithm is used to extract distinctive invariant features from images but this algorithm has a high complexity and a long computational time. In order to reduce the computation time, this paper proposes a SIFT improvement technique based on a Self-Organizing Map (SOM) to perform the matching procedure more efficiently for feature matching problems. Experimental results on real stereo images show that the proposed algorithm performs feature group matching with lower computation time than the original SIFT algorithm. The results showing improvement over the original SIFT are validated through matching examples between different pairs of stereo images. The proposed algorithm can be applied to stereo vision based autonomous vehicle navigation for obstacle avoidance, as well as many other feature matching and computer vision applications.", "which Method ?", "Local", NaN, NaN], ["This paper addresses the problem of name disambiguation in the context of digital libraries that administer bibliographic citations. The problem occurs when multiple authors share a common name or when multiple name variations for an author appear in citation records. 
Name disambiguation is not a trivial task, and most digital libraries do not provide an efficient way to accurately identify the citation records for an author. Furthermore, lack of complete meta-data information in digital libraries hinders the development of a generic algorithm that can be applicable to any dataset. We propose a heuristic-based, unsupervised and adaptive method that also examines users\u2019 interactions in order to include users\u2019 feedback in the disambiguation process. Moreover, the method exploits important features associated with author and citation records, such as co-authors, affiliation, publication title, venue, etc., creating a multilayered hierarchical clustering algorithm which transforms itself according to the available information, and forms clusters of unambiguous records. Our experiments on a set of researchers\u2019 names considered to be highly ambiguous produced high precision and recall results, and decisively affirmed the viability of our algorithm.", "which Method ?", "Unsupervised and Adaptive", 616.0, 641.0], ["A new stereo matching algorithm is introduced that performs iterative refinement on the results of adaptive support-weight stereo matching. During each iteration of disparity refinement, adaptive support-weights are used by the algorithm to penalize disparity differences within local windows. Analytical results show that the addition of iterative refinement to adaptive support-weight stereo matching does not significantly increase complexity. In addition, this new algorithm does not rely on image segmentation or plane fitting, which are used by the majority of the most accurate stereo matching algorithms. As a result, this algorithm has lower complexity, is more suitable for parallel implementation, and does not force locally planar surfaces within the scene. When compared to other algorithms that do not rely on image segmentation or plane fitting, results show that the new stereo matching algorithm is one of the most accurate listed on the Middlebury performance benchmark.", "which Method ?", "Local", 279.0, 284.0], ["Clustering is a standard approach for achieving efficient and scalable performance in wireless sensor networks. Most of the published clustering algorithms strive to generate the minimum number of disjoint clusters. However, we argue that guaranteeing some degree of overlap among clusters can facilitate many applications, like inter-cluster routing, topology discovery and node localization, recovery from cluster head failure, etc. We formulate the overlapping multi-hop clustering problem as an extension to the k-dominating set problem. Then we propose MOCA; a randomized distributed multi-hop clustering algorithm for organizing the sensors into overlapping clusters. We validate MOCA in a simulated environment and analyze the effect of different parameters, e.g. node density and network connectivity, on its performance. The simulation results demonstrate that MOCA is scalable, introduces low overhead and produces approximately equal-sized clusters.", "which Method ?", "Distributed", 577.0, 588.0], ["Background Visual atypicalities in autism spectrum disorder (ASD) are a well documented phenomenon, beginning as early as 2\u20136 months of age and manifesting in a significantly decreased attention to the eyes, direct gaze and socially salient information. 
Early emerging neurobiological deficits in perceiving social stimuli as rewarding or its active avoidance due to the anxiety it entails have been widely purported as potential reasons for this atypicality. Parallel research evidence also points to the significant benefits of animal presence for reducing social anxiety and enhancing social interaction in children with autism. While atypicality in social attention in ASD has been widely substantiated, whether this atypicality persists equally across species types or is confined to humans has not been a key focus of research insofar. Methods We attempted a comprehensive examination of the differences in visual attention to static images of human and animal faces (40 images; 20 human faces and 20 animal faces) among children with ASD using an eye tracking paradigm. 44 children (ASD n = 21; TD n = 23) participated in the study (10,362 valid observations) across five regions of interest (left eye, right eye, eye region, face and screen). Results Results obtained revealed significantly greater social attention across human and animal stimuli in typical controls when compared to children with ASD. However in children with ASD, a significantly greater attention allocation was seen to animal faces and eye region and lesser attention to the animal mouth when compared to human faces, indicative of a clear attentional preference to socially salient regions of animal stimuli. The positive attentional bias toward animals was also seen in terms of a significantly greater visual attention to direct gaze in animal images. Conclusion Our results suggest the possibility that atypicalities in social attention in ASD may not be uniform across species. It adds to the current neural and biomarker evidence base of the potentially greater social reward processing and lesser social anxiety underlying animal stimuli as compared to human stimuli in children with ASD.", "which Method ?", "eye tracking", 1054.0, 1066.0], ["Binocular stereo matching is one of the most important algorithms in the field of computer vision. Adaptive support-weight approaches, the current state-of-the-art local methods, produce results comparable to those generated by global methods. However, excessive time consumption is the main problem of these algorithms since the computational complexity is proportionally related to the support window size. In this paper, we present a novel cost aggregation method inspired by domain transformation, a recently proposed dimensionality reduction technique. This transformation enables the aggregation of 2-D cost data to be performed using a sequence of 1-D filters, which lowers computation and memory costs compared to conventional 2-D filters. Experiments show that the proposed method outperforms the state-of-the-art local methods in terms of computational performance, since its computational complexity is independent of the input parameters. Furthermore, according to the experimental results with the Middlebury dataset and real-world images, our algorithm is currently one of the most accurate and efficient local algorithms.", "which Method ?", "Local", 164.0, 169.0], ["\nPurpose\nIn terms of entrepreneurship, open data benefits include economic growth, innovation, empowerment and new or improved products and services. Hackathons encourage the development of new applications using open data and the creation of startups based on these applications. 
Researchers focus on factors that affect nascent entrepreneurs\u2019 decision to create a startup but researches in the field of open data hackathons have not been fully investigated yet. This paper aims to suggest a model that incorporates factors that affect the decision of establishing a startup by developers who have participated in open data hackathons.\n\n\nDesign/methodology/approach\nIn total, 70 papers were examined and analyzed using a three-phased literature review methodology, which was suggested by Webster and Watson (2002). These surveys investigated several factors that affect a nascent entrepreneur to create a startup.\n\n\nFindings\nEventually, by identifying the motivations for developers to participate in a hackathon, and understanding the benefits of the use of open data, researchers will be able to elaborate the proposed model and evaluate if the contest has contributed to the decision of establish a startup and what factors affect the decision to establish a startup apply to open data developers, and if the participants of the contest agree with these factors.\n\n\nOriginality/value\nThe paper expands the scope of open data research on entrepreneurship field, stating the need for more research to be conducted regarding the open data in entrepreneurship through hackathons.\n", "which Method ?", "three-phased literature review methodology", 898.0, 940.0], ["In this paper, we propose an efficient framework for edge-preserving stereo matching. Local methods for stereo matching are more suitable than global methods for real-time applications. Moreover, we can obtain accurate depth maps by using edge-preserving filter for the cost aggregation process in local stereo matching. The computational cost is high, since we must perform the filter for every number of disparity ranges if the order of the edge-preserving filter is constant time. Therefore, we propose an efficient iterative framework which propagates edge-awareness by using single time edge preserving filtering. In our framework, box filtering is used for the cost aggregation, and then the edge-preserving filtering is once used for refinement of the obtained depth map from the box aggregation. After that, we iteratively estimate a new depth map by local stereo matching which utilizes the previous result of the depth map for feedback of the matching cost. Note that the kernel size of the box filter is varied as coarse-to-fine manner at each iteration. Experimental results show that small and large areas of incorrect regions are gradually corrected. Finally, the accuracy of the depth map estimated by our framework is comparable to the state-of-the-art of stereo matching methods with global optimization methods. Moreover, the computational time of our method is faster than the optimization based method.", "which Method ?", "Local", 86.0, 91.0], ["Most current statistical natural language processing models use only local features so as to permit dynamic programming in inference, but this makes them unable to fully account for the long distance structure that is prevalent in language use. We show how to solve this dilemma with Gibbs sampling, a simple Monte Carlo method used to perform approximate inference in factored probabilistic models. By using simulated annealing in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference. 
We use this technique to augment an existing CRF-based information extraction system with long-distance dependency models, enforcing label consistency and extraction template consistency constraints. This technique results in an error reduction of up to 9% over state-of-the-art systems on two established information extraction tasks.", "which Method ?", "Gibbs sampling", 284.0, 298.0], ["Abstract The novel coronavirus disease 2019 (COVID-19) caused by SARS-COV-2 has raised myriad of global concerns. There is currently no FDA approved antiviral strategy to alleviate the disease burden. The conserved 3-chymotrypsin-like protease (3CLpro), which controls coronavirus replication is a promising drug target for combating the coronavirus infection. This study screens some African plants derived alkaloids and terpenoids as potential inhibitors of coronavirus 3CLpro using in silico approach. Bioactive alkaloids (62) and terpenoids (100) of plants native to Africa were docked to the 3CLpro of the novel SARS-CoV-2. The top twenty alkaloids and terpenoids with high binding affinities to the SARS-CoV-2 3CLpro were further docked to the 3CLpro of SARS-CoV and MERS-CoV. The docking scores were compared with 3CLpro-referenced inhibitors (Lopinavir and Ritonavir). The top docked compounds were further subjected to ADEM/Tox and Lipinski filtering analyses for drug-likeness prediction analysis. This ligand-protein interaction study revealed that more than half of the top twenty alkaloids and terpenoids interacted favourably with the coronaviruses 3CLpro, and had binding affinities that surpassed that of lopinavir and ritonavir. Also, a highly defined hit-list of seven compounds (10-Hydroxyusambarensine, Cryptoquindoline, 6-Oxoisoiguesterin, 22-Hydroxyhopan-3-one, Cryptospirolepine, Isoiguesterin and 20-Epibryonolic acid) were identified. Furthermore, four non-toxic, druggable plant derived alkaloids (10-Hydroxyusambarensine, and Cryptoquindoline) and terpenoids (6-Oxoisoiguesterin and 22-Hydroxyhopan-3-one), that bind to the receptor-binding site and catalytic dyad of SARS-CoV-2 3CLpro were identified from the predictive ADME/tox and Lipinski filter analysis. However, further experimental analyses are required for developing these possible leads into natural anti-COVID-19 therapeutic agents for combating the pandemic. Communicated by Ramaswamy H. Sarma", "which Method ?", "in silico", 485.0, 494.0], ["Abstract Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from 12 to 17 months cycle), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered, its new primary focus is on productivity and agility, to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. 
We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality-control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. monthly) releases with over 21 billion triples with minimal publishing effort.", "which Method ?", "DBpedia Information Extraction Framework (DIEF)", NaN, NaN], ["Name ambiguity has long been viewed as a challenging problem in many applications, such as scientific literature management, people search, and social network analysis. When we search a person name in these systems, many documents (e.g., papers, web pages) containing that person's name may be returned. It is hard to determine which documents are about the person we care about. Although much research has been conducted, the problem remains largely unsolved, especially with the rapid growth of the people information available on the Web. In this paper, we try to study this problem from a new perspective and propose an ADANA method for disambiguating person names via active user interactions. In ADANA, we first introduce a pairwise factor graph (PFG) model for person name disambiguation. The model is flexible and can be easily extended by incorporating various features. Based on the PFG model, we propose an active name disambiguation algorithm, aiming to improve the disambiguation performance by maximizing the utility of the user's correction. Experimental results on three different genres of data sets show that with only a few user corrections, the error rate of name disambiguation can be reduced to 3.1%. A real system has been developed based on the proposed method and is available online.", "which Method ?", "Pairwise Factor Graph", 730.0, 751.0], ["The classical local disparity methods use simple and efficient structure to reduce the computation complexity. To increase the accuracy of the disparity map, new local methods utilize additional processing steps such as iteration, segmentation, calibration and propagation, similar to global methods. In this paper, we present an efficient one-pass local method with no iteration. The proposed method is also extended to video disparity estimation by using motion information as well as imposing spatial temporal consistency. In local method, the accuracy of stereo matching depends on precise similarity measure and proper support window. For the accuracy of similarity measure, we propose a novel three-moded cross census transform with a noise buffer, which increases the robustness to image noise in flat areas. The proposed similarity measure can be used in the same form in both stereo images and videos. We further improve the reliability of the aggregation by adopting the advanced support weight and incorporating motion flow to achieve better depth map near moving edges in video scene. The experimental results show that the proposed method is the best performing local method on the Middlebury stereo benchmark test and outperforms the other state-of-the-art methods on video disparity evaluation.", "which Method ?", "Local", 14.0, 19.0], ["In the well\u2010known vehicle routing problem (VRP), a set of identical vehicles located at a central depot is to be optimally routed to supply customers with known demands subject to vehicle capacity constraints. An important variant of the VRP arises when a mixed fleet of vehicles, characterized by different capacities and costs, is available for distribution activities. 
The problem is known as fleet size and mix VRP with fixed costs FSMF and has several practical applications. In this article, we present a new mixed integer programming formulation for FSMF based on a two\u2010commodity network flow approach. New valid inequalities are proposed to strengthen the linear programming relaxation of the mathematical formulation. The effectiveness of the proposed cuts is extensively tested on benchmark instances. \u00a9 2009 Wiley Periodicals, Inc. NETWORKS, 2009", "which Method ?", "Mixed integer programming", 515.0, 540.0], ["This paper presents a new deterministic annealing metaheuristic for the fleet size and mix vehicle-routing problem with time windows. The objective is to service, at minimal total cost, a set of customers within their time windows by a heterogeneous capacitated vehicle fleet. First, we motivate and define the problem. We then give a mathematical formulation of the most studied variant in the literature in the form of a mixed-integer linear program. We also suggest an industrially relevant, alternative definition that leads to a linear mixed-integer formulation. The suggested metaheuristic solution method solves both problem variants and comprises three phases. In Phase 1, high-quality initial solutions are generated by means of a savings-based heuristic that combines diversification strategies with learning mechanisms. In Phase 2, an attempt is made to reduce the number of routes in the initial solution with a new local search procedure. In Phase 3, the solution from Phase 2 is further improved by a set of four local search operators that are embedded in a deterministic annealing framework to guide the improvement process. Some new implementation strategies are also suggested for efficient time window feasibility checks. Extensive computational experiments on the 168 benchmark instances have shown that the suggested method outperforms the previously published results and found 167 best-known solutions. Experimental results are also given for the new problem variant.", "which Method ?", "Deterministic annealing", 26.0, 49.0], ["Hot spots in a wireless sensor network emerge as locations under heavy traffic load. Nodes in such areas quickly deplete energy resources, leading to disruption in network services. This problem is common for data collection scenarios in which Cluster Heads (CH) have a heavy burden of gathering and relaying information. The relay load on CHs especially intensifies as the distance to the sink decreases. To balance the traffic load and the energy consumption in the network, the CH role should be rotated among all nodes and the cluster sizes should be carefully determined at different parts of the network. This paper proposes a distributed clustering algorithm, Energy-efficient Clustering (EC), that determines suitable cluster sizes depending on the hop distance to the data sink, while achieving approximate equalization of node lifetimes and reduced energy consumption levels. We additionally propose a simple energy-efficient multihop data collection protocol to evaluate the effectiveness of EC and calculate the end-to-end energy consumption of this protocol; yet EC is suitable for any data collection protocol that focuses on energy conservation. 
Performance results demonstrate that EC extends network lifetime and achieves energy equalization more effectively than two well-known clustering algorithms, HEED and UCR.", "which Method ?", "Distributed", 633.0, 644.0], ["Name disambiguation, which aims to identify multiple names which correspond to one person and same names which refer to different persons, is one of the most important basic problems in many areas such as natural language processing, information retrieval and digital libraries. Microsoft academic search data in KDD Cup 2013 Track 2 task brings one such challenge to the researchers in the knowledge discovery and data mining community. Besides the real-world and large-scale characteristic, the Track 2 task raises several challenges: (1) Consideration of both synonym and polysemy problems; (2) Existence of huge amount of noisy data with missing attributes; (3) Absence of labeled data that makes this challenge a cold start problem. In this paper, we describe our solution to Track 2 of KDD Cup 2013. The challenge of this track is author disambiguation, which aims at identifying whether authors are the same person by using academic publication data. We propose a multi-phase semi-supervised approach to deal with the challenge. First, we preprocess the dataset and generate features for models, then construct a coauthor-based network and employ community detection to accomplish first-phase disambiguation task, which handles the cold-start problem. Second, using results in first phase, we use support vector machine and various other models to utilize noisy data with missing attributes in the dataset. Further, we propose a self-taught procedure to solve ambiguity in coauthor information, boosting performance of results from other models. Finally, by blending results from different models, we finally achieve 6th place with 0.98717 mean F-score on public leaderboard and 7th place with 0.98651 mean F-score on private leaderboard.", "which Method ?", "Support Vector Machine", 1304.0, 1326.0], ["The question/answer-based computer game Age of Computers was introduced to replace traditional weekly paper exercises in a course in computer fundamentals in 2003. Questionnaire evaluations and observation of student behavior have indicated that the students found the game more motivating than paper exercises and that a majority of the students also perceived the game to have a higher learning effect than paper exercises or textbook reading. This paper reports on a controlled experiment to compare the learning effectiveness of game play with traditional paper exercises, as well as with textbook reading. The results indicated that with equal time being spent on the various learning activities, the effect of game play was only equal to that of the other activities, not better. Yet this result is promising enough, as the increased motivation means that students work harder in the course. Also, the results indicate that the game has potential for improvement, in particular with respect to its feedback on the more complicated questions.", "which Method ?", "experiment", 481.0, 491.0], ["The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. 
Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.", "which Method ?", "novel pulse-chase strategy (Fierro-Monti et al., accompanying article)", NaN, NaN], ["Matching cost aggregation is one of the oldest and still popular methods for stereo correspondence. While effective and efficient, cost aggregation methods typically aggregate the matching cost by summing/averaging over a user-specified, local support region. This is obviously only locally-optimal, and the computational complexity of the full-kernel implementation usually depends on the region size. In this paper, the cost aggregation problem is re-examined and a non-local solution is proposed. The matching cost values are aggregated adaptively based on pixel similarity on a tree structure derived from the stereo image pair to preserve depth edges. The nodes of this tree are all the image pixels, and the edges are all the edges between the nearest neighboring pixels. The similarity between any two pixels is decided by their shortest distance on the tree. The proposed method is non-local as every node receives supports from all other nodes on the tree. As can be expected, the proposed non-local solution outperforms all local cost aggregation methods on the standard (Middlebury) benchmark. Besides, it has great advantage in extremely low computational complexity: only a total of 2 addition/subtraction operations and 3 multiplication operations are required for each pixel at each disparity level. It is very close to the complexity of unnormalized box filtering using integral image which requires 6 addition/subtraction operations. Unnormalized box filter is the fastest local cost aggregation method but blurs across depth edges. The proposed method was tested on a MacBook Air laptop computer with a 1.8 GHz Intel Core i7 CPU and 4 GB memory. 
The average runtime on the Middlebury data sets is about 90 milliseconds, and is only about 1.25\u00d7 slower than unnormalized box filter. A non-local disparity refinement method is also proposed based on the non-local cost aggregation method.", "which Method ?", "Local", 238.0, 243.0], ["Census transform is a non-parametric local transform. Its weakness is that the results relied on the center pixel too much. This paper proposes a modified Census transform based on the neighborhood information for stereo matching. By improving the classic Census transform, the new technique utilizes more bits to represent the differences between the pixel and its neighborhood information. The result image of the modified Census transform has more detailed information at depth discontinuity. After stereo correspondence, sub-pixel interpolation and the disparity refinement, a better dense disparity map can be obtained. The experiments present that the proposed algorithm has simple mechanism and strong robustness. It can improve the accuracy of matching and is applicable to hardware systems.", "which Method ?", "Local", 37.0, 42.0], ["Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster based station (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.", "which Method ?", "Distributed", 9.0, 20.0], ["Although graph cuts (GC) is popularly used in many computer vision problems, slow execution time due to its high complexity hinders wide usage. Manycore solution using Graphics Processing Unit (GPU) may solve this problem. However, conventional GC implementation does not fully exploit GPU's computing power. To address this issue, a new GC algorithm which is suitable for GPU environment is presented in this paper. First, we present a novel graph construction method that accelerates the convergence speed of GC. Next, a repetitive block-based push and relabel method is used to increase the data transfer efficiency. Finally, we propose a low-overhead global relabeling algorithm to increase the GPU occupancy ratio. The experiments on Middlebury stereo dataset shows that 5.2X speedup can be achieved over the baseline implementation, with identical GPU platform and parameters.", "which Method ?", "Global", 655.0, 661.0], ["We study the impact of heterogeneity of nodes, in terms of their energy, in wireless sensor networks that are hierarchically clustered. 
In these networks some of the nodes become cluster heads, aggregate the data of their cluster members and transmit it to the sink. We assume that a percentage of the population of sensor nodes is equipped with additional energy resources\u2014this is a source of heterogeneity which may result from the initial setting or as the operation of the network evolves. We also assume that the sensors are randomly (uniformly) distributed and are not mobile, the coordinates of the sink and the dimensions of the sensor field are known. We show that the behavior of such sensor networks becomes very unstable once the first node dies, especially in the presence of node heterogeneity. Classical clustering protocols assume that all the nodes are equipped with the same amount of energy and as a result, they can not take full advantage of the presence of node heterogeneity. We propose SEP, a heterogeneous-aware protocol to prolong the time interval before the death of the first node (we refer to as stability period), which is crucial for many applications where the feedback from the sensor network must be reliable. SEP is based on weighted election probabilities of each node to become cluster head according to the remaining energy in each node. We show by simulation that SEP always prolongs the stability period compared to (and that the average throughput is greater than) the one obtained using current clustering protocols. We conclude by studying the sensitivity of our SEP protocol to heterogeneity parameters capturing energy imbalance in the network. We found that SEP yields longer stability region for higher values of extra energy brought by more powerful nodes.", "which Method ?", "Distributed", 551.0, 562.0], ["Planning the composition of a vehicle fleet in order to satisfy transportation service demands is an important resource management activity for any trucking company. Its complexity is such, however, that formal fleet management cannot be done adequately without the help of a decision support system. An important part of such a system is the generation of minimal discounted cost plans covering the purchase, replacement, sale, and/or rental of the vehicles necessary to deal with a seasonal stochastic demand. A stochastic programming model is formulated to address this problem. It reduces to a separable program based on information about the service demand, the state of the current fleet, and the cash flows generated by an acquisition/disposal plan. An efficient algorithm for solving the model is also presented. The discussion concerns the operations of a number of Canadian road carriers. >", "which Method ?", "Stochastic programming", 514.0, 536.0], ["Knowledge graphs (KGs) are widely used for modeling scholarly communication, performing scientometric analyses, and supporting a variety of intelligent services to explore the literature and predict research dynamics. However, they often suffer from incompleteness (e.g., missing affiliations, references, research topics), leading to a reduced scope and quality of the resulting analyses. This issue is usually tackled by computing knowledge graph embeddings (KGEs) and applying link prediction techniques. However, only a few KGE models are capable of taking weights of facts in the knowledge graph into account. Such weights can have different meanings, e.g. describe the degree of association or the degree of truth of a certain triple. 
In this paper, we propose the Weighted Triple Loss, a new loss function for KGE models that takes full advantage of the additional numerical weights on facts and it is even tolerant to incorrect weights. We also extend the Rule Loss, a loss function that is able to exploit a set of logical rules, in order to work with weighted triples. The evaluation of our solutions on several knowledge graphs indicates significant performance improvements with respect to the state of the art. Our main use case is the large-scale AIDA knowledge graph, which describes 21 million research articles. Our approach enables us to complete information about affiliation types, countries, and research topics, greatly improving the scope of the resulting scientometrics analyses and providing better support to systems for monitoring and predicting research dynamics.", "which Method ?", "knowledge graph embeddings", 433.0, 459.0], ["Objective: Contemporary and future outpatient long-term artificial pancreas (AP) studies need to cope with the well-known large intra- and interday glucose variability occurring in type 1 diabetic (T1D) subjects. Here, we propose an adaptive model predictive control (MPC) strategy to account for it and test it in silico. Methods: A run-to-run (R2R) approach adapts the subcutaneous basal insulin delivery during the night and the carbohydrate-to-insulin ratio (CR) during the day, based on some performance indices calculated from subcutaneous continuous glucose sensor data. In particular, R2R aims, first, to reduce the percentage of time in hypoglycemia and, secondarily, to improve the percentage of time in euglycemia and average glucose. In silico simulations are performed by using the University of Virginia/Padova T1D simulator enriched by incorporating three novel features: intra- and interday variability of insulin sensitivity, different distributions of CR at breakfast, lunch, and dinner, and dawn phenomenon. Results: After about two months, using the R2R approach with a scenario characterized by a random \u00b130% variation of the nominal insulin sensitivity, the time in range and the time in tight range are increased by 11.39% and 44.87%, respectively, and the time spent above 180 mg/dl is reduced by 48.74%. Conclusions: An adaptive MPC algorithm based on R2R shows in silico great potential to capture intra- and interday glucose variability by improving both overnight and postprandial glucose control without increasing hypoglycemia. Significance: Making an AP adaptive is key for long-term real-life outpatient studies. These good in silico results are very encouraging and worth testing in vivo.", "which Method ?", "run-to-run (R2R) approach", NaN, NaN], ["Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering and dialogue. Rooted in a long tradition in knowledge representation, all popular KBs only store positive information, but abstain from taking any stance towards statements not contained in them. In this paper, we make the case for explicitly stating interesting statements which are not true. Negative statements would be important to overcome current limitations of question answering, yet due to their potential abundance, any effort towards compiling them needs a tight coupling with ranking. We introduce two approaches towards automatically compiling negative statements. 
(i) In peer-based statistical inferences, we compare entities with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In pattern-based query log extraction, we use a pattern-based approach for harvesting search engine query logs. Experimental results show that both approaches hold promising and complementary potential. Along with this paper, we publish the first datasets on interesting negative information, containing over 1.4M statements for 130K popular Wikidata entities.", "which Method ?", "pattern-based query log extraction", 946.0, 980.0], ["In this paper, we study a cost-allocation problem that arises in a distribution-planning situation at the Logistics Department at Norsk Hydro Olje AB, Stockholm, Sweden. We consider the routes from one depot during one day. The total distribution cost for these routes is to be divided among the customers that are visited. This cost-allocation problem is formulated as a vehicle-routing game (VRG), allowing the use of vehicles with different capacities. Cost-allocation methods based on different concepts from cooperative game theory, such as the core and the nucleolus, are discussed. A procedure that can be used to investigate whether the core is empty or not is presented, as well as a procedure to compute the nucleolus. Computational results for the Norsk Hydro case are presented and discussed.", "which Method ?", "Cooperative game theory", 513.0, 536.0], ["The disparity estimation problem is commonly solved using graph cut (GC) methods, in which the disparity assignment problem is transformed to one of minimizing global energy function. Although such an approach yields an accurate disparity map, the computational cost is relatively high. Accordingly, this paper proposes a hierarchical bilateral disparity structure (HBDS) algorithm in which the efficiency of the GC method is improved without any loss in the disparity estimation performance by dividing all the disparity levels within the stereo image hierarchically into a series of bilateral disparity structures of increasing fineness. To address the well-known foreground fattening effect, a disparity refinement process is proposed comprising a fattening foreground region detection procedure followed by a disparity recovery process. The efficiency and accuracy of the HBDS-based GC algorithm are compared with those of the conventional GC method using benchmark stereo images selected from the Middlebury dataset. In addition, the general applicability of the proposed approach is demonstrated using several real-world stereo images.", "which Method ?", "Global", 160.0, 166.0], ["The production of thermoset polymers is increasing globally owing to their advantageous properties, particularly when applied as composite materials. Though these materials are traditionally used in more durable, longer-lasting applications, ultimately, they become waste at the end of their usable lifetimes. Current recycling practices are not applicable to traditional thermoset waste, owing to their network structures and lack of processability. Recently, researchers have been developing thermoset polymers with the right functionalities to be chemically degraded under relatively benign conditions postuse, providing a route to future management of thermoset waste. This review presents thermosets containing hydrolytically or solvolytically cleavable bonds, such as esters and acetals. 
Hydrolysis and solvolysis mechanisms are discussed, and various factors that influence the degradation rates are examined. Degradable thermosets with impressive mechanical, thermal, and adhesion behavior are discussed, illustrating that the design of material end of life need not limit material performance. Expected final online publication date for the Annual Review of Chemical and Biomolecular Engineering, Volume 11 is June 8, 2020. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.", "which Method ?", "solvolysis", 809.0, 819.0], ["Background: Estimating key infectious disease parameters from the COVID-19 outbreak is quintessential for modelling studies and guiding intervention strategies. Whereas different estimates for the incubation period distribution and the serial interval distribution have been reported, estimates of the generation interval for COVID-19 have not been provided. Methods: We used outbreak data from clusters in Singapore and Tianjin, China to estimate the generation interval from symptom onset data while acknowledging uncertainty about the incubation period distribution and the underlying transmission network. From those estimates we obtained the proportions pre-symptomatic transmission and reproduction numbers. Results: The mean generation interval was 5.20 (95%CI 3.78-6.78) days for Singapore and 3.95 (95%CI 3.01-4.91) days for Tianjin, China when relying on a previously reported incubation period with mean 5.2 and SD 2.8 days. The proportion of pre-symptomatic transmission was 48% (95%CI 32-67%) for Singapore and 62% (95%CI 50-76%) for Tianjin, China. Estimates of the reproduction number based on the generation interval distribution were slightly higher than those based on the serial interval distribution. Conclusions: Estimating generation and serial interval distributions from outbreak data requires careful investigation of the underlying transmission network. Detailed contact tracing information is essential for correctly estimating these quantities.", "which Method ?", "generation interval", 302.0, 321.0], ["Using mobile games in education combines situated and active learning with fun in a potentially excellent manner. The effects of a mobile city game called Frequency 1550, which was developed by The Waag Society to help pupils in their first year of secondary education playfully acquire historical knowledge of medieval Amsterdam, were investigated in terms of pupil engagement in the game, historical knowledge, and motivation for History in general and the topic of the Middle Ages in particular. A quasi-experimental design was used with 458 pupils from 20 classes from five schools. The pupils in 10 of the classes played the mobile history game whereas the pupils in the other 10 classes received a regular, project-based lesson series. The results showed those pupils who played the game to be engaged and to gain significantly more knowledge about medieval Amsterdam than those pupils who received regular project-based instruction. No significant differences were found between the two groups with respect to motivation for History or the Middle Ages. The impact of location-based technology and game-based learning on pupil knowledge and motivation are discussed along with suggestions for future research.", "which Method ?", "Quasi-experimental", 501.0, 519.0], ["This experimental study investigated whether computer-based video games facilitate children's cognitive learning. 
In comparison to traditional computer-assisted instruction (CAI), this study explored the impact of the varied types of instructional delivery strategies on children's learning achievement. One major research null hypothesis was tested: no statistically significant differences in students' achievement when they receive two different instructional treatments: (1) traditional CAI; and (2) a computer-based video game. One hundred and eight third-graders from a middle/high socio-economic standard school district in Taiwan participated in the study. Results indicate that computer-based video game playing not only improves participants' fact/recall processes (F=5.288, p<.05), but also promotes problem-solving skills by recognizing multiple solutions for problems (F=5.656, p<.05).", "which Method ?", "experiment", NaN, NaN], ["Regional air\u2010sea fluxes of anthropogenic CO2 are estimated using a Green's function inversion method that combines data\u2010based estimates of anthropogenic CO2 in the ocean with information about ocean transport and mixing from a suite of Ocean General Circulation Models (OGCMs). In order to quantify the uncertainty associated with the estimated fluxes owing to modeled transport and errors in the data, we employ 10 OGCMs and three scenarios representing biases in the data\u2010based anthropogenic CO2 estimates. On the basis of the prescribed anthropogenic CO2 storage, we find a global uptake of 2.2 \u00b1 0.25 Pg C yr\u22121, scaled to 1995. This error estimate represents the standard deviation of the models weighted by a CFC\u2010based model skill score, which reduces the error range and emphasizes those models that have been shown to reproduce observed tracer concentrations most accurately. The greatest anthropogenic CO2 uptake occurs in the Southern Ocean and in the tropics. The flux estimates imply vigorous northward transport in the Southern Hemisphere, northward cross\u2010equatorial transport, and equatorward transport at high northern latitudes. Compared with forward simulations, we find substantially more uptake in the Southern Ocean, less uptake in the Pacific Ocean, and less global uptake. The large\u2010scale spatial pattern of the estimated flux is generally insensitive to possible biases in the data and the models employed. However, the global uptake scales approximately linearly with changes in the global anthropogenic CO2 inventory. Considerable uncertainties remain in some regions, particularly the Southern Ocean.", "which Method ?", "Green's function inversion method", 67.0, 100.0], ["Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and \u201cfeeling left out\u201d can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. 
We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.", "which Method ?", "interview", NaN, NaN], ["Over the past decades, a tremendous amount of research has been done on the use of machine learning for speech processing applications, especially speech recognition. However, in the past few years, research has focused on utilizing deep learning for speech-related applications. This new area of machine learning has yielded far better results when compared to others in a variety of applications including speech, and thus became a very attractive area of research. This paper provides a thorough examination of the different studies that have been conducted since 2006, when deep learning first arose as a new area of machine learning, for speech applications. A thorough statistical analysis is provided in this review which was conducted by extracting specific information from 174 papers published between the years 2006 and 2018. The results provided in this paper shed light on the trends of research in this area as well as bring focus to new research topics.", "which Method ?", "Results", 337.0, 344.0], ["Abstract. The Indian Ocean (44\u00b0 S\u201330\u00b0 N) plays an important role in the global carbon cycle, yet it remains one of the most poorly sampled ocean regions. Several approaches have been used to estimate net sea\u2013air CO2 fluxes in this region: interpolated observations, ocean biogeochemical models, atmospheric and ocean inversions. As part of the RECCAP (REgional Carbon Cycle Assessment and Processes) project, we combine these different approaches to quantify and assess the magnitude and variability in Indian Ocean sea\u2013air CO2 fluxes between 1990 and 2009. Using all of the models and inversions, the median annual mean sea\u2013air CO2 uptake of \u22120.37 \u00b1 0.06 PgC yr\u22121 is consistent with the \u22120.24 \u00b1 0.12 PgC yr\u22121 calculated from observations. The fluxes from the southern Indian Ocean (18\u201344\u00b0 S; \u22120.43 \u00b1 0.07 PgC yr\u22121) are similar in magnitude to the annual uptake for the entire Indian Ocean. All models capture the observed pattern of fluxes in the Indian Ocean with the following exceptions: underestimation of upwelling fluxes in the northwestern region (off Oman and Somalia), overestimation in the northeastern region (Bay of Bengal) and underestimation of the CO2 sink in the subtropical convergence zone. These differences were mainly driven by lack of atmospheric CO2 data in atmospheric inversions, and poor simulation of monsoonal currents and freshwater discharge in ocean biogeochemical models. Overall, the models and inversions do capture the phase of the observed seasonality for the entire Indian Ocean but overestimate the magnitude. 
The predicted sea\u2013air CO2 fluxes by ocean biogeochemical models (OBGMs) respond to seasonal variability with strong phase lags with reference to climatological CO2 flux, whereas the atmospheric inversions predicted an order of magnitude higher seasonal flux than OBGMs. The simulated interannual variability by the OBGMs is weaker than that found by atmospheric inversions. Prediction of such weak interannual variability in CO2 fluxes by atmospheric inversions was mainly caused by a lack of atmospheric data in the Indian Ocean. The OBGM models suggest a small strengthening of the sink over the period 1990\u20132009 of \u22120.01 PgC decade\u22121. This is inconsistent with the observations in the southwestern Indian Ocean that shows the growth rate of oceanic pCO2 was faster than the observed atmospheric CO2 growth, a finding attributed to the trend of the Southern Annular Mode (SAM) during the 1990s.", "which Method ?", "Ocean inversions", 319.0, 335.0], ["This Editorial describes the rationale, focus, scope and technology behind the newly launched, open access, innovative Food Modelling Journal (FMJ). The Journal is designed to publish those outputs of the research cycle that usually precede the publication of the research article, but have their own value and re-usability potential. Such outputs are methods, models, software and data. The Food Modelling Journal is launched by the AGINFRA+ community and is integrated with the AGINFRA+ Virtual Research Environment (VRE) to facilitate and streamline the authoring, peer review and publication of the manuscripts via the ARPHA Publishing Platform.", "which Method ?", "Food modelling", 127.0, 141.0], ["We present a new window-based method for correspondence search using varying support-weights. We adjust the support-weights of the pixels in a given support window based on color similarity and geometric proximity to reduce the image ambiguity. Our method outperforms other local methods on standard stereo benchmarks.", "which Method ?", "Local", 274.0, 279.0], ["By analysing the merits and demerits of the existing linear model for fleet planning, this paper presents an algorithm which combines the linear programming technique with that of dynamic programming to improve the solution to the linear model for fleet planning. This new approach has not only the merits that the linear model for fleet planning has, but also the merit of saving computing time. The numbers of ships newly added into the fleet every year are always integers in the final optimal solution. The last feature of the solution directly meets the requirements of practical application. Both the mathematical model of the dynamic fleet planning and its algorithm are put forward in this paper. A calculating example is also given.", "which Method ?", "dynamic programming", 180.0, 199.0], ["In this paper we present a clustering scheme to create a hierarchical control structure for multi-hop wireless networks. A cluster is defined as a subset of vertices, whose induced graph is connected. In addition, a cluster is required to obey certain constraints that are useful for management and scalability of the hierarchy. All these constraints cannot be met simultaneously for general graphs, but we show how such a clustering can be obtained for wireless network topologies. 
Finally, we present an efficient distributed implementation of our clustering algorithm for a set of wireless nodes to create the set of desired clusters.", "which Method ?", "Distributed", 516.0, 527.0], ["This paper addresses a fleet-sizing problem in the context of the truck-rental industry. Specifically, trucks that vary in capacity and age are utilized over space and time to meet customer demand. Operational decisions (including demand allocation and empty truck repositioning) and tactical decisions (including asset procurements and sales) are explicitly examined in a linear programming model to determine the optimal fleet size and mix. The method uses a time-space network, common to fleet-management problems, but also includes capital cost decisions, wherein assets of different ages carry different costs, as is common to replacement analysis problems. A two-phase solution approach is developed to solve large-scale instances of the problem. Phase I allocates customer demand among assets through Benders decomposition with a demand-shifting algorithm assuring feasibility in each subproblem. Phase II uses the initial bounds and dual variables from Phase I and further improves the solution convergence without increasing computer memory requirements through the use of Lagrangian relaxation. Computational studies are presented to show the effectiveness of the approach for solving large problems within reasonable solution gaps.", "which Method ?", "Lagrangian relaxation", 1082.0, 1103.0], ["This paper describes the Semi-Global Matching (SGM) stereo method. It uses a pixelwise, Mutual Information based matching cost for compensating radiometric differences of input images. Pixelwise matching is supported by a smoothness constraint that is usually expressed as a global cost function. SGM performs a fast approximation by pathwise optimizations from all directions. The discussion also addresses occlusion detection, subpixel refinement and multi-baseline matching. Additionally, postprocessing steps for removing outliers, recovering from specific problems of structured environments and the interpolation of gaps are presented. Finally, strategies for processing almost arbitrarily large images and fusion of disparity images using orthographic projection are proposed. A comparison on standard stereo images shows that SGM is among the currently top-ranked algorithms and is best, if subpixel accuracy is considered. The complexity is linear to the number of pixels and disparity range, which results in a runtime of just 1-2s on typical test images. An in depth evaluation of the Mutual Information based matching cost demonstrates a tolerance against a wide range of radiometric transformations. Finally, examples of reconstructions from huge aerial frame and pushbroom images demonstrate that the presented ideas are working well on practical problems.", "which Method ?", "Global", 30.0, 36.0], ["Owing to the dynamic nature of sensor network applications the adoption of adaptive cluster-based topologies has many untapped desirable benefits for the wireless sensor networks. In this study, the authors explore such possibility and present an adaptive clustering algorithm to increase the network's lifetime while maintaining the required network connectivity. The proposed scheme features capability of cluster heads to adjust their power level to achieve optimal degree and maintain this value throughout the network operation. 
Under the proposed method a topology control allows an optimal degree, which results in a better distributed sensors and well-balanced clustering system enhancing networks' lifetime. The simulation results show that the proposed clustering algorithm maintains the required degree for inter-cluster connectivity on many more rounds compared with hybrid energy-efficient distributed clustering (HEED), energy-efficient clustering scheme (EECS), low-energy adaptive clustering hierarchy (LEACH) and energy-based LEACH.", "which Method ?", "Distributed", 631.0, 642.0], ["In the present economic climate, it is often the case that profits can only be improved, or for that matter maintained, by improving efficiency and cutting costs. This is particularly notorious in the shipping business, where it has been seen that the competition is getting tougher among carriers, thus alliances and partnerships are resulting for cost effective services in recent years. In this scenario, effective planning methods are important not only for strategic but also operating tasks, covering their entire transportation systems. Container fleet size planning is an important part of the strategy of any shipping line. This paper addresses the problem of fleet size planning for refrigerated containers, to achieve cost-effective services in a competitive maritime shipping market. An analytical model is first discussed to determine the optimal size of an own dry container fleet. Then, this is extended for an own refrigerated container fleet, which is the case when an extremely unbalanced trade represents one of the major investment decisions to be taken by liner operators. Next, a simulation model is developed for fleet sizing in a more practical situation and, by using this, various scenarios are analysed to determine the most convenient composition of refrigerated fleet between own and leased containers for the transpacific cargo trade.", "which Method ?", "Simulation", 1102.0, 1112.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. 
The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Method ?", "approach", 984.0, 992.0], ["We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a highway network over characters, whose output is given to a long short-term memory (LSTM) recurrent neural network language model (RNN-LM). On the English Penn Treebank the model is on par with the existing state-of-the-art despite having 60% fewer parameters. On languages with rich morphology (Arabic, Czech, French, German, Spanish, Russian), the model outperforms word-level/morpheme-level LSTM baselines, again with fewer parameters. The results suggest that on many languages, character inputs are sufficient for language modeling. Analysis of word representations obtained from the character composition part of the model reveals that the model is able to encode, from characters only, both semantic and orthographic information.", "which Method ?", "Highway network", 194.0, 209.0], ["Objective: The UVA/Padova Type 1 Diabetes (T1DM) Simulator has been shown to be representative of a T1DM population observed in a clinical trial, but has not yet been identified on T1DM data. Moreover, the current version of the simulator is \u201csingle meal\u201d while making it \u201csingle-day centric,\u201d i.e., by describing intraday variability, would be a step forward to create more realistic in silico scenarios. Here, we propose a Bayesian method for the identification of the model from plasma glucose and insulin concentrations only, by exploiting the prior model parameter distribution. Methods: The database consists of 47 T1DM subjects, who received dinner, breakfast, and lunch (respectively, 80, 50, and 60 CHO grams) in three 23-h occasions (one open- and one closed-loop). The model is identified using the Bayesian Maximum a Posteriori technique, where the prior parameter distribution is that of the simulator. Diurnal variability of glucose absorption and insulin sensitivity is allowed. Results: The model well describes glucose traces (coefficient of determination R2 = 0.962 \u00b1 0.027) and the posterior parameter distribution is similar to that included in the simulator. Absorption parameters at breakfast are significantly different from those at lunch and dinner, reflecting more rapid dynamics of glucose absorption. Insulin sensitivity varies in each individual but without a specific pattern. Conclusion: The incorporation of glucose absorption and insulin sensitivity diurnal variability into the simulator makes it more realistic. Significance: The proposed method, applied to the increasing number of long-term artificial pancreas studies, will allow to describe week/month variability, thus further refining the simulator.", "which Method ?", "Bayesian method", 425.0, 440.0], ["In this paper, we formulate a stereo matching algorithm with careful handling of disparity, discontinuity, and occlusion. The algorithm works with a global matching stereo model based on an energy-minimization framework. The global energy contains two terms, the data term and the smoothness term. 
The data term is first approximated by a color-weighted correlation, then refined in occluded and low-texture areas in a repeated application of a hierarchical loopy belief propagation algorithm. The experimental results are evaluated on the Middlebury data sets, showing that our algorithm is the top performer among all the algorithms listed there.", "which Method ?", "Global", 149.0, 155.0], ["This paper describes a successful implementation of a decision support system that is used by the fleet management division at North American Van Lines to plan fleet configuration. At the heart of the system is a large linear programming (LP) model that helps management decide what type of tractors to sell to owner/operators or to trade in each week. The system is used to answer a wide variety of \u201cWhat if\u201d questions, many of which have significant financial impact.", "which Method ?", "Linear programming", 219.0, 237.0], ["Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meal. However, some of them need human help for calorie estimation, and even if it is automatic, food categories are often limited or images from multiple viewpoints are required. Then, it is not yet achieved to estimate food calorie with practical accuracy and estimating food calories from a food photo is an unsolved problem. Therefore, in this paper, we propose estimating food calorie from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning. Since there exists a strong correlation between food calories and food categories, ingredients and cooking directions information in general, we expect that simultaneous training of them brings performance boosting compared to independent single training. To this end, we use a multi-task CNN [1]. In addition, in this research, we construct two kinds of datasets that is a dataset of calorie-annotated recipe collected from Japanese recipe sites on the Web and a dataset collected from an American recipe site. In this experiment, we trained multi-task and single-task CNNs. As a result, the multi-task CNN achieved the better performance on both food category estimation and food calorie estimation than single-task CNNs. For the Japanese recipe dataset, by introducing a multi-task CNN, 0.039 were improved on the correlation coefficient, while for the American recipe dataset, 0.090 were raised compared to the result by the single-task CNN.", "which Method ?", "estimating food calorie from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning", 467.0, 618.0], ["In this article, we present a fast and high quality stereo matching algorithm on FPGA using cost aggregation (CA) and fast locally consistent (FLC) dense stereo. In many software programs, global matching algorithms are used in order to obtain accurate disparity maps. Although their error rates are considerably low, their processing speeds are far from that required for real-time processing because of their complex processing sequences. In order to realize real-time processing, many hardware systems have been proposed to date. They have achieved considerably high processing speeds; however, their error rates are not as good as those of software programs, because simple local matching algorithms have been widely used in those systems. 
In our system, sophisticated local matching algorithms (CA and FLC) that are suitable for FPGA implementation are used to achieve low error rate while maintaining the high processing speed. We evaluate the performance of our circuit on Xilinx Virtex-6 FPGAs. Its error rate is comparable to that of top-level software algorithms, and its processing speed is nearly 2 clock cycles per pixel, which reaches 507.9 fps for 640\u00d7480 pixel images.", "which Method ?", "Local", 678.0, 683.0], ["Motivation: Coreference resolution, the process of identifying different mentions of an entity, is a very important component in a text-mining system. Compared with the work in news articles, the existing study of coreference resolution in biomedical texts is quite preliminary by only focusing on specific types of anaphors like pronouns or definite noun phrases, using heuristic methods, and running on small data sets. Therefore, there is a need for an in-depth exploration of this task in the biomedical domain. Results: In this article, we presented a learning-based approach to coreference resolution in the biomedical domain. We made three contributions in our study. Firstly, we annotated a large scale coreference corpus, MedCo, which consists of 1,999 medline abstracts in the GENIA data set. Secondly, we proposed a detailed framework for the coreference resolution task, in which we augmented the traditional learning model by incorporating non-anaphors into training. Lastly, we explored various sources of knowledge for coreference resolution, particularly, those that can deal with the complexity of biomedical texts. The evaluation on the MedCo corpus showed promising results. Our coreference resolution system achieved a high precision of 85.2% with a reasonable recall of 65.3%, obtaining an F-measure of 73.9%. The results also suggested that our augmented learning model significantly boosted precision (up to 24.0%) without much loss in recall (less than 5%), and brought a gain of over 8% in F-measure.", "which Method ?", "proposed a detailed framework for the coreference resolution task, in which we augmented the traditional learning model by incorporating non-anaphors into training", 816.0, 979.0], ["The current global plastics economy is highly linear, with the exceptional performance and low carbon footprint of polymeric materials at odds with dramatic increases in plastic waste. Transitioning to a circular economy that retains plastic in its highest value condition is essential to reduce environmental impacts, promoting reduction, reuse, and recycling. Mechanical recycling is an essential tool in an environmentally and economically sustainable economy of plastics, but current mechanical recycling processes are limited by cost, degradation of mechanical properties, and inconsistent quality products. This review covers the current methods and challenges for the mechanical recycling of the five main packaging plastics: poly(ethylene terephthalate), polyethylene, polypropylene, polystyrene, and poly(vinyl chloride) through the lens of a circular economy. Their reprocessing induced degradation mechanisms are introduced and strategies to improve their recycling are discussed. Additionally, this review briefly examines approaches to improve polymer blending in mixed plastic waste streams and applications of lower quality recyclate.", "which Method ?", "mechanical", 362.0, 372.0], ["This paper proposes a decentralized algorithm for organizing an ad hoc sensor network into clusters with directional antennas. 
The proposed autonomous clustering scheme aims to reduce the sensing redundancy and maintain sufficient sensing coverage and network connectivity in sensor networks. With directional antennas, random waiting timers, and local criterions, cluster performance may be substantially improved and sensing redundancy can be drastically suppressed. The simulation results show that the proposed scheme achieves connected coverage and provides efficient network topology management.", "which Method ?", "Centralized", NaN, NaN], ["Representation learning is a critical ingredient for natural language processing systems. Recent Transformer language models like BERT learn powerful textual representations, but these models are targeted towards token- and sentence-level training objectives and do not leverage information on inter-document relatedness, which limits their document-level representation power. For applications on scientific documents, such as classification and recommendation, accurate embeddings of documents are a necessity. We propose SPECTER, a new method to generate document-level embedding of scientific papers based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph. Unlike existing pretrained language models, Specter can be easily applied to downstream applications without task-specific fine-tuning. Additionally, to encourage further research on document-level models, we introduce SciDocs, a new evaluation benchmark consisting of seven document-level tasks ranging from citation prediction, to document classification and recommendation. We show that Specter outperforms a variety of competitive baselines on the benchmark.", "which Method ?", "Transformer", 97.0, 108.0], ["We consider the problem of finding the minimum number of vehicles required to visit once a set of nodes subject to time window constraints, for a homogeneous fleet of vehicles located at a common depot. This problem can be formulated as a network flow problem with additional time constraints. The paper presents an optimal solution approach using the augmented Lagrangian method. Two Lagrangian relaxations are studied. In the first one, the time constraints are relaxed producing network subproblems which are easy to solve, but the bound obtained is weak. In the second relaxation, constraints requiring that each node be visited are relaxed producing shortest path subproblems with time window constraints and integrality conditions. The bound produced is always excellent. Numerical results for several actual school busing problems with up to 223 nodes are discussed. Comparisons with a set partitioning formulation solved by column generation are given.", "which Method ?", "Lagrangian relaxation", NaN, NaN], ["Cross-modal retrieval between visual data and natural language description remains a long-standing challenge in multimedia. While recent image-text retrieval methods offer great promise by learning deep representations aligned across modalities, most of these methods are plagued by the issue of training with small-scale datasets covering a limited number of images with ground-truth sentences. Moreover, it is extremely expensive to create a larger dataset by annotating millions of images with sentences and may lead to a biased model. Inspired by the recent success of webly supervised learning in deep neural networks, we capitalize on readily-available web images with noisy annotations to learn robust image-text joint representation. 
Specifically, our main idea is to leverage web images and corresponding tags, along with fully annotated datasets, in training for learning the visual-semantic joint embedding. We propose a two-stage approach for the task that can augment a typical supervised pair-wise ranking loss based formulation with weakly-annotated web images to learn a more robust visual-semantic embedding. Experiments on two standard benchmark datasets demonstrate that our method achieves a significant performance gain in image-text retrieval compared to state-of-the-art approaches.", "which Has value ?", "Visual", 30.0, 36.0], ["Neural networks provide new possibilities to automatically learn complex language patterns and query-document relations. Neural IR models have achieved promising results in learning query-document relevance patterns, but few explorations have been done on understanding the text content of a query or a document. This paper studies leveraging a recently-proposed contextual neural language model, BERT, to provide deeper text understanding for IR. Experimental results demonstrate that the contextual text representations from BERT are more effective than traditional word embeddings. Compared to bag-of-words retrieval models, the contextual language model can better leverage language structures, bringing large improvements on queries written in natural languages. Combining the text understanding ability with search knowledge leads to an enhanced pre-trained BERT model that can benefit related search tasks where training data are limited.", "which Has value ?", "Word embedding", NaN, NaN], ["The identification of Afrotropical hoverflies is very difficult because of limited recent taxonomic revisions and the lack of comprehensive identification keys. In order to assist in their identification, and to improve the taxonomy of this group, we constructed a reference dataset of 513 COI barcodes of 90 of the more common nominal species from Ghana, Togo, Benin and Nigeria (W Africa) and added ten publicly available COI barcodes from nine nominal Afrotropical species to this (total: 523 COI barcodes; 98 nominal species; 26 genera). The identification accuracy of this dataset was evaluated with three methods (K2P distance-based, Neighbor-Joining (NJ) / Maximum Likelihood (ML) analysis, and using SpeciesIdentifier). Results of the three methods were highly congruent and showed a high identification success. Nine species pairs showed a low (< 0.03) mean interspecific K2P distance that resulted in several incorrect identifications. A high (> 0.03) maximum intraspecific K2P distance was observed in eight species and barcodes of these species did not always form single clusters in the NJ / ML analyses, which may indicate the occurrence of cryptic species. Optimal K2P thresholds to differentiate intra- from interspecific K2P divergence were highly different among the three subfamilies (Eristalinae: 0.037, Syrphinae: 0.06, Microdontinae: 0.007\u20130.02), and among the different genera, suggesting that optimal thresholds are better defined at the genus level. In addition to providing an alternative identification tool, our study indicates that DNA barcoding improves the taxonomy of Afrotropical hoverflies by selecting (groups of) taxa that deserve further taxonomic study, and by attributing the unknown sex to species for which only one of the sexes is known.", "which Biogeographical region ?", "Afrotropical", 22.0, 34.0], ["Abstract. 
Carrion-breeding Sarcophagidae (Diptera) can be used to estimate the post-mortem interval in forensic cases. Difficulties with accurate morphological identifications at any life stage and a lack of documented thermobiological profiles have limited their current usefulness. The molecular-based approach of DNA barcoding, which utilises a 648-bp fragment of the mitochondrial cytochrome oxidase subunit I gene, was evaluated in a pilot study for discrimination between 16 Australian sarcophagids. The current study comprehensively evaluated barcoding for a larger taxon set of 588 Australian sarcophagids. In total, 39 of the 84 known Australian species were represented by 580 specimens, which includes 92% of potentially forensically important species. A further eight specimens could not be identified, but were included nonetheless as six unidentifiable taxa. A neighbour-joining tree was generated and nucleotide sequence divergences were calculated. All species except Sarcophaga (Fergusonimyia) bancroftorum, known for high morphological variability, were resolved as monophyletic (99.2% of cases), with bootstrap support of 100. Excluding S. bancroftorum, the mean intraspecific and interspecific variation ranged from 1.12% and 2.81\u201311.23%, respectively, allowing for species discrimination. DNA barcoding was therefore validated as a suitable method for molecular identification of Australian Sarcophagidae, which will aid in the implementation of this fauna in forensic entomology.", "which Biogeographical region ?", "Australian", 481.0, 491.0], ["Abstract Background Various methods have been proposed to assign unknown specimens to known species using their DNA barcodes, while others have focused on using genetic divergence thresholds to estimate \u201cspecies\u201d diversity for a taxon, without a well-developed taxonomy and/or an extensive reference library of DNA barcodes. The major goals of the present work were to: a) conduct the largest species-level barcoding study of the Muscidae to date and characterize the range of genetic divergence values in the northern Nearctic fauna; b) evaluate the correspondence between morphospecies and barcode groupings defined using both clustering-based and threshold-based approaches; and c) use the reference library produced to address taxonomic issues. Results Our data set included 1114 individuals and their COI sequences (951 from Churchill, Manitoba), representing 160 morphologically-determined species from 25 genera, covering 89% of the known fauna of Churchill and 23% of the Nearctic fauna. Following an iterative process through which all specimens belonging to taxa with anomalous divergence values and/or monophyly issues were re-examined, identity was modified for 9 taxa, including the reinstatement of Phaonia luteva (Walker) stat. nov. as a species distinct from Phaonia errans (Meigen). In the post-reassessment data set, no distinct gap was found between maximum pairwise intraspecific distances (range 0.00-3.01%) and minimum interspecific distances (range: 0.77-11.33%). Nevertheless, using a clustering-based approach, all individuals within 98% of species grouped with their conspecifics with high (>95%) bootstrap support; in contrast, a maximum species discrimination rate of 90% was obtained at the optimal threshold of 1.2%. DNA barcoding enabled the determination of females from 5 ambiguous species pairs and confirmed that 16 morphospecies were genetically distinct from named taxa. 
There were morphological differences among all distinct genetic clusters; thus, no cases of cryptic species were detected. Conclusions Our findings reveal the great utility of building a well-populated, species-level reference barcode database against which to compare unknowns. When such a library is unavailable, it is still possible to obtain a fairly accurate (within ~10%) rapid assessment of species richness based upon a barcode divergence threshold alone, but this approach is most accurate when the threshold is tuned to a particular taxon.", "which Biogeographical region ?", "Nearctic", 519.0, 527.0], ["DNA barcoding has gained increased recognition as a molecular tool for species identification in various groups of organisms. In this preliminary study, we tested the efficacy of a 615\u2010bp fragment of the cytochrome c oxidase I (COI) as a DNA barcode in the medically important family Simuliidae, or black flies. A total of 65 (25%) morphologically distinct species and sibling species in species complexes of the 255 recognized Nearctic black fly species were used to create a preliminary barcode profile for the family. Genetic divergence among congeners averaged 14.93% (range 2.83\u201315.33%), whereas intraspecific genetic divergence between morphologically distinct species averaged 0.72% (range 0\u20133.84%). DNA barcodes correctly identified nearly 100% of the morphologically distinct species (87% of the total sampled taxa), whereas in species complexes (13% of the sampled taxa) maximum values of divergence were comparatively higher (max. 4.58\u20136.5%), indicating cryptic diversity. The existence of sibling species in Prosimulium travisi and P. neomacropyga was also demonstrated, thus confirming previous cytological evidence about the existence of such cryptic diversity in these two taxa. We conclude that DNA barcoding is an effective method for species identification and discovery of cryptic diversity in black flies.", "which Biogeographical region ?", "Nearctic", 428.0, 436.0], ["Dasysyrphus Enderlein (Diptera: Syrphidae) has posed taxonomic challenges to researchers in the past, primarily due to their lack of interspecific diagnostic characters. In the present study, DNA data (mitochondrial cytochrome c oxidase sub-unit I\u2014COI) were combined with morphology to help delimit species. This led to two species being resurrected from synonymy (D. laticaudus and D. pacificus) and the discovery of one new species (D. occidualis sp. nov.). An additional new species was described based on morphology alone (D. richardi sp. nov.), as the specimens were too old to obtain COI. Part of the taxonomic challenge presented by this group arises from missing type specimens. Neotypes are designated here for D. pauxillus and D. pinastri to bring stability to these names. An illustrated key to 13 Nearctic species is presented, along with descriptions, maps and supplementary data. A phylogeny based on COI is also presented and discussed.", "which Biogeographical region ?", "Nearctic", 809.0, 817.0], ["This data release provides COI barcodes for 366 species of parasitic flies (Diptera: Tachinidae), enabling the DNA based identification of the majority of northern European species and a large proportion of Palearctic genera, regardless of the developmental stage. The data will provide a tool for taxonomists and ecologists studying this ecologically important but challenging parasitoid family. 
A comparison of minimum distances between the nearest neighbors revealed the mean divergence of 5.52% that is approximately the same as observed earlier with comparable sampling in Lepidoptera, but clearly less than in Coleoptera. Full barcode-sharing was observed between 13 species pairs or triplets, equaling to 7.36% of all species. Delimitation based on Barcode Index Number (BIN) system was compared with traditional classification of species and interesting cases of possible species oversplits and cryptic diversity are discussed. Overall, DNA barcodes are effective in separating tachinid species and provide novel insight into the taxonomy of several genera.", "which Biogeographical region ?", "Palearctic", 207.0, 217.0], ["Recent comprehensive data provided through the DAISIE project (www.europe-aliens.org) have facilitated the development of the first pan-European assessment of the impacts of alien plants, vertebrates, and invertebrates \u2013 in terrestrial, freshwater, and marine environments \u2013 on ecosystem services. There are 1094 species with documented ecological impacts and 1347 with economic impacts. The two taxonomic groups with the most species causing impacts are terrestrial invertebrates and terrestrial plants. The North Sea is the maritime region that suffers the most impacts. Across taxa and regions, ecological and economic impacts are highly correlated. Terrestrial invertebrates create greater economic impacts than ecological impacts, while the reverse is true for terrestrial plants. Alien species from all taxonomic groups affect \u201csupporting\u201d, \u201cprovisioning\u201d, \u201cregulating\u201d, and \u201ccultural\u201d services and interfere with human well-being. Terrestrial vertebrates are responsible for the greatest range of impacts, and these are widely distributed across Europe. Here, we present a review of the financial costs, as the first step toward calculating an estimate of the economic consequences of alien species in Europe.", "which Habitat ?", "Marine", 253.0, 259.0], ["BACKGROUND AND AIMS The successful spread of invasive plants in new environments is often linked to multiple introductions and a diverse gene pool that facilitates local adaptation to variable environmental conditions. For clonal plants, however, phenotypic plasticity may be equally important. Here the primary adaptive strategy in three non-native, clonally reproducing macrophytes (Egeria densa, Elodea canadensis and Lagarosiphon major) in New Zealand freshwaters were examined and an attempt was made to link observed differences in plant morphology to local variation in habitat conditions. METHODS Field populations with a large phenotypic variety were sampled in a range of lakes and streams with different chemical and physical properties. The phenotypic plasticity of the species before and after cultivation was studied in a common garden growth experiment, and the genetic diversity of these same populations was also quantified. KEY RESULTS For all three species, greater variation in plant characteristics was found before they were grown in standardized conditions. Moreover, field populations displayed remarkably little genetic variation and there was little interaction between habitat conditions and plant morphological characteristics. CONCLUSIONS The results indicate that at the current stage of spread into New Zealand, the primary adaptive strategy of these three invasive macrophytes is phenotypic plasticity. 
However, while limited, the possibility that genetic diversity between populations may facilitate ecotypic differentiation in the future cannot be excluded. These results thus indicate that invasive clonal aquatic plants adapt to new introduced areas by phenotypic plasticity. Inorganic carbon, nitrogen and phosphorus were important in controlling plant size of E. canadensis and L. major, but no other relationships between plant characteristics and habitat conditions were apparent. This implies that within-species differences in plant size can be explained by local nutrient conditions. Altogether this strongly suggests that invasive clonal aquatic plants adapt to a wide range of habitats in introduced areas by phenotypic plasticity rather than local adaptation.", "which Habitat ?", "Freshwater", NaN, NaN], ["Biotic resistance, the process by which new colonists are excluded from a community by predation from and/or competition with resident species, can prevent or limit species invasions. We examined whether biotic resistance by native predators on Caribbean coral reefs has influenced the invasion success of red lionfishes (Pterois volitans and Pterois miles), piscivores from the Indo-Pacific. Specifically, we surveyed the abundance (density and biomass) of lionfish and native predatory fishes that could interact with lionfish (either through predation or competition) on 71 reefs in three biogeographic regions of the Caribbean. We recorded protection status of the reefs, and abiotic variables including depth, habitat type, and wind/wave exposure at each site. We found no relationship between the density or biomass of lionfish and that of native predators. However, lionfish densities were significantly lower on windward sites, potentially because of habitat preferences, and in marine protected areas, most likely because of ongoing removal efforts by reserve managers. Our results suggest that interactions with native predators do not influence the colonization or post-establishment population density of invasive lionfish on Caribbean reefs.", "which Habitat ?", "Marine", 987.0, 993.0], ["Abstract During the early 1990s, 2 Eurasian macrofouling mollusks, the zebra mussel Dreissena polymorpha and the quagga mussel D. bugensis, colonized the freshwater section of the St. Lawrence River and decimated native mussel populations through competitive interference. For several years, zebra mussels dominated molluscan biomass in the river; however, quagga mussels have increased in abundance and are apparently displacing zebra mussels from the Soulanges Canal, west of the Island of Montreal. The ratio of quagga mussel biomass to zebra mussel biomass on the canal wall is correlated with depth, and quagga mussels constitute >99% of dreissenid biomass on bottom sediments. This dominance shift did not substantially affect the total dreissenid biomass, which has remained at 3 to 5 kg fresh mass/m2 on the canal walls for nearly a decade. The mechanism for this shift is unknown, but may be related to a greater bioenergetic efficiency for quaggas, which attained larger shell sizes than zebra mussels at all depths. 
Similar events have occurred in the lower Great Lakes where zebra mussels once dominated littoral macroinvertebrate biomass, demonstrating that a well-established and prolific invader can be replaced by another introduced species without prior extinction.", "which Habitat ?", "Freshwater", 154.0, 164.0], ["We assess the relative contribution of human, biological and climatic factors in explaining the colonization success of two highly invasive freshwater decapods: the signal crayfish (Pacifastacus leniusculus) and the red swamp crayfish (Procambarus clarkii).", "which Habitat ?", "Freshwater", 140.0, 150.0], ["The enemy release hypothesis, which posits that exotic species are less regulated by enemies than native species, has been well-supported in terrestrial systems but rarely tested in marine systems. Here, the enemy release hypothesis was tested in a marine system by excluding large enemies (>1.3 cm) in dock fouling communities in Washington, USA. After documenting the distribution and abundance of potential enemies such as chitons, gastropods and flatworms at 4 study sites, exclusion experiments were conducted to test the hypotheses that large grazing enemies (1) reduced recruitment rates in the exotic ascidian Botrylloides violaceus and native species, (2) reduced B. violaceus and native species abundance, and (3) altered fouling community structure. Experiments demonstrated that, as predicted by the enemy release hypothesis, exclusion of large enemies did not significantly alter B. violaceus recruitment or abundance and it did significantly increase abundance or recruitment of 2 common native species. However, large enemy exclusion had no significant effects on most native species or on overall fouling community structure. Furthermore, neither B. violaceus nor total exotic species abundance correlated positively with abundance of large enemies across sites. I therefore conclude that release from large enemies is likely not an important mechanism for the success of exotic species in Washington fouling communities.", "which Habitat ?", "Marine", 182.0, 188.0], ["European countries in general, and England in particular, have a long history of introducing non-native fish species, but there exist no detailed studies of the introduction pathways and propagules pressure for any European country. Using the nine regions of England as a preliminary case study, the potential relationship between the occurrence in the wild of non-native freshwater fishes (from a recent audit of non-native species) and the intensity (i.e. propagule pressure) and diversity of fish imports was investigated. The main pathways of introduction were via imports of fishes for ornamental use (e.g. aquaria and garden ponds) and sport fishing, with no reported or suspected cases of ballast water or hull fouling introductions. The recorded occurrence of non-native fishes in the wild was found to be related to the time (number of years) since the decade of introduction. A shift in the establishment rate, however, was observed in the 1970s after which the ratio of established-to-introduced species declined. The number of established non-native fish species observed in the wild was found to increase significantly (P < 0\u00b705) with increasing import intensity (log10(x + 1) of the numbers of fish imported for the years 2000\u20132004) and with increasing consignment diversity (log10(x + 1) of the numbers of consignment types imported for the years 2000\u20132004). 
The implications for policy and management are discussed.", "which Habitat ?", "Freshwater", 372.0, 382.0], ["Theoretically, disturbance and diversity can influence the success of invasive colonists if (1) resource limitation is a prime determinant of invasion success and (2) disturbance and diversity affect the availability of required resources. However, resource limitation is not of overriding importance in all systems, as exemplified by marine soft sediments, one of Earth's most widespread habitat types. Here, we tested the disturbance-invasion hypothesis in a marine soft-sediment system by altering rates of biogenic disturbance and tracking the natural colonization of plots by invasive species. Levels of sediment disturbance were controlled by manipulating densities of burrowing spatangoid urchins, the dominant biogenic sediment mixers in the system. Colonization success by two invasive species (a gobiid fish and a semelid bivalve) was greatest in plots with sediment disturbance rates < 500 cm3 m\u22122 d\u22121, at the low end of the experimental disturbance gradient (0 to > 9000 cm3 m\u22122 d\u22121). Invasive colonization declined with increasing levels of sediment disturbance, counter to the disturbance-invasion hypothesis. Increased sediment disturbance by the urchins also reduced the richness and diversity of native macrofauna (particularly small, sedentary, surface feeders), though there was no evidence of increased availability of resources with increased disturbance that would have facilitated invasive colonization: sediment food resources (chlorophyll a and organic matter content) did not increase, and space and access to overlying water were not limited (low invertebrate abundance). Thus, our study revealed the importance of biogenic disturbance in promoting invasion resistance in a marine soft-sediment community, providing further evidence of the valuable role of bioturbation in soft-sediment systems (bioturbation also affects carbon processing, nutrient recycling, oxygen dynamics, benthic community structure, and so on). Bioturbation rates are influenced by the presence and abundance of large burrowing species (like spatangoid urchins). Therefore, mass mortalities of large bioturbators could inflate invasion risk and alter other aspects of ecosystem performance in marine soft-sediment habitats.", "which Habitat ?", "Marine", 335.0, 341.0], ["Aquatic macrophytes play an important role in the survival and proliferation of invertebrates in freshwater ecosystems. Epiphytic invertebrate communities may be altered through the replacement of native macrophytes by exotic macrophytes, even when the macrophytes are close relatives and have similar morphology. We sampled an invasive exotic macrophyte, Eurasian watermilfoil (Myriophyllum spicatum), and native milfoils Myriophyllum sibericum and Myriophyllum alterniflorum in four bodies of water in southern Quebec and upstate New York during the summer of 2005. Within each waterbody, we compared the abundance, diversity, and community composition of epiphytic macroinvertebrates on exotic and native Myriophyllum. In general, both M. sibericum and M. alterniflorum had higher invertebrate diversity and higher invertebrate biomass and supported more gastropods than the exotic M. spicatum. In late summer, invertebrate density tended to be higher on M. sibericum than on M. spicatum, but lower on M. alterniflorum than on M. spicatum. Our results demonstrate that M. 
spicatum supports macroinvertebrate communities that may differ from those on structurally similar native macrophytes, although these differences vary across sites and sampling dates. Thus, the replacement of native milfoils by M. spicatum may have indirect effects on aquatic food webs. Résumé : Les macrophytes aquatiques jouent un rôle important dans la survie et la prolifération des invertébrés dans les écosystèmes d'eau douce. Les communautés d'invertébrés épiphytes peuvent être modifiées par le remplacement des macrophytes indigènes par des macrophytes exotiques, même lorsque ces macrophytes sont de proches parents et possèdent une morphologie similaire. Nous avons échantillonné un macrophyte exotique envahissant, le myriophylle à épis (Myriophyllum spicatum), et des myriophylles indigènes (Myriophyllum sibericum et Myriophyllum alterniflorum) dans des plans d'eau du sud du Québec et du nord de l'état de New York durant l'été 2005. Dans chaque plan d'eau, nous avons comparé l'abondance, la diversité et la composition de la communauté de macroinvertébrés épiphytes sur les Myriophyllum exotique et indigènes. En général, tant M. sibericum que M. alterniflorum portent une plus grande diversité et une biomasse plus importante d'invertébrés ainsi qu'un plus grand nombre de gastéropodes que le M. spicatum exotique. En fin d'été, la densité des invertébrés tend à être plus importante sur M. sibericum que sur M. spicatum, mais plus faible sur M. alterniflorum que sur M. spicatum. Nos résultats démontrent que M. spicatum porte des communautés de macroinvertébrés qui peuvent différer de celles qui se retrouvent sur les macrophytes indigènes de structure similaire, bien que ces différences puissent varier d'un site à un autre et d'une date d'échantillonnage à l'autre. Ainsi, le remplacement des myriophylles indigènes par M. spicatum peut avoir des effets indirects sur les réseaux alimentaires aquatiques. (Traduit par la Rédaction)", "which Habitat ?", "Freshwater", 97.0, 107.0], ["Species introduced to novel regions often leave behind many parasite species. Signatures of parasite release could thus be used to resolve cryptogenic (uncertain) origins such as that of Littorina littorea, a European marine snail whose history in North America has been debated for over 100 years. Through extensive field and literature surveys, we examined species richness of parasitic trematodes infecting this snail and two co-occurring congeners, L. saxatilis and L. obtusata, both considered native throughout the North Atlantic. Of the three snails, only L. littorea possessed significantly fewer trematode species in North America, and all North American trematodes infecting the three Littorina spp. were a nested subset of Europe. Surprisingly, several of L. littorea's missing trematodes in North America infected the other Littorina congeners. Most likely, long separation of these trematodes from their former host resulted in divergence of the parasites' recognition of L. littorea. Overall, these patterns of parasitism suggest a recent invasion from Europe to North America for L. littorea and an older, natural expansion from Europe to North America for L. saxatilis and L. obtusata.", "which Habitat ?", "Marine", 218.0, 224.0], ["Abstract Invasive species threaten global biodiversity, but multiple invasions make predicting the impacts difficult because of potential synergistic effects. 
We examined the impact of introduced beaver Castor canadensis, brook trout Salvelinus fontinalis, and rainbow trout Oncorhynchus mykiss on native stream fishes in the Cape Horn Biosphere Reserve, Chile. The combined effects of introduced species on the structure of the native freshwater fish community were quantified by electrofishing 28 stream reaches within four riparian habitat types (forest, grassland, shrubland, and beaver-affected habitat) in 23 watersheds and by measuring related habitat variables (water velocity, substrate type, depth, and the percentage of pools). Three native stream fish species (puye Galaxias maculatus [also known as inanga], Aplochiton taeniatus, and A. zebra) were found along with brook trout and rainbow trout, but puye was the only native species that was common and widespread. The reaches affected by beaver impoundmen...", "which Habitat ?", "Freshwater", 436.0, 446.0], ["Invasions by exotic plants may be more likely if exotics have low rates of attack by natural enemies, including post-dispersal seed predators (granivores). We investigated this idea with a field experiment conducted near Newmarket, Ontario, in which we experimentally excluded vertebrate and terrestrial insect seed predators from seeds of 43 native and exotic old-field plants. Protection from vertebrates significantly increased recovery of seeds; vertebrate exclusion produced higher recovery than controls for 30 of the experimental species, increasing overall seed recovery from 38.2 to 45.6%. Losses to vertebrates varied among species, significantly increasing with seed mass. In contrast, insect exclusion did not significantly improve seed recovery. There was no evidence that aliens benefitted from a reduced rate of post-dispersal seed predation. The impacts of seed predators did not differ significantly between natives and exotics, which instead showed very similar responses to predator exclusion treatments. These results indicate that while vertebrate granivores had important impacts, especially on large-seeded species, exotics did not generally benefit from reduced rates of seed predation. Instead, differences between natives and exotics were small compared with interspecific variation within these groups. Résumé : L'invasion par les plantes adventices est plus plausible si ces plantes ont peu d'ennemis naturels, incluant les prédateurs post-dispersion des graines (granivores). Les auteurs ont examiné cette idée lors d'une expérience sur le terrain, conduite près de Newmarket en Ontario, dans laquelle ils ont expérimentalement empêché les prédateurs de graines, vertébrés et insectes terrestres, d'avoir accès aux graines de 43 espèces de plantes indigènes ou exotiques, de vieilles prairies. La protection contre les vertébrés augmente significativement la survie des graines; l'exclusion permet de récupérer plus de graines comparativement aux témoins chez 30 espèces de plantes expérimentales, avec une augmentation générale de récupération allant de 38.2 à 45.6%. Les pertes occasionnées par les vertébrés varient selon les espèces, augmentant significativement avec la grosseur des graines. Au contraire, l'exclusion des insectes n'augmente pas significativement les nombres de graines récupérées. Il n'y a pas de preuve que les adventices auraient bénéficié d'une réduction du taux de prédation post-dispersion des graines. 
Les impacts des predateurs de graines ne different pas significativement entre les especes indigenes et introduites, qui montrent au contraire des reactions tres similaires aux traitements d'exclusion des predateurs. Ces resultats indiquent que bien que les granivores vertebres aient des im - pacts importants, surtout sur les especes a grosses graines, les plantes introduites ne beneficient generalement pas de taux reduits de predation des graines. Au contraire, les differences entre les plantes indigenes et les plantes introduites sont petites comparativement a la variation interspecifique a l'interieur de chacun de ces groupes. Mots cles : adventices, exotiques, granivores, envahisseurs, vieilles prairies, predateurs de graines. (Traduit par la Redaction) Blaney and Kotanen 292", "which Habitat ?", "Terrestrial", 295.0, 306.0], ["Ants are omnipresent in most terrestrial ecosystems, and plants have responded to their dominance by evolving traits that either facilitate positive interactions with ants or reduce negative ones. Because ants are generally poor pollinators, plants often protect their floral nectar against ants. Ants were historically absent from the geographically isolated Hawaiian archipelago, which harbors one of the most endemic floras in the world. We hypothesized that native Hawaiian plants lack floral features that exclude ants and therefore would be heavily exploited by introduced, invasive ants. To test this hypothesis, ant\u2013flower interactions involving co-occurring native and introduced plants were observed in 10 sites on three Hawaiian Islands. We quantified the residual interaction strength of each pair of ant\u2013plant species as the deviation of the observed interaction frequency from a null-model prediction based on available nectar sugar in a local plant community and local ant activity at sugar baits. As pred...", "which Habitat ?", "Terrestrial", 29.0, 40.0], ["Introduction vectors for marine non-native species, such as oyster culture and boat foul- ing, often select for organisms dependent on hard substrates during some or all life stages. In soft- sediment estuaries, hard substrate is a limited resource, which can increase with the introduction of hard habitat-creating non-native species. Positive interactions between non-native, habitat-creating species and non-native species utilizing such habitats could be a mechanism for enhanced invasion success. Most previous studies on aquatic invasive habitat-creating species have demonstrated posi- tive responses in associated communities, but few have directly addressed responses of other non- native species. We explored the association of native and non-native species with invasive habitat- creating species by comparing communities associated with non-native, reef-building tubeworms Ficopomatus enigmaticus and native oysters Ostrea conchaphila in Elkhorn Slough, a central Califor- nia estuary. Non-native habitat supported greater densities of associated organisms\u2014primarily highly abundant non-native amphipods (e.g. Monocorophium insidiosum, Melita nitida), tanaid (Sinelebus sp.), and tube-dwelling polychaetes (Polydora spp.). Detritivores were the most common trophic group, making up disproportionately more of the community associated with F. enigmaticus than was the case in the O. conchaphila community. Analysis of similarity (ANOSIM) showed that native species' community structure varied significantly among sites, but not between biogenic habi- tats. 
In contrast, non-natives varied with biogenic habitat type, but not with site. Thus, reefs of the invasive tubeworm F. enigmaticus interact positively with other non-native species.", "which Habitat ?", "Marine", 25.0, 31.0], ["Geographic patterns of species richness have been linked to many physical and biological drivers. In this study, we document and explain gradients of species richness for native and introduced freshwater fish in Chilean lakes. We focus on the role of the physical environment to explain native richness patterns. For patterns of introduced salmonid richness and dominance, we also examine the biotic resistance and human activity hypotheses. We were particularly interested in identifying the factors that best explain the persistence of salmonid\u2010free lakes in Patagonia.", "which Habitat ?", "Freshwater", 193.0, 203.0], ["Although many studies have investigated how community characteristics such as diversity and disturbance relate to invasibility, the mechanisms underlying biotic resistance to introduced species are not well understood. I manipulated the functional group composition of native algal communities and invaded them with the introduced, Japanese seaweed Sargassum muticum to understand how individual functional groups contributed to overall invasion resistance. The results suggested that space preemption by crustose and turfy algae inhibited S. muticum recruitment and that light preemption by canopy and understory algae reduced S. muticum survivorship. However, other mechanisms I did not investigate could have contributed to these two results. In this marine community the sequential preemption of key resources by different functional groups in different stages of the invasion generated resistance to invasion by S. muticum. Rather than acting collectively on a single resource the functional groups in this system were important for preempting either space or light, but not both resources. My experiment has important implications for diversity-invasibility studies, which typically look for an effect of diversity on individual resources. Overall invasion resistance will be due to the additive effects of individual functional groups (or species) summed over an invader's life cycle. Therefore, the cumulative effect of multiple functional groups (or species) acting on multiple resources is an alternative mechanism that could generate negative relationships between diversity and invasibility in a variety of biological systems.", "which Habitat ?", "Marine", 754.0, 760.0], ["1. Freshwaters are subject to particularly high rates of species introductions; hence, invaders increasingly co-occur and may interact to enhance impacts on ecosystem structure and function. As trophic interactions are a key mechanism by which invaders influence communities, we used a combination of approaches to investigate the feeding preferences and community impacts of two globally invasive large benthic decapods that co-occur in freshwaters: the signal crayfish (Pacifastacus leniusculus) and Chinese mitten crab (Eriocheir sinensis). 2. In laboratory preference tests, both consumed similar food items, including chironomids, isopods and the eggs of two coarse fish species. In a comparison of predatory functional responses with a native crayfish (Austropotamobius pallipes), juvenile E. sinensis had a greater predatory intensity than the native A. pallipes on the keystone shredder Gammarus pulex, and also displayed a greater preference than P. leniusculus for this prey item. 3. 
In outdoor mesocosms (n = 16) used to investigate community impacts, the abundance of amphipods, isopods, chironomids and gastropods declined in the presence of decapods, and a decapod > gastropod > periphyton trophic cascade was detected when both species were present. Eriocheir sinensis affected a wider range of animal taxa than P. leniusculus. 4. Stable-isotope and gut-content analysis of wild-caught adult specimens of both invaders revealed a wide and overlapping range of diet items including macrophytes, algae, terrestrial detritus, macroinvertebrates and fish. Both decapods were similarly enriched in 15N and occupied the same trophic level as Ephemeroptera, Odonata and Notonecta. Eriocheir sinensis \u03b413C values were closely aligned with macrophytes indicating a reliance on energy from this basal resource, supported by evidence of direct consumption from gut contents. Pacifastacus leniusculus \u03b413C values were intermediate between those of terrestrial leaf litter and macrophytes, suggesting reliance on both allochthonous and autochthonous energy pathways. 5. Our results suggest that E. sinensis is likely to exert a greater per capita impact on the macroinvertebrate communities in invaded systems than P. leniusculus, with potential indirect effects on productivity and energy flow through the community.", "which Habitat ?", "Freshwater", NaN, NaN], ["Summary 1 With biological invasions causing widespread problems in ecosystems, methods to curb the colonization success of invasive species are needed. The effective management of invasive species will require an integrated approach that restores community structure and ecosystem processes while controlling propagule pressure of non-native species. 2 We tested the hypotheses that restoring native vegetation and minimizing propagule pressure of invasive species slows the establishment of an invader. In field and greenhouse experiments, we evaluated (i) the effects of a native submersed aquatic plant species, Vallisneria americana, on the colonization success of a non-native species, Hydrilla verticillata; and (ii) the effects of H. verticillata propagule density on its colonization success. 3 Results from the greenhouse experiment showed that V. americana decreased H. verticillata colonization through nutrient draw-down in the water column of closed mesocosms, although data from the field experiment, located in a tidal freshwater region of Chesapeake Bay that is open to nutrient fluxes, suggested that V. americana did not negatively impact H. verticillata colonization. However, H. verticillata colonization was greater in a treatment of plastic V. americana look-alikes, suggesting that the canopy of V. americana can physically capture H. verticillata fragments. Thus pre-emption effects may be less clear in the field experiment because of complex interactions between competitive and facilitative effects in combination with continuous nutrient inputs from tides and rivers that do not allow nutrient draw-down to levels experienced in the greenhouse. 4 Greenhouse and field tests differed in the timing, duration and density of propagule inputs. However, irrespective of these differences, propagule pressure of the invader affected colonization success except in situations when the native species could draw down nutrients in closed greenhouse mesocosms. In that case, no propagules were able to colonize. 5 Synthesis and applications. 
We have shown that reducing propagule pressure through targeted management should be considered to slow the spread of invasive species. This, in combination with restoration of native species, may be the best defence against non-native species invasion. Thus a combined strategy of targeted control and promotion of native plant growth is likely to be the most sustainable and cost-effective form of invasive species management.", "which Habitat ?", "Freshwater", 1034.0, 1044.0], ["1 Invading species typically need to overcome multiple limiting factors simultaneously in order to become established, and understanding how such factors interact to regulate the invasion process remains a major challenge in ecology. 2 We used the invasion of marine algal communities by the seaweed Sargassum muticum as a study system to experimentally investigate the independent and interactive effects of disturbance and propagule pressure in the short term. Based on our experimental results, we parameterized an integrodifference equation model, which we used to examine how disturbances created by different benthic herbivores influence the longer term invasion success of S. muticum. 3 Our experimental results demonstrate that in this system neither disturbance nor propagule input alone was sufficient to maximize invasion success. Rather, the interaction between these processes was critical for understanding how the S. muticum invasion is regulated in the short term. 4 The model showed that both the size and spatial arrangement of herbivore disturbances had a major impact on how disturbance facilitated the invasion, by jointly determining how much space\u2010limitation was alleviated and how readily disturbed areas could be reached by dispersing propagules. 5 Synthesis. Both the short\u2010term experiment and the long\u2010term model show that S. muticum invasion success is co\u2010regulated by disturbance and propagule pressure. Our results underscore the importance of considering interactive effects when making predictions about invasion success.", "which Habitat ?", "Marine", 260.0, 266.0], ["Summary 1. The exotic cladoceran Daphnia lumholtzi has recently invaded freshwater systems throughout the United States. Daphnia lumholtzi possesses extravagant head spines that are longer than those found on any other North American Daphnia. These spines are effective at reducing predation from many of the predators that are native to newly invaded habitats; however, they are plastic both in nature and in laboratory cultures. The purpose of this experiment was to better understand what environmental cues induce and maintain these effective predator-deterrent spines. We conducted life-table experiments on individual D. lumholtzi grown in water conditioned with an invertebrate insect predator, Chaoborus punctipennis, and water conditioned with a vertebrate fish predator, Lepomis macrochirus. 2. Daphnia lumholtzi exhibited morphological plasticity in response to kairomones released by both predators. However, direct exposure to predator kairomones during postembryonic development did not induce long spines in D. lumholtzi. In contrast, neonates produced from individuals exposed to Lepomis kairomones had significantly longer head and tail spines than neonates produced from control and Chaoborus individuals. These results suggest that there may be a maternal, or pre-embryonic, effect of kairomone exposure on spine development in D. lumholtzi. 3. Independent of these morphological shifts, D. 
lumholtzi also exhibited plasticity in life history characteristics in response to predator kairomones. For example, D. lumholtzi exhibited delayed reproduction in response to Chaoborus kairomones, and significantly more individuals produced resting eggs, or ephippia, in the presence of Lepomis kairomones.", "which Habitat ?", "Freshwater", 72.0, 82.0], ["Aim To quantify the vulnerability of habitats to invasion by alien plants having accounted for the effects of propagule pressure, time and sampling effort. Location New Zealand. Methods We used spatial, temporal and habitat information taken from 9297 herbarium records of 301 alien plant species to examine the vulnerability of 11 terrestrial habitats to plant invasions. A null model that randomized species records across habitats was used to account for variation in sampling effort and to derive a relative measure of invasion based either on all records for a species or only its first record. The relative level of invasion was related to the average distance of each habitat from the nearest conurbation, which was used as a proxy for propagule pressure. The habitat in which a species was first recorded was compared to the habitats encountered for all records of that species to determine whether the initial habitat could predict subsequent habitat occupancy. Results Variation in sampling effort in space and time significantly masked the underlying vulnerability of habitats to plant invasions. Distance from the nearest conurbation had little effect on the relative level of invasion in each habitat, but the number of first records of each species significantly declined with increasing distance. While Urban, Streamside and Coastal habitats were over-represented as sites of initial invasion, there was no evidence of major invasion hotspots from which alien plants might subsequently spread. Rather, the data suggest that certain habitats (especially Roadsides) readily accumulate alien plants from other habitats. Main conclusions Herbarium records combined with a suitable null model provide a powerful tool for assessing the relative vulnerability of habitats to plant invasion. The first records of alien plants tend to be found near conurbations, but this pattern disappears with subsequent spread. Regardless of the habitat where a species was first recorded, ultimately most alien plants spread to Roadside and Sparse habitats. This information suggests that such habitats may be useful targets for weed surveillance and monitoring.", "which Habitat ?", "Terrestrial", 332.0, 343.0], ["The eastern Mediterranean is a hotspot of biological invasions. Numerous species of Indo-pacific origin have colonized the Mediterranean in recent times, including tropical symbiont-bearing foraminifera. Among these is the species Pararotalia calcariformata. Unlike other invasive foraminifera, this species was discovered only two decades ago and is restricted to the eastern Mediterranean coast. Combining ecological, genetic and physiological observations, we attempt to explain the recent invasion of this species in the Mediterranean Sea. Using morphological and genetic data, we confirm the species attribution to P. calcariformata McCulloch 1977 and identify its symbionts as a consortium of diatom species dominated by Minutocellus polymorphus. We document photosynthetic activity of its endosymbionts using Pulse Amplitude Modulated Fluorometry and test the effects of elevated temperatures on growth rates of asexual offspring. 
The culturing of asexual offspring for 120 days shows a 30-day period of rapid growth followed by a period of slower growth. A subsequent 48-day temperature sensitivity experiment indicates a similar developmental pathway and high growth rate at 28\u00b0C, whereas an almost complete inhibition of growth was observed at 20\u00b0C and 35\u00b0C. This indicates that the offspring of this species may have lower tolerance to cold temperatures than what would be expected for species native to the Mediterranean. We expand this hypothesis by applying a Species Distribution Model (SDM) based on modern occurrences in the Mediterranean using three environmental variables: irradiance, turbidity and yearly minimum temperature. The model reproduces the observed restricted distribution and indicates that the range of the species will drastically expand westwards under future global change scenarios. We conclude that P. calcariformata established a population in the Levant because of the recent warming in the region. In line with observations from other groups of organisms, our results indicate that continued warming of the eastern Mediterranean will facilitate the invasion of more tropical marine taxa into the Mediterranean, disturbing local biodiversity and ecosystem structure.", "which Habitat ?", "Marine", 2117.0, 2123.0], ["Invasions by non\u2010indigenous species (NIS) are recognized as important stressors of many communities throughout the world. Here, we evaluated available data on the role of NIS in marine and estuarine communities and their interactions with other anthropogenic stressors, using an intensive analysis of the Chesapeake Bay region as a case study. First, we reviewed the reported ecological impacts of 196 species that occur in tidal waters of the bay, including species that are known invaders as well as some that are cryptogenic (i.e., of uncertain origin). Second, we compared the impacts reported in and out of the bay region for the same 54 species of plants and fish from this group that regularly occur in the region's tidal waters. Third, we assessed the evidence for interaction in the distribution or performance of these 54 plant and fish species within the bay and other stressors. Of the 196 known and possible NIS, 39 (20%) were thought to have some significant impact on a resident population, community, habitat, or process within the bay region. However, quantitative data on impacts were found for only 12 of the 39, representing 31% of this group and 6% of all 196 species surveyed. The patterns of reported impacts in the bay for plants and fish were nearly identical: 29% were reported to have significant impacts, but quantitative impact data existed for only 7% (4/54) of these species. In contrast, 74% of the same species were reported to have significant impacts outside of the bay, and some quantitative impact data were found for 44% (24/54) of them. Although it appears that 20% of the plant and fish species in our analysis may have significant impacts in the bay region based upon impacts measured elsewhere, we suggest that studies outside the region cannot reliably predict such impacts. We surmise that quantitative impact measures for individual bays or estuaries generally exist for <5% of the NIS present, and many of these measures are not particularly informative. Despite the increasing knowledge of marine invasions at many sites, it is evident that we understand little about the full extent and variety of the impacts they create singly and cumulatively. 
Given the multiple anthropogenic stressors that overlap with NIS in estuaries, we predict NIS\u2010stressor interactions play an important role in the pattern and impact of invasions.", "which Habitat ?", "Marine", 178.0, 184.0], ["Predicting community susceptibility to invasion has become a priority for preserving biodiversity. We tested the hypothesis that the occurrence and abundance of the seaweed Caulerpa racemosa in the north-western (NW) Mediterranean would increase with increasing levels of human disturbance. Data from a survey encompassing areas subjected to different human influences (i.e. from urbanized to protected areas) were fitted by means of generalized linear mixed models, including descriptors of habitats and communities. The incidence of occurrence of C. racemosa was greater on urban than extra-urban or protected reefs, along the coast of Tuscany and NW Sardinia, respectively. Within the Marine Protected Area of Capraia Island (Tuscan Archipelago), the probability of detecting C. racemosa did not vary according to the degree of protection (partial versus total). Human influence was, however, a poor predictor of the seaweed cover. At the seascape level, C. racemosa was more widely spread within degraded (i.e. Posidonia oceanica dead matte or algal turfs) than in better preserved habitats (i.e. canopy-forming macroalgae or P. oceanica seagrass meadows). At a smaller spatial scale, the presence of the seaweed was positively correlated to the diversity of macroalgae and negatively to that of sessile invertebrates. These results suggest that C. racemosa can take advantage of habitat degradation. Thus, predicting invasion scenarios requires a thorough knowledge of ecosystem structure, at a hierarchy of levels of biological organization (from the landscape to the assemblage) and detailed information on the nature and intensity of sources of disturbance and spatial scales at which they operate.", "which Habitat ?", "Marine", 688.0, 694.0], ["A venerable generalization about community resistance to invasions is that more diverse communities are more resistant to invasion. However, results of experimental and observational studies often conflict, leading to vigorous debate about the mechanistic importance of diversity in determining invasion success in the field, as well as other ecosystem properties, such as productivity and stability. In this study, we employed both field experiments and observational approaches to assess the effects of diversity on the invasion of a subtidal marine invertebrate community by three species of nonindigenous ascidians (sea squirts). In experimentally assembled communities, decreasing native diversity increased the survival and final percent cover of invaders, whereas the abundance of individual species had no effect on these measures of invasion success. Increasing native diversity also decreased the availability of open space, the limiting resource in this system, by buffering against fluctuations in the cover of individual species. This occurred because temporal patterns of abundance differed among species, so space was most consistently and completely occupied when more species were present. When we held diversity constant, but manipulated resource availability, we found that the settlement and recruitment of new invaders dramatically increased with increasing availability of open space. This suggests that the effect of diversity on invasion success is largely due to its effects on resource (space) availability. 
Apart from invasion resistance, the increased temporal stability found in more diverse communities may itself be considered an enhancement of ecosystem function. In field surveys, we found a strong negative correlation between native-species richness and the number and frequency of nonnative invaders at the scale of both a single quadrat (25 \u00d7 25 cm), and an entire site (50 \u00d7 50 m). Such a pattern suggests that the means by which diversity affects invasion resistance in our experiments is important in determining the distribution of invasive species in the field. Further synthesis of mechanistic and observational approaches should be encouraged, as this will increase our understanding of the conditions under which diversity does (and does not) play an important role in determining the distribution of invaders in the field.", "which Habitat ?", "Marine", 547.0, 553.0], ["ABSTRACT: Despite mounting evidence of invasive species\u2019 impacts on the environment and society, our ability to predict invasion establishment, spread, and impact is inadequate. Efforts to explain and predict invasion outcomes have been limited primarily to terrestrial and freshwater ecosystems. Invasions are also common in coastal marine ecosystems, yet to date predictive marine invasion models are absent. Here we present a model based on biological attributes associated with invasion success (establishment) of marine molluscs that compares successful and failed invasions from a group of 93 species introduced to San Francisco Bay (SFB) in association with commercial oyster transfers from eastern North America (ca. 1869 to 1940). A multiple logistic regression model correctly classified 83% of successful and 80% of failed invaders according to their source region abundance at the time of oyster transfers, tolerance of low salinity, and developmental mode. We tested the generality of the SFB invasion model by applying it to 3 coastal locations (2 in North America and 1 in Europe) that received oyster transfers from the same source and during the same time as SFB. The model correctly predicted 100, 75, and 86% of successful invaders in these locations, indicating that abundance, environmental tolerance (ability to withstand low salinity), and developmental mode not only explain patterns of invasion success in SFB, but more importantly, predict invasion success in geographically disparate marine ecosystems. Finally, we demonstrate that the proportion of marine molluscs that succeeded in the latter stages of invasion (i.e. that establish self-sustaining populations, spread and become pests) is much greater than has been previously predicted or shown for other animals and plants. KEY WORDS: Invasion \u00b7 Bivalve \u00b7 Gastropod \u00b7 Mollusc \u00b7 Marine \u00b7 Oyster \u00b7 Vector \u00b7 Risk assessment", "which Habitat ?", "Marine", 332.0, 338.0], ["Freshwater ecosystems are at the forefront of the global biodiversity crisis, with more declining and extinct species than in terrestrial or marine environments. Hydrologic alterations and biological invasions represent two of the greatest threats to freshwater biota, yet the importance of linkages between these drivers of environmental change remains uncertain. Here, we quantitatively test the hypothesis that impoundments facilitate the introduction and establishment of aquatic invasive species in lake ecosystems. 
By combining data on boating activity, water body physicochemistry, and geographical distribution of five nuisance invaders in the Laurentian Great Lakes region, we show that non-indigenous species are 2.4 to 300 times more likely to occur in impoundments than in natural lakes, and that impoundments frequently support multiple invaders. Furthermore, comparisons of the contemporary and historical landscapes revealed that impoundments enhance the invasion risk of natural lakes by increasing their...", "which Habitat ?", "Freshwater", 0.0, 10.0], ["Most introduced species apparently have little impact on native biodiversity, but the proliferation of human vectors that transport species worldwide increases the probability of a region being affected by high\u2010impact invaders \u2013 i.e. those that cause severe declines in native species populations. Our study determined whether the number of high\u2010impact invaders can be predicted from the total number of invaders in an area, after controlling for species\u2013area effects. These two variables are positively correlated in a set of 16 invaded freshwater and marine systems from around the world. The relationship is a simple linear function; there is no evidence of synergistic or antagonistic effects of invaders across systems. A similar relationship is found for introduced freshwater fishes across 149 regions. In both data sets, high\u2010impact invaders comprise approximately 10% of the total number of invaders. Although the mechanism driving this correlation is likely a sampling effect, it is not simply the proportional sampling of a constant number of repeat\u2010offenders; in most cases, an invader is not reported to have strong impacts on native species in the majority of regions it invades. These findings link vector activity and the negative impacts of introduced species on biodiversity, and thus justify management efforts to reduce invasion rates even where numerous invasions have already occurred.", "which Habitat ?", "Freshwater and marine", 538.0, 559.0], ["Recent comprehensive data provided through the DAISIE project (www.europe-aliens.org) have facilitated the development of the first pan-European assessment of the impacts of alien plants, vertebrates, and invertebrates \u2013 in terrestrial, freshwater, and marine environments \u2013 on ecosystem services. There are 1094 species with documented ecological impacts and 1347 with economic impacts. The two taxonomic groups with the most species causing impacts are terrestrial invertebrates and terrestrial plants. The North Sea is the maritime region that suffers the most impacts. Across taxa and regions, ecological and economic impacts are highly correlated. Terrestrial invertebrates create greater economic impacts than ecological impacts, while the reverse is true for terrestrial plants. Alien species from all taxonomic groups affect \u201csupporting\u201d, \u201cprovisioning\u201d, \u201cregulating\u201d, and \u201ccultural\u201d services and interfere with human well-being. Terrestrial vertebrates are responsible for the greatest range of impacts, and these are widely distributed across Europe. Here, we present a review of the financial costs, as the first step toward calculating an estimate of the economic consequences of alien species in Europe.", "which Habitat ?", "Terrestrial", 224.0, 235.0], ["The introduction of non-indigenous species (NIS) across the major European seas is a dynamic non-stop process. 
Up to September 2004, 851 NIS (the majority being zoobenthic organisms) have been reported in European marine and brackish waters, the majority during the 1960s and 1970s. The Mediterranean is by far the major recipient of exotic species with an average of one introduction every 4 wk over the past 5 yr. Of the 25 species recorded in 2004, 23 were reported in the Mediterranean and only two in the Baltic. The most updated patterns and trends in the rate, mode of introduction and establishment success of introductions were examined, revealing a process similar to introductions in other parts of the world, but with the uniqueness of migrants through the Suez Canal into the Mediterranean (Lessepsian or Erythrean migration). Shipping appears to be the major vector of introduction (excluding the Lessepsian migration). Aquaculture is also an important vector with target species outnumbered by those introduced unintentionally. More than half of immigrants have been established in at least one regional sea. However, for a significant part of the introductions both the establishment success and mode of introduction remain unknown. Finally, comparing trends across taxa and seas is not as accurate as could have been wished because there are differences in the spatial and taxonomic effort in the study of NIS. These differences lead to the conclusion that the number of NIS remains an underestimate, calling for continuous updating and systematic research.", "which Habitat ?", "Marine", 216.0, 222.0], ["Propagule pressure is recognized as a fundamental driver of freshwater fish invasions, though few studies have quantified its role. Natural experiments can be used to quantify the role of this factor relative to others in driving establishment success. An irrigation network in South Africa takes water from an inter-basin water transfer (IBWT) scheme to supply multiple small irrigation ponds. We compared fish community composition upstream, within, and downstream of the irrigation network, to show that this system is a unidirectional dispersal network with a single immigration source. We then assessed the effect of propagule pressure and biological adaptation on the colonization success of nine fish species across 30 recipient ponds of varying age. Establishing species received significantly more propagules at the source than did incidental species, while rates of establishment across the ponds displayed a saturation response to propagule pressure. This shows that propagule pressure is a significant driver of establishment overall. Those species that did not establish were either extremely rare at the immigration source or lacked the reproductive adaptations to breed in the ponds. The ability of all nine species to arrive at some of the ponds illustrates how long-term continuous propagule pressure from IBWT infrastructure enables range expansion of fishes. The quantitative link between propagule pressure and success and rate of population establishment confirms the driving role of this factor in fish invasion ecology.", "which Habitat ?", "Freshwater", 60.0, 70.0], ["Anthropogenic disturbance is considered a risk factor in the establishment of non\u2010indigenous species (NIS); however, few studies have investigated the role of anthropogenic disturbance in facilitating the establishment and spread of NIS in marine environments. 
A baseline survey of native and NIS was undertaken in conjunction with a manipulative experiment to determine the effect that heavy metal pollution had on the diversity and invasibility of marine hard\u2010substrate assemblages. The study was repeated at two sites in each of two harbours in New South Wales, Australia. The survey sampled a total of 47 sessile invertebrate taxa, of which 15 (32%) were identified as native, 19 (40%) as NIS, and 13 (28%) as cryptogenic. Increasing pollution exposure decreased native species diversity at all study sites by between 33% and 50%. In contrast, there was no significant change in the numbers of NIS. Percentage cover was used as a measure of spatial dominance, with increased pollution exposure leading to increased NIS dominance across all sites. At three of the four study sites, assemblages that had previously been dominated by natives changed to become either extensively dominated by NIS or equally occupied by native and NIS alike. No single native or NIS was repeatedly responsible for the observed changes in native species diversity or NIS dominance at all sites. Rather, the observed effects of pollution were driven by a diverse range of taxa and species. These findings have important implications for both the way we assess pollution impacts, and for the management of NIS. When monitoring the response of assemblages to pollution, it is not sufficient to simply assess changes in community diversity. Rather, it is important to distinguish native from NIS components since both are expected to respond differently. In order to successfully manage current NIS, we first need to address levels of pollution within recipient systems in an effort to bolster the resilience of native communities to invasion.", "which Habitat ?", "Marine", 240.0, 246.0], ["Abstract Biological invasions are a key threat to freshwater biodiversity, and identifying determinants of invasion success is a global conservation priority. The establishment of introduced species is predicted to be hindered by pre-existing, functionally similar invasive species. Over a five-year period we, however, find that in the River Lee (UK), recently introduced non-native virile crayfish (Orconectes virilis) increased in range and abundance, despite the presence of established alien signal crayfish (Pacifastacus leniusculus). In regions of sympatry, virile crayfish had a detrimental effect on signal crayfish abundance but not vice versa. Competition experiments revealed that virile crayfish were more aggressive than signal crayfish and outcompeted them for shelter. Together, these results provide early evidence for the potential over-invasion of signal crayfish by competitively dominant virile crayfish. Based on our results and the limited distribution of virile crayfish in Europe, we recommend that efforts to contain them within the Lee catchment be implemented immediately.", "which Habitat ?", "Freshwater", 50.0, 60.0], ["Abstract. Nonnative crayfish have been widely introduced and are a major threat to freshwater biodiversity and ecosystem functioning. Despite documentation of the ecological effects of nonnative crayfish from >3 decades of case studies, no comprehensive synthesis has been done to test quantitatively for their general or species-specific effects on recipient ecosystems. We provide the first global meta-analysis of the ecological effects of nonnative crayfish under experimental settings to compare effects among species and across levels of ecological organization. 
Our meta-analysis revealed strong, but variable, negative ecological impacts of nonnative crayfish with strikingly consistent effects among introduced species. In experimental settings, nonnative crayfish generally affect all levels of freshwater food webs. Nonnative crayfish reduce the abundance of basal resources like aquatic macrophytes, prey on invertebrates like snails and mayflies, and reduce abundances and growth of amphibians and fish, but they do not consistently increase algal biomass. Nonnative crayfish tend to have larger positive effects on growth of algae and larger negative effects on invertebrates and fish than native crayfish, but effect sizes vary considerably. Our study supports the assessment of crayfish as strong interactors in food webs that have significant effects across native taxa via polytrophic, generalist feeding habits. Nonnative crayfish species identity may be less important than extrinsic attributes of the recipient ecosystems in determining effects of nonnative crayfish. We identify some understudied and emerging nonnative crayfish that should be studied further and suggest expanding research to encompass more comparisons of native vs nonnative crayfish and different geographic regions. The consistent and general negative effects of nonnative crayfish warrant efforts to discourage their introduction beyond native ranges.", "which Habitat ?", "Freshwater", 83.0, 93.0], ["The paper provides the first estimate of the composition and structure of alien plants occurring in the wild in the European continent, based on the results of the DAISIE project (2004\u20132008), funded by the 6th Framework Programme of the European Union and aimed at \u201ccreating an inventory of invasive species that threaten European terrestrial, freshwater and marine environments\u201d. The plant section of the DAISIE database is based on national checklists from 48 European countries/regions and Israel; for many of them the data were compiled during the project and for some countries DAISIE collected the first comprehensive checklists of alien species, based on primary data (e.g., Cyprus, Greece, F. Y. R. O. Macedonia, Slovenia, Ukraine). In total, the database contains records of 5789 alien plant species in Europe (including those native to a part of Europe but alien to another part), of which 2843 are alien to Europe (of extra-European origin). The research focus was on naturalized species; there are in total 3749 naturalized aliens in Europe, of which 1780 are alien to Europe. This represents a marked increase compared to 1568 alien species reported by a previous analysis of data in Flora Europaea (1964\u20131980). Casual aliens were marginally considered and are represented by 1507 species with European origins and 872 species whose native range falls outside Europe. The highest diversity of alien species is concentrated in industrialized countries with a tradition of good botanical recording or intensive recent research. The highest number of all alien species, regardless of status, is reported from Belgium (1969), the United Kingdom (1779) and Czech Republic (1378). The United Kingdom (857), Germany (450), Belgium (447) and Italy (440) are countries with the most naturalized neophytes. The number of naturalized neophytes in European countries is determined mainly by the interaction of temperature and precipitation; it increases with increasing precipitation but only in climatically warm and moderately warm regions. 
Of the nowadays naturalized neophytes alien to Europe, 50% arrived after 1899, 25% after 1962 and 10% after 1989. At present, approximately 6.2 new species, that are capable of naturalization, are arriving each year. Most alien species have relatively restricted European distributions; half of all naturalized species occur in four or fewer countries/regions, whereas 70% of non-naturalized species occur in only one region. Alien species are drawn from 213 families, dominated by large global plant families which have a weedy tendency and have undergone major radiations in temperate regions (Asteraceae, Poaceae, Rosaceae, Fabaceae, Brassicaceae). There are 1567 genera, which have alien members in European countries, the commonest being globally-diverse genera comprising mainly urban and agricultural weeds (e.g., Amaranthus, Chenopodium and Solanum) or cultivated for ornamental purposes (Cotoneaster, the genus richest in alien species). Only a few large genera which have successfully invaded (e.g., Oenothera, Oxalis, Panicum, Helianthus) are predominantly of non-European origin. Conyza canadensis, Helianthus tuberosus and Robinia pseudoacacia are the most widely distributed alien species. Of all naturalized aliens present in Europe, 64.1% occur in industrial habitats and 58.5% on arable land and in parks and gardens. Grasslands and woodlands are also highly invaded, with 37.4 and 31.5%, respectively, of all naturalized aliens in Europe present in these habitats. Mires, bogs and fens are least invaded; only approximately 10% of aliens in Europe occur there. Intentional introductions to Europe (62.8% of the total number of naturalized aliens) prevail over unintentional (37.2%). Ornamental and horticultural introductions escaped from cultivation account for the highest number of species, 52.2% of the total. Among unintentional introductions, contaminants of seed, mineral materials and other commodities are responsible for 1091 alien species introductions to Europe (76.6% of all species introduced unintentionally) and 363 species are assumed to have arrived as stowaways (directly associated with human transport but arriving independently of commodity). Most aliens in Europe have a native range in the same continent (28.6% of all donor region records are from another part of Europe where the plant is native); in terms of species numbers the contribution of Europe as a region of origin is 53.2%. Considering aliens to Europe separately, 45.8% of species have their native distribution in North and South America, 45.9% in Asia, 20.7% in Africa and 5.3% in Australasia. Based on species composition, European alien flora can be classified into five major groups: (1) north-western, comprising Scandinavia and the UK; (2) west-central, extending from Belgium and the Netherlands to Germany and Switzerland; (3) Baltic, including only the former Soviet Baltic states; (4) east-central, comprising the remainder of central and eastern Europe; (5) southern, covering the entire Mediterranean region. The clustering patterns cut across some European bioclimatic zones; cultural factors such as regional trade links and traditional local preferences for crop, forestry and ornamental species are also important by influencing the introduced species pool. Finally, the paper evaluates the state of the art in the field of plant invasions in Europe, points to research gaps and outlines avenues of further research towards documenting alien plant invasions in Europe. 
The data are of varying quality and need to be further assessed with respect to the invasion status and residence time of the species included. This concerns especially the naturalized/casual status; so far, this information is available comprehensively for only 19 countries/regions of the 49 considered. Collating an integrated database on the alien flora of Europe can form a principal contribution to developing a European-wide management strategy of alien species.", "which Habitat ?", "Terrestrial", 331.0, 342.0], ["Because species invasions are a principal driver of the human-induced biodiversity crisis, the identification of the major determinants of global invasions is a prerequisite for adopting sound conservation policies. Three major hypotheses, which are not necessarily mutually exclusive, have been proposed to explain the establishment of non-native species: the \u201chuman activity\u201d hypothesis, which argues that human activities facilitate the establishment of non-native species by disturbing natural landscapes and by increasing propagule pressure; the \u201cbiotic resistance\u201d hypothesis, predicting that species-rich communities will readily impede the establishment of non-native species; and the \u201cbiotic acceptance\u201d hypothesis, predicting that environmentally suitable habitats for native species are also suitable for non-native species. We tested these hypotheses and report here a global map of fish invasions (i.e., the number of non-native fish species established per river basin) using an original worldwide dataset of freshwater fish occurrences, environmental variables, and human activity indicators for 1,055 river basins covering more than 80% of Earth's surface. First, we identified six major invasion hotspots where non-native species represent more than a quarter of the total number of species. According to the World Conservation Union, these areas are also characterised by the highest proportion of threatened fish species. Second, we show that the human activity indicators account for most of the global variation in non-native species richness, which is highly consistent with the \u201chuman activity\u201d hypothesis. In contrast, our results do not provide support for either the \u201cbiotic acceptance\u201d or the \u201cbiotic resistance\u201d hypothesis. We show that the biogeography of fish invasions matches the geography of human impact at the global scale, which means that natural processes are blurred by human activities in driving fish invasions in the world's river systems. In view of our findings, we fear massive invasions in developing countries with a growing economy as already experienced in developed countries. Anticipating such potential biodiversity threats should therefore be a priority.", "which Habitat ?", "Freshwater", 1023.0, 1033.0], ["Background Coastal landscapes are being transformed as a consequence of the increasing demand for infrastructures to sustain residential, commercial and tourist activities. Thus, intertidal and shallow marine habitats are largely being replaced by a variety of artificial substrata (e.g. breakwaters, seawalls, jetties). Understanding the ecological functioning of these artificial habitats is key to planning their design and management, in order to minimise their impacts and to improve their potential to contribute to marine biodiversity and ecosystem functioning. 
Nonetheless, little effort has been made to assess the role of human disturbances in shaping the structure of assemblages on marine artificial infrastructures. We tested the hypothesis that some negative impacts associated with the expansion of opportunistic and invasive species on urban infrastructures can be related to the severe human disturbances that are typical of these environments, such as those from maintenance and renovation works. Methodology/Principal Findings Maintenance caused a marked decrease in the cover of dominant space occupiers, such as mussels and oysters, and a significant enhancement of opportunistic and invasive forms, such as biofilm and macroalgae. These effects were particularly pronounced on sheltered substrata compared to exposed substrata. Experimental application of the disturbance in winter reduced the magnitude of the impacts compared to application in spring or summer. We use these results to identify possible management strategies to inform the improvement of the ecological value of artificial marine infrastructures. Conclusions/Significance We demonstrate that some of the impacts of globally expanding marine urban infrastructures, such as those related to the spread of opportunistic and invasive species, could be mitigated through ecologically-driven planning and management of long-term maintenance of these structures. Impact mitigation is a possible outcome of policies that consider the ecological features of built infrastructures and the fundamental value of controlling biodiversity in marine urban systems.", "which Habitat ?", "Marine", 202.0, 208.0], ["Recent comprehensive data provided through the DAISIE project (www.europe-aliens.org) have facilitated the development of the first pan-European assessment of the impacts of alien plants, vertebrates, and invertebrates \u2013 in terrestrial, freshwater, and marine environments \u2013 on ecosystem services. There are 1094 species with documented ecological impacts and 1347 with economic impacts. The two taxonomic groups with the most species causing impacts are terrestrial invertebrates and terrestrial plants. The North Sea is the maritime region that suffers the most impacts. Across taxa and regions, ecological and economic impacts are highly correlated. Terrestrial invertebrates create greater economic impacts than ecological impacts, while the reverse is true for terrestrial plants. Alien species from all taxonomic groups affect \u201csupporting\u201d, \u201cprovisioning\u201d, \u201cregulating\u201d, and \u201ccultural\u201d services and interfere with human well-being. Terrestrial vertebrates are responsible for the greatest range of impacts, and these are widely distributed across Europe. Here, we present a review of the financial costs, as the first step toward calculating an estimate of the economic consequences of alien species in Europe.", "which Habitat ?", "Freshwater", 237.0, 247.0], ["Hussner A (2012). Alien aquatic plant species in European countries. Weed Research 52, 297\u2013306. Summary Alien aquatic plant species cause serious ecological and economic impacts to European freshwater ecosystems. This study presents a comprehensive overview of all alien aquatic plants in Europe, their places of origin and their distribution within the 46 European countries. In total, 96 aquatic species from 30 families have been reported as aliens from at least one European country. Most alien aquatic plants are native to Northern America, followed by Asia and Southern America. 
Elodea canadensis is the most widespread alien aquatic plant in Europe, reported from 41 European countries. Azolla filiculoides ranks second (25), followed by Vallisneria spiralis (22) and Elodea nuttallii (20). The highest number of alien aquatic plant species has been found in Italy and France (34 species), followed by Germany (27), Belgium and Hungary (both 26) and the Netherlands (24). Even though the number of alien aquatic plants seems relatively small, the European and Mediterranean Plant Protection Organization (EPPO, http://www.eppo.org) has listed 18 of these species as invasive or potentially invasive within the EPPO region. As ornamental trade has been regarded as the major pathway for the introduction of alien aquatic plants, trading bans seem to be the most effective option to reduce the risk of further unintended entry of alien aquatic plants into Europe.", "which Habitat ?", "Freshwater", 189.0, 199.0], ["Abstract The spread of nonnative species over the last century has profoundly altered freshwater ecosystems, resulting in novel species assemblages. Interactions between nonnative species may alter their impacts on native species, yet few studies have addressed multispecies interactions. The spread of whirling disease, caused by the nonnative parasite Myxobolus cerebralis, has generated declines in wild trout populations across western North America. Westslope Cutthroat Trout Oncorhynchus clarkii lewisi in the northern Rocky Mountains are threatened by hybridization with introduced Rainbow Trout O. mykiss. Rainbow Trout are more susceptible to whirling disease than Cutthroat Trout and may be more vulnerable due to differences in spawning location. We hypothesized that the presence of whirling disease in a stream would (1) reduce levels of introgressive hybridization at the site scale and (2) limit the size of the hybrid zone at the whole-stream scale. We measured levels of introgression and the spatial ext...", "which Habitat ?", "Freshwater", 85.0, 95.0], ["Theory predicts that systems that are more diverse should be more resistant to exotic species, but experimental tests are needed to verify this. In experimental communities of sessile marine invertebrates, increased species richness significantly decreased invasion success, apparently because species-rich communities more completely and efficiently used available space, the limiting resource in this system. Declining biodiversity thus facilitates invasion in this system, potentially accelerating the loss of biodiversity and the homogenization of the world's biota.", "which Habitat ?", "Marine", 184.0, 190.0], ["I examined the distribution and abundance of bird species across an urban gradient, and concomitant changes in community structure, by censusing summer resident bird populations at six sites in Santa Clara County, California (all former oak woodlands). These sites represented a gradient of urban land use that ranged from relatively undisturbed to highly developed, and included a biological preserve, recreational area, golf course, residential neighborhood, office park, and business district. The composition of the bird community shifted from predominantly native species in the undisturbed area to invasive and exotic species in the business district. Species richness, Shannon diversity, and bird biomass peaked at moderately disturbed sites. One or more species reached maximal densities in each of the sites, and some species were restricted to a given site. 
The predevelopment bird species (assumed to be those found at the most undisturbed site) dropped out gradually as the sites became more urban. These patterns were significantly related to shifts in habitat structure that occurred along the gradient, as determined by canonical correspondence analysis (CCA) using the environmental variables of percent land covered by pavement, buildings, lawn, grasslands, and trees or shrubs. I compared each formal site to four additional sites with similar levels of development within a two-county area to verify that the bird communities at the formal study sites were representative of their land use category.", "which Hypothesis type ?", "Urban", 68.0, 73.0], ["Theory suggests that introduction effort (propagule size or number) should be a key determinant of establishment success for exotic species. Unfortunately, however, propagule pressure is not recorded for most introductions. Studies must therefore either use proxies whose efficacy must be largely assumed, or ignore effort altogether. The results of such studies will be flawed if effort is not distributed at random with respect to other characteristics that are predicted to influence success. We use global data for more than 600 introduction events for birds to show that introduction effort is both the strongest correlate of introduction success, and correlated with a large number of variables previously thought to influence success. Apart from effort, only habitat generalism relates to establishment success in birds.", "which Measure of invasion success ?", "Establishment", 99.0, 112.0], ["Summary 1. Wetlands in urban regions are subjected to a wide variety of anthropogenic disturbances, many of which may promote invasions of exotic plant species. In order to devise management strategies, the influence of different aspects of the urban and natural environments on invasion and community structure must be understood. 2. The roles of soil variables, anthropogenic effects adjacent to and within the wetlands, and vegetation structure on exotic species occurrence within 21 forested wetlands in north-eastern New Jersey, USA, were compared. The hypotheses were tested that different vegetation strata and different invasive species respond similarly to environmental factors, and that invasion increases with increasing direct human impact, hydrologic disturbance, adjacent residential land use and decreasing wetland area. Canonical correspondence analyses, correlation and logistic regression analyses were used to examine invasion by individual species and overall site invasion, as measured by the absolute and relative number of exotic species in the site flora. 3. Within each stratum, different sets of environmental factors separated exotic and native species. Nutrients, soil clay content and pH, adjacent land use and canopy composition were the most frequently identified factors affecting species, but individual species showed highly individualistic responses to the sets of environmental variables, often responding in opposite ways to the same factor. 4. Overall invasion increased with decreasing area but only when sites > 100 ha were included. Unexpectedly, invasion decreased with increasing proportions of industrial/commercial adjacent land use. 5. The hypotheses were only partially supported; invasion does not increase in a simple way with increasing human presence and disturbance. 6. Synthesis and applications. 
The results suggest that a suite of environmental conditions can be identified that are associated with invasion into urban wetlands, which can be widely used for assessment and management. However, a comprehensive ecosystem approach is needed that places the remediation of physical alterations from urbanization within a landscape context. Specifically, sediment inputs and hydrologic changes need to be related to adjoining urban land use and to the overlapping requirements of individual native and exotic species.", "which Measure of invasion success ?", "Number of exotic species", 1037.0, 1061.0], ["Few field experiments have examined the effects of both resource availability and propagule pressure on plant community invasibility. Two non-native forest species, a herb and a shrub (Hesperis matronalis and Rhamnus cathartica, respectively), were sown into 60 1-m² sub-plots distributed across three plots. These contained reconstructed native plant communities in a replaced surface soil layer in a North American forest interior. Resource availability and propagule pressure were manipulated as follows: understorey light level (shaded/unshaded), nutrient availability (control/fertilized), and seed pressures of the two non-native species (control/low/high). Hesperis and Rhamnus cover and the above-ground biomass of Hesperis were significantly higher in shaded sub-plots and at greater propagule pressures. Similarly, the above-ground biomass of Rhamnus was significantly increased with propagule pressure, although this was a function of density. In contrast, of species that seeded into plots from the surrounding forest during the growing season, the non-native species had significantly greater cover in unshaded sub-plots. Plants in these unshaded sub-plots were significantly taller than plants in shaded sub-plots, suggesting a greater fitness. Total and non-native species richness varied significantly among plots indicating the importance of fine-scale dispersal patterns. None of the experimental treatments influenced native species. Since the forest seed bank in our study was colonized primarily by non-native ruderal species that dominated understorey vegetation, the management of invasions by non-native species in forest understoreys will have to address factors that influence light levels and dispersal pathways.", "which Measure of invasion success ?", "Biomass", 713.0, 720.0], ["1. Information on the approximate number of individuals released is available for 47 of the 133 exotic bird species introduced to New Zealand in the late 19th and early 20th centuries. Of these, 21 species had populations surviving in the wild in 1969-79. The long interval between introduction and assessment of outcome provides a rare opportunity to examine the factors correlated with successful establishment without the uncertainty of long-term population persistence associated with studies of short duration. 2. The probability of successful establishment was strongly influenced by the number of individuals released during the main period of introductions. Eighty-three per cent of species that had more than 100 individuals released within a 10-year period became established, compared with 21% of species that had less than 100 birds released. The relationship between the probability of establishment and number of birds released was similar to that found in a previous study of introductions of exotic birds to Australia. 3. 
It was possible to look for a within-family influence on the success of introduction of the number of birds released in nine bird families. A positive influence was found within seven families and no effect in two families. This preponderance of families with a positive effect was statistically significant. 4. A significant effect of body weight on the probability of successful establishment was found, and negative effects of clutch size and latitude of origin. However, the statistical significance of these effects varied according to whether comparison was or was not restricted to within-family variation. After applying the Bonferroni adjustment to significance levels, to allow for the large number of variables and factors being considered, only the effect of the number of birds released was statistically significant. 5. No significant effects on the probability of successful establishment were apparent for the mean date of release, the minimum number of years in which birds were released, the hemisphere of origin (northern or southern) and the size and diversity of latitudinal distribution of the natural geographical range.", "which Measure of invasion success ?", "Establishment", 399.0, 412.0], ["Few field experiments have examined the effects of both resource availability and propagule pressure on plant community invasibility. Two non-native forest species, a herb and a shrub (Hesperis matronalis and Rhamnus cathartica, respectively), were sown into 60 1-m² sub-plots distributed across three plots. These contained reconstructed native plant communities in a replaced surface soil layer in a North American forest interior. Resource availability and propagule pressure were manipulated as follows: understorey light level (shaded/unshaded), nutrient availability (control/fertilized), and seed pressures of the two non-native species (control/low/high). Hesperis and Rhamnus cover and the above-ground biomass of Hesperis were significantly higher in shaded sub-plots and at greater propagule pressures. Similarly, the above-ground biomass of Rhamnus was significantly increased with propagule pressure, although this was a function of density. In contrast, of species that seeded into plots from the surrounding forest during the growing season, the non-native species had significantly greater cover in unshaded sub-plots. Plants in these unshaded sub-plots were significantly taller than plants in shaded sub-plots, suggesting a greater fitness. Total and non-native species richness varied significantly among plots indicating the importance of fine-scale dispersal patterns. None of the experimental treatments influenced native species. Since the forest seed bank in our study was colonized primarily by non-native ruderal species that dominated understorey vegetation, the management of invasions by non-native species in forest understoreys will have to address factors that influence light levels and dispersal pathways.", "which Measure of invasion success ?", "Cover", 686.0, 691.0], ["Whether or not a bird species will establish a new population after invasion of uncolonized habitat depends, from theory, on its life-history attributes and initial population size. Data about initial population sizes are often unobtainable for natural and deliberate avian invasions. In New Zealand, however, contemporary documentation of introduction efforts allowed us to systematically compare unsuccessful and successful invaders without bias. 
We obtained data for 79 species involved in 496 introduction events and used the present-day status of each species as the dependent variable in fitting multiple logistic regression models. We found that introduction efforts for species that migrated within their endemic ranges were significantly less likely to be successful than those for nonmigratory species with similar introduction efforts. Initial population size, measured as number of releases and as the minimum number of propagules liberated in New Zealand, significantly increased the probability of translocation success. A null model showed that species released more times had a higher probability per release of successful establishment. Among 36 species for which data were available, successful invaders had significantly higher natality/mortality ratios. Successful invaders were also liberated at significantly more sites. Invasion of New Zealand by exotic birds was therefore primarily related to management, an outcome that has implications for conservation biology.", "which Measure of invasion success ?", "Establishment", 1139.0, 1152.0], ["Identifying mechanisms governing the establishment and spread of invasive species is a fundamental challenge in invasion biology. Because species invasions are frequently observed only after the species presents an environmental threat, research identifying the contributing agents to dispersal and subsequent spread are confined to retrograde observations. Here, we use a combination of seasonal surveys and experimental approaches to test the relative importance of behavioral and abiotic factors in determining the local co-occurrence of two invasive ant species, the established Argentine ant (Linepithema humile Mayr) and the newly invasive Asian needle ant (Pachycondyla chinensis Emery). We show that the broader climatic envelope of P. chinensis enables it to establish earlier in the year than L. humile. We also demonstrate that increased P. chinensis propagule pressure during periods of L. humile scarcity contributes to successful P. chinensis early season establishment. Furthermore, we show that, although L. humile is the numerically superior and behaviorally dominant species at baits, P. chinensis is currently displacing L. humile across the invaded landscape. By identifying the features promoting the displacement of one invasive ant by another we can better understand both early determinants in the invasion process and factors limiting colony expansion and survival.", "which Measure of invasion success ?", "Establishment", 37.0, 50.0], ["Though established populations of invasive species can exert substantial competitive effects on native populations, exotic propagules may require disturbances that decrease competitive interference by resident species in order to become established. We compared the relative competitiveness of native perennial and exotic annual grasses in a California coastal prairie grassland to test whether the introduction of exotic propagules to coastal grasslands in the 19th century was likely to have been sufficient to shift community composition from native perennial to exotic annual grasses. Under experimental field conditions, we compared the aboveground productivity of native species alone to native species competing with exotics, and exotic species alone to exotic species competing with natives. Over the course of the four-year experiment, native grasses became increasingly dominant in the mixed-assemblage plots containing natives and exotics. 
Although the competitive interactions in the first growing season favored the exotics, over time the native grasses significantly reduced the productivity of exotic grasses. The number of exotic seedlings emerging and the biomass of dicot seedlings removed during weeding were also significantly lower in plots containing natives as compared to plots that did not contain natives. We found evidence that the ability of established native perennial species to limit space available for exotic annual seeds to germinate and to limit the light available to exotic seedlings reduced exotic productivity and shifted competitive interactions in favor of the natives. If interactions between native perennial and exotic annual grasses follow a similar pattern in other coastal grassland habitats, then the introduction of exotic grass propagules alone without changes in land use or climate, or both, was likely insufficient to convert the region's grasslands.", "which Measure of invasion success ?", "Biomass", 1175.0, 1182.0], ["Propagule pressure is intuitively a key factor in biological invasions: increased availability of propagules increases the chances of establishment, persistence, naturalization, and invasion. The role of propagule pressure relative to disturbance and various environmental factors is, however, difficult to quantify. We explored the relative importance of factors driving invasions using detailed data on the distribution and percentage cover of alien tree species on South Africa’s Agulhas Plain (2,160 km²). Classification trees based on geology, climate, land use, and topography adequately explained distribution but not abundance (canopy cover) of three widespread invasive species (Acacia cyclops, Acacia saligna, and Pinus pinaster). A semimechanistic model was then developed to quantify the roles of propagule pressure and environmental heterogeneity in structuring invasion patterns. The intensity of propagule pressure (approximated by the distance from putative invasion foci) was a much better predictor of canopy cover than any environmental factor that was considered. The influence of environmental factors was then assessed on the residuals of the first model to determine how propagule pressure interacts with environmental factors. The mediating effect of environmental factors was species specific. Models combining propagule pressure and environmental factors successfully predicted more than 70% of the variation in canopy cover for each species.", "which Measure of invasion success ?", "Cover", 437.0, 442.0], ["1 Understanding why some alien plant species become invasive when others fail is a fundamental goal in invasion ecology. We used detailed historical planting records of alien plant species introduced to Amani Botanical Garden, Tanzania and contemporary surveys of their invasion status to assess the relative ability of phylogeny, propagule pressure, residence time, plant traits and other factors to explain the success of alien plant species at different stages of the invasion process. 2 Species with native ranges centred in the tropics and with larger seeds were more likely to regenerate, whereas naturalization success was explained by longer residence time, faster growth rate, fewer seeds per fruit, smaller seed mass and shade tolerance. 
3 Naturalized species spreading greater distances from original plantings tended to have more seeds per fruit, whereas species dispersed by canopy\u2010feeding animals and with native ranges centred on the tropics tended to have spread more widely in the botanical garden. Species dispersed by canopy\u2010feeding animals and with greater seed mass were more likely to be established in closed forest. 4 Phylogeny alone made a relatively minor contribution to the explanatory power of statistical models, but a greater proportion of variation in spread within the botanical garden and in forest establishment was explained by phylogeny alone than for other models. Phylogeny jointly with variables also explained a greater proportion of variation in forest establishment than in other models. Phylogenetic correction weakened the importance of dispersal syndrome in explaining compartmental spread, seed mass in the forest establishment model, and all factors except for growth rate and residence time in the naturalization model. 5 Synthesis. This study demonstrates that it matters considerably how invasive species are defined when trying to understand the relative ability of multiple variables to explain invasion success. By disentangling different invasion stages and using relatively objective criteria to assess species status, this study highlights that relatively simple models can help to explain why some alien plants are able to naturalize, spread and even establish in closed tropical forests.", "which Measure of invasion success ?", "Establishment", 1333.0, 1346.0], ["Species that are frequently introduced to an exotic range have a high potential of becoming invasive. Besides propagule pressure, however, no other generally strong determinant of invasion success is known. Although evidence has accumulated that human affiliates (domesticates, pets, human commensals) also have high invasion success, existing studies do not distinguish whether this success can be completely explained by or is partly independent of propagule pressure. Here, we analyze both factors independently, propagule pressure and human affiliation. We also consider a third factor directly related to humans, hunting, and 17 traits on each species' population size and extent, diet, body size, and life history. Our dataset includes all 2362 freshwater fish, mammals, and birds native to Europe or North America. In contrast to most previous studies, we look at the complete invasion process consisting of (1) introduction, (2) establishment, and (3) spread. In this way, we not only consider which of the introduced species became invasive but also which species were introduced. Of the 20 factors tested, propagule pressure and human affiliation were the two strongest determinants of invasion success across all taxa and steps. This was true for multivariate analyses that account for intercorrelations among variables as well as univariate analyses, suggesting that human affiliation influenced invasion success independently of propagule pressure. Some factors affected the different steps of the invasion process antagonistically. For example, game species were much more likely to be introduced to an exotic continent than nonhunted species but tended to be less likely to establish themselves and spread. 
Such antagonistic effects show the importance of considering the complete invasion process.", "which Measure of invasion success ?", "Spread", 960.0, 966.0], ["Abstract: Major progress in understanding biological invasions has recently been made by quantitatively comparing successful and unsuccessful invasions. We used such an approach to test hypotheses about the role of climatic suitability, life history, and historical factors in the establishment and subsequent spread of 40 species of mammal that have been introduced to mainland Australia. Relative to failed species, the 23 species that became established had a greater area of climatically suitable habitat available in Australia, had previously become established elsewhere, had a larger overseas range, and were introduced more times. These relationships held after phylogeny was controlled for, but successful species were also significantly more likely to be nonmigratory. A forward\u2010selection model included only two of the nine variables for which we had data for all species: climatic suitability and introduction effort. When the model was adjusted for phylogeny, those same two variables were included, along with previous establishment success. Of the established species, those with a larger geographic range size in Australia had a greater area of climatically suitable habitat, had traits associated with a faster population growth rate (small body size, shorter life span, lower weaning age, more offspring per year), were nonherbivorous, and had a larger overseas range size. When the model was adjusted for phylogeny, the importance of climatic suitability and the life\u2010history traits remained significant, but overseas range size was no longer important and species with greater introduction effort had a larger geographic range size. Two variables explained variation in geographic range size in a forward\u2010selection model: species with smaller body mass and greater longevity tended to have larger range sizes in Australia. These results mirror those from a recent analysis of exotic\u2010bird introductions into Australia, suggesting that, at least among vertebrate taxa, similar factors predict establishment and spread. Our approach and results are being used to assess the risks of exotic vertebrates becoming established and spreading in Australia.", "which Measure of invasion success ?", "Establishment", 281.0, 294.0], ["Ecological theory on biological invasions attempts to characterize the predictors of invasion success and the relative importance of the different drivers of population establishment. An outstanding question is how propagule pressure determines the probability of population establishment, where propagule pressure is the number of individuals of a species introduced into a specific location (propagule size) and their frequency of introduction (propagule number). Here, we used large-scale replicated mesocosm ponds over three reproductive seasons to identify how propagule size and number predict the probability of establishment of one of world's most invasive fish, Pseudorasbora parva, as well as its effect on the somatic growth of individuals during establishment. We demonstrated that, although a threshold of 11 introduced pairs of fish (a pair is 1 male, 1 female) was required for establishment probability to exceed 95%, establishment also occurred at low propagule size (1-5 pairs). 
Although single introduction events were as effective as multiple events at enabling establishment, the propagule sizes used in the multiple introductions were above the detected threshold for establishment. After three reproductive seasons, population abundance was also a function of propagule size, with rapid increases in abundance only apparent when propagule size exceeded 25 pairs. This was initially assisted by adapted biological traits, including rapid individual somatic growth that helped to overcome demographic bottlenecks.", "which Measure of invasion success ?", "Establishment", 169.0, 182.0], ["The literature on alien animal invaders focuses largely on successful invasions over broad geographic scales and rarely examines failed invasions. As a result, it is difficult to make predictions about which species are likely to become successful invaders or which environments are likely to be most susceptible to invasion. To address these issues, we developed a data set on fish invasions in watersheds throughout California (USA) that includes failed introductions. Our data set includes information from three stages of the invasion process (establishment, spread, and integration). We define seven categorical predictor variables (trophic status, size of native range, parental care, maximum adult size, physiological tolerance, distance from nearest native source, and propagule pressure) and one continuous predictor variable (prior invasion success) for all introduced species. Using an information-theoretic approach we evaluate 45 separate hypotheses derived from the invasion literature over these three sta...", "which Measure of invasion success ?", "Establishment", 548.0, 561.0], ["1. The movement of species from their native ranges to alien environments is a serious threat to biological diversity. The number of individuals involved in an invasion provides a strong theoretical basis for determining the likelihood of establishment of an alien species. 2. Here a field experiment was used to manipulate the critical first stages of the invasion of an alien insect, a psyllid weed biocontrol agent, Arytainilla spartiophila Forster, in New Zealand and to observe the progress of the invasion over the following 6 years. 3. Fifty-five releases were made along a linear transect 135 km long: 10 releases of two, four, 10, 30 and 90 psyllids and five releases of 270 psyllids. Six years after their original release, psyllids were present in 22 of the 55 release sites. Analysis by logistic regression showed that the probability of establishment was significantly and positively related to initial release size, but that this effect was important only during the psyllids' first year in the field. 4. Although less likely to establish, some of the releases of two and four psyllids did survive 5 years in the field. Overall, releases that survived their first year had a 96% chance of surviving thereafter, providing the release site remained secure. The probability of colony loss due to site destruction remained the same throughout the experiment, whereas the probability of natural extinction reduced steeply over time. 5. During the first year colonies were undergoing a process of establishment and, in most cases, population size decreased. After this first year, a period of exponential growth ensued. 6. A lag period was observed before the populations increased dramatically in size. 
This was thought to be due to inherent lags caused by the nature of population growth, which causes the smaller releases to appear to have a longer lag period.", "which Measure of invasion success ?", "Establishment", 239.0, 252.0], ["Aim We used alien plant species introduced to a botanic garden to investigate the relative importance of species traits (leaf traits, dispersal syndrome) and introduction characteristics (propagule pressure, residence time and distance to forest) in explaining establishment success in surrounding tropical forest. We also used invasion scores from a weed risk assessment protocol as an independent measure of invasion risk and assessed differences in variables between high\u2010 and low\u2010risk species.", "which Measure of invasion success ?", "Establishment", 261.0, 274.0], ["Summary 1. The global spread of non-native species is a major concern for ecologists, particularly in regards to aquatic systems. Predicting the characteristics of successful invaders has been a goal of invasion biology for decades. Quantitative analysis of species characteristics may allow invasive species profiling and assist the development of risk assessment strategies. 2. In the current analysis we developed a data base on fish invasions in catchments throughout California that distinguishes among the establishment, spread and integration stages of the invasion process, and separates social and biological factors related to invasion success. 3. Using Akaike's information criteria (AIC), logistic and multiple regression models, we show suites of biological variables, which are important in predicting establishment (parental care and physiological tolerance), spread (life span, distance from nearest native source and trophic status) and abundance (maximum size, physiological tolerance and distance from nearest native source). Two variables indicating human interest in a species (propagule pressure and prior invasion success) are predictors of successful establishment and prior invasion success is a predictor of spread and integration. 4. Despite the idiosyncratic nature of the invasion process, our results suggest some assistance in the search for characteristics of fish species that successfully transition between invasion stages.", "which Measure of invasion success ?", "Establishment", 512.0, 525.0], ["Propagule pressure is recognized as a fundamental driver of freshwater fish invasions, though few studies have quantified its role. Natural experiments can be used to quantify the role of this factor relative to others in driving establishment success. An irrigation network in South Africa takes water from an inter-basin water transfer (IBWT) scheme to supply multiple small irrigation ponds. We compared fish community composition upstream, within, and downstream of the irrigation network, to show that this system is a unidirectional dispersal network with a single immigration source. We then assessed the effect of propagule pressure and biological adaptation on the colonization success of nine fish species across 30 recipient ponds of varying age. Establishing species received significantly more propagules at the source than did incidental species, while rates of establishment across the ponds displayed a saturation response to propagule pressure. This shows that propagule pressure is a significant driver of establishment overall. Those species that did not establish were either extremely rare at the immigration source or lacked the reproductive adaptations to breed in the ponds. 
The ability of all nine species to arrive at some of the ponds illustrates how long-term continuous propagule pressure from IBWT infrastructure enables range expansion of fishes. The quantitative link between propagule pressure and success and rate of population establishment confirms the driving role of this factor in fish invasion ecology.", "which Measure of invasion success ?", "Establishment", 230.0, 243.0], ["The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents\u2019 circumstances, such as a spouse\u2019s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.", "which observation type ?", "academic", NaN, NaN], ["Granular activated carbon (GAC) materials were prepared via simple gas activation of silkworm cocoons and were coated on ZnO nanorods (ZNRs) by the facile hydrothermal method. The present combination of GAC and ZNRs shows a core-shell structure (where the GAC is coated on the surface of ZNRs) and is exposed by systematic material analysis. The as-prepared samples were then fabricated as dual-functional sensors and, most fascinatingly, the as-fabricated core-shell structure exhibits better UV and H2 sensing properties than those of as-fabricated ZNRs and GAC. Thus, the present core-shell structure-based H2 sensor exhibits fast responses of 11% (10 ppm) and 23.2% (200 ppm) with ultrafast response and recovery. However, the UV sensor offers an ultrahigh photoresponsivity of 57.9 A W-1, which is superior to that of as-grown ZNRs (0.6 A W-1). Besides this, switching photoresponse of GAC/ZNR core-shell structures exhibits a higher switching ratio (between dark and photocurrent) of 1585, with ultrafast response and recovery, than that of as-grown ZNRs (40). Because of the fast adsorption ability of GAC, it was observed that the finest distribution of GAC on ZNRs results in rapid electron transportation between the conduction bands of GAC and ZNRs while sensing H2 and UV. Furthermore, the present core-shell structure-based UV and H2 sensors also well-retained excellent sensitivity, repeatability, and long-term stability. Thus, the salient feature of this combination is that it provides a dual-functional sensor with biowaste cocoon and ZnO, which is ecological and inexpensive.", "which Method of nanomaterial synthesis ?", "Hydrothermal", 155.0, 167.0], ["We report herein a glucose biosensor based on glucose oxidase (GOx) immobilized on ZnO nanorod array grown by hydrothermal decomposition. In a phosphate buffer solution with a pH value of 7.4, negatively charged GOx was immobilized on positively charged ZnO nanorods through electrostatic interaction. 
At an applied potential of +0.8 V versus Ag∕AgCl reference electrode, ZnO nanorods based biosensor presented a high and reproducible sensitivity of 23.1 μA cm−2 mM−1 with a response time of less than 5 s. The biosensor shows a linear range from 0.01 to 3.45 mM and an experimental limit of detection of 0.01 mM. An apparent Michaelis-Menten constant of 2.9 mM shows a high affinity between glucose and GOx immobilized on ZnO nanorods.", "which Method of nanomaterial synthesis ?", "hydrothermal", 110.0, 122.0], ["Stable bioimaging with nanomaterials in living cells has been a great challenge and of great importance for understanding intracellular events and elucidating various biological phenomena. Herein, we demonstrate that N,S co-doped carbon dots (N,S-CDs) produced by one-pot reflux treatment of C3N3S3 with ethane diamine at a relatively low temperature (80 °C) exhibit a high fluorescence quantum yield of about 30.4%, favorable biocompatibility, low toxicity, strong resistance to photobleaching and good stability. The N,S-CDs as an effective temperature indicator exhibit good temperature-dependent fluorescence with a sensational linear response from 20 to 80 °C. In addition, the obtained N,S-CDs facilitate high selectivity detection of tetracycline (TC) with a detection limit as low as 3 × 10−10 M and a wide linear range from 1.39 × 10−5 to 1.39 × 10−9 M. More importantly, the N,S-CDs display an unambiguous bioimaging ability in the detection of intracellular temperature and TC with satisfactory results.", "which Method of nanomaterial synthesis ?", "Reflux", 272.0, 278.0], ["Herein, we incorporated dual biotemplates, i.e., cellulose nanocrystals (CNC) and apoferritin, into electrospinning solution to achieve three distinct benefits, i.e., (i) facile synthesis of a WO3 nanotube by utilizing the self-agglomerating nature of CNC in the core of as-spun nanofibers, (ii) effective sensitization by partial phase transition from WO3 to Na2W4O13 induced by interaction between sodium-doped CNC and WO3 during calcination, and (iii) uniform functionalization with monodispersive apoferritin-derived Pt catalytic nanoparticles (2.22 ± 0.42 nm). Interestingly, the sensitization effect of Na2W4O13 on WO3 resulted in highly selective H2S sensing characteristics against seven different interfering molecules. Furthermore, synergistic effects with a bioinspired Pt catalyst induced a remarkably enhanced H2S response (Rair/Rgas = 203.5), unparalleled selectivity (Rair/Rgas < 1.3 for the interfering molecules), and rapid response (<10 s)/recovery (<30 s) time at 1 ppm of H2S under 95% relative humidity level. This work paves the way for a new class of cosensitization routes to overcome critical shortcomings of SMO-based chemical sensors, thus providing a potential platform for diagnosis of halitosis.", "which Method of nanomaterial synthesis ?", "Electrospinning", 100.0, 115.0], ["BACKGROUND AND OBJECTIVE Patients who receive cochlear implants (CIs) constitute a significant population in Iran. This population needs regular monitoring of long-term outcomes, educational placement and quality of life. Currently, there is no national or regional registry on the long-term outcomes of CI users in Iran. The present study aims to introduce the design and implementation of a national patient-outcomes registry on CI recipients for Iran. 
This Iranian CI registry (ICIR) provides an integrated framework for data collection and sharing, scientific communication and collaboration in CI research. METHODS The national ICIR is a prospective patient-outcomes registry for patients who are implanted in one of the Iranian centers. The registry is based on an integrated database that utilizes a secure web-based platform to collect response data from clinicians and patient's proxy via electronic case report forms (e-CRFs) at predefined intervals. The CI candidates are evaluated with a set of standardized and non-standardized questionnaires prior to initial device activation (as baseline variables), at three-monthly follow-up intervals up to 24 months and annually thereafter. RESULTS The software application of the ICIR registry is designed in a user-friendly graphical interface with different entry fields. The collected data are categorized into four subsets including personal information, clinical data, surgery data and commission results. The main parameters include audiometric performance of the patient, device use, patient comorbidities, quality of life and health-related utilities, across different types of CI devices from different manufacturers. CONCLUSION The ICIR database could be used by the growing network of CI centers in Iran. Clinicians, academic and industrial researchers as well as healthcare policy makers could use this database to develop more effective CI devices and better management of the recipients as well as to develop national guidelines.", "which has Application Scope ?", "Clinical", 1417.0, 1425.0], ["In this paper, we present electronic participatory budgeting (ePB) as a novel application domain for recommender systems. On public data from the ePB platforms of three major US cities - Cambridge, Miami and New York City - we evaluate various methods that exploit heterogeneous sources and models of user preferences to provide personalized recommendations of citizen proposals. We show that depending on characteristics of the cities and their participatory processes, particular methods are more effective than others for each city. This result, together with open issues identified in the paper, calls for further research in the area.", "which has Application Scope ?", "City", 217.0, 221.0], ["In e-participation platforms, citizens suggest, discuss and vote online for initiatives aimed to address a wide range of issues and problems in a city, such as economic development, public safety, budgets, infrastructure, housing, environment, social rights, and health care. For a particular citizen, the number of proposals and debates may be overwhelming, and recommender systems could help filter and rank those that are more relevant. Focusing on a particular case, the `Decide Madrid' platform, in this paper we empirically investigate which sources of user preferences and recommendation approaches could be more effective, in terms of several aspects, namely precision, coverage and diversity.", "which has Application Scope ?", "City", 146.0, 150.0], ["With the advancement of smart cities and the development of intelligent mobile terminals and wireless networks, traditional text information services no longer meet the needs of community residents, and community image services have appeared as a new media service. “There are pictures of the truth”: images have become the way community residents understand and keep up with what is new in their community, and image information services have become a new form of information service. 
However, there are two major problems with image information services. Firstly, the underlying eigenvalues extracted by current image feature extraction techniques are difficult for users to understand, and there is a semantic gap between the image content itself and the user’s understanding; secondly, as the volume of image data in community life grows quickly, it is difficult for users to find the image data they are interested in. Aiming at the two problems, this paper proposes a unified image semantic scene model to express the image content. On this basis, a collaborative filtering recommendation model of fusion scene semantics is proposed. In the recommendation model, a comprehensive and accurate user interest model is proposed to improve the recommendation quality. The present approach has achieved good results in the pilot cities of Wenzhou and Yan'an, where it is in regular operation.", "which has Application Scope ?", "User", 708.0, 712.0], ["Registration systems for diseases and other health outcomes provide an important resource for biomedical research, as well as tools for public health surveillance and improvement of quality of care. The Ministry of Health and Medical Education (MOHME) of Iran launched a national program to establish registration systems for different diseases and health outcomes. Based on the national program, we organized several workshops and training programs and disseminated the concepts and knowledge of the registration systems. Following a call for proposals, we received 100 applications and after thorough evaluation and corrections by the principal investigators, we approved and granted about 80 registries for three years. Having a strong steering committee and a committed executive and scientific group, establishing national and international collaboration, stating clear objectives, applying feasible software, and considering stable financing were key components for a successful registry and were considered in the evaluation processes. We paid particular attention to non-communicable diseases, which constitute an emerging public health problem. We prioritized establishment of regional population-based cancer registries (PBCRs) in 10 provinces in collaboration with the International Agency for Research on Cancer. This initiative was successful and registry programs became popular among researchers and research centers and created several national and international collaborations in different areas to answer important public health and clinical questions. In this paper, we report the details of the program and list of registries that were granted in the first round.", "which Geographical scope ?", "National", 268.0, 276.0], ["BACKGROUND AND OBJECTIVE Patients who receive cochlear implants (CIs) constitute a significant population in Iran. This population needs regular monitoring of long-term outcomes, educational placement and quality of life. Currently, there is no national or regional registry on the long-term outcomes of CI users in Iran. The present study aims to introduce the design and implementation of a national patient-outcomes registry on CI recipients for Iran. This Iranian CI registry (ICIR) provides an integrated framework for data collection and sharing, scientific communication and collaboration in CI research. METHODS The national ICIR is a prospective patient-outcomes registry for patients who are implanted in one of the Iranian centers. 
The registry is based on an integrated database that utilizes a secure web-based platform to collect response data from clinicians and patient's proxy via electronic case report forms (e-CRFs) at predefined intervals. The CI candidates are evaluated with a set of standardized and non-standardized questionnaires prior to initial device activation (as baseline variables), at three-monthly follow-up intervals up to 24 months and annually thereafter. RESULTS The software application of the ICIR registry is designed in a user-friendly graphical interface with different entry fields. The collected data are categorized into four subsets including personal information, clinical data, surgery data and commission results. The main parameters include audiometric performance of the patient, device use, patient comorbidities, quality of life and health-related utilities, across different types of CI devices from different manufacturers. CONCLUSION The ICIR database could be used by the growing network of CI centers in Iran. Clinicians, academic and industrial researchers as well as healthcare policy makers could use this database to develop more effective CI devices and better management of the recipients as well as to develop national guidelines.", "which Geographical scope ?", "Regional", 255.0, 263.0], ["Remote sensing techniques have emerged as an asset for various geological studies. Satellite images obtained by different sensors contain plenty of information related to the terrain. Digital image processing further helps in customized ways for the prospecting of minerals. In this study, an attempt has been made to map the hydrothermally altered zones using multispectral and hyperspectral datasets of South East Rajasthan. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Hyperion (Level1R) datasets have been processed to generate different Band Ratio Composites (BRCs). For this study, ASTER derived BRCs were generated to delineate the alteration zones, gossans, abundant clays and host rocks. ASTER and Hyperion images were further processed to extract mineral end members and classified mineral maps have been produced using Spectral Angle Mapper (SAM) method. Results were validated with the geological map of the area which shows positive agreement with the image processing outputs. Thus, this study concludes that the band ratios and image processing in combination play a significant role in the demarcation of alteration zones which may provide pathfinders for mineral prospecting studies. Keywords—Advanced space-borne thermal emission and reflection radiometer, ASTER, Hyperion, Band ratios, Alteration zones, spectral angle mapper.", "which Minerals Mapped/ Identified ?", "Gossan", NaN, NaN], ["Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well documented. The launch of the National Aeronautics and Space Administration Earth Observing 1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4-2.5-μm range with 242 spectral bands at approximately 10-nm spectral resolution and 30-m spatial resolution. Analytical Imaging and Geophysics LLC and the Commonwealth Scientific and Industrial Research Organisation have been involved in efforts to evaluate, validate, and demonstrate Hyperion's utility for geologic mapping in a variety of sites in the United States and around the world. 
Initial results over several sites with established ground truth and years of airborne hyperspectral data show that Hyperion data from the shortwave infrared spectrometer can be used to produce useful geologic (mineralogic) information. Minerals mapped include carbonates, chlorite, epidote, kaolinite, alunite, buddingtonite, muscovite, hydrothermal silica, and zeolite. Hyperion data collected under optimum conditions (summer season, bright targets, well-exposed geology) indicate that Hyperion data meet prelaunch specifications and allow subtle distinctions such as determining the difference between calcite and dolomite and mapping solid solution differences in micas caused by substitution in octahedral molecular sites. Comparison of airborne hyperspectral data [from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)] to the Hyperion data establishes that Hyperion provides similar basic mineralogic information, with the principal limitation being limited mapping of fine spectral detail under less-than-optimum acquisition conditions (winter season, dark targets) based on lower signal-to-noise ratios. Case histories demonstrate the analysis methodologies and level of information available from the Hyperion data. They also show the viability of Hyperion as a means of extending hyperspectral mineral mapping to areas not accessible to aircraft sensors. The analysis results demonstrate that spaceborne hyperspectral sensors can produce useful mineralogic information, but also indicate that SNR improvements are required for future spaceborne sensors to allow the same level of mapping that is currently possible from airborne sensors such as AVIRIS.", "which Minerals Mapped/ Identified ?", "Carbonate", NaN, NaN], ["Satellite-based hyperspectral imaging became a reality in November 2000 with the successful launch and operation of the Hyperion system on board the EO-1 platform. Hyperion is a pushbroom imager with 220 spectral bands in the 400-2500 nm wavelength range, a 30 meter pixel size and a 7.5 km swath. Pre-launch characterization of Hyperion measured low signal to noise (SNR<40:1) for the geologically significant shortwave infrared (SWIR) wavelength region (2000-2500 nm). The impact of this low SNR on Hyperion's capacity to resolve spectral detail was evaluated for the Mount Fitton test site in South Australia, which comprises a diverse range of minerals with narrow, diagnostic absorption bands in the SWIR. Following radiative transfer correction of the Hyperion radiance at sensor data to surface radiance (apparent reflectance), diagnostic spectral signatures were clearly apparent, including: green vegetation; talc; dolomite; chlorite; white mica and possibly tremolite. Even though the derived surface composition maps generated from these image endmembers were noisy (both random and column), they were nonetheless spatially coherent and correlated well with the known geology. In addition, the Hyperion data were used to measure and map spectral shifts of <10 nm in the SWIR related to white mica chemical variations.", "which Minerals Mapped/ Identified ?", "Tremolite", 968.0, 977.0], ["Satellite-based hyperspectral imaging became a reality in November 2000 with the successful launch and operation of the Hyperion system on board the EO-1 platform. Hyperion is a pushbroom imager with 220 spectral bands in the 400-2500 nm wavelength range, a 30 meter pixel size and a 7.5 km swath. 
Pre-launch characterization of Hyperion measured low signal to noise (SNR<40:1) for the geologically significant shortwave infrared (SWIR) wavelength region (2000-2500 nm). The impact of this low SNR on Hyperion's capacity to resolve spectral detail was evaluated for the Mount Fitton test site in South Australia, which comprises a diverse range of minerals with narrow, diagnostic absorption bands in the SWIR. Following radiative transfer correction of the Hyperion radiance at sensor data to surface radiance (apparent reflectance), diagnostic spectral signatures were clearly apparent, including: green vegetation; talc; dolomite; chlorite; white mica and possibly tremolite. Even though the derived surface composition maps generated from these image endmembers were noisy (both random and column), they were nonetheless spatially coherent and correlated well with the known geology. In addition, the Hyperion data were used to measure and map spectral shifts of <10 nm in the SWIR related to white mica chemical variations.", "which Minerals Mapped/ Identified ?", "Talc", 918.0, 922.0], ["Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well documented. The launch of the National Aeronautics and Space Administration Earth Observing 1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4-2.5-μm range with 242 spectral bands at approximately 10-nm spectral resolution and 30-m spatial resolution. Analytical Imaging and Geophysics LLC and the Commonwealth Scientific and Industrial Research Organisation have been involved in efforts to evaluate, validate, and demonstrate Hyperion's utility for geologic mapping in a variety of sites in the United States and around the world. Initial results over several sites with established ground truth and years of airborne hyperspectral data show that Hyperion data from the shortwave infrared spectrometer can be used to produce useful geologic (mineralogic) information. Minerals mapped include carbonates, chlorite, epidote, kaolinite, alunite, buddingtonite, muscovite, hydrothermal silica, and zeolite. Hyperion data collected under optimum conditions (summer season, bright targets, well-exposed geology) indicate that Hyperion data meet prelaunch specifications and allow subtle distinctions such as determining the difference between calcite and dolomite and mapping solid solution differences in micas caused by substitution in octahedral molecular sites. 
The analysis results demonstrate that spaceborne hyperspectral sensors can produce useful mineralogic information, but also indicate that SNR improvements are required for future spaceborne sensors to allow the same level of mapping that is currently possible from airborne sensors such as AVIRIS.", "which Minerals Mapped/ Identified ?", "Muscovite", 1091.0, 1100.0], ["Satellite-based hyperspectral imaging became a reality in November 2000 with the successful launch and operation of the Hyperion system on board the EO-1 platform. Hyperion is a pushbroom imager with 220 spectral bands in the 400-2500 nm wavelength range, a 30 meter pixel size and a 7.5 km swath. Pre-launch characterization of Hyperion measured low signal to noise (SNR<40:1) for the geologically significant shortwave infrared (SWIR) wavelength region (2000-2500 nm). The impact of this low SNR on Hyperion's capacity to resolve spectral detail was evaluated for the Mount Fitton test site in South Australia, which comprises a diverse range of minerals with narrow, diagnostic absorption bands in the SWIR. Following radiative transfer correction of the Hyperion radiance at sensor data to surface radiance (apparent reflectance), diagnostic spectral signatures were clearly apparent, including: green vegetation; talc; dolomite; chlorite; white mica and possibly tremolite. Even though the derived surface composition maps generated from these image endmembers were noisy (both random and column), they were nonetheless spatially coherent and correlated well with the known geology. In addition, the Hyperion data were used to measure and map spectral shifts of <10 nm in the SWIR related to white mica chemical variations.", "which Minerals Mapped/ Identified ?", "Mica", 950.0, 954.0], ["Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well documented. The launch of the National Aeronautics and Space Administration Earth Observing 1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4-2.5-/spl mu/m range with 242 spectral bands at approximately 10-nm spectral resolution and 30-m spatial resolution. Analytical Imaging and Geophysics LLC and the Commonwealth Scientific and Industrial Research Organisation have been involved in efforts to evaluate, validate, and demonstrate Hyperions's utility for geologic mapping in a variety of sites in the United States and around the world. Initial results over several sites with established ground truth and years of airborne hyperspectral data show that Hyperion data from the shortwave infrared spectrometer can be used to produce useful geologic (mineralogic) information. Minerals mapped include carbonates, chlorite, epidote, kaolinite, alunite, buddingtonite, muscovite, hydrothermal silica, and zeolite. Hyperion data collected under optimum conditions (summer season, bright targets, well-exposed geology) indicate that Hyperion data meet prelaunch specifications and allow subtle distinctions such as determining the difference between calcite and dolomite and mapping solid solution differences in micas caused by substitution in octahedral molecular sites. 
Comparison of airborne hyperspectral data [from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)] to the Hyperion data establishes that Hyperion provides similar basic mineralogic information, with the principal limitation being limited mapping of fine spectral detail under less-than-optimum acquisition conditions (winter season, dark targets) based on lower signal-to-noise ratios. Case histories demonstrate the analysis methodologies and level of information available from the Hyperion data. They also show the viability of Hyperion as a means of extending hyperspectral mineral mapping to areas not accessible to aircraft sensors. The analysis results demonstrate that spaceborne hyperspectral sensors can produce useful mineralogic information, but also indicate that SNR improvements are required for future spaceborne sensors to allow the same level of mapping that is currently possible from airborne sensors such as AVIRIS.", "which Minerals Mapped/ Identified ?", "Calcite", 1370.0, 1377.0], ["ABSTRACT Principal component analysis is commonly used for alteration mapping in metallogenic provinces. Three principal component analysis techniques are used for mapping alterations around porphyry intrusions in the Meiduk area: the selective method, the extended selective or Crosta method, and the standard method. In this study, the results of applying these three different techniques to Landsat TM bands are compared. The comparison is based mainly on visual analysis and field observations of the results. Selective principal component analysis using TM bands 5 and 7 is the most suitable for alteration mapping. However, vegetated areas are also highlighted in the PC2 image. Applying the extended selective method to TM bands 1, 4, 5 and 7 for hydroxyl mapping reveals alteration halos around the intrusions, but its application to iron oxide mapping distinguishes an extensive area of sedimentary lithology. The standard principal component analysis method, on the other hand, enhances the alterations in the PC5 image, but it is laborious in terms of computing time. Hydroxyl mapping using the extended selective or Crosta technique is the most appropriate for alteration mapping around porphyry intrusive bodies.", "which Minerals Mapped/ Identified ?", "Hydroxyl", NaN, NaN], ["Satellite-based hyperspectral imaging became a reality in November 2000 with the successful launch and operation of the Hyperion system on board the EO-1 platform. Hyperion is a pushbroom imager with 220 spectral bands in the 400-2500 nm wavelength range, a 30 meter pixel size and a 7.5 km swath. Pre-launch characterization of Hyperion measured low signal to noise (SNR<40:1) for the geologically significant shortwave infrared (SWIR) wavelength region (2000-2500 nm). The impact of this low SNR on Hyperion's capacity to resolve spectral detail was evaluated for the Mount Fitton test site in South Australia, which comprises a diverse range of minerals with narrow, diagnostic absorption bands in the SWIR. 
Following radiative transfer correction of the Hyperion radiance at sensor data to surface radiance (apparent reflectance), diagnostic spectral signatures were clearly apparent, including: green vegetation; talc; dolomite; chlorite; white mica and possibly tremolite. Even though the derived surface composition maps generated from these image endmembers were noisy (both random and column), they were nonetheless spatially coherent and correlated well with the known geology. In addition, the Hyperion data were used to measure and map spectral shifts of <10 nm in the SWIR related to white mica chemical variations.", "which Minerals Mapped/ Identified ?", "Chlorite", 934.0, 942.0], ["The Takab area, located in north\u2010west Iran, is an important gold mineralized region with a long history of gold mining. The gold is associated with toxic metals/metalloids. In this study, Advanced Space Borne Thermal Emission and Reflection Radiometer data are evaluated for mapping gold and base\u2010metal mineralization through alteration mapping. Two different methods are used for argillic and silicic alteration mapping: selective principal\u2010component analysis and matched filter processing (MF). Running a selective principal\u2010component analysis using the main spectral characteristics of key alteration minerals enhanced the altered areas in PC2. MF using spectral library and laboratory spectra of the study area samples gave similar results. However, MF, using the image reference spectra from principal component (PC) images, produced the best results and indicated the advantage of using image spectra rather than library spectra in spectral mapping techniques. It seems that argillic alteration is more effective than silicic alteration for exploration purposes. It is suggested that alteration mapping can also be used to delineate areas contaminated by potentially toxic metals.", "which Minerals Mapped/ Identified ?", "Argillic alteration", 981.0, 1000.0], ["A Named-Entity Recognition (NER) is part of the process in Text Mining and it is a very useful process for information extraction. This NER tool can be used to assist user in identifying and detecting entities such as person, location or organization. However, different languages may have different morphologies and thus require different NER processes. For instance, an English NER process cannot be applied in processing Malay articles due to the different morphology used in different languages. This paper proposes a Rule-Based Named-Entity Recognition algorithm for Malay articles. The proposed Malay NER is designed based on a Malay part-of-speech (POS) tagging features and contextual features that had been implemented to handle Malay articles. Based on the POS results, proper names will be identified or detected as the possible candidates for annotation. Besides that, there are some symbols and conjunctions that will also be considered in the process of identifying named-entity for Malay articles. Several manually constructed dictionaries will be used to handle three named-entities; Person, Location and Organizations. The experimental results show a reasonable output of 89.47% for the F-Measure value. 
The proposed Malay NER algorithm can be further improved by having more complete dictionaries and refined rules to be used in order to identify the correct Malay entities system.", "which Language/domain ?", "Malay", 424.0, 429.0], ["Named Entity Recognition aims to identify and to classify rigid designators in text such as proper names, biological species, and temporal expressions into some predefined categories. There has been growing interest in this field of research since the early 1990s. Named Entity Recognition has a vital role in different fields of natural language processing such as Machine Translation, Information Extraction, Question Answering System and various other fields. In this paper, Named Entity Recognition for Nepali text, based on the Support Vector Machine (SVM) is presented which is one of machine learning approaches for the classification task. A set of features are extracted from training data set. Accuracy and efficiency of SVM classifier are analyzed in three different sizes of training data set. Recognition systems are tested with ten datasets for Nepali text. The strength of this work is the efficient feature extraction and the comprehensive recognition techniques. The Support Vector Machine based Named Entity Recognition is limited to use a certain set of features and it uses a small dictionary which affects its performance. The learning performance of recognition system is observed. It is found that system can learn well from the small set of training data and increase the rate of learning on the increment of training size.", "which Language/domain ?", "Nepali", 507.0, 513.0], ["This paper addresses the problem of Named Entity Recognition in Query (NERQ), which involves detection of the named entity in a given query and classification of the named entity into predefined classes. NERQ is potentially useful in many applications in web search. The paper proposes taking a probabilistic approach to the task using query log data and Latent Dirichlet Allocation. We consider contexts of a named entity (i.e., the remainders of the named entity in queries) as words of a document, and classes of the named entity as topics. The topic model is constructed by a novel and general learning method referred to as WS-LDA (Weakly Supervised Latent Dirichlet Allocation), which employs weakly supervised learning (rather than unsupervised learning) using partially labeled seed entities. Experimental results show that the proposed method based on WS-LDA can accurately perform NERQ, and outperform the baseline methods.", "which Language/domain ?", "Queries", 468.0, 475.0], ["Entity Recognition (NER) is used to locate and classify atomic elements in text into predetermined classes such as the names of persons, organizations, locations, concepts etc. NER is used in many applications like text summarization, text classification, question answering and machine translation systems etc. For English a lot of work has already done in field of NER, where capitalization is a major clue for rules, whereas Indian Languages do not have such feature. This makes the task difficult for Indian languages. This paper explains the Named Entity Recognition System for Punjabi language text summarization. A Condition based approach has been used for developing NER system for Punjabi language. Various rules have been developed like prefix rule, suffix rule, propername rule, middlename rule and lastname rule. 
For implementing NER, various resources in Punjabi, have been developed like a list of prefix names, a list of suffix names, a list of proper names, middle names and last names. The Precision, Recall and F-Score for condition based NER approach are 89.32%, 83.4% and 86.25% respectively.", "which Language/domain ?", "Punjabi", 583.0, 590.0], ["Named Entity Recognition or Extraction (NER) is an important task for automated text processing for industries and academia engaged in the field of language processing, intelligence gathering and Bioinformatics. In this paper we discuss the general problem of Named Entity Recognition, more specifically the challenges in NER in languages that do not have language resources e.g. large annotated corpora. We specifically address the challenges for Urdu NER and differentiate it from other South Asian (Indic) languages. We discuss the differences between Hindi and Urdu and conclude that the NER computational models for Hindi cannot be applied to Urdu. A rule-based Urdu NER algorithm is presented that outperforms the models that use statistical learning.", "which Language/domain ?", "Urdu", 448.0, 452.0], ["Name identification has been worked on quite intensively for the past few years, and has been incorporated into several products revolving around natural language processing tasks. Many researchers have attacked the name identification problem in a variety of languages, but only a few limited research efforts have focused on named entity recognition for Arabic script. This is due to the lack of resources for Arabic named entities and the limited amount of progress made in Arabic natural language processing in general. In this article, we present the results of our attempt at the recognition and extraction of the 10 most important categories of named entities in Arabic script: the person name, location, company, date, time, price, measurement, phone number, ISBN, and file name. We developed the system Named Entity Recognition for Arabic (NERA) using a rulebased approach. The resources created are: a Whitelist representing a dictionary of names, and a grammar, in the form of regular expressions, which are responsible for recognizing the named entities. A filtration mechanism is used that serves two different purposes: (a) revision of the results from a named entity extractor by using metadata, in terms of a Blacklist or rejecter, about ill-formed named entities and (b) disambiguation of identical or overlapping textual matches returned by different name entity extractors to get the correct choice. In NERA, we addressed major challenges posed by NER in the Arabic language arising due to the complexity of the language, peculiarities in the Arabic orthographic system, nonstandardization of the written text, ambiguity, and lack of resources. NERA has been effectively evaluated using our own tagged corpus; it achieved satisfactory results in terms of precision, recall, and F-measure.", "which Language/domain ?", "Arabic", 356.0, 362.0], ["Named entity recognition has served many natural language processing tasks such as information retrieval, machine translation, and question answering systems. Many researchers have addressed the name identification issue in a variety of languages and recently some research efforts have started to focus on named entity recognition for the Arabic language. 
We present a working Arabic information extraction (IE) system that is used to analyze large volumes of news texts every day to extract the named entity (NE) types person, organization, location, date, and number, as well as quotations (direct reported speech) by and about people. The named entity recognition (NER) system was not developed for Arabic, but instead a multilingual NER system was adapted to also cover Arabic. The Semitic language Arabic substantially differs from the Indo-European and Finno-Ugric languages currently covered. This article thus describes what Arabic language-specific resources had to be developed and what changes needed to be made to the rule set in order to be applicable to the Arabic language. The achieved evaluation results are generally satisfactory, but could be improved for certain entity types.", "which Language/domain ?", "Arabic", 340.0, 346.0], ["Named Entity Recognition (NER) is a task which helps in finding out Persons name, Location names, Brand names, Abbreviations, Date, Time etc and classifies them into predefined different categories. NER plays a major role in various Natural Language Processing (NLP) fields like Information Extraction, Machine Translations and Question Answering. This paper describes the problems of NER in the context of Urdu Language and provides relevant solutions. The system is developed to tag thirteen different Named Entities (NE), twelve NE proposed by IJCNLP-08 and Izaafats. We have used the Rule Based approach and developed the various rules to extract the Named Entities in the given Urdu text.", "which Language/domain ?", "Urdu", 407.0, 411.0], ["The named entity recognition task aims at identifying and classifying named entities within an open-domain text. This task has been garnering significant attention recently as it has been shown to help improve the performance of many natural language processing applications. In this paper, we investigate the impact of using different sets of features in three discriminative machine learning frameworks, namely, support vector machines, maximum entropy and conditional random fields for the task of named entity recognition. Our language of interest is Arabic. We explore lexical, contextual and morphological features and nine data-sets of different genres and annotations. We measure the impact of the different features in isolation and incrementally combine them in order to evaluate the robustness to noise of each approach. We achieve the highest performance using a combination of 15 features in conditional random fields using broadcast news data (Fβ=1 = 83.34).", "which Language/domain ?", "Arabic", 555.0, 561.0], ["abstract:The ability to interact with and properly manage data in a research setting has become increasingly important in all disciplines. Libraries are attempting to identify their role in providing data management services. However, humanities faculty's conceptions of data and their data management practices are not well-known. This qualitative study explores the data management practices of humanities faculty at a four-year university and examines their perceptions of the term data.", "which has methodology ?", "Qualitative", 337.0, 348.0], ["In the industrial sector there are many processes where the visual inspection is essential, the automation of that processes becomes a necessity to guarantee the quality of several objects. In this paper we propose a methodology for textile quality inspection based on the texture cue of an image.
To solve this, we use a Neuro-Symbolic Hybrid System (NSHS) that allow us to combine an artificial neural network and the symbolic representation of the expert knowledge. The artificial neural network uses the CasCor learning algorithm and we use production rules to represent the symbolic knowledge. The features used for inspection has the advantage of being tolerant to rotation and scale changes. We compare the results with those obtained from an automatic computer vision task, and we conclude that results obtained using the proposed methodology are better.", "which has methodology ?", "Neuro-Symbolic Hybrid System", NaN, NaN], ["The objective of this study was to optimise raw tooke flour (RTF), vital gluten (VG) and water absorption (WA) with respect to bread-making quality and cost effectiveness of RTF/wheat composite flour. The hypothesis generated for this study was that optimal substitution of RTF and VG into wheat has no significant effect on baking quality of the resultant composite flour. A basic white wheat bread recipe was adopted and response surface methodology (RSM) procedures applied. A D-optimal design was employed with the following variables: RTF (x1) 0-33%, WA (x2) -2FWA to +2FWA and VG (x3) 0 - 3%. Seven responses were modelled. Baking worth number, volume yield and cost were simultaneously optimized using desirability function approach. Models developed adequately described the relationships and were confirmed by validation studies. RTF showed the greatest effect on all models, which effect impaired baking performance of composite flour. VG and Farinograph water absorption (FWA) as well as their interaction improved bread quality. Vitality of VG was enhanced by RTF. The optimal formulation for maximum baking quality was 0.56%(x1), 0.33%(x2) and -1.24(x3) while a formulation of 22%(x1), 3%(x2) and +1.13(x3) maximized RTF incorporation in the respective and composite bread quality at lowest cost. Thus, the set hypothesis was not rejected. Key words: Raw tooke flour, composite bread, baking quality, response surface methodology, Farinograph water absorption, vital gluten.", "which has methodology ?", "response surface methodology", 423.0, 451.0], ["The Corpus of Historical American English (COHA) contains 400 million words in more than 100,000 texts which date from the 1810s to the 2000s. The corpus contains texts from fiction, popular magazines, newspapers and non-fiction books, and is balanced by genre from decade to decade. It has been carefully lemmatised and tagged for part-of-speech, and uses the same architecture as the Corpus of Contemporary American English (COCA), BYU-BNC, the TIME Corpus and other corpora. COHA allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English.", "which Has preprocessing steps ?", "lemmatised", 306.0, 316.0], ["The Corpus of Historical American English (COHA) contains 400 million words in more than 100,000 texts which date from the 1810s to the 2000s. The corpus contains texts from fiction, popular magazines, newspapers and non-fiction books, and is balanced by genre from decade to decade. It has been carefully lemmatised and tagged for part-of-speech, and uses the same architecture as the Corpus of Contemporary American English (COCA), BYU-BNC, the TIME Corpus and other corpora.
COHA allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English.", "which Has preprocessing steps ?", "tagged for part-of-speech", 321.0, 346.0], ["One of the biomedical entity types of relevance for medicine or biosciences are chemical compounds and drugs. The correct detection these entities is critical for other text mining applications building on them, such as adverse drug-reaction detection, medication-related fake news or drug-target extraction. Although a significant effort was made to detect mentions of drugs/chemicals in English texts, so far only very limited attempts were made to recognize them in medical documents in other languages. Taking into account the growing amount of medical publications and clinical records written in Spanish, we have organized the first shared task on detecting drug and chemical entities in Spanish medical documents. Additionally, we included a clinical concept-indexing sub-track asking teams to return SNOMED-CT identifiers related to drugs/chemicals for a collection of documents. For this task, named PharmaCoNER, we generated annotation guidelines together with a corpus of 1,000 manually annotated clinical case studies. A total of 22 teams participated in the sub-track 1, (77 system runs), and 7 teams in the sub-track 2 (19 system runs). Top scoring teams used sophisticated deep learning approaches yielding very competitive results with F-measures above 0.91. These results indicate that there is a real interest in promoting biomedical text mining efforts beyond English. We foresee that the PharmaCoNER annotation guidelines, corpus and participant systems will foster the development of new resources for clinical and biomedical text mining systems of Spanish medical data.", "which Data domains ?", "Clinical", 574.0, 582.0], ["A crucial step toward the goal of automatic extraction of propositional information from natural language text is the identification of semantic relations between constituents in sentences. We examine the problem of distinguishing among seven relation types that can occur between the entities \"treatment\" and \"disease\" in bioscience text, and the problem of identifying such entities. We compare five generative graphical models and a neural network, using lexical, syntactic, and semantic features, finding that the latter help achieve high classification accuracy.", "which Data domains ?", "Bioscience", 323.0, 333.0], ["One of the biomedical entity types of relevance for medicine or biosciences are chemical compounds and drugs. The correct detection these entities is critical for other text mining applications building on them, such as adverse drug-reaction detection, medication-related fake news or drug-target extraction. Although a significant effort was made to detect mentions of drugs/chemicals in English texts, so far only very limited attempts were made to recognize them in medical documents in other languages. Taking into account the growing amount of medical publications and clinical records written in Spanish, we have organized the first shared task on detecting drug and chemical entities in Spanish medical documents. Additionally, we included a clinical concept-indexing sub-track asking teams to return SNOMED-CT identifiers related to drugs/chemicals for a collection of documents. 
For this task, named PharmaCoNER, we generated annotation guidelines together with a corpus of 1,000 manually annotated clinical case studies. A total of 22 teams participated in the sub-track 1, (77 system runs), and 7 teams in the sub-track 2 (19 system runs). Top scoring teams used sophisticated deep learning approaches yielding very competitive results with F-measures above 0.91. These results indicate that there is a real interest in promoting biomedical text mining efforts beyond English. We foresee that the PharmaCoNER annotation guidelines, corpus and participant systems will foster the development of new resources for clinical and biomedical text mining systems of Spanish medical data.", "which Data domains ?", "Medicine", 52.0, 60.0], ["We introduce the STEM (Science, Technology, Engineering, and Medicine) Dataset for Scientific Entity Extraction, Classification, and Resolution, version 1.0 (STEM-ECR v1.0). The STEM-ECR v1.0 dataset has been developed to provide a benchmark for the evaluation of scientific entity extraction, classification, and resolution tasks in a domain-independent fashion. It comprises abstracts in 10 STEM disciplines that were found to be the most prolific ones on a major publishing platform. We describe the creation of such a multidisciplinary corpus and highlight the obtained findings in terms of the following features: 1) a generic conceptual formalism for scientific entities in a multidisciplinary scientific context; 2) the feasibility of the domain-independent human annotation of scientific entities under such a generic formalism; 3) a performance benchmark obtainable for automatic extraction of multidisciplinary scientific entities using BERT-based neural models; 4) a delineated 3-step entity resolution procedure for human annotation of the scientific entities via encyclopedic entity linking and lexicographic word sense disambiguation; and 5) human evaluations of Babelfy returned encyclopedic links and lexicographic senses for our entities. Our findings cumulatively indicate that human annotation and automatic learning of multidisciplinary scientific concepts as well as their semantic disambiguation in a wide-ranging setting as STEM is reasonable.", "which Data domains ?", "Engineering", 44.0, 55.0], ["Software contributions to academic research are relatively invisible, especially to the formalized scholarly reputation system based on bibliometrics. In this article, we introduce a gold\u2010standard dataset of software mentions from the manual annotation of 4,971 academic PDFs in biomedicine and economics. The dataset is intended to be used for automatic extraction of software mentions from PDF format research publications by supervised learning at scale. We provide a description of the dataset and an extended discussion of its creation process, including improved text conversion of academic PDFs. Finally, we reflect on our challenges and lessons learned during the dataset creation, in hope of encouraging more discussion about creating datasets for machine learning use.", "which Data domains ?", "Biomedicine", 279.0, 290.0], ["This paper introduces the ACL Reference Dataset for Terminology Extraction and Classification, version 2.0 (ACL RD-TEC 2.0). The ACL RD-TEC 2.0 has been developed with the aim of providing a benchmark for the evaluation of term and entity recognition tasks based on specialised text from the computational linguistics domain. 
This release of the corpus consists of 300 abstracts from articles in the ACL Anthology Reference Corpus, published between 1978\u20132006. In these abstracts, terms (i.e., single or multi-word lexical units with a specialised meaning) are manually annotated. In addition to their boundaries in running text, annotated terms are classified into one of the seven categories method, tool, language resource (LR), LR product, model, measures and measurements, and other. To assess the quality of the annotations and to determine the difficulty of this annotation task, more than 171 of the abstracts are annotated twice, independently, by each of the two annotators. In total, 6,818 terms are identified and annotated in more than 1300 sentences, resulting in a specialised vocabulary made of 3,318 lexical forms, mapped to 3,471 concepts. We explain the development of the annotation guidelines and discuss some of the challenges we encountered in this annotation task.", "which Data domains ?", "Computational Linguistics", 292.0, 317.0], ["Software contributions to academic research are relatively invisible, especially to the formalized scholarly reputation system based on bibliometrics. In this article, we introduce a gold\u2010standard dataset of software mentions from the manual annotation of 4,971 academic PDFs in biomedicine and economics. The dataset is intended to be used for automatic extraction of software mentions from PDF format research publications by supervised learning at scale. We provide a description of the dataset and an extended discussion of its creation process, including improved text conversion of academic PDFs. Finally, we reflect on our challenges and lessons learned during the dataset creation, in hope of encouraging more discussion about creating datasets for machine learning use.", "which Data domains ?", "Economics", 295.0, 304.0], ["The Critical Assessment of Information Extraction systems in Biology (BioCreAtIvE) challenge evaluation tasks collectively represent a community-wide effort to evaluate a variety of text-mining and information extraction systems applied to the biological domain. The BioCreative IV Workshop included five independent subject areas, including Track 3, which focused on named-entity recognition (NER) for the Comparative Toxicogenomics Database (CTD; http://ctdbase.org). Previously, CTD had organized document ranking and NER-related tasks for the BioCreative Workshop 2012; a key finding of that effort was that interoperability and integration complexity were major impediments to the direct application of the systems to CTD's text-mining pipeline. This underscored a prevailing problem with software integration efforts. Major interoperability-related issues included lack of process modularity, operating system incompatibility, tool configuration complexity and lack of standardization of high-level inter-process communications. One approach to potentially mitigate interoperability and general integration issues is the use of Web services to abstract implementation details; rather than integrating NER tools directly, HTTP-based calls from CTD's asynchronous, batch-oriented text-mining pipeline could be made to remote NER Web services for recognition of specific biological terms using BioC (an emerging family of XML formats) for inter-process communications. To test this concept, participating groups developed Representational State Transfer /BioC-compliant Web services tailored to CTD's NER requirements. 
Participants were provided with a comprehensive set of training materials. CTD evaluated results obtained from the remote Web service-based URLs against a test data set of 510 manually curated scientific articles. Twelve groups participated in the challenge. Recall, precision, balanced F-scores and response times were calculated. Top balanced F-scores for gene, chemical and disease NER were 61, 74 and 51%, respectively. Response times ranged from fractions-of-a-second to over a minute per article. We present a description of the challenge and summary of results, demonstrating how curation groups can effectively use interoperable NER technologies to simplify text-mining pipeline implementation. Database URL: http://ctdbase.org/", "which Data domains ?", "Toxicogenomics", 419.0, 433.0], ["We present a system for named entity recognition (NER) in astronomy journal articles. We have developed this system on a NE corpus comprising approximately 200,000 words of text from astronomy articles. These have been manually annotated with ∼40 entity types of interest to astronomers. We report on the challenges involved in extracting the corpus, defining entity classes and annotating scientific text. We investigate which features of an existing state-of-the-art Maximum Entropy approach perform well on astronomy text. Our system achieves an F-score of 87.8%.", "which Data domains ?", "Astronomy journal articles", 58.0, 84.0], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Data domains ?", "tissue expression", NaN, NaN], ["We introduce the STEM (Science, Technology, Engineering, and Medicine) Dataset for Scientific Entity Extraction, Classification, and Resolution, version 1.0 (STEM-ECR v1.0). The STEM-ECR v1.0 dataset has been developed to provide a benchmark for the evaluation of scientific entity extraction, classification, and resolution tasks in a domain-independent fashion. It comprises abstracts in 10 STEM disciplines that were found to be the most prolific ones on a major publishing platform.
We describe the creation of such a multidisciplinary corpus and highlight the obtained findings in terms of the following features: 1) a generic conceptual formalism for scientific entities in a multidisciplinary scientific context; 2) the feasibility of the domain-independent human annotation of scientific entities under such a generic formalism; 3) a performance benchmark obtainable for automatic extraction of multidisciplinary scientific entities using BERT-based neural models; 4) a delineated 3-step entity resolution procedure for human annotation of the scientific entities via encyclopedic entity linking and lexicographic word sense disambiguation; and 5) human evaluations of Babelfy returned encyclopedic links and lexicographic senses for our entities. Our findings cumulatively indicate that human annotation and automatic learning of multidisciplinary scientific concepts as well as their semantic disambiguation in a wide-ranging setting as STEM is reasonable.", "which Data domains ?", "Medicine", 61.0, 69.0], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Data domains ?", "protein-protein interactions", 516.0, 544.0], ["Abstract Background Molecular Biology accumulated substantial amounts of data concerning functions of genes and proteins. Information relating to functional descriptions is generally extracted manually from textual data and stored in biological databases to build up annotations for large collections of gene products. Those annotation databases are crucial for the interpretation of large scale analysis approaches using bioinformatics or experimental techniques. Due to the growing accumulation of functional descriptions in biomedical literature the need for text mining tools to facilitate the extraction of such annotations is urgent. In order to make text mining tools useable in real world scenarios, for instance to assist database curators during annotation of protein function, comparisons and evaluations of different approaches on full text articles are needed. Results The Critical Assessment for Information Extraction in Biology (BioCreAtIvE) contest consists of a community wide competition aiming to evaluate different strategies for text mining tools, as applied to biomedical literature. We report on task two which addressed the automatic extraction and assignment of Gene Ontology (GO) annotations of human proteins, using full text articles. The predictions of task 2 are based on triplets of protein – GO term – article passage. The annotation-relevant text passages were returned by the participants and evaluated by expert curators of the GO annotation (GOA) team at the European Institute of Bioinformatics (EBI).
Each participant could submit up to three results for each sub-task comprising task 2. In total more than 15,000 individual results were provided by the participants. The curators evaluated in addition to the annotation itself, whether the protein and the GO term were correctly predicted and traceable through the submitted text fragment. Conclusion Concepts provided by GO are currently the most extended set of terms used for annotating gene products, thus they were explored to assess how effectively text mining tools are able to extract those annotations automatically. Although the obtained results are promising, they are still far from reaching the required performance demanded by real world applications. Among the principal difficulties encountered to address the proposed task, were the complex nature of the GO terms and protein names (the large range of variants which are used to express proteins and especially GO terms in free text), and the lack of a standard training set. A range of very different strategies were used to tackle this task. The dataset generated in line with the BioCreative challenge is publicly available and will allow new possibilities for training information extraction methods in the domain of molecular biology.", "which Data domains ?", "Molecular Biology", 20.0, 37.0], ["This paper presents the fourth edition of the Bacteria Biotope task at BioNLP Open Shared Tasks 2019. The task focuses on the extraction of the locations and phenotypes of microorganisms from PubMed abstracts and full-text excerpts, and the characterization of these entities with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on biodiversity for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, and the challenge organization. We also provide an analysis of the results obtained by participants, and inspect the evolution of the results since the last edition in 2016.", "which Data domains ?", "Microbiology", 477.0, 489.0], ["A key challenge for manufacturers today is efficiently producing and delivering products on time. Issues include demand for customized products, changes in orders, and equipment status change, complicating the decision-making process. A real-time digital representation of the manufacturing operation would help address these challenges. Recent technology advancements of smart sensors, IoT, and cloud computing make it possible to realize a \"digital twin\" of a manufacturing system or process. Digital twins or surrogates are data-driven virtual representations that replicate, connect, and synchronize the operation of a manufacturing system or process. They utilize dynamically collected data to track system behaviors, analyze performance, and help make decisions without interrupting production. In this paper, we define digital surrogate, explore their relationships to simulation, digital thread, artificial intelligence, and IoT. We identify the technology and standard requirements and challenges for implementing digital surrogates. A production planning case is used to exemplify the digital surrogate concept.", "which has Temporal Integration ?", "Real-time", 237.0, 246.0], ["Aqueous acidic ozone (O3)-containing solutions are increasingly used for silicon treatment in photovoltaic and semiconductor industries. 
We studied the behavior of aqueous hydrofluoric acid (HF)-containing solutions (i.e., HF–O3, HF–H2SO4–O3, and HF–HCl–O3 mixtures) toward boron-doped solar-grade (100) silicon wafers. The solubility of O3 and etching rates at 20 °C were investigated. The mixtures were analyzed for the potential oxidizing species by UV–vis and Raman spectroscopy. Concentrations of O3 (aq), O3 (g), and Cl2 (aq) were determined by titrimetric volumetric analysis. F–, Cl–, and SO42– ion contents were determined by ion chromatography. Model experiments were performed to investigate the oxidation of H-terminated silicon surfaces by H2O–O2, H2O–O3, H2O–H2SO4–O3, and H2O–HCl–O3 mixtures. The oxidation was monitored by diffuse reflection infrared Fourier transformation (DRIFT) spectroscopy. The resulting surfaces were examined by scanning electron microscopy (SEM) and X-ray photoelectron spectrosc...", "which Type of etching mixture ?", "acidic", 8.0, 14.0], ["Wet bulk micromachining is a popular technique for the fabrication of microstructures in research labs as well as in industry. However, increasing the throughput still remains an active area of research, and can be done by increasing the etching rate. Moreover, the release time of a freestanding structure can be reduced if the undercutting rate at convex corners can be improved. In this paper, we investigate a non-conventional etchant in the form of NH2OH added in 5 wt% tetramethylammonium hydroxide (TMAH) to determine its etching characteristics. Our analysis is focused on a Si{1 0 0} wafer as this is the most widely used in the fabrication of planar devices (e.g. complementary metal oxide semiconductors) and microelectromechanical systems (e.g. inertial sensors). We perform a systematic and parametric analysis with concentrations of NH2OH varying from 5% to 20% in step of 5%, all in 5 wt% TMAH, to obtain the optimum concentration for achieving improved etching characteristics including higher etch rate, undercutting at convex corners, and smooth etched surface morphology. Average surface roughness (Ra), etch depth, and undercutting length are measured using a 3D scanning laser microscope. Surface morphology of the etched Si{1 0 0} surface is examined using a scanning electron microscope. Our investigation has revealed a two-fold increment in the etch rate of a {1 0 0} surface with the addition of NH2OH in the TMAH solution. Additionally, the incorporation of NH2OH significantly improves the etched surface morphology and the undercutting at convex corners, which is highly desirable for the quick release of microstructures from the substrate. The results presented in this paper are extremely useful for engineering applications and will open a new direction of research for scientists in both academic and industrial laboratories.", "which Type of etching ?", "wet", 0.0, 3.0], ["Dissolving microneedles (MNs) display high efficiency in delivering poorly permeable drugs and vaccines. Here, two-layer dissolving polymeric MN patches composed of gelatin and sodium carboxymethyl cellulose (CMC) were fabricated with a two-step casting and centrifuging process to localize the insulin in the needle and achieve efficient transdermal delivery of insulin. In vitro skin insertion capability was determined by staining with tissue-marking dye after insertion, and the real-time penetration depth was monitored using optical coherence tomography.
Confocal microscopy images revealed that the rhodamine 6G and fluorescein isothiocyanate-labeled insulin (insulin-FITC) can gradually diffuse from the puncture sites to deeper tissue. Ex vivo drug-release profiles showed that 50% of the insulin was released and penetrated across the skin after 1 h, and the cumulative permeation reached 80% after 5 h. In vivo and pharmacodynamic studies were then conducted to estimate the feasibility of the administration of insulin-loaded dissolving MN patches on diabetic mice for glucose regulation. The total area above the glucose level versus time curve as an index of hypoglycemic effect was 128.4 \u00b1 28.3 (% h) at 0.25 IU/kg. The relative pharmacologic availability and relative bioavailability (RBA) of insulin from MN patches were 95.6 and 85.7%, respectively. This study verified that the use of gelatin/CMC MN patches for insulin delivery achieved a satisfactory RBA compared to traditional hypodermic injection and presented a promising device to deliver poorly permeable protein drugs for diabetic therapy. \u00a9 2016 Wiley Periodicals, Inc. J Biomed Mater Res Part A: 105A: 84-93, 2017.", "which Microneedles structure ?", "microneedle", NaN, NaN], ["Controlled synthesis of a hybrid nanomaterial based on titanium oxide and single-layer graphene (SLG) using atomic layer deposition (ALD) is reported here. The morphology and crystallinity of the oxide layer on SLG can be tuned mainly with the deposition temperature, achieving either a uniform amorphous layer at 60 \u00b0C or \u223c2 nm individual nanocrystals on the SLG at 200 \u00b0C after only 20 ALD cycles. A continuous and uniform amorphous layer formed on the SLG after 180 cycles at 60 \u00b0C can be converted to a polycrystalline layer containing domains of anatase TiO2 after a postdeposition annealing at 400 \u00b0C under vacuum. Using aberration-corrected transmission electron microscopy (AC-TEM), characterization of the structure and chemistry was performed on an atomic scale and provided insight into understanding the nucleation and growth. AC-TEM imaging and electron energy loss spectroscopy revealed that rocksalt TiO nanocrystals were occasionally formed at the early stage of nucleation after only 20 ALD cycles. Understanding and controlling nucleation and growth of the hybrid nanomaterial are crucial to achieving novel properties and enhanced performance for a wide range of applications that exploit the synergetic functionalities of the ensemble.", "which Material structure ?", "Amorphous", 295.0, 304.0], ["Abstract Background The biological research literature is a major repository of knowledge. As the amount of literature increases, it will get harder to find the information of interest on a particular topic. There has been an increasing amount of work on text mining this literature, but comparing this work is hard because of a lack of standards for making comparisons. To address this, we worked with colleagues at the Protein Design Group, CNB-CSIC, Madrid to develop BioCreAtIvE (Critical Assessment for Information Extraction in Biology), an open common evaluation of systems on a number of biological text mining tasks. We report here on task 1A, which deals with finding mentions of genes and related entities in text. \"Finding mentions\" is a basic task, which can be used as a building block for other text mining tasks. The task makes use of data and evaluation software provided by the (US) National Center for Biotechnology Information (NCBI). Results 15 teams took part in task 1A. 
A number of teams achieved scores over 80% F-measure (balanced precision and recall). The teams that tried to use their task 1A systems to help on other BioCreAtIvE tasks reported mixed results. Conclusion The 80% plus F-measure results are good, but still somewhat lag the best scores achieved in some other domains such as newswire, due in part to the complexity and length of gene names, compared to person or organization names in newswire.", "which Coarse-grained Entity type ?", "Protein", 421.0, 428.0], ["Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task—a result that approaches the human inter-annotator agreement (0.8875)—and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system's ability to return real-time results: the average response time for each team's DNER and CID web service systems were 5.6 and 9.3 s, respectively. Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/", "which Coarse-grained Entity type ?", "Chemical", 303.0, 311.0], ["The Critical Assessment of Information Extraction systems in Biology (BioCreAtIvE) challenge evaluation tasks collectively represent a community-wide effort to evaluate a variety of text-mining and information extraction systems applied to the biological domain. The BioCreative IV Workshop included five independent subject areas, including Track 3, which focused on named-entity recognition (NER) for the Comparative Toxicogenomics Database (CTD; http://ctdbase.org). Previously, CTD had organized document ranking and NER-related tasks for the BioCreative Workshop 2012; a key finding of that effort was that interoperability and integration complexity were major impediments to the direct application of the systems to CTD's text-mining pipeline.
Major interoperability-related issues included lack of process modularity, operating system incompatibility, tool configuration complexity and lack of standardization of high-level inter-process communications. One approach to potentially mitigate interoperability and general integration issues is the use of Web services to abstract implementation details; rather than integrating NER tools directly, HTTP-based calls from CTD's asynchronous, batch-oriented text-mining pipeline could be made to remote NER Web services for recognition of specific biological terms using BioC (an emerging family of XML formats) for inter-process communications. To test this concept, participating groups developed Representational State Transfer /BioC-compliant Web services tailored to CTD's NER requirements. Participants were provided with a comprehensive set of training materials. CTD evaluated results obtained from the remote Web service-based URLs against a test data set of 510 manually curated scientific articles. Twelve groups participated in the challenge. Recall, precision, balanced F-scores and response times were calculated. Top balanced F-scores for gene, chemical and disease NER were 61, 74 and 51%, respectively. Response times ranged from fractions-of-a-second to over a minute per article. We present a description of the challenge and summary of results, demonstrating how curation groups can effectively use interoperable NER technologies to simplify text-mining pipeline implementation. Database URL: http://ctdbase.org/", "which Coarse-grained Entity type ?", "Chemical", 1986.0, 1994.0], ["Abstract Background The biological research literature is a major repository of knowledge. As the amount of literature increases, it will get harder to find the information of interest on a particular topic. There has been an increasing amount of work on text mining this literature, but comparing this work is hard because of a lack of standards for making comparisons. To address this, we worked with colleagues at the Protein Design Group, CNB-CSIC, Madrid to develop BioCreAtIvE (Critical Assessment for Information Extraction in Biology), an open common evaluation of systems on a number of biological text mining tasks. We report here on task 1A, which deals with finding mentions of genes and related entities in text. \"Finding mentions\" is a basic task, which can be used as a building block for other text mining tasks. The task makes use of data and evaluation software provided by the (US) National Center for Biotechnology Information (NCBI). Results 15 teams took part in task 1A. A number of teams achieved scores over 80% F-measure (balanced precision and recall). The teams that tried to use their task 1A systems to help on other BioCreAtIvE tasks reported mixed results. Conclusion The 80% plus F-measure results are good, but still somewhat lag the best scores achieved in some other domains such as newswire, due in part to the complexity and length of gene names, compared to person or organization names in newswire.", "which Coarse-grained Entity type ?", "Gene", 1373.0, 1377.0], ["There is an increasing need to facilitate automated access to information relevant for chemical compounds and drugs described in text, including scientific articles, patents or health agency reports. A number of recent efforts have implemented natural language processing (NLP) and text mining technologies for the chemical domain (ChemNLP or chemical text mining). 
Due to the lack of manually labeled Gold Standard datasets together with comprehensive annotation guidelines, both the implementation as well as the comparative assessment of ChemNLP technologies are opaque. Two key components for most chemical text mining technologies are the indexing of documents with chemicals (chemical document indexing, CDI) and finding the mentions of chemicals in text (chemical entity mention recognition, CEM). These two tasks formed part of the chemical compound and drug named entity recognition (CHEMDNER) task introduced at the fourth BioCreative challenge, a community effort to evaluate biomedical text mining applications. For this task, the CHEMDNER text corpus was constructed, consisting of 10,000 abstracts containing a total of 84,355 mentions of chemical compounds and drugs that have been manually labeled by domain experts following specific annotation guidelines. This corpus covers representative abstracts from major chemistry-related sub-disciplines such as medicinal chemistry, biochemistry, organic chemistry and toxicology. A total of 27 teams – 23 academic and 4 commercial groups, comprised of 87 researchers – submitted results for this task. Of these teams, 26 provided submissions for the CEM subtask and 23 for the CDI subtask. Teams were provided with the manual annotations of 7,000 abstracts to implement and train their systems and then had to return predictions for the 3,000 test set abstracts during a short period of time. When comparing exact matches of the automated results against the manually labeled Gold Standard annotations, the best teams reached an F-score of 87.39% for the CEM task and of 88.20% for the CDI task. This can be regarded as a very competitive result when compared to the expected upper boundary, the agreement between two human annotators, at 91%. In general, the technologies used to detect chemicals and drugs by the teams included machine learning methods (particularly CRFs using a considerable range of different features), interaction of chemistry-related lexical resources and manual rules (e.g., to cover abbreviations, chemical formula or chemical identifiers). By promoting the availability of the software of the participating systems as well as through the release of the CHEMDNER corpus to enable implementation of new tools, this work fosters the development of text mining applications like the automatic extraction of biochemical reactions, toxicological properties of compounds, or the detection of associations between genes or mutations and drugs in the context of pharmacogenomics.", "which Coarse-grained Entity type ?", "Chemical", 87.0, 95.0], ["The Critical Assessment of Information Extraction systems in Biology (BioCreAtIvE) challenge evaluation tasks collectively represent a community-wide effort to evaluate a variety of text-mining and information extraction systems applied to the biological domain. The BioCreative IV Workshop included five independent subject areas, including Track 3, which focused on named-entity recognition (NER) for the Comparative Toxicogenomics Database (CTD; http://ctdbase.org). Previously, CTD had organized document ranking and NER-related tasks for the BioCreative Workshop 2012; a key finding of that effort was that interoperability and integration complexity were major impediments to the direct application of the systems to CTD's text-mining pipeline.
This underscored a prevailing problem with software integration efforts. Major interoperability-related issues included lack of process modularity, operating system incompatibility, tool configuration complexity and lack of standardization of high-level inter-process communications. One approach to potentially mitigate interoperability and general integration issues is the use of Web services to abstract implementation details; rather than integrating NER tools directly, HTTP-based calls from CTD's asynchronous, batch-oriented text-mining pipeline could be made to remote NER Web services for recognition of specific biological terms using BioC (an emerging family of XML formats) for inter-process communications. To test this concept, participating groups developed Representational State Transfer /BioC-compliant Web services tailored to CTD's NER requirements. Participants were provided with a comprehensive set of training materials. CTD evaluated results obtained from the remote Web service-based URLs against a test data set of 510 manually curated scientific articles. Twelve groups participated in the challenge. Recall, precision, balanced F-scores and response times were calculated. Top balanced F-scores for gene, chemical and disease NER were 61, 74 and 51%, respectively. Response times ranged from fractions-of-a-second to over a minute per article. We present a description of the challenge and summary of results, demonstrating how curation groups can effectively use interoperable NER technologies to simplify text-mining pipeline implementation. Database URL: http://ctdbase.org/", "which Coarse-grained Entity type ?", "Gene", 1980.0, 1984.0], ["Gene function curation via Gene Ontology (GO) annotation is a common task among Model Organism Database groups. Owing to its manual nature, this task is considered one of the bottlenecks in literature curation. There have been many previous attempts at automatic identification of GO terms and supporting information from full text. However, few systems have delivered an accuracy that is comparable with humans. One recognized challenge in developing such systems is the lack of marked sentence-level evidence text that provides the basis for making GO annotations. We aim to create a corpus that includes the GO evidence text along with the three core elements of GO annotations: (i) a gene or gene product, (ii) a GO term and (iii) a GO evidence code. To ensure our results are consistent with real-life GO data, we recruited eight professional GO curators and asked them to follow their routine GO annotation protocols. Our annotators marked up more than 5000 text passages in 200 articles for 1356 distinct GO terms. For evidence sentence selection, the inter-annotator agreement (IAA) results are 9.3% (strict) and 42.7% (relaxed) in F1-measures. For GO term selection, the IAAs are 47% (strict) and 62.9% (hierarchical). Our corpus analysis further shows that abstracts contain \u223c10% of relevant evidence sentences and 30% distinct GO terms, while the Results/Experiment section has nearly 60% relevant sentences and >70% GO terms. Further, of those evidence sentences found in abstracts, less than one-third contain enough experimental detail to fulfill the three core criteria of a GO annotation. This result demonstrates the need of using full-text articles for text mining GO annotations. Through its use at the BioCreative IV GO (BC4GO) task, we expect our corpus to become a valuable resource for the BioNLP research community. 
Database URL: http://www.biocreative.org/resources/corpora/bc-iv-go-task-corpus/.", "which Coarse-grained Entity type ?", "Gene", 0.0, 4.0], ["Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task\u2014a result that approaches the human inter-annotator agreement (0.8875)\u2014and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system\u2019s ability to return real-time results: the average response times for each team\u2019s DNER and CID web service systems were 5.6 and 9.3 s, respectively. Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/", "which Coarse-grained Entity type ?", "Disease", 312.0, 319.0], ["The BioCreative NLM-Chem track calls for a community effort to fine-tune automated recognition of chemical names in biomedical literature. Chemical names are one of the most searched biomedical entities in PubMed and \u2013 as highlighted during the COVID-19 pandemic \u2013 their identification may significantly advance research in multiple biomedical subfields. While previous community challenges focused on identifying chemical names mentioned in titles and abstracts, the full text contains valuable additional detail. We organized the BioCreative NLM-Chem track to call for a community effort to address automated chemical entity recognition in full-text articles. The track consisted of two tasks: 1) Chemical Identification task, and 2) Chemical Indexing prediction task. For the Chemical Identification task, participants were expected to predict with high accuracy all chemicals mentioned in recently published full-text articles, both span (i.e., named entity recognition) and normalization (i.e., entity linking) using MeSH. 
For the Chemical Indexing task, participants identified which chemicals should be indexed as topics in the NLM article indexing, i.e., appear in the listing of MeSH terms for the document. This manuscript summarizes the BioCreative NLM-Chem track. We received a total of 88 submissions from 17 teams worldwide. The highest performance achieved for the Chemical Identification task was 0.8672 f-score (0.8759 precision, 0.8587 recall) for strict NER performance and 0.8136 f-score (0.8621 precision, 0.7702 recall) for strict normalization performance. The highest performance achieved for the Chemical Indexing task was 0.4825 f-score (0.4397 precision, 0.5344 recall). The NLM-Chem track dataset and other challenge materials are publicly available at https://ftp.ncbi.nlm.nih.gov/pub/lu/BC7-NLM-Chem-track/. This community challenge demonstrated that 1) the current substantial achievements in deep learning technologies can be utilized to further improve automated prediction accuracy, and 2) the Chemical Indexing task is substantially more challenging. We look forward to further development of biomedical text mining methods to respond to the rapid growth of biomedical literature. Keywords\u2014 biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; text mining; chemical entity recognition; chemical indexing", "which Coarse-grained Entity types ?", "Chemical", 98.0, 106.0], ["Statistical named entity recognisers require costly hand-labelled training data and, as a result, most existing corpora are small. We exploit Wikipedia to create a massive corpus of named entity annotated text. We transform Wikipedia\u2019s links into named entity annotations by classifying the target articles into common entity types (e.g. person, organisation and location). Compared to MUC, CoNLL and BBN corpora, Wikipedia generally performs better than other cross-corpus train/test pairs.", "which Coarse-grained Entity types ?", "Location", 363.0, 371.0], ["Abstract Background Our goal in BioCreAtIvE has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the \"Normalized Gene List\" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. 
These results hold out promise that the technology can provide partial automation of the curation process in the near future.", "which Coarse-grained Entity types ?", "Mouse", 583.0, 588.0], ["Statistical named entity recognisers require costly hand-labelled training data and, as a result, most existing corpora are small. We exploit Wikipedia to create a massive corpus of named entity annotated text. We transform Wikipedia\u2019s links into named entity annotations by classifying the target articles into common entity types (e.g. person, organisation and location). Compared to MUC, CoNLL and BBN corpora, Wikipedia generally performs better than other cross-corpus train/test pairs.", "which Coarse-grained Entity types ?", "Person", 338.0, 344.0], ["Abstract Background Our goal in BioCreAtIvE has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the \"Normalized Gene List\" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.", "which Coarse-grained Entity types ?", "Yeast", 567.0, 572.0], ["Abstract Background Our goal in BioCreAtIvE has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the \"Normalized Gene List\" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. 
The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.", "which Coarse-grained Entity types ?", "Fly", 574.0, 577.0], ["Research on specialized biological systems is often hampered by a lack of consistent terminology, especially across species. In bacterial Type IV secretion systems genes within one set of orthologs may have over a dozen different names. Classifying research publications based on biological processes, cellular components, molecular functions, and microorganism species should improve the precision and recall of literature searches allowing researchers to keep up with the exponentially growing literature, through resources such as the Pathosystems Resource Integration Center (PATRIC, patricbrc.org). We developed named entity recognition (NER) tools for four entities related to Type IV secretion systems: 1) bacteria names, 2) biological processes, 3) molecular functions, and 4) cellular components. These four entities are important to pathogenesis and virulence research but have received less attention than other entities, e.g., genes and proteins. Based on an annotated corpus, large domain terminological resources, and machine learning techniques, we developed recognizers for these entities. High accuracy rates (>80%) are achieved for bacteria, biological processes, and molecular function. Contrastive experiments highlighted the effectiveness of alternate recognition strategies; results of term extraction on contrasting document sets demonstrated the utility of these classes for identifying T4SS-related documents.", "which Entity types ?", "Bacteria names", 713.0, 727.0], ["This paper presents the Bacteria Biotope task of the BioNLP Shared Task 2016, which follows the previous 2013 and 2011 editions. The task focuses on the extraction of the locations (biotopes and geographical places) of bacteria from PubMed abstracts and the characterization of bacteria and their associated habitats with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on bacteria habitats for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, the challenge organization, and the evaluation metrics. We also provide an analysis of the results obtained by participants.", "which Entity types ?", "Bacteria", 24.0, 32.0], ["One of the biomedical entity types of relevance for medicine or biosciences is chemical compounds and drugs. The correct detection of these entities is critical for other text mining applications building on them, such as adverse drug-reaction detection, medication-related fake news or drug-target extraction. Although a significant effort was made to detect mentions of drugs/chemicals in English texts, so far only very limited attempts were made to recognize them in medical documents in other languages. Taking into account the growing amount of medical publications and clinical records written in Spanish, we have organized the first shared task on detecting drug and chemical entities in Spanish medical documents. Additionally, we included a clinical concept-indexing sub-track asking teams to return SNOMED-CT identifiers related to drugs/chemicals for a collection of documents. 
For this task, named PharmaCoNER, we generated annotation guidelines together with a corpus of 1,000 manually annotated clinical case studies. A total of 22 teams participated in sub-track 1 (77 system runs), and 7 teams in sub-track 2 (19 system runs). Top scoring teams used sophisticated deep learning approaches yielding very competitive results with F-measures above 0.91. These results indicate that there is a real interest in promoting biomedical text mining efforts beyond English. We foresee that the PharmaCoNER annotation guidelines, corpus and participant systems will foster the development of new resources for clinical and biomedical text mining systems of Spanish medical data.", "which Entity types ?", "Drug", 228.0, 232.0], ["The goal of the Genic Regulation Network task (GRN) is to extract a regulation network that links and integrates a variety of molecular interactions between genes and proteins of the well-studied model bacterium Bacillus subtilis. It is an extension of the BI task of BioNLP-ST\u201911. The corpus is composed of sentences selected from publicly available PubMed scientific", "which Entity types ?", "Gene", NaN, NaN], ["The active gene annotation corpus (AGAC) was developed to support knowledge discovery for drug repurposing. Based on the corpus, the AGAC track of the BioNLP Open Shared Tasks 2019 was organized to facilitate cross-disciplinary collaboration across the BioNLP and Pharmacoinformatics communities for drug repurposing. The AGAC track consists of three subtasks: 1) named entity recognition, 2) thematic relation extraction, and 3) loss of function (LOF) / gain of function (GOF) topic classification. Five teams participated in the AGAC track; their performance is compared and analyzed. The results revealed substantial room for improvement in the design of the task, which we analyzed in terms of \u201cimbalanced data\u201d, \u201cselective annotation\u201d and \u201clatent topic annotation\u201d.", "which Entity types ?", "Gene", 11.0, 15.0], ["One of the biomedical entity types of relevance for medicine or biosciences is chemical compounds and drugs. The correct detection of these entities is critical for other text mining applications building on them, such as adverse drug-reaction detection, medication-related fake news or drug-target extraction. Although a significant effort was made to detect mentions of drugs/chemicals in English texts, so far only very limited attempts were made to recognize them in medical documents in other languages. Taking into account the growing amount of medical publications and clinical records written in Spanish, we have organized the first shared task on detecting drug and chemical entities in Spanish medical documents. Additionally, we included a clinical concept-indexing sub-track asking teams to return SNOMED-CT identifiers related to drugs/chemicals for a collection of documents. For this task, named PharmaCoNER, we generated annotation guidelines together with a corpus of 1,000 manually annotated clinical case studies. A total of 22 teams participated in sub-track 1 (77 system runs), and 7 teams in sub-track 2 (19 system runs). Top scoring teams used sophisticated deep learning approaches yielding very competitive results with F-measures above 0.91. These results indicate that there is a real interest in promoting biomedical text mining efforts beyond English. 
We foresee that the PharmaCoNER annotation guidelines, corpus and participant systems will foster the development of new resources for clinical and biomedical text mining systems of Spanish medical data.", "which Entity types ?", "Gene", NaN, NaN], ["Research on specialized biological systems is often hampered by a lack of consistent terminology, especially across species. In bacterial Type IV secretion systems genes within one set of orthologs may have over a dozen different names. Classifying research publications based on biological processes, cellular components, molecular functions, and microorganism species should improve the precision and recall of literature searches allowing researchers to keep up with the exponentially growing literature, through resources such as the Pathosystems Resource Integration Center (PATRIC, patricbrc.org). We developed named entity recognition (NER) tools for four entities related to Type IV secretion systems: 1) bacteria names, 2) biological processes, 3) molecular functions, and 4) cellular components. These four entities are important to pathogenesis and virulence research but have received less attention than other entities, e.g., genes and proteins. Based on an annotated corpus, large domain terminological resources, and machine learning techniques, we developed recognizers for these entities. High accuracy rates (>80%) are achieved for bacteria, biological processes, and molecular function. Contrastive experiments highlighted the effectiveness of alternate recognition strategies; results of term extraction on contrasting document sets demonstrated the utility of these classes for identifying T4SS-related documents.", "which Entity types ?", "Biological processes", 280.0, 300.0], ["This paper presents the preparation, resources, results and analysis of the Epigenetics and Post-translational Modifications (EPI) task, a main task of the BioNLP Shared Task 2011. The task concerns the extraction of detailed representations of 14 protein and DNA modification events, the catalysis of these reactions, and the identification of instances of negated or speculatively stated event instances. Seven teams submitted final results to the EPI task in the shared task, with the highest-performing system achieving 53% F-score in the full task and 69% F-score in the extraction of a simplified set of core event arguments.", "which Entity types ?", "Protein", 248.0, 255.0], ["We present two related tasks of the BioNLP Shared Tasks 2011: Bacteria Gene Renaming (Rename) and Bacteria Gene Interactions (GI). We detail the objectives, the corpus specification, the evaluation metrics, and we summarize the participants' results. Both issued from PubMed scientific literature abstracts, the Rename task aims at extracting gene name synonyms, and the GI task aims at extracting genic interaction events, mainly about gene transcriptional regulations in bacteria.", "which Entity types ?", "Gene", 71.0, 75.0], ["The goal of the Genic Regulation Network task (GRN) is to extract a regulation network that links and integrates a variety of molecular interactions between genes and proteins of the well-studied model bacterium Bacillus subtilis. It is an extension of the BI task of BioNLP-ST\u201911. The corpus is composed of sentences selected from publicly available PubMed scientific", "which Entity types ?", "Protein", NaN, NaN], ["This paper presents the Bacteria Biotope task of the BioNLP Shared Task 2013, which follows BioNLP-ST-11. 
The Bacteria Biotope task aims to extract the location of bacteria from scientific web pages and to characterize these locations with respect to the OntoBiotope ontology. Bacteria locations are crucial knowledge in biology for phenotype studies. The paper details the corpus specifications, the evaluation metrics, and it summarizes and discusses the participant results.", "which Entity types ?", "Bacteria", 24.0, 32.0], ["This paper presents the fourth edition of the Bacteria Biotope task at BioNLP Open Shared Tasks 2019. The task focuses on the extraction of the locations and phenotypes of microorganisms from PubMed abstracts and full-text excerpts, and the characterization of these entities with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on biodiversity for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, and the challenge organization. We also provide an analysis of the results obtained by participants, and inspect the evolution of the results since the last edition in 2016.", "which Entity types ?", "Phenotype", NaN, NaN], ["We present the design, preparation, results and analysis of the Cancer Genetics (CG) event extraction task, a main task of the BioNLP Shared Task (ST) 2013. The CG task is an information extraction task targeting the recognition of events in text, represented as structured n-ary associations of given physical entities. In addition to addressing the cancer domain, the CG task is differentiated from previous event extraction tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple levels of biological organization, ranging from the molecular through the cellular and organ levels up to whole organisms. Final test set submissions were accepted from six teams. The highest-performing system achieved an F-score of 55.4%. This level of performance is broadly comparable with the state of the art for established molecular-level extraction tasks, demonstrating that event extraction resources and methods generalize well to higher levels of biological organization and are applicable to the analysis of scientific texts on cancer. The CG task continues as an open challenge to all interested parties, with tools and resources available from http://2013.bionlp-st.org/.", "which Entity types ?", "Organism", NaN, NaN], ["We present two related tasks of the BioNLP Shared Tasks 2011: Bacteria Gene Renaming (Rename) and Bacteria Gene Interactions (GI). We detail the objectives, the corpus specification, the evaluation metrics, and we summarize the participants' results. Both issued from PubMed scientific literature abstracts, the Rename task aims at extracting gene name synonyms, and the GI task aims at extracting genic interaction events, mainly about gene transcriptional regulations in bacteria.", "which Entity types ?", "Transcription", NaN, NaN], ["Research on specialized biological systems is often hampered by a lack of consistent terminology, especially across species. In bacterial Type IV secretion systems genes within one set of orthologs may have over a dozen different names. 
Classifying research publications based on biological processes, cellular components, molecular functions, and microorganism species should improve the precision and recall of literature searches allowing researchers to keep up with the exponentially growing literature, through resources such as the Pathosystems Resource Integration Center (PATRIC, patricbrc.org). We developed named entity recognition (NER) tools for four entities related to Type IV secretion systems: 1) bacteria names, 2) biological processes, 3) molecular functions, and 4) cellular components. These four entities are important to pathogenesis and virulence research but have received less attention than other entities, e.g., genes and proteins. Based on an annotated corpus, large domain terminological resources, and machine learning techniques, we developed recognizers for these entities. High accuracy rates (>80%) are achieved for bacteria, biological processes, and molecular function. Contrastive experiments highlighted the effectiveness of alternate recognition strategies; results of term extraction on contrasting document sets demonstrated the utility of these classes for identifying T4SS-related documents.", "which Entity types ?", "Molecular functions", 323.0, 342.0], ["Software contributions to academic research are relatively invisible, especially to the formalized scholarly reputation system based on bibliometrics. In this article, we introduce a gold\u2010standard dataset of software mentions from the manual annotation of 4,971 academic PDFs in biomedicine and economics. The dataset is intended to be used for automatic extraction of software mentions from PDF format research publications by supervised learning at scale. We provide a description of the dataset and an extended discussion of its creation process, including improved text conversion of academic PDFs. Finally, we reflect on our challenges and lessons learned during the dataset creation, in the hope of encouraging more discussion about creating datasets for machine learning use.", "which Entity types ?", "Software", 0.0, 8.0], ["This paper presents the Bacteria Biotope task as part of the BioNLP Shared Tasks 2011. The Bacteria Biotope task aims at extracting the location of bacteria from scientific Web pages. Bacteria location is crucial knowledge in biology for phenotype studies. The paper details the corpus specification and the evaluation metrics, and summarizes and discusses the participant results.", "which Entity types ?", "Bacteria", 24.0, 32.0], ["We present two related tasks of the BioNLP Shared Tasks 2011: Bacteria Gene Renaming (Rename) and Bacteria Gene Interactions (GI). We detail the objectives, the corpus specification, the evaluation metrics, and we summarize the participants' results. Both issued from PubMed scientific literature abstracts, the Rename task aims at extracting gene name synonyms, and the GI task aims at extracting genic interaction events, mainly about gene transcriptional regulations in bacteria.", "which Entity types ?", "Action", NaN, NaN], ["Research on specialized biological systems is often hampered by a lack of consistent terminology, especially across species. In bacterial Type IV secretion systems genes within one set of orthologs may have over a dozen different names. 
Classifying research publications based on biological processes, cellular components, molecular functions, and microorganism species should improve the precision and recall of literature searches allowing researchers to keep up with the exponentially growing literature, through resources such as the Pathosystems Resource Integration Center (PATRIC, patricbrc.org). We developed named entity recognition (NER) tools for four entities related to Type IV secretion systems: 1) bacteria names, 2) biological processes, 3) molecular functions, and 4) cellular components. These four entities are important to pathogenesis and virulence research but have received less attention than other entities, e.g., genes and proteins. Based on an annotated corpus, large domain terminological resources, and machine learning techniques, we developed recognizers for these entities. High accuracy rates (>80%) are achieved for bacteria, biological processes, and molecular function. Contrastive experiments highlighted the effectiveness of alternate recognition strategies; results of term extraction on contrasting document sets demonstrated the utility of these classes for identifying T4SS-related documents.", "which Entity types ?", "Cellular components", 302.0, 321.0], ["Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task\u2014a result that approaches the human inter-annotator agreement (0.8875)\u2014and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system\u2019s ability to return real-time results: the average response times for each team\u2019s DNER and CID web service systems were 5.6 and 9.3 s, respectively. Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. 
Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/", "which Number of development data mentions ?", "Chemical", 303.0, 311.0], ["Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task\u2014a result that approaches the human inter-annotator agreement (0.8875)\u2014and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system\u2019s ability to return real-time results: the average response times for each team\u2019s DNER and CID web service systems were 5.6 and 9.3 s, respectively. Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/", "which Number of development data mentions ?", "Disease", 312.0, 319.0], ["Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). 
The best systems achieved an F-score of 86.46% for the DNER task\u2014a result that approaches the human inter-annotator agreement (0.8875)\u2014and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system\u2019s ability to return real-time results: the average response times for each team\u2019s DNER and CID web service systems were 5.6 and 9.3 s, respectively. Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/", "which Number of test data mentions ?", "Chemical", 303.0, 311.0], ["Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task\u2014a result that approaches the human inter-annotator agreement (0.8875)\u2014and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system\u2019s ability to return real-time results: the average response times for each team\u2019s DNER and CID web service systems were 5.6 and 9.3 s, respectively. Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/", "which Number of test data mentions ?", "Disease", 312.0, 319.0], ["Abstract Background Molecular Biology accumulated substantial amounts of data concerning functions of genes and proteins. 
Information relating to functional descriptions is generally extracted manually from textual data and stored in biological databases to build up annotations for large collections of gene products. Those annotation databases are crucial for the interpretation of large scale analysis approaches using bioinformatics or experimental techniques. Due to the growing accumulation of functional descriptions in biomedical literature, the need for text mining tools to facilitate the extraction of such annotations is urgent. In order to make text mining tools usable in real world scenarios, for instance to assist database curators during annotation of protein function, comparisons and evaluations of different approaches on full text articles are needed. Results The Critical Assessment for Information Extraction in Biology (BioCreAtIvE) contest consists of a community-wide competition aiming to evaluate different strategies for text mining tools, as applied to biomedical literature. We report on task two, which addressed the automatic extraction and assignment of Gene Ontology (GO) annotations of human proteins, using full text articles. The predictions of task 2 are based on triplets of protein \u2013 GO term \u2013 article passage. The annotation-relevant text passages were returned by the participants and evaluated by expert curators of the GO annotation (GOA) team at the European Bioinformatics Institute (EBI). Each participant could submit up to three results for each sub-task comprising task 2. In total more than 15,000 individual results were provided by the participants. The curators evaluated, in addition to the annotation itself, whether the protein and the GO term were correctly predicted and traceable through the submitted text fragment. Conclusion Concepts provided by GO are currently the most extended set of terms used for annotating gene products, thus they were explored to assess how effectively text mining tools are able to extract those annotations automatically. Although the obtained results are promising, they are still far from reaching the required performance demanded by real world applications. Among the principal difficulties encountered in addressing the proposed task were the complex nature of the GO terms and protein names (the large range of variants which are used to express proteins and especially GO terms in free text), and the lack of a standard training set. A range of very different strategies were used to tackle this task. The dataset generated in line with the BioCreative challenge is publicly available and will allow new possibilities for training information extraction methods in the domain of molecular biology.", "which Number of test data mentions ?", "Protein", 770.0, 777.0], ["Abstract Background Molecular Biology accumulated substantial amounts of data concerning functions of genes and proteins. Information relating to functional descriptions is generally extracted manually from textual data and stored in biological databases to build up annotations for large collections of gene products. Those annotation databases are crucial for the interpretation of large scale analysis approaches using bioinformatics or experimental techniques. Due to the growing accumulation of functional descriptions in biomedical literature, the need for text mining tools to facilitate the extraction of such annotations is urgent. 
In order to make text mining tools usable in real world scenarios, for instance to assist database curators during annotation of protein function, comparisons and evaluations of different approaches on full text articles are needed. Results The Critical Assessment for Information Extraction in Biology (BioCreAtIvE) contest consists of a community-wide competition aiming to evaluate different strategies for text mining tools, as applied to biomedical literature. We report on task two, which addressed the automatic extraction and assignment of Gene Ontology (GO) annotations of human proteins, using full text articles. The predictions of task 2 are based on triplets of protein \u2013 GO term \u2013 article passage. The annotation-relevant text passages were returned by the participants and evaluated by expert curators of the GO annotation (GOA) team at the European Bioinformatics Institute (EBI). Each participant could submit up to three results for each sub-task comprising task 2. In total more than 15,000 individual results were provided by the participants. The curators evaluated, in addition to the annotation itself, whether the protein and the GO term were correctly predicted and traceable through the submitted text fragment. Conclusion Concepts provided by GO are currently the most extended set of terms used for annotating gene products, thus they were explored to assess how effectively text mining tools are able to extract those annotations automatically. Although the obtained results are promising, they are still far from reaching the required performance demanded by real world applications. Among the principal difficulties encountered in addressing the proposed task were the complex nature of the GO terms and protein names (the large range of variants which are used to express proteins and especially GO terms in free text), and the lack of a standard training set. A range of very different strategies were used to tackle this task. The dataset generated in line with the BioCreative challenge is publicly available and will allow new possibilities for training information extraction methods in the domain of molecular biology.", "which Number of test data mentions ?", "Proteins", 112.0, 120.0], ["Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task\u2014a result that approaches the human inter-annotator agreement (0.8875)\u2014and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. 
When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system\u2019s ability to return real-time results: the average response times for each team\u2019s DNER and CID web service systems were 5.6 and 9.3 s, respectively. Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/", "which Number of training data mentions ?", "Chemical", 303.0, 311.0], ["Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task\u2014a result that approaches the human inter-annotator agreement (0.8875)\u2014and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system\u2019s ability to return real-time results: the average response times for each team\u2019s DNER and CID web service systems were 5.6 and 9.3 s, respectively. Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/", "which Number of training data mentions ?", "Disease", 312.0, 319.0], ["Abstract Background Molecular Biology accumulated substantial amounts of data concerning functions of genes and proteins. Information relating to functional descriptions is generally extracted manually from textual data and stored in biological databases to build up annotations for large collections of gene products. 
Those annotation databases are crucial for the interpretation of large scale analysis approaches using bioinformatics or experimental techniques. Due to the growing accumulation of functional descriptions in biomedical literature, the need for text mining tools to facilitate the extraction of such annotations is urgent. In order to make text mining tools usable in real world scenarios, for instance to assist database curators during annotation of protein function, comparisons and evaluations of different approaches on full text articles are needed. Results The Critical Assessment for Information Extraction in Biology (BioCreAtIvE) contest consists of a community-wide competition aiming to evaluate different strategies for text mining tools, as applied to biomedical literature. We report on task two, which addressed the automatic extraction and assignment of Gene Ontology (GO) annotations of human proteins, using full text articles. The predictions of task 2 are based on triplets of protein \u2013 GO term \u2013 article passage. The annotation-relevant text passages were returned by the participants and evaluated by expert curators of the GO annotation (GOA) team at the European Bioinformatics Institute (EBI). Each participant could submit up to three results for each sub-task comprising task 2. In total more than 15,000 individual results were provided by the participants. The curators evaluated, in addition to the annotation itself, whether the protein and the GO term were correctly predicted and traceable through the submitted text fragment. Conclusion Concepts provided by GO are currently the most extended set of terms used for annotating gene products, thus they were explored to assess how effectively text mining tools are able to extract those annotations automatically. Although the obtained results are promising, they are still far from reaching the required performance demanded by real world applications. Among the principal difficulties encountered in addressing the proposed task were the complex nature of the GO terms and protein names (the large range of variants which are used to express proteins and especially GO terms in free text), and the lack of a standard training set. A range of very different strategies were used to tackle this task. The dataset generated in line with the BioCreative challenge is publicly available and will allow new possibilities for training information extraction methods in the domain of molecular biology.", "which Number of training data mentions ?", "Proteins", 112.0, 120.0], ["Answering complex questions is a time-consuming activity for humans that requires reasoning and integration of information. Recent work on reading comprehension made headway in answering simple questions, but tackling complex questions is still an ongoing research challenge. Conversely, semantic parsers have been successful at handling compositionality, but only when the information resides in a target knowledge-base. In this paper, we present a novel framework for answering broad and complex questions, assuming answering simple questions is possible using a search engine and a reading comprehension model. We propose to decompose complex questions into a sequence of simple questions, and compute the final answer from the sequence of answers. To illustrate the viability of our approach, we create a new dataset of complex questions, ComplexWebQuestions, and present a model that decomposes questions and interacts with the web to compute an answer. 
We empirically demonstrate that question decomposition improves performance from 20.8 precision@1 to 27.5 precision@1 on this new dataset.", "which Question Types ?", "Complex", 10.0, 17.0], ["The growth of the Web in recent years has resulted in the development of various online platforms that provide healthcare information services. These platforms contain an enormous amount of information, which could be beneficial for a large number of people. However, navigating through such knowledge bases to answer specific queries of healthcare consumers is a challenging task. A majority of such queries might be non-factoid in nature, and hence, traditional keyword-based retrieval models do not work well for such cases. Furthermore, in many scenarios, it might be desirable to get a short answer that sufficiently answers the query, instead of a long document with only a small amount of useful information. In this paper, we propose a neural network model for ranking documents for question answering in the healthcare domain. The proposed model uses a deep attention mechanism at word, sentence, and document levels, for efficient retrieval for both factoid and non-factoid queries, on documents of varied lengths. Specifically, the word-level cross-attention allows the model to identify words that might be most relevant for a query, and the hierarchical attention at sentence and document levels allows it to do effective retrieval on both long and short documents. We also construct a new large-scale healthcare question-answering dataset, which we use to evaluate our model. Experimental evaluation results against several state-of-the-art baselines show that our model outperforms the existing retrieval techniques.", "which Question Types ?", "Non-factoid", 417.0, 428.0], ["Wikidata is becoming an increasingly important knowledge base whose usage is spreading in the research community. However, most evaluation datasets for question answering systems rely on Freebase or DBpedia. We present two new datasets in order to train and benchmark QA systems over Wikidata. The first is a translation of the popular SimpleQuestions dataset to Wikidata; the second is a dataset created by collecting user feedback.", "which Question Types ?", "Simple", NaN, NaN], ["The growth of the Web in recent years has resulted in the development of various online platforms that provide healthcare information services. These platforms contain an enormous amount of information, which could be beneficial for a large number of people. However, navigating through such knowledge bases to answer specific queries of healthcare consumers is a challenging task. A majority of such queries might be non-factoid in nature, and hence, traditional keyword-based retrieval models do not work well for such cases. Furthermore, in many scenarios, it might be desirable to get a short answer that sufficiently answers the query, instead of a long document with only a small amount of useful information. In this paper, we propose a neural network model for ranking documents for question answering in the healthcare domain. The proposed model uses a deep attention mechanism at word, sentence, and document levels, for efficient retrieval for both factoid and non-factoid queries, on documents of varied lengths. Specifically, the word-level cross-attention allows the model to identify words that might be most relevant for a query, and the hierarchical attention at sentence and document levels allows it to do effective retrieval on both long and short documents. 
We also construct a new large-scale healthcare question-answering dataset, which we use to evaluate our model. Experimental evaluation results against several state-of-the-art baselines show that our model outperforms the existing retrieval techniques.", "which Question Types ?", "Factoid", 421.0, 428.0], ["This paper presents the Graded Word Similarity in Context (GWSC) task which asked participants to predict the effects of context on human perception of similarity in English, Croatian, Slovene and Finnish. We received 15 submissions and 11 system description papers. A new dataset (CoSimLex) was created for evaluation in this task: it contains pairs of words, each annotated within two different contexts. Systems beat the baselines by significant margins, but few did well in more than one language or subtask. Almost every system employed a Transformer model, but with many variations in the details: WordNet sense embeddings, translation of contexts, TF-IDF weightings, and the automatic creation of datasets for fine-tuning were all used to good effect.", "which Language ?", "Croatian", 175.0, 183.0], ["This paper presents the Graded Word Similarity in Context (GWSC) task which asked participants to predict the effects of context on human perception of similarity in English, Croatian, Slovene and Finnish. We received 15 submissions and 11 system description papers. A new dataset (CoSimLex) was created for evaluation in this task: it contains pairs of words, each annotated within two different contexts. Systems beat the baselines by significant margins, but few did well in more than one language or subtask. Almost every system employed a Transformer model, but with many variations in the details: WordNet sense embeddings, translation of contexts, TF-IDF weightings, and the automatic creation of datasets for fine-tuning were all used to good effect.", "which Language ?", "Finnish", 197.0, 204.0], ["This paper presents the Graded Word Similarity in Context (GWSC) task which asked participants to predict the effects of context on human perception of similarity in English, Croatian, Slovene and Finnish. We received 15 submissions and 11 system description papers. A new dataset (CoSimLex) was created for evaluation in this task: it contains pairs of words, each annotated within two different contexts. Systems beat the baselines by significant margins, but few did well in more than one language or subtask. Almost every system employed a Transformer model, but with many variations in the details: WordNet sense embeddings, translation of contexts, TF-IDF weightings, and the automatic creation of datasets for fine-tuning were all used to good effect.", "which Language ?", "Slovene", 185.0, 192.0], ["This paper presents the Graded Word Similarity in Context (GWSC) task which asked participants to predict the effects of context on human perception of similarity in English, Croatian, Slovene and Finnish. We received 15 submissions and 11 system description papers. A new dataset (CoSimLex) was created for evaluation in this task: it contains pairs of words, each annotated within two different contexts. Systems beat the baselines by significant margins, but few did well in more than one language or subtask. 
Almost every system employed a Transformer model, but with many variations in the details: WordNet sense embeddings, translation of contexts, TF-IDF weightings, and the automatic creation of datasets for fine-tuning were all used to good effect.", "which Language ?", "English", 166.0, 173.0], ["We present the design, preparation, results and analysis of the Cancer Genetics (CG) event extraction task, a main task of the BioNLP Shared Task (ST) 2013. The CG task is an information extraction task targeting the recognition of events in text, represented as structured n-ary associations of given physical entities. In addition to addressing the cancer domain, the CG task is differentiated from previous event extraction tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple levels of biological organization, ranging from the molecular through the cellular and organ levels up to whole organisms. Final test set submissions were accepted from six teams. The highest-performing system achieved an Fscore of 55.4%. This level of performance is broadly comparable with the state of the art for established molecular-level extraction tasks, demonstrating that event extraction resources and methods generalize well to higher levels of biological organization and are applicable to the analysis of scientific texts on cancer. The CG task continues as an open challenge to all interested parties, with tools and resources available from http://2013. bionlp-st.org/.", "which Event / Relation Types ?", "Pathological", 487.0, 499.0], ["This paper presents the preparation, resources, results and analysis of the Epigenetics and Post-translational Modifications (EPI) task, a main task of the BioNLP Shared Task 2011. The task concerns the extraction of detailed representations of 14 protein and DNA modification events, the catalysis of these reactions, and the identification of instances of negated or speculatively stated event instances. Seven teams submitted final results to the EPI task in the shared task, with the highest-performing system achieving 53% F-score in the full task and 69% F-score in the extraction of a simplified set of core event arguments.", "which Event / Relation Types ?", "Catalysis", 289.0, 298.0], ["We present the design, preparation, results and analysis of the Cancer Genetics (CG) event extraction task, a main task of the BioNLP Shared Task (ST) 2013. The CG task is an information extraction task targeting the recognition of events in text, represented as structured n-ary associations of given physical entities. In addition to addressing the cancer domain, the CG task is differentiated from previous event extraction tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple levels of biological organization, ranging from the molecular through the cellular and organ levels up to whole organisms. Final test set submissions were accepted from six teams. The highest-performing system achieved an Fscore of 55.4%. This level of performance is broadly comparable with the state of the art for established molecular-level extraction tasks, demonstrating that event extraction resources and methods generalize well to higher levels of biological organization and are applicable to the analysis of scientific texts on cancer. The CG task continues as an open challenge to all interested parties, with tools and resources available from http://2013. 
bionlp-st.org/.", "which Event / Relation Types ?", "General", NaN, NaN], ["We present the design, preparation, results and analysis of the Cancer Genetics (CG) event extraction task, a main task of the BioNLP Shared Task (ST) 2013. The CG task is an information extraction task targeting the recognition of events in text, represented as structured n-ary associations of given physical entities. In addition to addressing the cancer domain, the CG task is differentiated from previous event extraction tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple levels of biological organization, ranging from the molecular through the cellular and organ levels up to whole organisms. Final test set submissions were accepted from six teams. The highest-performing system achieved an Fscore of 55.4%. This level of performance is broadly comparable with the state of the art for established molecular-level extraction tasks, demonstrating that event extraction resources and methods generalize well to higher levels of biological organization and are applicable to the analysis of scientific texts on cancer. The CG task continues as an open challenge to all interested parties, with tools and resources available from http://2013.bionlp-st.org/.", "which Event / Relation Types ?", "Molecular", 575.0, 584.0], ["We present the Pathway Curation (PC) task, a main event extraction task of the BioNLP shared task (ST) 2013. The PC task concerns the automatic extraction of biomolecular reactions from text. The task setting, representation and semantics are defined with respect to pathway model standards and ontologies (SBML, BioPAX, SBO) and documents selected by relevance to specific model reactions. Two BioNLP ST 2013 participants successfully completed the PC task. The highest achieved Fscore, 52.8%, indicates that event extraction is a promising approach to supporting pathway curation efforts. The PC task continues as an open challenge with data, resources and tools available from http://2013.bionlp-st.org/", "which Event / Relation Types ?", "Pathway", 15.0, 22.0], ["We present the design, preparation, results and analysis of the Cancer Genetics (CG) event extraction task, a main task of the BioNLP Shared Task (ST) 2013. The CG task is an information extraction task targeting the recognition of events in text, represented as structured n-ary associations of given physical entities. In addition to addressing the cancer domain, the CG task is differentiated from previous event extraction tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple levels of biological organization, ranging from the molecular through the cellular and organ levels up to whole organisms. Final test set submissions were accepted from six teams. The highest-performing system achieved an Fscore of 55.4%. This level of performance is broadly comparable with the state of the art for established molecular-level extraction tasks, demonstrating that event extraction resources and methods generalize well to higher levels of biological organization and are applicable to the analysis of scientific texts on cancer. The CG task continues as an open challenge to all interested parties, with tools and resources available from http://2013.
bionlp-st.org/.", "which Event types ?", "Pathological", 487.0, 499.0], ["This paper presents the preparation, resources, results and analysis of the Epigenetics and Post-translational Modifications (EPI) task, a main task of the BioNLP Shared Task 2011. The task concerns the extraction of detailed representations of 14 protein and DNA modification events, the catalysis of these reactions, and the identification of instances of negated or speculatively stated event instances. Seven teams submitted final results to the EPI task in the shared task, with the highest-performing system achieving 53% F-score in the full task and 69% F-score in the extraction of a simplified set of core event arguments.", "which Event types ?", "Catalysis", 289.0, 298.0], ["We present the design, preparation, results and analysis of the Cancer Genetics (CG) event extraction task, a main task of the BioNLP Shared Task (ST) 2013. The CG task is an information extraction task targeting the recognition of events in text, represented as structured n-ary associations of given physical entities. In addition to addressing the cancer domain, the CG task is differentiated from previous event extraction tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple levels of biological organization, ranging from the molecular through the cellular and organ levels up to whole organisms. Final test set submissions were accepted from six teams. The highest-performing system achieved an Fscore of 55.4%. This level of performance is broadly comparable with the state of the art for established molecular-level extraction tasks, demonstrating that event extraction resources and methods generalize well to higher levels of biological organization and are applicable to the analysis of scientific texts on cancer. The CG task continues as an open challenge to all interested parties, with tools and resources available from http://2013.bionlp-st.org/.", "which Event types ?", "General", NaN, NaN], ["We present the design, preparation, results and analysis of the Cancer Genetics (CG) event extraction task, a main task of the BioNLP Shared Task (ST) 2013. The CG task is an information extraction task targeting the recognition of events in text, represented as structured n-ary associations of given physical entities. In addition to addressing the cancer domain, the CG task is differentiated from previous event extraction tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple levels of biological organization, ranging from the molecular through the cellular and organ levels up to whole organisms. Final test set submissions were accepted from six teams. The highest-performing system achieved an Fscore of 55.4%. This level of performance is broadly comparable with the state of the art for established molecular-level extraction tasks, demonstrating that event extraction resources and methods generalize well to higher levels of biological organization and are applicable to the analysis of scientific texts on cancer. The CG task continues as an open challenge to all interested parties, with tools and resources available from http://2013.bionlp-st.org/.", "which Event types ?", "Molecular", 575.0, 584.0], ["We present the Pathway Curation (PC) task, a main event extraction task of the BioNLP shared task (ST) 2013. The PC task concerns the automatic extraction of biomolecular reactions from text.
The task setting, representation and semantics are defined with respect to pathway model standards and ontologies (SBML, BioPAX, SBO) and documents selected by relevance to specific model reactions. Two BioNLP ST 2013 participants successfully completed the PC task. The highest achieved Fscore, 52.8%, indicates that event extraction is a promising approach to supporting pathway curation efforts. The PC task continues as an open challenge with data, resources and tools available from http://2013.bionlp-st.org/", "which Event types ?", "Pathway", 15.0, 22.0], ["Abstract Objective: To investigate the association between dietary patterns (DP) and overweight risk in the Malaysian Adult Nutrition Surveys (MANS) of 2003 and 2014. Design: DP were derived from the MANS FFQ using principal component analysis. The cross-sectional association of the derived DP with prevalence of overweight was analysed. Setting: Malaysia. Participants: Nationally representative sample of Malaysian adults from MANS (2003, n 6928; 2014, n 3000). Results: Three major DP were identified for both years. These were \u2018Traditional\u2019 (fish, eggs, local cakes), \u2018Western\u2019 (fast foods, meat, carbonated beverages) and \u2018Mixed\u2019 (ready-to-eat cereals, bread, vegetables). A fourth DP was generated in 2003, \u2018Flatbread & Beverages\u2019 (flatbread, creamer, malted beverages), and 2014, \u2018Noodles & Meat\u2019 (noodles, meat, eggs). These DP accounted for 25\u00b76 and 26\u00b76 % of DP variations in 2003 and 2014, respectively. For both years, Traditional DP was significantly associated with rural households, lower income, men and Malay ethnicity, while Western DP was associated with younger age and higher income. Mixed DP was positively associated with women and higher income. None of the DP showed positive association with overweight risk, except for reduced adjusted odds of overweight with adherence to Traditional DP in 2003. Conclusions: Overweight could not be attributed to adherence to a single dietary pattern among Malaysian adults. This may be due to the constantly morphing dietary landscape in Malaysia, especially in urban areas, given the ease of availability and relative affordability of multi-ethnic and international foods. Timely surveys are recommended to monitor implications of these changes.", "which Study design ?", "Cross-sectional ", 249.0, 265.0], ["AbstractThis study systematised and synthesised the results of observational studies that were aimed at supporting the association between dietary patterns and cardiometabolic risk (CMR) factors among adolescents. Relevant scientific articles were searched in PUBMED, EMBASE, SCIENCE DIRECT, LILACS, WEB OF SCIENCE and SCOPUS. Observational studies that included the measurement of any CMR factor in healthy adolescents and dietary patterns were included. The search strategy retained nineteen articles for qualitative analysis. Among retained articles, the effects of dietary pattern on the means of BMI (n 18), waist circumference (WC) (n 9), systolic blood pressure (n 7), diastolic blood pressure (n 6), blood glucose (n 5) and lipid profile (n 5) were examined. Systematised evidence showed that an unhealthy dietary pattern appears to be associated with poor mean values of CMR factors among adolescents. However, evidence of a protective effect of healthier dietary patterns in this group remains unclear. 
Considering the number of studies with available information, a meta-analysis of anthropometric measures showed that dietary patterns characterised by the highest intake of unhealthy foods resulted in a higher mean BMI (0\u00b757 kg/m\u00b2; 95 % CI 0\u00b751, 0\u00b763) and WC (0\u00b757 cm; 95 % CI 0\u00b747, 0\u00b767) compared with low intake of unhealthy foods. Controversially, patterns characterised by a low intake of healthy foods were associated with a lower mean BMI (\u22120\u00b741 kg/m\u00b2; 95 % CI \u22120\u00b746,\u22120\u00b736) and WC (\u22120\u00b743 cm; 95 % CI \u22120\u00b752,\u22120\u00b733). An unhealthy dietary pattern may influence markers of CMR among adolescents, but considering the small number and limitations of the studies included, further studies are warranted to strengthen the evidence of this relation.", "which Study design ?", "Meta-analysis", 1272.0, 1285.0], ["Platinum treated fullerene/TiO2 composites (Pt-fullerene/TiO2) were prepared using a sol\u2013gel method. The composite obtained was characterized by FT-IR, BET surface area measurements, X-ray diffraction, energy dispersive X-ray analysis, transmission electron microscopy (TEM) and UV-vis analysis. A methyl orange (MO) solution under visible light irradiation was used to determine the photocatalytic activity. Excellent photocatalytic degradation of a MO solution was observed using the Pt-TiO2, fullerene-TiO2 and Pt-fullerene/TiO2 composites under visible light. An increase in photocatalytic activity was observed and Pt-fullerene/TiO2 has the best photocatalytic activity, which may be attributable to increase of the photo-absorption effect by the fullerene and the cooperative effect of the Pt.", "which Incident light ?", "Visible", 332.0, 339.0], ["The association of heptamethine cyanine cation 1(+) with various counterions A (A = Br(-), I(-), PF(6)(-), SbF(6)(-), B(C(6)F(5))(4)(-), TRISPHAT) was realized. The six different ion pairs have been characterized by X-ray diffraction, and their absorption properties were studied in polar (DCM) and apolar (toluene) solvents. A small, hard anion (Br(-)) is able to strongly polarize the polymethine chain, resulting in the stabilization of an asymmetric dipolar-like structure in the crystal and in nondissociating solvents. On the contrary, in more polar solvents or when it is associated with a bulky soft anion (TRISPHAT or B(C(6)F(5))(4)(-)), the same cyanine dye adopts preferentially the ideal polymethine state. The solid-state and solution absorption properties of heptamethine dyes are therefore strongly correlated to the nature of the counterion.", "which Counterion interaction in Toluene ?", "Associated", 579.0, 589.0], ["The frequency distributions of the first six Lyman lines of hydrogen-like carbon, oxygen, neon, magnesium, aluminum, and silicon ions broadened by the local fields of both ions and electrons are calculated for dense plasmas. The electron collisions are treated by an impact theory allowing (approximately) for level splittings caused by the ion fields, finite duration of the collisions, and screening of the electron fields. Ion effects are calculated in the quasistatic, linear Stark-effect approximation, using distribution functions of Hooper and Tighe which include correlation and shielding effects. Theoretical uncertainties from the various approximations are estimated, and the scaling of the profiles with density, temperature and nuclear charge is discussed. 
A correction for the effects caused by low frequency field fluctuations is suggested.", "which paper:caategory ?", "Theoretical", 606.0, 617.0], ["The calculation of the spectral line broadening of lithium-like ions is presented. The motivation for these calculations is to extend present theoretical calculations to more complex atomic structures and provide further diagnostic possibilities. The profiles of Li I, Ti XX and Br XXXIII are shown as a representative sampling of the possible effects which can occur. The calculations are performed for all level 2 to level 3 and 4 transitions, with dipole-forbidden and overlapping components fully taken into account.", "which paper:caategory ?", "Theoretical", 142.0, 153.0], ["We present in this paper quantum mechanical calculations for the electron impact Stark linewidths of the 2s3s\u20132s3p transitions for the four beryllium-like ions from N IV to Ne VII. Calculations are made in the frame of the impact approximation and intermediate coupling, taking into account fine-structure effects. A comparison between our calculations, experimental and other theoretical results, shows a good agreement. This is the first time that such a good agreement is found between quantum and experimental linewidths of highly charged ions.", "which paper:caategory ?", "Theoretical", 377.0, 388.0], ["Aims. We present relativistic quantum mechanical calculations of electron-impact broadening of the singlet and triplet transition 2s3s \u2190 2s3p in four Be-like ions from NIV to NeVII. Methods. In our theoretical calculations, the K-matrix and related symmetry information determined by the colliding systems are generated by the DARC codes. Results. A careful comparison between our calculations and experimental results shows good agreement. Our calculated widths of spectral lines also agree with earlier theoretical results. Our investigations provide new methods of calculating electron-impact broadening parameters for plasma diagnostics.", "which paper:caategory ?", "Theoretical", 198.0, 209.0], ["Agile development methodologies have been gaining acceptance in the mainstream software development community. While there are numerous studies of agile development in academic and educational settings, there has been little detailed reporting of the usage, penetration and success of agile methodologies in traditional, professional software development organizations. We report on the results of an empirical study conducted at Microsoft to learn about agile development and its perception by people in development, testing, and management. We found that one-third of the study respondents use agile methodologies to varying degrees, and most view it favorably due to improved communication between team members, quick releases and the increased flexibility of agile designs. The scrum variant of agile methodologies is by far the most popular at Microsoft. Our findings also indicate that developers are most worried about scaling agile to larger projects (greater than twenty members), attending too many meetings and the coordinating agile and non-agile teams.", "which Agile\nMethod ?", "Agile", 0.0, 5.0], ["The high rate of systems development (SD) failure is often attributed to the complexity of traditional SD methodologies (e.g. Waterfall) and their inability to cope with changes brought about by today's dynamic and evolving business environment. Agile methodologies (AM) have emerged to challenge traditional SD and overcome their limitations. Yet empirical research into AM is sparse. 
This paper develops and tests a research model that hypothesizes the effects of five characteristics of agile systems development (iterative development; continuous integration; test-driven design; feedback; and collective ownership) on two dependent stakeholder satisfaction measures, namely stakeholder satisfaction with the development process and with the development outcome. An empirical study of 59 South African development projects (using self reported data) provided support for all hypothesized relationships and generally supports the efficacy of AM. Iteration and integration together with collective ownership have the strongest effects on the dependent satisfaction measures.", "which Agile\nMethod ?", "Agile", 246.0, 251.0], ["Programmers are living in an age of accelerated change. State of the art technology that was employed to facilitate projects a few years ago are typically obsolete today. Presently, there are requirements for higher quality software with less tolerance for errors, produced in compressed timelines with fewer people. Therefore, project success is more elusive than ever and is contingent upon many key aspects. One of the most crucial aspects is social factors. These social factors, such as knowledge sharing, motivation, and customer collaboration, can be addressed through agile practices. This paper will demonstrate two successful industrial software projects which are different in all aspects; however, both still apply agile practices to address social factors. The readers will see how agile practices in both projects were adapted to fit each unique team environment. The paper will also provide lessons learned and recommendations based on retrospective reviews and observations. These recommendations can lead to an improved chance of success in a software development project.", "which Agile\nMethod ?", "Agile", 576.0, 581.0], ["The widespread adoption of agile methodologies raises the question of their continued and effective usage in organizations. An agile usage model consisting of innovation, sociological, technological, team, and organizational factors is used to inform an analysis of post-adoptive usage of agile practices in two major organizations. Analysis of the two case studies found that a methodology champion and top management support were the most important factors influencing continued usage, while innovation factors such as compatibility seemed less influential. Both horizontal and vertical usage was found to have significant impact on the effectiveness of agile usage.", "which Agile\nMethod ?", "Agile", 27.0, 32.0], ["Agile method proponents believe that organizational culture has an effect on the extent to which an agile method is used. Research into the relationship between organizational culture and information systems development methodology deployment has been explored by others using the Competing Values Framework (CVF). However this relationship has not been explored with respect to the agile development methodologies. Based on a multi-case study of nine projects we show that specific organizational culture factors correlate with effective use of an agile method. Our results contribute to the literature on organizational culture and system development methodology use.", "which Agile\nMethod ?", "Agile", 0.0, 5.0], ["Agile software development is often, but not always, associated with the term \u2018project chemistry,\u2019 or the positive team climate that can contribute to high performance. 
A qualitative study involving 22 participants in agile teams sought to explore this connection, and answer the question: what aspects of agile software development are related to team cohesion? The following is a discussion of participant experiences as seen through a socio-psychological lens. It draws from social-identity theory and socio-psychological literature to explain, not only how, but why agile methodologies support teamwork and collective progress. Agile practices are shown to produce a socio-psychological environment of high-performance, with many of the practical benefits of agile practices being supported and mediated by social and personal concerns.", "which Agile\nMethod ?", "Agile", 0.0, 5.0], ["In this article, factors considered critical for the success of projects managed using Scrum are correlated to the results of software projects in industry. Using a set of 25 factors compiled in by other researchers, a cross section survey was conducted to evaluate the presence or application of these factors in 11 software projects that used Scrum in 9 different software companies located in Recife-PE, Brazil. The questionnaire was applied to 65 developers and Scrum Masters, representing 75% (65/86) of the professionals that have participated in the projects. The result was correlated with the level of success achieved by the projects, measured by the subjective perception of the project participant, using Spearman's rank correlation coefficient. The main finding is that only 32% (8/25) of the factors correlated positively with project success, raising the question of whether the factors hypothesized in the literature as being critical to the success of agile software projects indeed have an effect on project success. Given the limitations regarding the generalization of this result, other forms of empirical results, in particular case-studies, are needed to test this question.", "which Agile\nMethod ?", "Scrum", 87.0, 92.0], ["There are many evidences in the literature that the use self-managing teams has positive impacts on several dimensions of team effectiveness. Agile methods, supported by the Agile Manifesto, defend the use of self-managing teams in software development in substitution of hierarchically managed, traditional teams. The goal of this research was to study how a self-managing software team works in practice and how the behaviors of the software organization support or hinder the effectiveness of such teams. We performed a single case holistic case study, looking in depth into the actual behavior of a mature Scrum team in industry. Using interviews and participant observation, we collected qualitative data from five team members in several interactions. We extract the behavior of the team and of the software company in terms of the determinants of self-managing team effectiveness defined in a theoretical model from the literature. We found evidence that 17 out of 24 determinants of this model exist in the studied context. We concluded that certain determinants can support or facilitate the adoption of methodologies like Scrum, while the use of Scrum may affect other determinants.", "which Agile\nMethod ?", "Scrum", 610.0, 615.0], ["Agile development methodologies have gained great interest in research and practice. As their introduction considerably changes traditional working habits of developers, the long-term acceptance of agile methodologies becomes a critical success factor. 
Yet, current studies primarily examine the early adoption stage of agile methodologies. To investigate the long-term acceptance, we conducted a study at a leading insurance company that introduced Scrum in 2007. Using a qualitative research design and the Diffusion of Innovations Theory as a lens for analysis, we gained in-depth insights into factors influencing the acceptance of Scrum. Particularly, developers felt Scrum to be more compatible to their actual working practices. Moreover, they perceived the use of Scrum to deliver numerous relative advantages. However, we also identified possible barriers to acceptance since developers felt both the complexity of Scrum and the required discipline to be higher in comparison with traditional development methodologies.", "which Agile\nMethod ?", "Scrum", 450.0, 455.0], ["A cationic biopolymer, chitosan, is proposed for use in artificial tear formulations. It is endowed with good wetting properties as well as an antibacterial effect that are desirable in cases of dry eye, which is often complicated by secondary infections. Solutions containing 0.5% w/v of a low molecular weight (M(w)) chitosan (160 kDa) were assessed for antibacterial efficacy against E. coli and S. aureus by using the usual broth-dilution technique. The in vitro evaluation showed that concentrations of chitosan as low as 0.0375% still exert a bacteriostatic effect against E. coli. Minimal inhibitory concentration (MIC) values of chitosan were calculated to be as low as 0.375 mg/ml for E. coli and 0.15 mg/ml for S. aureus. Gamma scintigraphic studies demonstrated that chitosan formulations remain on the precorneal surface as long as commonly used commercial artificial tears (Protagent collyrium and Protagent-SE unit-dose) having a 5-fold higher viscosity.", "which Advantages ?", "antibacterial", 143.0, 156.0], ["BACKGROUND The minor efficacy of chlorhexidine (CHX) on other cariogenic bacteria than mutans streptococci such as Streptococcus sanguinis may contribute to uneffective antiplaque strategies. METHODS AND RESULTS In addition to CHX (0.1%) as positive control and saline as negative control, two chitosan derivatives (0.2%) and their CHX combinations were applied to planktonic and attached sanguinis streptococci for 2 min. In a preclinical biofilm model, the bacteria suspended in human sterile saliva were allowed to attach to human enamel slides for 60 min under flow conditions mimicking human salivation. The efficacy of the test agents on streptococci was screened by the following parameters: vitality status, colony-forming units (CFU)/ml and cell density on enamel. The first combination reduced the bacterial vitality to approximately 0% and yielded a strong CFU reduction of 2-3 log(10) units, much stronger than CHX alone. Furthermore, the first chitosan derivative showed a significant decrease of the surface coverage with these treated streptococci after attachment to enamel. CONCLUSIONS Based on these results, a new CHX formulation would be beneficial unifying the bioadhesive properties of chitosan with the antibacterial activity of CHX synergistically resulting in a superior antiplaque effect than CHX alone.", "which Advantages ?", "Bioadhesive", 1182.0, 1193.0], ["Recent decades have witnessed the fast and impressive development of nanocarriers as a drug delivery system. Considering the safety, delivery efficiency and stability of nanocarriers, there are many obstacles in accomplishing successful clinical translation of these nanocarrier-based drug delivery systems. 
The gap has urged drug delivery scientists to develop innovative nanocarriers with high compatibility, stability and longer circulation time. Exosomes are nanometer-sized, lipid-bilayer-enclosed extracellular vesicles secreted by many types of cells. Exosomes serving as versatile drug vehicles have attracted increasing attention due to their inherent ability of shuttling proteins, lipids and genes among cells and their natural affinity to target cells. Attractive features of exosomes, such as nanoscopic size, low immunogenicity, high biocompatibility, encapsulation of various cargoes and the ability to overcome biological barriers, distinguish them from other nanocarriers. To date, exosome-based nanocarriers delivering small molecule drugs as well as bioactive macromolecules have been developed for the treatment of many prevalent and obstinate diseases including cancer, CNS disorders and some other degenerative diseases. Exosome-based nanocarriers have a huge prospect in overcoming many hindrances encountered in drug and gene delivery. This review highlights the advances as well as challenges of exosome-based nanocarriers as drug vehicles. Special focus has been placed on the advantages of exosomes in delivering various cargoes and in treating obstinate diseases, aiming to offer new insights for exploring exosomes in the field of drug delivery.", "which Advantages ?", "Low immunogenicity", 823.0, 841.0], ["Nanoemulsions are kinetically stable liquid-in-liquid dispersions with droplet sizes on the order of 100 nm. Their small size leads to useful properties such as high surface area per unit volume, robust stability, optically transparent appearance, and tunable rheology. Nanoemulsions are finding application in diverse areas such as drug delivery, food, cosmetics, pharmaceuticals, and material synthesis. Additionally, they serve as model systems to understand nanoscale colloidal dispersions. High and low energy methods are used to prepare nanoemulsions, including high pressure homogenization, ultrasonication, phase inversion temperature and emulsion inversion point, as well as recently developed approaches such as bubble bursting method. In this review article, we summarize the major methods to prepare nanoemulsions, theories to predict droplet size, physical conditions and chemical additives which affect droplet stability, and recent applications.", "which Advantages ?", "Small size", 115.0, 125.0], ["The software architectures of business, mission, or safety critical systems must be carefully designed to balance an exacting set of quality concerns describing characteristics such as security, reliability, and performance. Unfortunately, software architectures tend to degrade over time as maintainers modify the system without understanding the underlying architectural decisions. Although this problem can be mitigated by manually tracing architectural decisions into the code, the cost and effort required to do this can be prohibitively expensive. In this paper we therefore present a novel approach for automating the construction of traceability links for architectural tactics. Our approach utilizes machine learning methods and lightweight structural analysis to detect tactic-related classes. The detected tactic-related classes are then mapped to a Tactic Traceability Information Model. We train our trace algorithm using code extracted from fifteen performance-centric and safety-critical open source software systems and then evaluate it against the Apache Hadoop framework. 
Our results show that automatically generated traceability links can support software maintenance activities while helping to preserve architectural qualities.", "which Automation ?", "Automatic", NaN, NaN], ["In this paper we present an approach for supporting the semi-automated abstraction of architectural models throughout the software lifecycle. It addresses the problem that the design and the implementation of a software system often drift apart as software systems evolve, leading to architectural knowledge evaporation. Our approach provides concepts and tool support for the semi-automatic abstraction of architectural knowledge from implemented systems and keeping the abstracted architectural knowledge up-to-date. In particular, we propose architecture abstraction concepts that are supported through a domain-specific language (DSL). Our main focus is on providing architectural abstraction specifications in the DSL that only need to be changed, if the architecture changes, but can tolerate non-architectural changes in the underlying source code. The DSL and its tools support abstracting the source code into UML component models for describing the architecture. Once the software architect has defined an architectural abstraction in the DSL, we can automatically generate UML component models from the source code and check whether the architectural design constraints are fulfilled by the models. Our approach supports full traceability between source code elements and architectural abstractions, and allows software architects to compare different versions of the generated UML component model with each other. We evaluate our research results by studying the evolution of architectural abstractions in different consecutive versions and the execution times for five existing open source systems.", "which Automation ?", "Semi-Automatic", 377.0, 391.0], ["ABSTRACT This paper systematically develops a set of general and supporting design principles and specifications for a \"Dynamic Emergency Response Management Information System\" (DERMIS) by identifying design premises resulting from the use of the \"Emergency Management Information System and Reference Index\" (EMISARI) and design concepts resulting from a comprehensive literature review. Implicit in crises of varying scopes and proportions are communication and information needs that can be addressed by today's information and communication technologies. However, what is required is organizing the premises and concepts that can be mapped into a set of generic design principles in turn providing a framework for the sensible development of flexible and dynamic Emergency Response Information Systems. A framework is presented for the system design and development that addresses the communication and information needs of first responders as well as the decision making needs of command and control personnel. The framework also incorporates thinking about the value of insights and information from communities of geographically dispersed experts and suggests how that expertise can be brought to bear on crisis decision making. Historic experience is used to suggest nine design premises. These premises are complemented by a series of five design concepts based upon the review of pertinent and applicable research. The result is a set of eight general design principles and three supporting design considerations that are recommended to be woven into the detailed specifications of a DERMIS. 
The resulting DERMIS design model graphically indicates the heuristic taken by this paper and suggests that the result will be an emergency response system flexible, robust, and dynamic enough to support the communication and information needs of emergency and crisis personnel on all levels. In addition it permits the development of dynamic emergency response information systems with tailored flexibility to support and be integrated across different sizes and types of organizations. This paper provides guidelines for system analysts and designers, system engineers, first responders, communities of experts, emergency command and control personnel, and MIS/IT researchers. SECTIONS 1. Introduction 2. Historical Insights about EMISARI 3. The emergency Response Atmosphere of OEP 4. Resulting Requirements for Emergency Response and Conceptual Design Specifics 4.1 Metaphors 4.2 Roles 4.3 Notifications 4.4 Context Visibility 4.5 Hypertext 5. Generalized Design Principles 6. Supporting Design Considerations 6.1 Resource Databases and Community Collaboration 6.2 Collective Memory 6.3 Online Communities of Experts 7. Conclusions and Final Observations 8. References 1. INTRODUCTION There have been, since 9/11, considerable efforts to propose improvements in the ability to respond to emergencies. However, the vast majority of these efforts have concentrated on infrastructure improvements to aid in mitigation of the impacts of either a man-made or natural disaster. In the area of communication and information systems to support the actual ongoing reaction to a disaster situation, the vast majority of the efforts have focused on the underlying technology to reliably support survivability of the underlying networks and physical facilities (Kunreuther and LernerLam 2002; Mork 2002). The fact that there were major failures of the basic technology and loss of the command center for 48 hours in the 9/11 event has made this an understandable result. The very workable commercial paging and digital mail systems supplied immediately afterwards by commercial firms (Michaels 2001; Vatis 2002) to the emergency response workers demonstrated that the correction of underlying technology is largely a process of setting integration standards and deciding to spend the necessary funds to update antiquated systems. \u2026", "which paper:Study Type ?", "Conceptual", 2441.0, 2451.0], ["Emergency response requires an efficient information supply chain for the smooth operations of intraand inter-organizational emergency management processes. However, the breakdown of this information supply chain due to the lack of consistent data standards presents a significant problem. In this paper, we adopt a theory driven novel approach to develop a XML-based data model that prescribes a comprehensive set of data standards (semantics and internal structures) for emergency management to better address the challenges of information interoperability. Actual documents currently being used in mitigating chemical emergencies from a large number of incidents are used in the analysis stage. The data model development is guided by Activity Theory and is validated through a RFC-like process used in standards development. This paper applies the standards to the real case of a chemical incident scenario. 
Further, it complies with the national leading initiatives in emergency standards (National Information Exchange Model).", "which paper:Study Type ?", "Activity theory", 738.0, 753.0], ["This paper investigates the challenges faced in designing an integrated information platform for emergency response management and uses the Beijing Olympic Games as a case study. The research methods are grounded in action research, participatory design, and situation-awareness oriented design. The completion of a more than two-year industrial secondment and six-month field studies ensured that a full understanding of user requirements had been obtained. A service-centered architecture was proposed to satisfy these user requirements. The proposed architecture consists mainly of information gathering, database management, and decision support services. The decision support services include situational overview, instant risk assessment, emergency response preplan, and disaster development prediction. Abstracting from the experience obtained while building this system, we outline a set of design principles in the general domain of information systems (IS) development for emergency management. These design principles form a contribution to the information systems literature because they provide guidance to developers who are aiming to support emergency response and the development of such systems that have not yet been adequately met by any existing types of IS. We are proud that the information platform developed was deployed in the real world and used in the 2008 Beijing", "which paper:Study Type ?", "action research, participatory design, and situation-awareness oriented design", 216.0, 294.0], ["We present a fast local clustering service, FLOC, that partitions a multi-hop wireless network into nonoverlapping and approximately equal-sized clusters. Each cluster has a clusterhead such that all nodes within unit distance of the clusterhead belong to the cluster but no node beyond distance m from the clusterhead belongs to the cluster. By asserting m /spl ges/ 2, FLOC achieves locality: effects of cluster formation and faults/changes at any part of the network are contained within most m units. By taking unit distance to be the reliable communication radius and m to be the maximum communication radius, FLOC exploits the double-band nature of wireless radio-model and achieves clustering in constant time regardless of the network size. Through simulations and experiments with actual deployments, we analyze the tradeoffs between clustering time and the quality of clustering, and suggest suitable parameters for FLOC to achieve a fast completion time without compromising the quality of the resulting clustering.", "which Alg. Complexity ?", "Constant", 703.0, 711.0], ["A wireless sensor network consists of nodes that can communicate with each other via wireless links. One way to support efficient communication between sensors is to organize the network into several groups, called clusters, with each cluster electing one node as the head of cluster. The paper describes a constant time clustering algorithm that can be applied on wireless sensor networks. This approach is an extension to the Younis and Fahmy method (1). The simulation results show that the extension can generate a small number of cluster heads in relatively few rounds, especially in sparse networks.", "which Alg. 
Complexity ?", "Constant", 307.0, 315.0], ["We consider distribution systems with a depot and many geographically dispersed retailers each of which faces external demands occurring at constant, deterministic but retailer specific rates. All stock enters the system through the depot from where it is distributed to the retailers by a fleet of capacitated vehicles combining deliveries into efficient routes. Inventories are kept at the retailers but not at the depot. We wish to determine feasible replenishment strategies i.e., inventory rules and routing patterns minimising infinite horizon long-run average transportation and inventory costs. We restrict ourselves to a class of strategies in which a collection of regions sets of retailers is specified which cover all outlets: if an outlet belongs to several regions, a specific fraction of its sales/operations is assigned to each of these regions. Each time one of the retailers in a given region receives a delivery, this delivery is made by a vehicle who visits all other outlets in the region as well in an efficient route. We describe a class of low complexity heuristics and show under mild probabilistic assumptions that the generated solutions are asymptotically optimal within the above class of strategies. We also show that lower and upper bounds on the system-wide costs may be computed and that these bounds are asymptotically tight under the same assumptions. A numerical study exhibits the performance of these heuristics and bounds for problems of moderate size.", "which Demand ?", "Deterministic", 150.0, 163.0], ["In vendor-managed inventory replenishment, the vendor decides when to make deliveries to customers, how much to deliver, and how to combine shipments using the available vehicles. This gives rise to the inventory-routing problem in which the goal is to coordinate inventory replenishment and transportation to minimize costs. The problem tackled in this paper is the stochastic inventory-routing problem, where stochastic demands are specified through general discrete distributions. The problem is formulated as a discounted infinite-horizon Markov decision problem. Heuristics based on finite scenario trees are developed. Computational results confirm the efficiency of these heuristics.", "which Demand ?", "Stochastic", 367.0, 377.0], ["We describe a dynamic and stochastic vehicle dispatching problem called the delivery dispatching problem. This problem is modeled as a Markov decision process. Because exact solution of this model is impractical, we adopt a heuristic approach for handling the problem. The heuristic is based in part on a decomposition of the problem by customer, where customer subproblems generate penalty functions that are applied in a master dispatching problem. We describe how to compute bounds on the algorithm's performance, and apply it to several examples with good results.", "which Demand ?", "Stochastic", 26.0, 36.0], ["We analyze three queueing control problems that model a dynamic stochastic distribution system, where a single capacitated vehicle serves a finite number of retailers in a make-to-stock fashion. The objective in each of these vehicle routing and inventory problems is to minimize the long run average inventory (holding and backordering) and transportation cost. In all three problems, the controller dynamically specifies whether a vehicle at the warehouse should idle orembark with a full load. 
In the first problem, the vehicle must travel along a prespecified (TSP) tour of all retailers, and the controller dynamically decides how many units to deliver to each retailer. In the second problem, the vehicle delivers an entire load to one retailer (direct shipping) and the controller decides which retailer to visit next. The third problem allows the additional dynamic choice between the TSP and direct shipping options. Motivated by existing heavy traffic limit theorems, we make a time scale decomposition assumption that allows us to approximate these queueing control problems by diffusion control problems, which are explicitly solved in the fixed route problems, and numerically solved in the dynamic routing case. Simulation experiments confirm that the heavy traffic approximations are quite accurate over a broad range of problem parameters. Our results lead to some new observations about the behavior of this complex system.", "which Demand ?", "Stochastic", 64.0, 74.0], ["This work is motivated by the need to solve the inventory routing problem when implementing a business practice called vendor managed inventory replenishment (VMI). With VMI, vendors monitor their customers' inventories and decide when and how much inventory should be replenished at each customer. The inventory routing problem attempts to coordinate inventory replenishment and transportation in such a way that the cost is minimized over the long run. We formulate a Markov decision process model of the stochastic inventory routing problem and propose approximation methods to find good solutions with reasonable computational effort. We indicate how the proposed approach can be used for other Markov decision processes involving the control of multiple resources.", "which Demand ?", "Stochastic", 507.0, 517.0], ["We consider distribution systems with a single depot and many retailers each of which faces external demands for a single item that occurs at a specific deterministic demand rate. All stock enters the systems through the depot where it can be stored and then picked up and distributed to the retailers by a fleet of vehicles, combining deliveries into efficient routes. We extend earlier methods for obtaining low complexity lower bounds and heuristics for systems without central stock. We show under mild probabilistic assumptions that the generated solutions and bounds come asymptotically within a few percentage points of optimality (within the considered class of strategies). A numerical study exhibits the performance of these heuristics and bounds for problems of moderate size.", "which Demand ?", "Deterministic", 153.0, 166.0], ["We consider distribution systems with a central warehouse and many retailers that stock a number of different products. Deterministic demand occurs at the retailers for each product. The warehouse acts as a break-bulk center and does not keep any inventory. The products are delivered from the warehouse to the retailers by vehicles that combine the deliveries to several retailers into efficient vehicle routes. The objective is to determine replenishment policies that specify the delivery quantities and the vehicle routes used for the delivery, so as to minimize the long-run average inventory and transportation costs. A new heuristic that develops a stationary nested joint replenishment policy for the problem is presented in this paper. Unlike existing methods, the proposed heuristic is capable of solving problems involving distribution systems with multiple products. 
Results of a computational study on randomly generated single-product problems are also presented.", "which Demand ?", "Deterministic", 120.0, 133.0], ["In the development of various large-scale sensor systems, a particularly challenging problem is how to dynamically organize the sensors into a wireless communication network and route sensed information from the field sensors to a remote base station. This paper presents a new energy-efficient dynamic clustering technique for large-scale sensor networks. By monitoring the received signal power from its neighboring nodes, each node estimates the number of active nodes in realtime and computes its optimal probability of becoming a cluster head, so that the amount of energy spent in both intra- and inter-cluster communications can be minimized. Based on the clustered architecture, this paper also proposes a simple multihop routing algorithm that is designed to be both energy-efficient and power-aware, so as to prolong the network lifetime. The new clustering and routing algorithms scale well and converge fast for large-scale dynamic sensor networks, as shown by our extensive simulation results.", "which Dynamism ?", "Dynamic", 295.0, 302.0], ["Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show the LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.", "which Dynamism ?", "Static", 416.0, 422.0], ["Abstract Based on the Environmental Kuznets Curve theory, the authors choose provincial panel data of China in 1990\u20132007 and adopt panel unit root and co-integration testing method to study whether there is Environmental Kuznets Curve for China\u2019s carbon emissions. 
The research results show that: carbon emissions per capita of the eastern region and the central region of China fit into Environmental Kuznets Curve, but that of the western region does not. On this basis, the authors carry out scenario analysis on the occurrence time of the inflection point of carbon emissions per capita of different regions, and describe a specific time path.", "which EKC Turnaround point(s) ?", "Western", 433.0, 440.0], ["Abstract Based on the Environmental Kuznets Curve theory, the authors choose provincial panel data of China in 1990\u20132007 and adopt panel unit root and co-integration testing method to study whether there is Environmental Kuznets Curve for China\u2019s carbon emissions. The research results show that: carbon emissions per capita of the eastern region and the central region of China fit into Environmental Kuznets Curve, but that of the western region does not. On this basis, the authors carry out scenario analysis on the occurrence time of the inflection point of carbon emissions per capita of different regions, and describe a specific time path.", "which EKC Turnaround point(s) ?", "Central", 355.0, 362.0], ["This study utilizes standard- and nested-EKC models to investigate the income-environment relation for Nigeria, between 1960 and 2008. The results from the standard-EKC model provides weak evidence of an inverted-U shaped relationship with turning point (T.P) around $280.84, while the nested model presents strong evidence of an N-shaped relationship between income and emissions in Nigeria, with a T.P around $237.23. Tests for structural breaks caused by the 1973 oil price shocks and 1986 Structural Adjustment are not rejected, implying that these factors have not significantly affected the income-environment relationship in Nigeria. Further, results from the rolling interdecadal analysis shows that the observed relationship is stable and insensitive to the sample interval chosen. Overall, our findings imply that economic development is compatible with environmental improvements in Nigeria. However, tighter and concentrated environmental policy regimes will be required to ensure that the relationship is maintained around the first two-strands of the N-shape", "which EKC Turnaround point(s) ?", "Nested model", 286.0, 298.0], ["The efficient and effective management of empty containers is an important problem in the shipping industry. Not only does it have an economic effect, but it also has an environmental and sustainability impact, since the reduction of empty container movements will reduce fuel consumption and reduce congestion and emissions. The purposes of this paper are: to identify critical factors that affect empty container movements; to quantify the scale of empty container repositioning in major shipping routes; and to evaluate and contrast different strategies that shipping lines, and container operators, could adopt to reduce their empty container repositioning costs. The critical factors that affect empty container repositioning are identified through a review of the literature and observations of industrial practice. Taking three major routes (Trans-Pacific, Trans-Atlantic, Europe\u2013Asia) as examples, with the assumption that trade demands could be balanced among the whole network regardless the identities of individual shipping lines, the most optimistic estimation of empty container movements can be calculated. This quantifies the scale of the empty repositioning problem. 
Depending on whether shipping lines are coordinating the container flows over different routes and whether they are willing to share container fleets, four strategies for empty container repositioning are presented. Mathematical programming is then applied to evaluate and contrast the performance of these strategies in three major routes. A preliminary version was presented in IAME Annual Conference at Dalian, China, 2\u20134 April 2008.", "which Container flow ?", "Empty", 42.0, 47.0], ["Containerized liner trades have been growing steadily since the globalization of world economies intensified in the early 1990s. However, these trades are typically imbalanced in terms of the numbers of inbound and outbound containers. As a result, the relocation of empty containers has become one of the major problems faced by liner operators. In this paper, we consider the dynamic empty container allocation problem where we need to reposition empty containers and to determine the number of leased containers needed to meet customers\u2019 demand over time. We formulate this problem as a two-stage stochastic network: in stage one, the parameters such as supplies, demands, and ship capacities for empty containers are deterministic; whereas in stage two, these parameters are random variables. We need to make decisions in stage one such that the total of the stage one cost and the expected stage two cost is minimized. By taking advantage of the network structure, we show how a stochastic quasi-gradient method and a stochastic hybrid approximation procedure can be applied to solve the problem. In addition, we propose some new variations of these methods that seem to work faster in practice. We conduct numerical tests to evaluate the value of the two-stage stochastic model over a rolling horizon environment and to investigate the behavior of the solution methods with different implementations.", "which Container flow ?", "Empty", 267.0, 272.0], ["This paper addresses empty container reposition planning by plainly considering safety stock management and geographical regions. This plan could avoid drawback in practice which collects mass empty containers at a port then repositions most empty containers at a time. Empty containers occupy slots on vessel and the liner shipping company loses chance to yield freight revenue. The problem is drawn up as a two-stage problem. The upper problem is identified to estimate the empty container stock at each port and the lower problem models the empty container reposition planning with shipping service network as the Transportation Problem by Liner Problem. We looked at case studies of the Taiwan Liner Shipping Company to show the application of the proposed model. The results show the model provides optimization techniques to minimize cost of empty container reposition and to provide an evidence to adjust strategy of restructuring the shipping service network.", "which Container flow ?", "Empty", 21.0, 26.0], ["We present a fast local clustering service, FLOC, that partitions a multi-hop wireless network into nonoverlapping and approximately equal-sized clusters. Each cluster has a clusterhead such that all nodes within unit distance of the clusterhead belong to the cluster but no node beyond distance m from the clusterhead belongs to the cluster. By asserting m \u2265 2, FLOC achieves locality: effects of cluster formation and faults/changes at any part of the network are contained within at most m units.
By taking unit distance to be the reliable communication radius and m to be the maximum communication radius, FLOC exploits the double-band nature of wireless radio-model and achieves clustering in constant time regardless of the network size. Through simulations and experiments with actual deployments, we analyze the tradeoffs between clustering time and the quality of clustering, and suggest suitable parameters for FLOC to achieve a fast completion time without compromising the quality of the resulting clustering.", "which Cluster Properties Cluster size ?", "Equal", 133.0, 138.0], ["Clustering provides an effective way for prolonging the lifetime of a wireless sensor network. Current clustering algorithms usually utilize two techniques, selecting cluster heads with more residual energy and rotating cluster heads periodically, to distribute the energy consumption among nodes in each cluster and extend the network lifetime. However, they rarely consider the hot spots problem in multihop wireless sensor networks. When cluster heads cooperate with each other to forward their data to the base station, the cluster heads closer to the base station are burdened with heavy relay traffic and tend to die early, leaving areas of the network uncovered and causing network partition. To address the problem, we propose an energy-efficient unequal clustering (EEUC) mechanism for periodical data gathering in wireless sensor networks. It partitions the nodes into clusters of unequal size, and clusters closer to the base station have smaller sizes than those farther away from the base station. Thus cluster heads closer to the base station can preserve some energy for the inter-cluster data forwarding. We also propose an energy-aware multihop routing protocol for the inter-cluster communication. Simulation results show that our unequal clustering mechanism balances the energy consumption well among all sensor nodes and achieves an obvious improvement on the network lifetime", "which Cluster Properties Cluster size ?", "Unequal", 755.0, 762.0], ["There has been proliferation of research on seeking for distributing the energy consumption among nodes in each cluster and between cluster heads to extend the network lifetime. However, they hardly consider the hot spots problem caused by heavy relay traffic forwarded. In this paper, we propose a distributed and randomized clustering algorithm that consists of unequal sized clusters. The cluster heads closer to the base station may focus more on inter-cluster communication while distant cluster heads concentrate more on intra-cluster communication. As a result, it nearly guarantees no communication in the network gets excessively long communication distance that significantly attenuates signal strength. Simulation results show that our algorithm achieves abundant improvement in terms of the coverage time and network lifetime, especially when the density of distributed nodes is high.", "which Cluster Properties Cluster size ?", "Unequal", 364.0, 371.0], ["Due to the imbalance of energy consumption of nodes in wireless sensor networks (WSNs), some local nodes die prematurely, which causes the network partitions and then shortens the lifetime of the network. The phenomenon is called \u201chot spot\u201d or \u201cenergy hole\u201d problem. For this problem, an energy-aware distributed unequal clustering protocol (EADUC) in multihop heterogeneous WSNs is proposed. 
Compared with the previous protocols, the cluster heads obtained by EADUC can achieve balanced energy, good distribution, and seamless coverage for all the nodes. Moreover, the complexity of time and control message is low. Simulation experiments show that EADUC can prolong the lifetime of the network significantly.", "which Cluster Properties Cluster size ?", "Unequal", 313.0, 320.0], ["Due to the imbalance of energy consumption of nodes in wireless sensor networks (WSNs), some local nodes die prematurely, which causes the network partitions and then shortens the lifetime of the network. The phenomenon is called \u201chot spot\u201d or \u201cenergy hole\u201d problem. For this problem, an energy-aware distributed unequal clustering protocol (EADUC) in multihop heterogeneous WSNs is proposed. Compared with the previous protocols, the cluster heads obtained by EADUC can achieve balanced energy, good distribution, and seamless coverage for all the nodes. Moreover, the complexity of time and control message is low. Simulation experiments show that EADUC can prolong the lifetime of the network significantly.", "which Cluster Properties Cluster size ?", "Unequal", 313.0, 320.0], ["Prolonging the lifetime of wireless sensor networks has always been a determining factor when designing and deploying such networks. Clustering is one technique that can be used to extend the lifetime of sensor networks by grouping sensors together. However, there exists the hot spot problem which causes an unbalanced energy consumption in equally formed clusters. In this paper, we propose UHEED, an unequal clustering algorithm which mitigates this problem and which leads to a more uniform residual energy in the network and improves the network lifetime. Furthermore, from the simulation results presented, we were able to deduce the most appropriate unequal cluster size to be used.", "which Cluster Properties Cluster size ?", "Unequal", 403.0, 410.0], ["In order to prolong the lifetime of wireless sensor networks, this paper presents a multihop routing protocol with unequal clustering (MRPUC). On the one hand, cluster heads deliver the data to the base station with relay to reduce energy consumption. On the other hand, MRPUC uses many measures to balance the energy of nodes. First, it selects the nodes with more residual energy as cluster heads, and clusters closer to the base station have smaller sizes to preserve some energy during intra-cluster communication for inter-cluster packets forwarding. Second, when regular nodes join clusters, they consider not only the distance to cluster heads but also the residual energy of cluster heads. Third, cluster heads choose those nodes as relay nodes, which have minimum energy consumption for forwarding and maximum residual energy to avoid dying earlier. Simulation results show that MRPUC performs much better than similar protocols.", "which Cluster Properties Cluster size ?", "Unequal", 115.0, 122.0], ["In order to gather information more efficiently, wireless sensor networks (WSNs) are partitioned into clusters. The most of the proposed clustering algorithms do not consider the location of the base station. This situation causes hot spots problem in multi-hop WSNs. Unequal clustering mechanisms, which are designed by considering the base station location, solve this problem. In this paper, we introduce a fuzzy unequal clustering algorithm (EAUCF) which aims to prolong the lifetime of WSNs.
EAUCF adjusts the cluster-head radius considering the residual energy and the distance to the base station parameters of the sensor nodes. This helps decreasing the intra-cluster work of the sensor nodes which are closer to the base station or have lower battery level. We utilize fuzzy logic for handling the uncertainties in cluster-head radius estimation. We compare our algorithm with some popular algorithms in literature, namely LEACH, CHEF and EEUC, according to First Node Dies (FND), Half of the Nodes Alive (HNA) and energy-efficiency metrics. Our simulation results show that EAUCF performs better than the other algorithms in most of the cases. Therefore, EAUCF is a stable and energy-efficient clustering algorithm to be utilized in any real time WSN application.", "which Cluster Properties Cluster size ?", "Unequal", 268.0, 275.0], ["The antifungal activity of Artemisia herba alba was found to be associated with two major volatile compounds isolated from the fresh leaves of the plant. Carvone and piperitone were isolated and identified by GC/MS, GC/IR, and NMR spectroscopy. Antifungal activity was measured against Penicillium citrinum (ATCC 10499) and Mucora rouxii (ATCC 24905). The antifungal activity (IC50) of the purified compounds was estimated to be 5 \u03bcg/ml, 2 \u03bcg/ml against Penicillium citrinum and 7 \u03bcg/ml, 1.5 \u03bcg/ml against Mucora rouxii for carvone and piperitone, respectively.", "which Plant material status ?", "Fresh", 127.0, 132.0], ["Abstract The essential oil obtained by hydrodistillation from the aerial parts of Artemisia herba-alba Asso growing wild in M'sila-Algeria, was investigated using both capillary GC and GC/MS techniques. The oil yield was 1.02% based on dry weight. Sixty-eight components amounting to 94.7% of the oil were identified, 33 of them being reported for the first time in Algerian A. herba-alba oil and 21 of these components have not been previously reported in A. herba-alba oils. The oil contained camphor (19.4%), trans-pinocarveol (16.9%), chrysanthenone (15.8%) and \u03b2-thujone (15%) as major components. Monoterpenoids are the main components (86.1%), and the irregular monoterpenes fraction represented a 3.1% yield.", "which Plant material status ?", "Dry", 236.0, 239.0], ["SUMMARY Artemisia herba-alba Asso has been successfully cultivated in the Tunisian arid zone. However, information regarding the effects of the harvest frequency on its biomass and essential oil yields is very limited. In this study, the effects of three different frequencies of harvesting the upper half of the A. herba-alba plant tuft were compared. The harvest treatments were: harvesting the same individual plants at the flowering stage annually; harvesting the same individual plants at the full vegetative growth stage annually and harvesting the same individual plants every six months. Statistical analyses indicated that all properties studied were affected by the harvest frequency. Essential oil yield depended both on the dry biomass and its essential oil content, and was significantly higher from plants harvested annually at the flowering stage than the other two treatments. The composition of the \u03b2- and \u03b1-thujone-rich oils did not vary throughout the experimental period.", "which Plant material status ?", "Dry", 823.0, 826.0], ["Networking together hundreds or thousands of cheap microsensor nodes allows users to accurately monitor a remote environment by intelligently combining the data from the individual nodes.
These networks require robust wireless communication protocols that are energy efficient and provide low latency. We develop and analyze low-energy adaptive clustering hierarchy (LEACH), a protocol architecture for microsensor networks that combines the ideas of energy-efficient cluster-based routing and media access together with application-specific data aggregation to achieve good performance in terms of system lifetime, latency, and application-perceived quality. LEACH includes a new, distributed cluster formation technique that enables self-organization of large numbers of nodes, algorithms for adapting clusters and rotating cluster head positions to evenly distribute the energy load among all the nodes, and techniques to enable distributed signal processing to save communication resources. Our results show that LEACH can improve system lifetime by an order of magnitude compared with general-purpose multihop approaches.", "which Load balancing ?", "Good", 570.0, 574.0], ["We study the impact of heterogeneity of nodes, in terms of their energy, in wireless sensor networks that are hierarchically clustered. In these networks some of the nodes become cluster heads, aggregate the data of their cluster members and transmit it to the sink. We assume that a percentage of the population of sensor nodes is equipped with additional energy resources\u2014this is a source of heterogeneity which may result from the initial setting or as the operation of the network evolves. We also assume that the sensors are randomly (uniformly) distributed and are not mobile, the coordinates of the sink and the dimensions of the sensor field are known. We show that the behavior of such sensor networks becomes very unstable once the first node dies, especially in the presence of node heterogeneity. Classical clustering protocols assume that all the nodes are equipped with the same amount of energy and as a result, they can not take full advantage of the presence of node heterogeneity. We propose SEP, a heterogeneous-aware protocol to prolong the time interval before the death of the first node (we refer to as stability period), which is crucial for many applications where the feedback from the sensor network must be reliable. SEP is based on weighted election probabilities of each node to become cluster head according to the remaining energy in each node. We show by simulation that SEP always prolongs the stability period compared to (and that the average throughput is greater than) the one obtained using current clustering protocols. We conclude by studying the sensitivity of our SEP protocol to heterogeneity parameters capturing energy imbalance in the network. We found that SEP yields longer stability region for higher values of extra energy brought by more powerful nodes.", "which Node type ?", "heterogeneous", 1017.0, 1030.0], ["This paper presents two instance-level transfer learning based algorithms for cross lingual opinion analysis by transferring useful translated opinion examples from other languages as the supplementary training data for improving the opinion classifier in target language. Starting from the union of small training data in target language and large translated examples in other languages, the Transfer AdaBoost algorithm is applied to iteratively reduce the influence of low quality translated examples. 
Alternatively, starting only from the training data in target language, the Transfer Self-training algorithm is designed to iteratively select high quality translated examples to enrich the training data set. These two algorithms are applied to sentence- and document-level cross lingual opinion analysis tasks, respectively. The evaluations show that these algorithms effectively improve the opinion analysis by exploiting small target language training data and large cross lingual training data.", "which Computational cost ?", "High", 647.0, 651.0], ["The least-significant-bit (LSB)-based approach is a popular type of steganographic algorithms in the spatial domain. However, we find that in most existing approaches, the choice of embedding positions within a cover image mainly depends on a pseudorandom number generator without considering the relationship between the image content itself and the size of the secret message. Thus the smooth/flat regions in the cover images will inevitably be contaminated after data hiding even at a low embedding rate, and this will lead to poor visual quality and low security based on our analysis and extensive experiments, especially for those images with many smooth regions. In this paper, we expand the LSB matching revisited image steganography and propose an edge adaptive scheme which can select the embedding regions according to the size of secret message and the difference between two consecutive pixels in the cover image. For lower embedding rates, only sharper edge regions are used while keeping the other smoother regions as they are. When the embedding rate increases, more edge regions can be released adaptively for data hiding by adjusting just a few parameters. The experimental results evaluated on 6000 natural images with three specific and four universal steganalytic algorithms show that the new scheme can enhance the security significantly compared with typical LSB-based approaches as well as their edge adaptive ones, such as pixel-value-differencing-based approaches, while preserving higher visual quality of stego images at the same time.", "which Invisibility ?", "High", NaN, NaN], ["Reversible data embedding has drawn lots of interest recently. Being reversible, the original digital content can be completely restored. We present a novel reversible data-embedding method for digital images. We explore the redundancy in digital images to achieve very high embedding capacity, and keep the distortion low.", "which Invisibility ?", "Low", 319.0, 322.0], ["The least-significant-bit (LSB)-based approach is a popular type of steganographic algorithms in the spatial domain. However, we find that in most existing approaches, the choice of embedding positions within a cover image mainly depends on a pseudorandom number generator without considering the relationship between the image content itself and the size of the secret message. Thus the smooth/flat regions in the cover images will inevitably be contaminated after data hiding even at a low embedding rate, and this will lead to poor visual quality and low security based on our analysis and extensive experiments, especially for those images with many smooth regions. In this paper, we expand the LSB matching revisited image steganography and propose an edge adaptive scheme which can select the embedding regions according to the size of secret message and the difference between two consecutive pixels in the cover image. 
For lower embedding rates, only sharper edge regions are used while keeping the other smoother regions as they are. When the embedding rate increases, more edge regions can be released adaptively for data hiding by adjusting just a few parameters. The experimental results evaluated on 6000 natural images with three specific and four universal steganalytic algorithms show that the new scheme can enhance the security significantly compared with typical LSB-based approaches as well as their edge adaptive ones, such as pixel-value-differencing-based approaches, while preserving higher visual quality of stego images at the same time.", "which Payload\nCapacity ?", "High", NaN, NaN], ["Reversible data embedding has drawn lots of interest recently. Being reversible, the original digital content can be completely restored. We present a novel reversible data-embedding method for digital images. We explore the redundancy in digital images to achieve very high embedding capacity, and keep the distortion low.", "which Payload\nCapacity ?", "Low", 319.0, 322.0], ["Reversible data embedding has drawn lots of interest recently. Being reversible, the original digital content can be completely restored. We present a novel reversible data-embedding method for digital images. We explore the redundancy in digital images to achieve very high embedding capacity, and keep the distortion low.", "which Robustness against\nimage\nmanipulation ?", "High", 270.0, 274.0], ["The least-significant-bit (LSB)-based approach is a popular type of steganographic algorithms in the spatial domain. However, we find that in most existing approaches, the choice of embedding positions within a cover image mainly depends on a pseudorandom number generator without considering the relationship between the image content itself and the size of the secret message. Thus the smooth/flat regions in the cover images will inevitably be contaminated after data hiding even at a low embedding rate, and this will lead to poor visual quality and low security based on our analysis and extensive experiments, especially for those images with many smooth regions. In this paper, we expand the LSB matching revisited image steganography and propose an edge adaptive scheme which can select the embedding regions according to the size of secret message and the difference between two consecutive pixels in the cover image. For lower embedding rates, only sharper edge regions are used while keeping the other smoother regions as they are. When the embedding rate increases, more edge regions can be released adaptively for data hiding by adjusting just a few parameters. The experimental results evaluated on 6000 natural images with three specific and four universal steganalytic algorithms show that the new scheme can enhance the security significantly compared with typical LSB-based approaches as well as their edge adaptive ones, such as pixel-value-differencing-based approaches, while preserving higher visual quality of stego images at the same time.", "which Robustness against\nimage\nmanipulation ?", "High", NaN, NaN], ["Reversible data embedding has drawn lots of interest recently. Being reversible, the original digital content can be completely restored. We present a novel reversible data-embedding method for digital images. 
We explore the redundancy in digital images to achieve very high embedding capacity, and keep the distortion low.", "which Robustness against\nstatistical attacks ?", "High", 270.0, 274.0], ["This paper presents a novel reversible watermarking scheme. The proposed scheme uses an interpolation technique to generate residual values named as interpolation error. Additionally, by applying the additive expansion to these interpolation-errors, this project achieves a highly efficient reversible watermarking scheme which can guarantee high image quality without sacrificing embedding capacity. The experimental results show the proposed reversible scheme provides a higher capacity and achieves better image quality for watermarked images. The computational cost of the proposed scheme is small.", "which Robustness against\nstatistical attacks ?", "High", 336.0, 340.0], ["The least-significant-bit (LSB)-based approach is a popular type of steganographic algorithms in the spatial domain. However, we find that in most existing approaches, the choice of embedding positions within a cover image mainly depends on a pseudorandom number generator without considering the relationship between the image content itself and the size of the secret message.
Thus the smooth/flat regions in the cover images will inevitably be contaminated after data hiding even at a low embedding rate, and this will lead to poor visual quality and low security based on our analysis and extensive experiments, especially for those images with many smooth regions. In this paper, we expand the LSB matching revisited image steganography and propose an edge adaptive scheme which can select the embedding regions according to the size of secret message and the difference between two consecutive pixels in the cover image. For lower embedding rates, only sharper edge regions are used while keeping the other smoother regions as they are. When the embedding rate increases, more edge regions can be released adaptively for data hiding by adjusting just a few parameters. The experimental results evaluated on 6000 natural images with three specific and four universal steganalytic algorithms show that the new scheme can enhance the security significantly compared with typical LSB-based approaches as well as their edge adaptive ones, such as pixel-value-differencing-based approaches, while preserving higher visual quality of stego images at the same time.", "which Tolerance to RS\nSteganalysis ?", "High", NaN, NaN], ["Reversible data embedding has drawn lots of interest recently. Being reversible, the original digital content can be completely restored. We present a novel reversible data-embedding method for digital images. We explore the redundancy in digital images to achieve very high embedding capacity, and keep the distortion low.", "which Tolerance to RS\nSteganalysis ?", "Low", 319.0, 322.0], ["The least-significant-bit (LSB)-based approach is a popular type of steganographic algorithms in the spatial domain. However, we find that in most existing approaches, the choice of embedding positions within a cover image mainly depends on a pseudorandom number generator without considering the relationship between the image content itself and the size of the secret message. Thus the smooth/flat regions in the cover images will inevitably be contaminated after data hiding even at a low embedding rate, and this will lead to poor visual quality and low security based on our analysis and extensive experiments, especially for those images with many smooth regions. In this paper, we expand the LSB matching revisited image steganography and propose an edge adaptive scheme which can select the embedding regions according to the size of secret message and the difference between two consecutive pixels in the cover image. For lower embedding rates, only sharper edge regions are used while keeping the other smoother regions as they are. When the embedding rate increases, more edge regions can be released adaptively for data hiding by adjusting just a few parameters. The experimental results evaluated on 6000 natural images with three specific and four universal steganalytic algorithms show that the new scheme can enhance the security significantly compared with typical LSB-based approaches as well as their edge adaptive ones, such as pixel-value-differencing-based approaches, while preserving higher visual quality of stego images at the same time.", "which Utilization of edge\nareas ?", "High", NaN, NaN], ["Reversible data embedding has drawn lots of interest recently. Being reversible, the original digital content can be completely restored. We present a novel reversible data-embedding method for digital images. We explore the redundancy in digital images to achieve very high embedding capacity, and keep the distortion low.", "which Utilization of edge\nareas ?", "Low", 319.0, 322.0], ["Background Previous studies have demonstrated a correlation between surgical experience and performance on a virtual reality arthroscopy simulator but only provided single time point evaluations. Additional longitudinal studies are necessary to confirm the validity of virtual reality simulation before these teaching aids can be more fully recommended for surgical education. Hypothesis Subjects will show improved performance on simulator retesting several years after an initial baseline evaluation, commensurate with their advanced surgical experience. Study Design Controlled laboratory study. Methods After gaining further arthroscopic experience, 10 orthopaedic residents underwent retesting 3 years after initial evaluation on a Procedicus virtual reality arthroscopy simulator. Using a paired t test, simulator parameters were compared in each subject before and after additional arthroscopic experience. Subjects were evaluated for time to completion, number of probe collisions with the tissues, average probe velocity, and distance traveled with the tip of the simulated probe compared to an optimal computer-determined distance. In addition, to evaluate consistency of simulator performance, results were compared to historical controls of equal experience. Results Subjects improved significantly (P < .02 for all) in the 4 simulator parameters: completion time (\u221251%), probe collisions (\u221229%), average velocity (+122%), and distance traveled (\u221232%). With the exception of probe velocity, there were no significant differences between the performance of this group and that of a historical group with equal experience, indicating that groups with similar arthroscopic experience consistently demonstrate equivalent scores on the simulator. Conclusion Subjects significantly improved their performance on simulator retesting 3 years after initial evaluation.
Additionally, across independent groups with equivalent surgical experience, similar performance can be expected on simulator parameters; thus it may eventually be possible to establish simulator benchmarks to indicate likely arthroscopic skill. Clinical Relevance These results further validate the use of surgical simulation as an important tool for the evaluation of surgical skills.", "which Evaluator ?", "Independent", 1899.0, 1910.0], ["We consider distribution systems with a central warehouse and many retailers that stock a number of different products. Deterministic demand occurs at the retailers for each product. The warehouse acts as a break-bulk center and does not keep any inventory. The products are delivered from the warehouse to the retailers by vehicles that combine the deliveries to several retailers into efficient vehicle routes. The objective is to determine replenishment policies that specify the delivery quantities and the vehicle routes used for the delivery, so as to minimize the long-run average inventory and transportation costs. A new heuristic that develops a stationary nested joint replenishment policy for the problem is presented in this paper. Unlike existing methods, the proposed heuristic is capable of solving problems involving distribution systems with multiple products. Results of a computational study on randomly generated single-product problems are also presented.", "which Fleet size ?", "multiple", 860.0, 868.0], ["This work is motivated by the need to solve the inventory routing problem when implementing a business practice called vendor managed inventory replenishment (VMI). With VMI, vendors monitor their customers' inventories and decide when and how much inventory should be replenished at each customer. The inventory routing problem attempts to coordinate inventory replenishment and transportation in such a way that the cost is minimized over the long run. We formulate a Markov decision process model of the stochastic inventory routing problem and propose approximation methods to find good solutions with reasonable computational effort. We indicate how the proposed approach can be used for other Markov decision processes involving the control of multiple resources.", "which Fleet size ?", "multiple", 750.0, 758.0], ["We analyze three queueing control problems that model a dynamic stochastic distribution system, where a single capacitated vehicle serves a finite number of retailers in a make-to-stock fashion. The objective in each of these vehicle routing and inventory problems is to minimize the long run average inventory (holding and backordering) and transportation cost. In all three problems, the controller dynamically specifies whether a vehicle at the warehouse should idle or embark with a full load. In the first problem, the vehicle must travel along a prespecified (TSP) tour of all retailers, and the controller dynamically decides how many units to deliver to each retailer. In the second problem, the vehicle delivers an entire load to one retailer (direct shipping) and the controller decides which retailer to visit next. The third problem allows the additional dynamic choice between the TSP and direct shipping options. Motivated by existing heavy traffic limit theorems, we make a time scale decomposition assumption that allows us to approximate these queueing control problems by diffusion control problems, which are explicitly solved in the fixed route problems, and numerically solved in the dynamic routing case.
Simulation experiments confirm that the heavy traffic approximations are quite accurate over a broad range of problem parameters. Our results lead to some new observations about the behavior of this complex system.", "which Fleet size ?", "single", 104.0, 110.0], ["This paper addresses a novel liner shipping fleet deployment problem characterized by cargo transshipment, multiple container routing options and uncertain demand, with the objective of maximizing the expected profit. This problem is formulated as a stochastic program and solved by the sample average approximation method. In this technique the objective function of the stochastic program is approximated by a sample average estimate derived from a random sample, and then the resulting deterministic program is solved. This process is repeated with different samples to obtain a good candidate solution along with the statistical estimate of its optimality gap. We apply the proposed model to a case study inspired from real-world problems faced by a major liner shipping company. Results show that the case is efficiently solved to 1% of relative optimality gap at 95% confidence level.", "which oute ?", "multiple", 107.0, 115.0], ["A liner shipping company seeks to provide liner services with shorter transit time compared with the benchmark of market-level transit time because of the ever-increasing competition. When the itineraries of its liner service routes are determined, the liner shipping company designs the schedules of the liner routes such that the wait time at transshipment ports is minimized. As a result of transshipment, multiple paths are available for delivering containers from the origin port to the destination port. Therefore, the medium-term (3 to 6 months) schedule design problem and the operational-level container-routing problem must be investigated simultaneously. The schedule design and container-routing problems were formulated by minimization of the sum of the total transshipment cost and penalty cost associated with longer transit time than the market-level transit time, minus the bonus for shorter transit time. The formulation is nonlinear, noncontinuous, and nonconvex. A genetic local search approach was developed to find good solutions to the problem. The proposed solution method was applied to optimize the Asia\u2013Europe\u2013Oceania liner shipping services of a global liner company.", "which oute ?", "multiple", 409.0, 417.0], ["Consider the problem of allocating multiple products by a distributor with limited capacity (truck size), who has a fixed sequence of customers (retailers) whose demands are unknown. Each time the distributor visits a customer, he gets information about the realization of the demand for this customer, but he does not yet know the demands of the following customers. The decision faced by the distributor is how much to allocate to each customer given that the penalties for not satisfying demand are not identical. In addition, we optimally solve the problem of loading the truck with the multiple products, given the limited storage capacity. This framework can also be used for the general problem of seat allocation in the airline industry. As with the truck in the distribution problem, the airplane has limited capacity. 
A critical decision is how to allocate the available seats between early and late reservations (sequence of customers), for the different fare classes (multiple products), where the revenues from discount (early) and regular (late) passengers are different.", "which outing ?", "multiple", 35.0, 43.0], ["We consider distribution systems with a central warehouse and many retailers that stock a number of different products. Deterministic demand occurs at the retailers for each product. The warehouse acts as a break-bulk center and does not keep any inventory. The products are delivered from the warehouse to the retailers by vehicles that combine the deliveries to several retailers into efficient vehicle routes. The objective is to determine replenishment policies that specify the delivery quantities and the vehicle routes used for the delivery, so as to minimize the long-run average inventory and transportation costs. A new heuristic that develops a stationary nested joint replenishment policy for the problem is presented in this paper. Unlike existing methods, the proposed heuristic is capable of solving problems involving distribution systems with multiple products. Results of a computational study on randomly generated single-product problems are also presented.", "which outing ?", "multiple", 860.0, 868.0], ["This work is motivated by the need to solve the inventory routing problem when implementing a business practice called vendor managed inventory replenishment (VMI). With VMI, vendors monitor their customers' inventories and decide when and how much inventory should be replenished at each customer. The inventory routing problem attempts to coordinate inventory replenishment and transportation in such a way that the cost is minimized over the long run. We formulate a Markov decision process model of the stochastic inventory routing problem and propose approximation methods to find good solutions with reasonable computational effort. We indicate how the proposed approach can be used for other Markov decision processes involving the control of multiple resources.", "which outing ?", "multiple", 750.0, 758.0], ["We consider the problem of integrating inventory control and vehicle routing into a cost-effective strategy for a distribution system consisting of one depot and many geographically dispersed retailers. All stock enters the system through the depot and is distributed to the retailers by vehicles of limited constant capacity. We assume that each one of the retailers faces a constant, retailer specific, demand rate and that inventory is charged only at the retailers but not at the depot. We provide a lower bound on the long run average cost over all inventory-routing strategies. We use this lower bound to show that the effectiveness of direct shipping over all inventory-routing strategies is at least 94% whenever the Economic Lot Size of each of the retailers is at least 71% of vehicle capacity. The effectiveness deteriorates as the Economic Lot Sizes become smaller. These results are important because they provide useful guidelines as to when to embark into the much more difficult task of finding cost-effective routes. Additional advantages of direct shipping are lower in-transit inventory and ease of coordination.", "which outing ?", "direct", 642.0, 648.0], ["In this paper, we consider one warehouse/multiple retailer systems with transportation costs. The planning horizon is infinite and the warehouse keeps no central inventory. 
It is shown that the fully loaded direct shipping strategy is optimal among all possible shipping/allocation strategies if the truck capacity is smaller than a certain quantity, and a bound is provided for the general case.", "which outing ?", "direct", 207.0, 213.0], ["Vendor managed inventory replenishment is a business practice in which vendors monitor their customers' inventories, and decide when and how much inventory should be replenished. The inventory routing problem addresses the coordination of inventory management and transportation. The ability to solve the inventory routing problem contributes to the realization of the potential savings in inventory and transportation costs brought about by vendor managed inventory replenishment. The inventory routing problem is hard, especially if a large number of customers is involved. We formulate the inventory routing problem as a Markov decision process, and we propose approximation methods to find good solutions with reasonable computational effort. Computational results are presented for the inventory routing problem with direct deliveries.", "which outing ?", "direct", 822.0, 828.0], ["A recent survey of the empirical studies examining the effects of exchange rate volatility on international trade concluded that \"the large majority of empirical studies... are unable to establish a systematically significant link between measured exchange rate variability and the volume of international trade, whether on an aggregated or on a bilateral basis\" (International Monetary Fund, Exchange Rate Volatility and World Trade, Washington, July 1984, p. 36). A recent paper by M.A. Akhtar and R.S. Hilton (\"Exchange Rate Uncertainty and International Trade,\" Federal Reserve Bank of New York, May 1984), in contrast, suggests that exchange rate volatility, as measured by the standard deviation of indices of nominal effective exchange rates, has had significant adverse effects on the trade in manufactures of the United States and the Federal Republic of Germany. The purpose of the present study is to test the robustness of Akhtar and Hilton's empirical results, with their basic theoretical framework taken as given. The study extends their analysis to include France, Japan, and the United Kingdom; it then examines the robustness of the results with respect to changes in the choice of sample period, volatility measure, and estimation techniques. The main conclusion of the analysis is that the methodology of Akhtar and Hilton fails to establish a systematically significant link between exchange rate volatility and the volume of international trade. This is not to say that significant adverse effects cannot be detected in individual cases, but rather that, viewed in the large, the results tend to be insignificant or unstable. Specifically, the results suggest that straightforward application of Akhtar and Hilton's methodology to three additional countries (France, Japan, and the United Kingdom) yields mixed results; that their methodology seems to be flawed in several respects, and that correction for such flaws has the effect of weakening their conclusions; that the estimates are quite sensitive to fairly minor variations in methodology; and that \"revised\" estimates for the five countries do not, for the most part, support the hypothesis that exchange rate volatility has had a systematically adverse effect on trade. 
/// Un r\u00e9cent aper\u00e7u des \u00e9tudes empiriques consacr\u00e9es aux effets de l'instabilit\u00e9 des taux de change sur le commerce international conclut que \"dans leur grande majorit\u00e9, les \u00e9tudes empiriques... ne r\u00e9ussissent pas \u00e0 \u00e9tablir un lien significatif et syst\u00e9matique entre la variabilit\u00e9 mesur\u00e9e des taux de change et le volume du commerce international, que celui-ci soit exprim\u00e9 sous forme globale ou bilat\u00e9rale\" (Fonds mon\u00e9taire international, Exchange Rate Volatility and World Trade, Washington, juillet 1984, page 36). Par contre, un article publi\u00e9 r\u00e9cemment par M.A. Akhtar et R.S. Hilton (\"Exchange Rate Uncertainty and International Trade\", Federal Reserve Bank of New York, mai 1984) soutient que l'instabilit\u00e9 des taux de change, mesur\u00e9e par l'\u00e9cart type des indices des taux de change effectifs nominaux, a eu un effet d\u00e9favorable significatif sur le commerce de produits manufactur\u00e9s des Etats-Unis et de la R\u00e9publique f\u00e9d\u00e9rale d'Allemagne. La pr\u00e9sente \u00e9tude a pour objet d'\u00e9valuer la solidit\u00e9 des r\u00e9sultats empiriques pr\u00e9sent\u00e9s par Akhtar et Hilton, en prenant comme donn\u00e9 leur cadre th\u00e9orique de base. L'auteur \u00e9tend l'analyse au cas de la France, du Japon et du Royaume-Uni; elle cherche ensuite dans quelle mesure ces r\u00e9sultats restent valables si l'on modifie la p\u00e9riode de r\u00e9f\u00e9rence, la mesure de l'instabilit\u00e9 et les techniques d'estimation. La principale conclusion de cette \u00e9tude est que la m\u00e9thode utilis\u00e9e par Akhtar et Hilton n'\u00e9tablit pas de lien significatif et syst\u00e9matique entre l'instabilit\u00e9 des taux de change et le volume du commerce international. Ceci ne veut pas dire que l'on ne puisse pas constater dans certains cas particuliers des effets d\u00e9favorables significatifs, mais plut\u00f4t que, pris dans leur ensemble, les r\u00e9sultats sont peu significatifs ou peu stables. Plus pr\u00e9cis\u00e9ment, cette \u00e9tude laisse entendre qu'une application syst\u00e9matique de la m\u00e9thode d'Akhtar et Hilton \u00e0 trois pays suppl\u00e9mentaires (France, Japon et Royaume-Uni) donne des r\u00e9sultats mitig\u00e9s; que leur m\u00e9thode semble pr\u00e9senter plusieurs d\u00e9fauts et que la correction de ces d\u00e9fauts a pour effet d'affaiblir la port\u00e9e de leurs conclusions; que leurs estimations sont tr\u00e8s sensibles \u00e0 des variations relativement mineures de la m\u00e9thode utilis\u00e9e et que la plupart des estimations \"r\u00e9vis\u00e9es\" pour les cinq pays ne confirment pas l'hypoth\u00e8se selon laquelle l'instabilit\u00e9 des taux de change aurait eu un effet syst\u00e9matiquement n\u00e9gatif sur le commerce international. /// En un examen reciente de los estudios emp\u00edricos sobre los efectos de la inestabilidad de los tipos de cambio en el comercio internacional se llega a la conclusi\u00f3n de que \"la gran mayor\u00eda de estos an\u00e1lisis emp\u00edricos no consiguen demostrar sistem\u00e1ticamente un v\u00ednculo significativo entre los diferentes grados de variabilidad cambiaria y el volumen del comercio internacional, tanto sea en t\u00e9rminos agregados como bilaterales\". (Fondo Monetario Internacional, Exchange Rate Volatility and World Trade, Washington, julio de 1984, p\u00e1g. 36). Un estudio reciente de M.A. Akhtar y R.S.
Hilton (\"Exchange Rate Uncertainty and International Trade,\" Banco de la Reserva Federal de Nueva York, mayo de 1984) indica, por el contrario, que la inestabilidad de los tipos de cambio, expresada segAon la desviaciA\u00b3n estAindar de los A\u00adndices de los tipos de cambio efectivos nominales, ha tenido efectos negativos considerables en el comercio de productos manufacturados de Estados Unidos y de la RepAoblica Federal de Alemania. El presente estudio tiene por objeto comprobar la solidez de los resultados empA\u00adricos de Akhtar y Hilton, tomando como base de partida su marco teA\u00b3rico bAisico. El estudio amplA\u00ada su anAilisis incluyendo a Francia, JapA\u00b3n y el Reino Unido, pasando luego a examinar la solidez de los resultados con respecto a variaciones en la selecciA\u00b3n del perA\u00adodo de la muestra, medida de la inestabilidad y tA\u00a9cnicas de estimaciA\u00b3n. La conclusiA\u00b3n principal del anAilisis es que la metodologA\u00ada de Akhtar y Hilton no logra establecer un vA\u00adnculo significativo sistemAitico entre la inestabilidad de los tipos de cambio y el volumen del comercio internacional. Esto no quiere decir que no puedan obsevarse en casos especA\u00adficos efectos negativos importantes, sino mAis bien que, en tA\u00a9rminos generales, los resultados no suelen ser ni considerables ni estables. En concreto, de los resultados se desprende que la aplicaciA\u00b3n directa de la metodologA\u00ada de Akhtar y Hilton a tres nuevos paA\u00adses (Francia, JapA\u00b3n y el Reino Unido) arroja resultados dispares; que esta metodologA\u00ada parece ser defectuosa en varios aspectos y que la correcciA\u00b3n de tales deficiencias tiene como efecto el debilitamiento de sus conclusiones; que las estimaciones son muy sensibles a modificaciones poco importantes de la metodologA\u00ada, y que las estimaciones \"revisadas\" para los cinco paA\u00adses no confirman, en su mayor parte, la hipotA\u00a9sis de que la inestabilidad de los tipos de cambio ha ejercido un efecto negativo sistemAitico en el comercio exterior.", "which Nominal or real exchange rate used ?", "Nominal", 716.0, 723.0], ["What is the effect of nominal exchange rate variability on trade? I argue that the methods conventionally used to answer this perennial question are plagued by a variety of sources of systematic bias. I propose a novel approach that simultaneously addresses all of these biases, and present new estimates from a broad sample of countries from 1970 to 1997. The answer to the question is: Not much.", "which Nominal or real exchange rate used ?", "Nominal", 22.0, 29.0], ["This paper uses VAR models to investigate the impact of real exchange rate volatility on U.S. bilateral imports from the United Kingdom, France, Germany, Japan and Canada. The VAR systems include U.S. and foreign macro variables, and are estimated separately for each country. The major results suggest that the effect of volatility on imports is weak, although permanent shocks to volatility do have a negative impact on this measure of trade, and those effects are relatively more important over the flexible rate period. Copyright 1989 by MIT Press.", "which Nominal or real exchange rate used ?", "Real", 56.0, 60.0], ["Unless very specific assumptions are made, theory alone cannot determine the sign of the relation between real exchange rate uncertainty and exports. 
On the one hand, convexity of the profit function with respect to prices implies that an increase in price uncertainty raises the expected returns in the export sector. On the other, potential asymmetries in the cost of adjusting factors of production (for example, investment irreversibility) and risk aversion tend to make the uncertainty-exports relation negative. This article examines these issues using a simple risk-aversion model. Export equations allowing for uncertainty are then estimated for six developing countries. Contrary to the ambiguity of the theory, the empirical relation is strongly negative. Estimates indicate that a 5 percent increase in the annual standard deviation of the real exchange rate can reduce exports by 2 to 30 percent in the short run. These effects are substantially magnified in the long run.", "which Nominal or real exchange rate used ?", "Real", 106.0, 110.0], ["The paper examines the impact of exchange rate volatility on the exports of five Asian countries. The countries are Turkey, South Korea, Malaysia, Indonesia and Pakistan. The impact of a volatility term on exports is examined by using an Engle-Granger residual-based cointegrating technique. The results indicate that the exchange rate volatility reduced real exports for these countries. This might mean that producers in these countries are risk-averse. The producers will prefer to sell in domestic markets rather than foreign markets if the exchange rate volatility increases.", "which Nominal or real exchange rate used ?", "Real", 355.0, 359.0], ["This paper investigates the impact of real exchange rate volatility on Turkey\u2019s exports to its most important trading partners using quarterly data for the period 1982 to 2001. Cointegration and error correction modeling approaches are applied, and estimates of the cointegrating relations are obtained using Johansen\u2019s multivariate procedure. Estimates of the short-run dynamics are obtained through the error correction technique. Our results indicate that exchange rate volatility has a significant positive effect on export volume in the long run. This result may indicate that firms operating in a small economy, like Turkey, have little option for dealing with increased exchange rate risk.", "which Nominal or real exchange rate used ?", "Real", 38.0, 42.0], ["Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols.
In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.", "which CH election ?", "Random", NaN, NaN], ["Sensor webs consisting of nodes with limited battery power and wireless communications are deployed to collect useful information from the field. Gathering sensed information in an energy efficient manner is critical to operate the sensor network for a long period of time. In W. Heinzelman et al. (Proc. Hawaii Conf. on System Sci., 2000), a data collection problem is defined where, in a round of communication, each sensor node has a packet to be sent to the distant base station. If each node transmits its sensed data directly to the base station then it will deplete its power quickly. The LEACH protocol presented by W. Heinzelman et al. is an elegant solution where clusters are formed to fuse data before transmitting to the base station. By randomizing the cluster heads chosen to transmit to the base station, LEACH achieves a factor of 8 improvement compared to direct transmissions, as measured in terms of when nodes die. In this paper, we propose PEGASIS (power-efficient gathering in sensor information systems), a near optimal chain-based protocol that is an improvement over LEACH. In PEGASIS, each node communicates only with a close neighbor and takes turns transmitting to the base station, thus reducing the amount of energy spent per round. Simulation results show that PEGASIS performs better than LEACH by about 100 to 300% when 1%, 20%, 50%, and 100% of nodes die for different network sizes and topologies.", "which CH election ?", "Random", NaN, NaN], ["Wireless sensor networks with thousands of tiny sensor nodes are expected to find wide applicability and increasing deployment in coming years, as they enable reliable monitoring and analysis of the environment. In this paper we propose a modification to a well-known protocol for sensor networks called Low Energy Adaptive Clustering Hierarchy (LEACH). This last is designed for sensor networks where end-user wants to remotely monitor the environment. In such situation, the data from the individual nodes must be sent to a central base station, often located far from the sensor network, through which the end-user can access the data. In this context our contribution is represented by building a two-level hierarchy to realize a protocol that saves better the energy consumption. Our TL-LEACH uses random rotation of local cluster base stations (primary cluster-heads and secondary cluster-heads). In this way we build, where it is possible, a two-level hierarchy. This permits to better distribute the energy load among the sensors in the network especially when the density of network is higher. TL-LEACH uses localized coordination to enable scalability and robustness. We evaluated the performances of our protocol with NS-2 and we observed that our protocol outperforms the LEACH in terms of energy consumption and lifetime of the network.", "which CH election ?", "Random", 804.0, 810.0], ["Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks.
Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.", "which Clustering Process CH Election ?", "Random", NaN, NaN], ["Clustering is a standard approach for achieving efficient and scalable performance in wireless sensor networks. Most of the published clustering algorithms strive to generate the minimum number of disjoint clusters. However, we argue that guaranteeing some degree of overlap among clusters can facilitate many applications, like inter-cluster routing, topology discovery and node localization, recovery from cluster head failure, etc. We formulate the overlapping multi-hop clustering problem as an extension to the k-dominating set problem. Then we propose MOCA; a randomized distributed multi-hop clustering algorithm for organizing the sensors into overlapping clusters. We validate MOCA in a simulated environment and analyze the effect of different parameters, e.g. node density and network connectivity, on its performance. The simulation results demonstrate that MOCA is scalable, introduces low overhead and produces approximately equal-sized clusters.", "which Clustering Process CH Election ?", "Random", NaN, NaN], ["A wireless network consisting of a large number of small sensors with low-power transceivers can be an effective tool for gathering data in a variety of environments. The data collected by each sensor is communicated through the network to a single processing center that uses all reported data to determine characteristics of the environment or detect an event. The communication or message passing process must be designed to conserve the limited energy resources of the sensors. Clustering sensors into groups, so that sensors communicate information only to clusterheads and then the clusterheads communicate the aggregated information to the processing center, may save energy. In this paper, we propose a distributed, randomized clustering algorithm to organize the sensors in a wireless sensor network into clusters. We then extend this algorithm to generate a hierarchy of clusterheads and observe that the energy savings increase with the number of levels in the hierarchy. Results in stochastic geometry are used to derive solutions for the values of parameters of our algorithm that minimize the total energy spent in the network when all sensors report data through the clusterheads to the processing center.", "which Clustering Process CH Election ?", "Random", NaN, NaN], ["Prolonged network lifetime, scalability, and load balancing are important requirements for many ad-hoc sensor network applications. Clustering sensor nodes is an effective technique for achieving these goals. 
In this work, we propose a new energy-efficient approach for clustering nodes in ad-hoc sensor networks. Based on this approach, we present a protocol, HEED (hybrid energy-efficient distributed clustering), that periodically selects cluster heads according to a hybrid of their residual energy and a secondary parameter, such as node proximity to its neighbors or node degree. HEED does not make any assumptions about the distribution or density of nodes, or about node capabilities, e.g., location-awareness. The clustering process terminates in O(1) iterations, and does not depend on the network topology or size. The protocol incurs low overhead in terms of processing cycles and messages exchanged. It also achieves fairly uniform cluster head distribution across the network. A careful selection of the secondary clustering parameter can balance load among cluster heads. Our simulation results demonstrate that HEED outperforms weight-based clustering protocols in terms of several cluster characteristics. We also apply our approach to a simple application to demonstrate its effectiveness in prolonging the network lifetime and supporting data aggregation.", "which Clustering Process CH Election ?", "Hybrid", 367.0, 373.0], ["Wireless sensor networks are expected to find wide applicability and increasing deployment in the near future. In this paper, we propose a formal classification of sensor networks, based on their mode of functioning, as proactive and reactive networks. Reactive networks, as opposed to passive data collecting proactive networks, respond immediately to changes in the relevant parameters of interest. We also introduce a new energy efficient protocol, TEEN (Threshold sensitive Energy Efficient sensor Network protocol) for reactive networks. We evaluate the performance of our protocol for a simple temperature sensing application. In terms of energy efficiency, our protocol has been observed to outperform existing conventional sensor network protocols.", "which Nature ?", "Reactive", 234.0, 242.0], ["Containerized liner trades have been growing steadily since the globalization of world economies intensified in the early 1990s. However, these trades are typically imbalanced in terms of the numbers of inbound and outbound containers. As a result, the relocation of empty containers has become one of the major problems faced by liner operators. In this paper, we consider the dynamic empty container allocation problem where we need to reposition empty containers and to determine the number of leased containers needed to meet customers' demand over time. We formulate this problem as a two-stage stochastic network: in stage one, the parameters such as supplies, demands, and ship capacities for empty containers are deterministic; whereas in stage two, these parameters are random variables. We need to make decisions in stage one such that the total of the stage one cost and the expected stage two cost is minimized. By taking advantage of the network structure, we show how a stochastic quasi-gradient method and a stochastic hybrid approximation procedure can be applied to solve the problem. In addition, we propose some new variations of these methods that seem to work faster in practice. 
We conduct numerical tests to evaluate the value of the two-stage stochastic model over a rolling horizon environment and to investigate the behavior of the solution methods with different implementations.", "which Market ?", "Stochastic", 601.0, 611.0], ["This paper addresses a novel liner shipping fleet deployment problem characterized by cargo transshipment, multiple container routing options and uncertain demand, with the objective of maximizing the expected profit. This problem is formulated as a stochastic program and solved by the sample average approximation method. In this technique the objective function of the stochastic program is approximated by a sample average estimate derived from a random sample, and then the resulting deterministic program is solved. This process is repeated with different samples to obtain a good candidate solution along with the statistical estimate of its optimality gap. We apply the proposed model to a case study inspired from real-world problems faced by a major liner shipping company. Results show that the case is efficiently solved to 1% of relative optimality gap at 95% confidence level.", "which Market ?", "Stochastic", 250.0, 260.0], ["Eye localization is an important part in face recognition system, because its precision closely affects the performance of face recognition. Although various methods have already achieved high precision on the face images with high quality, their precision will drop on low quality images. In this paper, we propose a robust eye localization method for low quality face images to improve the eye detection rate and localization precision. First, we propose a probabilistic cascade (P-Cascade) framework, in which we reformulate the traditional cascade classifier in a probabilistic way. The P-Cascade can give chance to each image patch contributing to the final result, regardless the patch is accepted or rejected by the cascade. Second, we propose two extensions to further improve the robustness and precision in the P-Cascade framework. These are: (1) extending feature set, and (2) stacking two classifiers in multiple scales. Extensive experiments on JAFFE, BioID, LFW and a self-collected video surveillance database show that our method is comparable to state-of-the-art methods on high quality images and can work well on low quality images. This work supplies a solid base for face recognition applications under unconstrained or surveillance environments.", "which Challenges ?", "scale", NaN, NaN], ["In this paper, 2D cascaded AdaBoost, a novel classifier designing framework, is presented and applied to eye localization. By the term "2D", we mean that in our method there are two cascade classifiers in two directions: The first one is a cascade designed by bootstrapping the positive samples, and the second one, as the component classifiers of the first one, is cascaded by bootstrapping the negative samples. The advantages of the 2D structure include: (1) it greatly facilitates the classifier designing on huge-scale training set; (2) it can easily deal with the significant variations within the positive (or negative) samples; (3) both the training and testing procedures are more efficient. 
The proposed structure is applied to eye localization and evaluated on four public face databases; extensive experimental results verified the effectiveness, efficiency, and robustness of the proposed method.", "which Challenges ?", "scale", 518.0, 523.0], ["This paper presents a new eye localization method via Multiscale Sparse Dictionaries (MSD). We built a pyramid of dictionaries that models context information at multiple scales. Eye locations are estimated at each scale by fitting the image through sparse coefficients of the dictionary. By using context information, our method is robust to various eye appearances. The method also works efficiently since it avoids sliding a search window in the image during localization. The experiments in BioID database prove the effectiveness of our method.", "which Challenges ?", "scale", 215.0, 220.0], ["Focused on facial features localization on multi-view face arbitrarily rotated in plane, a novel detection algorithm based on improved SVM is proposed. First, the face is located by the rotation invariant multi-view (RIMV) face detector and its pose in plane is corrected by rotation. After the searching ranges of the facial features are determined, the crossing detection method which uses the brow-eye and nose-mouth features and the improved SVM detectors trained by large scale multi-view facial features examples is adopted to find the candidate eye, nose and mouth regions. Based on the fact that the window region with higher value in the SVM discriminant function is relatively closer to the object, and the same object tends to be repeatedly detected by near windows, the candidate eyes, nose and mouth regions are filtered and merged to refine their location on the multi-view face. Experiments show that the algorithm has very good accuracy and robustness to the facial features localization with expression and arbitrary face pose in complex background.", "which Challenges ?", "Background", 1053.0, 1063.0], ["The ubiquitous application of eye tracking is precluded by the requirement of dedicated and expensive hardware, such as infrared high definition cameras. Therefore, systems based solely on appearance (i.e. not involving active infrared illumination) are being proposed in literature. However, although these systems are able to successfully locate eyes, their accuracy is significantly lower than commercial eye tracking devices. Our aim is to perform very accurate eye center location and tracking, using a simple Web cam. By means of a novel relevance mechanism, the proposed method makes use of isophote properties to gain invariance to linear lighting changes (contrast and brightness), to achieve rotational invariance and to keep low computational costs. In this paper we test our approach for accurate eye location and robustness to changes in illumination and pose, using the BioID and the Yale Face B databases, respectively. We demonstrate that our system can achieve a considerable improvement in accuracy over state of the art techniques.", "which Challenges ?", "Lighting", 647.0, 655.0], ["Focused on facial features localization on multi-view face arbitrarily rotated in plane, a novel detection algorithm based on improved SVM is proposed. First, the face is located by the rotation invariant multi-view (RIMV) face detector and its pose in plane is corrected by rotation. 
After the searching ranges of the facial features are determined, the crossing detection method which uses the brow-eye and nose-mouth features and the improved SVM detectors trained by large scale multi-view facial features examples is adopted to find the candidate eye, nose and mouth regions. Based on the fact that the window region with higher value in the SVM discriminant function is relatively closer to the object, and the same object tends to be repeatedly detected by near windows, the candidate eyes, nose and mouth regions are filtered and merged to refine their location on the multi-view face. Experiments show that the algorithm has very good accuracy and robustness to the facial features localization with expression and arbitrary face pose in complex background.", "which Challenges ?", "expression", 1007.0, 1017.0], ["Eye localization is necessary for face recognition and related application areas. Most of eye localization algorithms reported thus far still need to be improved about precision and computational time for successful applications. In this paper, we propose an improved eye localization method based on multi-scale Gabor feature vector models. The proposed method first tries to locate eyes in the downscaled face image by utilizing Gabor Jet similarity between Gabor feature vector at an initial eye coordinates and the eye model bunch of the corresponding scale. The proposed method finally locates eyes in the original input face image after it processes in the same way recursively in each scaled face image by using the eye coordinates localized in the downscaled image as initial eye coordinates. Experiments verify that our proposed method improves the precision rate without causing much computational overhead compared with other eye localization methods reported in the previous researches.", "which Challenges ?", "scale", 307.0, 312.0], ["Focused on facial features localization on multi-view face arbitrarily rotated in plane, a novel detection algorithm based on improved SVM is proposed. First, the face is located by the rotation invariant multi-view (RIMV) face detector and its pose in plane is corrected by rotation. After the searching ranges of the facial features are determined, the crossing detection method which uses the brow-eye and nose-mouth features and the improved SVM detectors trained by large scale multi-view facial features examples is adopted to find the candidate eye, nose and mouth regions. Based on the fact that the window region with higher value in the SVM discriminant function is relatively closer to the object, and the same object tends to be repeatedly detected by near windows, the candidate eyes, nose and mouth regions are filtered and merged to refine their location on the multi-view face. Experiments show that the algorithm has very good accuracy and robustness to the facial features localization with expression and arbitrary face pose in complex background.", "which Challenges ?", "scale", 474.0, 479.0], ["Multi-agent systems (MASs) have received tremendous attention from scholars in different disciplines, including computer science and civil engineering, as a means to solve complex problems by subdividing them into smaller tasks. The individual tasks are allocated to autonomous entities, known as agents. Each agent decides on a proper action to solve the task using multiple inputs, e.g., history of actions, interactions with its neighboring agents, and its goal. 
The MAS has found multiple applications, including modeling complex systems, smart grids, and computer networks. Despite their wide applicability, there are still a number of challenges faced by MAS, including coordination between agents, security, and task allocation. This survey provides a comprehensive discussion of all aspects of MAS, starting from definitions, features, applications, challenges, and communications to evaluation. A classification on MAS applications and challenges is provided along with references for further studies. We expect this paper to serve as an insightful and comprehensive resource on the MAS for researchers and practitioners in the area.", "which Challenges ?", "Task allocation", 719.0, 734.0], ["This paper examines the interactions between routing and inventory-management decisions in a two-level supply chain consisting of a cross-docking warehouse and N retailers. Retailer demand is normally distributed and independent across retailers and over time. Travel times are fixed between pairs of system sites. Every m time periods, system inventory is replenished at the warehouse, whereupon an uncapacitated vehicle departs on a route that visits each retailer once and only once, allocating all of its inventory based on the status of inventory at the retailers who have not yet received allocations. The retailers experience newsvendor-type inventory-holding and backorder-penalty costs each period; the vehicle experiences in-transit inventory-holding costs each period. Our goal is to determine a combined system inventory-replenishment, routing, and inventory-allocation policy that minimizes the total expected cost/period of the system over an infinite time horizon. Our analysis begins by examining the determination of the optimal static route, i.e., the best route if the vehicle must travel the same route every replenishment-allocation cycle. Here we demonstrate that the optimal static route is not the shortest-total-distance (TSP) route, but depends on the variance of customer demands, and, if in-transit inventory-holding costs are charged, also on mean customer demands. We then examine dynamic-routing policies, i.e., policies that can change the route from one system-replenishment-allocation cycle to another, based on the status of the retailers' inventories. Here we argue that in the absence of transportation-related cost, the optimal dynamic-routing policy should be viewed as balancing management's ability to respond to system uncertainties (by changing routes) against system uncertainties that are induced by changing routes. We then examine the performance of a change-revert heuristic policy. Although its routing decisions are not fully dynamic, but determined and fixed for a given cycle at the time of each system replenishment, simulation tests with N = 2 and N = 6 retailers indicate that its use can substantially reduce system inventory-related costs even if most of the time the chosen route is the optimal static route.", "which approach ?", "Simulation", 2070.0, 2080.0], ["This paper examines the interactions between routing and inventory-management decisions in a two-level supply chain consisting of a cross-docking warehouse and N retailers. Retailer demand is normally distributed and independent across retailers and over time. Travel times are fixed between pairs of system sites. 
Every m time periods, system inventory is replenished at the warehouse, whereupon an uncapacitated vehicle departs on a route that visits each retailer once and only once, allocating all of its inventory based on the status of inventory at the retailers who have not yet received allocations. The retailers experience newsvendor-type inventory-holding and backorder-penalty costs each period; the vehicle experiences in-transit inventory-holding costs each period. Our goal is to determine a combined system inventory-replenishment, routing, and inventory-allocation policy that minimizes the total expected cost/period of the system over an infinite time horizon. Our analysis begins by examining the determination of the optimal static route, i.e., the best route if the vehicle must travel the same route every replenishment-allocation cycle. Here we demonstrate that the optimal static route is not the shortest-total-distance (TSP) route, but depends on the variance of customer demands, and, if in-transit inventory-holding costs are charged, also on mean customer demands. We then examine dynamic-routing policies, i.e., policies that can change the route from one system-replenishment-allocation cycle to another, based on the status of the retailers' inventories. Here we argue that in the absence of transportation-related cost, the optimal dynamic-routing policy should be viewed as balancing management's ability to respond to system uncertainties (by changing routes) against system uncertainties that are induced by changing routes. We then examine the performance of a change-revert heuristic policy. Although its routing decisions are not fully dynamic, but determined and fixed for a given cycle at the time of each system replenishment, simulation tests with N = 2 and N = 6 retailers indicate that its use can substantially reduce system inventory-related costs even if most of the time the chosen route is the optimal static route.", "which approach ?", "change-revert heuristic", 1899.0, 1922.0], ["The idea of price-directed control is to use an operating policy that exploits optimal dual prices from a mathematical programming relaxation of the underlying control problem. We apply it to the problem of replenishing inventory to subsets of products/locations, such as in the distribution of industrial gases, so as to minimize long-run time average replenishment costs. Given a marginal value for each product/location, whenever there is a stockout the dispatcher compares the total value of each feasible replenishment with its cost, and chooses one that maximizes the surplus. We derive this operating policy using a linear functional approximation to the optimal value function of a semi-Markov decision process on continuous spaces. This approximation also leads to a math program whose optimal dual prices yield values and whose optimal objective value gives a lower bound on system performance. We use duality theory to show that optimal prices satisfy several structural properties and can be interpreted as estimates of lowest achievable marginal costs. On real-world instances, the price-directed policy achieves superior, near optimal performance as compared with other approaches.", "which approach ?", "Markov decision process", 695.0, 718.0], ["We address the problem of distributing a limited amount of inventory among customers using a fleet of vehicles so as to maximize profit. Both the inventory allocation and the vehicle routing problems are important logistical decisions. 
In many practical situations, these two decisions are closely interrelated, and therefore, require a systematic approach to take into account both activities jointly. We formulate the integrated problem as a mixed integer program and develop a Lagrangian-based procedure to generate both good upper bounds and heuristic solutions. Computational results show that the procedure is able to generate solutions with small gaps between the upper and lower bounds for a wide range of cost structures.", "which approach ?", "Mixed integer program", 444.0, 465.0], ["We consider the problem of integrating inventory control and vehicle routing into a cost-effective strategy for a distribution system consisting of one depot and many geographically dispersed retailers. All stock enters the system through the depot and is distributed to the retailers by vehicles of limited constant capacity. We assume that each one of the retailers faces a constant, retailer specific, demand rate and that inventory is charged only at the retailers but not at the depot. We provide a lower bound on the long run average cost over all inventory-routing strategies. We use this lower bound to show that the effectiveness of direct shipping over all inventory-routing strategies is at least 94% whenever the Economic Lot Size of each of the retailers is at least 71% of vehicle capacity. The effectiveness deteriorates as the Economic Lot Sizes become smaller. These results are important because they provide useful guidelines as to when to embark into the much more difficult task of finding cost-effective routes. Additional advantages of direct shipping are lower in-transit inventory and ease of coordination.", "which approach ?", "Lower bound", 504.0, 515.0], ["We consider a new approach to stochastic inventory/routing that approximates the future costs of current actions using optimal dual prices of a linear program. We obtain two such linear programs by formulating the control problem as a Markov decision process and then replacing the optimal value function with the sum of single-customer inventory value functions. The resulting approximation yields statewise lower bounds on optimal infinite-horizon discounted costs. We present a linear program that takes into account inventory dynamics and economics in allocating transportation costs for stochastic inventory routing. On test instances we find that these allocations do not introduce any error in the value function approximations relative to the best approximations that can be achieved without them. Also, unlike other approaches, we do not restrict the set of allowable vehicle itineraries in any way. Instead, we develop an efficient algorithm to both generate and eliminate itineraries during solution of the linear programs and control policy. In simulation experiments, the price-directed policy outperforms other policies from the literature.", "which approach ?", "Markov decision process", 235.0, 258.0], ["Social media for emergency management has emerged as a vital resource for government agencies across the globe. In this study, we explore social media strategies employed by governments to respond to major weather-related events. Using social media monitoring software, we analyze how social media is used in six cities following storms in the winter of 2012. We listen, monitor, and assess online discourse available on the full range of social media outlets (e.g., Twitter, Facebook, blogs). 
To glean further insight, we conduct a survey and extract themes from citizen comments and government's response. We conclude with recommendations on how practitioners can develop social media strategies that enable citizen participation in emergency management.", "which approach ?", "social media strategies employed by governments to respond to major weather-related events.", NaN, NaN], ["Is there a \u201cmiddle-income trap\u201d? Theory suggests that the determinants of growth at low and high income levels may be different. If countries struggle to transition from growth strategies that are effective at low income levels to growth strategies that are effective at high income levels, they may stagnate at some middle income level; this phenomenon can be thought of as a \u201cmiddle-income trap.\u201d Defining income levels based on per capita gross domestic product relative to the United States, we do not find evidence for (unusual) stagnation at any particular middle income level. However, we do find evidence that the determinants of growth at low and high income levels differ. These findings suggest a mixed conclusion: middle-income countries may need to change growth strategies in order to transition smoothly to high income growth strategies, but this can be done smoothly and does not imply the existence of a middle-income trap.", "which approach ?", "Relative", 465.0, 473.0], ["This work is motivated by the need to solve the inventory routing problem when implementing a business practice called vendor managed inventory replenishment (VMI). With VMI, vendors monitor their customers' inventories and decide when and how much inventory should be replenished at each customer. The inventory routing problem attempts to coordinate inventory replenishment and transportation in such a way that the cost is minimized over the long run. We formulate a Markov decision process model of the stochastic inventory routing problem and propose approximation methods to find good solutions with reasonable computational effort. We indicate how the proposed approach can be used for other Markov decision processes involving the control of multiple resources.", "which approach ?", "Markov decision process", 470.0, 493.0], ["We consider distribution systems with a single depot and many retailers each of which faces external demands for a single item that occurs at a specific deterministic demand rate. All stock enters the systems through the depot where it can be stored and then picked up and distributed to the retailers by a fleet of vehicles, combining deliveries into efficient routes. We extend earlier methods for obtaining low complexity lower bounds and heuristics for systems without central stock. We show under mild probabilistic assumptions that the generated solutions and bounds come asymptotically within a few percentage points of optimality (within the considered class of strategies). A numerical study exhibits the performance of these heuristics and bounds for problems of moderate size.", "which approach ?", "Lower bound", NaN, NaN], ["An industrial gases tanker vehicle visitsn customers on a tour, with a possible ( n + 1)st customer added at the end. The amount of needed product at each customer is a known random process, typically a Wiener process. The objective is to adjust dynamically the amount of product provided on scene to each customer so as to minimize total expected costs, comprising costs of earliness, lateness, product shortfall, and returning to the depot nonempty. 
Earliness costs are computed by invocation of an annualized incremental cost argument. Amounts of product delivered to each customer are not known until the driver is on scene at the customer location, at which point the customer is either restocked to capacity or left with some residual empty capacity, the policy determined by stochastic dynamic programming. The methodology has applications beyond industrial gases.", "which approach ?", "Stochastic dynamic programming", 782.0, 812.0], ["We introduce a new genetic algorithm (GA) approach for the integrated inventory distribution problem (IIDP). We present the developed genetic representation and use a randomized version of a previously developed construction heuristic to generate the initial random population. We design suitable crossover and mutation operators for the GA improvement phase. The comparison of results shows the significance of the designed GA over the construction heuristic and demonstrates the capability of reaching solutions within 20% of the optimum on sets of randomly generated test problems.", "which approach ?", "Genetic algorithm", 19.0, 36.0], ["SUMMARY: A case report of a large primary malignant mesenchymoma of the liver is presented. This tumor was successfully removed with normal liver tissue surrounding the tumor by right hepatolobectomy. The pathologic characteristics and clinical behavior of tumors falling into this general category are", "which Site ?", "Right", 176.0, 181.0], ["A successful surgical case of malignant undifferentiated (embryonal) sarcoma of the liver (USL), a rare tumor normally found in children, is reported. The patient was a 21-year-old woman, complaining of epigastric pain and abdominal fullness. Chemical analyses of the blood and urine and complete blood counts revealed no significant changes, and serum alpha-fetoprotein levels were within normal limits. A physical examination demonstrated a firm, slightly tender lesion at the liver's edge palpable 10 cm below the xiphoid process. CT scan and ultrasonography showed an oval mass, confined to the left lobe of the liver, which proved to be hypovascular on angiography. At laparotomy, a large, 18 x 15 x 13 cm tumor, found in the left hepatic lobe was resected. The lesion was dark red in color, encapsulated, smooth surfaced and of an elastic firm consistency. No metastasis was apparent. Histological examination resulted in a diagnosis of undifferentiated sarcoma of the liver. Three courses of adjuvant chemotherapy, including adriamycin, cis-diaminodichloroplatinum, vincristine and dacarbazine were administered following the surgery with no serious adverse effects. The patient remains well with no evidence of recurrence 12 months after her operation.", "which Site ?", "Left", 599.0, 603.0], ["A rare case of embryonal sarcoma of the liver in a 28‐year‐old man is reported. The patient was treated preoperatively with a combination of chemotherapy and radiation therapy. Complete surgical resection, 4.5 months after diagnosis, consisted of a left hepatic lobectomy. No viable tumor was found in the operative specimen. The patient was disease‐free 20 months postoperatively.", "which Site ?", "Left", 249.0, 253.0], ["IMAGING FINDINGS Case 1: Initial abdominal ultrasound scan demonstrated a large heterogeneous, echogenic mass within the liver displaying poor blood flow (Figure 1). 
A contrast-enhanced CT scan of the chest, abdomen and pelvis was then performed, revealing a well-defined, hypodense mass in the right lobe of the liver (Figure 2) measuring approximately 11.3 cm AP x 9.8 cm transverse x 9.2 cm in the sagittal plane. An arterial phase CT scan showed a hypodense mass with a hyperdense rim (Figure 3A) and a delayed venous phase scan showed the low-density mass with areas of increased density displaying the solid nature of the lesion (Figure 3A). These findings combined with biopsy confirmed undifferentiated embryonal sarcoma (UES). Case 2: An abdominal ultrasound scan initially revealed a large heterogeneous lesion in the center of the liver with a small amount of blood flow (Figure 4). Inconclusive ultrasound results warranted a CT scan of the chest, abdomen and pelvis with contrast, which showed a heterogeneous low-density lesion within the right lobe of the liver that extended to the left lobe (Figure 5). The mass measured approximately 12.3 AP x 12.3 transverse x 10.7 in the sagittal plane. Arterial-phase CT showed a well-defined hypodense mass with vessels coursing throughout (Figure 6A). Delayed venous phase demonstrated the solid consistency of the mass by showing continued filling in of the mass (Figure 6B). A PET scan was done to evaluate the extent of the disease. FDG-avid tissue was documented in the large lobulated hepatic mass (Figure 7A,7B).", "which Site ?", "Left", 1098.0, 1102.0], ["We report a case of undifferentiated (embryonal) sarcoma of the liver (UESL), which showed cystic formation in a 20-year-old man with no prior history of any hepatitis or liver cirrhosis. He was admitted with abdominal pain and a palpable epigastric mass. The physical examination findings were unremarkable except for a tenderness mass and the results of routine laboratory studies were all within normal limits. Abdominal ultrasound and computed tomography (CT) both showed a cystic mass in the left hepatic lobe. Subsequently, the patient underwent a tumor excision and another two times of hepatectomy because of tumor recurrence. Immunohistochemical study results showed that the tumor cells were positive for vimentin, alpha-1-antichymotrypsin (AACT) and desmin staining, and negative for alpha-fetoprotein (AFP), and eosinophilic hyaline globules in the cytoplasm of some giant cells were strongly positive for periodic acid-Schiff (PAS) staining. The pathological diagnosis was UESL. The patient is still alive with no tumor recurrence for four months.", "which Site ?", "Left", 497.0, 501.0], ["Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster based station (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. 
Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.", "which Inter-cluster topology ?", "direct", 343.0, 349.0], ["We consider the problem of integrating inventory control and vehicle routing into a cost-effective strategy for a distribution system consisting of a single outside vendor, a fixed number of warehouses and many geographically dispersed retailers. Each retailer faces a constant, retailer specific, demand rate and inventory holding cost is charged at the retailers and the warehouses. We show that, in an effective strategy which minimizes the asymptotic long run average cost, each warehouse receives fully loaded trucks from the vendor but never holds inventory. That is, each warehouse serves only as a coordinator of the frequency, time and sizes of deliveries to the retailers. This insight is used to construct an inventory control policy and vehicle routing strategy for multi-echelon distribution systems. Computational results are also reported.", "which Inventory ?", "Fixed", 175.0, 180.0], ["This paper presents a method that combines a set of unsupervised algorithms in order to accurately build large taxonomies from any machine-readable dictionary (MRD). Our aim is to profit from conventional MRDs, with no explicit semantic coding. We propose a system that 1) performs fully automatic extraction of taxonomic links from MRD entries and 2) ranks the extracted relations in a way that selective manual refinement is allowed. Tested accuracy can reach around 100% depending on the degree of coverage selected, showing that taxonomy building is not limited to structured dictionaries such as LDOCE.", "which ontology learning approach ?", "automatic", 288.0, 297.0], ["Learned social ontologies can be viewed as products of a social fermentation process, i.e. a process between users who belong in communities of common interests (CoI), in open, collaborative, and communicative environments. In such a setting, social fermentation ensures the automatic encapsulation of agreement and trust of shared knowledge that participating stakeholders provide during an ontology learning task. This chapter discusses the requirements for the automated learning of social ontologies and presents a working method and results of preliminary work. Furthermore, due to its importance for the exploitation of the learned ontologies, it introduces a model for representing the interlinking of agreement, trust and the learned domain conceptualizations that are extracted from social content. The motivation behind this work is an effort towards supporting the design of methods for learning ontologies from social content i.e. methods that aim to learn not only domain conceptualizations but also the degree that agents (software and human) may trust these conceptualizations or not.", "which ontology learning approach ?", "automatic", 275.0, 284.0], ["The objective of this paper is to present the role of Ontology Learning Process in supporting an ontology engineer for creating and maintaining ontologies from textual resources. The knowledge structures that interest us are legal domain-specific ontologies. We will use these ontologies to build legal domain ontology for a Lebanese legal knowledge based system. The domain application of this work is the Lebanese criminal system. 
Ontologies can be learnt from various sources, such as databases, structured and unstructured documents. Here, the focus is on the acquisition of ontologies from unstructured text, provided as input. In this work, the Ontology Learning Process represents a knowledge extraction phase using Natural Language Processing techniques. The resulted ontology is considered as inexpressive ontology. There is a need to reengineer it in order to build a complete, correct and more expressive domain-specific ontology.", "which Application Domain ?", "Criminal system", 416.0, 431.0], ["The objective of this paper is to present the role of Ontology Learning Process in supporting an ontology engineer for creating and maintaining ontologies from textual resources. The knowledge structures that interest us are legal domain-specific ontologies. We will use these ontologies to build legal domain ontology for a Lebanese legal knowledge based system. The domain application of this work is the Lebanese criminal system. Ontologies can be learnt from various sources, such as databases, structured and unstructured documents. Here, the focus is on the acquisition of ontologies from unstructured text, provided as input. In this work, the Ontology Learning Process represents a knowledge extraction phase using Natural Language Processing techniques. The resulted ontology is considered as inexpressive ontology. There is a need to reengineer it in order to build a complete, correct and more expressive domain-specific ontology.", "which Application Domain ?", "Lebanese criminal system", 407.0, 431.0], ["The National Library of Medicine (NLM) is developing an automated system to produce bibliographic records for its MEDLINE® database. This system, named Medical Article Record System (MARS), employs document image analysis and understanding techniques and optical character recognition (OCR). This paper describes a key module in MARS called the Automated Labeling (AL) module, which labels all zones of interest (title, author, affiliation, and abstract) automatically. The AL algorithm is based on 120 rules that are derived from an analysis of journal page layouts and features extracted from OCR output. Experiments carried out on more than 11,000 articles in over 1,000 biomedical journals show the accuracy of this rule-based algorithm to exceed 96%.", "which Application Domain ?", "biomedical journals", 674.0, 693.0], ["Ontology learning is the term used to encompass methods and techniques employed for the (semi-)automatic processing of knowledge resources that facilitate the acquisition of knowledge during ontology construction. This chapter focuses on ontology learning techniques using thesauri as input sources. Thesauri are one of the most promising sources for the creation of domain ontologies thanks to the richness of term definitions, the existence of a priori relationships between terms, and the consensus provided by their extensive use in the library context. Apart from reviewing the state of the art, this chapter shows how ontology learning techniques can be applied in the urban domain for the development of domain ontologies.", "which Application Domain ?", "Urban domain", 675.0, 687.0], ["A system for document image segmentation and ordering text areas is described and applied to both Japanese and English complex printed page layouts. 
There is no need to make any assumption about the shape of blocks, hence the segmentation technique can handle not only skewed images without skew-correction but also documents where columns are not rectangular. In this technique, on the bottom-up strategy, the connected components are extracted from the reduced image, and classified according to their local information. The connected components are merged into lines, and lines are merged into areas. Extracted text areas are classified as body, caption, header, and footer. A tree graph of the layout of body texts is made, and we get the order of texts by preorder traversal on the graph. The authors introduce the influence range of each node, a procedure for the title part, and extraction of the white horizontal separator, making it possible to get good results on various documents. The total system is fast and compact.", "which Application Domain ?", "various documents", 973.0, 990.0], ["Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction, (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.", "which Application Domain ?", "Medical", 252.0, 259.0], ["Learning ontologies requires the acquisition of relevant domain concepts and taxonomic, as well as non-taxonomic, relations. In this chapter, we present a methodology for automatic ontology enrichment and document annotation with concepts and relations of an existing domain core ontology. Natural language definitions from available glossaries in a given domain are processed and regular expressions are applied to identify general-purpose and domain-specific relations. We evaluate the methodology performance in extracting hypernymy and non-taxonomic relations. To this end, we annotated and formalized a relevant fragment of the glossary of Art and Architecture (AAT) with a set of 10 relations (plus the hypernymy relation) defined in the CRM CIDOC cultural heritage core ontology, a recent W3C standard. 
Finally, we assessed the generality of the approach on a set of web pages from the domains of history and biography.", "which Application Domain ?", "cultural heritage", 754.0, 771.0], ["Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.", "which method ?", "the OpenBiodiv-O ontology", 1087.0, 1112.0], ["This study examines the skills and conceptual knowledge that employers require for marketing positions at different levels ranging from entry- or lower-level jobs to middle- and senior-level positions. The data for this research are based on a content analysis of 500 marketing jobs posted on Monster.com for Atlanta, Chicago, Los Angeles, New York City, and Seattle. There were notable differences between the skills and conceptual knowledge required for entry-, lower-, middle-, and upper-level marketing jobs. Technical skills appear to be much more important at all levels than what was documented in earlier research. This study discusses the implications of these research findings for the professional school pedagogical model of marketing education.", "which method ?", "content analysis", 244.0, 260.0], ["Today the Web represents a rich source of labour market data for both public and private operators, as a growing number of job offers are advertised through Web portals and services. In this paper we apply and compare several techniques, namely explicit-rules, machine learning, and LDA-based algorithms to classify a real dataset of Web job offers collected from 12 heterogeneous sources against a standard classification system of occupations.", "which method ?", "explicit-rules", 245.0, 259.0], ["Today\u2019s rapid changing and competitive environment requires educators to stay abreast of the job market in order to prepare their students for the jobs being demanded. This is more relevant about Information Technology (IT) jobs than others. However, to stay abreast of the market job demands require retrieving, sifting and analyzing large volume of data in order to understand the trends of the job market. 
Traditional methods of data collection and analysis are not sufficient for this kind of analysis due to the large volume of job data that is generated through the web and elsewhere. Luckily, the field of data mining has emerged to collect and sift through such large data volumes. However, even with data mining, appropriate data collection techniques and analysis need to be followed in order to correctly understand the trend. This paper illustrates our experience with employing mining techniques to understand the trend in IT Technology jobs. Data was collected using data mining techniques over a number of years from an online job agency. The data was then analyzed to reach a conclusion about the trends in the job market. Our experience in this regard along with literature review of the relevant topics is illustrated in this paper.", "which method ?", "data mining", 613.0, 624.0], ["In the current era of Big Data, existing synthesis tools such as formal meta-analyses are critical means to handle the deluge of information. However, there is a need for complementary tools that help to (a) organize evidence, (b) organize theory, and (c) closely connect evidence to theory. We present the hierarchy-of-hypotheses (HoH) approach to address these issues. In an HoH, hypotheses are conceptually and visually structured in a hierarchically nested way where the lower branches can be directly connected to empirical results. Used for organizing evidence, this tool allows researchers to conceptually connect empirical results derived through diverse approaches and to reveal under which circumstances hypotheses are applicable. Used for organizing theory, it allows researchers to uncover mechanistic components of hypotheses and previously neglected conceptual connections. In the present article, we offer guidance on how to build an HoH, provide examples from population and evolutionary biology and propose terminological clarifications.", "which method ?", "the hierarchy-of-hypotheses (HoH) approach", NaN, NaN], ["Presently, analytics degree programs exhibit a growing trend to meet a strong market demand. To explore the skill sets required for analytics positions, the authors examined a sample of online job postings related to professions such as business analyst (BA), business intelligence analyst (BIA), data analyst (DA), and data scientist (DS) using content analysis. They present a ranked list of relevant skills belonging to specific skills categories for the studied positions. Also, they conducted a pairwise comparison between DA and DS as well as BA and BIA. Overall, the authors observed that decision making, organization, communication, and structured data management are key to all job categories. The analysis shows that technical skills like statistics and programming skills are in most demand for DAs. The analysis is useful for creating clear definitions with respect to required skills for job categories in the business and data analytics domain and for designing course curricula for this domain.", "which method ?", "comparison", 518.0, 528.0], ["The ability to promptly recognise new research trends is strategic for many stakeholders, including universities, institutional funding bodies, academic publishers and companies. 
While the literature describes several approaches which aim to identify the emergence of new research topics early in their lifecycle, these rely on the assumption that the topic in question is already associated with a number of publications and consistently referred to by a community of researchers. Hence, detecting the emergence of a new research area at an embryonic stage, i.e., before the topic has been consistently labelled by a community of researchers and associated with a number of publications, is still an open challenge. In this paper, we begin to address this challenge by performing a study of the dynamics preceding the creation of new topics. This study indicates that the emergence of a new topic is anticipated by a significant increase in the pace of collaboration between relevant research areas, which can be seen as the \u2018parents\u2019 of the new topic. These initial findings (i) confirm our hypothesis that it is possible in principle to detect the emergence of a new topic at the embryonic stage, (ii) provide new empirical evidence supporting relevant theories in Philosophy of Science, and also (iii) suggest that new topics tend to emerge in an environment in which weakly interconnected research areas begin to cross-fertilise.", "which method ?", "number of publications", 407.0, 429.0], ["A Web content mining approach identified 20 job categories and the associated skills needs prevalent in the computing professions. Using a Web content data mining application, we extracted almost a quarter million unique IT job descriptions from various job search engines and distilled each to its required skill sets. We statistically examined these, revealing 20 clusters of similar skill sets that map to specific job definitions. The results allow software engineering professionals to tune their skills portfolio to match those in demand from real computing jobs across the US to attain more lucrative salaries and more mobility in a chaotic environment.", "which method ?", "data mining", 151.0, 162.0], ["Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. 
Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.", "which method ?", "the concept of the Open Biodiversity Knowledge Management System (OBKMS)", NaN, NaN], ["Data sharing and reuse are crucial to enhance scientific progress and maximize return of investments in science. Although attitudes are increasingly favorable, data reuse remains difficult due to lack of infrastructures, standards, and policies. The FAIR (findable, accessible, interoperable, reusable) principles aim to provide recommendations to increase data reuse. Because of the broad interpretation of the FAIR principles, maturity indicators are necessary to determine the FAIRness of a dataset. In this work, we propose a reproducible computational workflow to assess data FAIRness in the life sciences. Our implementation follows principles and guidelines recommended by the maturity indicator authoring group and integrates concepts from the literature. In addition, we propose a FAIR balloon plot to summarize and compare dataset FAIRness. We evaluated the feasibility of our method on three real use cases where researchers looked for six datasets to answer their scientific questions. We retrieved information from repositories (ArrayExpress, Gene Expression Omnibus, eNanoMapper, caNanoLab, NanoCommons and ChEMBL), a registry of repositories, and a searchable resource (Google Dataset Search) via application program interfaces (API) wherever possible. With our analysis, we found that the six datasets met the majority of the criteria defined by the maturity indicators, and we showed areas where improvements can easily be reached. We suggest that use of standard schema for metadata and the presence of specific attributes in registries of repositories could increase FAIRness of datasets.", "which method ?", "FAIR balloon plot", 790.0, 807.0], ["Today the Web represents a rich source of labour market data for both public and private operators, as a growing number of job offers are advertised through Web portals and services. In this paper we apply and compare several techniques, namely explicit-rules, machine learning, and LDA-based algorithms to classify a real dataset of Web job offers collected from 12 heterogeneous sources against a standard classification system of occupations.", "which method ?", "LDA-based algorithms ", 283.0, 304.0], ["The ability to promptly recognise new research trends is strategic for many stakeholders, including universities, institutional funding bodies, academic publishers and companies. While the literature describes several approaches which aim to identify the emergence of new research topics early in their lifecycle, these rely on the assumption that the topic in question is already associated with a number of publications and consistently referred to by a community of researchers. Hence, detecting the emergence of a new research area at an embryonic stage, i.e., before the topic has been consistently labelled by a community of researchers and associated with a number of publications, is still an open challenge. In this paper, we begin to address this challenge by performing a study of the dynamics preceding the creation of new topics. 
This study indicates that the emergence of a new topic is anticipated by a significant increase in the pace of collaboration between relevant research areas, which can be seen as the \u2018parents\u2019 of the new topic. These initial findings (i) confirm our hypothesis that it is possible in principle to detect the emergence of a new topic at the embryonic stage, (ii) provide new empirical evidence supporting relevant theories in Philosophy of Science, and also (iii) suggest that new topics tend to emerge in an environment in which weakly interconnected research areas begin to cross-fertilise.", "which method ?", "pace of collaboration between relevant research areas", 981.0, 1034.0], ["Abstract Presently, analytics degree programs exhibit a growing trend to meet a strong market demand. To explore the skill sets required for analytics positions, the authors examined a sample of online job postings related to professions such as business analyst (BA), business intelligence analyst (BIA), data analyst (DA), and data scientist (DS) using content analysis. They present a ranked list of relevant skills belonging to specific skills categories for the studied positions. Also, they conducted a pairwise comparison between DA and DS as well as BA and BIA. Overall, the authors observed that decision making, organization, communication, and structured data management are key to all job categories. The analysis shows that technical skills like statistics and programming skills are in most demand for DAs. The analysis is useful for creating clear definitions with respect to required skills for job categories in the business and data analytics domain and for designing course curricula for this domain.", "which method ?", "content analysis", 355.0, 371.0], ["The ability to promptly recognise new research trends is strategic for many stakeholders, including universities, institutional funding bodies, academic publishers and companies. While the literature describes several approaches which aim to identify the emergence of new research topics early in their lifecycle, these rely on the assumption that the topic in question is already associated with a number of publications and consistently referred to by a community of researchers. Hence, detecting the emergence of a new research area at an embryonic stage, i.e., before the topic has been consistently labelled by a community of researchers and associated with a number of publications, is still an open challenge. In this paper, we begin to address this challenge by performing a study of the dynamics preceding the creation of new topics. This study indicates that the emergence of a new topic is anticipated by a significant increase in the pace of collaboration between relevant research areas, which can be seen as the \u2018parents\u2019 of the new topic. 
These initial findings (i) confirm our hypothesis that it is possible in principle to detect the emergence of a new topic at the embryonic stage, (ii) provide new empirical evidence supporting relevant theories in Philosophy of Science, and also (iii) suggest that new topics tend to emerge in an environment in which weakly interconnected research areas begin to cross-fertilise.", "which method ?", "dynamics preceding the creation of new topics", 831.0, 876.0], ["Purpose \u2013 The purpose of this paper is to investigate what employers seek when recruiting library and information professionals in the UK and whether professional skills, generic skills or personal qualities are most in demand. Design/methodology/approach \u2013 A content analysis of a sample of 180 advertisements requiring a professional library or information qualification from Chartered Institute of Library and Information Professional's Library + Information Gazette over the period May 2006\u20102007. Findings \u2013 The findings reveal that a multitude of skills and qualities are required in the profession. When the results were compared with Information National Training Organisation and Library and Information Management Employability Skills research, customer service, interpersonal and communication skills, and general computing skills emerged as the requirements most frequently sought by employers. Overall, requirements from the generic skills area were most important to employers, but the research also demonstra...", "which method ?", "content analysis", 259.0, 275.0], ["Abstract Presently, analytics degree programs exhibit a growing trend to meet a strong market demand. To explore the skill sets required for analytics positions, the authors examined a sample of online job postings related to professions such as business analyst (BA), business intelligence analyst (BIA), data analyst (DA), and data scientist (DS) using content analysis. They present a ranked list of relevant skills belonging to specific skills categories for the studied positions. Also, they conducted a pairwise comparison between DA and DS as well as BA and BIA. Overall, the authors observed that decision making, organization, communication, and structured data management are key to all job categories. The analysis shows that technical skills like statistics and programming skills are in most demand for DAs. The analysis is useful for creating clear definitions with respect to required skills for job categories in the business and data analytics domain and for designing course curricula for this domain.", "which method ?", "content analysis", 355.0, 371.0], ["Today the Web represents a rich source of labour market data for both public and private operators, as a growing number of job offers are advertised through Web portals and services. In this paper we apply and compare several techniques, namely explicit-rules, machine learning, and LDA-based algorithms to classify a real dataset of Web job offers collected from 12 heterogeneous sources against a standard classification system of occupations.", "which method ?", "machine learning", 261.0, 277.0], ["In this work we offer an open and data-driven skills taxonomy, which is independent of ESCO and O*NET, two popular available taxonomies that are expert-derived. Since the taxonomy is created in an algorithmic way without expert elicitation, it can be quickly updated to reflect changes in labour demand and provide timely insights to support labour market decision-making. 
Our proposed taxonomy also captures links between skills, aggregated job titles, and the salaries mentioned in the millions of UK job adverts used in this analysis. To generate the taxonomy, we employ machine learning methods, such as word embeddings, network community detection algorithms and consensus clustering. We model skills as a graph with individual skills as vertices and their co-occurrences in job adverts as edges. The strength of the relationships between the skills is measured using both the frequency of actual co-occurrences of skills in the same advert as well as their shared context, based on a trained word embeddings model. Once skills are represented as a network, we hierarchically group them into clusters. To ensure the stability of the resulting clusters, we introduce bootstrapping and consensus clustering stages into the methodology. While we share initial results and describe the skill clusters, the main purpose of this paper is to outline the methodology for building the taxonomy.", "which method ?", "machine learning methods", 574.0, 598.0], ["Today\u2019s rapid changing and competitive environment requires educators to stay abreast of the job market in order to prepare their students for the jobs being demanded. This is more relevant about Information Technology (IT) jobs than others. However, to stay abreast of the market job demands require retrieving, sifting and analyzing large volume of data in order to understand the trends of the job market. Traditional methods of data collection and analysis are not sufficient for this kind of analysis due to the large volume of job data that is generated through the web and elsewhere. Luckily, the field of data mining has emerged to collect and sift through such large data volumes. However, even with data mining, appropriate data collection techniques and analysis need to be followed in order to correctly understand the trend. This paper illustrates our experience with employing mining techniques to understand the trend in IT Technology jobs. Data was collected using data mining techniques over a number of years from an online job agency. The data was then analyzed to reach a conclusion about the trends in the job market. Our experience in this regard along with literature review of the relevant topics is illustrated in this paper.", "which method ?", "data mining", 613.0, 624.0], ["Purpose \u2013 The purpose of this paper is to investigate what employers seek when recruiting library and information professionals in the UK and whether professional skills, generic skills or personal qualities are most in demand. Design/methodology/approach \u2013 A content analysis of a sample of 180 advertisements requiring a professional library or information qualification from Chartered Institute of Library and Information Professional's Library + Information Gazette over the period May 2006\u20102007. Findings \u2013 The findings reveal that a multitude of skills and qualities are required in the profession. When the results were compared with Information National Training Organisation and Library and Information Management Employability Skills research, customer service, interpersonal and communication skills, and general computing skills emerged as the requirements most frequently sought by employers. 
Overall, requirements from the generic skills area were most important to employers, but the research also demonstra...", "which method ?", "content analysis", 259.0, 275.0], ["Abstract Presently, analytics degree programs exhibit a growing trend to meet a strong market demand. To explore the skill sets required for analytics positions, the authors examined a sample of online job postings related to professions such as business analyst (BA), business intelligence analyst (BIA), data analyst (DA), and data scientist (DS) using content analysis. They present a ranked list of relevant skills belonging to specific skills categories for the studied positions. Also, they conducted a pairwise comparison between DA and DS as well as BA and BIA. Overall, the authors observed that decision making, organization, communication, and structured data management are key to all job categories. The analysis shows that technical skills like statistics and programming skills are in most demand for DAs. The analysis is useful for creating clear definitions with respect to required skills for job categories in the business and data analytics domain and for designing course curricula for this domain.", "which method ?", "content analysis", 355.0, 371.0], ["ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.", "which proposes ?", "smart city characteristic", 449.0, 474.0], ["Name ambiguity in the context of bibliographic citation affects the quality of services in digital libraries. Previous methods are not widely applied in practice because of their high computational complexity and their strong dependency on excessive attributes, such as institutional affiliation, research area, address, etc., which are difficult to obtain in practice. To solve this problem, we propose a novel coarse\u2010to\u2010fine framework for name disambiguation which sequentially employs 3 common and easily accessible attributes (i.e., coauthor name, article title, and publication venue). Our proposed framework is based on multiple clustering and consists of 3 steps: (a) clustering articles by coauthorship and obtaining rough clusters, that is fragments; (b) clustering fragments obtained in step 1 by title information and getting bigger fragments; (c) and clustering fragments obtained in step 2 by the latent relations among venues. 
Experimental results on a Digital Bibliography and Library Project (DBLP) data set show that our method outperforms the existing state\u2010of\u2010the\u2010art methods by 2.4% to 22.7% on the average pairwise F1 score and is 10 to 100 times faster in terms of execution time.", "which Performance metric ?", "Pairwise F1", 1127.0, 1138.0], ["This paper proposes a methodology which discriminates the articles by the target authors (\u201ctrue\u201d articles) from those by other homonymous authors (\u201cfalse\u201d articles). Author name searches for 2,595 \u201csource\u201d authors in six subject fields retrieved about 629,000 articles. In order to extract true articles from the large amount of the retrieved articles, including many false ones, two filtering stages were applied. At the first stage any retrieved article was eliminated as false if either its affiliation addresses had little similarity to those of its source article or there was no citation relationship between the journal of the retrieved article and that of its source article. At the second stage, a sample of retrieved articles was subjected to manual judgment, and utilizing the judgment results, discrimination functions based on logistic regression were defined. These discrimination functions demonstrated both the recall ratio and the precision of about 95% and the accuracy (correct answer ratio) of 90\u201395%. Existence of common coauthor(s), address similarity, title words similarity, and interjournal citation relationships between the retrieved and source articles were found to be the effective discrimination predictors. Whether or not the source author was from a specific country was also one of the important predictors. Furthermore, it was shown that a retrieved article is almost certainly true if it was cited by, or cocited with, its source article. The method proposed in this study would be effective when dealing with a large number of articles whose subject fields and affiliation addresses vary widely. \u00a9 2011 Wiley Periodicals, Inc.", "which Performance metric ?", "Accuracy", 979.0, 987.0], ["This paper proposes a methodology which discriminates the articles by the target authors (\u201ctrue\u201d articles) from those by other homonymous authors (\u201cfalse\u201d articles). Author name searches for 2,595 \u201csource\u201d authors in six subject fields retrieved about 629,000 articles. In order to extract true articles from the large amount of the retrieved articles, including many false ones, two filtering stages were applied. At the first stage any retrieved article was eliminated as false if either its affiliation addresses had little similarity to those of its source article or there was no citation relationship between the journal of the retrieved article and that of its source article. At the second stage, a sample of retrieved articles was subjected to manual judgment, and utilizing the judgment results, discrimination functions based on logistic regression were defined. These discrimination functions demonstrated both the recall ratio and the precision of about 95% and the accuracy (correct answer ratio) of 90\u201395%. Existence of common coauthor(s), address similarity, title words similarity, and interjournal citation relationships between the retrieved and source articles were found to be the effective discrimination predictors. Whether or not the source author was from a specific country was also one of the important predictors. 
Furthermore, it was shown that a retrieved article is almost certainly true if it was cited by, or cocited with, its source article. The method proposed in this study would be effective when dealing with a large number of articles whose subject fields and affiliation addresses vary widely. \u00a9 2011 Wiley Periodicals, Inc.", "which Performance metric ?", "Precision", 948.0, 957.0], ["This paper addresses the problem of name disambiguation in the context of digital libraries that administer bibliographic citations. The problem occurs when multiple authors share a common name or when multiple name variations for an author appear in citation records. Name disambiguation is not a trivial task, and most digital libraries do not provide an efficient way to accurately identify the citation records for an author. Furthermore, lack of complete meta-data information in digital libraries hinders the development of a generic algorithm that can be applicable to any dataset. We propose a heuristic-based, unsupervised and adaptive method that also examines users\u2019 interactions in order to include users\u2019 feedback in the disambiguation process. Moreover, the method exploits important features associated with author and citation records, such as co-authors, affiliation, publication title, venue, etc., creating a multilayered hierarchical clustering algorithm which transforms itself according to the available information, and forms clusters of unambiguous records. Our experiments on a set of researchers\u2019 names considered to be highly ambiguous produced high precision and recall results, and decisively affirmed the viability of our algorithm.", "which Performance metric ?", "Precision", 1171.0, 1180.0], ["Name ambiguity stems from the fact that many people or objects share identical names in the real world. Such name ambiguity decreases the performance of document retrieval, Web search, information integration, and may cause confusion in other applications. Due to the same name spellings and lack of information, it is a nontrivial task to distinguish them accurately. In this article, we focus on investigating the problem in digital libraries to distinguish publications written by authors with identical names. We present an effective framework named GHOST (abbreviation for GrapHical framewOrk for name diSambiguaTion), to solve the problem systematically. We devise a novel similarity metric, and utilize only one type of attribute (i.e., coauthorship) in GHOST. Given the similarity matrix, intermediate results are grouped into clusters with a recently introduced powerful clustering algorithm called Affinity Propagation. In addition, as a complementary technique, user feedback can be used to enhance the performance. We evaluated the framework on the real DBLP and PubMed datasets, and the experimental results show that GHOST can achieve both high precision and recall.", "which Performance metric ?", "Precision", 1160.0, 1169.0], ["This paper proposes a methodology which discriminates the articles by the target authors (\u201ctrue\u201d articles) from those by other homonymous authors (\u201cfalse\u201d articles). Author name searches for 2,595 \u201csource\u201d authors in six subject fields retrieved about 629,000 articles. In order to extract true articles from the large amount of the retrieved articles, including many false ones, two filtering stages were applied. 
At the first stage any retrieved article was eliminated as false if either its affiliation addresses had little similarity to those of its source article or there was no citation relationship between the journal of the retrieved article and that of its source article. At the second stage, a sample of retrieved articles was subjected to manual judgment, and utilizing the judgment results, discrimination functions based on logistic regression were defined. These discrimination functions demonstrated both the recall ratio and the precision of about 95% and the accuracy (correct answer ratio) of 90\u201395%. Existence of common coauthor(s), address similarity, title words similarity, and interjournal citation relationships between the retrieved and source articles were found to be the effective discrimination predictors. Whether or not the source author was from a specific country was also one of the important predictors. Furthermore, it was shown that a retrieved article is almost certainly true if it was cited by, or cocited with, its source article. The method proposed in this study would be effective when dealing with a large number of articles whose subject fields and affiliation addresses vary widely. \u00a9 2011 Wiley Periodicals, Inc.", "which Performance metric ?", "Recall", 927.0, 933.0], ["This paper addresses the problem of name disambiguation in the context of digital libraries that administer bibliographic citations. The problem occurs when multiple authors share a common name or when multiple name variations for an author appear in citation records. Name disambiguation is not a trivial task, and most digital libraries do not provide an efficient way to accurately identify the citation records for an author. Furthermore, lack of complete meta-data information in digital libraries hinders the development of a generic algorithm that can be applicable to any dataset. We propose a heuristic-based, unsupervised and adaptive method that also examines users\u2019 interactions in order to include users\u2019 feedback in the disambiguation process. Moreover, the method exploits important features associated with author and citation records, such as co-authors, affiliation, publication title, venue, etc., creating a multilayered hierarchical clustering algorithm which transforms itself according to the available information, and forms clusters of unambiguous records. Our experiments on a set of researchers\u2019 names considered to be highly ambiguous produced high precision and recall results, and decisively affirmed the viability of our algorithm.", "which Performance metric ?", "Recall", 1185.0, 1191.0], ["The National Library of Medicine (NLM) is developing an automated system to produce bibliographic records for its MEDLINE\u00ae database. This system, named Medical Article Record System (MARS), employs document image analysis and understanding techniques and optical character recognition (OCR). This paper describes a key module in MARS called the Automated Labeling (AL) module, which labels all zones of interest (title, author, affiliation, and abstract) automatically. The AL algorithm is based on 120 rules that are derived from an analysis of journal page layouts and features extracted from OCR output. 
Experiments carried out on more than 11,000 articles in over 1,000 biomedical journals show the accuracy of this rule-based algorithm to exceed 96%.", "which Performance metric ?", "labeling", 355.0, 363.0], ["Abstract A brief review of research in remote sensing of water resources indicates that there are many positive results, and some techniques have been applied operationally. Currently, remote sensing data are being used operationally in precipitation estimates, soil moisture measurements for irrigation scheduling, snow water equivalent and snow cover extent assessments, seasonal and short term snowmelt runoff forecasts, and surface water inventories. In the next decade other operational applications are likely using remote measurements of land cover, sediment loads, erosion, groundwater, and areal inputs to hydrological models. Many research challenges remain, and significant progress is expected in areas like albedo measurements, energy budgets, and evapotranspiration estimation. The research in remote sensing and water resources also has much relevance for related studies of climate change and global habitability.", "which Outcomes ?", " Snow water equivalent", NaN, NaN], ["Abstract A brief review of research in remote sensing of water resources indicates that there are many positive results, and some techniques have been applied operationally. Currently, remote sensing data are being used operationally in precipitation estimates, soil moisture measurements for irrigation scheduling, snow water equivalent and snow cover extent assessments, seasonal and short term snowmelt runoff forecasts, and surface water inventories. In the next decade other operational applications are likely using remote measurements of land cover, sediment loads, erosion, groundwater, and areal inputs to hydrological models. Many research challenges remain, and significant progress is expected in areas like albedo measurements, energy budgets, and evapotranspiration estimation. The research in remote sensing and water resources also has much relevance for related studies of climate change and global habitability.", "which Outcomes ?", " Albedo", 719.0, 726.0], ["Abstract A brief review of research in remote sensing of water resources indicates that there are many positive results, and some techniques have been applied operationally. Currently, remote sensing data are being used operationally in precipitation estimates, soil moisture measurements for irrigation scheduling, snow water equivalent and snow cover extent assessments, seasonal and short term snowmelt runoff forecasts, and surface water inventories. In the next decade other operational applications are likely using remote measurements of land cover, sediment loads, erosion, groundwater, and areal inputs to hydrological models. Many research challenges remain, and significant progress is expected in areas like albedo measurements, energy budgets, and evapotranspiration estimation. The research in remote sensing and water resources also has much relevance for related studies of climate change and global habitability.", "which Outcomes ?", " Evapotranspiration", 760.0, 779.0], ["Abstract A brief review of research in remote sensing of water resources indicates that there are many positive results, and some techniques have been applied operationally. 
Currently, remote sensing data are being used operationally in precipitation estimates, soil moisture measurements for irrigation scheduling, snow water equivalent and snow cover extent assessments, seasonal and short term snowmelt runoff forecasts, and surface water inventories. In the next decade other operational applications are likely using remote measurements of land cover, sediment loads, erosion, groundwater, and areal inputs to hydrological models. Many research challenges remain, and significant progress is expected in areas like albedo measurements, energy budgets, and evapotranspiration estimation. The research in remote sensing and water resources also has much relevance for related studies of climate change and global habitability.", "which Outcomes ?", " Precipitation", 236.0, 250.0], ["Abstract A brief review of research in remote sensing of water resources indicates that there are many positive results, and some techniques have been applied operationally. Currently, remote sensing data are being used operationally in precipitation estimates, soil moisture measurements for irrigation scheduling, snow water equivalent and snow cover extent assessments, seasonal and short term snowmelt runoff forecasts, and surface water inventories. In the next decade other operational applications are likely using remote measurements of land cover, sediment loads, erosion, groundwater, and areal inputs to hydrological models. Many research challenges remain, and significant progress is expected in areas like albedo measurements, energy budgets, and evapotranspiration estimation. The research in remote sensing and water resources also has much relevance for related studies of climate change and global habitability.", "which Outcomes ?", " Irrigation scheduling", 292.0, 314.0], ["Abstract A brief review of research in remote sensing of water resources indicates that there are many positive results, and some techniques have been applied operationally. Currently, remote sensing data are being used operationally in precipitation estimates, soil moisture measurements for irrigation scheduling, snow water equivalent and snow cover extent assessments, seasonal and short term snowmelt runoff forecasts, and surface water inventories. In the next decade other operational applications are likely using remote measurements of land cover, sediment loads, erosion, groundwater, and areal inputs to hydrological models. Many research challenges remain, and significant progress is expected in areas like albedo measurements, energy budgets, and evapotranspiration estimation. The research in remote sensing and water resources also has much relevance for related studies of climate change and global habitability.", "which Outcomes ?", " Sediment load", NaN, NaN], ["Abstract A brief review of research in remote sensing of water resources indicates that there are many positive results, and some techniques have been applied operationally. Currently, remote sensing data are being used operationally in precipitation estimates, soil moisture measurements for irrigation scheduling, snow water equivalent and snow cover extent assessments, seasonal and short term snowmelt runoff forecasts, and surface water inventories. In the next decade other operational applications are likely using remote measurements of land cover, sediment loads, erosion, groundwater, and areal inputs to hydrological models. 
Many research challenges remain, and significant progress is expected in areas like albedo measurements, energy budgets, and evapotranspiration estimation. The research in remote sensing and water resources also has much relevance for related studies of climate change and global habitability.", "which Outcomes ?", " Snow cover", 341.0, 352.0], ["Abstract A brief review of research in remote sensing of water resources indicates that there are many positive results, and some techniques have been applied operationally. Currently, remote sensing data are being used operationally in precipitation estimates, soil moisture measurements for irrigation scheduling, snow water equivalent and snow cover extent assessments, seasonal and short term snowmelt runoff forecasts, and surface water inventories. In the next decade other operational applications are likely using remote measurements of land cover, sediment loads, erosion, groundwater, and areal inputs to hydrological models. Many research challenges remain, and significant progress is expected in areas like albedo measurements, energy budgets, and evapotranspiration estimation. The research in remote sensing and water resources also has much relevance for related studies of climate change and global habitability.", "which Outcomes ?", " Surface water inventories", 427.0, 453.0], ["Platinum treated fullerene/TiO2 composites (Pt-fullerene/TiO2) were prepared using a sol\u2013gel method. The composite obtained was characterized by FT-IR, BET surface area measurements, X-ray diffraction, energy dispersive X-ray analysis, transmission electron microscopy (TEM) and UV-vis analysis. A methyl orange (MO) solution under visible light irradiation was used to determine the photocatalytic activity. Excellent photocatalytic degradation of a MO solution was observed using the Pt-TiO2, fullerene-TiO2 and Pt-fullerene/TiO2 composites under visible light. An increase in photocatalytic activity was observed and Pt-fullerene/TiO2 has the best photocatalytic activity, which may be attributable to increase of the photo-absorption effect by the fullerene and the cooperative effect of the Pt.", "which Degraded substance ?", "methyl orange", 298.0, 311.0], ["A scheme for the scheduling of flexible manufacturing systems (FMS) has been developed which divides the scheduling function (built upon a generic controller architecture) into four different steps: candidate rule selection, transient phenomena analysis, multicriteria compromise analysis, and learning. This scheme is based on a hybrid architecture which utilizes neural networks, simulation, genetic algorithms, and induction mechanism. This paper investigates the candidate rule selection process, which selects a small list of scheduling rules from a larger list of such rules. This candidate rule selector is developed by using the integration of dynamic programming and neural networks. The system achieves real-time learning using this approach. In addition, since an expert scheduler is not available, it utilizes reinforcement signals from the environment (a measure of how desirable the achieved state is as measured by the resulting performance criteria). 
The approach is discussed and further research issues are presented.", "which Criterion A ?", "Not available", 795.0, 808.0], ["A scheme for the scheduling of flexible manufacturing systems (FMS) has been developed which divides the scheduling function (built upon a generic controller architecture) into four different steps: candidate rule selection, transient phenomena analysis, multicriteria compromise analysis, and learning. This scheme is based on a hybrid architecture which utilizes neural networks, simulation, genetic algorithms, and induction mechanism. This paper investigates the candidate rule selection process, which selects a small list of scheduling rules from a larger list of such rules. This candidate rule selector is developed by using the integration of dynamic programming and neural networks. The system achieves real-time learning using this approach. In addition, since an expert scheduler is not available, it utilizes reinforcement signals from the environment (a measure of how desirable the achieved state is as measured by the resulting performance criteria). The approach is discussed and further research issues are presented.", "which job complexity and routing flexibility ?", "Not available", 795.0, 808.0], ["A scheme for the scheduling of flexible manufacturing systems (FMS) has been developed which divides the scheduling function (built upon a generic controller architecture) into four different steps: candidate rule selection, transient phenomena analysis, multicriteria compromise analysis, and learning. This scheme is based on a hybrid architecture which utilizes neural networks, simulation, genetic algorithms, and induction mechanism. This paper investigates the candidate rule selection process, which selects a small list of scheduling rules from a larger list of such rules. This candidate rule selector is developed by using the integration of dynamic programming and neural networks. The system achieves real-time learning using this approach. In addition, since an expert scheduler is not available, it utilizes reinforcement signals from the environment (a measure of how desirable the achieved state is as measured by the resulting performance criteria). The approach is discussed and further research issues are presented.", "which Resources and their constraints ?", "Not available", 795.0, 808.0], ["One of the main issues with micron-sized intracortical neural interfaces (INIs) is their long-term reliability, with one major factor stemming from the material failure caused by the heterogeneous integration of multiple materials used to realize the implant. Single crystalline cubic silicon carbide (3C-SiC) is a semiconductor material that has been long recognized for its mechanical robustness and chemical inertness. It has the benefit of demonstrated biocompatibility, which makes it a promising candidate for chronically-stable, implantable INIs. Here, we report on the fabrication and initial electrochemical characterization of a nearly monolithic, Michigan-style 3C-SiC microelectrode array (MEA) probe. The probe consists of a single 5 mm-long shank with 16 electrode sites. An ~8 \u00b5m-thick p-type 3C-SiC epilayer was grown on a silicon-on-insulator (SOI) wafer, which was followed by a ~2 \u00b5m-thick epilayer of heavily n-type (n+) 3C-SiC in order to form conductive traces and the electrode sites. Diodes formed between the p and n+ layers provided substrate isolation between the channels. 
A thin layer of amorphous silicon carbide (a-SiC) was deposited via plasma-enhanced chemical vapor deposition (PECVD) to insulate the surface of the probe from the external environment. Forming the probes on a SOI wafer supported the ease of probe removal from the handle wafer by simple immersion in HF, thus aiding in the manufacturability of the probes. Free-standing probes and planar single-ended test microelectrodes were fabricated from the same 3C-SiC epiwafers. Cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS) were performed on test microelectrodes with an area of 491 \u00b5m2 in phosphate buffered saline (PBS) solution. The measurements showed an impedance magnitude of 165 k\u2126 \u00b1 14.7 k\u2126 (mean \u00b1 standard deviation) at 1 kHz, anodic charge storage capacity (CSC) of 15.4 \u00b1 1.46 mC/cm2, and a cathodic CSC of 15.2 \u00b1 1.03 mC/cm2. Current-voltage tests were conducted to characterize the p-n diode, n-p-n junction isolation, and leakage currents. The turn-on voltage was determined to be on the order of ~1.4 V and the leakage current was less than 8 \u03bcA rms. This all-SiC neural probe realizes nearly monolithic integration of device components to provide a likely neurocompatible INI that should mitigate long-term reliability issues associated with chronic implantation.", "which Film structure ?", "Single crystalline", 268.0, 286.0], ["Purpose To determine the effects of hypercholesterolemia in pregnant mice on the susceptibility to atherosclerosis in adult life through a new animal modeling approach. Methods Male offspring from apoE\u2212/\u2212 mice fed with regular (R) or high (H) cholesterol chow during pregnancy were randomly subjected to regular (Groups R\u2013R and H\u2013R, n = 10) or high cholesterol diet (Groups R\u2013H and H\u2013H, n = 10) for 14 weeks. Plasma lipid profiles were determined in all rats. The abdominal aorta was examined for the severity of atherosclerotic lesions in offspring. Results Lipids significantly increased while high-density lipoprotein-cholesterol/low-density lipoprotein-cholesterol decreased in mothers fed high cholesterol chow after delivery compared with before pregnancy (p < 0.01). Groups R\u2013H and H\u2013R indicated dyslipidemia and significant atherosclerotic lesions. Group H\u2013H demonstrated the highest lipids, lowest high-density lipoprotein-cholesterol/low-density lipoprotein-cholesterol, highest incidence (90%), plaque area to luminal area ratio (0.78 \u00b1 0.02) and intima to media ratio (1.57 \u00b1 0.05). Conclusion Hypercholesterolemia in pregnant mice may increase susceptibility to atherosclerosis in their adult offspring.", "which atherosclerosis incidence ?", "90%", NaN, NaN], ["The investigated new microwave plasma torch is based on an axially symmetric resonator. Microwaves of a frequency of 2.45 GHz are resonantly fed into this cavity resulting in a sufficiently high electric field to ignite plasma without any additional igniters as well as to maintain stable plasma operation. Optical emission spectroscopy was carried out to characterize a humid air plasma. OH\u2010bands were used to determine the gas rotational temperature Trot while the electron temperature was estimated by a Boltzmann plot of oxygen lines. Maximum temperatures of Trot of about 3600 K and electron temperatures of 5800 K could be measured. The electron density ne was estimated to ne \u2248 3 \u00b7 1020m\u20133 by using Saha's equation. 
Parametric studies in dependence of the gas flow and the supplied microwave power revealed that the maximum temperatures are independent of these parameters. However, the volume of the plasma increases with increasing microwave power and with a decrease of the gas flow. Considerations using collision frequencies, energy transfer times and power coupling provide an explanation of the observed phenomena: The optimal microwave heating is reached for electron\u2010neutral collision frequencies \u03bden being near to the angular frequency of the wave \u03c9 (\u00a9 2012 WILEY\u2010VCH Verlag GmbH & Co. KGaA, Weinheim)", "which Excitation_frequency ?", "2.45 GHz", 117.0, 125.0], ["The efficient generation of reactive oxygen species (ROS) in cold atmospheric pressure plasma jets (APPJs) is an increasingly important topic, e.g. for the treatment of temperature sensitive biological samples in the field of plasma medicine. A 13.56 MHz radio-frequency (rf) driven APPJ device operated with helium feed gas and small admixtures of oxygen (up to 1%), generating a homogeneous glow-mode plasma at low gas temperatures, was investigated. Absolute densities of ozone, one of the most prominent ROS, were measured across the 11 mm wide discharge channel by means of broadband absorption spectroscopy using the Hartley band centered at \u03bb = 255 nm. A two-beam setup with a reference beam in Mach-Zehnder configuration is employed for improved signal-to-noise ratio allowing high-sensitivity measurements in the investigated single-pass weak-absorbance regime. The results are correlated to gas temperature measurements, deduced from the rotational temperature of the N2 (C\u00b3\u03a0u \u2192 B\u00b3\u03a0g, \u03c5 = 0 \u2192 2) optical emission from introduced air impurities. The observed opposing trends of both quantities as a function of rf power input and oxygen admixture are analysed and explained in terms of a zero-dimensional plasma-chemical kinetics simulation. It is found that the gas temperature as well as the densities of O and O2(b\u00b9\u03a3g\u207a) influence the absolute O3 densities when the rf power is varied.", "which Excitation_frequency ?", "13.56", 245.0, 250.0], ["A plasma jet has been developed for etching materials at atmospheric pressure and between 100 and C. Gas mixtures containing helium, oxygen and carbon tetrafluoride were passed between an outer, grounded electrode and a centre electrode, which was driven by 13.56 MHz radio frequency power at 50 to 500 W. At a flow rate of , a stable, arc-free discharge was produced. This discharge extended out through a nozzle at the end of the electrodes, forming a plasma jet. Materials placed 0.5 cm downstream from the nozzle were etched at the following maximum rates: for Kapton ( and He only), for silicon dioxide, for tantalum and for tungsten. 
Optical emission spectroscopy was used to identify the electronically excited species inside the plasma and outside in the jet effluent.", "which Excitation_frequency ?", "13.56", 258.0, 263.0], ["The Integrated Microwave Atmospheric Plasma Source (IMAPlaS) operating with a microwave resonator at 2.45 GHz driven by a solid-state transistor oscillator generates a core plasma of high temperature (T > 1000 K), therefore producing reactive species such as NO very effectively. The effluent of the plasma source is much colder, which enables direct treatment of thermolabile materials or even living tissue. In this study the source was operated with argon, helium and nitrogen with gas flow rates between 0.3 and 1.0 slm. Depending on working gas and distance, axial gas temperatures between 30 and 250 \u00b0C were determined in front of the nozzle. Reactive species were identified by emission spectroscopy in the spectral range from vacuum ultraviolet to near infrared. The irradiance in the ultraviolet range was also measured. Using B. atrophaeus spores to test antimicrobial efficiency, we determined log10-reduction rates of up to a factor of 4.", "which Excitation_frequency ?", "2.45 GHz", 101.0, 109.0], ["The vacuum ultraviolet (VUV) emissions from 115 to 200 nm from the effluent of an RF (1.2 MHz) capillary jet fed with pure argon and binary mixtures of argon and xenon or krypton (up to 20%) are analyzed. The feed gas mixture is emanating into air at normal pressure. The Ar2 excimer second continuum, observed in the region of 120-135 nm, prevails in the pure Ar discharge. It decreases when small amounts (as low as 0.5%) of Xe or Kr are added. In that case, the resonant emission of Xe at 147 nm (or 124 nm for Kr, respectively) becomes dominant. The Xe2 second continuum at 172 nm appears for higher admixtures of Xe (10%). Furthermore, several N I emission lines, the O I resonance line, and H I line appear due to ambient air. Two absorption bands (120.6 and 124.6 nm) are present in the spectra. Their origin could be unequivocally associated to O2 and O3. The radiance is determined end-on at varying axial distance in absolute units for various mixtures of Ar/Xe and Ar/Kr and compared to pure Ar. Integration over the entire VUV wavelength region provides the integrated spectral distribution. Maximum values of 2.2 mW\u00b7mm-2\u00b7sr-1 are attained in pure Ar and at a distance of 4 mm from the outlet nozzle of the discharge. By adding diminutive admixtures of Kr or Xe, the intensity and spectral distribution is effectively changed.", "which Excitation_frequency ?", "1.2", 86.0, 89.0], ["The planar 13.56 MHz RF-excited low temperature atmospheric pressure plasma jet (APPJ) investigated in this study is operated with helium feed gas and a small molecular oxygen admixture. The effluent leaving the discharge through the jet's nozzle contains very few charged particles and a high reactive oxygen species' density. As its main reactive radical, essential for numerous applications, the ground state atomic oxygen density in the APPJ's effluent is measured spatially resolved with two-photon absorption laser induced fluorescence spectroscopy. The atomic oxygen density at the nozzle reaches a value of ~10\u00b9\u2076 cm\u22123. Even at several centimetres distance still 1% of this initial atomic oxygen density can be detected. Optical emission spectroscopy (OES) reveals the presence of short living excited oxygen atoms up to 10 cm distance from the jet's nozzle. 
The measured high ground state atomic oxygen density and the unaccounted for presence of excited atomic oxygen require further investigations on a possible energy transfer from the APPJ's discharge region into the effluent: energetic vacuum ultraviolet radiation, measured by OES down to 110 nm, reaches far into the effluent where it is presumed to be responsible for the generation of atomic oxygen.", "which Excitation_frequency ?", "13.56", 11.0, 16.0], ["An extensive electrical study was performed on a coaxial geometry atmospheric pressure plasma jet source in helium, driven by 30 kHz sine voltage. Two modes of operation were observed, a highly reproducible low-power mode that features the emission of one plasma bullet per voltage period and an erratic high-power mode in which micro-discharges appear around the grounded electrode. The minimum of power transfer efficiency corresponds to the transition between the two modes. Effective capacitance was identified as a varying property influenced by the discharge and the dissipated power. The charge carried by plasma bullets was found to be a small fraction of charge produced in the source irrespective of input power and configuration of the grounded electrode. The biggest part of the produced charge stays localized in the plasma source and below the grounded electrode, in the range 1.2\u20133.3 nC for ground length of 3\u20138 mm.", "which Excitation_frequency ?", "30 kHz", 126.0, 132.0], ["BACKGROUND An initial clinical assessment is described of a new, commercially available, computer-aided diagnosis (CAD) system using artificial intelligence (AI) for thyroid ultrasound, and its performance is evaluated in the diagnosis of malignant thyroid nodules and categorization of nodule characteristics. METHODS Patients with thyroid nodules with decisive diagnosis, whether benign or malignant, were consecutively enrolled from November 2015 to February 2016. An experienced radiologist reviewed the ultrasound image characteristics of the thyroid nodules, while another radiologist assessed the same thyroid nodules using the CAD system, providing ultrasound characteristics and a diagnosis of whether nodules were benign or malignant. The diagnostic performance and agreement of US characteristics between the experienced radiologist and the CAD system were compared. RESULTS In total, 102 thyroid nodules from 89 patients were included; 59 (57.8%) were benign and 43 (42.2%) were malignant. The CAD system showed a similar sensitivity as the experienced radiologist (90.7% vs. 88.4%, p > 0.99), but a lower specificity and a lower area under the receiver operating characteristic (AUROC) curve (specificity: 74.6% vs. 94.9%, p = 0.002; AUROC: 0.83 vs. 0.92, p = 0.021). Classifications of the ultrasound characteristics (composition, orientation, echogenicity, and spongiform) between radiologist and CAD system were in substantial agreement (\u03ba = 0.659, 0.740, 0.733, and 0.658, respectively), while the margin showed a fair agreement (\u03ba = 0.239). CONCLUSION The sensitivity of the CAD system using AI for malignant thyroid nodules was as good as that of the experienced radiologist, while specificity and accuracy were lower than those of the experienced radiologist. 
The CAD system showed an acceptable agreement with the experienced radiologist for characterization of thyroid nodules.", "which Specificity CAD-System ?", "74.6%", NaN, NaN], ["BACKGROUND An initial clinical assessment is described of a new, commercially available, computer-aided diagnosis (CAD) system using artificial intelligence (AI) for thyroid ultrasound, and its performance is evaluated in the diagnosis of malignant thyroid nodules and categorization of nodule characteristics. METHODS Patients with thyroid nodules with decisive diagnosis, whether benign or malignant, were consecutively enrolled from November 2015 to February 2016. An experienced radiologist reviewed the ultrasound image characteristics of the thyroid nodules, while another radiologist assessed the same thyroid nodules using the CAD system, providing ultrasound characteristics and a diagnosis of whether nodules were benign or malignant. The diagnostic performance and agreement of US characteristics between the experienced radiologist and the CAD system were compared. RESULTS In total, 102 thyroid nodules from 89 patients were included; 59 (57.8%) were benign and 43 (42.2%) were malignant. The CAD system showed a similar sensitivity as the experienced radiologist (90.7% vs. 88.4%, p > 0.99), but a lower specificity and a lower area under the receiver operating characteristic (AUROC) curve (specificity: 74.6% vs. 94.9%, p = 0.002; AUROC: 0.83 vs. 0.92, p = 0.021). Classifications of the ultrasound characteristics (composition, orientation, echogenicity, and spongiform) between radiologist and CAD system were in substantial agreement (\u03ba = 0.659, 0.740, 0.733, and 0.658, respectively), while the margin showed a fair agreement (\u03ba = 0.239). CONCLUSION The sensitivity of the CAD system using AI for malignant thyroid nodules was as good as that of the experienced radiologist, while specificity and accuracy were lower than those of the experienced radiologist. The CAD system showed an acceptable agreement with the experienced radiologist for characterization of thyroid nodules.", "which Sensitivity Radiologist ?", "88.4%", NaN, NaN], ["BACKGROUND An initial clinical assessment is described of a new, commercially available, computer-aided diagnosis (CAD) system using artificial intelligence (AI) for thyroid ultrasound, and its performance is evaluated in the diagnosis of malignant thyroid nodules and categorization of nodule characteristics. METHODS Patients with thyroid nodules with decisive diagnosis, whether benign or malignant, were consecutively enrolled from November 2015 to February 2016. An experienced radiologist reviewed the ultrasound image characteristics of the thyroid nodules, while another radiologist assessed the same thyroid nodules using the CAD system, providing ultrasound characteristics and a diagnosis of whether nodules were benign or malignant. The diagnostic performance and agreement of US characteristics between the experienced radiologist and the CAD system were compared. RESULTS In total, 102 thyroid nodules from 89 patients were included; 59 (57.8%) were benign and 43 (42.2%) were malignant. The CAD system showed a similar sensitivity as the experienced radiologist (90.7% vs. 88.4%, p > 0.99), but a lower specificity and a lower area under the receiver operating characteristic (AUROC) curve (specificity: 74.6% vs. 94.9%, p = 0.002; AUROC: 0.83 vs. 0.92, p = 0.021). 
Classifications of the ultrasound characteristics (composition, orientation, echogenicity, and spongiform) between radiologist and CAD system were in substantial agreement (\u03ba = 0.659, 0.740, 0.733, and 0.658, respectively), while the margin showed a fair agreement (\u03ba = 0.239). CONCLUSION The sensitivity of the CAD system using AI for malignant thyroid nodules was as good as that of the experienced radiologist, while specificity and accuracy were lower than those of the experienced radiologist. The CAD system showed an acceptable agreement with the experienced radiologist for characterization of thyroid nodules.", "which Sensitivity CAD-System ?", "90.7%", NaN, NaN], ["BACKGROUND An initial clinical assessment is described of a new, commercially available, computer-aided diagnosis (CAD) system using artificial intelligence (AI) for thyroid ultrasound, and its performance is evaluated in the diagnosis of malignant thyroid nodules and categorization of nodule characteristics. METHODS Patients with thyroid nodules with decisive diagnosis, whether benign or malignant, were consecutively enrolled from November 2015 to February 2016. An experienced radiologist reviewed the ultrasound image characteristics of the thyroid nodules, while another radiologist assessed the same thyroid nodules using the CAD system, providing ultrasound characteristics and a diagnosis of whether nodules were benign or malignant. The diagnostic performance and agreement of US characteristics between the experienced radiologist and the CAD system were compared. RESULTS In total, 102 thyroid nodules from 89 patients were included; 59 (57.8%) were benign and 43 (42.2%) were malignant. The CAD system showed a similar sensitivity as the experienced radiologist (90.7% vs. 88.4%, p > 0.99), but a lower specificity and a lower area under the receiver operating characteristic (AUROC) curve (specificity: 74.6% vs. 94.9%, p = 0.002; AUROC: 0.83 vs. 0.92, p = 0.021). Classifications of the ultrasound characteristics (composition, orientation, echogenicity, and spongiform) between radiologist and CAD system were in substantial agreement (\u03ba = 0.659, 0.740, 0.733, and 0.658, respectively), while the margin showed a fair agreement (\u03ba = 0.239). CONCLUSION The sensitivity of the CAD system using AI for malignant thyroid nodules was as good as that of the experienced radiologist, while specificity and accuracy were lower than those of the experienced radiologist. The CAD system showed an acceptable agreement with the experienced radiologist for characterization of thyroid nodules.", "which Specificity Radiologist ?", "94.9%", NaN, NaN], ["We present a counterfactual recognition (CR) task, the shared Task 5 of SemEval-2020. Counterfactuals describe potential outcomes (consequents) produced by actions or circumstances that did not happen or cannot happen and are counter to the facts (antecedent). Counterfactual thinking is an important characteristic of the human cognitive system; it connects antecedents and consequent with causal relations. Our task provides a benchmark for counterfactual recognition in natural language with two subtasks. Subtask-1 aims to determine whether a given sentence is a counterfactual statement or not. Subtask-2 requires the participating systems to extract the antecedent and consequent in a given counterfactual statement. During the SemEval-2020 official evaluation period, we received 27 submissions to Subtask-1 and 11 to Subtask-2. 
Our data and baseline code are made publicly available at https://zenodo.org/record/3932442. The task website and leaderboard can be found at https://competitions.codalab.org/competitions/21691.", "which Subtask 1 ?", "27 submissions", 787.0, 801.0], ["We present a counterfactual recognition (CR) task, the shared Task 5 of SemEval-2020. Counterfactuals describe potential outcomes (consequents) produced by actions or circumstances that did not happen or cannot happen and are counter to the facts (antecedent). Counterfactual thinking is an important characteristic of the human cognitive system; it connects antecedents and consequent with causal relations. Our task provides a benchmark for counterfactual recognition in natural language with two subtasks. Subtask-1 aims to determine whether a given sentence is a counterfactual statement or not. Subtask-2 requires the participating systems to extract the antecedent and consequent in a given counterfactual statement. During the SemEval-2020 official evaluation period, we received 27 submissions to Subtask-1 and 11 to Subtask-2. Our data and baseline code are made publicly available at https://zenodo.org/record/3932442. The task website and leaderboard can be found at https://competitions.codalab.org/competitions/21691.", "which Subtask 1 ?", "Determine whether a given sentence is a counterfactual statement or not", 527.0, 598.0], ["Neural network language models (NNLMs) have achieved ever-improving accuracy due to more sophisticated architectures and increasing amounts of training data. However, the inductive bias of these models (formed by the distributional hypothesis of language), while ideally suited to modeling most running text, results in key limitations for today's models. In particular, the models often struggle to learn certain spatial, temporal, or quantitative relationships, which are commonplace in text and are second-nature for human readers. Yet, in many cases, these relationships can be encoded with simple mathematical or logical expressions. How can we augment today's neural models with such encodings? In this paper, we propose a general methodology to enhance the inductive bias of NNLMs by incorporating simple functions into a neural architecture to form a hierarchical neural-symbolic language model (NSLM). These functions explicitly encode symbolic deterministic relationships to form probability distributions over words. We explore the effectiveness of this approach on numbers and geographic locations, and show that NSLMs significantly reduce perplexity in small-corpus language modeling, and that the performance improvement persists for rare tokens even on much larger corpora. The approach is simple and general, and we discuss how it can be applied to other word classes beyond numbers and geography.", "which description ?", "Neural network language models (NNLMs)", NaN, NaN], ["In this paper we describe the SemEval-2010 Cross-Lingual Lexical Substitution task, where given an English target word in context, participating systems had to find an alternative substitute word or phrase in Spanish. The task is based on the English Lexical Substitution task run at SemEval-2007.
In this paper we provide background and motivation for the task, we describe the data annotation process and the scoring system, and present the results of the participating systems.", "which description ?", "given an English target word in context, participating systems had to find an alternative substitute word or phrase in Spanish", 90.0, 216.0], ["Research on definition extraction has been conducted for well over a decade, largely with significant constraints on the type of definitions considered. In this work, we present DeftEval, a SemEval shared task in which participants must extract definitions from free text using a term-definition pair corpus that reflects the complex reality of definitions in natural language. Definitions and glosses in free text often appear without explicit indicators, across sentences boundaries, or in an otherwise complex linguistic manner. DeftEval involved 3 distinct subtasks: 1) Sentence classification, 2) sequence labeling, and 3) relation extraction.", "which description ?", "a SemEval shared task in which participants must extract definitions from free text using a term-definition pair corpus that reflects the complex reality of definitions in natural language", 188.0, 376.0], ["This paper studies the importance of identifying and categorizing scientific concepts as a way to achieve a deeper understanding of the research literature of a scientific community. To reach this goal, we propose an unsupervised bootstrapping algorithm for identifying and categorizing mentions of concepts. We then propose a new clustering algorithm that uses citations' context as a way to cluster the extracted mentions into coherent concepts. Our evaluation of the algorithms against gold standards shows significant improvement over state-of-the-art results. More importantly, we analyze the computational linguistic literature using the proposed algorithms and show four different ways to summarize and understand the research community which are difficult to obtain using existing techniques.", "which description ?", "propose an unsupervised bootstrapping algorithm for identifying and categorizing mentions of concepts. We then propose a new clustering algorithm that uses citations' context as a way to cluster the extracted mentions into coherent concepts.", NaN, NaN], ["In Semantic Textual Similarity (STS), systems rate the degree of semantic equivalence, on a graded scale from 0 to 5, with 5 being the most similar. This year we set up two tasks: (i) a core task (CORE), and (ii) a typed-similarity task (TYPED). CORE is similar in set up to SemEval STS 2012 task with pairs of sentences from sources related to those of 2012, yet different in genre from the 2012 set, namely, this year we included newswire headlines, machine translation evaluation datasets and multiple lexical resource glossed sets. TYPED, on the other hand, is novel and tries to characterize why two items are deemed similar, using cultural heritage items which are described with metadata such as title, author or description. Several types of similarity have been defined, including similar author, similar time period or similar location. The annotation for both tasks leverages crowdsourcing, with relative high interannotator correlation, ranging from 62% to 87%. 
The CORE task attracted 34 participants with 89 runs, and the TYPED task attracted 6 teams with 14 runs.", "which description ?", "CORE is similar in set up to SemEval STS 2012 task with pairs of sentences from sources related to those of 2012, yet different in genre from the 2012 set, namely, this year we included newswire headlines, machine translation evaluation datasets and multiple lexical resource glossed sets.", NaN, NaN], ["We present the SemEval 2019 shared task on Universal Conceptual Cognitive Annotation (UCCA) parsing in English, German and French, and discuss the participating systems and results. UCCA is a cross-linguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation. UCCA poses a challenge for existing parsing techniques, as it exhibits reentrancy (resulting in DAG structures), discontinuous structures and non-terminal nodes corresponding to complex semantic units. The shared task has yielded improvements over the state-of-the-art baseline in all languages and settings. Full results can be found in the task\u2019s website https://competitions.codalab.org/competitions/19160.", "which description ?", "Universal Conceptual Cognitive Annotation (UCCA) parsing in English, German and French", NaN, NaN], ["A fundamental problem related to RDF query processing is selectivity estimation, which is crucial to query optimization for determining a join order of RDF triple patterns. In this paper we focus research on selectivity estimation for SPARQL graph patterns. The previous work takes the join uniformity assumption when estimating the joined triple patterns. This assumption would lead to highly inaccurate estimations in the cases where properties in SPARQL graph patterns are correlated. We take into account the dependencies among properties in SPARQL graph patterns and propose a more accurate estimation model. Since star and chain query patterns are common in SPARQL graph patterns, we first focus on these two basic patterns and propose to use Bayesian network and chain histogram respectively for estimating the selectivity of them. Then, for estimating the selectivity of an arbitrary SPARQL graph pattern, we design algorithms for maximally using the precomputed statistics of the star paths and chain paths. The experiments show that our method outperforms existing approaches in accuracy.", "which description ?", "Selectivity Estimation", 57.0, 79.0], ["We present a method for characterizing a research work in terms of its focus, domain of application, and techniques used. We show how tracing these aspects over time provides a novel measure of the influence of research communities on each other. We extract these characteristics by matching semantic extraction patterns, learned using bootstrapping, to the dependency trees of sentences in an article\u2019s", "which description ?", "show how tracing these aspects over time provides a novel measure of the influence of research communities on each other.", NaN, NaN], ["We present a method for characterizing a research work in terms of its focus, domain of application, and techniques used. We show how tracing these aspects over time provides a novel measure of the influence of research communities on each other. 
We extract these characteristics by matching semantic extraction patterns, learned using bootstrapping, to the dependency trees of sentences in an article\u2019s", "which description ?", "present a method for characterizing a research work in terms of its focus, domain of application, and techniques used.", NaN, NaN], ["This paper presents the Graded Word Similarity in Context (GWSC) task which asked participants to predict the effects of context on human perception of similarity in English, Croatian, Slovene and Finnish. We received 15 submissions and 11 system description papers. A new dataset (CoSimLex) was created for evaluation in this task: it contains pairs of words, each annotated within two different contexts. Systems beat the baselines by significant margins, but few did well in more than one language or subtask. Almost every system employed a Transformer model, but with many variations in the details: WordNet sense embeddings, translation of contexts, TF-IDF weightings, and the automatic creation of datasets for fine-tuning were all used to good effect.", "which description ?", "to predict the effects of context on human perception of similarity in English, Croatian, Slovene and Finnish", 95.0, 204.0], ["Gene Ontology (GO) annotation is a common task among model organism databases (MODs) for capturing gene function data from journal articles. It is a time-consuming and labor-intensive task, and is thus often considered as one of the bottlenecks in literature curation. There is a growing need for semiautomated or fully automated GO curation techniques that will help database curators to rapidly and accurately identify gene function information in full-length articles. Despite multiple attempts in the past, few studies have proven to be useful with regard to assisting real-world GO curation. The shortage of sentence-level training data and opportunities for interaction between text-mining developers and GO curators has limited the advances in algorithm development and corresponding use in practical circumstances. To this end, we organized a text-mining challenge task for literature-based GO annotation in BioCreative IV. More specifically, we developed two subtasks: (i) to automatically locate text passages that contain GO-relevant information (a text retrieval task) and (ii) to automatically identify relevant GO terms for the genes in a given article (a concept-recognition task). With the support from five MODs, we provided teams with >4000 unique text passages that served as the basis for each GO annotation in our task data. Such evidence text information has long been recognized as critical for text-mining algorithm development but was never made available because of the high cost of curation. In total, seven teams participated in the challenge task. From the team results, we conclude that the state of the art in automatically mining GO terms from literature has improved over the past decade while much progress is still needed for computer-assisted GO curation. Future work should focus on addressing remaining technical challenges for improved performance of automatic GO concept recognition and incorporating practical benefits of text-mining tools into real-world GO annotation. 
Database URL: http://www.biocreative.org/tasks/biocreative-iv/track-4-GO/.", "which description ?", "developed two subtasks: (i) to automatically locate text passages that contain GO-relevant information (a text retrieval task) and (ii) to automatically identify relevant GO terms for the genes in a given article (a concept-recognition task).", NaN, NaN], ["Extracting information from full documents is an important problem in many domains, but most previous work focus on identifying relationships within a sentence or a paragraph. It is challenging to create a large-scale information extraction (IE) dataset at the document level since it requires an understanding of the whole document to annotate entities and their document-level relationships that usually span beyond sentences or even sections. In this paper, we introduce SciREX, a document level IE dataset that encompasses multiple IE tasks, including salient entity identification and document level N-ary relation identification from scientific articles. We annotate our dataset by integrating automatic and human annotations, leveraging existing scientific knowledge resources. We develop a neural model as a strong baseline that extends previous state-of-the-art IE models to document-level IE. Analyzing the model performance shows a significant gap between human performance and current baselines, inviting the community to use our dataset as a challenge to develop document-level IE models. Our data and code are publicly available at https://github.com/allenai/SciREX .", "which description ?", "develop a neural model as a strong baseline that extends previous state-of-the-art IE models to document-level IE.", NaN, NaN], ["We present a database of annotated biomedical text corpora merged into a portable data structure with uniform conventions. MedTag combines three corpora, MedPost, ABGene and GENETAG, within a common relational database data model. The GENETAG corpus has been modified to reflect new definitions of genes and proteins. The MedPost corpus has been updated to include 1,000 additional sentences from the clinical medicine domain. All data have been updated with original MEDLINE text excerpts, PubMed identifiers, and tokenization independence to facilitate data accuracy, consistency and usability. The data are available in flat files along with software to facilitate loading the data into a relational SQL database from ftp://ftp.ncbi.nlm.nih.gov/pub/lsmith/MedTag/medtag.tar.gz.", "which description ?", "MedTag combines three corpora, MedPost, ABGene and GENETAG, within a common relational database data model.", NaN, NaN], ["In this paper we present GATE, a framework and graphical development environment which enables users to develop and deploy language engineering components and resources in a robust fashion. The GATE architecture has enabled us not only to develop a number of successful applications for various language processing tasks (such as Information Extraction), but also to build and annotate corpora and carry out evaluations on the applications generated. The framework can be used to develop applications and resources in multiple languages, based on its thorough Unicode support.", "which description ?", "framework and graphical development environment which enables users to develop and deploy language engineering components and resources in a robust fashion. 
The GATE architecture has enabled us not only to develop a number of successful applications for various language processing tasks (such as Information Extraction), but also to build and annotate corpora and carry out evaluations on the applications generated.", NaN, NaN], ["The MEDIQA 2021 shared tasks at the BioNLP 2021 workshop addressed three tasks on summarization for medical text: (i) a question summarization task aimed at exploring new approaches to understanding complex real-world consumer health queries, (ii) a multi-answer summarization task that targeted aggregation of multiple relevant answers to a biomedical question into one concise and relevant answer, and (iii) a radiology report summarization task addressing the development of clinically relevant impressions from radiology report findings. Thirty-five teams participated in these shared tasks with sixteen working notes submitted (fifteen accepted) describing a wide variety of models developed and tested on the shared and external datasets. In this paper, we describe the tasks, the datasets, the models and techniques developed by various teams, the results of the evaluation, and a study of correlations among various summarization evaluation measures. We hope that these shared tasks will bring new research and insights in biomedical text summarization and evaluation.", "which description ?", "addressed three tasks on summarization for medical text: (i) a question summarization task aimed at exploring new approaches to understanding complex real-world consumer health queries, (ii) a multi-answer summarization task that targeted aggregation of multiple relevant answers to a biomedical question into one concise and relevant answer, and (iii) a radiology report summarization task addressing the development of clinically relevant impressions from radiology report findings", NaN, NaN], ["Modelling language change is an increasingly important area of interest within the fields of sociolinguistics and historical linguistics. In recent years, there has been a growing number of publications whose main concern is studying changes that have occurred within the past centuries. The Corpus of Historical American English (COHA) is one of the most commonly used large corpora in diachronic studies in English. This paper describes methods applied to the downloadable version of the COHA corpus in order to overcome its main limitations, such as inconsistent lemmas and malformed tokens, without compromising its qualitative and distributional properties. The resulting corpus CCOHA contains a larger number of cleaned word tokens which can offer better insights into language change and allow for a larger variety of tasks to be performed.", "which description ?", "The Corpus of Historical American English (COHA) is one of the most commonly used large corpora in diachronic studies in English", NaN, NaN], ["In Semantic Textual Similarity (STS), systems rate the degree of semantic equivalence, on a graded scale from 0 to 5, with 5 being the most similar. This year we set up two tasks: (i) a core task (CORE), and (ii) a typed-similarity task (TYPED). CORE is similar in set up to SemEval STS 2012 task with pairs of sentences from sources related to those of 2012, yet different in genre from the 2012 set, namely, this year we included newswire headlines, machine translation evaluation datasets and multiple lexical resource glossed sets. 
TYPED, on the other hand, is novel and tries to characterize why two items are deemed similar, using cultural heritage items which are described with metadata such as title, author or description. Several types of similarity have been defined, including similar author, similar time period or similar location. The annotation for both tasks leverages crowdsourcing, with relative high interannotator correlation, ranging from 62% to 87%. The CORE task attracted 34 participants with 89 runs, and the TYPED task attracted 6 teams with 14 runs.", "which description ?", "tries to characterize why two items are deemed similar, using cultural heritage items which are described with metadata such as title, author or description. Several types of similarity have been defined, including similar author, similar time period or similar location.", NaN, NaN], ["In this paper, we describe HeidelTime, a system for the extraction and normalization of temporal expressions. HeidelTime is a rule-based system mainly using regular expression patterns for the extraction of temporal expressions and knowledge resources as well as linguistic clues for their normalization. In the TempEval-2 challenge, HeidelTime achieved the highest F-Score (86%) for the extraction and the best results in assigning the correct value attribute, i.e., in understanding the semantics of the temporal expressions.", "which description ?", "HeidelTime is a rule-based system mainly using regular expression patterns for the extraction of temporal expressions and knowledge resources as well as linguistic clues for their normalization.", NaN, NaN], ["The Corpus of Historical American English (COHA) contains 400 million words in more than 100,000 texts which date from the 1810s to the 2000s. The corpus contains texts from fiction, popular magazines, newspapers and non-fiction books, and is balanced by genre from decade to decade. It has been carefully lemmatised and tagged for part-of-speech, and uses the same architecture as the Corpus of Contemporary American English (COCA), BYU-BNC, the TIME Corpus and other corpora. COHA allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English.", "which description ?", "allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English", NaN, NaN], ["We present a counterfactual recognition (CR) task, the shared Task 5 of SemEval-2020. Counterfactuals describe potential outcomes (consequents) produced by actions or circumstances that did not happen or cannot happen and are counter to the facts (antecedent). Counterfactual thinking is an important characteristic of the human cognitive system; it connects antecedents and consequent with causal relations. Our task provides a benchmark for counterfactual recognition in natural language with two subtasks. Subtask-1 aims to determine whether a given sentence is a counterfactual statement or not. Subtask-2 requires the participating systems to extract the antecedent and consequent in a given counterfactual statement. 
During the SemEval-2020 official evaluation period, we received 27 submissions to Subtask-1 and 11 to Subtask-2. Our data and baseline code are made publicly available at https://zenodo.org/record/3932442. The task website and leaderboard can be found at https://competitions.codalab.org/competitions/21691.", "which description ?", "Counterfactual recognition in natural language", 443.0, 489.0], ["Despite significant progress in natural language processing, machine learning models require substantial expert-annotated training data to perform well in tasks such as named entity recognition (NER) and entity relations extraction. Furthermore, NER is often more complicated when working with scientific text. For example, in polymer science, chemical structure may be encoded using nonstandard naming conventions, the same concept can be expressed using many different terms (synonymy), and authors may refer to polymers with ad-hoc labels. These challenges, which are not unique to polymer science, make it difficult to generate training data, as specialized skills are needed to label text correctly. We have previously designed polyNER, a semi-automated system for efficient identification of scientific entities in text. PolyNER applies word embedding models to generate entity-rich corpora for productive expert labeling, and then uses the resulting labeled data to bootstrap a context-based classifier. PolyNER facilitates a labeling process that is otherwise tedious and expensive. Here, we use active learning to efficiently obtain more annotations from experts and improve performance. Our approach requires just five hours of expert time to achieve discrimination capacity comparable to that of a state-of-the-art chemical NER toolkit.", "which description ?", "We have previously designed polyNER, a semi-automated system for efficient identification of scientific entities in text. PolyNER applies word embedding models to generate entity-rich corpora for productive expert labeling, and then uses the resulting labeled data to bootstrap a context-based classifier. PolyNER facilitates a labeling process that is otherwise tedious and expensive. Here, we use active learning to efficiently obtain more annotations from experts and improve performance.", NaN, NaN], ["Extracting information from full documents is an important problem in many domains, but most previous work focus on identifying relationships within a sentence or a paragraph. It is challenging to create a large-scale information extraction (IE) dataset at the document level since it requires an understanding of the whole document to annotate entities and their document-level relationships that usually span beyond sentences or even sections. In this paper, we introduce SciREX, a document level IE dataset that encompasses multiple IE tasks, including salient entity identification and document level N-ary relation identification from scientific articles. We annotate our dataset by integrating automatic and human annotations, leveraging existing scientific knowledge resources. We develop a neural model as a strong baseline that extends previous state-of-the-art IE models to document-level IE. Analyzing the model performance shows a significant gap between human performance and current baselines, inviting the community to use our dataset as a challenge to develop document-level IE models.
Our data and code are publicly available at https://github.com/allenai/SciREX .", "which description ?", "a document level IE dataset that encompasses multiple IE tasks, including salient entity identification and document level N-ary relation identification from scientific articles.", NaN, NaN], ["Abstract The novel coronavirus disease 2019 (COVID-19) caused by SARS-COV-2 has raised myriad of global concerns. There is currently no FDA approved antiviral strategy to alleviate the disease burden. The conserved 3-chymotrypsin-like protease (3CLpro), which controls coronavirus replication is a promising drug target for combating the coronavirus infection. This study screens some African plants derived alkaloids and terpenoids as potential inhibitors of coronavirus 3CLpro using in silico approach. Bioactive alkaloids (62) and terpenoids (100) of plants native to Africa were docked to the 3CLpro of the novel SARS-CoV-2. The top twenty alkaloids and terpenoids with high binding affinities to the SARS-CoV-2 3CLpro were further docked to the 3CLpro of SARS-CoV and MERS-CoV. The docking scores were compared with 3CLpro-referenced inhibitors (Lopinavir and Ritonavir). The top docked compounds were further subjected to ADME/Tox and Lipinski filtering analyses for drug-likeness prediction analysis. This ligand-protein interaction study revealed that more than half of the top twenty alkaloids and terpenoids interacted favourably with the coronaviruses 3CLpro, and had binding affinities that surpassed that of lopinavir and ritonavir. Also, a highly defined hit-list of seven compounds (10-Hydroxyusambarensine, Cryptoquindoline, 6-Oxoisoiguesterin, 22-Hydroxyhopan-3-one, Cryptospirolepine, Isoiguesterin and 20-Epibryonolic acid) were identified. Furthermore, four non-toxic, druggable plant derived alkaloids (10-Hydroxyusambarensine, and Cryptoquindoline) and terpenoids (6-Oxoisoiguesterin and 22-Hydroxyhopan-3-one), that bind to the receptor-binding site and catalytic dyad of SARS-CoV-2 3CLpro were identified from the predictive ADME/tox and Lipinski filter analysis. However, further experimental analyses are required for developing these possible leads into natural anti-COVID-19 therapeutic agents for combating the pandemic. Communicated by Ramaswamy H. Sarma", "which Bioactive compounds ?", "10-Hydroxyusambarensine, Cryptoquindoline, 6-Oxoisoiguesterin, 22-Hydroxyhopan-3-one, Cryptospirolepine, Isoiguesterin and 20-Epibryonolic acid", 1298.0, 1441.0], ["Mental workload assessment is essential for maintaining human health and preventing accidents. Most research on this issue is limited to a single task. However, cross-task assessment is indispensable for extending a pre-trained model to new workload conditions. Because brain dynamics are complex across different tasks, it is difficult to propose efficient human-designed features based on prior knowledge. Therefore, this paper proposes a concatenated structure of deep recurrent and 3D convolutional neural networks (R3DCNNs) to learn EEG features across different tasks without prior knowledge. First, this paper adds frequency and time dimensions to EEG topographic maps based on a Morlet wavelet transformation. Then, R3DCNN is proposed to simultaneously learn EEG features from the spatial, spectral, and temporal dimensions. The proposed model is validated based on the EEG signals collected from 20 subjects. This paper employs a binary classification of low and high mental workload across spatial n-back and arithmetic tasks.
The results show that the R3DCNN achieves an average accuracy of 88.9%, which is a significant increase compared with that of the state-of-the-art methods. In addition, the visualization of the convolutional layers demonstrates that the deep neural network can extract detailed features. These results indicate that R3DCNN is capable of identifying the mental workload levels for cross-task conditions.", "which Study cohort ?", "20 subjects", 905.0, 916.0], ["Abstract Background The task of recognizing and identifying species names in biomedical literature has recently been regarded as critical for a number of applications in text and data mining, including gene name recognition, species-specific document retrieval, and semantic enrichment of biomedical articles. Results In this paper we describe an open-source species name recognition and normalization software system, LINNAEUS, and evaluate its performance relative to several automatically generated biomedical corpora, as well as a novel corpus of full-text documents manually annotated for species mentions. LINNAEUS uses a dictionary-based approach (implemented as an efficient deterministic finite-state automaton) to identify species names and a set of heuristics to resolve ambiguous mentions. When compared against our manually annotated corpus, LINNAEUS performs with 94% recall and 97% precision at the mention level, and 98% recall and 90% precision at the document level. Our system successfully solves the problem of disambiguating uncertain species mentions, with 97% of all mentions in PubMed Central full-text documents resolved to unambiguous NCBI taxonomy identifiers. Conclusions LINNAEUS is an open source, stand-alone software system capable of recognizing and normalizing species name mentions with speed and accuracy, and can therefore be integrated into a range of bioinformatics and text-mining applications. The software and manually annotated corpus can be downloaded freely at http://linnaeus.sourceforge.net/.", "which Results ?", "94% recall and 97% precision at the mention level", 878.0, 927.0], ["Abstract Background The task of recognizing and identifying species names in biomedical literature has recently been regarded as critical for a number of applications in text and data mining, including gene name recognition, species-specific document retrieval, and semantic enrichment of biomedical articles. Results In this paper we describe an open-source species name recognition and normalization software system, LINNAEUS, and evaluate its performance relative to several automatically generated biomedical corpora, as well as a novel corpus of full-text documents manually annotated for species mentions. LINNAEUS uses a dictionary-based approach (implemented as an efficient deterministic finite-state automaton) to identify species names and a set of heuristics to resolve ambiguous mentions. When compared against our manually annotated corpus, LINNAEUS performs with 94% recall and 97% precision at the mention level, and 98% recall and 90% precision at the document level. Our system successfully solves the problem of disambiguating uncertain species mentions, with 97% of all mentions in PubMed Central full-text documents resolved to unambiguous NCBI taxonomy identifiers. Conclusions LINNAEUS is an open source, stand-alone software system capable of recognizing and normalizing species name mentions with speed and accuracy, and can therefore be integrated into a range of bioinformatics and text-mining applications. 
The software and manually annotated corpus can be downloaded freely at http://linnaeus.sourceforge.net/.", "which Results ?", "98% recall and 90% precision at the document level", 933.0, 983.0], ["Motivation: Coreference resolution, the process of identifying different mentions of an entity, is a very important component in a text-mining system. Compared with the work in news articles, the existing study of coreference resolution in biomedical texts is quite preliminary by only focusing on specific types of anaphors like pronouns or definite noun phrases, using heuristic methods, and running on small data sets. Therefore, there is a need for an in-depth exploration of this task in the biomedical domain. Results: In this article, we presented a learning-based approach to coreference resolution in the biomedical domain. We made three contributions in our study. Firstly, we annotated a large scale coreference corpus, MedCo, which consists of 1,999 medline abstracts in the GENIA data set. Secondly, we proposed a detailed framework for the coreference resolution task, in which we augmented the traditional learning model by incorporating non-anaphors into training. Lastly, we explored various sources of knowledge for coreference resolution, particularly, those that can deal with the complexity of biomedical texts. The evaluation on the MedCo corpus showed promising results. Our coreference resolution system achieved a high precision of 85.2% with a reasonable recall of 65.3%, obtaining an F-measure of 73.9%. The results also suggested that our augmented learning model significantly boosted precision (up to 24.0%) without much loss in recall (less than 5%), and brought a gain of over 8% in F-measure.", "which Results ?", "a high precision of 85.2%", NaN, NaN], ["Motivation: Coreference resolution, the process of identifying different mentions of an entity, is a very important component in a text-mining system. Compared with the work in news articles, the existing study of coreference resolution in biomedical texts is quite preliminary by only focusing on specific types of anaphors like pronouns or definite noun phrases, using heuristic methods, and running on small data sets. Therefore, there is a need for an in-depth exploration of this task in the biomedical domain. Results: In this article, we presented a learning-based approach to coreference resolution in the biomedical domain. We made three contributions in our study. Firstly, we annotated a large scale coreference corpus, MedCo, which consists of 1,999 medline abstracts in the GENIA data set. Secondly, we proposed a detailed framework for the coreference resolution task, in which we augmented the traditional learning model by incorporating non-anaphors into training. Lastly, we explored various sources of knowledge for coreference resolution, particularly, those that can deal with the complexity of biomedical texts. The evaluation on the MedCo corpus showed promising results. Our coreference resolution system achieved a high precision of 85.2% with a reasonable recall of 65.3%, obtaining an F-measure of 73.9%. The results also suggested that our augmented learning model significantly boosted precision (up to 24.0%) without much loss in recall (less than 5%), and brought a gain of over 8% in F-measure.", "which Results ?", "a reasonable recall of 65.3%", NaN, NaN], ["Motivation: Coreference resolution, the process of identifying different mentions of an entity, is a very important component in a text-mining system. 
Compared with the work in news articles, the existing study of coreference resolution in biomedical texts is quite preliminary by only focusing on specific types of anaphors like pronouns or definite noun phrases, using heuristic methods, and running on small data sets. Therefore, there is a need for an in-depth exploration of this task in the biomedical domain. Results: In this article, we presented a learning-based approach to coreference resolution in the biomedical domain. We made three contributions in our study. Firstly, we annotated a large scale coreference corpus, MedCo, which consists of 1,999 medline abstracts in the GENIA data set. Secondly, we proposed a detailed framework for the coreference resolution task, in which we augmented the traditional learning model by incorporating non-anaphors into training. Lastly, we explored various sources of knowledge for coreference resolution, particularly, those that can deal with the complexity of biomedical texts. The evaluation on the MedCo corpus showed promising results. Our coreference resolution system achieved a high precision of 85.2% with a reasonable recall of 65.3%, obtaining an F-measure of 73.9%. The results also suggested that our augmented learning model significantly boosted precision (up to 24.0%) without much loss in recall (less than 5%), and brought a gain of over 8% in F-measure.", "which Results ?", "an F-measure of 73.9%", NaN, NaN], ["Abstract What is the effect of social distancing policies on the spread of the new coronavirus? Social distancing policies rose to prominence as most capable of containing contagion and saving lives. Our purpose in this paper is to identify the causal effect of social distancing policies on the number of confirmed cases of COVID-19 and on contagion velocity. We align our main argument with the existing scientific consensus: social distancing policies negatively affect the number of cases. To test this hypothesis, we construct a dataset with daily information on 78 affected countries in the world. We compute several relevant measures from publicly available information on the number of cases and deaths to estimate causal effects for short-term and cumulative effects of social distancing policies. We use a time-series cross-sectional matching approach to match countries\u2019 observable histories. Causal effects (ATTs and ATEs) can be extracted via a dif-in-dif estimator. Results show that social distancing policies reduce the aggregated number of cases by 4,832 on average (or 17.5/100 thousand), but only when strict measures are adopted. This effect seems to manifest from the third week onwards.", "which Results ?", "Social distancing policies reduce the aggregated number of cases", 998.0, 1062.0], ["In this work, we investigated the differential interaction of amphiphilic antimicrobial peptides with 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) lipid structures by means of extensive molecular dynamics simulations. By using a coarse-grained (CG) model within the MARTINI force field, we simulated the peptide\u2013lipid system from three different initial configurations: (a) peptides in water in the presence of a pre-equilibrated lipid bilayer; (b) peptides inside the hydrophobic core of the membrane; and (c) random configurations that allow self-assembled molecular structures. This last approach allowed us to sample the structural space of the systems and consider cooperative effects.
The peptides used in our simulations are aurein 1.2 and maculatin 1.1, two well-known antimicrobial peptides from the Australian tree frogs, and molecules that present different membrane-perturbing behaviors. Our results showed differential behaviors for each type of peptide seen in a different organization that could guide a molecular interpretation of the experimental data. While both peptides are capable of forming membrane aggregates, the aurein 1.2 ones have a pore-like structure and exhibit a higher level of organization than those conformed by maculatin 1.1. Furthermore, maculatin 1.1 has a strong tendency to form clusters and induce curvature at low peptide\u2013lipid ratios. The exploration of the possible lipid\u2013peptide structures, as the one carried out here, could be a good tool for recognizing specific configurations that should be further studied with more sophisticated methodologies.", "which Lead compound ?", "Aurein 1.2", 743.0, 753.0], ["Membrane lysis caused by antibiotic peptides is often rationalized by means of two different models: the so-called carpet model and the pore-forming model. We report here on the lytic activity of antibiotic peptides from Australian tree frogs, maculatin 1.1, citropin 1.1, and aurein 1.2, on POPC or POPC/POPG model membranes. Leakage experiments using fluorescence spectroscopy indicated that the peptide/lipid mol ratio necessary to induce 50% of probe leakage was smaller for maculatin compared with aurein or citropin, regardless of lipid membrane composition. To gain further insight into the lytic mechanism of these peptides we performed single vesicle experiments using confocal fluorescence microscopy. In these experiments, the time course of leakage for different molecular weight (water soluble) fluorescent markers incorporated inside of single giant unilamellar vesicles is observed after peptide exposure. We conclude that maculatin and its related peptides demonstrate a pore-forming mechanism (differential leakage of small fluorescent probe compared with high molecular weight markers). Conversely, citropin and aurein provoke a total membrane destabilization with vesicle burst without sequential probe leakage, an effect that can be assigned to a carpeting mechanism of lytic action. Additionally, to study the relevance of the proline residue on the membrane-action properties of maculatin, the same experimental approach was used for maculatin-Ala and maculatin-Gly (Pro-15 was replaced by Ala or Gly, respectively). Although a similar peptide/lipid mol ratio was necessary to induce 50% of leakage for POPC membranes, the lytic activity of maculatin-Ala and maculatin-Gly decreased in POPC/POPG (1:1 mol) membranes compared with that observed for the naturally occurring maculatin sequence. As observed for maculatin, the lytic action of Maculatin-Ala and maculatin-Gly is in keeping with the formation of pore-like structures at the membrane independently of lipid composition.", "which Lead compound ?", "Aurein 1.2", 277.0, 287.0], ["Aurein 1.2 is an antimicrobial peptide from the skin secretion of an Australian frog. In previous experimental work, we reported a differential action of aurein 1.2 on two probiotic strains Lactobacillus delbrueckii subsp. Bulgaricus (CIDCA331) and Lactobacillus delbrueckii subsp. Lactis (CIDCA133). The differences found were attributed to the bilayer compositions.
Cell cultures and CIDCA331-derived liposomes showed higher susceptibility than the ones derived from the CIDCA133 strain, leading to content leakage and structural disruption. Here, we used Molecular Dynamics simulations to explore these systems at atomistic level. We hypothesize that if the antimicrobial peptides organized themselves to form a pore, it will be more stable in membranes that emulate the CIDCA331 strain than in those of the CIDCA133 strain. To test this hypothesis, we simulated pre-assembled aurein 1.2 pores embedded into bilayer models that emulate the two probiotic strains. It was found that the general behavior of the systems depends on the composition of the membrane rather than the pre-assembled system characteristics. Overall, it was observed that aurein 1.2 pores are more stable in the CIDCA331 model membranes. This fact coincides with the high susceptibility of this strain against antimicrobial peptide. In contrast, in the case of the CIDCA133 model membranes, peptides migrate to the water-lipid interphase, the pore shrinks and the transport of water through the pore is reduced. The tendency of glycolipids to make hydrogen bonds with peptides destabilizes the pore structures. This feature is observed to a lesser extent in CIDCA331 due to the presence of anionic lipids. Glycolipid transverse diffusion (flip-flop) between monolayers occurs in the pore surface region in all the cases considered. These findings expand our understanding of the antimicrobial peptide resistance properties of probiotic strains.", "which Lead compound ?", "Aurein 1.2", 0.0, 10.0], ["The activity of a host of antimicrobial peptides has been examined against a range of lipid bilayers mimicking bacterial and eukaryotic membranes. Despite this, the molecular mechanisms and the nature of the physicochemical properties underlying the peptide\u2013lipid interactions that lead to membrane disruption are yet to be fully elucidated. In this study, the interaction of the short antimicrobial peptide aurein 1.2 was examined in the presence of an anionic cardiolipin-containing lipid bilayer using molecular dynamics simulations. Aurein 1.2 is known to interact strongly with anionic lipid membranes. In the simulations, the binding of aurein 1.2 was associated with buckling of the lipid bilayer, the degree of which varied with the peptide concentration. The simulations suggest that the intrinsic properties of cardiolipin, especially the fact that it promotes negative membrane curvature, may help protect membranes against the action of peptides such as aurein 1.2 by counteracting the tendency of the peptide to induce positive curvature in target membranes.", "which Lead compound ?", "Aurein 1.2", 408.0, 418.0], ["Membrane lysis caused by antibiotic peptides is often rationalized by means of two different models: the so-called carpet model and the pore-forming model. We report here on the lytic activity of antibiotic peptides from Australian tree frogs, maculatin 1.1, citropin 1.1, and aurein 1.2, on POPC or POPC/POPG model membranes.
In these experiments, the time course of leakage for different molecular weight (water soluble) fluorescent markers incorporated inside of single giant unilamellar vesicles is observed after peptide exposure. We conclude that maculatin and its related peptides demonstrate a pore-forming mechanism (differential leakage of small fluorescent probe compared with high molecular weight markers). Conversely, citropin and aurein provoke a total membrane destabilization with vesicle burst without sequential probe leakage, an effect that can be assigned to a carpeting mechanism of lytic action. Additionally, to study the relevance of the proline residue on the membrane-action properties of maculatin, the same experimental approach was used for maculatin-Ala and maculatin-Gly (Pro-15 was replaced by Ala or Gly, respectively). Although a similar peptide/lipid mol ratio was necessary to induce 50% of leakage for POPC membranes, the lytic activity of maculatin-Ala and maculatin-Gly decreased in POPC/POPG (1:1 mol) membranes compared with that observed for the naturally occurring maculatin sequence. As observed for maculatin, the lytic action of Maculatin-Ala and maculatin-Gly is in keeping with the formation of pore-like structures at the membrane independently of lipid composition.", "which Lead compound ?", "Citropin 1.1", 259.0, 271.0], ["In this work, we investigated the differential interaction of amphiphilic antimicrobial peptides with 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) lipid structures by means of extensive molecular dynamics simulations. By using a coarse-grained (CG) model within the MARTINI force field, we simulated the peptide\u2013lipid system from three different initial configurations: (a) peptides in water in the presence of a pre-equilibrated lipid bilayer; (b) peptides inside the hydrophobic core of the membrane; and (c) random configurations that allow self-assembled molecular structures. This last approach allowed us to sample the structural space of the systems and consider cooperative effects. The peptides used in our simulations are aurein 1.2 and maculatin 1.1, two well-known antimicrobial peptides from the Australian tree frogs, and molecules that present different membrane-perturbing behaviors. Our results showed differential behaviors for each type of peptide seen in a different organization that could guide a molecular interpretation of the experimental data. While both peptides are capable of forming membrane aggregates, the aurein 1.2 ones have a pore-like structure and exhibit a higher level of organization than those conformed by maculatin 1.1. Furthermore, maculatin 1.1 has a strong tendency to form clusters and induce curvature at low peptide\u2013lipid ratios. The exploration of the possible lipid\u2013peptide structures, as the one carried out here, could be a good tool for recognizing specific configurations that should be further studied with more sophisticated methodologies.", "which Lead compound ?", "Maculatin 1.1", 758.0, 771.0], ["Membrane lysis caused by antibiotic peptides is often rationalized by means of two different models: the so-called carpet model and the pore-forming model. We report here on the lytic activity of antibiotic peptides from Australian tree frogs, maculatin 1.1, citropin 1.1, and aurein 1.2, on POPC or POPC/POPG model membranes.
Leakage experiments using fluorescence spectroscopy indicated that the peptide/lipid mol ratio necessary to induce 50% of probe leakage was smaller for maculatin compared with aurein or citropin, regardless of lipid membrane composition. To gain further insight into the lytic mechanism of these peptides we performed single vesicle experiments using confocal fluorescence microscopy. In these experiments, the time course of leakage for different molecular weight (water soluble) fluorescent markers incorporated inside of single giant unilamellar vesicles is observed after peptide exposure. We conclude that maculatin and its related peptides demonstrate a pore-forming mechanism (differential leakage of small fluorescent probe compared with high molecular weight markers). Conversely, citropin and aurein provoke a total membrane destabilization with vesicle burst without sequential probe leakage, an effect that can be assigned to a carpeting mechanism of lytic action. Additionally, to study the relevance of the proline residue on the membrane-action properties of maculatin, the same experimental approach was used for maculatin-Ala and maculatin-Gly (Pro-15 was replaced by Ala or Gly, respectively). Although a similar peptide/lipid mol ratio was necessary to induce 50% of leakage for POPC membranes, the lytic activity of maculatin-Ala and maculatin-Gly decreased in POPC/POPG (1:1 mol) membranes compared with that observed for the naturally occurring maculatin sequence. As observed for maculatin, the lytic action of Maculatin-Ala and maculatin-Gly is in keeping with the formation of pore-like structures at the membrane independently of lipid composition.", "which Lead compound ?", "Maculatin 1.1", 244.0, 257.0], ["Laser flash photolysis at 347 nm of a TiO2 sol containing an adsorbed electron scavenger (Pt or MV2+). Study of the trapped species by absorption spectra. At \u03bbmax = 475 nm, observation of holes h+. Decay rates of h+ in acidic and alkaline solutions. Excess h+ holes. With a TiO2 sol containing a hole scavenger (polyvinyl alcohol or thiocyanate), observation of a spectrum at \u03bbmax = 650 nm attributed to the excess trapped electrons close to the surface of the colloidal particles", "which transient absorption:trapped electrons ?", "650 nm", 383.0, 389.0], ["In this study, we examine the abuse of online social networks at the hands of spammers through the lens of the tools, techniques, and support infrastructure they rely upon. To perform our analysis, we identify over 1.1 million accounts suspended by Twitter for disruptive activities over the course of seven months. In the process, we collect a dataset of 1.8 billion tweets, 80 million of which belong to spam accounts. We use our dataset to characterize the behavior and lifetime of spam accounts, the campaigns they execute, and the wide-spread abuse of legitimate web services such as URL shorteners and free web hosting. We also identify an emerging marketplace of illegitimate programs operated by spammers that include Twitter account sellers, ad-based URL shorteners, and spam affiliate programs that help enable underground market diversification. Our results show that 77% of spam accounts identified by Twitter are suspended within one day of their first tweet. Because of these pressures, less than 9% of accounts form social relationships with regular Twitter users. Instead, 17% of accounts rely on hijacking trends, while 52% of accounts use unsolicited mentions to reach an audience.
In spite of daily account attrition, we show how five spam campaigns controlling 145 thousand accounts combined are able to persist for months at a time, with each campaign enacting a unique spamming strategy. Surprisingly, three of these campaigns send spam directing visitors to reputable store fronts, blurring the line regarding what constitutes spam on social networks.", "which Accuracy/Results ?", "77% of spam accounts identified by Twitter are suspended within one day of their first tweet", 879.0, 971.0], ["Abstract Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants (arguments) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes.", "which number of papers ?", "240 MEDLINE abstracts", 1123.0, 1144.0], ["This paper presents the formal release of MedMentions, a new manually annotated resource for the recognition of biomedical concepts. What distinguishes MedMentions from other annotated biomedical corpora is its size (over 4,000 abstracts and over 350,000 linked mentions), as well as the size of the concept ontology (over 3 million concepts from UMLS 2017) and its broad coverage of biomedical disciplines. In addition to the full corpus, a sub-corpus of MedMentions is also presented, comprising annotations for a subset of UMLS 2017 targeted towards document retrieval.
To encourage research in Biomedical Named Entity Recognition and Linking, data splits for training and testing are included in the release, and a baseline model and its metrics for entity linking are also described.", "which number of papers ?", "over 4,000", 217.0, 227.0], ["We introduce S2ORC, a large corpus of 81.1M English-language academic papers spanning many academic disciplines. The corpus consists of rich metadata, paper abstracts, resolved bibliographic references, as well as structured full text for 8.1M open access papers. Full text is annotated with automatically-detected inline mentions of citations, figures, and tables, each linked to their corresponding paper objects. In S2ORC, we aggregate papers from hundreds of academic publishers and digital archives into a unified source, and create the largest publicly-available collection of machine-readable academic text to date. We hope this resource will facilitate research and development of tools and tasks for text mining over academic text.", "which number of papers ?", "81.1M", 38.0, 43.0], ["Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task\u2014a result that approaches the human inter-annotator agreement (0.8875)\u2014and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system\u2019s ability to return real-time results: the average response times for each team\u2019s DNER and CID web service systems were 5.6 and 9.3 s, respectively. Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/", "which number of papers ?", "1500 PubMed articles", 950.0, 970.0], ["This paper introduces the ACL Reference Dataset for Terminology Extraction and Classification, version 2.0 (ACL RD-TEC 2.0).
The ACL RD-TEC 2.0 has been developed with the aim of providing a benchmark for the evaluation of term and entity recognition tasks based on specialised text from the computational linguistics domain. This release of the corpus consists of 300 abstracts from articles in the ACL Anthology Reference Corpus, published between 1978\u20132006. In these abstracts, terms (i.e., single or multi-word lexical units with a specialised meaning) are manually annotated. In addition to their boundaries in running text, annotated terms are classified into one of the seven categories method, tool, language resource (LR), LR product, model, measures and measurements, and other. To assess the quality of the annotations and to determine the difficulty of this annotation task, more than 171 of the abstracts are annotated twice, independently, by each of the two annotators. In total, 6,818 terms are identified and annotated in more than 1300 sentences, resulting in a specialised vocabulary made of 3,318 lexical forms, mapped to 3,471 concepts. We explain the development of the annotation guidelines and discuss some of the challenges we encountered in this annotation task.", "which number of papers ?", "300 abstracts from articles in the ACL Anthology Reference Corpus, published between 1978\u20132006", 365.0, 459.0], ["Community-run, formal evaluations and manually annotated text corpora are critically important for advancing biomedical text-mining research. Recently in BioCreative V, a new challenge was organized for the tasks of disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. Given the nature of both tasks, a test collection is required to contain both disease/chemical annotations and relation annotations in the same set of articles. Despite previous efforts in biomedical corpus construction, none was found to be sufficient for the task. Thus, we developed our own corpus called BC5CDR during the challenge by inviting a team of Medical Subject Headings (MeSH) indexers for disease/chemical entity annotation and Comparative Toxicogenomics Database (CTD) curators for CID relation annotation. To ensure high annotation quality and productivity, detailed annotation guidelines and automatic annotation tools were provided. The resulting BC5CDR corpus consists of 1500 PubMed articles with 4409 annotated chemicals, 5818 diseases and 3116 chemical-disease interactions. Each entity annotation includes both the mention text spans and normalized concept identifiers, using MeSH as the controlled vocabulary. To ensure accuracy, the entities were first captured independently by two annotators followed by a consensus annotation: The average inter-annotator agreement (IAA) scores were 87.49% and 96.05% for the disease and chemicals, respectively, in the test set according to the Jaccard similarity coefficient. Our corpus was successfully used for the BioCreative V challenge tasks and should serve as a valuable resource for the text-mining research community. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/", "which number of papers ?", "1500", 1003.0, 1007.0], ["A part-of-speech (POS) tagged corpus was built on research abstracts in biomedical domain with the Penn Treebank scheme. As consistent annotation was difficult without domain-specific knowledge we made use of the existing term annotation of the GENIA corpus. 
A list of frequent terms annotated in the GENIA corpus was compiled and the POS of each constituent of those terms were determined with assistance from domain specialists. The POS of the terms in the list are pre-assigned, then a tagger assigns POS to remaining words preserving the pre-assigned POS, whose results are corrected by human annotators. We also modified the PTB scheme slightly. An inter-annotator agreement tested on 50 new abstracts was 98.5%. A POS tagger trained with the annotated abstracts was tested against a gold-standard set made from the interannotator agreement. The untrained tagger had the accuracy of 83.0%. Trained with 2000 annotated abstracts the accuracy rose to 98.2%. The 2000 annotated abstracts are publicly available.", "which number of papers ?", "2000", 908.0, 912.0], ["Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.", "which number of papers ?", "1367", 647.0, 651.0], ["We describe the annotation of chemical named entities in scientific text. A set of annotation guidelines defines 5 types of named entities, and provides instructions for the resolution of special cases. A corpus of fulltext chemistry papers was annotated, with an inter-annotator agreement F score of 93%. An investigation of named entity recognition using LingPipe suggests that F scores of 63% are possible without customisation, and scores of 74% are possible with the addition of custom tokenisation and the use of dictionaries.", "which inter-coder agreement ?", "F score of 93%", NaN, NaN], ["ABSTRACT Contested heritage has increasingly been studied by scholars over the last two decades in multiple disciplines, however, there is still limited knowledge about what contested heritage is and how it is realized in society. Therefore, the purpose of this paper is to produce a systematic literature review on this topic to provide a holistic understanding of contested heritage, and delineate its current state, trends and gaps. Methodologically, four electronic databases were searched, and 102 journal articles published before 2020 were extracted. A content analysis of each article was then conducted to identify key themes and variables for classification.
Findings show that while its research often lacks theoretical underpinnings, contested heritage is marked by its diversity and complexity as it becomes a global issue for both tourism and urbanization. By presenting a holistic understanding of contested heritage, this review offers an extensive investigation of the topic area to help move literature pertaining to contested heritage forward.", "which has sources ?", "102 journal articles", 499.0, 519.0], ["ASTER is an advanced Thermal Emission and Reflection Radiometer, a multispectral sensor, which measures reflected and emitted electromagnetic radiation of earth surface with 14 bands. The present study aims to delineate different rock types in the Sittampundi Anorthositic Complex (SAC), Tamil Nadu using Visible (VIS), near-infrared (NIR) and short wave infrared (SWIR) reflectance data of ASTER 9 band data. We used different band ratioing, band combinations in the VNIR and SWIR region for discriminating lithological boundaries. SAC is also considered as a lunar highland analog rock. Anorthosite is a plagioclase-rich igneous rock with subordinate amounts of pyroxenes, olivine and other minerals. A methodology has been applied to correct the cross talk effect and radiance to reflectance. Principal Component Analysis (PCA) has been realized on the 9 ASTER bands in order to reduce the redundancy information in highly correlated bands. PCA derived FCC results enable the validation and support to demarcate the different lithological boundaries defined on previous geological map. The image derived spectral profiles for anorthosite are compared with the ASTER resampled laboratory spectra, JHU spectral library spectra and Apollo 14 lunar anorthosites spectra. The Spectral Angle Mapping imaging spectroscopy technique has been practiced to classify the ASTER image of the study area and found that, the processing of ASTER remote sensing data set can be used as a powerful tool for mapping the terrestrial Anorthositic regions and similar kind of process could be applied to map the planetary surfaces (E.g. Moon).", "which reference ?", "Apollo 14 lunar anorthosites spectra", 1232.0, 1268.0], ["ASTER is an advanced Thermal Emission and Reflection Radiometer, a multispectral sensor, which measures reflected and emitted electromagnetic radiation of earth surface with 14 bands. The present study aims to delineate different rock types in the Sittampundi Anorthositic Complex (SAC), Tamil Nadu using Visible (VIS), near-infrared (NIR) and short wave infrared (SWIR) reflectance data of ASTER 9 band data. We used different band ratioing, band combinations in the VNIR and SWIR region for discriminating lithological boundaries. SAC is also considered as a lunar highland analog rock. Anorthosite is a plagioclase-rich igneous rock with subordinate amounts of pyroxenes, olivine and other minerals. A methodology has been applied to correct the cross talk effect and radiance to reflectance. Principal Component Analysis (PCA) has been realized on the 9 ASTER bands in order to reduce the redundancy information in highly correlated bands. PCA derived FCC results enable the validation and support to demarcate the different lithological boundaries defined on previous geological map. The image derived spectral profiles for anorthosite are compared with the ASTER resampled laboratory spectra, JHU spectral library spectra and Apollo 14 lunar anorthosites spectra.
The Spectral Angle Mapping imaging spectroscopy technique has been practiced to classify the ASTER image of the study area and found that, the processing of ASTER remote sensing data set can be used as a powerful tool for mapping the terrestrial Anorthositic regions and similar kind of process could be applied to map the planetary surfaces (E.g. Moon).", "which reference ?", "ASTER resampled laboratory spectra", 1163.0, 1197.0], ["ASTER is an advanced Thermal Emission and Reflection Radiometer, a multispectral sensor, which measures reflected and emitted electromagnetic radiation of earth surface with 14 bands. The present study aims to delineate different rock types in the Sittampundi Anorthositic Complex (SAC), Tamil Nadu using Visible (VIS), near-infrared (NIR) and short wave infrared (SWIR) reflectance data of ASTER 9 band data. We used different band ratioing, band combinations in the VNIR and SWIR region for discriminating lithological boundaries. SAC is also considered as a lunar highland analog rock. Anorthosite is a plagioclase-rich igneous rock with subordinate amounts of pyroxenes, olivine and other minerals. A methodology has been applied to correct the cross talk effect and radiance to reflectance. Principal Component Analysis (PCA) has been realized on the 9 ASTER bands in order to reduce the redundancy information in highly correlated bands. PCA derived FCC results enable the validation and support to demarcate the different lithological boundaries defined on previous geological map. The image derived spectral profiles for anorthosite are compared with the ASTER resampled laboratory spectra, JHU spectral library spectra and Apollo 14 lunar anorthosites spectra. The Spectral Angle Mapping imaging spectroscopy technique has been practiced to classify the ASTER image of the study area and found that, the processing of ASTER remote sensing data set can be used as a powerful tool for mapping the terrestrial Anorthositic regions and similar kind of process could be applied to map the planetary surfaces (E.g. Moon).", "which reference ?", "JHU spectral library spectra", 1199.0, 1227.0], ["\n\nThe phenotypic plasticity and the competitive ability of the invasive Acacia longifolia v. the indigenous Mediterranean dune species Halimium halimifolium and Pinus pinea were evaluated. In particular, we explored the hypothesis that phenotypic plasticity in response to biotic and abiotic factors explains the observed differences in competitiveness between invasive and native species. The seedlings\u2019 ability to exploit different resource availabilities was examined in a two factorial experimental design of light and nutrient treatments by analysing 20 physiological and morphological traits. Competitiveness was tested using an additive experimental design in combination with 15N-labelling experiments. Light and nutrient availability had only minor effects on most physiological traits and differences between species were not significant. Plasticity in response to changes in resource availability occurred in morphological and allocation traits, revealing A. longifolia to be a species of intermediate responsiveness. The major competitive advantage of A. longifolia was its constitutively high shoot elongation rate at most resource treatments and its effective nutrient acquisition. Further, A. longifolia was found to be highly tolerant against competition from native species. In contrast to common expectations, the competition experiment indicated that A. 
longifolia expressed a constant allocation pattern and a phenotypic plasticity similar to that of the native species.\n", "which Specific traits ?", "20 physiological and morphological traits", 564.0, 605.0], ["Abstract Interactions between environmental variables in anthropogenically disturbed environments and physiological traits of invasive species may help explain reasons for invasive species' establishment in new areas. Here we analyze how soil contamination along roadsides may influence the establishment of Conium maculatum (poison hemlock) in Cook County, IL, USA. We combine analyses that: (1) characterize the soil and measure concentrations of heavy metals and polycyclic aromatic hydrocarbons (PAHs) where Conium is growing; (2) assess the genetic diversity and structure of individuals among nine known populations; and (3) test for tolerance to heavy metals and evidence for local soil growth advantage with greenhouse establishment experiments. We found elevated levels of metals and PAHs in the soil where Conium was growing. Specifically, arsenic (As), cadmium (Cd), and lead (Pb) were found at elevated levels relative to U.S. EPA ecological contamination thresholds. In a greenhouse study we found that Conium is more tolerant of soils containing heavy metals (As, Cd, Pb) than two native species. For the genetic analysis a total of 217 individuals (approximately 20\u201330 per population) were scored with 5 ISSR primers, yielding 114 variable loci. We found high levels of genetic diversity in all populations but little genetic structure or differentiation among populations. Although Conium shows a general tolerance to contamination, we found few significant associations between genetic diversity metrics and a suite of measured environmental and spatial parameters. Soil contamination is not driving the peculiar spatial distribution of Conium in Cook County, but these findings indicate that Conium is likely establishing in the Chicago region partially due to its ability to tolerate high levels of metal contamination.", "which Specific traits ?", "Tolerance to heavy metals", 640.0, 665.0], ["Background: Phenotypic plasticity and ecotypic differentiation have been suggested as the main mechanisms by which widely distributed species can colonise broad geographic areas with variable and stressful conditions. Some invasive plant species are among the most widely distributed plants worldwide. Plasticity and local adaptation could be the mechanisms for colonising new areas. Aims: We addressed if Taraxacum officinale from native (Alps) and introduced (Andes) stock responded similarly to drought treatment, in terms of photosynthesis, foliar angle, and flowering time. We also evaluated if ontogeny affected fitness and physiological responses to drought. Methods: We carried out two common garden experiments with both seedlings and adults (F2) of T. officinale from its native and introduced ranges in order to evaluate their plasticity and ecotypic differentiation under a drought treatment. Results: Our data suggest that the functional response of T. officinale individuals from the introduced range to drought is the result of local adaptation rather than plasticity. In addition, the individuals from the native distribution range were more sensitive to drought than those from the introduced distribution ranges at both seedling and adult stages. Conclusions: These results suggest that local adaptation may be a possible mechanism underlying the successful invasion of T. 
officinale in high mountain environments of the Andes.", "which Specific traits ?", "Fitness and physiological responses to drought", 618.0, 664.0], ["Abstract How introduced plants, which may be locally adapted to specific climatic conditions in their native range, cope with the new abiotic conditions that they encounter as exotics is not well understood. In particular, it is unclear what role plasticity versus adaptive evolution plays in enabling exotics to persist under new environmental circumstances in the introduced range. We determined the extent to which native and introduced populations of St. John's Wort (Hypericum perforatum) are genetically differentiated with respect to leaf-level morphological and physiological traits that allow plants to tolerate different climatic conditions. In common gardens in Washington and Spain, and in a greenhouse, we examined clinal variation in percent leaf nitrogen and carbon, leaf \u03b413C values (as an integrative measure of water use efficiency), specific leaf area (SLA), root and shoot biomass, root/shoot ratio, total leaf area, and leaf area ratio (LAR). As well, we determined whether native European H. perforatum experienced directional selection on leaf-level traits in the introduced range and we compared, across gardens, levels of plasticity in these traits. In field gardens in both Washington and Spain, native populations formed latitudinal clines in percent leaf N. In the greenhouse, native populations formed latitudinal clines in root and shoot biomass and total leaf area, and in the Washington garden only, native populations also exhibited latitudinal clines in percent leaf C and leaf \u03b413C. Traits that failed to show consistent latitudinal clines instead exhibited significant phenotypic plasticity. Introduced St. John's Wort populations also formed significant or marginally significant latitudinal clines in percent leaf N in Washington and Spain, percent leaf C in Washington, and in root biomass and total leaf area in the greenhouse. In the Washington common garden, there was strong directional selection among European populations for higher percent leaf N and leaf \u03b413C, but no selection on any other measured trait. The presence of convergent, genetically based latitudinal clines between native and introduced H. perforatum, together with previously published molecular data, suggest that native and exotic genotypes have independently adapted to a broad-scale variation in climate that varies with latitude.", "which Specific traits ?", " Leaf-level morphological and physiological traits ", 540.0, 591.0], ["The relative importance of plasticity vs. adaptation for the spread of invasive species has rarely been studied. We examined this question in a clonal population of invasive freshwater snails (Potamopyrgus antipodarum) from the western United States by testing whether observed plasticity in life history traits conferred higher fitness across a range of temperatures. We raised isofemale lines from three populations from different climate regimes (high- and low-elevation rivers and an estuary) in a split-brood, common-garden design in three temperatures. We measured life history and growth traits and calculated population growth rate (as a measure of fitness) using an age-structured projection matrix model. We found a strong effect of temperature on all traits, but no evidence for divergence in the average level of traits among populations. 
Levels of genetic variation and significant reaction norm divergence for life history traits suggested some role for adaptation. Plasticity varied among traits and was lowest for size and reproductive traits compared to age-related traits and fitness. Plasticity in fitness was intermediate, suggesting that invasive populations are not general-purpose genotypes with respect to the range of temperatures studied. Thus, by considering plasticity in fitness and its component traits, we have shown that trait plasticity alone does not yield the same fitness across a relevant set of temperature conditions.", "which Specific traits ?", "Growth", 588.0, 594.0], ["Introduced species must adapt their ecology, behaviour, and morphological traits to new conditions. The successful introduction and invasive potential of a species are related to its levels of phenotypic plasticity and genetic polymorphism. We analysed changes in the body mass and length of American mink (Neovison vison) since its introduction into the Warta Mouth National Park, western Poland, in relation to diet composition and colonization progress from 1996 to 2004. Mink body mass decreased significantly during the period of population establishment within the study area, with an average decrease of 13% from 1.36 to 1.18 kg in males and of 16% from 0.83 to 0.70 kg in females. Diet composition varied seasonally and between consecutive years. The main prey items were mammals and fish in the cold season and birds and fish in the warm season. During the study period the proportion of mammals preyed upon increased in the cold season and decreased in the warm season. The proportion of birds preyed upon decreased over the study period, whereas the proportion of fish increased. Following introduction, the strictly aquatic portion of mink diet (fish and frogs) increased over time, whereas the proportion of large prey (large birds, muskrats, and water voles) decreased. The average yearly proportion of large prey and average-sized prey in the mink diet was significantly correlated with the mean body masses of males and females. Biogeographical variation in the body mass and length of mink was best explained by the percentage of large prey in the mink diet in both sexes, and by latitude for females. Together these results demonstrate that American mink rapidly changed their body mass in relation to local conditions. This phenotypic variability may be underpinned by phenotypic plasticity and/or by adaptation of quantitative genetic variation. The potential to rapidly change phenotypic variation in this manner is an important factor determining the negative ecological impacts of invasive species. \u00a9 2012 The Linnean Society of London, Biological Journal of the Linnean Society, 2012, 105, 681\u2013693.", "which Specific traits ?", "Changes in the body mass and length", 253.0, 288.0], ["Abstract Changes in morphological traits along elevation and latitudinal gradients in ectotherms are often interpreted in terms of the temperature-size rule, which states that the body size of organisms increases under low temperatures, and is therefore expected to increase with elevation and latitude. However other factors like host plant might contribute to spatial patterns in size as well, particularly for polyphagous insects. Here elevation patterns for trait size and shape in two leafminer species are examined, Liriomyza huidobrensis (Blanchard) (Diptera: Agromyzidae) and L. sativae Blanchard, along a tropical elevation gradient in Java, Indonesia. 
Adult leafminers were trapped from different locations in the mountainous area of Dieng in the province of Central Java. To separate environmental versus genetic effects, L. huidobrensis originating from 1378 m and 2129 m ASL were reared in the laboratory for five generations. Size variation along the elevation gradient was only found in L. huidobrensis and this followed expectations based on the temperature-size rule. There were also complex changes in wing shape along the gradient. Morphological differences were influenced by genetic and environmental effects. Findings are discussed within the context of adaptation to different elevations in the two species.", "which Specific traits ?", "Morphological differences", 1151.0, 1176.0], ["The mechanisms underlying successful biological invasions often remain unclear. In the case of the tropical water flea Daphnia lumholtzi, which invaded North America, it has been suggested that this species possesses a high thermal tolerance, which in the course of global climate change promotes its establishment and rapid spread. However, D. lumholtzi has an additional remarkable feature: it is the only water flea that forms rigid head spines in response to chemicals released in the presence of fishes. These morphologically (phenotypically) plastic traits serve as an inducible defence against these predators. Here, we show in controlled mesocosm experiments that the native North American species Daphnia pulicaria is competitively superior to D. lumholtzi in the absence of predators. However, in the presence of fish predation the invasive species formed its defences and became dominant. This observation of a predator-mediated switch in dominance suggests that the inducible defence against fish predation may represent a key adaptation for the invasion success of D. lumholtzi.", "which Specific traits ?", "Defence against fish predation", 988.0, 1018.0], ["Summary In order to evaluate the phenotypic plasticity of introduced pikeperch populations in Tunisia, the intra- and interpopulation differentiation was analysed using a biometric approach. Thus, nine meristic counts and 23 morphological measurements were taken from 574 specimens collected from three dams and a hill lake. The univariate (anova) and multivariate analyses (PCA and DFA) showed a low meristic variability between the pikeperch samples and a segregated pikeperch group from the Sidi Salem dam which displayed a high distance between mouth and pectoral fin and a high antedorsal distance. In addition, the Korba hill lake population seemed to have more important values of total length, eye diameter, maximum body height and a higher distance between mouth and operculum than the other populations. However, the most accentuated segregation was found in the Lebna sample where the individuals were characterized by high snout length, body thickness, pectoral fin length, maximum body height and distance between mouth and operculum. 
This study shows the existence of morphological differentiations between populations derived from a single gene pool that have been isolated in separated sites for several decades although in relatively similar environments.", "which Specific traits ?", "Nine meristic counts and 23 morphological measurements", 197.0, 251.0], ["Invasive alien species might benefit from phenotypic plasticity by being able to (i) maintain fitness in stressful environments (\u2018robust\u2019), (ii) increase fitness in favourable environments (\u2018opportunistic\u2019), or (iii) combine both abilities (\u2018robust and opportunistic\u2019). Here, we applied this framework, for the first time, to an animal, the invasive slug, Arion lusitanicus, and tested (i) whether it has a more adaptive phenotypic plasticity compared with a congeneric native slug, Arion fuscus, and (ii) whether it is robust, opportunistic or both. During one year, we exposed specimens of both species to a range of temperatures along an altitudinal gradient (700\u20132400 m a.s.l.) and to high and low food levels, and we compared the responsiveness of two fitness traits: survival and egg production. During summer, the invasive species had a more adaptive phenotypic plasticity, and at high temperatures and low food levels, it survived better and produced more eggs than A. fuscus, representing the robust phenotype. During winter, A. lusitanicus displayed a less adaptive phenotype than A. fuscus. We show that the framework developed for plants is also very useful for a better mechanistic understanding of animal invasions. Warmer summers and milder winters might lead to an expansion of this invasive species to higher altitudes and enhance its spread in the lowlands, supporting the concern that global climate change will increase biological invasions.", "which Specific traits ?", "Survival and egg production", 773.0, 800.0], ["Luo J & Cardina J (2012). Germination patterns and implications for invasiveness in three Taraxacum (Asteraceae) species. Weed Research 52, 112\u2013121. Summary The ability to germinate across different environments has been considered an important trait of invasive plant species that allows for establishment success in new habitats. Using two alien congener species of Asteraceae \u2013Taraxacum officinale (invasive) and Taraxacum laevigatum laevigatum (non-invasive) \u2013 we tested the hypothesis that invasive species germinate better than non-invasives under various conditions. The germination patterns of Taraxacum brevicorniculatum, a contaminant found in seeds of the crop Taraxacum kok-saghyz, were also investigated to evaluate its invasive potential. In four experiments, we germinated seeds along gradients of alternating temperature, constant temperature (with or without light), water potential and following accelerated ageing. Neither higher nor lower germination per se explained invasion success for the Taraxacum species tested here. At alternating temperature, the invasive T. officinale had higher germination than or similar to the non-invasive T. laevigatum. Contrary to predictions, T. laevigatum exhibited higher germination than T. officinale in environments of darkness, low water potential or after the seeds were exposed to an ageing process. These results suggested a complicated role of germination in the success of T. officinale. Taraxacum brevicorniculatum showed the highest germination among the three species in all environments. 
The invasive potential of this species is thus unclear and will probably depend on its performance at other life stages along environmental gradients.", "which Specific traits ?", " Germination ", 577.0, 590.0], ["Questions: Does plasticity to water availability differ between native and naturalized and laboratory plant accessions? Is there a relationship between morphological plasticity and a fitness measure? Can we account for latitudinal patterns of plasticity with rainfall data from the seed source location? Organism: We examined an array of 23 native, 14 naturalized, and 5 laboratory accessions of Arabidopsis thaliana. Methods: We employed a split-plot experimental design in the greenhouse with two water treatments. We measured morphological and fitness-related traits at various developmental stages. We utilized a published dataset representing 30-year average precipitation trends for each accession origin. Results: We detected evidence of differential patterns of plasticity between native, naturalized, and laboratory populations for several morphological traits. Native, laboratory, and naturalized populations also differed in which traits were positively associated with fitness, and did not follow the Jack-of-all-trades or Master-of-some scenarios. Significant negative relationships were detected for plasticity in morphological traits with latitude. We found modest evidence that rainfall may play a role in this latitudinal trend.", "which Specific traits ?", "Morphological and fitness-related traits", 529.0, 569.0], ["Hybrid energy systems (HESs) generate electricity from multiple energy sources that complement each other. Recently, due to the reduction in costs of photovoltaic (PV) modules and wind turbines, these types of systems have become economically competitive. In this study, a mathematical programming model is applied to evaluate the techno-economic feasibility of autonomous units located in two isolated areas of Ecuador: first, the province of Galapagos (subtropical island) and second, the province of Morona Santiago (Amazonian tropical forest). The two case studies suggest that HESs are potential solutions to reduce the dependence of rural villages on fossil fuels and viable mechanisms to bring electrical power to isolated communities in Ecuador. Our results reveal that not only from the economic but also from the environmental point of view, for the case of the Galapagos province, a hybrid energy system with a PV\u2013wind\u2013battery configuration and a levelized cost of energy (LCOE) equal to 0.36 $/kWh is the optimal energy supply system. For the case of Morona Santiago, a hybrid energy system with a PV\u2013diesel\u2013battery configuration and an LCOE equal to 0.37 $/kWh is the most suitable configuration to meet the load of a typical isolated community in Ecuador. The proposed optimization model can be used as a decision-support tool for evaluating the viability of autonomous HES projects at any other location.", "which Levelized cost of energy ?", "0.36 $/kWh", NaN, NaN], ["The effect of rhizosphere soil or root tissues amendments on the microbial mineralisation of hydrocarbons in soil slurry by the indigenous microbial communities has been investigated. In this study, rhizosphere soil and root tissues of reed canary grass (Phalaris arundinacea), channel grass (Vallisneria spiralis), blackberry (Rubus fructicosus) and goat willow (Salix caprea) were collected from the former Shell and Imperial Industries (ICI) Refinery site in Lancaster, UK. 
The rates and extents of 14C\u2013hydrocarbons (naphthalene, phenanthrene, hexadecane or octacosane) mineralisation in artificially spiked soils were monitored in the absence and presence of 5% (wet weight) of rhizosphere soil or root tissues. Respirometric and microbial assays were monitored in fresh (0 d) and pre\u2013incubated (28 d) artificially spiked soils following amendment with rhizosphere soil or root tissues. There were significant increases (P < 0.001) in the extents of 14C\u2013naphthalene and 14C\u2013phenanthrene mineralisation in fresh artificially spiked soils amended with rhizosphere soil and root tissues compared to those measured in unamended soils. However, amendment of fresh artificially spiked soils with rhizosphere soil and root tissues did not enhance the microbial mineralisation of 14C\u2013hexadecane or 14C\u2013octacosane by indigenous microbial communities. Apart from artificially spiked soil systems containing naphthalene (amended with reed canary grass and channel grass rhizosphere) and hexadecane amended with goat willow rhizosphere, microbial mineralisation of hydrocarbons was further enhanced following 28 d soil\u2013organic contaminants pre\u2013exposure and subsequent amendment with rhizosphere soil or root tissues. This study suggests that organic chemicals in roots and/or rhizosphere can enhance the microbial degradation of petroleum hydrocarbons in freshly contaminated soil by supporting higher numbers of hydrocarbon\u2013degrading populations, promoting microbial activity and/or enhancing bioavailability of organic contaminants.", "which Result and Conclusion ?", "There were significant increases (P < 0.001) in the extents of 14C\u2013naphthalene and 14C\u2013phenanthrene mineralisation in fresh artificially spiked soils amended with rhizosphere soil and root tissues compared to those measured in unamended soils. However, amendment of fresh artificially spiked soils with rhizosphere soil and root tissues did not enhance the microbial mineralisation of 14C\u2013hexadecane or 14C\u2013octacosane by indigenous microbial communities. Apart from artificially spiked soil systems containing naphthalene (amended with reed canary grass and channel grass rhizosphere) and hexadecane amended with goat willow rhizosphere, microbial mineralisation of hydrocarbons was further enhanced following 28 d soil\u2013organic contaminants pre\u2013exposure and subsequent amendment with rhizosphere soil or root tissues.", NaN, NaN], ["This study investigates the distribution of cadmium and lead concentrations in the outcrop rock samples collected from Abakaliki anticlinorium in the Southern Benue Trough, Nigeria. The outcrop rock samples from seven sampling locations were air\u2013dried for seventy\u2013two hours, homogenized by grinding and passed through a < 63 micron mesh sieve. The ground and homogenized rock samples were pulverized and analyzed for cadmium and lead using an X-Ray Fluorescence Spectrometer. The concentrations of heavy metals in the outcrop rock samples ranged from < 0.10 \u2013 7.95 mg kg\u20131 for cadmium (Cd) and < 1.00 \u2013 4966.00 mg kg\u20131 for lead (Pb). Apart from an anomalous concentration measured in Afikpo Shale (Middle Segment), the results obtained revealed that rock samples from all the sampling locations yielded cadmium concentrations of < 0.10 mg kg\u20131 and the measured concentrations were below the average crustal abundance of 0.50 mg kg\u20131.
Although background concentration of <1.00 \u00b1 0.02 mg kg\u20131 was measured in Abakaliki Shale, rock samples from all the sampling locations revealed anomalous lead concentrations above average crustal abundance of 30 mg kg\u20131. The results obtained reveal important contributions towards understanding of heavy metal distribution patterns and provide baseline data that can be used for potential identification of areas at risk associated with natural sources of heavy metals contamination in the region. The use of outcrop rocks provides a cost\u2013effective approach for monitoring regional heavy metal contamination associated with dissolution and/or weathering of rocks or parent materials. Evaluation of heavy metals may be effectively used in large scale regional pollution monitoring of soil, groundwater, atmospheric and marine environment. Therefore, monitoring of heavy metal concentrations in soils, groundwater and atmospheric environment is imperative in order to prevent bioaccumulation in various ecological receptors.", "which Discussion ?", "Apart from an anomalous concentration measured in Afikpo Shale (Middle Segment), the results obtained revealed that rock samples from all the sampling locations yielded cadmium concentrations of < 0.10 mg kg\u20131 and the measured concentrations were below the average crustal abundance of 0.50 mg kg\u20131. Although background concentration of <1.00 \u00b1 0.02 mg kg\u20131 was measured in Abakaliki Shale, rock samples from all the sampling locations revealed anomalous lead concentrations above average crustal abundance of 30 mg kg\u20131. ", NaN, NaN], ["Textural characterization and heavy mineral studies of beach sediments in Ibeno and Eastern Obolo Local Government Areas of Akwa Ibom State were carried out in the present study. The main aim was to infer their provenance, transport history and environment of deposition. Sediment samples were collected at the water\u2013sediment contact along the shoreline at an interval of about 3m. Ten samples were collected from study location 1 (Ibeno Beach) and twelve samples were collected from study location 2 (Eastern Obolo Beach). A total of twenty\u2013two samples were collected from the field and brought to the laboratory for textural and compositional analyses. The results showed that the value of graphic mean size ranged from 1.70\u0424 to 2.83\u0424, sorting values ranged from 0.39\u0424 \u2013 0.60\u0424, skewness values ranged from -0.02 to 0.10 while kurtosis values ranged from 1.02 to 2.46, indicating medium to fine grained and well sorted sediments. This suggested that the sediments have been transported far from their source. Longshore current and onshore\u2013offshore movements of sediment are primarily responsible in sorting of the heavy minerals. The histogram charts for the different samples and standard deviation versus skewness indicated a beach environment of deposition. This implies that the sediments are dominated by one class of grain size; a phenomenon characteristic of beach environments. The heavy mineral assemblages identified in this research work were rutile, zircon, tourmaline, hornblende, apatite, diopside, glauconite, pumpellyite, cassiterite, epidote, garnet, augite, enstatite, andalusite and opaque minerals. The zircon-tourmaline-rutile (ZTR) index ranged from 47.30% to 87.00% with most of the samples showing a ZTR index greater than 50%. These indicated that the sediments were mineralogically sub-mature and have been transported far from their source. 
The heavy minerals identified are indicative of being products of reworked sediments of both metamorphic (high rank) and igneous (both mafic and sialic) origin probably derived from the basement rocks of the Oban Massif as well as reworked sediments of the Benue Trough. Therefore, findings from the present study indicated that erosion, accretion, and stability of beaches are controlled by strong hydrodynamic and hydraulic processes.", "which Discussion ?", "This suggested that the sediments have been transported far from their source. Longshore current and onshore\u2013offshore movements of sediment are primarily responsible in sorting of the heavy minerals. The histogram charts for the different samples and standard deviation versus skewness indicated a beach environment of deposition. This implies that the sediments are dominated by one class of grain size; a phenomenon characteristic of beach environments. The heavy mineral assemblages identified in this research work were rutile, zircon, tourmaline, hornblende, apatite, diopside, glauconite, pumpellyite, cassiterite, epidote, garnet, augite, enstatite, andalusite and opaque minerals. The zircon-tourmaline-rutile (ZTR) index ranged from 47.30% to 87.00% with most of the samples showing a ZTR index greater than 50%. These indicated that the sediments were mineralogically sub-mature and have been transported far from their source. The heavy minerals identified are indicative of being products of reworked sediments of both metamorphic (high rank) and igneous (both mafic and sialic) origin probably derived from the basement rocks of the Oban Massif as well as reworked sediments of the Benue Trough.", NaN, NaN], ["Iodine deficiency disorders (IDD) has been a major global public health problem threatening more than 2 billion people worldwide. Considering various human health implications associated with iodine deficiency, universal salt iodization programme has been recognized as one of the best methods of preventing iodine deficiency disorder and iodizing table salt is currently done in many countries. In this study, comparative assessment of iodine content of commercially available table salt brands in Nigerian market were investigated and iodine content were measured in ten table salt brands samples using iodometric titration. The iodine content ranged from 14.80 mg/kg \u2013 16.90 mg/kg with mean value of 15.90 mg/kg for Sea salt; 24.30 mg/kg \u2013 25.40 mg/kg with mean value of 24.60 mg/kg for Dangote salt (blue sachet); 22.10 mg/kg \u2013 23.10 mg/kg with mean value of 22.40 mg/kg for Dangote salt (red sachet); 23.30 mg/kg \u2013 24.30 mg/kg with mean value of 23.60 mg/kg for Mr Chef salt; 23.30 mg/kg \u2013 24.30 mg/kg with mean value of 23.60 mg/kg for Annapurna; 26.80 mg/kg \u2013 27.50 mg/kg with mean value of 27.20mg/kg for Uncle Palm salt; 23.30 mg/kg \u2013 29.60 mg/kg with mean content of 26.40 mg/kg for Dangote (bag); 25.40 mg/kg \u2013 26.50 mg/kg with mean value of 26.50 mg/kg for Royal salt; 36.80 mg/kg \u2013 37.20 mg/kg with mean iodine content of 37.0 mg/kg for Abakaliki refined salt, and 30.07 mg/kg \u2013 31.20 mg/kg with mean value of 31.00 mg/kg for Ikom refined salt. The mean iodine content measured in the Sea salt brand (15.70 mg/kg) was significantly P < 0.01 lower compared to those measured in other table salt brands. 
Although the iodine content of Abakaliki and Ikom refined salt exceeds the recommended value, it is clear that only the Sea salt brand falls below the World Health Organization (WHO) recommended value (20 \u2013 30 mg/kg), while the remaining table salt samples are just within the range. The results obtained have revealed that 70 % of the table salt brands were adequately iodized while 30 % of the table salt brands were not adequately iodized and provided baseline data that can be used for potential identification of human health risks associated with inadequate and/or excess iodine content in table salt brands consumed in households in Nigeria.", "which Discussion ?", "The mean iodine content measured in the Sea salt brand (15.70 mg/kg) was significantly P < 0.01 lower compared to those measured in other table salt brands. Although the iodine content of Abakaliki and Ikom refined salt exceeds the recommended value, it is clear that only the Sea salt brand falls below the World Health Organization (WHO) recommended value (20 \u2013 30 mg/kg), while the remaining table salt samples are just within the range. ", NaN, NaN], ["Land-use classification is essential for urban planning. Urban land-use types can be differentiated either by their physical characteristics (such as reflectivity and texture) or social functions. Remote sensing techniques have been recognized as a vital method for urban land-use classification because of their ability to capture the physical characteristics of land use. Although significant progress has been achieved in remote sensing methods designed for urban land-use classification, most techniques focus on physical characteristics, whereas knowledge of social functions is not adequately used. Owing to the wide usage of mobile phones, the activities of residents, which can be retrieved from the mobile phone data, can be determined in order to indicate the social function of land use. This could bring about the opportunity to derive land-use information from mobile phone data. To verify the application of this new data source to urban land-use classification, we first construct a vector of aggregated mobile phone data to characterize land-use types. This vector is composed of two aspects: the normalized hourly call volume and the total call volume. A semi-supervised fuzzy c-means clustering approach is then applied to infer the land-use types. The method is validated using mobile phone data collected in Singapore. Land use is determined with a detection rate of 58.03%. An analysis of the land-use classification results shows that the detection rate decreases as the heterogeneity of land use increases, and increases as the density of cell phone towers increases.", "which Has result ?", "The detection rate decreases as the heterogeneity of land use increases, and increases as the density of cell phone towers increases", 1457.0, 1589.0], ["\nPurpose\nIn terms of entrepreneurship, open data benefits include economic growth, innovation, empowerment and new or improved products and services. Hackathons encourage the development of new applications using open data and the creation of startups based on these applications. Researchers focus on factors that affect nascent entrepreneurs\u2019 decision to create a startup but researches in the field of open data hackathons have not been fully investigated yet. 
This paper aims to suggest a model that incorporates factors that affect the decision of establishing a startup by developers who have participated in open data hackathons.\n\n\nDesign/methodology/approach\nIn total, 70 papers were examined and analyzed using a three-phased literature review methodology, which was suggested by Webster and Watson (2002). These surveys investigated several factors that affect a nascent entrepreneur to create a startup.\n\n\nFindings\nEventually, by identifying the motivations for developers to participate in a hackathon, and understanding the benefits of the use of open data, researchers will be able to elaborate the proposed model and evaluate if the contest has contributed to the decision of establish a startup and what factors affect the decision to establish a startup apply to open data developers, and if the participants of the contest agree with these factors.\n\n\nOriginality/value\nThe paper expands the scope of open data research on entrepreneurship field, stating the need for more research to be conducted regarding the open data in entrepreneurship through hackathons.\n", "which Has result ?", "model", 571.0, 576.0], ["Textural characterization and heavy mineral studies of beach sediments in Ibeno and Eastern Obolo Local Government Areas of Akwa Ibom State were carried out in the present study. The main aim was to infer their provenance, transport history and environment of deposition. Sediment samples were collected at the water\u2013sediment contact along the shoreline at an interval of about 3m. Ten samples were collected from study location 1 (Ibeno Beach) and twelve samples were collected from study location 2 (Eastern Obolo Beach). A total of twenty\u2013two samples were collected from the field and brought to the laboratory for textural and compositional analyses. The results showed that the value of graphic mean size ranged from 1.70\u0424 to 2.83\u0424, sorting values ranged from 0.39\u0424 \u2013 0.60\u0424, skewness values ranged from -0.02 to 0.10 while kurtosis values ranged from 1.02 to 2.46, indicating medium to fine grained and well sorted sediments. This suggested that the sediments have been transported far from their source. Longshore current and onshore\u2013offshore movements of sediment are primarily responsible in sorting of the heavy minerals. The histogram charts for the different samples and standard deviation versus skewness indicated a beach environment of deposition. This implies that the sediments are dominated by one class of grain size; a phenomenon characteristic of beach environments. The heavy mineral assemblages identified in this research work were rutile, zircon, tourmaline, hornblende, apatite, diopside, glauconite, pumpellyite, cassiterite, epidote, garnet, augite, enstatite, andalusite and opaque minerals. The zircon-tourmaline-rutile (ZTR) index ranged from 47.30% to 87.00% with most of the samples showing a ZTR index greater than 50%. These indicated that the sediments were mineralogically sub-mature and have been transported far from their source. The heavy minerals identified are indicative of being products of reworked sediments of both metamorphic (high rank) and igneous (both mafic and sialic) origin probably derived from the basement rocks of the Oban Massif as well as reworked sediments of the Benue Trough. 
Therefore, findings from the present study indicated that erosion, accretion, and stability of beaches are controlled by strong hydrodynamic and hydraulic processes.", "which Has result ?", "The results showed that the value of graphic mean size ranged from 1.70\u0424 to 2.83\u0424, sorting values ranged from 0.39\u0424 \u2013 0.60\u0424, skewness values ranged from -0.02 to 0.10 while kurtosis values ranged from 1.02 to 2.46, indicating medium to fine grained and well sorted sediments.", NaN, NaN], ["Continuous health monitoring is a hopeful solution that can efficiently provide health-related services to elderly people suffering from chronic diseases. The emergence of the Internet of Things (IoT) technologies have led to their adoption in the development of new healthcare systems for efficient healthcare monitoring, diagnosis and treatment. This paper presents a healthcare-IoT based system where an ontology is proposed to provide semantic interoperability among heterogeneous devices and users in healthcare domain. Our work consists on integrating existing ontologies related to health, IoT domain and time, instantiating classes, and establishing reasoning rules. The model created has been validated by semantic querying. The results show the feasibility and efficiency of the proposed ontology and its capability to grow into a more understanding and specialized ontology for health monitoring and treatment.", "which Has result ?", "show the feasibility and efficiency of the proposed ontology", 746.0, 806.0], ["Abstract Presently, analytics degree programs exhibit a growing trend to meet a strong market demand. To explore the skill sets required for analytics positions, the authors examined a sample of online job postings related to professions such as business analyst (BA), business intelligence analyst (BIA), data analyst (DA), and data scientist (DS) using content analysis. They present a ranked list of relevant skills belonging to specific skills categories for the studied positions. Also, they conducted a pairwise comparison between DA and DS as well as BA and BIA. Overall, the authors observed that decision making, organization, communication, and structured data management are key to all job categories. The analysis shows that technical skills like statistics and programming skills are in most demand for DAs. The analysis is useful for creating clear definitions with respect to required skills for job categories in the business and data analytics domain and for designing course curricula for this domain.", "which Has result ?", "clear definitions with respect to required skills for job categories in the business and data analytics domain", 857.0, 967.0], ["In many data-centric semantic web applications, it is desirable to use OWL to encode the Integrity Constraints (IC) that must be satisfied by instance data. However, challenges arise due to the Open World Assumption (OWA) and the lack of a Unique Name Assumption (UNA) in OWL's standard semantics. In particular, conditions that trigger constraint violations in systems using the Closed World Assumption (CWA), will generate new inferences in standard OWL-based reasoning applications. In this paper, we present an alternative IC semantics for OWL that allows applications to work with the CWA and the weak UNA. Ontology modelers can choose which OWL axioms to be interpreted with our IC semantics. Thus application developers are able to combine open world reasoning with closed world constraint validation in a flexible way. 
We also show that IC validation can be reduced to query answering under certain conditions. Finally, we describe our prototype implementation based on the OWL reasoner Pellet.", "which Has result ?", "IC validation can be reduced to query answering", 845.0, 892.0], ["Internet of Things (IoT) covers a variety of applications including the Healthcare field. Consequently, medical objects become connected to each other with the purpose to share and exchange health data. These medical connected objects raise issues on how to ensure the analysis, interpretation and semantic interoperability of the extensive obtained health data with the purpose to make an appropriate decision. This paper proposes a HealthIoT ontology for representing the semantic interoperability of the medical connected objects and their data; while an algorithm alleviates the analysis of the detected vital signs and the decision-making of the doctor. The execution of this algorithm needs the definition of several SWRL rules (Semantic Web Rule Language).", "which Has result ?", "make an appropriate decision", 382.0, 410.0], ["In order to improve the signal-to-noise ratio of the hyperspectral sensors and exploit the potential of satellite hyperspectral data for predicting soil properties, we took MingShui County as the study area, which the study area is approximately 1481 km2, and we selected Gaofen-5 (GF-5) satellite hyperspectral image of the study area to explore an applicable and accurate denoising method that can effectively improve the prediction accuracy of soil organic matter (SOM) content. First, fractional-order derivative (FOD) processing is performed on the original reflectance (OR) to evaluate the optimal FOD. Second, singular value decomposition (SVD), Fourier transform (FT) and discrete wavelet transform (DWT) are used to denoise the OR and optimal FOD reflectance. Third, the spectral indexes of the reflectance under different denoising methods are extracted by optimal band combination algorithm, and the input variables of different denoising methods are selected by the recursive feature elimination (RFE) algorithm. Finally, the SOM content is predicted by a random forest prediction model. The results reveal that 0.6-order reflectance describes more useful details in satellite hyperspectral data. Five spectral indexes extracted from the reflectance under different denoising methods have a strong correlation with the SOM content, which is helpful for realizing high-accuracy SOM predictions. All three denoising methods can reduce the noise in hyperspectral data, and the accuracies of the different denoising methods are ranked DWT > FT > SVD, where 0.6-order-DWT has the highest accuracy (R2 = 0.84, RMSE = 3.36 g kg\u22121, and RPIQ = 1.71). This paper is relatively novel, in that GF-5 satellite hyperspectral data based on different denoising methods are used to predict SOM, and the results provide a highly robust and novel method for mapping the spatial distribution of SOM content at the regional scale.", "which Has result ?", "DWT > FT > SVD", 1543.0, 1557.0], ["We demonstrate that an unlexicalized PCFG can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar. Indeed, its performance of 86.36% (LP/LR F1) is better than that of early lexicalized PCFG models, and surprisingly close to the current state-of-the-art. This result has potential uses beyond establishing a strong lower bound on the maximum possible accuracy of unlexicalized models: an unlexicalized PCFG is much more compact, easier to replicate, and easier to interpret than more complex lexical models, and the parsing algorithms are simpler, more widely understood, of lower asymptotic complexity, and easier to optimize.", "which Has result ?", "86.36% (LP/LR F1)", NaN, NaN], ["Televised public service announcements are video ads that are a key component of public health campaigns against smoking. Understanding the neurophysiological correlates of anti-tobacco ads is an important step toward novel objective methods of their evaluation and design. In the present study, we used functional magnetic resonance imaging (fMRI) to investigate the brain and behavioral effects of the interaction between content (\u201cargument strength,\u201d AS) and format (\u201cmessage sensation value,\u201d MSV) of anti-smoking ads in humans. Seventy-one nontreatment-seeking smokers viewed a sequence of 16 high or 16 low AS ads during an fMRI scan. Dependent variables were brain fMRI signal, the immediate recall of the ads, the immediate change in intentions to quit smoking, and the urine levels of a major nicotine metabolite cotinine at a 1 month follow-up. Whole-brain ANOVA revealed that AS and MSV interacted in the inferior frontal, inferior parietal, and fusiform gyri; the precuneus; and the dorsomedial prefrontal cortex (dMPFC). Regression analysis showed that the activation in the dMPFC predicted the urine cotinine levels 1 month later. These results characterize the key brain regions engaged in the processing of persuasive communications and suggest that brain fMRI response to anti-smoking ads could predict subsequent smoking severity in nontreatment-seeking smokers. 
Our findings demonstrate the importance of the quality of content for objective ad outcomes and suggest that fMRI investigation may aid the prerelease evaluation of televised public health ads.", "which Has result ?", "AS and MSV interacted in the inferior frontal, inferior parietal, and fusiform gyri; the precuneus; and the dorsomedial prefrontal cortex", 887.0, 1024.0], ["ABSTRACT \u2018Heritage Interpretation\u2019 has always been considered as an effective learning, communication and management tool that increases visitors\u2019 awareness of and empathy to heritage sites or artefacts. Yet the definition of \u2018digital heritage interpretation\u2019 is still wide and so far, no significant method and objective are evident within the domain of \u2018digital heritage\u2019 theory and discourse. Considering \u2018digital heritage interpretation\u2019 as a process rather than as a tool to present or communicate with end-users, this paper presents a critical application of a theoretical construct ascertained from multiple disciplines and explicates four objectives for a comprehensive interpretive process. A conceptual model is proposed and further developed into a conceptual framework with fifteen considerations. This framework is then implemented and tested on an online platform to assess its impact on end-users\u2019 interpretation level. We believe the presented interpretive framework (PrEDiC) will help heritage professionals and media designers to develop interpretive heritage project.", "which Has result ?", "interpretive framework (PrEDiC) ", NaN, NaN], ["Objective \u2013 Evidence from systematic reviews a decade ago suggested that face-to-face and online methods to provide information literacy training in universities were equally effective in terms of skills learnt, but there was a lack of robust comparative research. The objectives of this review were (1) to update these findings with the inclusion of more recent primary research; (2) to further enhance the summary of existing evidence by including studies of blended formats (with components of both online and face-to-face teaching) compared to single format education; and (3) to explore student views on the various formats employed. Methods \u2013 Authors searched seven databases along with a range of supplementary search methods to identify comparative research studies, dated January 1995 to October 2016, exploring skill outcomes for students enrolled in higher education programs. There were 33 studies included, of which 19 also contained comparative data on student views. Where feasible, meta-analyses were carried out to provide summary estimates of skills development and a thematic analysis was completed to identify student views across the different formats. Results \u2013 A large majority of studies (27 of 33; 82%) found no statistically significant difference between formats in skills outcomes for students. Of 13 studies that could be included in a meta-analysis, the standardized mean difference (SMD) between skill test results for face-to-face versus online formats was -0.01 (95% confidence interval -0.28 to 0.26). Of ten studies comparing blended to single delivery format, seven (70%) found no statistically significant difference between formats, and the remaining studies had mixed outcomes. From the limited evidence available across all studies, there is a potential dichotomy between outcomes measured via skill test and assignment (course work) which is worthy of further investigation. 
The thematic analysis of student views found no preference in relation to format on a range of measures in 14 of 19 studies (74%). The remainder identified that students perceived advantages and disadvantages for each format but had no overall preference. Conclusions \u2013 There is compelling evidence that information literacy training is effective and well received across a range of delivery formats. Further research looking at blended versus single format methods, and the time implications for each, as well as comparing assignment to skill test outcomes would be valuable. Future studies should adopt a methodologically robust design (such as the randomized controlled trial) with a large student population and validated outcome measures.", "which Has result ?", "no preference in relation to format", 1961.0, 1996.0], ["In this article, we share our experiences of using digital technologies and various media to present historical narratives of a museum object collection aiming to provide an engaging experience on multiple platforms. Based on P. Joseph\u2019s article, Dawson presented multiple interpretations and historical views of the Markham car collection across various platforms using multimedia resources. Through her creative production, she explored how to use cylindrical panoramas and rich media to offer new ways of telling the controversial story of the contested heritage of a museum\u2019s veteran and vintage car collection. The production\u2019s usability was investigated involving five experts before it was published online and the general users\u2019 experience was investigated. In this article, we present an important component of findings which indicates that virtual panorama tours featuring multimedia elements could be successful in attracting new audiences and that using this type of storytelling technique can be effective in the museum sector. The storyteller panorama tour presented here may stimulate GLAM (galleries, libraries, archives, and museums) professionals to think of new approaches, implement new strategies or services to engage their audiences more effectively. The research may ameliorate the education of future professionals as well.", "which Has result ?", "storyteller panorama tour", 1045.0, 1070.0], ["Infections encompass a set of medical conditions of very diverse kinds that can pose a significant risk to health, and even death. As with many other diseases, early diagnosis can help to provide patients with proper care to minimize the damage produced by the disease, or to isolate them to avoid the risk of spread. In this context, computational intelligence can be useful to predict the risk of infection in patients, raising early alarms that can aid medical teams to respond as quick as possible. In this paper, we survey the state of the art on infection prediction using computer science by means of a systematic literature review. The objective is to find papers where computational intelligence is used to predict infections in patients using physiological data as features. We have posed one major research question along with nine specific subquestions. The whole review process is thoroughly described, and eight databases are considered which index most of the literature published in different scholarly formats. 
A total of 101 relevant documents have been found in the period comprised between 2003 and 2019, and a detailed study of these documents is carried out to classify the works and answer the research questions posed, resulting to our best knowledge in the most comprehensive study of its kind. We conclude that the most widely addressed infection is by far sepsis, followed by Clostridium difficile infection and surgical site infections. Most works use machine learning techniques, from which logistic regression, support vector machines, random forest and naive Bayes are the most common. Some machine learning works provide some ideas on the problems of small data and class imbalance, which can be of interest. The current systematic literature review shows that automatic diagnosis of infectious diseases using computational intelligence is well documented in the medical literature.", "which Has result ?", "Machine learning techniques", 1480.0, 1507.0], ["Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and \u201cdisorder\u201d and \u201ctransmembrane\u201d analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. 
Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein\u2014binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.", "which Has result ?", "episodic selective pressure", 1658.0, 1685.0], ["Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and \u201cdisorder\u201d and \u201ctransmembrane\u201d analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. 
Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein\u2014binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.", "which Has result ?", "virus evolution", 1776.0, 1791.0], ["A statistical\u2010dynamical downscaling method is used to estimate future changes of wind energy output (Eout) of a benchmark wind turbine across Europe at the regional scale. With this aim, 22 global climate models (GCMs) of the Coupled Model Intercomparison Project Phase 5 (CMIP5) ensemble are considered. The downscaling method uses circulation weather types and regional climate modelling with the COSMO\u2010CLM model. Future projections are computed for two time periods (2021\u20132060 and 2061\u20132100) following two scenarios (RCP4.5 and RCP8.5). The CMIP5 ensemble mean response reveals a more likely than not increase of mean annual Eout over Northern and Central Europe and a likely decrease over Southern Europe. There is some uncertainty with respect to the magnitude and the sign of the changes. Higher robustness in future changes is observed for specific seasons. Except from the Mediterranean area, an ensemble mean increase of Eout is simulated for winter and a decreasing for the summer season, resulting in a strong increase of the intra\u2010annual variability for most of Europe. The latter is, in particular, probable during the second half of the 21st century under the RCP8.5 scenario. In general, signals are stronger for 2061\u20132100 compared to 2021\u20132060 and for RCP8.5 compared to RCP4.5. Regarding changes of the inter\u2010annual variability of Eout for Central Europe, the future projections strongly vary between individual models and also between future periods and scenarios within single models. This study showed for an ensemble of 22 CMIP5 models that changes in the wind energy potentials over Europe may take place in future decades. However, due to the uncertainties detected in this research, further investigations with multi\u2010model ensembles are needed to provide a better quantification and understanding of the future changes.", "which Has result ?", "changes in the wind energy potentials over Europe may take place in future decades.", NaN, NaN], ["AbstractLanguage and literary studies have studied style for centuries, and even since the advent of \u203astylistics\u2039 as a discipline at the beginning of the twentieth century, definitions of \u203astyle\u2039 have varied heavily across time, space and fields. Today, with increasingly large collections of literary texts being made available in digital form, computational approaches to literary style are proliferating. New methods from disciplines such as corpus linguistics and computer science are being adopted and adapted in interrelated fields such as computational stylistics and corpus stylistics, and are facilitating new approaches to literary style.The relation between definitions of style in established linguistic or literary stylistics, and definitions of style in computational or corpus stylistics has not, however, been systematically assessed. 
This contribution aims to respond to the need to redefine style in the light of this new situation and to establish a clearer perception of both the overlap and the boundaries between \u203amainstream\u2039 and \u203acomputational\u2039 and/or \u203aempirical\u2039 literary stylistics. While stylistic studies of non-literary texts are currently flourishing, our contribution deliberately centers on those approaches relevant to \u203aliterary stylistics\u2039. It concludes by proposing an operational definition of style that we hope can act as a common ground for diverse approaches to literary style, fostering transdisciplinary research. The focus of this contribution is on literary style in linguistics and literary studies (rather than in art history, musicology or fashion), on textual aspects of style (rather than production- or reception-oriented theories of style), and on a descriptive perspective (rather than a prescriptive or didactic one). Even within these limits, however, it appears necessary to build on a broad understanding of the various perspectives on style that have been adopted at different times and in different traditions. For this reason, the contribution first traces the development of the notion of style in three different traditions, those of German, Dutch and French language and literary studies. Despite the numerous links between each other, and between each of them to the British and American traditions, these three traditions each have their proper dynamics, especially with regard to the convergence and/or confrontation between mainstream and computational stylistics. For reasons of space and coherence, the contribution is limited to theoretical developments occurring since 1945. The contribution begins by briefly outlining the range of definitions of style that can be encountered across traditions today: style as revealing a higher-order aesthetic value, as the holistic \u203agestalt\u2039 of single texts, as an expression of the individuality of an author, as an artifact presupposing choice among alternatives, as a deviation from a norm or reference, or as any formal property of a text. The contribution then traces the development of definitions of style in each of the three traditions mentioned, with the aim of giving a concise account of how, in each tradition, definitions of style have evolved over time, with special regard to the way such definitions relate to empirical, quantitative or otherwise computational approaches to style in literary texts. It will become apparent how, in each of the three traditions, foundational texts continue to influence current discussions on literary style, but also how stylistics has continuously reacted to broader developments in cultural and literary theory, and how empirical, quantitative or computational approaches have long existed, usually in parallel to or at the margins of mainstream stylistics. The review will also reflect the lines of discussion around style as a property of literary texts \u2013 or of any textual entity in general. The perspective on three stylistic traditions is accompanied by a more systematic perspective. The rationale is to work towards a common ground for literary scholars and linguists when talking about (literary) style, across traditions of stylistics, with respect for established definitions of style, but also in light of the digital paradigm. 
Here, we first show to what extent, at similar or different moments in time, the three traditions have developed comparable positions on style, and which definitions out of the range of possible definitions have been proposed or promoted by which authors in each of the three traditions. On the basis of this synthesis, we then conclude by proposing an operational definition of style that is an attempt to provide a common ground for both mainstream and computational literary stylistics. This definition is discussed in some detail in order to explain not only what is meant by each term in the definition, but also how it relates to computational analyses of style \u2013 and how this definition aims to avoid some of the pitfalls that can be perceived in earlier definitions of style. Our definition, we hope, will be put to use by a new generation of computational, quantitative, and empirical studies of style in literary texts.", "which Has result ?", "Definition of style", 1365.0, 1384.0], ["During global health crises, such as the recent H1N1 pandemic, the mass media provide the public with timely information regarding risk. To obtain new insights into how these messages are received, we measured neural data while participants, who differed in their preexisting H1N1 risk perceptions, viewed a TV report about H1N1. Intersubject correlation (ISC) of neural time courses was used to assess how similarly the brains of viewers responded to the TV report. We found enhanced intersubject correlations among viewers with high-risk perception in the anterior cingulate, a region which classical fMRI studies associated with the appraisal of threatening information. By contrast, neural coupling in sensory-perceptual regions was similar for the high and low H1N1-risk perception groups. These results demonstrate a novel methodology for understanding how real-life health messages are processed in the human brain, with particular emphasis on the role of emotion and differences in risk perceptions.", "which Has result ?", "enhanced intersubject correlations among viewers with high-risk perception in the anterior cingulate", 476.0, 576.0], ["Abstract While thousands of ontologies exist on the web, a unified system for handling online ontologies \u2013 in particular with respect to discovery, versioning, access, quality-control, mappings \u2013 has not yet surfaced and users of ontologies struggle with many challenges. In this paper, we present an online ontology interface and augmented archive called DBpedia Archivo, that discovers, crawls, versions and archives ontologies on the DBpedia Databus. Based on this versioned crawl, different features, quality measures and, if possible, fixes are deployed to handle and stabilize the changes in the found ontologies at web-scale. A comparison to existing approaches and ontology repositories is given.", "which Has result ?", "DBpedia Archivo", 356.0, 371.0], ["Iodine deficiency disorders (IDD) has been a major global public health problem threatening more than 2 billion people worldwide. Considering various human health implications associated with iodine deficiency, universal salt iodization programme has been recognized as one of the best methods of preventing iodine deficiency disorder and iodizing table salt is currently done in many countries. 
In this study, comparative assessment of iodine content of commercially available table salt brands in Nigerian market were investigated and iodine content were measured in ten table salt brands samples using iodometric titration. The iodine content ranged from 14.80 mg/kg \u2013 16.90 mg/kg with mean value of 15.90 mg/kg for Sea salt; 24.30 mg/kg \u2013 25.40 mg/kg with mean value of 24.60 mg/kg for Dangote salt (blue sachet); 22.10 mg/kg \u2013 23.10 mg/kg with mean value of 22.40 mg/kg for Dangote salt (red sachet); 23.30 mg/kg \u2013 24.30 mg/kg with mean value of 23.60 mg/kg for Mr Chef salt; 23.30 mg/kg \u2013 24.30 mg/kg with mean value of 23.60 mg/kg for Annapurna; 26.80 mg/kg \u2013 27.50 mg/kg with mean value of 27.20mg/kg for Uncle Palm salt; 23.30 mg/kg \u2013 29.60 mg/kg with mean content of 26.40 mg/kg for Dangote (bag); 25.40 mg/kg \u2013 26.50 mg/kg with mean value of 26.50 mg/kg for Royal salt; 36.80 mg/kg \u2013 37.20 mg/kg with mean iodine content of 37.0 mg/kg for Abakaliki refined salt, and 30.07 mg/kg \u2013 31.20 mg/kg with mean value of 31.00 mg/kg for Ikom refined salt. The mean iodine content measured in the Sea salt brand (15.70 mg/kg) was significantly P < 0.01 lower compared to those measured in other table salt brands. Although the iodine content of Abakaliki and Ikom refined salt exceed the recommended value, it is clear that only Sea salt brand falls below the World Health Organization (WHO) recommended value (20 \u2013 30 mg/kg), while the remaining table salt samples are just within the range. The results obtained have revealed that 70 % of the table salt brands were adequately iodized while 30 % of the table salt brands were not adequately iodized and provided baseline data that can be used for potential identification of human health risks associated with inadequate and/or excess iodine content in table salt brands consumed in households in Nigeria.", "which Has result ?", "The iodine content ranged from 14.80 mg/kg \u2013 16.90 mg/kg with mean value of 15.90 mg/kg for Sea salt; 24.30 mg/kg \u2013 25.40 mg/kg with mean value of 24.60 mg/kg for Dangote salt (blue sachet); 22.10 mg/kg \u2013 23.10 mg/kg with mean value of 22.40 mg/kg for Dangote salt (red sachet); 23.30 mg/kg \u2013 24.30 mg/kg with mean value of 23.60 mg/kg for Mr Chef salt; 23.30 mg/kg \u2013 24.30 mg/kg with mean value of 23.60 mg/kg for Annapurna; 26.80 mg/kg \u2013 27.50 mg/kg with mean value of 27.20mg/kg for Uncle Palm salt; 23.30 mg/kg \u2013 29.60 mg/kg with mean content of 26.40 mg/kg for Dangote (bag); 25.40 mg/kg \u2013 26.50 mg/kg with mean value of 26.50 mg/kg for Royal salt; 36.80 mg/kg \u2013 37.20 mg/kg with mean iodine content of 37.0 mg/kg for Abakaliki refined salt, and 30.07 mg/kg \u2013 31.20 mg/kg with mean value of 31.00 mg/kg for Ikom refined salt. ", NaN, NaN], ["Infections encompass a set of medical conditions of very diverse kinds that can pose a significant risk to health, and even death. As with many other diseases, early diagnosis can help to provide patients with proper care to minimize the damage produced by the disease, or to isolate them to avoid the risk of spread. In this context, computational intelligence can be useful to predict the risk of infection in patients, raising early alarms that can aid medical teams to respond as quick as possible. In this paper, we survey the state of the art on infection prediction using computer science by means of a systematic literature review. 
The objective is to find papers where computational intelligence is used to predict infections in patients using physiological data as features. We have posed one major research question along with nine specific subquestions. The whole review process is thoroughly described, and eight databases are considered which index most of the literature published in different scholarly formats. A total of 101 relevant documents have been found in the period comprised between 2003 and 2019, and a detailed study of these documents is carried out to classify the works and answer the research questions posed, resulting to our best knowledge in the most comprehensive study of its kind. We conclude that the most widely addressed infection is by far sepsis, followed by Clostridium difficile infection and surgical site infections. Most works use machine learning techniques, from which logistic regression, support vector machines, random forest and naive Bayes are the most common. Some machine learning works provide some ideas on the problems of small data and class imbalance, which can be of interest. The current systematic literature review shows that automatic diagnosis of infectious diseases using computational intelligence is well documented in the medical literature.", "which Has result ?", "Machine Learning", 1480.0, 1496.0], ["In this work we offer an open and data-driven skills taxonomy, which is independent of ESCO and O*NET, two popular available taxonomies that are expert-derived. Since the taxonomy is created in an algorithmic way without expert elicitation, it can be quickly updated to reflect changes in labour demand and provide timely insights to support labour market decision-making. Our proposed taxonomy also captures links between skills, aggregated job titles, and the salaries mentioned in the millions of UK job adverts used in this analysis. To generate the taxonomy, we employ machine learning methods, such as word embeddings, network community detection algorithms and consensus clustering. We model skills as a graph with individual skills as vertices and their co-occurrences in job adverts as edges. The strength of the relationships between the skills is measured using both the frequency of actual co-occurrences of skills in the same advert as well as their shared context, based on a trained word embeddings model. Once skills are represented as a network, we hierarchically group them into clusters. To ensure the stability of the resulting clusters, we introduce bootstrapping and consensus clustering stages into the methodology. While we share initial results and describe the skill clusters, the main purpose of this paper is to outline the methodology for building the taxonomy.", "which Has result ?", "skills taxonomy", 46.0, 61.0], ["Infections encompass a set of medical conditions of very diverse kinds that can pose a significant risk to health, and even death. As with many other diseases, early diagnosis can help to provide patients with proper care to minimize the damage produced by the disease, or to isolate them to avoid the risk of spread. In this context, computational intelligence can be useful to predict the risk of infection in patients, raising early alarms that can aid medical teams to respond as quick as possible. In this paper, we survey the state of the art on infection prediction using computer science by means of a systematic literature review. 
The objective is to find papers where computational intelligence is used to predict infections in patients using physiological data as features. We have posed one major research question along with nine specific subquestions. The whole review process is thoroughly described, and eight databases are considered which index most of the literature published in different scholarly formats. A total of 101 relevant documents have been found in the period comprised between 2003 and 2019, and a detailed study of these documents is carried out to classify the works and answer the research questions posed, resulting to our best knowledge in the most comprehensive study of its kind. We conclude that the most widely addressed infection is by far sepsis, followed by Clostridium difficile infection and surgical site infections. Most works use machine learning techniques, from which logistic regression, support vector machines, random forest and naive Bayes are the most common. Some machine learning works provide some ideas on the problems of small data and class imbalance, which can be of interest. The current systematic literature review shows that automatic diagnosis of infectious diseases using computational intelligence is well documented in the medical literature.", "which Has result ?", "Sepsis", 1383.0, 1389.0], ["This study investigates the distribution of cadmium and lead concentrations in the outcrop rock samples collected from Abakaliki anticlinorium in the Southern Benue Trough, Nigeria. The outcrop rock samples from seven sampling locations were air\u2013dried for seventy\u2013two hours, homogenized by grinding and pass through < 63 micron mesh sieve. The ground and homogenized rock samples were pulverized and analyzed for cadmium and lead using X-Ray Fluorescence Spectrometer. The concentrations of heavy metals in the outcrop rock samples ranged from < 0.10 \u2013 7.95 mg kg\u20131 for cadmium (Cd) and < 1.00 \u2013 4966.00 mg kg\u20131 for lead (Pb). Apart from an anomalous concentration measured in Afikpo Shale (Middle Segment), the results obtained revealed that rock samples from all the sampling locations yielded cadmium concentrations of < 0.10 mg kg\u20131 and the measured concentrations were below the average crustal abundance of 0.50 mg kg\u20131. Although background concentration of <1.00 \u00b1 0.02 mg kg\u20131 was measured in Abakaliki Shale, rock samples from all the sampling locations revealed anomalous lead concentrations above average crustal abundance of 30 mg kg\u20131. The results obtained reveal important contributions towards understanding of heavy metal distribution patterns and provide baseline data that can be used for potential identification of areas at risk associated with natural sources of heavy metals contamination in the region. The use of outcrop rocks provides a cost\u2013effective approach for monitoring regional heavy metal contamination associated with dissolution and/or weathering of rocks or parent materials. Evaluation of heavy metals may be effectively used in large scale regional pollution monitoring of soil, groundwater, atmospheric and marine environment. Therefore, monitoring of heavy metal concentrations in soils, groundwater and atmospheric environment is imperative in order to prevent bioaccumulation in various ecological receptors.", "which Has result ?", "The concentrations of heavy metals in the outcrop rock samples ranged from < 0.10 \u2013 7.95 mg kg\u20131 for cadmium (Cd) and < 1.00 \u2013 4966.00 mg kg\u20131 for lead (Pb). ", NaN, NaN], ["The increasingly pervasive nature of the Web, expanding to devices and things in everyday life, along with new trends in Artificial Intelligence call for new paradigms and a new look on Knowledge Representation and Processing at scale for the Semantic Web. The emerging, but still to be concretely shaped concept of \"Knowledge Graphs\" provides an excellent unifying metaphor for this current status of Semantic Web research. More than two decades of Semantic Web research provides a solid basis and a promising technology and standards stack to interlink data, ontologies and knowledge on the Web. However, neither are applications for Knowledge Graphs as such limited to Linked Open Data, nor are instantiations of Knowledge Graphs in enterprises \u2013 while often inspired by \u2013 limited to the core Semantic Web stack. This report documents the program and the outcomes of Dagstuhl Seminar 18371 \"Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web\", where a group of experts from academia and industry discussed fundamental questions around these topics for a week in early September 2018, including the following: what are knowledge graphs? Which applications do we see to emerge? Which open research questions still need be addressed and which technology gaps still need to be closed?", "which Has result ?", "Seminar", 879.0, 886.0], ["The objective of this study was to optimise raw tooke flour-(RTF), vital gluten (VG) and water absorption (WA) with respect to bread-making quality and cost effectiveness of RTF/wheat composite flour. The hypothesis generated for this study was that optimal substitution of RTF and VG into wheat has no significant effect on baking quality of the resultant composite flour. A basic white wheat bread recipe was adopted and response surface methodology (RSM) procedures applied. A D-optimal design was employed with the following variables: RTF (x1) 0-33%, WA (x2) -2FWA to +2FWA and VG (x3) 0 - 3%. 
Seven responses were modelled. Baking worth number, volume yield and cost were simultaneously optimized using desirability function approach. Models developed adequately described the relationships and were confirmed by validation studies. RTF showed the greatest effect on all models, which effect impaired baking performance of composite flour. VG and Farinograph water absorption (FWA) as well as their interaction improved bread quality. Vitality of VG was enhanced by RTF. The optimal formulation for maximum baking quality was 0.56%(x1), 0.33%(x2) and -1.24(x3) while a formulation of 22%(x1), 3%(x2) and +1.13(x3) maximized RTF incorporation in the respective and composite bread quality at lowest cost. Thus, the set hypothesis was not rejected. Key words: Raw tooke flour, composite bread, baking quality, response surface methodology, Farinograph water absorption, vital gluten.", "which Upper range of non wheat flour ?", "33%", NaN, NaN], ["BACKGROUND On-farm crop species richness (CSR) may be important for maintaining the diversity and quality of diets of smallholder farming households. OBJECTIVES The objectives of this study were to 1) determine the association of CSR with the diversity and quality of household diets in Malawi and 2) assess hypothesized mechanisms for this association via both subsistence- and market-oriented pathways. METHODS Longitudinal data were assessed from nationally representative household surveys in Malawi between 2010 and 2013 (n = 3000 households). A household diet diversity score (DDS) and daily intake per adult equivalent of energy, protein, iron, vitamin A, and zinc were calculated from 7-d household consumption data. CSR was calculated from plot-level data on all crops cultivated during the 2009-2010 and 2012-2013 agricultural seasons in Malawi. Adjusted generalized estimating equations were used to assess the longitudinal relation of CSR with household diet quality and diversity. RESULTS CSR was positively associated with DDS (\u03b2: 0.08; 95% CI: 0.06, 0.12; P < 0.001), as well as daily intake per adult equivalent of energy (kilocalories) (\u03b2: 41.6; 95% CI: 20.9, 62.2; P < 0.001), protein (grams) (\u03b2: 1.78; 95% CI: 0.80, 2.75; P < 0.001), iron (milligrams) (\u03b2: 0.30; 95% CI: 0.16, 0.44; P < 0.001), vitamin A (micrograms of retinol activity equivalent) (\u03b2: 25.8; 95% CI: 12.7, 38.9; P < 0.001), and zinc (milligrams) (\u03b2: 0.26; 95% CI: 0.13, 0.38; P < 0.001). Neither proportion of harvest sold nor distance to nearest population center modified the relation between CSR and household diet diversity or quality (P \u2265 0.05). Households with greater CSR were more commercially oriented (least-squares mean proportion of harvest sold \u00b1 SE, highest tertile of CSR: 17.1 \u00b1 0.52; lowest tertile of CSR: 8.92 \u00b1 1.09) (P < 0.05). CONCLUSION Promoting on-farm CSR may be a beneficial strategy for simultaneously supporting enhanced diet quality and diversity while also creating opportunities for smallholder farmers to engage with markets in subsistence agricultural contexts.", "which p-value ?", "P < 0.001", 1071.0, 1080.0], ["Iodine deficiency disorders (IDD) has been a major global public health problem threatening more than 2 billion people worldwide. Considering various human health implications associated with iodine deficiency, universal salt iodization programme has been recognized as one of the best methods of preventing iodine deficiency disorder and iodizing table salt is currently done in many countries. 
In this study, comparative assessment of iodine content of commercially available table salt brands in Nigerian market were investigated and iodine content were measured in ten table salt brands samples using iodometric titration. The iodine content ranged from 14.80 mg/kg \u2013 16.90 mg/kg with mean value of 15.90 mg/kg for Sea salt; 24.30 mg/kg \u2013 25.40 mg/kg with mean value of 24.60 mg/kg for Dangote salt (blue sachet); 22.10 mg/kg \u2013 23.10 mg/kg with mean value of 22.40 mg/kg for Dangote salt (red sachet); 23.30 mg/kg \u2013 24.30 mg/kg with mean value of 23.60 mg/kg for Mr Chef salt; 23.30 mg/kg \u2013 24.30 mg/kg with mean value of 23.60 mg/kg for Annapurna; 26.80 mg/kg \u2013 27.50 mg/kg with mean value of 27.20mg/kg for Uncle Palm salt; 23.30 mg/kg \u2013 29.60 mg/kg with mean content of 26.40 mg/kg for Dangote (bag); 25.40 mg/kg \u2013 26.50 mg/kg with mean value of 26.50 mg/kg for Royal salt; 36.80 mg/kg \u2013 37.20 mg/kg with mean iodine content of 37.0 mg/kg for Abakaliki refined salt, and 30.07 mg/kg \u2013 31.20 mg/kg with mean value of 31.00 mg/kg for Ikom refined salt. The mean iodine content measured in the Sea salt brand (15.70 mg/kg) was significantly P < 0.01 lower compared to those measured in other table salt brands. Although the iodine content of Abakaliki and Ikom refined salt exceed the recommended value, it is clear that only Sea salt brand falls below the World Health Organization (WHO) recommended value (20 \u2013 30 mg/kg), while the remaining table salt samples are just within the range. The results obtained have revealed that 70 % of the table salt brands were adequately iodized while 30 % of the table salt brands were not adequately iodized and provided baseline data that can be used for potential identification of human health risks associated with inadequate and/or excess iodine content in table salt brands consumed in households in Nigeria.", "which Conclusion ?", "The results obtained have revealed that 70 % of the table salt brands were adequately iodized while 30 % of the table salt brands were not adequately iodized and provided baseline data that can be used for potential identification of human health risks associated with inadequate and/or excess iodine content in table salt brands consumed in households in Nigeria.", NaN, NaN], ["This study investigates the distribution of cadmium and lead concentrations in the outcrop rock samples collected from Abakaliki anticlinorium in the Southern Benue Trough, Nigeria. The outcrop rock samples from seven sampling locations were air\u2013dried for seventy\u2013two hours, homogenized by grinding and pass through < 63 micron mesh sieve. The ground and homogenized rock samples were pulverized and analyzed for cadmium and lead using X-Ray Fluorescence Spectrometer. The concentrations of heavy metals in the outcrop rock samples ranged from < 0.10 \u2013 7.95 mg kg\u20131 for cadmium (Cd) and < 1.00 \u2013 4966.00 mg kg\u20131 for lead (Pb). Apart from an anomalous concentration measured in Afikpo Shale (Middle Segment), the results obtained revealed that rock samples from all the sampling locations yielded cadmium concentrations of < 0.10 mg kg\u20131 and the measured concentrations were below the average crustal abundance of 0.50 mg kg\u20131. Although background concentration of <1.00 \u00b1 0.02 mg kg\u20131 was measured in Abakaliki Shale, rock samples from all the sampling locations revealed anomalous lead concentrations above average crustal abundance of 30 mg kg\u20131. 
The results obtained reveal important contributions towards understanding of heavy metal distribution patterns and provide baseline data that can be used for potential identification of areas at risk associated with natural sources of heavy metals contamination in the region. The use of outcrop rocks provides a cost\u2013effective approach for monitoring regional heavy metal contamination associated with dissolution and/or weathering of rocks or parent materials. Evaluation of heavy metals may be effectively used in large scale regional pollution monitoring of soil, groundwater, atmospheric and marine environment. Therefore, monitoring of heavy metal concentrations in soils, groundwater and atmospheric environment is imperative in order to prevent bioaccumulation in various ecological receptors.", "which Conclusion ?", "monitoring of heavy metal concentrations in soils, groundwater and atmospheric environment is imperative in order to prevent bioaccumulation in various ecological receptors.", NaN, NaN], ["Textural characterization and heavy mineral studies of beach sediments in Ibeno and Eastern Obolo Local Government Areas of Akwa Ibom State were carried out in the present study. The main aim was to infer their provenance, transport history and environment of deposition. Sediment samples were collected at the water\u2013sediment contact along the shoreline at an interval of about 3m. Ten samples were collected from study location 1 (Ibeno Beach) and twelve samples were collected from study location 2 (Eastern Obolo Beach). A total of twenty\u2013two samples were collected from the field and brought to the laboratory for textural and compositional analyses. The results showed that the value of graphic mean size ranged from 1.70\u03a6 to 2.83\u03a6, sorting values ranged from 0.39\u03a6 \u2013 0.60\u03a6, skewness values ranged from -0.02 to 0.10 while kurtosis values ranged from 1.02 to 2.46, indicating medium to fine grained and well sorted sediments. This suggested that the sediments have been transported far from their source. Longshore current and onshore\u2013offshore movements of sediment are primarily responsible in sorting of the heavy minerals. The histogram charts for the different samples and standard deviation versus skewness indicated a beach environment of deposition. This implies that the sediments are dominated by one class of grain size; a phenomenon characteristic of beach environments. The heavy mineral assemblages identified in this research work were rutile, zircon, tourmaline, hornblende, apatite, diopside, glauconite, pumpellyite, cassiterite, epidote, garnet, augite, enstatite, andalusite and opaque minerals. The zircon-tourmaline-rutile (ZTR) index ranged from 47.30% to 87.00% with most of the samples showing a ZTR index greater than 50%. These indicated that the sediments were mineralogically sub-mature and have been transported far from their source. The heavy minerals identified are indicative of being products of reworked sediments of both metamorphic (high rank) and igneous (both mafic and sialic) origin probably derived from the basement rocks of the Oban Massif as well as reworked sediments of the Benue Trough. 
Therefore, findings from the present study indicated that erosion, accretion, and stability of beaches are controlled by strong hydrodynamic and hydraulic processes.", "which Conclusion ?", "Findings from the present study indicated that erosion, accretion, and stability of beaches are controlled by strong hydrodynamic and hydraulic processes.", NaN, NaN], ["BACKGROUND: The cellular immune system is of pivotal importance with regard to the response to severe infections. Monocytes/macrophages are considered key immune cells in infections and downregulation of the surface expression of monocytic human leukocyte antigen-DR (mHLA-DR) within the major histocompatibility complex class II reflects a state of immunosuppression, also referred to as injury-associated immunosuppression. As the role of immunosuppression in coronavirus disease 2019 (COVID-19) is currently unclear, we seek to explore the level of mHLA-DR expression in COVID-19 patients. METHODS: In a preliminary prospective monocentric observational study, 16 COVID-19\u2013positive patients (75% male, median age: 68 [interquartile range 59\u201375]) requiring hospitalization were included. The median Acute Physiology and Chronic Health Evaluation-II (APACHE-II) score in 9 intensive care unit (ICU) patients with acute respiratory failure was 30 (interquartile range 25\u201332). Standardized quantitative assessment of HLA-DR on monocytes (cluster of differentiation 14+ cells) was performed using calibrated flow cytometry at baseline (ICU/hospital admission) and at days 3 and 5 after ICU admission. Baseline data were compared to hospitalized noncritically ill COVID-19 patients. RESULTS: While normal mHLA-DR expression was observed in all hospitalized noncritically ill patients (n = 7), 89% (8 of 9) critically ill patients with COVID-19\u2013induced acute respiratory failure showed signs of downregulation of mHLA-DR at ICU admission. mHLA-DR expression at admission was significantly lower in critically ill patients (median, [quartiles]: 9280 antibodies/cell [6114, 16,567]) as compared to the noncritically ill patients (30,900 antibodies/cell [26,777, 52,251]), with a median difference of 21,508 antibodies/cell (95% confidence interval [CI], 14,118\u201342,971), P = .002. Reduced mHLA-DR expression was observed to persist until day 5 after ICU admission. CONCLUSIONS: When compared to noncritically ill hospitalized COVID-19 patients, ICU patients with severe COVID-19 disease showed reduced mHLA-DR expression on circulating CD14+ monocytes at ICU admission, indicating a dysfunctional immune response. This immunosuppressive (monocytic) phenotype remained unchanged over the ensuing days after ICU admission. Strategies aiming for immunomodulation in this population of critically ill patients should be guided by an immune-monitoring program in an effort to determine who might benefit best from a given immunological intervention.", "which median mHLA-DR expression in ICU patients at admission to ICU ?", "9280 antibodies/cell", 1640.0, 1660.0], ["For research institutes, data libraries, and data archives, validating RDF data according to predefined constraints is a much sought-after feature, particularly as this is taken for granted in the XML world. Based on our work in two international working groups on RDF validation and jointly identified requirements to formulate constraints and validate RDF data, we have published 81 types of constraints that are required by various stakeholders for data applications. 
In this paper, we evaluate the usability of identified constraint types for assessing RDF data quality by (1) collecting and classifying 115 constraints on vocabularies commonly used in the social, behavioral, and economic sciences, either from the vocabularies themselves or from domain experts, and (2) validating 15,694 data sets (4.26 billion triples) of research data against these constraints. We classify each constraint according to (1) the severity of occurring violations and (2) which types of constraint languages are able to express its constraint type. Based on the large-scale evaluation, we formulate several findings to direct the further development of constraint languages.", "which Number of RDF Datasets ?", "15,694 data sets", 787.0, 803.0], ["The Corpus of Historical American English (COHA) contains 400 million words in more than 100,000 texts which date from the 1810s to the 2000s. The corpus contains texts from fiction, popular magazines, newspapers and non-fiction books, and is balanced by genre from decade to decade. It has been carefully lemmatised and tagged for part-of-speech, and uses the same architecture as the Corpus of Contemporary American English (COCA), BYU-BNC, the TIME Corpus and other corpora. COHA allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English.", "which Corpus statistics ?", "400 million words", 58.0, 75.0], ["We present LatinISE, a Latin corpus for the Sketch Engine. LatinISE consists of Latin works comprising a total of 13 million words, covering the time span from the 2nd century B.C. to the 21st century A.D. LatinISE is provided with rich metadata mark-up, including author, title, genre, era, date and century, as well as book, section, paragraph and line of verses. We have automatically annotated LatinISE with lemma and part-of-speech information. The annotation enables the users to search the corpus with a number of criteria, ranging from lemma, part-of-speech, context, to subcorpora defined chronologically or by genre. We also illustrate word sketches, one-page summaries of a word\u2019s corpus-based collocational behaviour. Our future plan is to produce word sketches for Latin words by adding richer morphological and syntactic annotation to the corpus.", "which Corpus statistics ?", "a total of 13 million words", 103.0, 130.0], ["The Corpus of Historical American English (COHA) contains 400 million words in more than 100,000 texts which date from the 1810s to the 2000s. The corpus contains texts from fiction, popular magazines, newspapers and non-fiction books, and is balanced by genre from decade to decade. It has been carefully lemmatised and tagged for part-of-speech, and uses the same architecture as the Corpus of Contemporary American English (COCA), BYU-BNC, the TIME Corpus and other corpora. 
COHA allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English.", "which Corpus statistics ?", "more than 100,000 texts", 79.0, 102.0], ["Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19\u2013related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.", "which has size ?", "14 projects", 1552.0, 1563.0], ["\nPurpose\nIn terms of entrepreneurship, open data benefits include economic growth, innovation, empowerment and new or improved products and services. Hackathons encourage the development of new applications using open data and the creation of startups based on these applications. 
Researchers focus on factors that affect nascent entrepreneurs\u2019 decision to create a startup, but this question has not been fully investigated in the field of open data hackathons yet. This paper aims to suggest a model that incorporates factors that affect the decision of establishing a startup by developers who have participated in open data hackathons.\n\n\nDesign/methodology/approach\nIn total, 70 papers were examined and analyzed using a three-phased literature review methodology, which was suggested by Webster and Watson (2002). These surveys investigated several factors that affect a nascent entrepreneur\u2019s decision to create a startup.\n\n\nFindings\nEventually, by identifying the motivations for developers to participate in a hackathon, and understanding the benefits of the use of open data, researchers will be able to elaborate the proposed model and evaluate whether the contest has contributed to the decision of establishing a startup, whether the factors that affect the decision to establish a startup apply to open data developers, and whether the participants of the contest agree with these factors.\n\n\nOriginality/value\nThe paper expands the scope of open data research in the entrepreneurship field, stating the need for more research to be conducted regarding open data in entrepreneurship through hackathons.\n", "which has size ?", "70 papers", 853.0, 862.0], ["Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.", "which Number of mentions ?", "3756 software mentions", 621.0, 643.0], ["Protein modifications, in particular post-translational modifications, have a central role in bringing about the full repertoire of protein functions, and the identification of specific protein modifications is important for understanding biological systems. This task presents a number of opportunities for the automatic support of manual curation efforts. However, the sheer number of different types of protein modifications is a daunting challenge for automatic extraction that has so far not been met in full, with most studies focusing on single modifications or a few prominent ones. 
In this work, we aim to meet this challenge: we analyse protein modification types through ontologies, databases, and literature and introduce a corpus of 360 abstracts manually annotated in the BioNLP Shared Task event representation for over 4500 mentions of proteins and 1000 statements of modification events of nearly 40 different types. We argue that together with existing resources, this corpus provides sufficient coverage of modification types to make effectively exhaustive extraction of protein modifications from text feasible.", "which Number of mentions ?", "4500 mentions of proteins", 832.0, 857.0], ["Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the \"Normalized Gene List\" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.", "which Number of mentions ?", "Mouse", 583.0, 588.0], ["Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the \"Normalized Gene List\" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. 
These results hold out promise that the technology can provide partial automation of the curation process in the near future.", "which Number of mentions ?", "Yeast", 567.0, 572.0], ["Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the \"Normalized Gene List\" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.", "which Number of mentions ?", "Fly", 574.0, 577.0], ["Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19\u2013related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. 
Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.", "which has duration ?", "4-day", 560.0, 565.0], ["The electrochemical reduction of solid silica has been investigated in molten CaCl2 at 900 \u00b0C for the one-step, up-scalable, controllable and affordable production of nanostructured silicon with promising photo-responsive properties. Cyclic voltammetry of the metallic cavity electrode loaded with fine silica powder was performed to elaborate the electrochemical reduction mechanism. Potentiostatic electrolysis of porous and dense silica pellets was carried out at different potentials, focusing on the influences of the electrolysis potential and the microstructure of the precursory silica on the product purity and microstructure. The findings suggest a potential range between \u22120.60 and \u22120.95 V (vs. Ag/AgCl) for the production of nanostructured silicon with high purity (>99 wt%). According to the elucidated mechanism on the electro-growth of the silicon nanostructures, optimal process parameters for the controllable preparation of high-purity silicon nanoparticles and nanowires were identified. Scaling-up the optimal electrolysis was successful at the gram-scale for the preparation of high-purity silicon nanowires which exhibited promising photo-responsive properties.", "which temperature ?", "900 \u00b0C", 87.0, 93.0], ["Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179\u00b122 nm were produced. 
Res was loaded with high EE of 73\u00b10.9% and DL content of 6.2\u00b10.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.", "which Paricle size of nanoparicles ?", "179\u00b122 nm", 1254.0, 1263.0], ["Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179\u00b122 nm were produced. Res was loaded with high EE of 73\u00b10.9% and DL content of 6.2\u00b10.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.", "which Drug loading ?", "6.2\u00b10.1%", NaN, NaN], ["Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. 
Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179\u00b122 nm were produced. Res was loaded with high EE of 73\u00b10.9% and DL content of 6.2\u00b10.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.", "which Drug entrapment efficiency ?", "73\u00b10.9%", NaN, NaN], ["Microorganisms are well adapted to their habitat but are partially sensitive to toxic metabolites or abiotic compounds secreted by other organisms or chemically formed under the respective environmental conditions. Thermoacidophiles are challenged by pyroglutamate, a lactam that is spontaneously formed by cyclization of glutamate under aerobic thermoacidophilic conditions. It is known that growth of the thermoacidophilic crenarchaeon Saccharolobus solfataricus (formerly Sulfolobus solfataricus) is completely inhibited by pyroglutamate. In the present study, we investigated the effect of pyroglutamate on the growth of S. solfataricus and the closely related crenarchaeon Sulfolobus acidocaldarius. In contrast to S. solfataricus, S. acidocaldarius was successfully cultivated with pyroglutamate as a sole carbon source. Bioinformatical analyses showed that both members of the Sulfolobaceae have at least one candidate for a 5-oxoprolinase, which catalyses the ATP-dependent conversion of pyroglutamate to glutamate. In S. solfataricus, we observed the intracellular accumulation of pyroglutamate and crude cell extract assays showed a less effective degradation of pyroglutamate. Apparently, S. acidocaldarius seems to be less versatile regarding carbohydrates and prefers peptidolytic growth compared to S. solfataricus. Concludingly, S. acidocaldarius exhibits a more efficient utilization of pyroglutamate and is not inhibited by this compound, making it a better candidate for applications with glutamate-containing media at high temperatures.", "which Protein Target ?", "5-oxoprolinase", 932.0, 946.0], ["The BioCreative NLM-Chem track calls for a community effort to fine-tune automated recognition of chemical names in biomedical literature. Chemical names are one of the most searched biomedical entities in PubMed and \u2013 as highlighted during the COVID-19 pandemic \u2013 their identification may significantly advance research in multiple biomedical subfields. 
While previous community challenges focused on identifying chemical names mentioned in titles and abstracts, the full text contains valuable additional detail. We organized the BioCreative NLM-Chem track to call for a community effort to address automated chemical entity recognition in full-text articles. The track consisted of two tasks: 1) Chemical Identification task, and 2) Chemical Indexing prediction task. For the Chemical Identification task, participants were expected to predict with high accuracy all chemicals mentioned in recently published full-text articles, both span (i.e., named entity recognition) and normalization (i.e., entity linking) using MeSH. For the Chemical Indexing task, participants identified which chemicals should be indexed as topics for the article's topic terms in the NLM article and indexing, i.e., appear in the listing of MeSH terms for the document. This manuscript summarizes the BioCreative NLM-Chem track. We received a total of 88 submissions from 17 teams worldwide. The highest performance achieved for the Chemical Identification task was 0.8672 f-score (0.8759 precision, 0.8587 recall) for strict NER performance and 0.8136 f-score (0.8621 precision, 0.7702 recall) for strict normalization performance. The highest performance achieved for the Chemical Indexing task was 0.4825 f-score (0.4397 precision, 0.5344 recall). The NLM-Chem track dataset and other challenge materials are publicly available at https://ftp.ncbi.nlm.nih.gov/pub/lu/BC7-NLM-Chem-track/. This community challenge demonstrated 1) the current substantial achievements in deep learning technologies can be utilized to further improve automated prediction accuracy, and 2) the Chemical Indexing task is substantially more challenging. We look forward to further development of biomedical text mining methods to respond to the rapid growth of biomedical literature. Keywords\u2014biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; text mining; chemical entity recognition; chemical indexing", "which Total teams ?", "17 teams", 1362.0, 1370.0], ["There is an increasing need to facilitate automated access to information relevant for chemical compounds and drugs described in text, including scientific articles, patents or health agency reports. A number of recent efforts have implemented natural language processing (NLP) and text mining technologies for the chemical domain (ChemNLP or chemical text mining). Due to the lack of manually labeled Gold Standard datasets together with comprehensive annotation guidelines, both the implementation as well as the comparative assessment of ChemNLP technologies are opaque. Two key components for most chemical text mining technologies are the indexing of documents with chemicals (chemical document indexing, CDI) and finding the mentions of chemicals in text (chemical entity mention recognition, CEM). These two tasks formed part of the chemical compound and drug named entity recognition (CHEMDNER) task introduced at the fourth BioCreative challenge, a community effort to evaluate biomedical text mining applications. For this task, the CHEMDNER text corpus was constructed, consisting of 10,000 abstracts containing a total of 84,355 mentions of chemical compounds and drugs that have been manually labeled by domain experts following specific annotation guidelines. 
This corpus covers representative abstracts from major chemistry-related sub-disciplines such as medicinal chemistry, biochemistry, organic chemistry and toxicology. A total of 27 teams \u2013 23 academic and 4 commercial groups, comprised of 87 researchers \u2013 submitted results for this task. Of these teams, 26 provided submissions for the CEM subtask and 23 for the CDI subtask. Teams were provided with the manual annotations of 7,000 abstracts to implement and train their systems and then had to return predictions for the 3,000 test set abstracts during a short period of time. When comparing exact matches of the automated results against the manually labeled Gold Standard annotations, the best teams reached an F-score of 87.39% in the CEM task and of 88.20% in the CDI task. This can be regarded as a very competitive result when compared to the expected upper boundary, the agreement between two human annotators, at 91%. In general, the technologies used to detect chemicals and drugs by the teams included machine learning methods (particularly CRFs using a considerable range of different features), interaction of chemistry-related lexical resources and manual rules (e.g., to cover abbreviations, chemical formula or chemical identifiers). By promoting the availability of the software of the participating systems as well as through the release of the CHEMDNER corpus to enable implementation of new tools, this work fosters the development of text mining applications like the automatic extraction of biochemical reactions, toxicological properties of compounds, or the detection of associations between genes or mutations and drugs in the context of pharmacogenomics.", "which Total teams ?", "27 teams", 1451.0, 1459.0], ["Considering recent progress in NLP, deep learning techniques and biomedical language models, there is a pressing need to generate annotated resources and comparable evaluation scenarios that enable the development of advanced biomedical relation extraction systems that extract interactions between drugs/chemical entities and genes, proteins or miRNAs. Building on the results and experience of the CHEMDNER, CHEMDNER patents and ChemProt tracks, we have posed the DrugProt track at BioCreative VII. The DrugProt track focused on the evaluation of automatic systems able to extract 13 different types of drug-genes/protein relations of importance to understand gene regulatory and pharmacological mechanisms. The DrugProt track addressed regulatory associations (direct/indirect, activator/inhibitor relations), certain types of binding associations (antagonist and agonist relations) as well as metabolic associations (substrate or product relations). To promote development of novel tools and offer a comparative evaluation scenario we have released 61,775 manually annotated gene mentions, 65,561 chemical and drug mentions and a total of 24,526 relationships manually labeled by domain experts. A total of 30 teams submitted results for the DrugProt main track, while 9 teams submitted results for the large-scale text mining subtrack that required processing of over 2.3 million records. Teams obtained very competitive results, with predictions reaching F-measures of over 0.92 for some relation types (antagonist) and F-measures across all relation types close to 0.8. 
INTRODUCTION Among the most relevant biological and pharmacological relation types are those that involve (a) chemical compounds and drugs as well as (b) gene products including genes, proteins, miRNAs. A variety of associations between chemicals and genes/proteins are described in the biomedical literature, and there is a growing interest in facilitating a more systematic extraction of these relations from the literature, either for manual database curation initiatives or to generate large knowledge graphs of importance for drug discovery, drug repurposing, building regulatory or interaction networks or to characterize off-target interactions of drugs that might be of importance to understand better adverse drug reactions. At BioCreative VI, the ChemProt track tried to promote the development of novel systems between chemicals and genes for groups of biologically related association types (ChemProt track relation groups or CPRs). Although the obtained results did have a considerable impact in the development and evaluation of new biomedical relation extraction systems, a limitation of grouping more specific relation types into broader groups was the difficulty to directly exploit the results for database curation efforts and biomedical knowledge graph mining application scenarios. The considerable interest in the integration of chemical and biomedical data for drug-discovery purposes, together with the ongoing curation of relationships between biological and chemical entities from scientific publications and patents due to the recent COVID-19 pandemic, motivated the DrugProt track of BioCreative VII, which proposed using more granular relation types. In order to facilitate the development of more granular relation extraction systems large manually annotated corpora are needed. Those corpora should include high-quality manually labelled entity mentions together with exhaustive relation annotations generated by domain experts. TRACK AND CORPUS DESCRIPTION Corpus description To carry out the DrugProt track at BioCreative VII, we have released a large manually labelled corpus including annotations of mentions of chemical compounds and drugs as well as genes, proteins and miRNAs. Domain experts with experience in biomedical literature annotation and database curation annotated by hand all abstracts using the BRAT annotation interface. The manual labeling of chemicals and genes was done in separate steps and by different experts to avoid introducing biases during the text annotation process. The manual tagging of entity mentions of chemicals and drugs as well as genes, proteins and miRNAs was done following a carefully designed annotation process and in line with publicly released annotation guidelines. Gene/protein entity mentions were manually mapped to their corresponding biological database identifiers whenever possible and classified as either normalizable to databases (tag: GENE-Y) or non-normalizable mentions (GENE-N). Teams that participated at the DrugProt track were only provided with this classification of gene mentions and not the actual database identifier to avoid usage of external knowledge bases for producing their predictions. The corpus construction process required first annotating exhaustively all chemical and gene mentions (phase 1). Afterwards the relation annotation phase followed (phase 2), where relationships between these two types of entities had to be labeled according to publicly available annotation guidelines. 
Thus, to facilitate the annotation of chemical-protein interactions, the DrugProt track organizers constructed very granular relation annotation rules described in a 33-page annotation guidelines document. These guidelines were refined during an iterative process based on the annotation of sample documents. The guidelines provided the basic details of the chemical-protein interaction annotation task and the conventions that had to be followed during the corpus construction process. They incorporated suggestions made by curators as well as observations of annotation inconsistencies encountered when comparing results from different human curators. In brief, DrugProt interactions covered direct interactions (when a physical contact existed between a chemical/drug and a gene/protein) as well as indirect regulatory interactions that alter either the function or the quantity of the gene/gene product. The aim of the iterative manual annotation cycle was to improve the quality and consistency of the guidelines. During the planning of the guidelines some rules had to be reformulated to make them more explicit and clear and additional rules were added wherever necessary to better cover the practical annotation scenario and for being more complete. The manual annotation task basically consisted of labeling or marking manually through a customized BRAT web interface the interactions given the article abstracts as content. Figure 1 summarizes the DrugProt relation types included in the annotation guidelines. Fig. 1. Overview of the DrugProt relation type hierarchy. The corpus annotation carried out for the DrugProt track was exhaustive for all the types of interactions previously specified. This implied that mentions of other kinds of relationships between chemicals and genes (e.g. phenotypic and biological responses) were not manually labelled. Moreover, the DrugProt relations are directed in the sense that only relations of \u201cwhat a chemical does to a gene/protein\u201d (chemical \u2192 gene/protein direction) were annotated, and not vice versa. To establish an easy-to-understand relation nomenclature and avoid redundant class definitions, we reviewed several chemical repositories that included chemical \u2013 biology information. We revised DrugBank, the Therapeutic Targets Database (TTD) and ChEMBL, assay normalization ontologies (BAO) and previously existing formalizations for the annotation of relationships: the Biological Expression Language (BEL), curation guidelines for transcription regulation interactions (DNA-binding transcription factor \u2013 target gene interaction) and SIGNOR, a database of causal relationships between biological entities. Each of these resources inspired the definition of the subclasses DIRECT REGULATOR (e.g. DrugBank, ChEMBL, BAO and SIGNOR) and the INDIRECT REGULATOR (e.g. BEL, curation guidelines for transcription regulation interactions and SIGNOR). For example, DrugBank relationships for drugs included a total of 22 definitions, some of them overlapping with CHEMPROT subclasses (e.g. \u201cInhibitor\u201d, \u201cAntagonist\u201d, \u201cAgonist\u201d,...), some of them being regarded as highly specific for the purpose of this task (e.g. \u201cintercalation\u201d, \u201ccross-linking/alkylation\u201d) or referring to biological roles (e.g. \u201cAntibody\u201d, \u201cIncorporation into and Destabilization\u201d) and others, partially overlapping between them (e.g. \u201cBinder\u201d and \u201cLigand\u201d), that were merged into a single class. 
Concerning indirect regulatory aspects, the five classes of causal relationships between a subject and an object term defined by BEL (\u201cdecreases\u201d, \u201cdirectlyDecreases\u201d, \u201cincreases\u201d, \u201cdirectlyIncreases\u201d and \u201ccausesNoChange\u201d) were highly inspiring. Subclass definitions of pharmacological modes of action were defined according to the IUPHAR/BPS Guide to Pharmacology in 2016. For the DrugProt track a very granular chemical-protein relation annotation was carried out, with the aim to cover most of the relations that are of importance from a biochemical and pharmacological/biomedical perspective. Nevertheless, for the DrugProt track only a total of 13 relation types were used, keeping those that had enough training instances/examples and sufficient manual annotation consistency. The final list of relation types used for this shared task was: INDIRECT-DOWNREGULATOR, INDIRECT-UPREGULATOR, DIRECT-REGULATOR, ACTIVATOR, INHIBITOR, AGONIST, ANTAGONIST, AGONIST-ACTIVATOR, AGONIST-INHIBITOR, PRODUCT-OF, SUBSTRATE, SUBSTRATE_PRODUCT-OF or PART-OF. The DrugProt corpus was split randomly into training, development and test set. We also included a background and large-scale background collection of records that were automatically annotated with drugs/chemicals and genes/proteins/miRNAs using an entity tagger trained on the manual DrugProt entity mentions. The background collections were merged with the test set to be able to get team predictions also for these records. Table 1 shows a su", "which Total teams ?", "30 teams submitted results for the DrugProt main track", 1210.0, 1264.0], ["The BioCreative LitCovid track calls for a community effort to tackle automated topic annotation for COVID-19 literature. The number of COVID-19-related articles in the literature is growing by about 10,000 articles per month, significantly challenging curation efforts and downstream interpretation. LitCovid is a literature database of COVID-19-related articles in PubMed, which has accumulated more than 180,000 articles with millions of accesses each month by users worldwide. The rapid literature growth significantly increases the burden of LitCovid curation, especially for topic annotations. Topic annotation in LitCovid assigns one or more (up to eight) labels to articles. The annotated topics have been widely used both directly in LitCovid (e.g., accounting for ~20% of total uses) and downstream studies such as knowledge network generation and citation analysis. It is, therefore, important to develop innovative text mining methods to tackle the challenge. We organized the BioCreative LitCovid track to call for a community effort to tackle automated topic annotation for COVID-19 literature. This article summarizes the BioCreative LitCovid track in terms of data collection and team participation. The dataset is publicly available via https://ftp.ncbi.nlm.nih.gov/pub/lu/LitCovid/biocreative/. It consists of over 30K PubMed articles, one of the largest multilabel classification datasets on biomedical literature. There were 80 submissions in total from 19 teams worldwide. The highest-performing submissions achieved 0.8875, 0.9181, and 0.9394 for macro F1-score, micro F1-score, and instance-based F1-score, respectively. We look forward to further participation in developing biomedical text mining methods in response to the rapid growth of the COVID-19 literature. 
Keywords\u2014biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; multi-label classification; COVID-19; LitCovid;", "which Total teams ?", "19 teams", 1473.0, 1481.0], ["Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task\u2014a result that approaches the human inter-annotator agreement (0.8875)\u2014and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system\u2019s ability to return real-time results: the average response times for each team\u2019s DNER and CID web service systems were 5.6 and 9.3 s, respectively. Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/
The teams that tried to use their task 1A systems to help on other BioCreAtIvE tasks reported mixed results. Conclusion The 80% plus F-measure results are good, but still somewhat lag the best scores achieved in some other domains such as newswire, due in part to the complexity and length of gene names, compared to person or organization names in newswire.", "which Total teams ?", "15 teams", 963.0, 971.0], ["Fully automated text mining (TM) systems promote efficient literature searching, retrieval, and review but are not sufficient to produce ready-to-consume curated documents. These systems are not meant to replace biocurators, but instead to assist them in one or more literature curation steps. To do so, the user interface is an important aspect that needs to be considered for tool adoption. The BioCreative Interactive task (IAT) is a track designed for exploring user-system interactions, promoting development of useful TM tools, and providing a communication channel between the biocuration and the TM communities. In BioCreative V, the IAT track followed a format similar to previous interactive tracks, where the utility and usability of TM tools, as well as the generation of use cases, have been the focal points. The proposed curation tasks are user-centric and formally evaluated by biocurators. In BioCreative V IAT, seven TM systems and 43 biocurators participated. Two levels of user participation were offered to broaden curator involvement and obtain more feedback on usability aspects. The full level participation involved training on the system, curation of a set of documents with and without TM assistance, tracking of time-on-task, and completion of a user survey. The partial level participation was designed to focus on usability aspects of the interface and not the performance per se. In this case, biocurators navigated the system by performing pre-designed tasks and then were asked whether they were able to achieve the task and the level of difficulty in completing the task. In this manuscript, we describe the development of the interactive task, from planning to execution and discuss major findings for the systems tested. Database URL: http://www.biocreative.org", "which Total teams ?", "43 biocurators", 950.0, 964.0], ["Abstract Background The goal of the first BioCreAtIvE challenge (Critical Assessment of Information Extraction in Biology) was to provide a set of common evaluation tasks to assess the state of the art for text mining applied to biological problems. The results were presented in a workshop held in Granada, Spain March 28\u201331, 2004. The articles collected in this BMC Bioinformatics supplement entitled \"A critical assessment of text mining methods in molecular biology\" describe the BioCreAtIvE tasks, systems, results and their independent evaluation. Results BioCreAtIvE focused on two tasks. The first dealt with extraction of gene or protein names from text, and their mapping into standardized gene identifiers for three model organism databases (fly, mouse, yeast). The second task addressed issues of functional annotation, requiring systems to identify specific text passages that supported Gene Ontology annotations for specific proteins, given full text articles. Conclusion The first BioCreAtIvE assessment achieved a high level of international participation (27 groups from 10 countries). 
The assessment provided state-of-the-art performance results for a basic task (gene name finding and normalization), where the best systems achieved a balanced 80% precision / recall or better, which potentially makes them suitable for real applications in biology. The results for the advanced task (functional annotation from free text) were significantly lower, demonstrating the current limitations of text-mining approaches where knowledge extrapolation and interpretation are required. In addition, an important contribution of BioCreAtIvE has been the creation and release of training and test data sets for both tasks. There are 22 articles in this special issue, including six that provide analyses of results or data quality for the data sets, including a novel inter-annotator consistency assessment for the test set used in task 2.", "which Total teams ?", "27 groups from 10 countries", 1073.0, 1100.0], ["We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at this https URL", "which Amount of Questions ?", "100,000+", 110.0, 117.0], ["The ambitious goal of this work is to develop a cross-lingual name tagging and linking framework for 282 languages that exist in Wikipedia. Given a document in any of these languages, our framework is able to identify name mentions, assign a coarse-grained or fine-grained type to each mention, and link it to an English Knowledge Base (KB) if it is linkable. We achieve this goal by performing a series of new KB mining methods: generating \u201csilver-standard\u201d annotations by transferring annotations from English to other languages through cross-lingual links and KB properties, refining annotations through self-training and topic selection, deriving language-specific morphology features from anchor links, and mining word translation pairs from cross-lingual links. Both name tagging and linking results for 282 languages are promising on Wikipedia data and non-Wikipedia data.", "which Supported natural languages ?", "282 languages", 101.0, 114.0], ["Abstract The text-mining services for kinome curation track, part of BioCreative VI, proposed a competition to assess the effectiveness of text mining to perform literature triage. The track has exploited an unpublished curated data set from the neXtProt database. This data set contained comprehensive annotations for 300 human protein kinases. For a given protein and a given curation axis [diseases or gene ontology (GO) biological processes], participants\u2019 systems had to identify and rank relevant articles in a collection of 5.2 M MEDLINE citations (task 1) or 530 000 full-text articles (task 2). Explored strategies comprised named-entity recognition and machine-learning frameworks. 
For the latter approach, participants developed methods to derive a set of negative instances, as the databases typically do not store articles that were judged as irrelevant by curators. The supervised approaches proposed by the participating groups achieved significant improvements compared to the baseline established in a previous study and compared to a basic PubMed search.", "which Number of concepts ?", "300 human protein kinases", 319.0, 344.0], ["Abstract Knowledge of the molecular interactions of biological and chemical entities and their involvement in biological processes or clinical phenotypes is important for data interpretation. Unfortunately, this knowledge is mostly embedded in the literature in such a way that it is unavailable for automated data analysis procedures. Biological expression language (BEL) is a syntax representation allowing for the structured representation of a broad range of biological relationships. It is used in various situations to extract such knowledge and transform it into BEL networks. To support the tedious and time-intensive extraction work of curators with automated methods, we developed the BEL track within the framework of BioCreative Challenges. Within the BEL track, we provide training data and an evaluation environment to encourage the text mining community to tackle the automatic extraction of complex BEL relationships. In 2017 BioCreative VI, the 2015 BEL track was repeated with new test data. Although only minor improvements in text snippet retrieval for given statements were achieved during this second BEL task iteration, a significant increase of BEL statement extraction performance from provided sentences could be seen. The best performing system reached a 32% F-score for the extraction of complete BEL statements and with the given named entities this increased to 49%. This time, besides rule-based systems, new methods involving hierarchical sequence labeling and neural networks were applied for BEL statement extraction.", "which Best score ?", "32% F-score for the extraction of complete BEL statements", 1282.0, 1339.0], ["Gene function curation via Gene Ontology (GO) annotation is a common task among Model Organism Database groups. Owing to its manual nature, this task is considered one of the bottlenecks in literature curation. There have been many previous attempts at automatic identification of GO terms and supporting information from full text. However, few systems have delivered an accuracy that is comparable with humans. One recognized challenge in developing such systems is the lack of marked sentence-level evidence text that provides the basis for making GO annotations. We aim to create a corpus that includes the GO evidence text along with the three core elements of GO annotations: (i) a gene or gene product, (ii) a GO term and (iii) a GO evidence code. To ensure our results are consistent with real-life GO data, we recruited eight professional GO curators and asked them to follow their routine GO annotation protocols. Our annotators marked up more than 5000 text passages in 200 articles for 1356 distinct GO terms. For evidence sentence selection, the inter-annotator agreement (IAA) results are 9.3% (strict) and 42.7% (relaxed) in F1-measures. For GO term selection, the IAAs are 47% (strict) and 62.9% (hierarchical). 
Our corpus analysis further shows that abstracts contain \u223c10% of relevant evidence sentences and 30% distinct GO terms, while the Results/Experiment section has nearly 60% relevant sentences and >70% GO terms. Further, of those evidence sentences found in abstracts, less than one-third contain enough experimental detail to fulfill the three core criteria of a GO annotation. This result demonstrates the need of using full-text articles for text mining GO annotations. Through its use at the BioCreative IV GO (BC4GO) task, we expect our corpus to become a valuable resource for the BioNLP research community. Database URL: http://www.biocreative.org/resources/corpora/bc-iv-go-task-corpus/.", "which inter-annotator agreement ?", "47% (strict) and 62.9% (hierarchical)", NaN, NaN], ["Gene function curation via Gene Ontology (GO) annotation is a common task among Model Organism Database groups. Owing to its manual nature, this task is considered one of the bottlenecks in literature curation. There have been many previous attempts at automatic identification of GO terms and supporting information from full text. However, few systems have delivered an accuracy that is comparable with humans. One recognized challenge in developing such systems is the lack of marked sentence-level evidence text that provides the basis for making GO annotations. We aim to create a corpus that includes the GO evidence text along with the three core elements of GO annotations: (i) a gene or gene product, (ii) a GO term and (iii) a GO evidence code. To ensure our results are consistent with real-life GO data, we recruited eight professional GO curators and asked them to follow their routine GO annotation protocols. Our annotators marked up more than 5000 text passages in 200 articles for 1356 distinct GO terms. For evidence sentence selection, the inter-annotator agreement (IAA) results are 9.3% (strict) and 42.7% (relaxed) in F1-measures. For GO term selection, the IAAs are 47% (strict) and 62.9% (hierarchical). Our corpus analysis further shows that abstracts contain \u223c10% of relevant evidence sentences and 30% distinct GO terms, while the Results/Experiment section has nearly 60% relevant sentences and >70% GO terms. Further, of those evidence sentences found in abstracts, less than one-third contain enough experimental detail to fulfill the three core criteria of a GO annotation. This result demonstrates the need of using full-text articles for text mining GO annotations. Through its use at the BioCreative IV GO (BC4GO) task, we expect our corpus to become a valuable resource for the BioNLP research community. Database URL: http://www.biocreative.org/resources/corpora/bc-iv-go-task-corpus/.", "which inter-annotator agreement ?", "9.3% (strict) and 42.7% (relaxed) in F1-measures", NaN, NaN], ["We present a novel end-to-end neural model to extract entities and relations between them. Our recurrent neural network based model captures both word sequence and dependency tree substructure information by stacking bidirectional tree-structured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows our model to jointly represent both entities and relations with shared parameters in a single model. We further encourage detection of entities during training and use of entity information in relation extraction via entity pretraining and scheduled sampling. 
Our model improves over the state-of-the-art feature-based model on end-to-end relation extraction, achieving 12.1% and 5.7% relative error reductions in F1-score on ACE2005 and ACE2004, respectively. We also show that our LSTM-RNN based model compares favorably to the state-of-the-art CNN based model (in F1-score) on nominal relation classification (SemEval-2010 Task 8). Finally, we present an extensive ablation analysis of several model components.", "which Has experimental datasets ?", "SemEval-2010 Task 8", 921.0, 940.0], ["Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.", "which Has experimental datasets ?", "C-NMC-2019 ALL", 835.0, 849.0], ["We report the first direct estimates of N2 fixation rates measured during the spring, 2009 using the 15N2 gas tracer technique in the eastern Arabian Sea, which is well known for significant loss of nitrogen due to intense denitrification. Carbon uptake rates are also concurrently estimated using the 13C tracer technique. The N2 fixation rates vary from \u223c0.1 to 34 mmol N m\u22122d\u22121 after correcting for the isotopic under\u2010equilibrium with dissolved air in the samples. These higher N2 fixation rates are consistent with higher chlorophyll a and low \u03b415N of natural particulate organic nitrogen. Our estimates of N2 fixation is a useful step toward reducing the uncertainty in the nitrogen budget.", "which Method (primary production) ?", "13C tracer", 302.0, 312.0], ["A catalyst that cleaves aryl-oxygen bonds but not carbon-carbon bonds may help improve lignin processing. Selective hydrogenolysis of the aromatic carbon-oxygen (C-O) bonds in aryl ethers is an unsolved synthetic problem important for the generation of fuels and chemical feedstocks from biomass and for the liquefaction of coal. Currently, the hydrogenolysis of aromatic C-O bonds requires heterogeneous catalysts that operate at high temperature and pressure and lead to a mixture of products from competing hydrogenolysis of aliphatic C-O bonds and hydrogenation of the arene. Here, we report hydrogenolyses of aromatic C-O bonds in alkyl aryl and diaryl ethers that form exclusively arenes and alcohols. This process is catalyzed by a soluble nickel carbene complex under just 1 bar of hydrogen at temperatures of 80 to 120\u00b0C; the relative reactivity of ether substrates scale as Ar-OAr>>Ar-OMe>ArCH2-OMe (Ar, Aryl; Me, Methyl). Hydrogenolysis of lignin model compounds highlights the potential of this approach for the conversion of refractory aryl ether biopolymers to hydrocarbons.", "which Additive ?", "1 bar of hydrogen", 781.0, 798.0], ["A highly efficient C-C bond cleavage of unstrained aliphatic ketones bearing \u03b2-hydrogens with olefins was achieved using a chelation-assisted catalytic system consisting of (Ph 3 P) 3 RhCl and 2-amino-3-picoline by microwave irradiation under solvent-free conditions. The addition of cyclohexylamine catalyst accelerated the reaction rate dramatically under microwave irradiation compared with the classical heating method.", "which Additive ?", "2-amino-3-picoline", 193.0, 211.0], ["A highly efficient C-C bond cleavage of unstrained aliphatic ketones bearing \u03b2-hydrogens with olefins was achieved using a chelation-assisted catalytic system consisting of (Ph 3 P) 3 RhCl and 2-amino-3-picoline by microwave irradiation under solvent-free conditions. The addition of cyclohexylamine catalyst accelerated the reaction rate dramatically under microwave irradiation compared with the classical heating method.", "which Additive ?", "amine", NaN, NaN], ["We have studied the elastic and inelastic light scattering of twelve lunar surface rocks and eleven lunar soil samples from Apollo 11, 12, 14, and 15, over the range 20-2000 em-I. The phonons occurring in this frequency region have been associated with the different chemical constituents and are used to determine the mineralogical abundances by comparison with the spectra of a wide variety of terrestrial minerals and rocks. KramersKronig analyses of the infrared reflectance spectra provided the dielectric dispersion (e' and s\") and the optical constants (n and k). The dielectric constants at \"\" 1011 Hz have been obtained for each sample and are compared with the values reported in the 10-10 Hz range. The emissivity peak at the Christianson frequencies for all the lunar samples lie within the range 1195-1250 cm-1; such values are characteristic of terrestrial basalts. The Raman light scattering spectra provided investigation of small individual grains or inclusions and gave unambiguous interpretation of some of the characteristic mineralogical components.", "which Samples source ?", "Apollo 11", 124.0, 133.0], ["Abstract Granitic lunar samples largely consist of granophyric intergrowths of silica and K-feldspar. 
The identification of the silica polymorph present in the granophyre can clarify the petrogenesis of the lunar granites. The presence of tridymite or cristobalite would indicate rapid crystallization at high temperature. Quartz would indicate crystallization at low temperature or perhaps intrusive, slow crystallization, allowing for the orderly transformation from high-temperature silica polymorphs (tridymite or cristobalite). We identify the silica polymorphs present in four granitic lunar samples from the Apollo 12 regolith using laser Raman spectroscopy. Typically, lunar silica occurs with a hackle fracture pattern. We did an initial density calculation on the hackle fracture pattern of quartz and determined that the volume of quartz and fracture space is consistent with a molar volume contraction from tridymite or cristobalite, both of which are less dense than quartz. Moreover, we analyzed the silica in the granitic fragments from Apollo 12 by electron-probe microanalysis and found it contains up to 0.7 wt% TiO2, consistent with initial formation as the high-temperature silica polymorphs, which have more open crystal structures that can more readily accommodate cations other than Si. The silica in Apollo 12 granitic samples crystallized rapidly as tridymite or cristobalite, consistent with extrusive volcanism. The silica then inverted to quartz at a later time, causing it to contract and fracture. A hackle fracture pattern is common in silica occurring in extrusive lunar lithologies (e.g., mare basalt). The extrusive nature of these granitic samples makes them excellent candidates to be similar to the rocks that compose positive relief silicic features such as the Gruithuisen Domes.", "which Samples source ?", "Apollo 12", 615.0, 624.0], ["This paper presents the application of Interval Type-2 fuzzy logic systems (Interval Type-2 FLS) in short term load forecasting (STLF) on special days, study case in Bali Indonesia. Type-2 FLS is characterized by a concept called footprint of uncertainty (FOU) that provides the extra mathematical dimension that equips Type-2 FLS with the potential to outperform their Type-1 counterparts. While a Type-2 FLS has the capability to model more complex relationships, the output of a Type-2 fuzzy inference engine needs to be type-reduced. Type reduction is used by applying the Karnik-Mendel (KM) iterative algorithm. This type reduction maps the output of Type-2 FSs into Type-1 FSs then the defuzzification with centroid method converts that Type-1 reduced FSs into a number. The proposed method was tested with the actual load data of special days using 4 days peak load before special days and at the time of special day for the year 2002-2006. There are 20 items of special days in Bali that are used to be forecasted in the year 2005 and 2006 respectively. The test results showed an accurate forecasting with the mean average percentage error of 1.0335% and 1.5683% in the year 2005 and 2006 respectively.", "which MAPE ?", "1.0335%", NaN, NaN], ["To discuss the implications of CSF abnormalities for the course of acute monosymptomatic optic neuritis (AMON), various CSF markers were analysed in patients being randomly selected from a population-based cohort. Paired serum and CSF were obtained within a few weeks from onset of AMON. CSF-restricted oligoclonal IgG bands, free kappa and free lambda chain bands were observed in 17, 15, and nine of 27 examined patients, respectively. 
Sixteen patients showed a polyspecific intrathecal synthesis of oligoclonal IgG antibodies against one or more viruses. At 1 year follow-up five patients had developed clinically definite multiple sclerosis (CDMS); all had CSF oligoclonal IgG bands and virus-specific oligoclonal IgG antibodies at onset. Due to the relative small number studied at the short-term follow-up, no firm conclusion of the prognostic value of these analyses could be reached. CSF Myelin Basic Protein-like material was increased in only two of 29 patients with AMON, but may have potential value in reflecting disease activity, as the highest values were obtained among patients with CSF sampled soon after the worst visual acuity was reached, and among patients with severe visual impairment. In most previous studies of patients with AMON qualitative and quantitative analyses of CSF IgG had a predictive value for development of CDMS, but the results are conflicting.", "which Follow-up time/ total observation time after MON n M/F ?", "1 year", 561.0, 567.0], ["Background: Patients with a clinically isolated demyelinating syndrome (CIS) are at risk of developing a second attack, thus converting into clinically definite multiple sclerosis (CDMS). Therefore, an accurate prognostic marker for that conversion might allow early treatment. Brain MRI and oligoclonal IgG band (OCGB) detection are the most frequent paraclinical tests used in MS diagnosis. A new OCGB test has shown high sensitivity and specificity in differential diagnosis of MS. Objective: To evaluate the accuracy of the new OCGB method and of current MRI criteria (MRI-C) to predict conversion of CIS to CDMS. Methods: Fifty-two patients with CIS were studied with OCGB detection and brain MRI, and followed up for 6 years. The sensitivity and specificity of both methods to predict conversion to CDMS were analyzed. Results: OCGB detection showed a sensitivity of 91.4% and specificity of 94.1%. MRI-C had a sensitivity of 74.23% and specificity of 88.2%. The presence of either OCGB or MRI-C studied simultaneously showed a sensitivity of 97.1% and specificity of 88.2%. Conclusions: The presence of oligoclonal IgG bands is highly specific and sensitive for early prediction of conversion to multiple sclerosis. MRI criteria have a high specificity but less sensitivity. The simultaneous use of both tests shows high sensitivity and specificity in predicting clinically isolated demyelinating syndrome conversion to clinically definite multiple sclerosis.", "which Follow-up time/ total observation time after MON n M/F ?", "6 years", 723.0, 730.0], ["UNLABELLED A field survey of occupants' response to the indoor environment in 10 office buildings with displacement ventilation was performed. The response of 227 occupants was analyzed. About 24% of the occupants in the survey complained that they were daily bothered by draught, mainly at the lower leg. Vertical air temperature difference measured between head and feet levels was less than 3 degrees C at all workplaces visited. Combined local discomfort because of draught and vertical temperature difference does not seem to be a serious problem in rooms with displacement ventilation. Almost one half (49%) of the occupants reported that they were daily bothered by an uncomfortable room temperature. Forty-eight per cent of the occupants were not satisfied with the air quality. 
PRACTICAL IMPLICATIONS The PMV and the Draught Rating indices as well as the specifications for local discomfort because of the separate impact of draught and vertical temperature difference, as defined in the present standards, are relevant for the design of a thermal environment in rooms with displacement ventilation and for its assessment in practice. Increasing the supply air temperature in order to counteract draught discomfort is a measure that should be considered carefully; even if the desired stratification of pollution in the occupied zone is preserved, an increase of the inhaled air temperature may have a negative effect on perceived air quality.", "which Place of experiment ?", "10 office buildings with displacement ventilation", 78.0, 127.0], ["The development of chromosomal abnormalities (CAs) in the Philadelphia chromosome (Ph)-negative metaphases during imatinib (IM) therapy in patients with newly diagnosed chronic myecloid leukemia (CML) has been reported only anecdotally. We assessed the frequency and significance of this phenomenon among 258 patients with newly diagnosed CML in chronic phase receiving IM. After a median follow-up of 37 months, 21 (9%) patients developed 23 CAs in Ph-negative cells; excluding -Y, this incidence was 5%. Sixteen (70%) of all CAs were observed in 2 or more metaphases. The median time from start of IM to the appearance of CAs was 18 months. The most common CAs were -Y and + 8 in 9 and 3 patients, respectively. CAs were less frequent in young patients (P = .02) and those treated with high-dose IM (P = .03). In all but 3 patients, CAs were transient and disappeared after a median of 5 months. One patient developed acute myeloid leukemia (associated with - 7). At last follow-up, 3 patients died from transplantation-related complications, myocardial infarction, and progressive disease and 2 lost cytogenetic response. CAs occur in Ph-negative cells in a small percentage of patients with newly diagnosed CML treated with IM. In rare instances, these could reflect the emergence of a new malignant clone.", "which N (%) with abnormal karyotype ?", "21 (9%)", NaN, NaN], ["Cytogenetic abnormalities, evaluated either by karyotype or by fluorescence in situ hybridization (FISH), are considered the most important prognostic factor in multiple myeloma (MM). However, there is no information about the prognostic impact of genomic changes detected by comparative genomic hybridization (CGH). We have analyzed the frequency and prognostic impact of genetic changes as detected by CGH and evaluated the relationship between these chromosomal imbalances and IGH translocation, analyzed by FISH, in 74 patients with newly diagnosed MM. Genomic changes were identified in 51 (69%) of the 74 MM patients. The most recurrent abnormalities among the cases with genomic changes were gains on chromosome regions 1q (45%), 5q (24%), 9q (24%), 11q (22%), 15q (22%), 3q (16%), and 7q (14%), while losses mainly involved chromosomes 13 (39%), 16q (18%), 6q (10%), and 8p (10%). Remarkably, the 6 patients with gains on 11q had IGH translocations. Multivariate analysis selected chromosomal losses, 11q gains, age, and type of treatment (conventional chemotherapy vs autologous transplantation) as independent parameters for predicting survival. Genomic losses retained the prognostic value irrespective of treatment approach. 
According to these results, losses of chromosomal material evaluated by CGH represent a powerful prognostic factor in MM patients.", "which N (%) with abnormal karyotype ?", "51 (69%)", NaN, NaN], ["Game engines enable developers to reuse assets from previously developed games, thus easing the software-engineering challenges around the video-game development experience and making the implementation of games less expensive, less technologically brittle, and more efficient. However, the construction of game engines is challenging in itself, it involves the specification of well defined architectures and typical game play behaviors, flexible enough to enable game designers to implement their vision, while, at the same time, simplifying the implementation through asset and code reuse. In this paper we present a set of lessons learned through the design and construction PhyDSL-2, a game engine for 2D physics-based games. Our experience involves the active use of modern model-driven engineering technologies, to overcome the complexity of the engine design and to systematize its maintenance and evolution.", "which Game Genres ?", "2D Physics-based Games", 707.0, 729.0], ["The main topic of this paper is the problem of developing user interfaces for educational games. Focus of educational games is usually on the knowledge while it should be evenly distributed to the user interface as well. Our proposed solution is based on the model-driven approach, thus we created a framework that incorporates meta-models, models, transformations and software tools. We demonstrated practical application of the mentioned framework by developing user interface for educational adventure game.", "which Game Genres ?", "Educational Games", 78.0, 95.0], ["Location-Based Games (LBGs) are a subclass of pervasive games that make use of location technologies to consider the players' geographic position in the game rules and mechanics. This research presents a model to describe and represent LBGs. The proposed model decouples location, mechanics, and game content from their implementation. We aim at allowing LBGs to be edited quickly and deployed on many platforms. The core model component is LEGaL, a language derived from NCL (Nested Context Language) to model and represented the game structure and its multimedia contents (e.g., video, audio, 3D objects, etc.). It allows the modelling of mission-based games by supporting spatial and temporal relationships between game elements and multimedia documents. We validated our approach by implementing a LEGaL interpreter, which was coupled to an LBG authoring tool and a Game Server. These tools enabled us to reimplement a real LBG using the proposed model to attest its utility. We also edited the original game by using an external tool to showcase how simple is to transpose an LBG using the concepts introduced in this work. Results indicate both the model and LEGaL can be used to foster the design of LBGs.", "which Game Genres ?", "Location-based Games", 0.0, 20.0], ["The development of gamification within non-game information systems as well as serious games has recently gained an important role in a variety of business fields due to promising behavioral or psychological improvements. However, industries still struggle with the high efforts of implementing gameful affordances in non-game systems. 
In order to decrease factors such as project costs, development cycles, and resource consumption as well as to improve the quality of products, the gamification modeling language has been proposed in prior research. However, the language is on a descriptive level only, i.e., Cannot be used to automatically generate executable software artifacts. In this paper and based on this language, we introduce a model-driven architecture for designing as well as generating building blocks for serious games. Furthermore, we give a validation of our approach by going through the different steps of designing an achievement system in the context of an existing serious game.", "which Game Genres ?", "Serious Games", 79.0, 92.0], ["Serious games are becoming an increasingly used alternative in technical/professional/academic fields. However, scenario development poses a challenging problem since it is an expensive task, only devoted to computer specialists (game developers, programmers\u2026). The ultimate goal of our work is to propose a new scenario-building approach capable of ensuring a high degree of deployment and reusability. Thus, we will define in this paper a new generation mechanism. This mechanism is built upon a model driven architecture (MDA). We have started up by enriching the existing standards, which resulted in defining a new generic meta-model (CIM). The resulting meta-model is capable of describing and standardizing game scenarios. Then, we have laid down a new transformational mechanism in order to integrate the indexed game components into operational platforms (PSM). Finally, the effectiveness of our strategy was assessed under two separate contexts (target platforms) : the claroline-connect platform and the unity 3D environment.", "which Game Genres ?", "Serious Games", 0.0, 13.0], ["Because of the increasing expansion of the videogame industry, shorten videogame time to market for diverse platforms (e.g, Mac, android, iOS, BlackBerry) is a quest. This paper presents how a Domain Specific Language (DSL) in conjunction with Model-Driven Engineering (MDE) techniques can automate the development of games, in particular, tower defense games such as Plants vs. Zombies. The DSL allows the expression of structural and behavioral aspects of tower defense games. The MDE techniques allow us to generate code from the game expressed in the DSL. The generated code is written in an existing open source language that leverages the portability of the games. We present our approach using an example so-called Space Attack. The example shows the significant benefits offered by our proposal in terms of productivity and portability.", "which Game Genres ?", "Tower Defense Games", 340.0, 359.0], ["Virtual worlds and avatar-based interactive computer games are a hype among consumers and researchers for many years now. In recent years, such games on mobile devices also became increasingly important. However, most virtual worlds require the use of proprietary clients and authoring environments and lack portability, which limits their usefulness for targeting wider audiences like e.g. in consumer marketing or sales. Using mobile devices and client-side web technologies like i.e. JavaScript in combination with a more automatic generation of customer-specific virtual worlds could help to overcome these limitations. Here, model-driven software development (MDD) provides a promising approach for automating the creation of user interface (UI) components for games on mobile devices. 
Therefore, in this paper an approach is proposed for the model-driven generation of UI components for virtual worlds using JavaScript and the upcoming Famo.us framework. The feasibilty of the approach is evaluated by implementing a proof-of-concept scenario.", "which Game Genres ?", "Virtual Worlds", 0.0, 14.0], ["Purpose \u2013 The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach \u2013 In total, 13 CSFs for SD are identified (i.e. long\u2010term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings \u2013 The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long\u2010term strategic goal is found to be ...", "which Critical success factors ?", "supplier certification", 391.0, 413.0], ["This paper reports the results of a survey on the critical success factors (CSFs) of web-based supply-chain management systems (WSCMS). An empirical study was conducted and an exploratory factor analysis of the survey data revealed five major dimensions of the CSFs for WSCMS implementation, namely (1) communication, (2) top management commitment, (3) data security, (4) training and education, and (5) hardware and software reliability. The findings of the results provide insights for companies using or planning to use WSCMS.", "which Critical success factors ?", "data security", 353.0, 366.0], ["Rapid industrial modernisation and economic reform have been features of the Korean economy since the 1990s, and have brought with it substantial environmental problems. In response to these problems, the Korean government has been developing approaches to promote cleaner production technologies. Green supply chain management (GSCM) is emerging to be an important approach for Korean enterprises to improve performance. The purpose of this study is to examine the impact of GSCM CSFs (critical success factors) on the BSC (balanced scorecard) performance by the structural equation modelling, using empirical results from 249 enterprise respondents involved in national GSCM business in Korea. Planning and implementation was a dominant antecedent factor in this study, followed by collaboration with partners and integration of infrastructure. However, activation of support was a negative impact to the finance performance, raising the costs and burdens. It was found out that there were important implications in the implementation of GSCM.", "which Critical success factors ?", "planning and implementation", 696.0, 723.0], ["Purpose \u2013 The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach \u2013 In total, 13 CSFs for SD are identified (i.e. 
long\u2010term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings \u2013 The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long\u2010term strategic goal is found to be ...", "which Critical success factors ?", "proximity to manufacturing base", 358.0, 389.0], ["This paper aims to extract the factors influencing the performance of reverse supply chains (RSCs) based on the structure equation model (SEM). We first introduce the definition of RSC and describe its current status and follow this with a literature review of previous RSC studies and the technology acceptance model . We next develop our research model and 11 hypotheses and then use SEM to test our model and identify those factors that actually influence the success of RSC. Next, we use both questionnaire and web\u2010based methods to survey five companies which have RSC operation experience in China and Korea. Using the 168 responses, we used measurement modeling test and SEM to validate our proposed hypotheses. As a result, nine hypotheses were accepted while two were rejected. We found that ease of use, perceived usefulness, service quality, channel relationship and RSC cost were the five most important factors which influence the success of RSC. Finally, we conclude by highlighting our research contribution and propose future research.", "which Critical success factors ?", "channel relationship", 852.0, 872.0], ["Purpose \u2013 The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach \u2013 In total, 13 CSFs for SD are identified (i.e. long\u2010term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings \u2013 The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long\u2010term strategic goal is found to be ...", "which Critical success factors ?", "information sharing", 438.0, 457.0], ["Problem statement: Based on a literature survey, an attempt has been made in this study to develop a framework for identifying the success factors. In addition, a list of key success factors is presented. 
The emphasis is on success factors dealing with breadth of services, internationalization of operations, industry focus, customer focus, 3PL experience, relationship with 3PLs, investment in quality assets, investment in information systems, availability of skilled professionals and supply chain integration. In developing the factors an effort has been made to align and relate them to financial performance. Conclusion/Recommendations: We found success factors \u201crelationship with 3PLs and skilled logistics professionals\u201d would substantially improves financial performance metric profit growth. Our findings also contribute to managerial practice by offering a benchmarking tool that can be used by managers in the 3PL service provider industry in India.", "which Critical success factors ?", "skilled logistics", 697.0, 714.0], ["Amid the intensive competition among global industries, the relationship between manufacturers and suppliers has turned from antagonist to cooperative. Through partnerships, both parties can be mutually benefited, and the key factor that maintains such relationship lies in how manufacturers select proper suppliers. The purpose of this study is to explore the key factors considered by manufacturers in supplier selection and the relationships between these factors. Through a literature review, eight supplier selection factors, comprising price response capability, quality management capability, technological capability, delivery capability, flexible capability, management capability, commercial image, and financial capability are derived. Based on the theoretic foundation proposed by previous researchers, a causal model of supplier selection factors is further constructed. The results of a survey on high-tech industries are used to verify the relationships between the eight factors using structural equation modelling (SEM). Based on the empirical results, conclusions and suggestions are finally proposed as a reference for manufacturers and suppliers.", "which Critical success factors ?", "flexible capability", 647.0, 666.0], ["Problem statement: Based on a literature survey, an attempt has been made in this study to develop a framework for identifying the success factors. In addition, a list of key success factors is presented. The emphasis is on success factors dealing with breadth of services, internationalization of operations, industry focus, customer focus, 3PL experience, relationship with 3PLs, investment in quality assets, investment in information systems, availability of skilled professionals and supply chain integration. In developing the factors an effort has been made to align and relate them to financial performance. Conclusion/Recommendations: We found success factors \u201crelationship with 3PLs and skilled logistics professionals\u201d would substantially improves financial performance metric profit growth. Our findings also contribute to managerial practice by offering a benchmarking tool that can be used by managers in the 3PL service provider industry in India.", "which Critical success factors ?", "industry focus", 310.0, 324.0], ["Purpose \u2013 The purpose of this paper is to review the literature on supply chain management (SCM) practices in small and medium scale enterprises (SMEs) and outlines the key insights.Design/methodology/approach \u2013 The paper describes a literature\u2010based research that has sought understand the issues of SCM for SMEs. 
The methodology is based on critical review of 77 research papers from high\u2010quality, international refereed journals. Mainly, issues are explored under three categories \u2013 supply chain integration, strategy and planning and implementation. This has supported the development of key constructs and propositions.Findings \u2013 The research outcomes are three fold. Firstly, paper summarizes the reported literature and classifies it based on their nature of work and contributions. Second, paper demonstrates the overall approach towards the development of constructs, research questions, and investigative questions leading to key proposition for the further research. Lastly, paper outlines the key findings an...", "which Critical success factors ?", "supply chain integration", 486.0, 510.0], ["Amid the intensive competition among global industries, the relationship between manufacturers and suppliers has turned from antagonist to cooperative. Through partnerships, both parties can be mutually benefited, and the key factor that maintains such relationship lies in how manufacturers select proper suppliers. The purpose of this study is to explore the key factors considered by manufacturers in supplier selection and the relationships between these factors. Through a literature review, eight supplier selection factors, comprising price response capability, quality management capability, technological capability, delivery capability, flexible capability, management capability, commercial image, and financial capability are derived. Based on the theoretic foundation proposed by previous researchers, a causal model of supplier selection factors is further constructed. The results of a survey on high-tech industries are used to verify the relationships between the eight factors using structural equation modelling (SEM). Based on the empirical results, conclusions and suggestions are finally proposed as a reference for manufacturers and suppliers.", "which Critical success factors ?", "commercial image", 691.0, 707.0], ["Purpose \u2013 The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach \u2013 In total, 13 CSFs for SD are identified (i.e. long\u2010term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings \u2013 The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long\u2010term strategic goal is found to be ...", "which Critical success factors ?", "direct involvement", 557.0, 575.0], ["Problem statement: Based on a literature survey, an attempt has been made in this study to develop a framework for identifying the success factors. In addition, a list of key success factors is presented. 
The emphasis is on success factors dealing with breadth of services, internationalization of operations, industry focus, customer focus, 3PL experience, relationship with 3PLs, investment in quality assets, investment in information systems, availability of skilled professionals and supply chain integration. In developing the factors an effort has been made to align and relate them to financial performance. Conclusion/Recommendations: We found success factors \u201crelationship with 3PLs and skilled logistics professionals\u201d would substantially improves financial performance metric profit growth. Our findings also contribute to managerial practice by offering a benchmarking tool that can be used by managers in the 3PL service provider industry in India.", "which Critical success factors ?", "3PL experience", 342.0, 356.0], ["Problem statement: Based on a literature survey, an attempt has been made in this study to develop a framework for identifying the success factors. In addition, a list of key success factors is presented. The emphasis is on success factors dealing with breadth of services, internationalization of operations, industry focus, customer focus, 3PL experience, relationship with 3PLs, investment in quality assets, investment in information systems, availability of skilled professionals and supply chain integration. In developing the factors an effort has been made to align and relate them to financial performance. Conclusion/Recommendations: We found success factors \u201crelationship with 3PLs and skilled logistics professionals\u201d would substantially improves financial performance metric profit growth. Our findings also contribute to managerial practice by offering a benchmarking tool that can be used by managers in the 3PL service provider industry in India.", "which Critical success factors ?", "customer focus", 326.0, 340.0], ["Amid the intensive competition among global industries, the relationship between manufacturers and suppliers has turned from antagonist to cooperative. Through partnerships, both parties can be mutually benefited, and the key factor that maintains such relationship lies in how manufacturers select proper suppliers. The purpose of this study is to explore the key factors considered by manufacturers in supplier selection and the relationships between these factors. Through a literature review, eight supplier selection factors, comprising price response capability, quality management capability, technological capability, delivery capability, flexible capability, management capability, commercial image, and financial capability are derived. Based on the theoretic foundation proposed by previous researchers, a causal model of supplier selection factors is further constructed. The results of a survey on high-tech industries are used to verify the relationships between the eight factors using structural equation modelling (SEM). Based on the empirical results, conclusions and suggestions are finally proposed as a reference for manufacturers and suppliers.", "which Critical success factors ?", "financial capability", 713.0, 733.0], ["This paper aims to extract the factors influencing the performance of reverse supply chains (RSCs) based on the structure equation model (SEM). We first introduce the definition of RSC and describe its current status and follow this with a literature review of previous RSC studies and the technology acceptance model . 
We next develop our research model and 11 hypotheses and then use SEM to test our model and identify those factors that actually influence the success of RSC. Next, we use both questionnaire and web\u2010based methods to survey five companies which have RSC operation experience in China and Korea. Using the 168 responses, we used measurement modeling test and SEM to validate our proposed hypotheses. As a result, nine hypotheses were accepted while two were rejected. We found that ease of use, perceived usefulness, service quality, channel relationship and RSC cost were the five most important factors which influence the success of RSC. Finally, we conclude by highlighting our research contribution and propose future research.", "which Critical success factors ?", "service quality", 835.0, 850.0], ["Purpose \u2013 The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach \u2013 In total, 13 CSFs for SD are identified (i.e. long\u2010term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings \u2013 The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long\u2010term strategic goal is found to be ...", "which Critical success factors ?", "project completion experience", 506.0, 535.0], ["Extranet is an enabler/system that enriches the information service quality in e-supply chain. This paper uses factor analysis to determine four extranet success factors: system quality, information quality, service quality, and work performance quality. A critical analysis of areas that require improvement is also conducted.", "which Critical success factors ?", "System quality", 171.0, 185.0], ["Problem statement: Based on a literature survey, an attempt has been made in this study to develop a framework for identifying the success factors. In addition, a list of key success factors is presented. The emphasis is on success factors dealing with breadth of services, internationalization of operations, industry focus, customer focus, 3PL experience, relationship with 3PLs, investment in quality assets, investment in information systems, availability of skilled professionals and supply chain integration. In developing the factors an effort has been made to align and relate them to financial performance. Conclusion/Recommendations: We found success factors \u201crelationship with 3PLs and skilled logistics professionals\u201d would substantially improves financial performance metric profit growth. 
Our findings also contribute to managerial practice by offering a benchmarking tool that can be used by managers in the 3PL service provider industry in India.", "which Critical success factors ?", "investment in information", 412.0, 437.0], ["This paper uses the extant literature to identify the key success factors that are associated with performance in the Indian third-party logistics service providers (3PL) sector. We contribute to the sparse literature that has examined the relationship between key success factors and performance in the Indian 3PL context. This study offers new insights and isolates key success factors that vary in their impact on operations and financial performance measures. Specifically, we found that the key success factor of relationship with customers significantly influenced the operations measures of on-time delivery performance and customer satisfaction and the financial measure of profit growth. Similarly, the key success factor of skilled logistics professionals improved the operational measure of customer satisfaction and the financial measure of profit growth. The key success factor of breadth of service significantly affected the financial measure of revenue growth, but did not affect any operational measure. To further unravel the patterns of these results, a contingency analysis of these relationships according to firm size was also conducted. Relationship with 3PLs was significant irrespective of firm size. Our findings contribute to academic theory and managerial practice by offering context-specific suggestions on the usefulness of specific key success factors based on their potential influence on operational and financial performance in the Indian 3PL industry.", "which Critical success factors ?", "relationship with 3PLs", 1160.0, 1182.0], ["Problem statement: Based on a literature survey, an attempt has been made in this study to develop a framework for identifying the success factors. In addition, a list of key success factors is presented. The emphasis is on success factors dealing with breadth of services, internationalization of operations, industry focus, customer focus, 3PL experience, relationship with 3PLs, investment in quality assets, investment in information systems, availability of skilled professionals and supply chain integration. In developing the factors an effort has been made to align and relate them to financial performance. Conclusion/Recommendations: We found success factors \u201crelationship with 3PLs and skilled logistics professionals\u201d would substantially improves financial performance metric profit growth. Our findings also contribute to managerial practice by offering a benchmarking tool that can be used by managers in the 3PL service provider industry in India.", "which Critical success factors ?", "relationship with 3PL", NaN, NaN], ["This study is the first attempt that assembled published academic work on critical success factors (CSFs) in supply chain management (SCM) fields. The purpose of this study are to review the CSFs in SCM and to uncover the major CSFs that are apparent in SCM literatures. This study apply literature survey techniques from published CSFs studies in SCM. A collection of 42 CSFs studies in various SCM fields are obtained from major databases. The search uses keywords such as as supply chain management, critical success factors, logistics management and supply chain drivers and barriers. From the literature survey, four major CSFs are proposed. 
The factors are collaborative partnership, information technology, top management support and human resource. It is hoped that this review will serve as a platform for future research in SCM and CSFs studies. Plus, this study contribute to existing SCM knowledge and further appraise the concept of CSFs.", "which Critical success factors ?", "information technology", 690.0, 712.0], ["Purpose \u2013 The purpose of this paper is to determine those factors perceived by users to influence the successful on\u2010going use of e\u2010commerce systems in business\u2010to\u2010business (B2B) buying and selling transactions through examination of the views of individuals acting in both purchasing and selling roles within the UK National Health Service (NHS) pharmaceutical supply chain.Design/methodology/approach \u2013 Literature from the fields of operations and supply chain management (SCM) and information systems (IS) is used to determine candidate factors that might influence the success of the use of e\u2010commerce. A questionnaire based on these is used for primary data collection in the UK NHS pharmaceutical supply chain. Factor analysis is used to analyse the data.Findings \u2013 The paper yields five composite factors that are perceived by users to influence successful e\u2010commerce use. \u201cSystem quality,\u201d \u201cinformation quality,\u201d \u201cmanagement and use,\u201d \u201cworld wide web \u2013 assurance and empathy,\u201d and \u201ctrust\u201d are proposed as potentia...", "which Critical success factors ?", "world wide web \u2013 assurance and empathy", 943.0, 981.0], ["Purpose \u2013 The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach \u2013 In total, 13 CSFs for SD are identified (i.e. long\u2010term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings \u2013 The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long\u2010term strategic goal is found to be ...", "which Critical success factors ?", "incentives", 315.0, 325.0], ["Purpose \u2013 The purpose of this paper is to determine those factors perceived by users to influence the successful on\u2010going use of e\u2010commerce systems in business\u2010to\u2010business (B2B) buying and selling transactions through examination of the views of individuals acting in both purchasing and selling roles within the UK National Health Service (NHS) pharmaceutical supply chain.Design/methodology/approach \u2013 Literature from the fields of operations and supply chain management (SCM) and information systems (IS) is used to determine candidate factors that might influence the success of the use of e\u2010commerce. A questionnaire based on these is used for primary data collection in the UK NHS pharmaceutical supply chain. 
Factor analysis is used to analyse the data.Findings \u2013 The paper yields five composite factors that are perceived by users to influence successful e\u2010commerce use. \u201cSystem quality,\u201d \u201cinformation quality,\u201d \u201cmanagement and use,\u201d \u201cworld wide web \u2013 assurance and empathy,\u201d and \u201ctrust\u201d are proposed as potentia...", "which Critical success factors ?", "management and use", 921.0, 939.0], ["This paper aims to extract the factors influencing the performance of reverse supply chains (RSCs) based on the structure equation model (SEM). We first introduce the definition of RSC and describe its current status and follow this with a literature review of previous RSC studies and the technology acceptance model . We next develop our research model and 11 hypotheses and then use SEM to test our model and identify those factors that actually influence the success of RSC. Next, we use both questionnaire and web\u2010based methods to survey five companies which have RSC operation experience in China and Korea. Using the 168 responses, we used measurement modeling test and SEM to validate our proposed hypotheses. As a result, nine hypotheses were accepted while two were rejected. We found that ease of use, perceived usefulness, service quality, channel relationship and RSC cost were the five most important factors which influence the success of RSC. Finally, we conclude by highlighting our research contribution and propose future research.", "which Critical success factors ?", "perceived usefulness", 813.0, 833.0], ["Purpose \u2013 The purpose of this paper is to determine those factors perceived by users to influence the successful on\u2010going use of e\u2010commerce systems in business\u2010to\u2010business (B2B) buying and selling transactions through examination of the views of individuals acting in both purchasing and selling roles within the UK National Health Service (NHS) pharmaceutical supply chain.Design/methodology/approach \u2013 Literature from the fields of operations and supply chain management (SCM) and information systems (IS) is used to determine candidate factors that might influence the success of the use of e\u2010commerce. A questionnaire based on these is used for primary data collection in the UK NHS pharmaceutical supply chain. Factor analysis is used to analyse the data.Findings \u2013 The paper yields five composite factors that are perceived by users to influence successful e\u2010commerce use. \u201cSystem quality,\u201d \u201cinformation quality,\u201d \u201cmanagement and use,\u201d \u201cworld wide web \u2013 assurance and empathy,\u201d and \u201ctrust\u201d are proposed as potentia...", "which Critical success factors ?", "System quality", 880.0, 894.0], ["Purpose \u2013 The purpose of the paper is to investigate the factors that affect the decision\u2010making process of Hong Kong\u2010based manufacturers when they select a third\u2010party logistics (3PL) service provider and how 3PL service providers manage to retain customer loyalty in times of financial turbulence.Design/methodology/approach \u2013 The paper presents a survey\u2010based study targeting Hong Kong\u2010based manufacturers currently using 3PL companies. It investigates the relationship between the reasons for using 3PL services and the requirements for selecting a provider, and examines the relationship between customer satisfaction and loyalty. 
In addition, the relationships among various dimensions \u2013 in small to medium\u2010sized enterprises (SMEs), large enterprises and companies \u2013 of contracts of various lengths are investigated.Findings \u2013 In general, the reasons for using 3PL services and the requirements for selecting 3PL service providers are positive\u2010related. The dimension of \u201creputation\u201d of satisfaction influences \u201cpri...", "which Critical success factors ?", "reputation", 977.0, 987.0], ["Amid the intensive competition among global industries, the relationship between manufacturers and suppliers has turned from antagonist to cooperative. Through partnerships, both parties can be mutually benefited, and the key factor that maintains such relationship lies in how manufacturers select proper suppliers. The purpose of this study is to explore the key factors considered by manufacturers in supplier selection and the relationships between these factors. Through a literature review, eight supplier selection factors, comprising price response capability, quality management capability, technological capability, delivery capability, flexible capability, management capability, commercial image, and financial capability are derived. Based on the theoretic foundation proposed by previous researchers, a causal model of supplier selection factors is further constructed. The results of a survey on high-tech industries are used to verify the relationships between the eight factors using structural equation modelling (SEM). Based on the empirical results, conclusions and suggestions are finally proposed as a reference for manufacturers and suppliers.", "which Critical success factors ?", "technological capability", 600.0, 624.0], ["Rapid industrial modernisation and economic reform have been features of the Korean economy since the 1990s, and have brought with it substantial environmental problems. In response to these problems, the Korean government has been developing approaches to promote cleaner production technologies. Green supply chain management (GSCM) is emerging to be an important approach for Korean enterprises to improve performance. The purpose of this study is to examine the impact of GSCM CSFs (critical success factors) on the BSC (balanced scorecard) performance by the structural equation modelling, using empirical results from 249 enterprise respondents involved in national GSCM business in Korea. Planning and implementation was a dominant antecedent factor in this study, followed by collaboration with partners and integration of infrastructure. However, activation of support was a negative impact to the finance performance, raising the costs and burdens. It was found out that there were important implications in the implementation of GSCM.", "which Critical success factors ?", "Collaboration with partners", 784.0, 811.0], ["Purpose \u2013 The purpose of this paper is to determine those factors perceived by users to influence the successful on\u2010going use of e\u2010commerce systems in business\u2010to\u2010business (B2B) buying and selling transactions through examination of the views of individuals acting in both purchasing and selling roles within the UK National Health Service (NHS) pharmaceutical supply chain.Design/methodology/approach \u2013 Literature from the fields of operations and supply chain management (SCM) and information systems (IS) is used to determine candidate factors that might influence the success of the use of e\u2010commerce. 
A questionnaire based on these is used for primary data collection in the UK NHS pharmaceutical supply chain. Factor analysis is used to analyse the data.Findings \u2013 The paper yields five composite factors that are perceived by users to influence successful e\u2010commerce use. \u201cSystem quality,\u201d \u201cinformation quality,\u201d \u201cmanagement and use,\u201d \u201cworld wide web \u2013 assurance and empathy,\u201d and \u201ctrust\u201d are proposed as potentia...", "which Critical success factors ?", "information quality", 898.0, 917.0], ["Amid the intensive competition among global industries, the relationship between manufacturers and suppliers has turned from antagonist to cooperative. Through partnerships, both parties can be mutually benefited, and the key factor that maintains such relationship lies in how manufacturers select proper suppliers. The purpose of this study is to explore the key factors considered by manufacturers in supplier selection and the relationships between these factors. Through a literature review, eight supplier selection factors, comprising price response capability, quality management capability, technological capability, delivery capability, flexible capability, management capability, commercial image, and financial capability are derived. Based on the theoretic foundation proposed by previous researchers, a causal model of supplier selection factors is further constructed. The results of a survey on high-tech industries are used to verify the relationships between the eight factors using structural equation modelling (SEM). Based on the empirical results, conclusions and suggestions are finally proposed as a reference for manufacturers and suppliers.", "which Critical success factors ?", "delivery capability", 626.0, 645.0], ["Problem statement: Based on a literature survey, an attempt has been made in this study to develop a framework for identifying the success factors. In addition, a list of key success factors is presented. The emphasis is on success factors dealing with breadth of services, internationalization of operations, industry focus, customer focus, 3PL experience, relationship with 3PLs, investment in quality assets, investment in information systems, availability of skilled professionals and supply chain integration. In developing the factors an effort has been made to align and relate them to financial performance. Conclusion/Recommendations: We found success factors \u201crelationship with 3PLs and skilled logistics professionals\u201d would substantially improves financial performance metric profit growth. Our findings also contribute to managerial practice by offering a benchmarking tool that can be used by managers in the 3PL service provider industry in India.", "which Critical success factors ?", "investment in quality", 382.0, 403.0], ["Purpose \u2013 The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach \u2013 In total, 13 CSFs for SD are identified (i.e. 
long\u2010term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings \u2013 The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long\u2010term strategic goal is found to be ...", "which Critical success factors ?", "external environment", 484.0, 504.0], ["Purpose \u2013 The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach \u2013 In total, 13 CSFs for SD are identified (i.e. long\u2010term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings \u2013 The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long\u2010term strategic goal is found to be ...", "which Critical success factors ?", "supplier status", 537.0, 552.0], ["Problem statement: Based on a literature survey, an attempt has been made in this study to develop a framework for identifying the success factors. In addition, a list of key success factors is presented. The emphasis is on success factors dealing with breadth of services, internationalization of operations, industry focus, customer focus, 3PL experience, relationship with 3PLs, investment in quality assets, investment in information systems, availability of skilled professionals and supply chain integration. In developing the factors an effort has been made to align and relate them to financial performance. Conclusion/Recommendations: We found success factors \u201crelationship with 3PLs and skilled logistics professionals\u201d would substantially improves financial performance metric profit growth. Our findings also contribute to managerial practice by offering a benchmarking tool that can be used by managers in the 3PL service provider industry in India.", "which Critical success factors ?", "internationalization", 274.0, 294.0], ["Purpose \u2013 The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach \u2013 In total, 13 CSFs for SD are identified (i.e. 
long\u2010term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings \u2013 The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long\u2010term strategic goal is found to be ...", "which Critical success factors ?", "innovation capability", 415.0, 436.0], ["This study is the first attempt that assembled published academic work on critical success factors (CSFs) in supply chain management (SCM) fields. The purpose of this study are to review the CSFs in SCM and to uncover the major CSFs that are apparent in SCM literatures. This study apply literature survey techniques from published CSFs studies in SCM. A collection of 42 CSFs studies in various SCM fields are obtained from major databases. The search uses keywords such as as supply chain management, critical success factors, logistics management and supply chain drivers and barriers. From the literature survey, four major CSFs are proposed. The factors are collaborative partnership, information technology, top management support and human resource. It is hoped that this review will serve as a platform for future research in SCM and CSFs studies. Plus, this study contribute to existing SCM knowledge and further appraise the concept of CSFs.", "which Critical success factors ?", "top management support", 714.0, 736.0], ["This study is the first attempt that assembled published academic work on critical success factors (CSFs) in supply chain management (SCM) fields. The purpose of this study are to review the CSFs in SCM and to uncover the major CSFs that are apparent in SCM literatures. This study apply literature survey techniques from published CSFs studies in SCM. A collection of 42 CSFs studies in various SCM fields are obtained from major databases. The search uses keywords such as as supply chain management, critical success factors, logistics management and supply chain drivers and barriers. From the literature survey, four major CSFs are proposed. The factors are collaborative partnership, information technology, top management support and human resource. It is hoped that this review will serve as a platform for future research in SCM and CSFs studies. Plus, this study contribute to existing SCM knowledge and further appraise the concept of CSFs.", "which Critical success factors ?", "human resource", 741.0, 755.0], ["This paper aims to extract the factors influencing the performance of reverse supply chains (RSCs) based on the structure equation model (SEM). We first introduce the definition of RSC and describe its current status and follow this with a literature review of previous RSC studies and the technology acceptance model . We next develop our research model and 11 hypotheses and then use SEM to test our model and identify those factors that actually influence the success of RSC. Next, we use both questionnaire and web\u2010based methods to survey five companies which have RSC operation experience in China and Korea. 
Using the 168 responses, we used measurement modeling test and SEM to validate our proposed hypotheses. As a result, nine hypotheses were accepted while two were rejected. We found that ease of use, perceived usefulness, service quality, channel relationship and RSC cost were the five most important factors which influence the success of RSC. Finally, we conclude by highlighting our research contribution and propose future research.", "which Critical success factors ?", "ease of use", 800.0, 811.0], ["Extranet is an enabler/system that enriches the information service quality in e-supply chain. This paper uses factor analysis to determine four extranet success factors: system quality, information quality, service quality, and work performance quality. A critical analysis of areas that require improvement is also conducted.", "which Critical success factors ?", "information quality", 187.0, 206.0], ["Purpose \u2013 The purpose of this paper is to explore critical factors for implementing green supply chain management (GSCM) practice in the Taiwanese electrical and electronics industries relative to European Union directives.Design/methodology/approach \u2013 A tentative list of critical factors of GSCM was developed based on a thorough and detailed analysis of the pertinent literature. The survey questionnaire contained 25 items, developed based on the literature and interviews with three industry experts, specifically quality and product assurance representatives. A total of 300 questionnaires were mailed out, and 87 were returned, of which 84 were valid, representing a response rate of 28 percent. Using the data collected, the identified critical factors were performed via factor analysis to establish reliability and validity.Findings \u2013 The results show that 20 critical factors were extracted into four dimensions, which denominated supplier management, product recycling, organization involvement and life cycl...", "which Critical success factors ?", "product recycling", 963.0, 980.0], ["This study describes shot peening effects such as shot hardness, shot size and shot projection pressure, on the residual stress distribution and fatigue life in reversed torsion of a 60SC7 spring steel. There appears to be a correlation between the fatigue strength and the area under the residual stress distribution curve. The biggest shot shows the best fatigue life improvement. However, for a shorter time of shot peening, small hard shot showed the best performance. Moreover, the superficial residual stresses and the amount of work hardening (characterised by the width of the X-ray diffraction line) do not remain stable during fatigue cycling. Indeed they decrease and their reduction rate is a function of the cyclic stress level and an inverse function of the depth of the plastically deformed surface layer.", "which Steel Grade ?", "60SC7 spring steel", 183.0, 201.0], ["Shot peening of steels at elevated temperatures (warm peening) can improve the fatigue behaviour of workpieces. For the steel AISI 4140 (German grade 42CrMo4) in a quenched and tempered condition, it is shown that this is not only caused by the higher compressive residual stresses induced but also due to an enlarged stability of these residual stresses during cyclic bending. 
This can be explained by strain aging effects during shot peening, which cause different and more stable dislocation structures.", "which Steel Grade ?", "4140", 132.0, 136.0], ["Using a modified air blasting machine warm peening at 20 \u00b0C < T \u2264 410 \u00b0C was feasible. An optimized peening temperature of about 310 \u00b0C was identified for a 450 \u00b0C quenched and tempered steel AISI 4140. Warm peening was also investigated for a normalized, a 650 \u00b0C quenched and tempered, and a martensitically hardened material state. The quasi static surface compressive yield strengths as well as the cyclic surface yield strengths were determined from residual stress relaxation tests conducted at different stress amplitudes and numbers of loading cycles. Dynamic and static strain aging effects acting during and after warm peening clearly increased the residual stress stability and the alternating bending strength for all material states.", "which Steel Grade ?", "4140", 199.0, 203.0], ["One of the most important components in a aircraft is its landing gear, due to the high load that it is submitted to during, principally, the take off and landing. For this reason, the AISI 4340 steel is widely used in the aircraft industry for fabrication of structural components, in which strength and toughness are fundamental design requirements [1]. Fatigue is an important parameter to be considered in the behavior of mechanical components subjected to constant and variable amplitude loading. One of the known ways to improve fatigue resistance is by using the shot peening process to induce a compressive residual stress in the surface layers of the material, making the nucleation and propagation of fatigue cracks more difficult [2,3]. The shot peening results depend on various parameters. These parameters can be grouped in three different classes according to Fathallah et al. [4]: parameters describing the treated part, parameters of stream energy produced by the process and parameters describing the contact conditions. Furthermore, relaxation of the CRSF induced by shot peening has been observed during the fatigue process [5-7]. In the present research the gain in fatigue life of AISI 4340 steel, obtained by shot peening treatment, is evaluated under the two different hardnesses used in landing gear. Rotating bending fatigue tests were conducted and the CRSF was measured by an x-ray tensometry prior and during fatigue tests. The evaluation of fatigue life due the shot peening in relation to the relaxation of CRSF, of crack sources position and roughness variation is done.", "which Steel Grade ?", "4340", 190.0, 194.0], ["Abstract Previous research on the relationship between students\u2019 home and school Information and Communication Technology (ICT) resources and academic performance has shown ambiguous results. The availability of ICT resources at school has been found to be unrelated or negatively related to academic performance, whereas the availability of ICT resources at home has been found to be both positively and negatively related to academic performance. In addition, the frequency of use of ICT is related to students\u2019 academic achievement. This relationship has been found to be negative for ICT use at school, however, for ICT use at home the literature on the relationship with academic performance is again ambiguous. In addition to ICT availability and ICT use, students\u2019 attitudes towards ICT have also been found to play a role in student performance. 
In the present study, we examine how availability of ICT resources, students\u2019 use of those resources (at school, outside school for schoolwork, outside school for leisure), and students\u2019 attitudes toward ICT (interest in ICT, perceived ICT competence, perceived ICT autonomy) relate to individual differences in performance on a digital assessment of reading in one comprehensive model using the Dutch PISA 2015 sample of 5183 15-year-olds (49.2% male). Student gender and students\u2019 economic, social, and cultural status accounted for a substantial part of the variation in digitally assessed reading performance. Controlling for these relationships, results indicated that students with moderate access to ICT resources, moderate use of ICT at school or outside school for schoolwork, and moderate interest in ICT had the highest digitally assessed reading performance. In contrast, students who reported moderate competence in ICT had the lowest digitally assessed reading performance. In addition, frequent use of ICT outside school for leisure was negatively related to digitally assessed reading performance, whereas perceived autonomy was positively related. Taken together, the findings suggest that excessive access to ICT resources, excessive use of ICT, and excessive interest in ICT is associated with lower digitally assessed reading performance.", "which has data ?", "PISA 2015", 1256.0, 1265.0], ["Abstract As a relevant cognitive-motivational aspect of ICT literacy, a new construct ICT Engagement is theoretically based on self-determination theory and involves the factors ICT interest, Perceived ICT competence, Perceived autonomy related to ICT use, and ICT as a topic in social interaction. In this manuscript, we present different sources of validity supporting the construct interpretation of test scores in the ICT Engagement scale, which was used in PISA 2015. Specifically, we investigated the internal structure by dimensional analyses and investigated the relation of ICT Engagement aspects to other variables. The analyses are based on public data from PISA 2015 main study from Switzerland ( n = 5860) and Germany ( n = 6504). First, we could confirm the four-dimensional structure of ICT Engagement for the Swiss sample using a structural equation modelling approach. Second, ICT Engagement scales explained the highest amount of variance in ICT Use for Entertainment, followed by Practical use. Third, we found significantly lower values for girls in all ICT Engagement scales except ICT Interest. Fourth, we found a small negative correlation between the scores in the subscale \u201cICT as a topic in social interaction\u201d and reading performance in PISA 2015. We could replicate most results for the German sample. Overall, the obtained results support the construct interpretation of the four ICT Engagement subscales.", "which has data ?", "PISA 2015", 462.0, 471.0], ["The relationship between Information and Communication Technology (ICT) and science performance has been the focus of much recent research, especially due to the prevalence of ICT in our digital society. However, the exploration of this relationship has yielded mixed results. Thus, the current study aims to uncover the learning processes that are linked to students\u2019 science performance by investigating the effect of ICT variables on science for 15-year-old students in two countries with contrasting levels of technology implementation (Bulgaria n = 5,928 and Finland n = 5,882). 
The study analyzed PISA 2015 data using structural equation modeling to assess the impact of ICT use, availability, and comfort on students\u2019 science scores, controlling for students\u2019 socio-economic status. In both countries, results revealed that (1) ICT use and availability were associated with lower science scores and (2) students who were more comfortable with ICT performed better in science. This study can inform practical implementations of ICT in classrooms that consider the differential effect of ICT and it can advance theoretical knowledge around technology, learning, and cultural context.", "which has data ?", "PISA 2015", 603.0, 612.0], ["ABSTRACT This study analyses the consequences of the Covid-19 crisis on stress and well-being in Switzerland. In particular, we assess whether vulnerable groups in terms of social isolation, increased workload and limited socioeconomic resources are affected more than others. Using longitudinal data from the Swiss Household Panel, including a specific Covid-19 study, we estimate change score models to predict changes in perceived stress and life satisfaction at the end of the semi-lockdown in comparison to before the crisis. We find no general change in life satisfaction and a small decrease in stress. Yet, in line with our expectations, more vulnerable groups in terms of social isolation (young adults, Covid-19 risk group members, individuals without a partner), workload (women) and socioeconomic resources (unemployed and those who experienced a deteriorating financial situation) reported a decrease in life satisfaction. Stress levels decreased most strongly among high earners, workers on short-time work and the highly educated.", "which has data ?", "Swiss Household Panel", 310.0, 331.0], ["OBJECTIVE The researchers evaluated the effectiveness of paroxetine and Problem-Solving Treatment for Primary Care (PST-PC) for patients with minor depression or dysthymia. STUDY DESIGN This was an 11-week randomized placebo-controlled trial conducted in primary care practices in 2 communities (Lebanon, NH, and Seattle, Wash). Paroxetine (n=80) or placebo (n=81) therapy was started at 10 mg per day and increased to a maximum 40 mg per day, or PST-PC was provided (n=80). There were 6 scheduled visits for all treatment conditions. POPULATION A total of 241 primary care patients with minor depression (n=114) or dysthymia (n=127) were included. Of these, 191 patients (79.3%) completed all treatment visits. OUTCOMES Depressive symptoms were measured using the 20-item Hopkins Depression Scale (HSCL-D-20). Remission was scored on the Hamilton Depression Rating Scale (HDRS) as less than or equal to 6 at 11 weeks. Functional status was measured with the physical health component (PHC) and mental health component (MHC) of the 36-item Medical Outcomes Study Short Form. RESULTS All treatment conditions showed a significant decline in depressive symptoms over the 11-week period. There were no significant differences between the interventions or by diagnosis. For dysthymia the remission rate for paroxetine (80%) and PST-PC (57%) was significantly higher than for placebo (44%, P=.008). The remission rate was high for minor depression (64%) and similar for each treatment group. For the MHC there were significant outcome differences related to baseline level for paroxetine compared with placebo. For the PHC there were no significant differences between the treatment groups. 
CONCLUSIONS For dysthymia, paroxetine and PST-PC improved remission compared with placebo plus nonspecific clinical management. Results varied for the other outcomes measured. For minor depression, the 3 interventions were equally effective; general clinical management (watchful waiting) is an appropriate treatment option.", "which Most distal followup ?", "11 weeks", 909.0, 917.0], ["CONTEXT Insufficient evidence exists for recommendation of specific effective treatments for older primary care patients with minor depression or dysthymia. OBJECTIVE To compare the effectiveness of pharmacotherapy and psychotherapy in primary care settings among older persons with minor depression or dysthymia. DESIGN Randomized, placebo-controlled trial (November 1995-August 1998). SETTING Four geographically and clinically diverse primary care practices. PARTICIPANTS A total of 415 primary care patients (mean age, 71 years) with minor depression (n = 204) or dysthymia (n = 211) and a Hamilton Depression Rating Scale (HDRS) score of at least 10 were randomized; 311 (74.9%) completed all study visits. INTERVENTIONS Patients were randomly assigned to receive paroxetine (n = 137) or placebo (n = 140), starting at 10 mg/d and titrated to a maximum of 40 mg/d, or problem-solving treatment-primary care (PST-PC; n = 138). For the paroxetine and placebo groups, the 6 visits over 11 weeks included general support and symptom and adverse effects monitoring; for the PST-PC group, visits were for psychotherapy. MAIN OUTCOME MEASURES Depressive symptoms, by the 20-item Hopkins Symptom Checklist Depression Scale (HSCL-D-20) and the HDRS; and functional status, by the Medical Outcomes Study Short-Form 36 (SF-36) physical and mental components. RESULTS Paroxetine patients showed greater (difference in mean [SE] 11-week change in HSCL-D-20 scores, 0.21 [0.07]; P =.004) symptom resolution than placebo patients. Patients treated with PST-PC did not show more improvement than placebo (difference in mean [SE] change in HSCL-D-20 scores, 0.11 [0.13]; P =.13), but their symptoms improved more rapidly than those of placebo patients during the latter treatment weeks (P =.01). For dysthymia, paroxetine improved mental health functioning vs placebo among patients whose baseline functioning was high (difference in mean [SE] change in SF-36 mental component scores, 5.8 [2.02]; P =.01) or intermediate (difference in mean [SE] change in SF-36 mental component scores, 4.4 [1.74]; P =.03). Mental health functioning in dysthymia patients was not significantly improved by PST-PC compared with placebo (P \u2265.12 for low-, intermediate-, and high-functioning groups). For minor depression, both paroxetine and PST-PC improved mental health functioning in patients in the lowest tertile of baseline functioning (difference vs placebo in mean [SE] change in SF-36 mental component scores, 4.7 [2.03] for those taking paroxetine; 4.7 [1.96] for the PST-PC treatment; P =.02 vs placebo). CONCLUSIONS Paroxetine showed moderate benefit for depressive symptoms and mental health function in elderly patients with dysthymia and more severely impaired elderly patients with minor depression. 
The benefits of PST-PC were smaller, had slower onset, and were more subject to site differences than those of paroxetine.", "which Most distal followup ?", "11 weeks", 988.0, 996.0], ["Abstract Objective: To determine whether, in the treatment of major depression in primary care, a brief psychological treatment (problem solving) was (a) as effective as antidepressant drugs and more effective than placebo; (b) feasible in practice; and (c) acceptable to patients. Design: Randomised controlled trial of problem solving treatment, amitriptyline plus standard clinical management, and drug placebo plus standard clinical management. Each treatment was delivered in six sessions over 12 weeks. Setting: Primary care in Oxfordshire. Subjects: 91 patients in primary care who had major depression. Main outcome measures: Observer and self reported measures of severity of depression, self reported measure of social outcome, and observer measure of psychological symptoms at six and 12 weeks; self reported measure of patient satisfaction at 12 weeks. Numbers of patients recovered at six and 12 weeks. Results: At six and 12 weeks the difference in score on the Hamilton rating scale for depression between problem solving and placebo treatments was significant (5.3 (95% confidence interval 1.6 to 9.0) and 4.7 (0.4 to 9.0) respectively), but the difference between problem solving and amitriptyline was not significant (1.8 (\u22121.8 to 5.5) and 0.9 (\u22123.3 to 5.2) respectively). At 12 weeks 60% (18/30) of patients given problem solving treatment had recovered on the Hamilton scale compared with 52% (16/31) given amitriptyline and 27% (8/30) given placebo. Patients were satisfied with problem solving treatment; all patients who completed treatment (28/30) rated the treatment as helpful or very helpful. The six sessions of problem solving treatment totalled a mean therapy time of 3 1/2 hours. Conclusions: As a treatment for major depression in primary care, problem solving treatment is effective, feasible, and acceptable to patients. Key messages Patient compliance with antidepressant treatment is often poor, so there is a need for a psychological treatment This study found that problem solving is an effective psychological treatment for major depression in primary care\u2014as effective as amitriptyline and more effective than placebo Problem solving is a feasible treatment in primary care, being effective when given over six sessions by a general practitioner Problem solving treatment is acceptable to patients", "which Most distal followup ?", "12 weeks", 499.0, 507.0], ["Abstract Objectives: To determine whether problem solving treatment combined with antidepressant medication is more effective than either treatment alone in the management of major depression in primary care. To assess the effectiveness of problem solving treatment when given by practice nurses compared with general practitioners when both have been trained in the technique. Design: Randomised controlled trial with four treatment groups. Setting: Primary care in Oxfordshire. Participants: Patients aged 18-65 years with major depression on the research diagnostic criteria\u2014a score of 13 or more on the 17 item Hamilton rating scale for depression and a minimum duration of illness of four weeks. Interventions: Problem solving treatment by research general practitioner or research practice nurse or antidepressant medication or a combination of problem solving treatment and antidepressant medication. 
Main outcome measures: Hamilton rating scale for depression, Beck depression inventory, clinical interview schedule (revised), and the modified social adjustment schedule assessed at 6, 12, and 52 weeks. Results: Patients in all groups showed a clear improvement over 12 weeks. The combination of problem solving treatment and antidepressant medication was no more effective than either treatment alone. There was no difference in outcome irrespective of who delivered the problem solving treatment. Conclusions: Problem solving treatment is an effective treatment for depressive disorders in primary care. The treatment can be delivered by suitably trained practice nurses or general practitioners. The combination of this treatment with antidepressant medication is no more effective than either treatment alone. Key messages Problem solving treatment is an effective treatment for depressive disorders in primary care Problem solving treatment can be delivered by suitably trained practice nurses as effectively as by general practitioners The combination of problem solving treatment and antidepressant medication is no more effective than either treatment alone Problem solving treatment is most likely to benefit patients who have a depressive disorder of moderate severity and who wish to participate in an active psychological treatment", "which Most distal followup ?", "52 weeks", 1102.0, 1110.0], ["The impacts of alien plants on native richness are usually assessed at small spatial scales and in locations where the alien is at high abundance. But this raises two questions: to what extent do impacts occur where alien species are at low abundance, and do local impacts translate to effects at the landscape scale? In an analysis of 47 widespread alien plant species occurring across a 1,000 km2 landscape, we examined the relationship between their local abundance and native plant species richness in 594 grassland plots. We first defined the critical abundance at which these focal alien species were associated with a decline in native \u03b1\u2010richness (plot\u2010scale species numbers), and then assessed how this local decline was translated into declines in native species \u03b3\u2010richness (landscape\u2010scale species numbers). After controlling for sampling biases and environmental gradients that might lead to spurious relationships, we found that eight out of 47 focal alien species were associated with a significant decline in native \u03b1\u2010richness as their local abundance increased. Most of these significant declines started at low to intermediate classes of abundance. For these eight species, declines in native \u03b3\u2010richness were, on average, an order of magnitude (32.0 vs. 2.2 species) greater than those found for native \u03b1\u2010richness, mostly due to spatial homogenization of native communities. The magnitude of the decrease at the landscape scale was best explained by the number of plots where an alien species was found above its critical abundance. Synthesis. Even at low abundance, alien plants may impact native plant richness at both local and landscape scales. Local impacts may result in much greater declines in native richness at larger spatial scales. Quantifying impact at the landscape scale requires consideration of not only the prevalence of an alien plant, but also its critical abundance and its effect on native community homogenization. 
This suggests that management approaches targeting only those locations dominated by alien plants might not mitigate impacts effectively. Our integrated approach will improve the ranking of alien species risks at a spatial scale appropriate for prioritizing management and designing conservation policies.", "which Has sample size ?", "47 focal alien species", 954.0, 976.0], ["A SEIR simulation model for the COVID-19 pandemic was developed (http://covidsim.eu) and applied to a hypothetical European country of 10 million population. Our results show which interventions potentially push the epidemic peak into the subsequent year (when vaccinations may be available) or which fail. Different levels of control (via contact reduction) resulted in 22% to 63% of the population sick, 0.2% to 0.6% hospitalised, and 0.07% to 0.28% dead (n=6,450 to 28,228).", "which Proportion of population (Deaths) ?", "0.07%", NaN, NaN], ["A SEIR simulation model for the COVID-19 pandemic was developed (http://covidsim.eu) and applied to a hypothetical European country of 10 million population. Our results show which interventions potentially push the epidemic peak into the subsequent year (when vaccinations may be available) or which fail. Different levels of control (via contact reduction) resulted in 22% to 63% of the population sick, 0.2% to 0.6% hospitalised, and 0.07% to 0.28% dead (n=6,450 to 28,228).", "which Proportion of population (Deaths) ?", "0.28%", NaN, NaN], ["A SEIR simulation model for the COVID-19 pandemic was developed (http://covidsim.eu) and applied to a hypothetical European country of 10 million population. Our results show which interventions potentially push the epidemic peak into the subsequent year (when vaccinations may be available) or which fail. Different levels of control (via contact reduction) resulted in 22% to 63% of the population sick, 0.2% to 0.6% hospitalised, and 0.07% to 0.28% dead (n=6,450 to 28,228).", "which Population ?", "10 million", 135.0, 145.0], ["Background: Estimating key infectious disease parameters from the COVID-19 outbreak is quintessential for modelling studies and guiding intervention strategies. Whereas different estimates for the incubation period distribution and the serial interval distribution have been reported, estimates of the generation interval for COVID-19 have not been provided. Methods: We used outbreak data from clusters in Singapore and Tianjin, China to estimate the generation interval from symptom onset data while acknowledging uncertainty about the incubation period distribution and the underlying transmission network. From those estimates we obtained the proportions pre-symptomatic transmission and reproduction numbers. Results: The mean generation interval was 5.20 (95%CI 3.78-6.78) days for Singapore and 3.95 (95%CI 3.01-4.91) days for Tianjin, China when relying on a previously reported incubation period with mean 5.2 and SD 2.8 days. The proportion of pre-symptomatic transmission was 48% (95%CI 32-67%) for Singapore and 62% (95%CI 50-76%) for Tianjin, China. Estimates of the reproduction number based on the generation interval distribution were slightly higher than those based on the serial interval distribution. Conclusions: Estimating generation and serial interval distributions from outbreak data requires careful investigation of the underlying transmission network. 
Detailed contact tracing information is essential for correctly estimating these quantities.", "which incubation SD ?", "2.8 days", 926.0, 934.0], ["We conducted a comparative study of COVID-19 epidemic in three different settings: mainland China, the Guangdong province of China and South Korea, by formulating two disease transmission dynamics models incorporating epidemic characteristics and setting-specific interventions, and fitting the models to multi-source data to identify initial and effective reproduction numbers and evaluate effectiveness of interventions. We estimated the initial basic reproduction number for South Korea, the Guangdong province and mainland China as 2.6 (95% confidence interval (CI): (2.5, 2.7)), 3.0 (95%CI: (2.6, 3.3)) and 3.8 (95%CI: (3.5,4.2)), respectively, given a serial interval with mean of 5 days with standard deviation of 3 days. We found that the effective reproduction number for the Guangdong province and mainland China has fallen below the threshold 1 since February 8th and 18th respectively, while the effective reproduction number for South Korea remains high, suggesting that the interventions implemented need to be enhanced in order to halt further infections. We also project the epidemic trend in South Korea under different scenarios where a portion or the entirety of the integrated package of interventions in China is used. We show that a coherent and integrated approach with stringent public health interventions is the key to the success of containing the epidemic in China and specially its provinces outside its epicenter, and we show that this approach can also be effective to mitigate the burden of the COVID-19 epidemic in South Korea. The experience of outbreak control in mainland China should be a guiding reference for the rest of the world including South Korea.", "which standard deviation of serial interval ?", "3 days", 721.0, 727.0], ["Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. 
Shorter serial intervals lead to lower estimates of R0, which suggest that half of all secondary infections should be prevented to control spread.", "which mean serial interval ?", "4.56 (2.69, 6.42) days", NaN, NaN], ["We conducted a comparative study of COVID-19 epidemic in three different settings: mainland China, the Guangdong province of China and South Korea, by formulating two disease transmission dynamics models incorporating epidemic characteristics and setting-specific interventions, and fitting the models to multi-source data to identify initial and effective reproduction numbers and evaluate effectiveness of interventions. We estimated the initial basic reproduction number for South Korea, the Guangdong province and mainland China as 2.6 (95% confidence interval (CI): (2.5, 2.7)), 3.0 (95%CI: (2.6, 3.3)) and 3.8 (95%CI: (3.5,4.2)), respectively, given a serial interval with mean of 5 days with standard deviation of 3 days. We found that the effective reproduction number for the Guangdong province and mainland China has fallen below the threshold 1 since February 8th and 18th respectively, while the effective reproduction number for South Korea remains high, suggesting that the interventions implemented need to be enhanced in order to halt further infections. We also project the epidemic trend in South Korea under different scenarios where a portion or the entirety of the integrated package of interventions in China is used. We show that a coherent and integrated approach with stringent public health interventions is the key to the success of containing the epidemic in China and specially its provinces outside its epicenter, and we show that this approach can also be effective to mitigate the burden of the COVID-19 epidemic in South Korea. The experience of outbreak control in mainland China should be a guiding reference for the rest of the world including South Korea.", "which mean of serial interval ?", "5 days", 687.0, 693.0], ["Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. 
Shorter serial intervals lead to lower estimates of R0, which suggest that half of all secondary infections should be prevented to control spread.", "which mean incubation period ?", "7.1 (6.13, 8.25) days", NaN, NaN], ["Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. Shorter serial intervals lead to lower estimates of R0, which suggest that half of all secondary infections should be prevented to control spread.", "which mean incubation period ?", "9 (7.92, 10.2) days", NaN, NaN], ["The ecology of ebolaviruses is still poorly understood and the role of bats in outbreaks needs to be further clarified. Straw-colored fruit bats (Eidolon helvum) are the most common fruit bats in Africa and antibodies to ebolaviruses have been documented in this species. Between December 2018 and November 2019, samples were collected at approximately monthly intervals in roosting and feeding sites from 820 bats from an Eidolon helvum colony. Dried blood spots (DBS) were tested for antibodies to Zaire, Sudan, and Bundibugyo ebolaviruses. The proportion of samples reactive with GP antigens increased significantly with age from 0\u20139/220 (0\u20134.1%) in juveniles to 26\u2013158/225 (11.6\u201370.2%) in immature adults and 10\u2013225/372 (2.7\u201360.5%) in adult bats. Antibody responses were lower in lactating females. Viral RNA was not detected in 456 swab samples collected from 152 juvenile and 214 immature adult bats. Overall, our study shows that antibody levels increase in young bats suggesting that seroconversion to Ebola or related viruses occurs in older juvenile and immature adult bats. Multiple year monitoring would be needed to confirm this trend. Knowledge of the periods of the year with the highest risk of Ebolavirus circulation can guide the implementation of strategies to mitigate spill-over events.", "which has date ?", "Between December 2018", 272.0, 293.0], ["The ecology of ebolaviruses is still poorly understood and the role of bats in outbreaks needs to be further clarified. Straw-colored fruit bats (Eidolon helvum) are the most common fruit bats in Africa and antibodies to ebolaviruses have been documented in this species. 
Between December 2018 and November 2019, samples were collected at approximately monthly intervals in roosting and feeding sites from 820 bats from an Eidolon helvum colony. Dried blood spots (DBS) were tested for antibodies to Zaire, Sudan, and Bundibugyo ebolaviruses. The proportion of samples reactive with GP antigens increased significantly with age from 0\u20139/220 (0\u20134.1%) in juveniles to 26\u2013158/225 (11.6\u201370.2%) in immature adults and 10\u2013225/372 (2.7\u201360.5%) in adult bats. Antibody responses were lower in lactating females. Viral RNA was not detected in 456 swab samples collected from 152 juvenile and 214 immature adult bats. Overall, our study shows that antibody levels increase in young bats suggesting that seroconversion to Ebola or related viruses occurs in older juvenile and immature adult bats. Multiple year monitoring would be needed to confirm this trend. Knowledge of the periods of the year with the highest risk of Ebolavirus circulation can guide the implementation of strategies to mitigate spill-over events.", "which has date ?", "between December 2018", 272.0, 293.0], ["During the annual hunt in a privately owned Austrian game population in fall 2019 and 2020, 64 red deer (Cervus elaphus), 5 fallow deer (Dama dama), 6 mouflon (Ovis gmelini musimon), and 95 wild boars (Sus scrofa) were shot and sampled for PCR testing. Pools of spleen, lung, and tonsillar swabs were screened for specific nucleic acids of porcine circoviruses. Wild ruminants were additionally tested for herpesviruses and pestiviruses, and wild boars were screened for pseudorabies virus (PrV) and porcine lymphotropic herpesviruses (PLHV-1-3). PCV2 was detectable in 5% (3 of 64) of red deer and 75% (71 of 95) of wild boar samples. In addition, 24 wild boar samples (25%) but none of the ruminants tested positive for PCV3 specific nucleic acids. Herpesviruses were detected in 15 (20%) ruminant samples. Sequence analyses showed the closest relationships to fallow deer herpesvirus and elk gammaherpesvirus. In wild boars, PLHV-1 was detectable in 10 (11%), PLHV-2 in 44 (46%), and PLHV-3 in 66 (69%) of animals, including 36 double and 3 triple infections. No pestiviruses were detectable in any ruminant samples, and all wild boar samples were negative in PrV-PCR. Our data demonstrate a high prevalence of PCV2 and PLHVs in an Austrian game population, confirm the presence of PCV3 in Austrian wild boars, and indicate a low risk of spillover of notifiable animal diseases into the domestic animal population.", "which has date ?", "fall 2019", 72.0, 81.0], ["During the annual hunt in a privately owned Austrian game population in fall 2019 and 2020, 64 red deer (Cervus elaphus), 5 fallow deer (Dama dama), 6 mouflon (Ovis gmelini musimon), and 95 wild boars (Sus scrofa) were shot and sampled for PCR testing. Pools of spleen, lung, and tonsillar swabs were screened for specific nucleic acids of porcine circoviruses. Wild ruminants were additionally tested for herpesviruses and pestiviruses, and wild boars were screened for pseudorabies virus (PrV) and porcine lymphotropic herpesviruses (PLHV-1-3). PCV2 was detectable in 5% (3 of 64) of red deer and 75% (71 of 95) of wild boar samples. In addition, 24 wild boar samples (25%) but none of the ruminants tested positive for PCV3 specific nucleic acids. Herpesviruses were detected in 15 (20%) ruminant samples. Sequence analyses showed the closest relationships to fallow deer herpesvirus and elk gammaherpesvirus. 
In wild boars, PLHV-1 was detectable in 10 (11%), PLHV-2 in 44 (46%), and PLHV-3 in 66 (69%) of animals, including 36 double and 3 triple infections. No pestiviruses were detectable in any ruminant samples, and all wild boar samples were negative in PrV-PCR. Our data demonstrate a high prevalence of PCV2 and PLHVs in an Austrian game population, confirm the presence of PCV3 in Austrian wild boars, and indicate a low risk of spillover of notifiable animal diseases into the domestic animal population.", "which has date ?", "2019", 77.0, 81.0], ["During the annual hunt in a privately owned Austrian game population in fall 2019 and 2020, 64 red deer (Cervus elaphus), 5 fallow deer (Dama dama), 6 mouflon (Ovis gmelini musimon), and 95 wild boars (Sus scrofa) were shot and sampled for PCR testing. Pools of spleen, lung, and tonsillar swabs were screened for specific nucleic acids of porcine circoviruses. Wild ruminants were additionally tested for herpesviruses and pestiviruses, and wild boars were screened for pseudorabies virus (PrV) and porcine lymphotropic herpesviruses (PLHV-1-3). PCV2 was detectable in 5% (3 of 64) of red deer and 75% (71 of 95) of wild boar samples. In addition, 24 wild boar samples (25%) but none of the ruminants tested positive for PCV3 specific nucleic acids. Herpesviruses were detected in 15 (20%) ruminant samples. Sequence analyses showed the closest relationships to fallow deer herpesvirus and elk gammaherpesvirus. In wild boars, PLHV-1 was detectable in 10 (11%), PLHV-2 in 44 (46%), and PLHV-3 in 66 (69%) of animals, including 36 double and 3 triple infections. No pestiviruses were detectable in any ruminant samples, and all wild boar samples were negative in PrV-PCR. Our data demonstrate a high prevalence of PCV2 and PLHVs in an Austrian game population, confirm the presence of PCV3 in Austrian wild boars, and indicate a low risk of spillover of notifiable animal diseases into the domestic animal population.", "which has date ?", "2020", 86.0, 90.0], ["Background & objectives: Since December 2019, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has globally affected 195 countries. In India, suspected cases were screened for SARS-CoV-2 as per the advisory of the Ministry of Health and Family Welfare. The objective of this study was to characterize SARS-CoV-2 sequences from three identified positive cases as on February 29, 2020. Methods: Throat swab/nasal swab specimens for a total of 881 suspected cases were screened by E gene and confirmed by RdRp (1), RdRp (2) and N gene real-time reverse transcription-polymerase chain reactions and next-generation sequencing. Phylogenetic analysis, molecular characterization and prediction of B- and T-cell epitopes for Indian SARS-CoV-2 sequences were undertaken. Results: Three cases with a travel history from Wuhan, China, were confirmed positive for SARS-CoV-2. Almost complete (29,851 nucleotides) genomes of case 1, case 3 and a fragmented genome for case 2 were obtained. The sequences of Indian SARS-CoV-2 though not identical showed high (~99.98%) identity with Wuhan seafood market pneumonia virus (accession number: NC 045512). Phylogenetic analysis showed that the Indian sequences belonged to different clusters. Predicted linear B-cell epitopes were found to be concentrated in the S1 domain of spike protein, and a conformational epitope was identified in the receptor-binding domain. 
The predicted T-cell epitopes showed broad human leucocyte antigen allele coverage of A and B supertypes predominant in the Indian population. Interpretation & conclusions: The two SARS-CoV-2 sequences obtained from India represent two different introductions into the country. The genetic heterogeneity is as noted globally. The identified B- and T-cell epitopes may be considered suitable for future experiments towards the design of vaccines and diagnostics. Continuous monitoring and analysis of the sequences of new cases from India and the other affected countries would be vital to understand the genetic evolution and rates of substitution of the SARS-CoV-2.", "which patient characteristics ?", "case 1", 925.0, 931.0], ["Background & objectives: Since December 2019, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has globally affected 195 countries. In India, suspected cases were screened for SARS-CoV-2 as per the advisory of the Ministry of Health and Family Welfare. The objective of this study was to characterize SARS-CoV-2 sequences from three identified positive cases as on February 29, 2020. Methods: Throat swab/nasal swab specimens for a total of 881 suspected cases were screened by E gene and confirmed by RdRp (1), RdRp (2) and N gene real-time reverse transcription-polymerase chain reactions and next-generation sequencing. Phylogenetic analysis, molecular characterization and prediction of B- and T-cell epitopes for Indian SARS-CoV-2 sequences were undertaken. Results: Three cases with a travel history from Wuhan, China, were confirmed positive for SARS-CoV-2. Almost complete (29,851 nucleotides) genomes of case 1, case 3 and a fragmented genome for case 2 were obtained. The sequences of Indian SARS-CoV-2 though not identical showed high (~99.98%) identity with Wuhan seafood market pneumonia virus (accession number: NC 045512). Phylogenetic analysis showed that the Indian sequences belonged to different clusters. Predicted linear B-cell epitopes were found to be concentrated in the S1 domain of spike protein, and a conformational epitope was identified in the receptor-binding domain. The predicted T-cell epitopes showed broad human leucocyte antigen allele coverage of A and B supertypes predominant in the Indian population. Interpretation & conclusions: The two SARS-CoV-2 sequences obtained from India represent two different introductions into the country. The genetic heterogeneity is as noted globally. The identified B- and T-cell epitopes may be considered suitable for future experiments towards the design of vaccines and diagnostics. Continuous monitoring and analysis of the sequences of new cases from India and the other affected countries would be vital to understand the genetic evolution and rates of substitution of the SARS-CoV-2.", "which patient characteristics ?", "case 2", 968.0, 974.0], ["Background & objectives: Since December 2019, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has globally affected 195 countries. In India, suspected cases were screened for SARS-CoV-2 as per the advisory of the Ministry of Health and Family Welfare. The objective of this study was to characterize SARS-CoV-2 sequences from three identified positive cases as on February 29, 2020. Methods: Throat swab/nasal swab specimens for a total of 881 suspected cases were screened by E gene and confirmed by RdRp (1), RdRp (2) and N gene real-time reverse transcription-polymerase chain reactions and next-generation sequencing. 
Phylogenetic analysis, molecular characterization and prediction of B- and T-cell epitopes for Indian SARS-CoV-2 sequences were undertaken. Results: Three cases with a travel history from Wuhan, China, were confirmed positive for SARS-CoV-2. Almost complete (29,851 nucleotides) genomes of case 1, case 3 and a fragmented genome for case 2 were obtained. The sequences of Indian SARS-CoV-2 though not identical showed high (~99.98%) identity with Wuhan seafood market pneumonia virus (accession number: NC 045512). Phylogenetic analysis showed that the Indian sequences belonged to different clusters. Predicted linear B-cell epitopes were found to be concentrated in the S1 domain of spike protein, and a conformational epitope was identified in the receptor-binding domain. The predicted T-cell epitopes showed broad human leucocyte antigen allele coverage of A and B supertypes predominant in the Indian population. Interpretation & conclusions: The two SARS-CoV-2 sequences obtained from India represent two different introductions into the country. The genetic heterogeneity is as noted globally. The identified B- and T-cell epitopes may be considered suitable for future experiments towards the design of vaccines and diagnostics. Continuous monitoring and analysis of the sequences of new cases from India and the other affected countries would be vital to understand the genetic evolution and rates of substitution of the SARS-CoV-2.", "which patient characteristics ?", "case 3", 933.0, 939.0], ["Abstract Emerging infectious diseases, such as severe acute respiratory syndrome (SARS) and Zika virus disease, present a major threat to public health 1\u20133 . Despite intense research efforts, how, when and where new diseases appear are still a source of considerable uncertainty. A severe respiratory disease was recently reported in Wuhan, Hubei province, China. As of 25 January 2020, at least 1,975 cases had been reported since the first patient was hospitalized on 12 December 2019. Epidemiological investigations have suggested that the outbreak was associated with a seafood market in Wuhan. Here we study a single patient who was a worker at the market and who was admitted to the Central Hospital of Wuhan on 26 December 2019 while experiencing a severe respiratory syndrome that included fever, dizziness and a cough. Metagenomic RNA sequencing 4 of a sample of bronchoalveolar lavage fluid from the patient identified a new RNA virus strain from the family Coronaviridae , which is designated here \u2018WH-Human 1\u2019 coronavirus (and has also been referred to as \u20182019-nCoV\u2019). Phylogenetic analysis of the complete viral genome (29,903 nucleotides) revealed that the virus was most closely related (89.1% nucleotide similarity) to a group of SARS-like coronaviruses (genus Betacoronavirus, subgenus Sarbecovirus) that had previously been found in bats in China 5 . This outbreak highlights the ongoing ability of viral spill-over from animals to cause severe disease in humans.", "which patient characteristics ?", "patient", 442.0, 449.0], ["Novel coronavirus (SARS-CoV-2) is found to cause a large outbreak started from Wuhan since December 2019 in China and SARS-CoV-2 infections have been reported with epidemiological linkage to China in 25 countries until now. We isolated SARS-CoV-2 from the oropharyngeal sample obtained from the patient with the first laboratory-confirmed SARS-CoV-2 infection in Korea. 
Cytopathic effects of SARS-CoV-2 in the Vero cell cultures were confluent 3 days after the first blind passage of the sample. Coronavirus was confirmed with spherical particle having a fringe reminiscent of crown on transmission electron microscopy. Phylogenetic analyses of whole genome sequences showed that it clustered with other SARS-CoV-2 reported from Wuhan.", "which patient characteristics ?", "age", NaN, NaN], ["The first case of coronavirus disease (COVID-19) in Finland was confirmed on 29 January 2020. No secondary cases were detected. We describe the clinical picture and laboratory findings 3\u201323 days since the first symptoms. The SARS-CoV-2/Finland/1/2020 virus strain was isolated, the genome showing a single nucleotide substitution to the reference strain from Wuhan. Neutralising antibody response appeared within 9 days along with specific IgM and IgG response, targeting particularly nucleocapsid and spike proteins.", "which patient characteristics ?", "symptoms", 211.0, 219.0], ["A Web content mining approach identified 20 job categories and the associated skills needs prevalent in the computing professions. Using a Web content data mining application, we extracted almost a quarter million unique IT job descriptions from various job search engines and distilled each to its required skill sets. We statistically examined these, revealing 20 clusters of similar skill sets that map to specific job definitions. The results allow software engineering professionals to tune their skills portfolio to match those in demand from real computing jobs across the US to attain more lucrative salaries and more mobility in a chaotic environment.", "which result ?", "20 clusters of similar skill sets that map to specific job definitions", 363.0, 433.0], ["Powerful speeches can captivate audiences, whereas weaker speeches fail to engage their listeners. What is happening in the brains of a captivated audience? Here, we assess audience-wide functional brain dynamics during listening to speeches of varying rhetorical quality. The speeches were given by German politicians and evaluated as rhetorically powerful or weak. Listening to each of the speeches induced similar neural response time courses, as measured by inter-subject correlation analysis, in widespread brain regions involved in spoken language processing. Crucially, alignment of the time course across listeners was stronger for rhetorically powerful speeches, especially for bilateral regions of the superior temporal gyri and medial prefrontal cortex. Thus, during powerful speeches, listeners as a group are more coupled to each other, suggesting that powerful speeches are more potent in taking control of the listeners' brain responses. Weaker speeches were processed more heterogeneously, although they still prompted substantially correlated responses. These patterns of coupled neural responses bear resemblance to metaphors of resonance, which are often invoked in discussions of speech impact, and contribute to the literature on auditory attention under natural circumstances. 
Overall, this approach opens up possibilities for research on the neural mechanisms mediating the reception of entertaining or persuasive messages.", "which result ?", "alignment of the time course across listeners was stronger for rhetorically powerful speeches, especially for bilateral regions of the superior temporal gyri and medial prefrontal cortex", 577.0, 763.0], ["Today\u2019s rapidly changing and competitive environment requires educators to stay abreast of the job market in order to prepare their students for the jobs being demanded. This is more relevant about Information Technology (IT) jobs than others. However, to stay abreast of the market job demands require retrieving, sifting and analyzing large volume of data in order to understand the trends of the job market. Traditional methods of data collection and analysis are not sufficient for this kind of analysis due to the large volume of job data that is generated through the web and elsewhere. Luckily, the field of data mining has emerged to collect and sift through such large data volumes. However, even with data mining, appropriate data collection techniques and analysis need to be followed in order to correctly understand the trend. This paper illustrates our experience with employing mining techniques to understand the trend in IT Technology jobs. Data was collected using data mining techniques over a number of years from an online job agency. The data was then analyzed to reach a conclusion about the trends in the job market. Our experience in this regard along with literature review of the relevant topics is illustrated in this paper.", "which result ?", "conclusion about the trends in the job market", 1090.0, 1135.0], ["The research question addressed in this article concerns whether unemployment persistency can be regarded as a phenomenon that increases employment difficulties for the less educated and, if so, whether their employment chances are reduced by an overly rapid reduction in the number of jobs with low educational requirements. The empirical case is Sweden and the data covers the period 1976-2000. The empirical analyses point towards a negative response to both questions. First, it is shown that jobs with low educational requirements have declined but still constitute a substantial share of all jobs. Secondly, educational attainment has changed at a faster rate than the job structure with increasing over-education in jobs with low educational requirements as a result. This, together with changed selection patterns into the low education group, are the main reasons for the poor employment chances of the less educated in periods with low general demand for labour.", "which Country of study ?", "Sweden", 348.0, 354.0], ["Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. 
Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and \u201cdisorder\u201d and \u201ctransmembrane\u201d analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein\u2014binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.", "which Country of study ?", "Italy", 685.0, 690.0], ["This paper introduces the recently begun REINVENT research project focused on the management of heritage in the cross-border cultural landscape of Derry/Londonderry. The importance of facilitating dialogue over cultural heritage to the maintenance of \u2018thin\u2019 borders in contested cross-border contexts is underlined in the paper, as is the relatively favourable strategic policy context for progressing \u2018heritage diplomacy\u2019 on the island of Ireland. However, it is argued that more inclusive and participatory approaches to the management of heritage are required to assist in the mediation of contestation, particularly accommodating a greater diversity of \u2018non-expert\u2019 opinion, in addition to helping identify value conflicts and dissonance. 
The application of digital technologies in the form of Public Participation Geographic Information Systems (PPGIS) is proposed, and this is briefly discussed in relation to some of the expected benefits and methodological challenges that must be addressed in the REINVENT project. The paper concludes by emphasising the importance of dialogue and knowledge exchange between academia and heritage policymakers/practitioners.", "which Country of study ?", "Ireland", 440.0, 447.0], ["The theory of career mobility (Sicherman and Galor, Journal of Political Economy, 98(1), 169\u201392, 1990) claims that wage penalties for overeducated workers are compensated by better promotion prospects. Sicherman (Journal of Labour Economics, 9(2), 101\u201322, 1991) was able to confirm this theory in an empirical study using panel data. However, the only retest using panel data so far (Robst, Eastern Economic Journal, 21, 539\u201350, 1995) produced rather ambiguous results. In the present paper, random effects models to analyse relative wage growth are estimated using data from the German Socio-Economic Panel. It is found that overeducated workers in Germany have markedly lower relative wage growth rates than adequately educated workers. The results cast serious doubt on whether the career mobility model is able to explain overeducation in Germany. The plausibility of the results is supported by the finding that overeducated workers have less access to formal and informal on-the-job training, which is usually found to be positively correlated with wage growth even when controlling for selectivity effects (Pischke, Journal of Population Economics, 14, 523\u201348, 2001).", "which Country of study ?", "Germany", 650.0, 657.0], ["Objectives \u2013 To investigate the prevalence of active epilepsy in Croatia.", "which Country of study ?", "Croatia", 65.0, 72.0], ["This article presents a case study of a collaborative public history project between participants in two countries, the United Kingdom and Italy. Its subject matter is the bombing war in Europe, 1939-1945, which is remembered and commemorated in very different ways in these two countries: the sensitivities involved thus constitute not only a case of public history conducted at the national level but also one involving contested heritage. An account of the ways in which public history has developed in the UK and Italy is presented. This is followed by an explanation of how the bombing war has been remembered in each country. In the UK, veterans of RAF Bomber Command have long felt a sense of neglect, largely because the deliberate targeting of civilians has not fitted comfortably into the dominant victor narrative. In Italy, recollections of being bombed have remained profoundly dissonant within the received liberation discourse. The International Bomber Command Centre Digital Archive (or Archive) is then described as a case study that employs a public history approach, focusing on various aspects of its inclusive ethos, intended to preserve multiple perspectives. The Italian component of the project is highlighted, problematising the digitisation of contested heritage within the broader context of twentieth-century history. 
Reflections on the use of digital archiving practices and working in partnership are offered, as well as a brief account of user analytics of the Archive through its first eighteen months online.", "which Country of study ?", " Italy.", 516.0, 523.0], ["Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and \u201cdisorder\u201d and \u201ctransmembrane\u201d analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein\u2014binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.", "which Country of study ?", "Sweden", 692.0, 698.0], ["This article presents a case study of a collaborative public history project between participants in two countries, the United Kingdom and Italy. 
Its subject matter is the bombing war in Europe, 1939-1945, which is remembered and commemorated in very different ways in these two countries: the sensitivities involved thus constitute not only a case of public history conducted at the national level but also one involving contested heritage. An account of the ways in which public history has developed in the UK and Italy is presented. This is followed by an explanation of how the bombing war has been remembered in each country. In the UK, veterans of RAF Bomber Command have long felt a sense of neglect, largely because the deliberate targeting of civilians has not fitted comfortably into the dominant victor narrative. In Italy, recollections of being bombed have remained profoundly dissonant within the received liberation discourse. The International Bomber Command Centre Digital Archive (or Archive) is then described as a case study that employs a public history approach, focusing on various aspects of its inclusive ethos, intended to preserve multiple perspectives. The Italian component of the project is highlighted, problematising the digitisation of contested heritage within the broader context of twentieth-century history. Reflections on the use of digital archiving practices and working in partnership are offered, as well as a brief account of user analytics of the Archive through its first eighteen months online.", "which Country of study ?", "United Kingdom", 120.0, 134.0], ["Hybrid energy systems (HESs) generate electricity from multiple energy sources that complement each other. Recently, due to the reduction in costs of photovoltaic (PV) modules and wind turbines, these types of systems have become economically competitive. In this study, a mathematical programming model is applied to evaluate the techno-economic feasibility of autonomous units located in two isolated areas of Ecuador: first, the province of Galapagos (subtropical island) and second, the province of Morona Santiago (Amazonian tropical forest). The two case studies suggest that HESs are potential solutions to reduce the dependence of rural villages on fossil fuels and viable mechanisms to bring electrical power to isolated communities in Ecuador. Our results reveal that not only from the economic but also from the environmental point of view, for the case of the Galapagos province, a hybrid energy system with a PV\u2013wind\u2013battery configuration and a levelized cost of energy (LCOE) equal to 0.36 $/kWh is the optimal energy supply system. For the case of Morona Santiago, a hybrid energy system with a PV\u2013diesel\u2013battery configuration and an LCOE equal to 0.37 $/kWh is the most suitable configuration to meet the load of a typical isolated community in Ecuador. The proposed optimization model can be used as a decision-support tool for evaluating the viability of autonomous HES projects at any other location.", "which Country of study ?", "Ecuador", 412.0, 419.0], ["Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. 
Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and \u201cdisorder\u201d and \u201ctransmembrane\u201d analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein\u2014binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.", "which Country of study ?", "Brazil", 823.0, 829.0], ["A survey of anastomosis groups (AG) of Rhizoctonia spp. associated with potato diseases was conducted in South Africa. In total, 112 Rhizoctonia solani and 19 binucleate Rhizoctonia (BNR) isolates were recovered from diseased potato plants, characterized for AG and pathogenicity. The AG identity of the isolates was confirmed using phylogenetic analysis of the internal transcribed spacer region of ribosomal DNA. R. solani isolates recovered belonged to AG 3-PT, AG 2-2IIIB, AG 4HG-I, AG 4HG-III, and AG 5, while BNR isolates belonged to AG A and AG R, with frequencies of 74, 6.1, 2.3, 2.3, 0.8, 12.2, and 2.3%, respectively. R. solani AG 3-PT was the most predominant AG and occurred in all the potato-growing regions sampled, whereas the other AG occurred in distinct locations. Different AG grouped into distinct clades, with high maximum parsimony and maximum-likelihood bootstrap support for both R. solani and BNR. 
An experiment under greenhouse conditions with representative isolates from different AG showed differences in aggressiveness between and within AG. Isolates of AG 2-2IIIB, AG 4HG-III, and AG R were the most aggressive in causing stem canker while AG 3-PT, AG 5, and AG R caused black scurf. This is the first comprehensive survey of R. solani and BNR on potato in South Africa using a molecular-based approach. This is the first report of R. solani AG 2-2IIIB and AG 4 HG-I causing stem and stolon canker and BNR AG A and AG R causing stem canker and black scurf on potato in South Africa.", "which Study location ?", "South Africa", 105.0, 117.0], ["Abstract Objective: To investigate the association between dietary patterns (DP) and overweight risk in the Malaysian Adult Nutrition Surveys (MANS) of 2003 and 2014. Design: DP were derived from the MANS FFQ using principal component analysis. The cross-sectional association of the derived DP with prevalence of overweight was analysed. Setting: Malaysia. Participants: Nationally representative sample of Malaysian adults from MANS (2003, n 6928; 2014, n 3000). Results: Three major DP were identified for both years. These were \u2018Traditional\u2019 (fish, eggs, local cakes), \u2018Western\u2019 (fast foods, meat, carbonated beverages) and \u2018Mixed\u2019 (ready-to-eat cereals, bread, vegetables). A fourth DP was generated in 2003, \u2018Flatbread & Beverages\u2019 (flatbread, creamer, malted beverages), and 2014, \u2018Noodles & Meat\u2019 (noodles, meat, eggs). These DP accounted for 25\u00b76 and 26\u00b76 % of DP variations in 2003 and 2014, respectively. For both years, Traditional DP was significantly associated with rural households, lower income, men and Malay ethnicity, while Western DP was associated with younger age and higher income. Mixed DP was positively associated with women and higher income. None of the DP showed positive association with overweight risk, except for reduced adjusted odds of overweight with adherence to Traditional DP in 2003. Conclusions: Overweight could not be attributed to adherence to a single dietary pattern among Malaysian adults. This may be due to the constantly morphing dietary landscape in Malaysia, especially in urban areas, given the ease of availability and relative affordability of multi-ethnic and international foods. Timely surveys are recommended to monitor implications of these changes.", "which Study location ?", "Malaysia", 348.0, 356.0], ["magnitude of arsenic groundwater contamination, and its related health effects, in the Ganga-Meghna-Brahmaputra (GMB) plain\u2014an area of 569,749 km2, with a population of over 500 million, which largely comprises the flood plains of 3 major river systems that flow through India and Bangladesh. Design: On the basis of our 17-yr\u2013long study thus far, we report herein the magnitude of groundwater arsenic contamination, its health effects, results of our analyses of biological and food samples, and our investigation into sources of arsenic in the GMB plain. Setting: The GMB plain includes the following states in India: Uttar Pradesh in the upper and middle Ganga plain, Bihar and Jharkhand in the middle Ganga plain, West Bengal in the lower Ganga plain, and Assam in the upper Brahmaputra plain. The country of Bangladesh is located in the Padma-Meghna-Brahmaputra plain. In a preliminary study,1 we identified arsenic in water samples from hand-operated tubewells in the GMB plain. 
Levels in excess of 50 ppb (the permissible limit for arsenic in drinking water in India and Bangladesh) were found in samples from 51 villages in 3 arsenic-affected districts of Uttar Pradesh, 202 villages in 6 districts in Bihar, 11 villages in 1 district in Jharkhand, 3,500 villages in 9 (of a total of 18) districts in West Bengal, 2,000 villages in 50 (of a total of 64) districts in Bangladesh, and 17 villages in 2 districts in Assam. Study Populations: Because, over time, new regions of arsenic contamination have been found, affecting additional populations, the characteristics of our study subjects have varied widely. We feel that, even after working for 17 yr in the GMB plain, we have had only a glimpse of the full extent of the problem. Protocol: Thus far, on the GMB plain, we have analyzed 145,000 tubewell water samples from India and 52,000 from Bangladesh for arsenic contamination. In India, 3,781 villages had arsenic levels above 50 ppb and 5,380 villages had levels exceeding 10 ppb; in Bangladesh, the numbers were 2,000 and 2,450, respectively. We also analyzed 12,954 urine samples, 13,560 hair samples, 13,758 nail samples, and 1,300 skin scale samples from inhabitants of the arsenic-affected villages. Groundwater Arsenic Contamination in the Ganga-Padma-", "which Study location ?", "India", 270.0, 275.0], ["With ongoing research, increased information sharing and knowledge exchange, humanitarian organizations have an increasing amount of evidence at their disposal to support their decisions. Nevertheless, effectively building decisions on the increasing amount of insights and information remains challenging. At the individual, organizational, and environmental levels, various factors influence the use of evidence in the decision-making process. This research examined these factors and specifically their influence in a case-study on humanitarian organizations and their WASH interventions in Uganda. Interviewees reported several factors that impede the implementation of evidence-based decision making. Revealing that, despite advancements in the past years, evidence-based information itself is relatively small, contradictory, and non-repeatable. Moreover, the information is often not connected or in a format that can be acted upon. Most importantly, however, are the human aspects and organizational settings that limit access to and use of supporting data, information, and evidence. This research shows the importance of considering these factors, in addition to invest in creating knowledge and technologies to support evidence-based decision-making.", "which Study location ?", "Uganda", 594.0, 600.0], ["Importance Although increasingly strong evidence suggests a role of maternal total cholesterol and low-density lipoprotein cholesterol (LDLC) levels during pregnancy as a risk factor for atherosclerotic disease in the offspring, the underlying mechanisms need to be clarified for future clinical applications. Objective To test whether epigenetic signatures characterize early fetal atherogenesis associated with maternal hypercholesterolemia and to provide a quantitative estimate of the contribution of maternal cholesterol level to fetal lesion size. Design, Setting, and Participants This autopsy study analyzed 78 human fetal aorta autopsy samples from the Division of Human Pathology, Department of Advanced Biomedical Sciences, Federico II University of Naples, Naples, Italy. 
Maternal levels of total cholesterol, LDLC, high-density lipoprotein cholesterol (HDLC), triglycerides, and glucose and body mass index (BMI) were determined during hospitalization owing to spontaneous fetal death. Data were collected and immediately processed and analyzed to prevent degradation from January 1, 2011, through November 30, 2016. Main Outcomes and Measurements Results of DNA methylation and messenger RNA levels of the following genes involved in cholesterol metabolism were assessed: superoxide dismutase 2 (SOD2), low-density lipoprotein receptor (LDLR), sterol regulatory element binding protein 2 (SREBP2), liver X receptor \u03b1 (LXR\u03b1), and adenosine triphosphate\u2013binding cassette transporter 1 (ABCA1). Results Among the 78 fetal samples included in the analysis (59% male; mean [SD] fetal age, 25 [3] weeks), maternal cholesterol level explained a significant proportion of the fetal aortic lesion variance in multivariate analysis (61%; P = .001) independently by the effect of levels of HDLC, triglycerides, and glucose and BMI. Moreover, maternal total cholesterol and LDLC levels were positively associated with methylation of SREBP2 in fetal aortas (Pearson correlation, 0.488 and 0.503, respectively), whereas in univariate analysis, they were inversely correlated with SREBP2 messenger RNA levels in fetal aortas (Pearson correlation, \u22120.534 and \u22120.671, respectively). Epivariations of genes controlling cholesterol metabolism in cholesterol-treated human aortic endothelial cells were also observed. Conclusions and Relevance The present study provides a stringent quantitative estimate of the magnitude of the association of maternal cholesterol levels during pregnancy with fetal aortic lesions and reveals the epigenetic response of fetal aortic SREBP2 to maternal cholesterol level. The role of maternal cholesterol level during pregnancy and epigenetic signature in offspring in cardiovascular primary prevention warrants further long-term causal relationship studies.", "which Study location ?", "Italy", 777.0, 782.0], ["Abstract Following the introduction of unprecedented \u201cstay-at-home\u201d national policies, the COVID-19 pandemic recently started declining in Europe. Our research aims were to characterize the changepoint in the flow of the COVID-19 epidemic in each European country and to evaluate the association of the level of social distancing with the observed decline in the national epidemics. Interrupted time series analyses were conducted in 28 European countries. Social distance index was calculated based on Google Community Mobility Reports. Changepoints were estimated by threshold regression, national findings were analyzed by Poisson regression, and the effect of social distancing in mixed effects Poisson regression model. Our findings identified the most probable changepoints in 28 European countries. Before changepoint, incidence of new COVID-19 cases grew by 24% per day on average. From the changepoint, this growth rate was reduced to 0.9%, 0.3% increase, and to 0.7% and 1.7% decrease by increasing social distancing quartiles. The beneficial effect of higher social distance quartiles (i.e., turning the increase into decline) was statistically significant for the fourth quartile. Notably, many countries in lower quartiles also achieved a flat epidemic curve. In these countries, other plausible COVID-19 containment measures could contribute to controlling the first wave of the disease. 
The association of social distance quartiles with viral spread could also be hindered by local bottlenecks in infection control. Our results allow for moderate optimism related to the gradual lifting of social distance measures in the general population, and call for specific attention to the protection of focal micro-societies enriching high-risk elderly subjects, including nursing homes and chronic care facilities.", "which Study location ?", "Europe", 139.0, 145.0], ["A hyper-pure germanium (HPGe) detector was used to measure the activity concentrations in sediment samples of rivers in South Africa, and the associated radiological hazard indices were evaluated. The results of the study indicated that the mean activity concentrations of 226Ra, 232Th and 40K in the sediment samples from the oil-rich areas are 11.13, 7.57, 22.5; 5.51, 4.62, 125.02 and 7.60, 5.32, 24.12 for the Bree, Klein-Brak and Bakens Rivers, respectively. In contrast, the control site (UMngeni River) values were 4.13, 3.28, and 13.04 for 226Ra, 232Th, and 40K. The average excess lifetime cancer risks are 0.394 \u00d7 , 0.393 \u00d7 , 0.277 \u00d7 and 0.163 \u00d7 for sediment samples at Bree, Klein-Brak, Bakens, and uMngeni rivers. All obtained values indicated a significant difference between the natural radionuclide concentrations in the samples from the rivers in oil-rich areas compared to those of the non-oil-rich area. The values reported for the activity concentrations and radiological hazard indices were below the average world values; hence, the risk of radiation health hazard was negligible in all study areas.", "which Study location ?", "South Africa", 120.0, 132.0], ["OBJECTIVE To evaluate the social isolation index and the speed of new cases of Covid-19 in Brazil. METHODS Quantitative ecological, documentary, descriptive study using secondary data, comparing the period from March 14 to May 1, 2020, carried out with the 27 Brazilian federative units, characterizing the study population. The data were analyzed through descriptive statistics using the Statistical Package for the Social Sciences-SPSS\u00ae software, evaluating the correlation between the social isolation index and the number of new cases of Covid-19, using Pearson's correlation coefficient. RESULTS The increase in Covid-19 cases is exponential. There was a significant, negative correlation regarding the social isolation index and the speed of the number of new cases by Pearson's coefficient, which means that as the first one increases, the second one decreases. CONCLUSION Social isolation measures have significant effects on the rate of coronavirus infection in the population.", "which Study location ?", "Brazil", 91.0, 97.0], ["In e-participation platforms, citizens suggest, discuss and vote online for initiatives aimed to address a wide range of issues and problems in a city, such as economic development, public safety, budgets, infrastructure, housing, environment, social rights, and health care. For a particular citizen, the number of proposals and debates may be overwhelming, and recommender systems could help filtering and ranking those that are more relevant. 
Focusing on a particular case, the `Decide Madrid' platform, in this paper we empirically investigate which sources of user preferences and recommendation approaches could be more effective, in terms of several aspects, namely precision, coverage and diversity.", "which has been evaluated in the City ?", "Madrid", 488.0, 494.0], ["In this paper, we present electronic participatory budgeting (ePB) as a novel application domain for recommender systems. On public data from the ePB platforms of three major US cities - Cambridge, Miami and New York City-, we evaluate various methods that exploit heterogeneous sources and models of user preferences to provide personalized recommendations of citizen proposals. We show that depending on characteristics of the cities and their participatory processes, particular methods are more effective than others for each city. This result, together with open issues identified in the paper, call for further research in the area.", "which has been evaluated in the City ?", "Miami", 198.0, 203.0], ["With the advancement of smart city, the development of intelligent mobile terminal and wireless network, the traditional text information service no longer meet the needs of the community residents, community image service appeared as a new media service. \u201cThere are pictures of the truth\u201d has become a community residents to understand and master the new dynamic community, image information service has become a new information service. However, there are two major problems in image information service. Firstly, the underlying eigenvalues extracted by current image feature extraction techniques are difficult for users to understand, and there is a semantic gap between the image content itself and the user\u2019s understanding; secondly, in community life of the image data increasing quickly, it is difficult to find their own interested image data. Aiming at the two problems, this paper proposes a unified image semantic scene model to express the image content. On this basis, a collaborative filtering recommendation model of fusion scene semantics is proposed. In the recommendation model, a comprehensiveness and accuracy user interest model is proposed to improve the recommendation quality. The results of the present study have achieved good results in the pilot cities of Wenzhou and Yan'an, and it is applied normally.", "which has been evaluated in the City ?", "Wenzhou", 1285.0, 1292.0], ["In this paper, we present electronic participatory budgeting (ePB) as a novel application domain for recommender systems. On public data from the ePB platforms of three major US cities - Cambridge, Miami and New York City-, we evaluate various methods that exploit heterogeneous sources and models of user preferences to provide personalized recommendations of citizen proposals. We show that depending on characteristics of the cities and their participatory processes, particular methods are more effective than others for each city. This result, together with open issues identified in the paper, call for further research in the area.", "which has been evaluated in the City ?", "New York City", 208.0, 221.0], ["We propose a new dataset for the evaluation of food recognition algorithms that can be used in dietary monitoring applications. Each image depicts a real canteen tray with dishes and foods arranged in different ways. Each tray contains multiple instances of food classes. 
The dataset contains 1027 canteen trays for a total of 3616 food instances belonging to 73 food classes. The food on the tray images has been manually segmented using carefully drawn polygonal boundaries. We have benchmarked the dataset by designing an automatic tray analysis pipeline that takes a tray image as input, finds the regions of interest, and predicts for each region the corresponding food class. We have experimented with three different classification strategies using also several visual descriptors. We achieve about 79% of food and tray recognition accuracy using convolutional-neural-networks-based features. The dataset, as well as the benchmark framework, are available to the research community.", "which type ?", "Multi", NaN, NaN], ["We propose a mobile food recognition system the purposes of which are estimating calorie and nutritious of foods and recording a user's eating habits. Since all the processes on image recognition are performed on a smart-phone, the system does not need to send images to a server and runs on an ordinary smartphone in a real-time way. To recognize food items, a user draws bounding boxes by touching the screen first, and then the system starts food item recognition within the indicated bounding boxes. To recognize them more accurately, we segment each food item region by GrubCut, extract a color histogram and SURF-based bag-of-features, and finally classify it into one of the fifty food categories with linear SVM and fast \u03c72 kernel. In addition, the system estimates the direction of food regions where the higher SVM output score is expected to be obtained, show it as an arrow on the screen in order to ask a user to move a smartphone camera. This recognition process is performed repeatedly about once a second. We implemented this system as an Android smartphone application so as to use multiple CPU cores effectively for real-time recognition. In the experiments, we have achieved the 81.55% classification rate for the top 5 category candidates when the ground-truth bounding boxes are given. In addition, we obtained positive evaluation by user study compared to the food recording system without object recognition.", "which type ?", "Multi", NaN, NaN], ["We present a novel assistance system which supports anticipatory driving by means of fostering early deceleration. Upcoming technologies like Car2X communication provide information about a time interval which is currently uncovered. This information shall be used in the proposed system to inform drivers about future situations which require reduced speed. Such situations include traffic jams, construction sites or speed limits. The HMI is an optical output system based on line arrays of RGB-LEDs. Our contribution presents construction details as well as user evaluations. The results show an earlier deceleration of 3.9 \u2013 11.5 s and a shorter deceleration distance of 2 \u2013 166 m.", "which type ?", "Speed", 352.0, 357.0], ["In the past decade, there have been close to 350,000 fatal crashes in the United States [1]. With various improvements in traffic and vehicle safety, the number of such crashes is decreasing every year. One of the ways to reduce vehicle crashes is to prevent excessive speeding in the roads and highways. The paper aims to outline the design of an embedded system that will automatically control the speed of a motor vehicle based on its location determined by a GPS device. The embedded system will make use of an AVR ATMega128 microcontroller connected to an EM-406A GPS receiver. 
The large amount of location input data justifies the use of an ATMega128 microcontroller which has 128KB of programmable flash memory as well as 4KB SRAM, and a 4KB EEPROM Memory [2]. The output of the ATMega128 will be a DOGMI63W-A LCD module which will display information of the current and the set-point speed of the vehicle at the current position. A discrete indicator LED will flash at a pre-determined frequency when the speed of the vehicle has exceeded the recommended speed limit. Finally, the system will have outputs that will communicate with the Engine Control Unit (ECU) of the vehicle. For the limited scope of this project, the ECU is simulated as an external device with two inputs that will acknowledge pulse-trains of particular frequencies to limit the speed of a vehicle. The speed control system will be programmed using mixed language C and Assembly with the latter in use for some pre-written subroutines to drive the LCD module. The GPS module will transmit National Marine Electronics Association (NMEA) data strings to the microcontroller (MCU) using Serial Peripheral Interface (SPI). The MCU will use the location coordinates (latitude and longitude) and the speed from the NMEA RMC output string. The current speed is then compared against the recommended speed for the vehicle's location. The memory locations in the ATMega128 can be used to store set-point speed values against a particular set of location co-ordinates. Apart from its implementation in human operated vehicles, the project can be used to control speed of autonomous cars and to implement the idea of a variable speed limit on roads introduced by the Department of Transportation [3].", "which type ?", "Speed", 400.0, 405.0], ["In order to support drivers to maintain a predefined driving speed, we introduce ChaseLight, an in-car system that uses a programmable LED stripe mounted along the A-pillar of a car. The chase light (i.e., stripes of adjacent LEDs that are turned on and off frequently to give the illusion of lights moving along the stripe) provides ambient feedback to the driver about speed. We present a simulator based user study that uses three different types of feedback: (1) chase light with constant speed, (2) with proportional speed (i.e., chase light speed correlates with vehicle speed), and (3) with adaptive speed (i.e., chase light speed adapts to a target speed of the vehicle). Our results show that the adaptive condition is suited best to help a driver to control driving speed. The proportional speed condition resulted in a significantly slower mean speed than the baseline condition (no chase light).", "which type ?", "Speed", 61.0, 66.0], ["Electric Vehicles (EVs) are an emerging technology and open up an exciting new space for designing in-car interfaces. This technology enhances driving experience by a strong acceleration, regenerative breaking and especially a reduced noise level. However, engine vibrations and sound transmit valuable feedback to drivers of conventional cars, e.g. signaling that the engine is running and ready to go. We address this lack of feedback with Heartbeat, a multimodal electric vehicle information system. Heartbeat communicates (1) the state of the electric drive including energy flow and (2) the energy level of the batteries in a natural and experienceable way. We enhance the underlying Experience Design process by formulating working principles derived from an experience story in order to transport its essence throughout the following design phases. 
This way, we support the design of a consistent experience and resolve the tension between implementation constraints (e.g., space) and the persistence of the underlying story while building prototypes and integrating them into a technical environment (e.g., a dashboard).", "which type ?", "State", 534.0, 539.0], ["Looking at the right place at the right time is a critical component of driving skill. Therefore, gaze guidance has the potential to become a valuable driving assistance system. In previous work, we have already shown that complex gaze-contingent stimuli can guide attention and reduce the number of accidents in a simple driving simulator. We here set out to investigate whether cues that are simple enough to be implemented in a real car can also capture gaze during a more realistic driving task in a high-fidelity driving simulator. We used a state-of-the-art, wide-field-of-view driving simulator with an integrated eye tracker. Gaze-contingent warnings were implemented using two arrays of light-emitting diodes horizontally fitted below and above the simulated windshield. Thirteen volunteering subjects drove along predetermined routes in a simulated environment populated with autonomous traffic. Warnings were triggered during the approach to half of the intersections, cueing either towards the right or to the left. The remaining intersections were not cued, and served as controls. The analysis of the recorded gaze data revealed that the gaze-contingent cues did indeed have a gaze guiding effect, triggering a significant shift in gaze position towards the highlighted direction. This gaze shift was not accompanied by changes in driving behaviour, suggesting that the cues do not interfere with the driving task itself.", "which type ?", "Gaze", 98.0, 102.0], ["In demanding driving situations, the front-seat passenger can become a supporter of the driver by, e.g., monitoring the scene or providing hints about upcoming hazards or turning points. A fast and efficient communication of such spatial information can help the driver to react properly, with more foresight. As shown in previous research, this spatial referencing can be facilitated by providing the driver a visualization of the front-seat passenger's gaze. In this paper, we focus on the question how the gaze should be visualized for the driver, taking into account the feasibility of implementation in a real car. We present the results from a driving simulator study, where we compared an LED visualization (glowing LEDs on an LED stripe mounted at the bottom of the windshield, indicating the horizontal position of the gaze) with a visualization of the gaze as a dot in the simulated environment. Our results show that LED visualization comes with benefits with regard to driver distraction but also bears disadvantages with regard to accuracy and control for the front-seat passenger.", "which type ?", "Gaze", 455.0, 459.0], ["Resource Description Framework (RDF) is a general description technology that can be applied to many application domains. Redland is a flexible and efficient implementation of RDF that complements this power and provides high-level interfaces allowing instances of the model to be stored, queried and manipulated in C, Perl, Python, Tcl and other languages. Redland is implemented using an object-based API, providing several of the implementation classes as modules which can be added, removed or replaced to allow different functionality or application-specific optimisations. 
The framework provides the core technology for developing new RDF applications, experimenting with implementation techniques, APIs and representation issues.", "which Engine ?", "Redland", 122.0, 129.0], ["In this paper we analyzed the social attributes and political experience of the members of the Croatian Parliament in five assemblies. We established that the multiparty parliament, during the 18 years of its existence, was dominated by men, on average between 47 and 49 years of age, Croats, Catholics, highly educated, predominantly in the social sciences and humanities, and politicians with significant managerial and political experience acquired primarily during their work in political parties. Moreover, we found a relatively large fluctuation of parliamentarians, resulting in a lower level of parliamentary experience and a relatively short parliamentary career. Based on these indicators, it can be stated that in Croatia a socially homogeneous parliamentary elite was formed, one with a potentially lower level of political competence, and that patterns of political recruitment, coherent in tendency with those in the developed democratic countries, were established.", "which Location ?", "Croatia", 724.0, 731.0], ["This paper compares the development in four Central European parliaments (Czech Republic, Hungary, Poland, and Slovenia) in the second decade after the fall of communism. At the end of the first decade, the four parliaments could be considered stabilised, functional, independent and internally organised institutions. Attention is paid particularly to the changing institutional context and pressure of \u2018Europeanisation\u2019, the changing party strengths, and the functional and political consequences of these changes. Parliaments have been transformed from primary legislative to mediating and supervisory bodies. Though Central European parliaments have become stable in their structure and formal rules as well as in their professionalisation, at the end of the second decade their stability was threatened.", "which Location ?", "Czech Republic", 74.0, 88.0], ["BACKGROUND AND OBJECTIVE Patients who receive cochlear implants (CIs) constitute a significant population in Iran. This population needs regular monitoring of long-term outcomes, educational placement and quality of life. Currently, there is no national or regional registry on the long-term outcomes of CI users in Iran. The present study aims to introduce the design and implementation of a national patient-outcomes registry on CI recipients for Iran. This Iranian CI registry (ICIR) provides an integrated framework for data collection and sharing, scientific communication and collaboration in CI research. METHODS The national ICIR is a prospective patient-outcomes registry for patients who are implanted in one of the Iranian centers. The registry is based on an integrated database that utilizes a secure web-based platform to collect response data from clinicians and patient's proxy via electronic case report forms (e-CRFs) at predefined intervals. The CI candidates are evaluated with a set of standardized and non-standardized questionnaires prior to initial device activation (as baseline variables) and at three-month follow-up intervals up to 24 months and annually thereafter. RESULTS The software application of the ICIR registry is designed in a user-friendly graphical interface with different entry fields. 
The collected data are categorized into four subsets including personal information, clinical data, surgery data and commission results. The main parameters include audiometric performance of the patient, device use, patient comorbidities, quality of life and health-related utilities, across different types of CI devices from different manufacturers. CONCLUSION The ICIR database could be used by the growing network of CI centers in Iran. Clinicians, academic and industrial researchers as well as healthcare policy makers could use this database to develop more effective CI devices and better management of the recipients as well as to develop national guidelines.", "which Location ?", "Iran", 110.0, 114.0], ["Registration systems for diseases and other health outcomes provide an important resource for biomedical research, as well as tools for public health surveillance and improvement of quality of care. The Ministry of Health and Medical Education (MOHME) of Iran launched a national program to establish registration systems for different diseases and health outcomes. Based on the national program, we organized several workshops and training programs and disseminated the concepts and knowledge of the registration systems. Following a call for proposals, we received 100 applications and after thorough evaluation and corrections by the principal investigators, we approved and granted about 80 registries for three years. Having a strong steering committee, a committed executive and scientific group, establishing national and international collaboration, stating clear objectives, applying feasible software, and considering stable financing were key components for a successful registry and were considered in the evaluation processes. We paid particular attention to non-communicable diseases, which constitute an emerging public health problem. We prioritized establishment of regional population-based cancer registries (PBCRs) in 10 provinces in collaboration with the International Agency for Research on Cancer. 
This initiative was successful and registry programs became popular among researchers and research centers and created several national and international collaborations in different areas to answer important public health and clinical questions. In this paper, we report the details of the program and the list of registries that were granted in the first round.", "which Location ?", "Iran", 252.0, 256.0], ["Background/Aims A recent study revealed increasing incidence and prevalence of inflammatory bowel disease (IBD) in Iran. The Iranian Registry of Crohn\u2019s and Colitis (IRCC) was designed recently to answer the needs. We reported the design, methods of data collection, and aims of IRCC in this paper. Methods IRCC is a multicenter prospective registry, which was established with the collaboration of more than 100 gastroenterologists from different provinces of Iran. The minimum data set for IRCC was defined according to an international consensus on a standard set of outcomes for IBD. A pilot feasibility study was performed on 553 IBD patients with a web-based questionnaire. The reliability of the questionnaire was evaluated by Cronbach\u2019s \u03b1. Results All sections of the questionnaire had a Cronbach\u2019s \u03b1 of more than 0.6. In the pilot study, 312 of the participants (56.4%) were male and the mean age was 38 years (standard deviation=12.8) and 378 patients (68.35%) had ulcerative colitis, 303 subjects (54.7%) had college education and 358 patients (64.74%) were of Fars ethnicity. We found that 68 (12.3%), 44 (7.9%), and 13 (2.3%) of the participants were smokers, hookah and opium users, respectively. History of appendectomy was reported in 58 of the patients (10.48%). The most common medication was 5-aminosalicylate (94.39%). Conclusions To the best of our knowledge, IRCC is the first national IBD registry in the Middle East and could become a reliable infrastructure for national and international research on IBD. IRCC will improve the quality of care of IBD patients and provide national information for policy makers to better plan for controlling IBD in Iran.", "which Location ?", "Iran", 115.0, 119.0], ["Sources and implications of genetic diversity in agamic complexes are still under debate. Population studies (amplified fragment length polymorphisms, microsatellites) and karyological methods (Feulgen DNA image densitometry and flow cytometry) were employed for characterization of genetic diversity and ploidy levels of 10 populations of Ranunculus carpaticola in central Slovakia. Whereas two diploid populations showed high levels of genetic diversity, as expected for sexual reproduction, eight populations are hexaploid and harbour lower degrees of genotypic variation, but maintain high levels of heterozygosity at many loci, as is typical for apomicts. Polyploid populations consist either of a single AFLP genotype or of one dominant and a few deviating genotypes. genotype/genodive and character incompatibility analyses suggest that genotypic variation within apomictic populations is caused by mutations, but in one population probably also by recombination. This local facultative sexuality may have a great impact on regional genotypic diversity. Two microsatellite loci discriminated genotypes separated by the accumulation of few mutations (\u2018clone mates\u2019) within each AFLP clone. Genetic diversity is partitioned mainly among apomictic populations and is not geographically structured, which may be due to facultative sexuality and/or multiple colonizations of sites by different clones. 
Habitat differentiation and a tendency to inhabit artificial meadows are more pronounced in apomictic than in sexual populations. We hypothesize that maintenance of genetic diversity and superior colonizing abilities of apomicts in temporally and spatially heterogeneous environments are important for their distributional success.", "which Location ?", "Central Slovakia", 366.0, 382.0], ["BACKGROUND AND AIMS Pilosella officinarum (syn. Hieracium pilosella) is a highly structured species with respect to the ploidy level, with obvious cytogeographic trends. Previous non-collated data indicated a possible differentiation in the frequency of particular ploidy levels in the Czech Republic and Slovakia. Therefore, detailed sampling and ploidy level analyses were assessed to reveal a boundary of common occurrence of tetraploids on one hand and higher ploids on the other. For a better understanding of cytogeographic differentiation of P. officinarum in central Europe, a search was made for a general cytogeographic pattern in Europe based on published data. METHODS DNA-ploidy level and/or chromosome number were identified for 1059 plants using flow cytometry and/or chromosome counting on root meristem preparations. Samples were collected from 336 localities in the Czech Republic, Slovakia and north-eastern Hungary. In addition, ploidy levels were determined for plants from 18 localities in Bulgaria, Georgia, Ireland, Italy, Romania and Ukraine. KEY RESULTS Four ploidy levels were found in the studied area with a contrasting pattern of distribution. The most widespread cytotype in the western part of the Czech Republic is tetraploid (4x) reproducing sexually, while the apomictic pentaploids and mostly apomictic hexaploids (5x and 6x, respectively) clearly prevail in Slovakia and the eastern part of the Czech Republic. The boundary between common occurrence of tetraploids and higher ploids is very obvious and represents the geomorphologic boundary between the Bohemian Massif and the Western Carpathians with the adjacent part of Pannonia. Mixed populations consisting of two different ploidy levels were recorded in nearly 11% of localities. A statistically significant difference in a vertical distribution of penta- and hexaploids was observed in the Western Carpathians and the adjacent Pannonian Plain. Hexaploid populations tend to occur at lower elevations (usually below 500 m), while the pentaploid level is more or less evenly distributed up to 1000 m a.s.l. For the first time the heptaploid level (7x) was found on one site in Slovakia. In Europe, the sexual tetraploid level has clearly a sub-Atlantic character of distribution. The plants of higher ploidy level (penta- and hexa-) with mostly apomictic reproduction prevail in the northern part of Scandinavia and the British Isles, the Alps and the Western Carpathians with the adjacent part of Pannonia. A detailed overview of published data shows that extremely rare records on existence of diploid populations in the south-west Alps are with high probability erroneous and most probably refer to the closely related diploid species P. peleteriana. CONCLUSIONS The recent distribution of P. officinarum in Europe is complex and probably reflects the climatic changes during the Pleistocene and consequent postglacial migrations. Probably both penta- and hexaploids arose independently in central Europe (Alps and Carpathian Mountains) and in northern Europe (Scandinavia, Great Britain, Ireland), where the apomictic plants colonized deglaciated areas. 
We suggest that P. officinarum is in fact an amphidiploid species with a basic tetraploid level, which probably originated from hybridizations of diploid taxa from the section Pilosellina.", "which Location ?", "Czech Republic, Slovakia", 884.0, 908.0], ["To achieve high accuracy in prediction, a load forecasting algorithm must model various consumer behaviors in response to weather conditions or special events. Different triggers will have various effects on different customers and lead to difficulties in constructing an adequate prediction model due to non-stationary and uncertain characteristics in load variations. This paper proposes an open-ended model of short-term load forecasting (STLF) which has general prediction ability to capture the non-linear relationship between the load demand and the exogenous inputs. The prediction method uses the whale optimization algorithm, discrete wavelet transform, and multiple linear regression model (WOA-DWT-MLR model) to predict both system load and aggregated load of power consumers. WOA is used to optimize the best combination of detail and approximation signals from DWT to construct an optimal MLR model. The proposed model is validated with both the system-side data set and the end-user data set for Independent System Operator-New England (ISO-NE) and smart meter load data, respectively, based on the Mean Absolute Percentage Error (MAPE) criterion. The results demonstrate that the proposed method achieves lower prediction error than existing methods and can have consistent prediction of non-stationary load conditions that exist in both test systems. The proposed method is, thus, beneficial to use in the energy management system.", "which Location ?", "New England", 1046.0, 1057.0], ["ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.", "which has smart city instance ?", "Konza", 395.0, 400.0], ["ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. 
However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.", "which has smart city instance ?", "Amsterdam", 223.0, 232.0], ["ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.", "which has smart city instance ?", "Barcelona", 234.0, 243.0], ["ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.", "which has smart city instance ?", "Songdo", 348.0, 354.0], ["ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. 
However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.", "which has smart city instance ?", "Skolkovo", 338.0, 346.0], ["ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.", "which has smart city instance ?", "Stockholm", 268.0, 277.0], ["ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.", "which has smart city instance ?", "London", 245.0, 251.0], ["ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. 
However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.", "which has smart city instance ?", "New York", 365.0, 373.0], ["ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.", "which has smart city instance ?", "Rio de Janeiro", 375.0, 389.0], ["ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.", "which has smart city instance ?", "Cyberjaya", 279.0, 288.0], ["ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. 
However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.", "which has smart city instance ?", "Singapore", 290.0, 299.0], ["ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.", "which has smart city instance ?", "King Abdullah Economic City", 301.0, 328.0], ["Abstract. In 2017 we published a seminal research study in the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences about how smart city tools, solutions and applications underpinned historical and cultural heritage of cities at that time (Angelidou et al. 2017). We now return to investigate the progress that has been made during the past three years, and specifically whether the weak substantiation of cultural heritage in smart city strategies that we observed in 2017 has been improved. The newest literature suggests that smart cities should capitalize on local strengths and give prominence to local culture and traditions and provides a handful of solutions to this end. However, a more thorough examination of what has been actually implemented reveals a (still) rather immature approach. The smart city cases that were selected for the purposes of this research include Tarragona (Spain), Budapest (Hungary) and Karlsruhe (Germany). For each one we collected information regarding the overarching structure of the initiative, the positioning of cultural heritage and the inclusion of heritage-related smart city applications. We then performed a comparative analysis based on a simplified version of the Digital Strategy Canvas. Our findings suggest that a rich cultural heritage and a broader strategic focus on touristic branding and promotion are key ingredients of smart city development in this domain; this is a commonality of all the investigated cities. Moreover, three different strategy architectures emerge, representing the different interplays among the smart city, cultural heritage and sustainable urban development. We conclude that a new generation of smart city initiatives is emerging, in which cultural heritage is of increasing importance. 
This generation tends to associate cultural heritage with social and cultural values, liveability and sustainable urban development.", "which has smart city instance ?", "Karlsruhe (Germany)", null, null], ["This paper describes a dynamic framework for an atmospheric general circulation spectral model in which a reference stratified atmospheric temperature and a reference surface pressure are introduced into the governing equations so as to improve the calculation of the pressure gradient force and gradients of surface pressure and temperature. The vertical profile of the reference atmospheric temperature approximately corresponds to that of the U.S. midlatitude standard atmosphere within the troposphere and stratosphere, and the reference surface pressure is a function of surface terrain geopotential and is close to the observed mean surface pressure. Prognostic variables for the temperature and surface pressure are replaced by their perturbations from the prescribed references. The numerical algorithms of the explicit time difference scheme for vorticity and the semi-implicit time difference scheme for divergence, perturbation temperature, and perturbation surface pressure equation are given in detail. The modified numerical framework is implemented in the Community Atmosphere Model version 3 (CAM3) developed at the National Center for Atmospheric Research (NCAR) to test its validation and impact on simulated climate. Both the original and the modified models are run with the same spectral resolution (T42), the same physical parameterizations, and the same boundary conditions corresponding to the observed monthly mean sea surface temperature and sea ice concentration from 1971 to 2000. This permits one to evaluate the performance of the new dynamic framework compared to the commonly used one. Results show that there is a general improvement for the simulated climate at regional and global scales, especially for temperature and wind.", "which Earth System Model ?", "Sea Ice", 1468.0, 1475.0], ["Abstract The authors describe carbon system formulation and simulation characteristics of two new global coupled carbon\u2013climate Earth System Models (ESM), ESM2M and ESM2G. These models demonstrate good climate fidelity as described in part I of this study while incorporating explicit and consistent carbon dynamics. The two models differ almost exclusively in the physical ocean component; ESM2M uses the Modular Ocean Model version 4.1 with vertical pressure layers, whereas ESM2G uses generalized ocean layer dynamics with a bulk mixed layer and interior isopycnal layers. On land, both ESMs include a revised land model to simulate competitive vegetation distributions and functioning, including carbon cycling among vegetation, soil, and atmosphere. In the ocean, both models include new biogeochemical algorithms including phytoplankton functional group dynamics with flexible stoichiometry. Preindustrial simulations are spun up to give stable, realistic carbon cycle means and variability. Significant differences...", "which Earth System Model ?", "Atmosphere", 742.0, 752.0], ["This paper describes a dynamic framework for an atmospheric general circulation spectral model in which a reference stratified atmospheric temperature and a reference surface pressure are introduced into the governing equations so as to improve the calculation of the pressure gradient force and gradients of surface pressure and temperature. 
The vertical profile of the reference atmospheric temperature approximately corresponds to that of the U.S. midlatitude standard atmosphere within the troposphere and stratosphere, and the reference surface pressure is a function of surface terrain geopotential and is close to the observed mean surface pressure. Prognostic variables for the temperature and surface pressure are replaced by their perturbations from the prescribed references. The numerical algorithms of the explicit time difference scheme for vorticity and the semi-implicit time difference scheme for divergence, perturbation temperature, and perturbation surface pressure equation are given in detail. The modified numerical framework is implemented in the Community Atmosphere Model version 3 (CAM3) developed at the National Center for Atmospheric Research (NCAR) to test its validation and impact on simulated climate. Both the original and the modified models are run with the same spectral resolution (T42), the same physical parameterizations, and the same boundary conditions corresponding to the observed monthly mean sea surface temperature and sea ice concentration from 1971 to 2000. This permits one to evaluate the performance of the new dynamic framework compared to the commonly used one. Results show that there is a general improvement for the simulated climate at regional and global scales, especially for temperature and wind.", "which Earth System Model ?", "Atmosphere", 472.0, 482.0], ["Abstract The formulation and simulation characteristics of two new global coupled climate models developed at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL) are described. The models were designed to simulate atmospheric and oceanic climate and variability from the diurnal time scale through multicentury climate change, given our computational constraints. In particular, an important goal was to use the same model for both experimental seasonal to interannual forecasting and the study of multicentury global climate change, and this goal has been achieved. Two versions of the coupled model are described, called CM2.0 and CM2.1. The versions differ primarily in the dynamical core used in the atmospheric component, along with the cloud tuning and some details of the land and ocean components. For both coupled models, the resolution of the land and atmospheric components is 2\u00b0 latitude \u00d7 2.5\u00b0 longitude; the atmospheric model has 24 vertical levels. The ocean resolution is 1\u00b0 in latitude and longitude, wi...", "which Earth System Model ?", "Ocean", 788.0, 793.0], ["Abstract. The core version of the Norwegian Climate Center's Earth System Model, named NorESM1-M, is presented. The NorESM family of models are based on the Community Climate System Model version 4 (CCSM4) of the University Corporation for Atmospheric Research, but differs from the latter by, in particular, an isopycnic coordinate ocean model and advanced chemistry\u2013aerosol\u2013cloud\u2013radiation interaction schemes. NorESM1-M has a horizontal resolution of approximately 2\u00b0 for the atmosphere and land components and 1\u00b0 for the ocean and ice components. NorESM is also available in a lower resolution version (NorESM1-L) and a version that includes prognostic biogeochemical cycling (NorESM1-ME). The latter two model configurations are not part of this paper. Here, a first-order assessment of the model stability, the mean model state and the internal variability based on the model experiments made available to CMIP5 are presented. 
Further analysis of the model performance is provided in an accompanying paper (Iversen et al., 2013), presenting the corresponding climate response and scenario projections made with NorESM1-M.", "which Earth System Model ?", "Ocean", 333.0, 338.0], ["This document describes the CMCC Earth System Model (ESM) for the representation of the carbon cycle in the atmosphere, land, and ocean system. The structure of the report follows the software architecture of the full system. It is intended to give a technical description of the numerical models at the base of the ESM, and how they are coupled with each other.", "which Earth System Model ?", "Ocean", 130.0, 135.0], ["Abstract. The core version of the Norwegian Climate Center's Earth System Model, named NorESM1-M, is presented. The NorESM family of models are based on the Community Climate System Model version 4 (CCSM4) of the University Corporation for Atmospheric Research, but differs from the latter by, in particular, an isopycnic coordinate ocean model and advanced chemistry\u2013aerosol\u2013cloud\u2013radiation interaction schemes. NorESM1-M has a horizontal resolution of approximately 2\u00b0 for the atmosphere and land components and 1\u00b0 for the ocean and ice components. NorESM is also available in a lower resolution version (NorESM1-L) and a version that includes prognostic biogeochemical cycling (NorESM1-ME). The latter two model configurations are not part of this paper. Here, a first-order assessment of the model stability, the mean model state and the internal variability based on the model experiments made available to CMIP5 are presented. Further analysis of the model performance is provided in an accompanying paper (Iversen et al., 2013), presenting the corresponding climate response and scenario projections made with NorESM1-M.", "which Earth System Model ?", "Atmosphere", 479.0, 489.0], ["This document describes the CMCC Earth System Model (ESM) for the representation of the carbon cycle in the atmosphere, land, and ocean system. The structure of the report follows the software architecture of the full system. It is intended to give a technical description of the numerical models at the base of the ESM, and how they are coupled with each other.", "which Earth System Model ?", "Atmosphere", 108.0, 118.0], ["4OASIS3.2\u20135 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. 
The global average annual mean surface air temperature across the 500-year preindustrial control integrations shows a warming drift of 0.35 \u00b0C in ACCESS1.0 and 0.04 \u00b0C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.", "which Earth System Model ?", "Ocean", 566.0, 571.0], ["Abstract The authors describe carbon system formulation and simulation characteristics of two new global coupled carbon\u2013climate Earth System Models (ESM), ESM2M and ESM2G. These models demonstrate good climate fidelity as described in part I of this study while incorporating explicit and consistent carbon dynamics. The two models differ almost exclusively in the physical ocean component; ESM2M uses the Modular Ocean Model version 4.1 with vertical pressure layers, whereas ESM2G uses generalized ocean layer dynamics with a bulk mixed layer and interior isopycnal layers. On land, both ESMs include a revised land model to simulate competitive vegetation distributions and functioning, including carbon cycling among vegetation, soil, and atmosphere. In the ocean, both models include new biogeochemical algorithms including phytoplankton functional group dynamics with flexible stoichiometry. Preindustrial simulations are spun up to give stable, realistic carbon cycle means and variability. Significant differences...", "which Earth System Model ?", "Ocean", 373.0, 378.0], ["Abstract A full description of the ModelE version of the Goddard Institute for Space Studies (GISS) atmospheric general circulation model (GCM) and results are presented for present-day climate simulations (ca. 1979). This version is a complete rewrite of previous models incorporating numerous improvements in basic physics, the stratospheric circulation, and forcing fields. Notable changes include the following: the model top is now above the stratopause, the number of vertical layers has increased, a new cloud microphysical scheme is used, vegetation biophysics now incorporates a sensitivity to humidity, atmospheric turbulence is calculated over the whole column, and new land snow and lake schemes are introduced. The performance of the model using three configurations with different horizontal and vertical resolutions is compared to quality-controlled in situ data, remotely sensed and reanalysis products. Overall, significant improvements over previous models are seen, particularly in upper-atmosphere te...", "which Earth System Model ?", "Atmosphere", 1007.0, 1017.0], ["4OASIS3.2\u20135 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. 
While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations shows a warming drift of 0.35 \u00b0C in ACCESS1.0 and 0.04 \u00b0C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.", "which Earth System Model ?", "Atmosphere", 963.0, 973.0], ["Abstract A full description of the ModelE version of the Goddard Institute for Space Studies (GISS) atmospheric general circulation model (GCM) and results are presented for present-day climate simulations (ca. 1979). This version is a complete rewrite of previous models incorporating numerous improvements in basic physics, the stratospheric circulation, and forcing fields. Notable changes include the following: the model top is now above the stratopause, the number of vertical layers has increased, a new cloud microphysical scheme is used, vegetation biophysics now incorporates a sensitivity to humidity, atmospheric turbulence is calculated over the whole column, and new land snow and lake schemes are introduced. The performance of the model using three configurations with different horizontal and vertical resolutions is compared to quality-controlled in situ data, remotely sensed and reanalysis products. Overall, significant improvements over previous models are seen, particularly in upper-atmosphere te...", "which Earth System Model ?", "Atmosphere", 1007.0, 1017.0], ["4OASIS3.2\u20135 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. 
The global average annual mean surface air temperature across the 500-year preindustrial control integrations shows a warming drift of 0.35 \u00b0C in ACCESS1.0 and 0.04 \u00b0C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.", "which Earth System Model ?", "Land Surface", 706.0, 718.0], ["The NCEP Climate Forecast System Reanalysis (CFSR) was completed for the 31-yr period from 1979 to 2009, in January 2010. The CFSR was designed and executed as a global, high-resolution coupled atmosphere\u2013ocean\u2013land surface\u2013sea ice system to provide the best estimate of the state of these coupled domains over this period. The current CFSR will be extended as an operational, real-time product into the future. New features of the CFSR include 1) coupling of the atmosphere and ocean during the generation of the 6-h guess field, 2) an interactive sea ice model, and 3) assimilation of satellite radiances by the Gridpoint Statistical Interpolation (GSI) scheme over the entire period. The CFSR global atmosphere resolution is ~38 km (T382) with 64 levels extending from the surface to 0.26 hPa. The global ocean's latitudinal spacing is 0.25\u00b0 at the equator, extending to a global 0.5\u00b0 beyond the tropics, with 40 levels to a depth of 4737 m. The global land surface model has four soil levels and the global sea ice m...", "which Earth System Model ?", "Ocean", 205.0, 210.0], ["Abstract A full description of the ModelE version of the Goddard Institute for Space Studies (GISS) atmospheric general circulation model (GCM) and results are presented for present-day climate simulations (ca. 1979). This version is a complete rewrite of previous models incorporating numerous improvements in basic physics, the stratospheric circulation, and forcing fields. Notable changes include the following: the model top is now above the stratopause, the number of vertical layers has increased, a new cloud microphysical scheme is used, vegetation biophysics now incorporates a sensitivity to humidity, atmospheric turbulence is calculated over the whole column, and new land snow and lake schemes are introduced. The performance of the model using three configurations with different horizontal and vertical resolutions is compared to quality-controlled in situ data, remotely sensed and reanalysis products. Overall, significant improvements over previous models are seen, particularly in upper-atmosphere te...", "which Earth System Model ?", "Atmosphere", 1007.0, 1017.0], ["4OASIS3.2\u20135 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. 
While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations shows a warming drift of 0.35 \u00b0C in ACCESS1.0 and 0.04 \u00b0C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.", "which Earth System Model ?", "Atmosphere", 963.0, 973.0], ["Abstract. We describe here the development and evaluation of an Earth system model suitable for centennial-scale climate prediction. The principal new components added to the physical climate model are the terrestrial and ocean ecosystems and gas-phase tropospheric chemistry, along with their coupled interactions. The individual Earth system components are described briefly and the relevant interactions between the components are explained. Because the multiple interactions could lead to unstable feedbacks, we go through a careful process of model spin up to ensure that all components are stable and the interactions balanced. This spun-up configuration is evaluated against observed data for the Earth system components and is generally found to perform very satisfactorily. The reason for the evaluation phase is that the model is to be used for the core climate simulations carried out by the Met Office Hadley Centre for the Coupled Model Intercomparison Project (CMIP5), so it is essential that addition of the extra complexity does not detract substantially from its climate performance. Localised changes in some specific meteorological variables can be identified, but the impacts on the overall simulation of present day climate are slight. This model is proving valuable both for climate predictions, and for investigating the strengths of biogeochemical feedbacks.", "which Earth System Model ?", "Ocean", 222.0, 227.0], ["[1] Soils within the impact crater Goldschmidt have been identified as spectrally distinct from the local highland material. High spatial and spectral resolution data from the Moon Mineralogy Mapper (M3) on the Chandrayaan-1 orbiter are used to examine the character of Goldschmidt crater in detail. Spectral parameters applied to a north polar mosaic of M3 data are used to discern large-scale compositional trends at the northern high latitudes, and spectra from three widely separated regions are compared to spectra from Goldschmidt. The results highlight the compositional diversity of the lunar nearside, in particular, where feldspathic soils with a low-Ca pyroxene component are pervasive, but exclusively feldspathic regions and small areas of basaltic composition are also observed. 
Additionally, we find that the relative strengths of the diagnostic OH/H2O absorption feature near 3000 nm are correlated with the mineralogy of the host material. On both global and local scales, the strongest hydrous absorptions occur on the more feldspathic surfaces. Thus, M3 data suggest that while the feldspathic soils within Goldschmidt crater are enhanced in OH/H2O compared to the relatively mafic nearside polar highlands, their hydration signatures are similar to those observed in the feldspathic highlands on the farside.", "which Study Area ?", " Goldschmidt crater", 269.0, 288.0], ["The Orientale basin is a multiring impact structure on the western limb of the Moon that provides a clear view of the primary lunar crust exposed during basin formation. Previously, near\u2010infrared reflectance spectra suggested that Orientale's Inner Rook Ring (IRR) is very poor in mafic minerals and may represent anorthosite excavated from the Moon's upper crust. However, detailed assessment of the mineralogy of these anorthosites was prohibited because the available spectroscopic data sets did not identify the diagnostic plagioclase absorption feature near 1250 nm. Recently, however, this absorption has been identified in several spectroscopic data sets, including the Moon Mineralogy Mapper (M3), enabling the unique identification of a plagioclase\u2010dominated lithology at Orientale for the first time. Here we present the first in\u2010depth characterization of the Orientale anorthosites based on direct measurement of their plagioclase component. In addition, detailed geologic context of the exposures is discussed based on analysis of Lunar Reconnaissance Orbiter Narrow Angle Camera images for selected anorthosite identifications. The results confirm that anorthosite is overwhelmingly concentrated in the IRR. Comparison with nonlinear spectral mixing models suggests that the anorthosite is exceedingly pure, containing >95 vol % plagioclase in most areas and commonly ~99\u2013100 vol %. These new data place important constraints on magma ocean crystallization scenarios, which must produce a zone of highly pure anorthosite spanning the entire lateral extent of the 430 km diameter IRR.", "which Study Area ?", " Orientale basin", 3.0, 19.0], ["Hyperspectral Imaging (HSI) is used to provide a wealth of information which can be used to address a variety of problems in different applications. The main requirement in all applications is the classification of HSI data. In this paper, supervised HSI classification algorithms are used to extract agriculture areas that specialize in wheat growing and get a classified image. In particular, Parallelepiped and Spectral Angle Mapper (SAM) algorithms are used. They are implemented by a software tool used to analyse and process geospatial images that is the Environment for Visualizing Images (ENVI). They are applied to Al-Kharj, Saudi Arabia as the study area. The overall accuracy after applying the algorithms on the image of the study area for SAM classification was 66.67%, and 33.33% for Parallelepiped classification. Therefore, the SAM algorithm has provided a better study area image classification.", "which Study Area ?", " Al-kharj, Saudi Arabia", 621.0, 644.0], ["Observations by the Mars Reconnaissance Orbiter/Compact Reconnaissance Imaging Spectrometer for Mars in the Mawrth Vallis region show several phyllosilicate species, indicating a wide range of past aqueous activity. 
Iron/magnesium (Fe/Mg)\u2013smectite is observed in light-toned outcrops that probably formed via aqueous alteration of basalt of the ancient cratered terrain. This unit is overlain by rocks rich in hydrated silica, montmorillonite, and kaolinite that may have formed via subsequent leaching of Fe and Mg through extended aqueous events or a change in aqueous chemistry. A spectral feature attributed to an Fe2+ phase is present in many locations in the Mawrth Vallis region at the transition from Fe/Mg-smectite to aluminum/silicon (Al/Si)\u2013rich units. Fe2+-bearing materials in terrestrial sediments are typically associated with microorganisms or changes in pH or cations and could be explained here by hydrothermal activity. The stratigraphy of Fe/Mg-smectite overlain by a ferrous phase, hydrated silica, and then Al-phyllosilicates implies a complex aqueous history.", "which Study Area ?", "Mawrth Vallis", 108.0, 121.0], ["The Takab area, located in north\u2010west Iran, is an important gold mineralized region with a long history of gold mining. The gold is associated with toxic metals/metalloids. In this study, Advanced Space Borne Thermal Emission and Reflection Radiometer data are evaluated for mapping gold and base\u2010metal mineralization through alteration mapping. Two different methods are used for argillic and silicic alteration mapping: selective principal\u2010component analysis and matched filter processing (MF). Running a selective principal\u2010component analysis using the main spectral characteristics of key alteration minerals enhanced the altered areas in PC2. MF using spectral library and laboratory spectra of the study area samples gave similar results. However, MF, using the image reference spectra from principal component (PC) images, produced the best results and indicated the advantage of using image spectra rather than library spectra in spectral mapping techniques. It seems that argillic alteration is more effective than silicic alteration for exploration purposes. It is suggested that alteration mapping can also be used to delineate areas contaminated by potentially toxic metals.", "which Study Area ?", "Iran", 38.0, 42.0], ["Orbital data acquired by the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) and High Resolution Imaging Science Experiment instruments on the Mars Reconnaissance Orbiter (MRO) provide a synoptic view of compositional stratigraphy on the floor of Gale crater surrounding the area where the Mars Science Laboratory (MSL) Curiosity landed. Fractured, light\u2010toned material exhibits a 2.2 \u00b5m absorption consistent with enrichment in hydroxylated silica. This material may be distal sediment from the Peace Vallis fan, with cement and fracture fill containing the silica. This unit is overlain by more basaltic material, which has 1 \u00b5m and 2 \u00b5m absorptions due to pyroxene that are typical of Martian basaltic materials. Both materials are partially obscured by aeolian dust and basaltic sand. Dunes to the southeast exhibit differences in mafic mineral signatures, with barchan dunes enhanced in olivine relative to pyroxene\u2010containing longitudinal dunes. This compositional difference may be related to aeolian grain sorting.", "which Study Area ?", "Gale crater", 260.0, 271.0], ["The Spectral Profiler (SP) onboard the Japanese SELENE (KAGUYA) spacecraft is now providing global high spectral resolution visible\u2010near infrared continuous reflectance spectra of the Moon. 
The reflectance spectra of impact craters on the farside of the Moon reveal lithologies that were not previously identified. The achievements of SP so far include: the most definite detection of crystalline iron\u2010bearing plagioclase with its characteristic 1.3 \u03bcm absorption band on the Moon; a new interpretation of the lithology of Tsiolkovsky crater central peaks, previously classified as \u201colivine\u2010rich,\u201d as mixtures of plagioclase and pyroxene; and the lower limit of Mg number of low\u2010Ca pyroxene found at Antoniadi crater central peak and peak ring which were estimated through direct comparison with laboratory spectra of natural and synthetic pyroxene samples.", "which Study Area ?", "Antoniadi crater", 700.0, 716.0], ["The Spectral Profiler (SP) onboard the Japanese SELENE (KAGUYA) spacecraft is now providing global high spectral resolution visible\u2010near infrared continuous reflectance spectra of the Moon. The reflectance spectra of impact craters on the farside of the Moon reveal lithologies that were not previously identified. The achievements of SP so far include: the most definite detection of crystalline iron\u2010bearing plagioclase with its characteristic 1.3 \u03bcm absorption band on the Moon; a new interpretation of the lithology of Tsiolkovsky crater central peaks, previously classified as \u201colivine\u2010rich,\u201d as mixtures of plagioclase and pyroxene; and the lower limit of Mg number of low\u2010Ca pyroxene found at Antoniadi crater central peak and peak ring which were estimated through direct comparison with laboratory spectra of natural and synthetic pyroxene samples.", "which Study Area ?", "Tsiolkovsky crater", 523.0, 541.0], ["We present details of the identification of sites that show an absorption band at visible wavelengths and a strong 2 \u03bcm band using the SELENE Spectral Profiler. All the sites exhibiting the visible feature are found on the regional dark mantle deposit (DMD) at Sinus Aestuum. All the instances of the visible feature show a strong 2 \u03bcm band, suggestive of Fe\u2010 and Cr\u2010rich spinels, which are different from previously detected Mg\u2010rich spinel. Since no visible feature is observed in other DMDs, the DMD at Sinus Aestuum is unique on the Moon. The occurrence trend of the spinels at Sinus Aestuum is also different from that of the Mg\u2010rich spinels, which are associated with impact structures. This may suggest that the spinel at Sinus Aestuum is a different origin from that of the Mg\u2010rich spinel.", "which Study Area ?", "Sinus Aestuum", 261.0, 274.0], ["Principal component analysis (PCA) is an image processing technique that has been commonly applied to Landsat Thematic Mapper (TM) data to locate hydrothermal alteration zones related to metallic deposits. With the advent of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), a 14-band multispectral sensor operating onboard the Earth Observation System (EOS)-Terra satellite, the availability of spectral information in the shortwave infrared (SWIR) portion of the electromagnetic spectrum has been greatly increased. This allows detailed spectral characterization of surface targets, particularly of those belonging to the groups of minerals with diagnostic spectral features in this wavelength range, including phyllosilicates (\u2018clay\u2019 minerals), sulphates and carbonates, among others. 
In this study, PCA was applied to ASTER bands covering the SWIR with the objective of mapping the occurrence of mineral endmembers related to an epithermal gold prospect in Patagonia, Argentina. The results illustrate ASTER's ability to provide information on alteration minerals which are valuable for mineral exploration activities and support the role of PCA as a very effective and robust image processing technique for that purpose.", "which Study Area ?", "Argentina", 998.0, 1007.0], ["The Elorza crater is located in the Ophir Planum region of Mars with a 40-km diameter, centered near 55\u00b00.25\u2019 W, 8\u00b00.72\u2019 S. Since the Elorza crater has clay-rich deposits, it was one of the important craters for understanding the past aqueous alteration processes of planet Mars.", "which Study Area ?", " Elorza Crater", 3.0, 17.0], ["Phyllosilicates have previously been detected in layered outcrops in and around the Martian outflow channel Mawrth Vallis. CRISM spectra of these outcrops exhibit features diagnostic of kaolinite, montmorillonite, and Fe/Mg\u2010rich smectites, along with crystalline ferric oxide minerals such as hematite. These minerals occur in distinct stratigraphic horizons, implying changing environmental conditions and/or a variable sediment source for these layered deposits. Similar stratigraphic sequences occur on both sides of the outflow channel and on its floor, with Al\u2010clay\u2010bearing layers typically overlying Fe/Mg\u2010clay\u2010bearing layers. This pattern, combined with layer geometries measured using topographic data from HiRISE and HRSC, suggests that the Al\u2010clay\u2010bearing horizons at Mawrth Vallis postdate the outflow channel and may represent a later sedimentary or altered pyroclastic deposit that drapes the topography.", "which Study Area ?", "Mawrth Vallis", 108.0, 121.0], ["Background The construction of comprehensive reference libraries is essential to foster the development of DNA barcoding as a tool for monitoring biodiversity and detecting invasive species. The looper moths of British Columbia (BC), Canada present a challenging case for species discrimination via DNA barcoding due to their considerable diversity and limited taxonomic maturity. Methodology/Principal Findings By analyzing specimens held in national and regional natural history collections, we assemble barcode records from representatives of 400 species from BC and surrounding provinces, territories and states. Sequence variation in the barcode region unambiguously discriminates over 93% of these 400 geometrid species. However, a final estimate of resolution success awaits detailed taxonomic analysis of 48 species where patterns of barcode variation suggest cases of cryptic species, unrecognized synonymy as well as young species. Conclusions/Significance A catalog of these taxa meriting further taxonomic investigation is presented as well as the supplemental information needed to facilitate these investigations.", "which Study Location ?", "Canada", 234.0, 240.0], ["The identification of Afrotropical hoverflies is very difficult because of limited recent taxonomic revisions and the lack of comprehensive identification keys. 
In order to assist in their identification, and to improve the taxonomy of this group, we constructed a reference dataset of 513 COI barcodes of 90 of the more common nominal species from Ghana, Togo, Benin and Nigeria (W Africa) and added ten publically available COI barcodes from nine nominal Afrotropical species to this (total: 523 COI barcodes; 98 nominal species; 26 genera). The identification accuracy of this dataset was evaluated with three methods (K2P distance-based, Neighbor-Joining (NJ) / Maximum Likelihood (ML) analysis, and using SpeciesIdentifier). Results of the three methods were highly congruent and showed a high identification success. Nine species pairs showed a low (< 0.03) mean interspecific K2P distance that resulted in several incorrect identifications. A high (> 0.03) maximum intraspecific K2P distance was observed in eight species and barcodes of these species did not always form single clusters in the NJ / ML analyses which may indicate the occurrence of cryptic species. Optimal K2P thresholds to differentiate intra- from interspecific K2P divergence were highly different among the three subfamilies (Eristalinae: 0.037, Syrphinae: 0.06, Microdontinae: 0.007\u20130.02), and among the different genera, suggesting that optimal thresholds are better defined at the genus level. In addition to providing an alternative identification tool, our study indicates that DNA barcoding improves the taxonomy of Afrotropical hoverflies by selecting (groups of) taxa that deserve further taxonomic study, and by attributing the unknown sex to species for which only one of the sexes is known.", "which Study Location ?", "Nigeria", 372.0, 379.0], ["This study provides a first, comprehensive, diagnostic use of DNA barcodes for the Canadian fauna of noctuoids or \u201cowlet\u201d moths (Lepidoptera: Noctuoidea) based on vouchered records for 1,541 species (99.1% species coverage), and more than 30,000 sequences. When viewed from a Canada-wide perspective, DNA barcodes unambiguously discriminate 90% of the noctuoid species recognized through prior taxonomic study, and resolution reaches 95.6% when considered at a provincial scale. Barcode sharing is concentrated in certain lineages with 54% of the cases involving 1.8% of the genera. Deep intraspecific divergence exists in 7.7% of the species, but further studies are required to clarify whether these cases reflect an overlooked species complex or phylogeographic variation in a single species. Non-native species possess higher Nearest-Neighbour (NN) distances than native taxa, whereas generalist feeders have lower NN distances than those with more specialized feeding habits. We found high concordance between taxonomic names and sequence clusters delineated by the Barcode Index Number (BIN) system with 1,082 species (70%) assigned to a unique BIN. The cases of discordance involve both BIN mergers and BIN splits with 38 species falling into both categories, most likely reflecting bidirectional introgression. One fifth of the species are involved in a BIN merger reflecting the presence of 158 species sharing their barcode sequence with at least one other taxon, and 189 species with low, but diagnostic COI divergence. A very few cases (13) involved species whose members fell into both categories. Most of the remaining 140 species show a split into two or three BINs per species, while Virbia ferruginosa was divided into 16. The overall results confirm that DNA barcodes are effective for the identification of Canadian noctuoids. 
This study also affirms that BINs are a strong proxy for species, providing a pathway for a rapid, accurate estimation of animal diversity.", "which Study Location ?", "Canada", 276.0, 282.0], ["DNA barcoding aims to accelerate species identification and discovery, but performance tests have shown marked differences in identification success. As a consequence, there remains a great need for comprehensive studies which objectively test the method in groups with a solid taxonomic framework. This study focuses on the 180 species of butterflies in Romania, accounting for about one third of the European butterfly fauna. This country includes five eco-regions, the highest of any in the European Union, and is a good representative for temperate areas. Morphology and DNA barcodes of more than 1300 specimens were carefully studied and compared. Our results indicate that 90 per cent of the species form barcode clusters allowing their reliable identification. The remaining cases involve nine closely related species pairs, some whose taxonomic status is controversial or that hybridize regularly. Interestingly, DNA barcoding was found to be the most effective identification tool, outperforming external morphology, and being slightly better than male genitalia. Romania is now the first country to have a comprehensive DNA barcode reference database for butterflies. Similar barcoding efforts based on comprehensive sampling of specific geographical regions can act as functional modules that will foster the early application of DNA barcoding while a global system is under development.", "which Study Location ?", "Romania", 355.0, 362.0], ["Abstract DNA barcoding is a modern species identification technique that can be used to distinguish morphologically similar species, and is particularly useful when using small amounts of starting material from partial specimens or from immature stages. In order to use DNA barcoding in a surveillance program, a database containing mosquito barcode sequences is required. This study obtained Cytochrome Oxidase I (COI) sequences for 113 morphologically identified specimens, representing 29 species, six tribes and 12 genera; 17 of these species have not been previously barcoded. Three of the 29 species \u2500 Culex palpalis, Macleaya macmillani, and an unknown species originally identified as Tripteroides atripes \u2500 were initially misidentified as they are difficult to separate morphologically, highlighting the utility of DNA barcoding. While most species grouped separately (reciprocally monophyletic), the Cx. pipiens subgroup could not be genetically separated using COI. The average conspecific and congeneric p\u2010distance was 0.8% and 7.6%, respectively. In our study, we also demonstrate the utility of DNA barcoding in distinguishing exotics from endemic mosquitoes by identifying a single intercepted Stegomyia aegypti egg at an international airport. The use of DNA barcoding dramatically reduced the identification time required compared with rearing specimens through to adults, thereby demonstrating the value of this technique in biosecurity surveillance. 
The DNA barcodes produced by this study have been uploaded to the \u2018Mosquitoes of Australia\u2013Victoria\u2019 project on the Barcode of Life Database (BOLD), which will serve as a resource for the Victorian Arbovirus Disease Control Program and other national and international mosquito surveillance programs.", "which Study Location ?", "Australia", 1550.0, 1559.0], ["DNA barcoding has been an effective tool for species identification in several animal groups. Here, we used DNA barcoding to discriminate between 47 morphologically distinct species of Brazilian sand flies. DNA barcodes correctly identified approximately 90% of the sampled taxa (42 morphologically distinct species) using clustering based on neighbor-joining distance, of which four species showed comparatively higher maximum values of divergence (range 4.23\u201319.04%), indicating cryptic diversity. The DNA barcodes also corroborated the resurrection of two species within the shannoni complex and provided an efficient tool to differentiate between morphologically indistinguishable females of closely related species. Taken together, our results validate the effectiveness of DNA barcoding for species identification and the discovery of cryptic diversity in sand flies from Brazil.", "which Study Location ?", "Brazil", 878.0, 884.0], ["Abstract DNA barcoding is a modern species identification technique that can be used to distinguish morphologically similar species, and is particularly useful when using small amounts of starting material from partial specimens or from immature stages. In order to use DNA barcoding in a surveillance program, a database containing mosquito barcode sequences is required. This study obtained Cytochrome Oxidase I (COI) sequences for 113 morphologically identified specimens, representing 29 species, six tribes and 12 genera; 17 of these species have not been previously barcoded. Three of the 29 species \u2500 Culex palpalis, Macleaya macmillani, and an unknown species originally identified as Tripteroides atripes \u2500 were initially misidentified as they are difficult to separate morphologically, highlighting the utility of DNA barcoding. While most species grouped separately (reciprocally monophyletic), the Cx. pipiens subgroup could not be genetically separated using COI. The average conspecific and congeneric p\u2010distance was 0.8% and 7.6%, respectively. In our study, we also demonstrate the utility of DNA barcoding in distinguishing exotics from endemic mosquitoes by identifying a single intercepted Stegomyia aegypti egg at an international airport. The use of DNA barcoding dramatically reduced the identification time required compared with rearing specimens through to adults, thereby demonstrating the value of this technique in biosecurity surveillance. The DNA barcodes produced by this study have been uploaded to the \u2018Mosquitoes of Australia\u2013Victoria\u2019 project on the Barcode of Life Database (BOLD), which will serve as a resource for the Victorian Arbovirus Disease Control Program and other national and international mosquito surveillance programs.", "which Study Location ?", " Australia", 1549.0, 1559.0], ["The ecological and medical importance of black flies drives the need for rapid and reliable identification of these minute, structurally uniform insects. We assessed the efficiency of DNA barcoding for species identification of tropical black flies. 
A total of 351 cytochrome c oxidase subunit 1 sequences were obtained from 41 species in six subgenera of the genus Simulium in Thailand. Despite high intraspecific genetic divergence (mean = 2.00%, maximum = 9.27%), DNA barcodes provided 96% correct identification. Barcodes also differentiated cytoforms of selected species complexes, albeit with varying levels of success. Perfect differentiation was achieved for two cytoforms of Simulium feuerborni, and 91% correct identification was obtained for the Simulium angulistylum complex. Low success (33%), however, was obtained for the Simulium siamense complex. The differential efficiency of DNA barcodes to discriminate cytoforms was attributed to different levels of genetic structure and demographic histories of the taxa. DNA barcode trees were largely congruent with phylogenies based on previous molecular, chromosomal and morphological analyses, but revealed inconsistencies that will require further evaluation.", "which Study Location ?", "Thailand", 378.0, 386.0], ["Background Although they are important disease vectors mosquito biodiversity in Pakistan is poorly known. Recent epidemics of dengue fever have revealed the need for more detailed understanding of the diversity and distributions of mosquito species in this region. DNA barcoding improves the accuracy of mosquito inventories because morphological differences between many species are subtle, leading to misidentifications. Methodology/Principal Findings Sequence variation in the barcode region of the mitochondrial COI gene was used to identify mosquito species, reveal genetic diversity, and map the distribution of the dengue-vector species in Pakistan. Analysis of 1684 mosquitoes from 491 sites in Punjab and Khyber Pakhtunkhwa during 2010\u20132013 revealed 32 species with the assemblage dominated by Culex quinquefasciatus (61% of the collection). The genus Aedes (Stegomyia) comprised 15% of the specimens, and was represented by six taxa with the two dengue vector species, Ae. albopictus and Ae. aegypti, dominant and broadly distributed. Anopheles made up another 6% of the catch with An. subpictus dominating. Barcode sequence divergence in conspecific specimens ranged from 0\u20132.4%, while congeneric species showed from 2.3\u201317.8% divergence. A global haplotype analysis of disease-vectors showed the presence of multiple haplotypes, although a single haplotype of each dengue-vector species was dominant in most countries. Geographic distribution of Ae. aegypti and Ae. albopictus showed the later species was dominant and found in both rural and urban environments. Conclusions As the first DNA-based analysis of mosquitoes in Pakistan, this study has begun the construction of a barcode reference library for the mosquitoes of this region. Levels of genetic diversity varied among species. Because of its capacity to differentiate species, even those with subtle morphological differences, DNA barcoding aids accurate tracking of vector populations.", "which Study Location ?", "Pakistan", 80.0, 88.0], ["Land-use classification is essential for urban planning. Urban land-use types can be differentiated either by their physical characteristics (such as reflectivity and texture) or social functions. Remote sensing techniques have been recognized as a vital method for urban land-use classification because of their ability to capture the physical characteristics of land use. 
Although significant progress has been achieved in remote sensing methods designed for urban land-use classification, most techniques focus on physical characteristics, whereas knowledge of social functions is not adequately used. Owing to the wide usage of mobile phones, the activities of residents, which can be retrieved from the mobile phone data, can be determined in order to indicate the social function of land use. This could bring about the opportunity to derive land-use information from mobile phone data. To verify the application of this new data source to urban land-use classification, we first construct a vector of aggregated mobile phone data to characterize land-use types. This vector is composed of two aspects: the normalized hourly call volume and the total call volume. A semi-supervised fuzzy c-means clustering approach is then applied to infer the land-use types. The method is validated using mobile phone data collected in Singapore. Land use is determined with a detection rate of 58.03%. An analysis of the land-use classification results shows that the detection rate decreases as the heterogeneity of land use increases, and increases as the density of cell phone towers increases.", "which Study Location ?", "Singapore", 1328.0, 1337.0], ["The morphological species delimitations (i.e. morphospecies) have long been the best way to avoid the taxonomic impediment and compare insect taxa biodiversity in highly diverse tropical and subtropical regions. The development of DNA barcoding, however, has shown great potential to replace (or at least complement) the morphospecies approach, with the advantage of relying on automated methods implemented in computer programs or even online rather than in often subjective morphological features. We sampled moths extensively for two years using light traps in a patch of the highly endangered Atlantic Forest of Brazil to produce a nearly complete census of arctiines (Noctuoidea: Erebidae), whose species richness was compared using different morphological and molecular approaches (DNA barcoding). A total of 1,075 barcode sequences of 286 morphospecies were analyzed. Based on the clustering method Barcode Index Number (BIN) we found a taxonomic bias of approximately 30% in our initial morphological assessment. However, a morphological reassessment revealed that the correspondence between morphospecies and molecular operational taxonomic units (MOTUs) can be up to 94% if differences in genitalia morphology are evaluated in individuals of different MOTUs originated from the same morphospecies (putative cases of cryptic species), and by recording if individuals of different genders in different morphospecies merge together in the same MOTU (putative cases of sexual dimorphism). The results of two other clustering methods (i.e. Automatic Barcode Gap Discovery and 2% threshold) were very similar to those of the BIN approach. Using empirical data we have shown that DNA barcoding performed substantially better than the morphospecies approach, based on superficial morphology, to delimit species of a highly diverse moth taxon, and thus should be used in species inventories.", "which Study Location ?", "Brazil", 616.0, 622.0], ["This study reports the assembly of a DNA barcode reference library for species in the lepidopteran superfamily Noctuoidea from Canada and the USA. Based on the analysis of 69,378 specimens, the library provides coverage for 97.3% of the noctuoid fauna (3565 of 3664 species). 
In addition to verifying the strong performance of DNA barcodes in the discrimination of these species, the results indicate close congruence between the number of species analyzed (3565) and the number of sequence clusters (3816) recognized by the Barcode Index Number (BIN) system. Distributional patterns across 12 North American ecoregions are examined for the 3251 species that have GPS data while BIN analysis is used to quantify overlap between the noctuoid faunas of North America and other zoogeographic regions. This analysis reveals that 90% of North American noctuoids are endemic and that just 7.5% and 1.8% of BINs are shared with the Neotropics and with the Palearctic, respectively. One third (29) of the latter species are recent introductions and, as expected, they possess low intraspecific divergences.", "which Study Location ?", "Canada", 127.0, 133.0], ["Sand flies include a group of insects that are of medical importance and that vary in geographic distribution, ecology, and pathogen transmission. Approximately 163 species of sand flies have been reported in Colombia. Surveillance of the presence of sand fly species and the actualization of species distribution are important for predicting risks for and monitoring the expansion of diseases which sand flies can transmit. Currently, the identification of phlebotomine sand flies is based on morphological characters. However, morphological identification requires considerable skills and taxonomic expertise. In addition, significant morphological similarity between some species, especially among females, may cause difficulties during the identification process. DNA-based approaches have become increasingly useful and promising tools for estimating sand fly diversity and for ensuring the rapid and accurate identification of species. A partial sequence of the mitochondrial cytochrome oxidase gene subunit I (COI) is currently being used to differentiate species in different animal taxa, including insects, and it is referred as a barcoding sequence. The present study explored the utility of the DNA barcode approach for the identification of phlebotomine sand flies in Colombia. We sequenced 700 bp of the COI gene from 36 species collected from different geographic localities. The COI barcode sequence divergence within a single species was <2% in most cases, whereas this divergence ranged from 9% to 26.6% among different species. These results indicated that the barcoding gene correctly discriminated among the previously morphologically identified species with an efficacy of nearly 100%. Analyses of the generated sequences indicated that the observed species groupings were consistent with the morphological identifications. In conclusion, the barcoding gene was useful for species discrimination in sand flies from Colombia.", "which Study Location ?", "Colombia", 209.0, 217.0], ["Because the tropical regions of America harbor the highest concentration of butterfly species, its fauna has attracted considerable attention. Much less is known about the butterflies of southern South America, particularly Argentina, where over 1,200 species occur. To advance understanding of this fauna, we assembled a DNA barcode reference library for 417 butterfly species of Argentina, focusing on the Atlantic Forest, a biodiversity hotspot. 
We tested the efficacy of this library for specimen identification, used it to assess the frequency of cryptic species, and examined geographic patterns of genetic variation, making this study the first large-scale genetic assessment of the butterflies of southern South America. The average sequence divergence to the nearest neighbor (i.e. minimum interspecific distance) was 6.91%, ten times larger than the mean distance to the furthest conspecific (0.69%), with a clear barcode gap present in all but four of the species represented by two or more specimens. As a consequence, the DNA barcode library was extremely effective in the discrimination of these species, allowing a correct identification in more than 95% of the cases. Singletons (i.e. species represented by a single sequence) were also distinguishable in the gene trees since they all had unique DNA barcodes, divergent from those of the closest non-conspecific. The clustering algorithms implemented recognized from 416 to 444 barcode clusters, suggesting that the actual diversity of butterflies in Argentina is 3%\u20139% higher than currently recognized. Furthermore, our survey added three new records of butterflies for the country (Eurema agave, Mithras hannelore, Melanis hillapana). In summary, this study not only supported the utility of DNA barcoding for the identification of the butterfly species of Argentina, but also highlighted several cases of both deep intraspecific and shallow interspecific divergence that should be studied in more detail.", "which Study Location ?", "Argentina", 224.0, 233.0], ["Abstract The DNA barcode reference library for Lepidoptera holds much promise as a tool for taxonomic research and for providing the reliable identifications needed for conservation assessment programs. We gathered sequences for the barcode region of the mitochondrial cytochrome c oxidase subunit I gene from 160 of the 176 nominal species of Erebidae moths (Insecta: Lepidoptera) known from the Iberian Peninsula. These results arise from a research project which is constructing a DNA barcode library for the insect species of Spain. New records for 271 specimens (122 species) are coupled with preexisting data for 38 species from the Iberian fauna. Mean interspecific distance was 12.1%, while the mean nearest neighbour divergence was 6.4%. All 160 species possessed diagnostic barcode sequences, but one pair of congeneric taxa (Eublemma rosea and Eublemma rietzi) were assigned to the same BIN. As well, intraspecific sequence divergences higher than 1.5% were detected in four species which likely represent species complexes. This study reinforces the effectiveness of DNA barcoding as a tool for monitoring biodiversity in particular geographical areas and the strong correspondence between sequence clusters delineated by BINs and species recognized through detailed taxonomic analysis.", "which Study Location ?", "Spain", 527.0, 532.0], ["The identification of Afrotropical hoverflies is very difficult because of limited recent taxonomic revisions and the lack of comprehensive identification keys. In order to assist in their identification, and to improve the taxonomy of this group, we constructed a reference dataset of 513 COI barcodes of 90 of the more common nominal species from Ghana, Togo, Benin and Nigeria (W Africa) and added ten publically available COI barcodes from nine nominal Afrotropical species to this (total: 523 COI barcodes; 98 nominal species; 26 genera). 
The identification accuracy of this dataset was evaluated with three methods (K2P distance-based, Neighbor-Joining (NJ) / Maximum Likelihood (ML) analysis, and using SpeciesIdentifier). Results of the three methods were highly congruent and showed a high identification success. Nine species pairs showed a low (< 0.03) mean interspecific K2P distance that resulted in several incorrect identifications. A high (> 0.03) maximum intraspecific K2P distance was observed in eight species and barcodes of these species did not always form single clusters in the NJ / ML analyses which may indicate the occurrence of cryptic species. Optimal K2P thresholds to differentiate intra- from interspecific K2P divergence were highly different among the three subfamilies (Eristalinae: 0.037, Syrphinae: 0.06, Microdontinae: 0.007\u20130.02), and among the different genera, suggesting that optimal thresholds are better defined at the genus level. In addition to providing an alternative identification tool, our study indicates that DNA barcoding improves the taxonomy of Afrotropical hoverflies by selecting (groups of) taxa that deserve further taxonomic study, and by attributing the unknown sex to species for which only one of the sexes is known.", "which Study Location ?", "Benin", 362.0, 367.0], ["The identification of Afrotropical hoverflies is very difficult because of limited recent taxonomic revisions and the lack of comprehensive identification keys. In order to assist in their identification, and to improve the taxonomy of this group, we constructed a reference dataset of 513 COI barcodes of 90 of the more common nominal species from Ghana, Togo, Benin and Nigeria (W Africa) and added ten publically available COI barcodes from nine nominal Afrotropical species to this (total: 523 COI barcodes; 98 nominal species; 26 genera). The identification accuracy of this dataset was evaluated with three methods (K2P distance-based, Neighbor-Joining (NJ) / Maximum Likelihood (ML) analysis, and using SpeciesIdentifier). Results of the three methods were highly congruent and showed a high identification success. Nine species pairs showed a low (< 0.03) mean interspecific K2P distance that resulted in several incorrect identifications. A high (> 0.03) maximum intraspecific K2P distance was observed in eight species and barcodes of these species did not always form single clusters in the NJ / ML analyses which may indicate the occurrence of cryptic species. Optimal K2P thresholds to differentiate intra- from interspecific K2P divergence were highly different among the three subfamilies (Eristalinae: 0.037, Syrphinae: 0.06, Microdontinae: 0.007\u20130.02), and among the different genera, suggesting that optimal thresholds are better defined at the genus level. In addition to providing an alternative identification tool, our study indicates that DNA barcoding improves the taxonomy of Afrotropical hoverflies by selecting (groups of) taxa that deserve further taxonomic study, and by attributing the unknown sex to species for which only one of the sexes is known.", "which Study Location ?", "Togo", 356.0, 360.0], ["Biodiversity research in tropical ecosystems-popularized as the most biodiverse habitats on Earth-often neglects invertebrates, yet invertebrates represent the bulk of local species richness. 
Insect communities in particular remain strongly impeded by both Linnaean and Wallacean shortfalls, and identifying species often remains a formidable challenge inhibiting the use of these organisms as indicators for ecological and conservation studies. Here we use DNA barcoding as an alternative to the traditional taxonomic approach for characterizing and comparing the diversity of moth communities in two different ecosystems in Gabon. Though sampling remains very incomplete, as evidenced by the high proportion (59%) of species represented by singletons, our results reveal an outstanding diversity. With about 3500 specimens sequenced and representing 1385 BINs (Barcode Index Numbers, used as a proxy to species) in 23 families, the diversity of moths in the two sites sampled is higher than the current number of species listed for the entire country, highlighting the huge gap in biodiversity knowledge for this country. Both seasonal and spatial turnovers are strikingly high (18.3% of BINs shared between seasons, and 13.3% between sites) and draw attention to the need to account for these when running regional surveys. Our results also highlight the richness and singularity of savannah environments and emphasize the status of Central African ecosystems as hotspots of biodiversity.", "which Study Location ?", "Gabon", 626.0, 631.0], ["Abstract In this study, we analysed the applicability of DNA barcodes for delimitation of 79 specimens of 13 species of nonbiting midges in the subfamily Tanypodinae (Diptera: Chironomidae) from S\u00e3o Paulo State, Brazil. Our results support DNA barcoding as an excellent tool for species identification and for solving taxonomic conflicts in genus Labrundinia. Molecular analysis of cytochrome c oxidase subunit I (COI) gene sequences yielded taxon identification trees, supporting 13 cohesive species clusters, of which three similar groups were subsequently linked to morphological variation at the larval and pupal stage. Additionally, another cluster previously described by means of morphology was linked to molecular markers. We found a distinct barcode gap, and in some species substantial interspecific pairwise divergences (up to 19.3%) were observed, which permitted identification of all analysed species. The results also indicated that barcodes can be used to associate life stages of chironomids since COI was easily amplified and sequenced from different life stages with universal barcode primers. R\u00e9sum\u00e9 Notre \u00e9tude \u00e9value l'utilit\u00e9 des codes \u00e0 barres d'ADN pour d\u00e9limiter 79 sp\u00e9cimens de 13 esp\u00e8ces de moucherons de la sous-famille des Tanypodinae (Diptera: Chironomidae) provenant de l\u2019\u00e9tat de S\u00e3o Paulo, Br\u00e9sil. Notre \u00e9tude confirme l'utilisation des codes \u00e0 barres d'ADN comme un excellent outil pour l'identification des esp\u00e8ces et la solution de probl\u00e8mes taxonomiques dans genre Labrundinia. Une analyse mol\u00e9culaire des s\u00e9quences des g\u00e8nes COI fournit des arbres d'identification des taxons, d\u00e9limitant 13 groupes coh\u00e9rents d'esp\u00e8ces, dont trois groupes similaires ont \u00e9t\u00e9 reli\u00e9s subs\u00e9quemment \u00e0 une variation morphologique des stades larvaires et nymphal. De plus, un autre groupe d\u00e9crit ant\u00e9rieurement \u00e0 partir de caract\u00e8res morphologiques a \u00e9t\u00e9 reli\u00e9 \u00e0 des marqueurs mol\u00e9culaires. 
Il existe un \u00e9cart net entre les codes \u00e0 barres et, chez certaines esp\u00e8ces, d'importantes divergences entre les esp\u00e8ces consid\u00e9r\u00e9es deux par deux (jusqu\u2019\u00e0 19,3%), ce qui a permis l'identification de toutes les esp\u00e8ces examin\u00e9es. Nos r\u00e9sultats montrent aussi que les codes \u00e0 barres peuvent servir \u00e0 associer les diff\u00e9rents stades de vie des chironomides, car il est facile d'amplifier et de s\u00e9quencer le g\u00e8ne COI provenant des diff\u00e9rents stades avec les amorces universelles des codes \u00e0 barres.", "which Study Location ?", "Brazil", 212.0, 218.0], ["Abstract. Carrion-breeding Sarcophagidae (Diptera) can be used to estimate the post-mortem interval in forensic cases. Difficulties with accurate morphological identifications at any life stage and a lack of documented thermobiological profiles have limited their current usefulness. The molecular-based approach of DNA barcoding, which utilises a 648-bp fragment of the mitochondrial cytochrome oxidase subunit I gene, was evaluated in a pilot study for discrimination between 16 Australian sarcophagids. The current study comprehensively evaluated barcoding for a larger taxon set of 588 Australian sarcophagids. In total, 39 of the 84 known Australian species were represented by 580 specimens, which includes 92% of potentially forensically important species. A further eight specimens could not be identified, but were included nonetheless as six unidentifiable taxa. A neighbour-joining tree was generated and nucleotide sequence divergences were calculated. All species except Sarcophaga (Fergusonimyia) bancroftorum, known for high morphological variability, were resolved as monophyletic (99.2% of cases), with bootstrap support of 100. Excluding S. bancroftorum, the mean intraspecific and interspecific variation ranged from 1.12% and 2.81\u201311.23%, respectively, allowing for species discrimination. DNA barcoding was therefore validated as a suitable method for molecular identification of Australian Sarcophagidae, which will aid in the implementation of this fauna in forensic entomology.", "which Study Location ?", "Australia", NaN, NaN], ["The identification of Afrotropical hoverflies is very difficult because of limited recent taxonomic revisions and the lack of comprehensive identification keys. In order to assist in their identification, and to improve the taxonomy of this group, we constructed a reference dataset of 513 COI barcodes of 90 of the more common nominal species from Ghana, Togo, Benin and Nigeria (W Africa) and added ten publically available COI barcodes from nine nominal Afrotropical species to this (total: 523 COI barcodes; 98 nominal species; 26 genera). The identification accuracy of this dataset was evaluated with three methods (K2P distance-based, Neighbor-Joining (NJ) / Maximum Likelihood (ML) analysis, and using SpeciesIdentifier). Results of the three methods were highly congruent and showed a high identification success. Nine species pairs showed a low (< 0.03) mean interspecific K2P distance that resulted in several incorrect identifications. A high (> 0.03) maximum intraspecific K2P distance was observed in eight species and barcodes of these species did not always form single clusters in the NJ / ML analyses which may indicate the occurrence of cryptic species. 
Optimal K2P thresholds to differentiate intra- from interspecific K2P divergence were highly different among the three subfamilies (Eristalinae: 0.037, Syrphinae: 0.06, Microdontinae: 0.007\u20130.02), and among the different genera, suggesting that optimal thresholds are better defined at the genus level. In addition to providing an alternative identification tool, our study indicates that DNA barcoding improves the taxonomy of Afrotropical hoverflies by selecting (groups of) taxa that deserve further taxonomic study, and by attributing the unknown sex to species for which only one of the sexes is known.", "which Study Location ?", "Ghana", 349.0, 354.0], ["Lionfish (Pterois volitans), venomous predators from the Indo-Pacific, are recent invaders of the Caribbean Basin and southeastern coast of North America. Quantification of invasive lionfish abundances, along with potentially important physical and biological environmental characteristics, permitted inferences about the invasion process of reefs on the island of San Salvador in the Bahamas. Environmental wave-exposure had a large influence on lionfish abundance, which was more than 20 and 120 times greater for density and biomass respectively at sheltered sites as compared with wave-exposed environments. Our measurements of topographic complexity of the reefs revealed that lionfish abundance was not driven by habitat rugosity. Lionfish abundance was not negatively affected by the abundance of large native predators (or large native groupers) and was also unrelated to the abundance of medium prey fishes (total length of 5\u201310 cm). These relationships suggest that (1) higher-energy environments may impose intrinsic resistance against lionfish invasion, (2) habitat complexity may not facilitate the lionfish invasion process, (3) predation or competition by native fishes may not provide biotic resistance against lionfish invasion, and (4) abundant prey fish might not facilitate lionfish invasion success. The relatively low biomass of large grouper on this island could explain our failure to detect suppression of lionfish abundance and we encourage continuing the preservation and restoration of potential lionfish predators in the Caribbean. In addition, energetic environments might exert direct or indirect resistance to the lionfish proliferation, providing native fish populations with essential refuges.", "which Continent ?", "North America", 140.0, 153.0], ["Aim The biotic resistance hypothesis argues that complex plant and animal communities are more resistant to invasion than simpler communities. Conversely, the biotic acceptance hypothesis states that non-native and native species richness are positively related. Most tests of these hypotheses at continental scales, typically conducted on plants, have found support for biotic acceptance. We tested these hypotheses on both amphibians and reptiles across Europe and North America. Location Continental countries in Europe and states/provinces in North America. Methods We used multiple linear regression models to determine which factors predicted successful establishment of amphibians and reptiles in Europe and North America, and additional models to determine which factors predicted native species richness. Results Successful establishment of amphibians and reptiles in Europe and reptiles in North America was positively related to native species richness. We found higher numbers of successful amphibian species in Europe than in North America. 
Potential evapotranspiration (PET) was positively related to non-native species richness for amphibians and reptiles in Europe and reptiles in North America. PET was also the primary factor determining native species richness for both amphibians and reptiles in Europe and North America. Main conclusions We found support for the biotic acceptance hypothesis for amphibians and reptiles in Europe and reptiles in North America, suggesting that the presence of native amphibian and reptile species generally indicates good habitat for non-native species. Our data suggest that the greater number of established amphibians per native amphibians in Europe than in North America might be explained by more introductions in Europe or climate-matching of the invaders. Areas with high native species richness should be the focus of control and management efforts, especially considering that non-native species located in areas with a high number of natives can have a large impact on biological diversity.", "which Continent ?", "Europe", 456.0, 462.0], ["Abstract 1. Movement, and particularly the colonisation of new habitat patches, remains one of the least known aspects of the life history and ecology of the vast majority of species. Here, a series of experiments was conducted to rectify this problem with Delphacodes scolochloa Cronin & Wilson, a wing\u2010dimorphic planthopper of the North American Great Plains.", "which Continent ?", "North America", NaN, NaN], ["Emerald ash borer, Agrilus planipennis, is a destructive invasive forest pest in North America and European Russia. This pest species is rapidly spreading in European Russia and is likely to arrive in other countries soon. The aim is to analyze the ecological consequences of the establishment of this pest in European Russia and investigate (1) what other xylophagous beetles develop on trees affected by A. planipennis, (2) how common is the parasitoid of the emerald ash borer Spathius polonicus (Hymenoptera: Braconidae: Doryctinae) and what is the level of parasitism by this species, and (3) how susceptible is the native European ash species Fraxinus excelsior to A. planipennis. A survey of approximately 1000 Fraxinus pennsylvanica trees damaged by A. planipennis in 13 localities has shown that Hylesinus varius (Coleoptera: Curculionidae: Scolytinae), Tetrops starkii (Coleoptera: Cerambycidae) and Agrilus convexicollis (Coleoptera: Buprestidae) were common on these trees. Spathius polonicus is frequently recorded. About 50 percent of late instar larvae of A. planipennis sampled were parasitized by S. polonicus. Maps of the distributions of T. starkii, A. convexicollis and S. polonicus before and after the establishment of A. planipennis in European Russia were compiled. It is hypothesized that these species, which are native to the West Palaearctic, spread into central European Russia after A. planipennis became established there. Current observations confirm those of previous authors that native European ash Fraxinus excelsior is susceptible to A. planipennis, increasing the threat posed by this pest. The establishment of A. planipennis has resulted in a cascade of ecological effects, such as outbreaks of other xylophagous beetles in A. planipennis-infested trees. It is likely that the propagation of S. polonicus will reduce the incidence of outbreaks of A. planipennis.", "which Continent ?", "Europe", NaN, NaN], ["Summary 1 Biological invasion can permanently alter ecosystem structure and function. 
Invasive species are difficult to eradicate, so methods for constraining invasions would be ecologically valuable. We examined the potential of ecological restoration to constrain invasion of an old field by Agropyron cristatum, an introduced C3 grass. 2 A field experiment was conducted in the northern Great Plains of North America. One-hundred and forty restored plots were planted in 1994\u201396 with a mixture of C3 and C4 native grass seed, while 100 unrestored plots were not. Vegetation on the plots was measured periodically between 1994 and 2002. 3 Agropyron cristatum invaded the old field between 1994 and 2002, occurring in 5% of plots in 1994 and 66% of plots in 2002, and increasing in mean cover from 0\u00b72% in 1994 to 17\u00b71% in 2002. However, A. cristatum invaded one-third fewer restored than unrestored plots between 1997 and 2002, suggesting that restoration constrained invasion. Further, A. cristatum cover in restored plots decreased with increasing planted grass cover. Stepwise regression indicated that A. cristatum cover was more strongly correlated with planted grass cover than with distance from the A. cristatum source, species richness, percentage bare ground or percentage litter. 4 The strength of the negative relationship between A. cristatum and planted native grasses varied among functional groups: the correlation was stronger with species with phenology and physiology similar to A. cristatum (i.e. C3 grasses) than with dissimilar species (C4 grasses). 5 Richness and cover of naturally establishing native species decreased with increasing A. cristatum cover. In contrast, restoration had little effect on the establishment and colonization of naturally establishing native species. Thus, A. cristatum hindered colonization by native species while planted native grasses did not. 6 Synthesis and applications. To our knowledge, this study provides the first indication that restoration can act as a filter, constraining invasive species while allowing colonization by native species. These results suggest that resistance to invasion depends on the identity of species in the community and that restoration seed mixes might be tailored to constrain selected invaders. Restoring areas before invasive species become established can reduce the magnitude of biological invasion.", "which Continent ?", "North America", 406.0, 419.0], ["SUMMARY The invasive zebra mussel (Dreissena polymorpha) has quickly colonized shallow-water habitats in the North American Great Lakes since the 1980s but the quagga mussel (Dreissena bugensis) is becoming dominant in both shallow and deep-water habitats. While quagga mussel shell morphology differs between shallow and deep habitats, functional causes and consequences of such difference are unknown. We examined whether quagga mussel shell morphology could be induced by three environmental variables through developmental plasticity. We predicted that shallow-water conditions (high temperature, food quantity, water motion) would yield a morphotype typical of wild quagga mussels from shallow habitats, while deep-water conditions (low temperature, food quantity, water motion) would yield a morphotype present in deep habitats. We tested this prediction by examining shell morphology and growth rate of quagga mussels collected from shallow and deep habitats and reared under common-garden treatments that manipulated the three variables. Shell morphology was quantified using the polar moment of inertia. 
Of the variables tested, temperature had the greatest effect on shell morphology. Higher temperature (\u223c18\u201320\u00b0C) yielded a morphotype typical of wild shallow mussels regardless of the levels of food quantity or water motion. In contrast, lower temperature (\u223c6\u20138\u00b0C) yielded a morphotype approaching that of wild deep mussels. If shell morphology has functional consequences in particular habitats, a plastic response might confer quagga mussels with a greater ability than zebra mussels to colonize a wider range of habitats within the Great Lakes.", "which Continent ?", "North America", NaN, NaN], ["Giant hogweed, Heracleum mantegazzianum (Apiaceae), was introduced from the Caucasus into Western Europe more than 150 years ago and later became an invasive weed which created major problems for European authorities. Phytophagous insects were collected in the native range of the giant hogweed (Caucasus) and were compared to those found on plants in the invaded parts of Europe. The list of herbivores was compiled from surveys of 27 localities in nine countries during two seasons. In addition, literature records for herbivores were analysed for a total of 16 Heracleum species. We recorded a total of 265 herbivorous insects on Heracleum species and we analysed them to describe the herbivore assemblages, locate vacant niches, and identify the most host-specific herbivores on H. mantegazzianum. When combining our investigations with similar studies of herbivores on other invasive weeds, all studies show a higher proportion of specialist herbivores in the native habitats compared to the invaded areas, supporting the \"enemy release hypothesis\" (ERH). When analysing the relative size of the niches (measured as plant organ biomass), we found less herbivore species per biomass on the stem and roots, and more on the leaves (Fig. 5). Most herbivores were polyphagous generalists, some were found to be oligophagous (feeding within the same family of host plants) and a few had only Heracleum species as host plants (monophagous). None were known to feed exclusively on H. mantegazzianum. The oligophagous herbivores were restricted to a few taxonomic groups, especially within the Hemiptera, and were particularly abundant on this weed.", "which Continent ?", "Europe", 98.0, 104.0], ["Although some plant traits have been linked to invasion success, the possible effects of regional factors, such as diversity, habitat suitability, and human activity are not well understood. Each of these mechanisms predicts a different pattern of distribution at the regional scale. Thus, where climate and soils are similar, predictions based on regional hypotheses for invasion success can be tested by comparisons of distributions in the source and receiving regions. Here, we analyse the native and alien geographic ranges of all 1567 plant species that have been introduced between eastern Asia and North America or have been introduced to both regions from elsewhere. The results reveal correlations between the spread of exotics and both the native species richness and transportation networks of recipient regions. This suggests that both species interactions and human-aided dispersal influence exotic distributions, although further work on the relative importance of these processes is needed.", "which Continent ?", "Asia", 596.0, 600.0], ["A Ponto-Caspian amphipod Dikerogammarus haemobaphes has recently invaded European waters. 
In the recipient area, it encountered Dreissena polymorpha, a habitat-forming bivalve, co-occurring with the gammarids in their native range. We assumed that interspecific interactions between these two species, which could develop during their long-term co-evolution, may affect the gammarid behaviour in novel areas. We examined the gammarid ability to select a habitat containing living mussels and searched for cues used in that selection. We hypothesized that they may respond to such traits of a living mussel as byssal threads, activity (e.g. valve movements, filtration) and/or shell surface properties. We conducted the pairwise habitat-choice experiments in which we offered various objects to single gammarids in the following combinations: (1) living mussels versus empty shells (the general effect of living Dreissena); (2) living mussels versus shells with added byssal threads and shells with byssus versus shells without it (the effect of byssus); (3) living mussels versus shells, both coated with nail varnish to neutralize the shell surface (the effect of mussel activity); (4) varnished versus clean living mussels (the effect of shell surface); (5) varnished versus clean stones (the effect of varnish). We checked the gammarid positions in the experimental tanks after 24 h. The gammarids preferred clean living mussels over clean shells, regardless of the presence of byssal threads under the latter. They responded to the shell surface, exhibiting preferences for clean mussels over varnished individuals. They were neither affected by the presence of byssus nor by mussel activity. The ability to detect and actively select zebra mussel habitats may be beneficial for D. haemobaphes and help it establish stable populations in newly invaded areas.", "which Continent ?", "Europe", NaN, NaN], ["Abstract The impact of human\u2010induced stressors, such as invasive species, is often measured at the organismal level, but is much less commonly scaled up to the population level. Interactions with invasive species represent an increasingly common source of stressor in many habitats. However, due to the increasing abundance of invasive species around the globe, invasive species now commonly cause stresses not only for native species in invaded areas, but also for other invasive species. I examine the European green crab Carcinus maenas, an invasive species along the northeast coast of North America, which is known to be negatively impacted in this invaded region by interactions with the invasive Asian shore crab Hemigrapsus sanguineus. Asian shore crabs are known to negatively impact green crabs via two mechanisms: by directly preying on green crab juveniles and by indirectly reducing green crab fecundity via interference (and potentially exploitative) competition that alters green crab diets. I used life\u2010table analyses to scale these two mechanistic stressors up to the population level in order to examine their relative impacts on green crab populations. I demonstrate that lost fecundity has larger impacts on per capita population growth rates, but that both predation and lost fecundity are capable of reducing population growth sufficiently to produce the declines in green crab populations that have been observed in areas where these two species overlap. 
By scaling up the impacts of one invader on a second invader, I have demonstrated that multiple documented interactions between these species are capable of having population\u2010level impacts and that both may be contributing to the decline of European green crabs in their invaded range on the east coast of North America.", "which Continent ?", "North America", 590.0, 603.0], ["Although many studies have documented the impact of invasive species on indigenous flora and fauna, few have rigorously examined interactions among invaders and the potential for one exotic species to replace another. European green crabs (Carcinus maenas), once common in rocky intertidal habitats of southern New England, have recently declined in abundance coincident with the invasion of the Asian shore crab (Hemigrapsus sanguineus). Over a four-year period in the late 1990s we documented a significant (40-90%) decline in green crab abundance and a sharp (10-fold) increase in H. sanguineus at three sites in southern New England. Small, newly recruited green crabs had a significant risk of predation when paired with larger H. sanguineus in the laboratory, and recruitment of 0-yr C. maenas was reduced by H. sanguineus as well as by larger conspecifics in field-deployed cages (via predation and cannibalism, respectively). In contrast, recruitment of 0-yr H. sanguineus was not affected by larger individuals of either crab species during the same experiments. The differential susceptibility of C. maenas and H. sanguineus recruits to predation and cannibalism likely contributed to the observed decrease in C. maenas abundance and the almost exponential increase in H. sanguineus abundance during the period of study. While the Asian shore crab is primarily restricted to rocky intertidal habitats, C. maenas is found intertidally, subtidally, and in a range of substrate types in New England. Thus, the apparent replacement of C. maenas by H. sanguineus in rocky intertidal habitats of southern New England may not ameliorate the economic and ecological impacts attributed to green crab populations in other habitats of this region. For example, field experiments indicate that predation pressure on a native bivalve species (Mytilus edulis) has not necessarily decreased with the declines of C. maenas. While H. sanguineus has weaker per capita effects than C. maenas, its densities greatly exceed those of C. maenas at present and its population-level effects are likely comparable to the past effects of C. maenas. The Carcinus-Hemigrapsus interactions documented here are relevant in other parts of the world where green crabs and grapsid crabs interact, particularly on the west coast of North America where C. maenas has recently invaded and co-occurs with two native Hemigrapsus species.", "which Continent ?", "North America", 2307.0, 2320.0], ["1 We tested the enemy release hypothesis for invasiveness using field surveys of herbivory on 39 exotic and 30 native plant species growing in natural areas near Ottawa, Canada, and found that exotics suffered less herbivory than natives. 2 For the 39 introduced species, we also tested relationships between herbivory, invasiveness and time since introduction to North America. Highly invasive plants had significantly less herbivory than plants ranked as less invasive. Recently arrived plants also tended to be more invasive; however, there was no relationship between time since introduction and herbivory. 3 Release from herbivory may be key to the success of highly aggressive invaders. 
Low herbivory may also indicate that a plant possesses potent defensive chemicals that are novel to North America, which may confer resistance to pathogens or enable allelopathy in addition to deterring herbivorous insects.", "which Continent ?", "North America", 364.0, 377.0], ["Alliaria petiolata is a Eurasian biennial herb that is invasive in North America and for which phenotypic plasticity has been noted as a potentially important invasive trait. Using four European and four North American populations, we explored variation among populations in the response of a suite of antioxidant, antiherbivore, and morphological traits to the availability of water and nutrients and to jasmonic acid treatment. Multivariate analyses revealed substantial variation among populations in mean levels of these traits and in the response of this suite of traits to environmental variation, especially water availability. Univariate analyses revealed variation in plasticity among populations in the expression of all of the traits measured to at least one of these environmental factors, with the exception of leaf length. There was no evidence for continentally distinct plasticity patterns, but there was ample evidence for variation in phenotypic plasticity among the populations within continents. This implies that A. petiolata has the potential to evolve distinct phenotypic plasticity patterns within populations but that invasive populations are no more plastic than native populations.", "which Continent ?", "North America", 67.0, 80.0], ["Abstract Enemy release is a commonly accepted mechanism to explain plant invasions. Both the diploid Leucanthemum vulgare and the morphologically very similar tetraploid Leucanthemum ircutianum have been introduced into North America. To verify which species is more prevalent in North America we sampled 98 Leucanthemum populations and determined their ploidy level. Although polyploidy has repeatedly been proposed to be associated with increased invasiveness in plants, only two of the populations surveyed in North America were the tetraploid L. ircutianum. We tested the enemy release hypothesis by first comparing 20 populations of L. vulgare and 27 populations of L. ircutianum in their native range in Europe, and then comparing the European L. vulgare populations with 31 L. vulgare populations sampled in North America. Characteristics of the site and associated vegetation, plant performance and invertebrate herbivory were recorded. In Europe, plant height and density of the two species were similar but L. vulgare produced more flower heads than L. ircutianum. Leucanthemum vulgare in North America was 17 % taller, produced twice as many flower heads and grew much denser compared to L. vulgare in Europe. Attack rates by root- and leaf-feeding herbivores on L. vulgare in Europe (34 and 75 %) were comparable to those on L. ircutianum (26 and 71 %) but higher than those on L. vulgare in North America (10 and 3 %). However, herbivore load and leaf damage were low in Europe. Cover and height of the co-occurring vegetation were higher in L. vulgare populations in the native than in the introduced range, suggesting that a shift in plant competition may more easily explain the invasion success of L. vulgare than escape from herbivory.", "which Continent ?", "North America", 220.0, 233.0], ["The potential of introduced species to become invasive is often linked to their ability to colonise disturbed habitats rapidly. 
We studied the effects of major disturbance by severe storms on the indigenous mussel Perna perna and the invasive mussel Mytilus galloprovincialis in sympatric intertidal populations on the south coast of South Africa. At the study sites, these species dominate different shore levels and co-exist in the mid mussel zone. We tested the hypotheses that in the mid-zone P. perna would suffer less dislodgment than M. galloprovincialis, because of its greater tenacity, while M. galloprovincialis would respond with a higher re-colonisation rate. We estimated the percent cover of the 2 mussels in the mid-zone from photographs, once before severe storms and 3 times afterwards. M. galloprovincialis showed faster re-colonisation and 3 times more cover than P. perna 1 and 1.5 yr after the storms (when populations had recovered). Storm-driven dislodgment in the mid-zone was highest for the species that initially dominated at each site, conforming to the concept of compensatory mortality. This resulted in similar cover of the 2 species immediately after the storms. Thus, the storm wave forces exceeded the tenacity even of P. perna, while the higher recruitment rate of M. galloprovincialis can explain its greater colonisation ability. We predict that, because of its weaker attachment strength, M. galloprovincialis will be largely excluded from open coast sites where wave action is generally stronger, but that its greater capacity for exploitation competition through re-colonisation will allow it to outcompete P. perna in more sheltered areas (especially in bays) that are periodically disturbed by storms.", "which Continent ?", "Africa", 340.0, 346.0], ["Alien plants invade many ecosystems worldwide and often have substantial negative effects on ecosystem structure and functioning. Our ability to quantitatively predict these impacts is, in part, limited by the absence of suitable plant-spread models and by inadequate parameter estimates for such models. This paper explores the effects of model, plant, and environmental attributes on predicted rates and patterns of spread of alien pine trees (Pinus spp.) in South African fynbos (a mediterranean-type shrubland). A factorial experimental design was used to: (1) compare the predictions of a simple reaction-diffusion model and a spatially explicit, individual-based simulation model; (2) investigate the sensitivity of predicted rates and patterns of spread to parameter values; and (3) quantify the effects of the simulation model's spatial grain on its predictions. The results show that the spatial simulation model places greater emphasis on interactions among ecological processes than does the reaction-diffusion model. This ensures that the predictions of the two models differ substantially for some factor combinations. The most important factor in the model is dispersal ability. Fire frequency, fecundity, and age of reproductive maturity are less important, while adult mortality has little effect on the model's predictions. The simulation model's predictions are sensitive to the model's spatial grain. This suggests that simulation models that use matrices as a spatial framework should ensure that the spatial grain of the model is compatible with the spatial processes being modeled. We conclude that parameter estimation and model development must be integrated procedures. This will ensure that the model's structure is compatible with the biological processes being modeled. 
Failure to do so may result in spurious predictions.", "which Continent ?", "Africa", NaN, NaN], ["Abstract The spread of nonnative species over the last century has profoundly altered freshwater ecosystems, resulting in novel species assemblages. Interactions between nonnative species may alter their impacts on native species, yet few studies have addressed multispecies interactions. The spread of whirling disease, caused by the nonnative parasite Myxobolus cerebralis, has generated declines in wild trout populations across western North America. Westslope Cutthroat Trout Oncorhynchus clarkii lewisi in the northern Rocky Mountains are threatened by hybridization with introduced Rainbow Trout O. mykiss. Rainbow Trout are more susceptible to whirling disease than Cutthroat Trout and may be more vulnerable due to differences in spawning location. We hypothesized that the presence of whirling disease in a stream would (1) reduce levels of introgressive hybridization at the site scale and (2) limit the size of the hybrid zone at the whole-stream scale. We measured levels of introgression and the spatial ext...", "which Continent ?", "North America", 440.0, 453.0], ["Abstract: Invasive alien organisms pose a major threat to global biodiversity. The Cape Peninsula, South Africa, provides a case study of the threat of alien plants to native plant diversity. We sought to identify where alien plants would invade the landscape and what their threat to plant diversity could be. This information is needed to develop a strategy for managing these invasions at the landscape scale. We used logistic regression models to predict the potential distribution of six important invasive alien plants in relation to several environmental variables. The logistic regression models showed that alien plants could cover over 89% of the Cape Peninsula. Acacia cyclops and Pinus pinaster were predicted to cover the greatest area. These predictions were overlaid on the current distribution of native plant diversity for the Cape Peninsula in order to quantify the threat of alien plants to native plant diversity. We defined the threat to native plant diversity as the number of native plant species (divided into all species, rare and threatened species, and endemic species) whose entire range is covered by the predicted distribution of alien plant species. We used a null model, which assumed a random distribution of invaded sites, to assess whether area invaded is confounded with threat to native plant diversity. The null model showed that most alien species threaten more plant species than might be suggested by the area they are predicted to invade. For instance, the logistic regression model predicted that P. pinaster threatens 350 more native species, 29 more rare and threatened species, and 21 more endemic species than the null model would predict. Comparisons between the null and logistic regression models suggest that species richness and invasibility are positively correlated and that species richness is a poor indicator of invasive resistance in the study site. Our results emphasize the importance of adopting a spatially explicit approach to quantifying threats to biodiversity, and they provide the information needed to prioritize threats from alien species and the sites that need urgent management intervention.", "which Continent ?", "Africa", 105.0, 111.0], ["1 During the last centuries many alien species have established and spread in new regions, where some of them cause large ecological and economic problems. 
As one of the main explanations of the spread of alien species, the enemy\u2010release hypothesis is widely accepted and frequently serves as justification for biological control. 2 We used a global fungus\u2013plant host distribution data set for 140 North American plant species naturalized in Europe to test whether alien plants are generally released from foliar and floral pathogens, whether they are mainly released from pathogens that are rare in the native range, and whether geographic spread of the North American plant species in Europe is associated with release from fungal pathogens. 3 We show that the 140 North American plant species naturalized in Europe were released from 58% of their foliar and floral fungal pathogen species. However, when we also consider fungal pathogens of the native North American host range that in Europe so far have only been reported on other plant species, the estimated release is reduced to 10.3%. Moreover, in Europe North American plants have mainly escaped their rare pathogens, of which the impact is restricted to few populations. Most importantly and directly opposing the enemy\u2010release hypothesis, geographic spread of the alien plants in Europe was negatively associated with their release from fungal pathogens. 4 Synthesis. North American plants may have escaped particular fungal species that control them in their native range, but based on total loads of fungal species, release from foliar and floral fungal pathogens does not explain the geographic spread of North American plant species in Europe. To test whether enemy release is the major driver of plant invasiveness, we urgently require more studies comparing release of invasive and non\u2010invasive alien species from enemies of different guilds, and studies that assess the actual impact of the enemies.", "which Continent ?", "North America", NaN, NaN], ["1 During the last centuries many alien species have established and spread in new regions, where some of them cause large ecological and economic problems. As one of the main explanations of the spread of alien species, the enemy\u2010release hypothesis is widely accepted and frequently serves as justification for biological control. 2 We used a global fungus\u2013plant host distribution data set for 140 North American plant species naturalized in Europe to test whether alien plants are generally released from foliar and floral pathogens, whether they are mainly released from pathogens that are rare in the native range, and whether geographic spread of the North American plant species in Europe is associated with release from fungal pathogens. 3 We show that the 140 North American plant species naturalized in Europe were released from 58% of their foliar and floral fungal pathogen species. However, when we also consider fungal pathogens of the native North American host range that in Europe so far have only been reported on other plant species, the estimated release is reduced to 10.3%. Moreover, in Europe North American plants have mainly escaped their rare pathogens, of which the impact is restricted to few populations. Most importantly and directly opposing the enemy\u2010release hypothesis, geographic spread of the alien plants in Europe was negatively associated with their release from fungal pathogens. 4 Synthesis. 
North American plants may have escaped particular fungal species that control them in their native range, but based on total loads of fungal species, release from foliar and floral fungal pathogens does not explain the geographic spread of North American plant species in Europe. To test whether enemy release is the major driver of plant invasiveness, we urgently require more studies comparing release of invasive and non\u2010invasive alien species from enemies of different guilds, and studies that assess the actual impact of the enemies.", "which Continent ?", "Europe", 442.0, 448.0], ["Biotic resistance, the ability of species in a community to limit invasion, is central to our understanding of how communities at risk of invasion assemble after disturbances, but it has yet to translate into guiding principles for the restoration of invasion\u2010resistant plant communities. We combined experimental, functional, and modelling approaches to investigate processes of community assembly contributing to biotic resistance to an introduced lineage of Phragmites australis, a model invasive species in North America. We hypothesized that (i) functional group identity would be a good predictor of biotic resistance to P. australis, while species identity effect would be redundant within functional group (ii) mixtures of species would be more invasion resistant than monocultures. We classified 36 resident wetland plants into four functional groups based on eight functional traits. We conducted two competition experiments based on the additive competition design with P. australis and monocultures or mixtures of wetland plants. As an indicator of biotic resistance, we calculated a relative competition index (RCIavg) based on the average performance of P. australis in competition treatment compared with control. To explain diversity effect further, we partitioned it into selection effect and complementarity effect and tested several diversity\u2013interaction models. In monoculture treatments, RCIavg of wetland plants was significantly different among functional groups, but not within each functional group. We found the highest RCIavg for fast\u2010growing annuals, suggesting priority effect. RCIavg of wetland plants was significantly greater in mixture than in monoculture mainly due to complementarity\u2013diversity effect among functional groups. In diversity\u2013interaction models, species interaction patterns in mixtures were described best by interactions between functional groups when fitted to RCIavg or biomass, implying niche partitioning. Synthesis. Functional group identity and diversity of resident plant communities are good indicators of biotic resistance to invasion by introduced Phragmites australis, suggesting niche pre\u2010emption (priority effect) and niche partitioning (diversity effect) as underlying mechanisms. Guiding principles to understand and/or manage biological invasion could emerge from advances in community theory and the use of a functional framework. Targeting widely distributed invasive plants in different contexts and scaling up to field situations will facilitate generalization.", "which Continent ?", "North America", 511.0, 524.0], ["This study provides an updated picture of mammal invasions in Europe, based on detailed analysis of information on introductions occurring from the Neolithic to recent times. 
The assessment considered all information on species introductions, known extinctions and successful eradication campaigns, to reconstruct a trend of alien mammals' establishment in the region. Through a comparative analysis of the data on introduction, with the information on the impact of alien mammals on native and threatened species of Europe, the present study also provides an objective assessment of the overall impact of mammal introductions on European biodiversity, including information on impact mechanisms. The results of this assessment confirm the constant increase of mammal invasions in Europe, with no indication of a reduction of the rate of introduction. The study also confirms the severe impact of alien mammals, which directly threaten a significant number of native species, including many highly threatened species. The results could help to prioritize species for response, as required by international conventions and obligations.", "which Continent ?", "Europe", 62.0, 68.0], ["Question The relative importance of environmental vs. biotic resistance of recipient ecological communities remains poorly understood in invasion ecology. Acer negundo, a North American tree, has widely invaded riparian forests throughout Europe at the ecotone between early- (Salix spp. and Populus spp.) and late-successional (Fraxinus spp.) species. However, it is not present in the upper part of the Rhone River, where native Alnus incana occurs at an intermediate position along the successional riparian gradient. Is this absence of the invasive tree due to environmental or biotic resistance of the recipient communities, and in particular due to the presence of Alnus? Location Upper Rhone River, France. Methods We undertook a transplant experiment in an Alnus-dominated community along the Upper Rhone River, where we compared Acer negundo survival and growth, with and without biotic interactions (tree and herb layer effects), to those of four native tree species from differing successional positions in the Upper Rhone communities (P. alba, S. alba, F. excelsior and Alnus incana). Results Without biotic interactions Acer negundo performed similarly to native species, suggesting that the Upper Rhone floodplain is not protected from Acer invasion by a simple abiotic barrier. In contrast, this species performed less well than F. excelsior and Alnus incana in environments with intact tree and/or herb layers. Alnus showed the best growth rate in these conditions, indicating biotic resistance of the native plant community. Conclusions We did not find evidence for an abiotic barrier to Acer negundo invasion of the Upper Rhone River floodplain communities, but our results suggest a biotic resistance. In particular, we demonstrated that (i) additive competitive effects of the tree and herb layer led to Acer negundo suppression and (ii) Alnus incana grew more rapidly than Acer negundo in this intermediate successional niche.", "which Continent ?", "Europe", 239.0, 245.0], ["ABSTRACT Question: Do anthropogenic activities facilitate the distribution of exotic plants along steep altitudinal gradients? Location: Sani Pass road, Grassland biome, South Africa. Methods: On both sides of this road, presence and abundance of exotic plants was recorded in four 25-m long road-verge plots and in parallel 25 m \u00d7 2 m adjacent land plots, nested at five altitudinal levels: 1500, 1800, 2100, 2400 and 2700 m a.s.l. 
Exotic community structure was analyzed using Canonical Correspondence Analysis while a two-level nested Generalized Linear Model was fitted for richness and cover of exotics. We tested the upper altitudinal limits for all exotics along this road for spatial clustering around four potential propagule sources using a t-test. Results: Community structure, richness and abundance of exotics were negatively correlated with altitude. Greatest invasion by exotics was recorded for adjacent land at the 1500 m level. Of the 45 exotics, 16 were found at higher altitudes than expected and observations were spatially clustered around potential propagule sources. Conclusions: Spatial clustering of upper altitudinal limits around human inhabited areas suggests that exotics originate from these areas, while exceeding expected altitudinal limits suggests that distribution ranges of exotics are presently underestimated. Exotics are generally characterised by a high propagule pressure and/or persistent seedbanks, thus future tarring of the Sani Pass may result in an increase of exotic species richness and abundance. This would initially result from construction-related soil disturbance and subsequently from increased traffic, water run-off, and altered fire frequency. We suggest examples of management actions to prevent this. Nomenclature: Germishuizen & Meyer (2003).", "which Continent ?", "Africa", 176.0, 182.0], ["Propagule pressure is intuitively a key factor in biological invasions: increased availability of propagules increases the chances of establishment, persistence, naturalization, and invasion. The role of propagule pressure relative to disturbance and various environmental factors is, however, difficult to quantify. We explored the relative importance of factors driving invasions using detailed data on the distribution and percentage cover of alien tree species on South Africa\u2019s Agulhas Plain (2,160 km2). Classification trees based on geology, climate, land use, and topography adequately explained distribution but not abundance (canopy cover) of three widespread invasive species (Acacia cyclops, Acacia saligna, and Pinus pinaster). A semimechanistic model was then developed to quantify the roles of propagule pressure and environmental heterogeneity in structuring invasion patterns. The intensity of propagule pressure (approximated by the distance from putative invasion foci) was a much better predictor of canopy cover than any environmental factor that was considered. The influence of environmental factors was then assessed on the residuals of the first model to determine how propagule pressure interacts with environmental factors. The mediating effect of environmental factors was species specific. Models combining propagule pressure and environmental factors successfully predicted more than 70% of the variation in canopy cover for each species.", "which Continent ?", "Africa", 474.0, 480.0], ["Abstract Biological invasions are a key threat to freshwater biodiversity, and identifying determinants of invasion success is a global conservation priority. The establishment of introduced species is predicted to be hindered by pre-existing, functionally similar invasive species. Over a five-year period we, however, find that in the River Lee (UK), recently introduced non-native virile crayfish (Orconectes virilis) increased in range and abundance, despite the presence of established alien signal crayfish (Pacifastacus leniusculus). 
In regions of sympatry, virile crayfish had a detrimental effect on signal crayfish abundance but not vice versa. Competition experiments revealed that virile crayfish were more aggressive than signal crayfish and outcompeted them for shelter. Together, these results provide early evidence for the potential over-invasion of signal crayfish by competitively dominant virile crayfish. Based on our results and the limited distribution of virile crayfish in Europe, we recommend that efforts to contain them within the Lee catchment be implemented immediately.", "which Continent ?", "Europe", 998.0, 1004.0], ["Abstract Understanding how the landscape-scale replacement of indigenous plants with alien plants influences ecosystem structure and functioning is critical in a world characterized by increasing biotic homogenization. An important step in this process is to assess the impact on invertebrate communities. Here we analyse insect species richness and abundance in sweep collections from indigenous and alien (Australasian) woody plant species in South Africa's Western Cape. We use phylogenetically relevant comparisons and compare one indigenous with three Australasian alien trees within each of Fabaceae: Mimosoideae, Myrtaceae, and Proteaceae: Grevilleoideae. Although some of the alien species analysed had remarkably high abundances of herbivores, even when intentionally introduced biological control agents are discounted, overall, herbivorous insect assemblages from alien plants were slightly less abundant and less diverse compared with those from indigenous plants \u2013 in accordance with predictions from the enemy release hypothesis. However, there were no clear differences in other insect feeding guilds. We conclude that insect assemblages from alien plants are generally quite diverse, and significant differences between these and assemblages from indigenous plants are only evident for herbivorous insects.", "which Continent ?", "Africa", 451.0, 457.0], ["ABSTRACT Early successional ruderal plants in North America include numerous native and nonnative species, and both are abundant in disturbed areas. The increasing presence of nonnative plants may negatively impact a critical component of food web function if these species support fewer or a less diverse arthropod fauna than the native plant species that they displace. We compared arthropod communities on six species of common early successional native plants and six species of nonnative plants, planted in replicated native and nonnative plots in a farm field. Samples were taken twice each year for 2 yr. In most arthropod samples, total biomass and abundance were substantially higher on the native plants than on the nonnative plants. Native plants produced as much as five times more total arthropod biomass and up to seven times more species per 100 g of dry leaf biomass than nonnative plants. Both herbivores and natural enemies (predators and parasitoids) predominated on native plants when analyzed separately. In addition, species richness was about three times greater on native than on nonnative plants, with 83 species of insects collected exclusively from native plants, and only eight species present only on nonnatives. These results support a growing body of evidence suggesting that nonnative plants support fewer arthropods than native plants, and therefore contribute to reduced food resources for higher trophic levels.", "which Continent ?", "North America", 46.0, 59.0], ["Abstract: The reed Phragmites australis Cav. 
is aggressively invading salt marshes along the Atlantic Coast of North America. We examined the interactive role of habitat alteration (i.e., shoreline development) in driving this invasion and its consequences for plant richness in New England salt marshes. We surveyed 22 salt marshes in Narragansett Bay, Rhode Island, and quantified shoreline development, Phragmites cover, soil salinity, and nitrogen availability. Shoreline development, operationally defined as removal of the woody vegetation bordering marshes, explained >90% of intermarsh variation in Phragmites cover. Shoreline development was also significantly correlated with reduced soil salinities and increased nitrogen availability, suggesting that removing woody vegetation bordering marshes increases nitrogen availability and decreases soil salinities, thus facilitating Phragmites invasion. Soil salinity (64%) and nitrogen availability (56%) alone explained a large proportion of variation in Phragmites cover, but together they explained 80% of the variation in Phragmites invasion success. Both univariate and aggregate (multidimensional scaling) analyses of plant community composition revealed that Phragmites dominance in developed salt marshes resulted in an almost three\u2010fold decrease in plant species richness. Our findings illustrate the importance of maintaining integrity of habitat borders in conserving natural communities and provide an example of the critical role that local conservation can play in preserving these systems. In addition, our findings provide ecologists and natural resource managers with a mechanistic understanding of how human habitat alteration in one vegetation community can interact with species introductions in adjacent communities (i.e., flow\u2010on or adjacency effects) to hasten ecosystem degradation.", "which Continent ?", "North America", 111.0, 124.0], ["Data on floristic status, biological attributes, chronology and distribution of naturalized species have been shown to be a very powerful tool for discerning the patterns of plant invasions and species invasiveness. We analysed the newly compiled list of casual and naturalized plant species in Taiwan (probably the only complete data set of this kind in East Asia) and found that Taiwan is relatively lightly invaded with only 8% of the flora being casual or naturalized. Moreover, the index of casual and naturalized species per log area is also moderate, in striking contrast with many other island floras where contributions of naturalized species are much higher. Casual and naturalized species have accumulated steadily and almost linearly over the past decades. Fabaceae, Asteraceae, and Poaceae are the families with the most species. However, Amaranthaceae, Convolvulaceae, and Onagraceae have the largest ratios of casual and naturalized species to their global numbers. Ipomoea, Solanum and Crotalaria have the highest numbers of casual and naturalized species. About 60% of all genera with exotic species are new to Taiwan. Perennial herbs represent one third of the casual and naturalized flora, followed by annual herbs. About 60% of exotic species were probably introduced unintentionally onto the island; many species imported intentionally have ornamental, medicinal, or forage values. The field status of 50% of these species is unknown, but ornamentals represent noticeable proportions of naturalized species, while forage species represent a relatively larger proportion of casual species. 
Species introduced for medicinal purposes seem to be less invasive. Most of the casual and naturalized species of Taiwan originated from the Tropical Americas, followed by Asia and Europe.", "which Continent ?", "Asia", 360.0, 364.0], ["In recent decades the grass Phragmites australis has been aggressively invading coastal, tidal marshes of North America, and in many areas it is now considered a nuisance species. While P. australis has historically been restricted to the relatively benign upper border of brackish and salt marshes, it has been expanding seaward into more physiologically stressful regions. Here we test a leading hypothesis that the spread of P. australis is due to anthropogenic modification of coastal marshes. We did a field experiment along natural borders between stands of P. australis and the other dominant grasses and rushes (i.e., matrix vegetation) in a brackish marsh in Rhode Island, USA. We applied a pulse disturbance in one year by removing or not removing neighboring matrix vegetation and adding three levels of nutrients (specifically nitrogen) in a factorial design, and then we monitored the aboveground performance of P. australis and the matrix vegetation. Both disturbances increased the density, height, and biomass of shoots of P. australis, and the effects of fertilization were more pronounced where matrix vegetation was removed. Clearing competing matrix vegetation also increased the distance that shoots expanded and their reproductive output, both indicators of the potential for P. australis to spread within and among local marshes. In contrast, the biomass of the matrix vegetation decreased with increasing severity of disturbance. Disturbance increased the total aboveground production of plants in the marsh as matrix vegetation was displaced by P. australis. A greenhouse experiment showed that, with increasing nutrient levels, P. australis allocates proportionally more of its biomass to aboveground structures used for spread than to belowground structures used for nutrient acquisition. Therefore, disturbances that enrich nutrients or remove competitors promote the spread of P. australis by reducing belowground competition for nutrients between P. australis and the matrix vegetation, thus allowing P. australis, the largest plant in the marsh, to expand and displace the matrix vegetation. Reducing nutrient load and maintaining buffers of matrix vegetation along the terrestrial-marsh ecotone will, therefore, be important methods of control for this nuisance species.", "which Continent ?", "North America", 106.0, 119.0], ["Woodlands comprised of planted, nonnative trees are increasing in extent globally, while native woodlands continue to decline due to human activities. The ecological impacts of planted woodlands may include changes to the communities of understory plants and animals found among these nonnative trees relative to native woodlands, as well as invasion of adjacent habitat areas through spread beyond the originally planted areas. Eucalypts (Eucalyptus spp.) are among the most widely planted trees worldwide, and are very common in California, USA. The goals of our investigation were to compare the biological communities of nonnative eucalypt woodlands to native oak woodlands in coastal central California, and to examine whether planted eucalypt groves have increased in size over the past decades. 
We assessed site and habitat attributes and characterized biological communities using understory plant, ground-dwelling arthropod, amphibian, and bird communities as indicators. Degree of difference between native and nonnative woodlands depended on the indicator used. Eucalypts had significantly greater canopy height and cover, and significantly lower cover by perennial plants and species richness of arthropods than oaks. Community composition of arthropods also differed significantly between eucalypts and oaks. Eucalypts had marginally significantly deeper litter depth, lower abundance of native plants with ranges limited to western North America, and lower abundance of amphibians. In contrast to these differences, eucalypt and oak groves had very similar bird community composition, species richness, and abundance. We found no evidence of \"invasional meltdown,\" documenting similar abundance and richness of nonnatives in eucalypt vs. oak woodlands. Our time-series analysis revealed that planted eucalypt groves increased 271% in size, on average, over six decades, invading adjacent areas. Our results inform science-based management of California woodlands, revealing that while bird communities would probably not be affected by restoration of eucalypt to oak woodlands, such a restoration project would not only stop the spread of eucalypts into adjacent habitats but would also enhance cover by western North American native plants and perennials, enhance amphibian abundance, and increase arthropod richness.", "which Continent ?", "North America", 1446.0, 1459.0], ["*Globally, exotic invaders threaten biodiversity and ecosystem function. Studies often report that invading plants are less affected by enemies in their invaded vs home ranges, but few studies have investigated the underlying mechanisms. *Here, we investigated the variation in prevalence, species composition and virulence of soil-borne Pythium pathogens associated with the tree Prunus serotina in its native US and non-native European ranges by culturing, DNA sequencing and controlled pathogenicity trials. *Two controlled pathogenicity experiments showed that Pythium pathogens from the native range caused 38-462% more root rot and 80-583% more seedling mortality, and 19-45% less biomass production than Pythium from the non-native range. DNA sequencing indicated that the most virulent Pythium taxa were sampled only from the native range. The greater virulence of Pythium sampled from the native range therefore corresponded to shifts in species composition across ranges rather than variation within a common Pythium species. *Prunus serotina still encounters Pythium in its non-native range but encounters less virulent taxa. Elucidating patterns of enemy virulence in native and nonnative ranges adds to our understanding of how invasive plants escape disease. Moreover, this strategy may identify resident enemies in the non-native range that could be used to manage invasive plants.", "which Continent ?", "Europe", NaN, NaN], ["Aims Adaptive evolution along geographic gradients of climatic conditions is suggested to facilitate the spread of invasive plant species, leading to clinal variation among populations in the introduced range. We investigated whether adaptation to climate is also involved in the invasive spread of an ornamental shrub, Buddleja davidii, across western and central Europe. 
Methods We combined a common garden experiment, replicated in three climatically different central European regions, with reciprocal transplantation to quantify genetic differentiation in growth and reproductive traits of 20 invasive B. davidii populations. Additionally, we compared compensatory regrowth among populations after clipping of stems to simulate mechanical damage.", "which Continent ?", "Europe", 365.0, 371.0], ["A prominent hypothesis for plant invasions is escape from the inhibitory effects of soil biota. Although the strength of these inhibitory effects, measured as soil feedbacks, has been assessed between natives and exotics in non\u2010native ranges, few studies have compared the strength of plant\u2013soil feedbacks for exotic species in soils from non\u2010native versus native ranges. We examined whether 6 perennial European forb species that are widespread invaders in North American grasslands (Centaurea stoebe, Euphorbia esula, Hypericum perforatum, Linaria vulgaris, Potentilla recta and Leucanthemum vulgare) experienced different suppressive effects of soil biota collected from 21 sites across both ranges. Four of the six species tested exhibited substantially reduced shoot biomass in \u2018live\u2019 versus sterile soil from Europe. In contrast, North American soils produced no significant feedbacks on any of the invasive species tested indicating a broad scale escape from the inhibitory effects of soil biota. Negative feedbacks generated by European soil varied idiosyncratically among sites and species. Since this variation did not correspond with the presence of the target species at field sites, it suggests that negative feedbacks can be generated from soil biota that are widely distributed in native ranges in the absence of density\u2010dependent effects. Synthesis. Our results show that for some invasives, native soils have strong suppressive potential, whereas this is not the case in soils from across the introduced range. Differences in regional\u2010scale evolutionary history among plants and soil biota could ultimately help explain why some exotics are able to occur at higher abundance in the introduced versus native range.", "which Continent ?", "North America", NaN, NaN], ["A central question in ecology concerns how some exotic plants that occur at low densities in their native range are able to attain much higher densities where they are introduced. This question has remained unresolved in part due to a lack of experiments that assess factors that affect the population growth or abundance of plants in both ranges. We tested two hypotheses for exotic plant success: escape from specialist insect herbivores and a greater response to disturbance in the introduced range. Within three introduced populations in Montana, USA, and three native populations in Germany, we experimentally manipulated insect herbivore pressure and created small-scale disturbances to determine how these factors affect the performance of houndstongue (Cynoglossum officinale), a widespread exotic in western North America. Herbivores reduced plant size and fecundity in the native range but had little effect on plant performance in the introduced range. Small-scale experimental disturbances enhanced seedling recruitment in both ranges, but subsequent seedling survival was more positively affected by disturbance in the introduced range. 
We combined these experimental results with demographic data from each population to parameterize integral projection population models to assess how enemy escape and disturbance might differentially influence C. officinale in each range. Model results suggest that escape from specialist insects would lead to only slight increases in the growth rate (lambda) of introduced populations. In contrast, the larger response to disturbance in the introduced vs. native range had much greater positive effects on lambda. These results together suggest that, at least in the regions where the experiments were performed, the differences in response to small disturbances by C. officinale contribute more to higher abundance in the introduced range compared to at home. Despite the challenges of conducting experiments on a wide biogeographic scale and the logistical constraints of adequately sampling populations within a range, this approach is a critical step forward to understanding the success of exotic plants.", "which Continent ?", "North America", 817.0, 830.0], ["Hanley ME (2012). Seedling defoliation, plant growth and flowering potential in native- and invasive-range Plantago lanceolata populations. Weed Research 52, 252\u2013259. Summary The plastic response of weeds to new environmental conditions, in particular the likely relaxation of herbivore pressure, is considered vital for successful colonisation and spread. However, while variation in plant anti-herbivore resistance between native- and introduced-range populations is well studied, few authors have considered herbivore tolerance, especially at the seedling stage. This study examines variation in seedling tolerance in native (European) and introduced (North American) Plantago lanceolata populations following cotyledon removal at 14 days old. Subsequent effects on plant growth were quantified at 35 days, along with effects on flowering potential at maturity. Cotyledon removal reduced early growth for all populations, with no variation between introduced- or native-range plants. Although more variable, the effects of cotyledon loss on flowering potential were also unrelated to range. The likelihood that generalist seedling herbivores are common throughout North America may explain why no difference in seedling tolerance was apparent. However, increased flowering potential in plants from North American P. lanceolata populations was observed. As increased flowering potential was not lost, even after severe cotyledon damage, the manifestation of phenotypic plasticity in weeds at maturity may nonetheless still be shaped by plasticity in the ability to tolerate herbivory during seedling establishment.", "which Continent ?", "North America", 1167.0, 1180.0], ["What determines the number of alien species in a given region? \u2018Native biodiversity\u2019 and \u2018human impact\u2019 are typical answers to this question. Indeed, studies comparing different regions have frequently found positive relationships between number of alien species and measures of both native biodiversity (e.g. the number of native species) and human impact (e.g. human population). These relationships are typically explained by biotic acceptance or resistance, i.e. by influence of native biodiversity and human impact on the second step of the invasion process, establishment. The first step of the invasion process, introduction, has often been ignored. 
Here we investigate whether relationships between number of alien mammals and native biodiversity or human impact in 43 European countries are mainly shaped by differences in number of introduced mammals or establishment success. Our results suggest that correlation between number of native and established mammals is spurious, as it is simply explainable by the fact that both quantities are linked to country area. We also demonstrate that countries with higher human impact host more alien mammals than other countries because they received more introductions than other countries. Differences in number of alien mammals cannot be explained by differences in establishment success. Our findings highlight importance of human activities and question, at least for mammals in Europe, importance of biotic acceptance and resistance.", "which Continent ?", "Europe", 1435.0, 1441.0], ["Methods of risk assessment for alien species, especially for nonagricultural systems, are largely qualitative. Using a generalizable risk assessment approach and statistical models of fish introductions into the Great Lakes, North America, we developed a quantitative approach to target prevention efforts on species most likely to cause damage. Models correctly categorized established, quickly spreading, and nuisance fishes with 87 to 94% accuracy. We then identified fishes that pose a high risk to the Great Lakes if introduced from unintentional (ballast water) or intentional pathways (sport, pet, bait, and aquaculture industries).", "which Continent ?", "North America", 225.0, 238.0], ["1 The cultivation and dissemination of alien ornamental plants increases their potential to invade. More specifically, species with bird\u2010dispersed seeds can potentially infiltrate natural nucleation processes in savannas. 2 To test (i) whether invasion depends on facilitation by host trees, (ii) whether propagule pressure determines invasion probability, and (iii) whether alien host plants are better facilitators of alien fleshy\u2010fruited species than indigenous species, we mapped the distribution of alien fleshy\u2010fruited species planted inside a military base, and compared this with the distribution of alien and native fleshy\u2010fruited species established in the surrounding natural vegetation. 3 Abundance and diversity of fleshy\u2010fruited plant species was much greater beneath tree canopies than in open grassland and, although some native fleshy\u2010fruited plants were found both beneath host trees and in the open, alien fleshy\u2010fruited plants were found only beneath trees. 4 Abundance of fleshy\u2010fruited alien species in the natural savanna was positively correlated with the number of individuals of those species planted in the grounds of the military base, while the species richness of alien fleshy\u2010fruited taxa decreased with distance from the military base, supporting the notion that propagule pressure is a fundamental driver of invasions. 5 There were more fleshy\u2010fruited species beneath native Acacia tortilis than beneath alien Prosopis sp. trees of the equivalent size. Although there were significant differences in native plant assemblages beneath these hosts, the proportion of alien to native fleshy\u2010fruited species did not differ with host. 6 Synthesis. Birds facilitate invasion of a semi\u2010arid African savanna by alien fleshy\u2010fruited plants, and this process does not require disturbance. 
Instead, propagule pressure and a few simple biological observations define the probability that a plant will invade, with alien species planted in gardens being a major source of propagules. Some invading species have the potential to transform this savanna by overtopping native trees, leading to ecosystem\u2010level impacts. Likewise, the invasion of the open savanna by alien host trees (such as Prosopis sp.) may change the diversity, abundance and species composition of the fleshy\u2010fruited understorey. These results illustrate the complex interplay between propagule pressure, facilitation, and a range of other factors in biological invasions.", "which Continent ?", "Africa", NaN, NaN], ["The paper provides the first estimate of the composition and structure of alien plants occurring in the wild in the European continent, based on the results of the DAISIE project (2004\u20132008), funded by the 6th Framework Programme of the European Union and aimed at \u201ccreating an inventory of invasive species that threaten European terrestrial, freshwater and marine environments\u201d. The plant section of the DAISIE database is based on national checklists from 48 European countries/regions and Israel; for many of them the data were compiled during the project and for some countries DAISIE collected the first comprehensive checklists of alien species, based on primary data (e.g., Cyprus, Greece, F. Y. R. O. Macedonia, Slovenia, Ukraine). In total, the database contains records of 5789 alien plant species in Europe (including those native to a part of Europe but alien to another part), of which 2843 are alien to Europe (of extra-European origin). The research focus was on naturalized species; there are in total 3749 naturalized aliens in Europe, of which 1780 are alien to Europe. This represents a marked increase compared to 1568 alien species reported by a previous analysis of data in Flora Europaea (1964\u20131980). Casual aliens were marginally considered and are represented by 1507 species with European origins and 872 species whose native range falls outside Europe. The highest diversity of alien species is concentrated in industrialized countries with a tradition of good botanical recording or intensive recent research. The highest number of all alien species, regardless of status, is reported from Belgium (1969), the United Kingdom (1779) and Czech Republic (1378). The United Kingdom (857), Germany (450), Belgium (447) and Italy (440) are countries with the most naturalized neophytes. The number of naturalized neophytes in European countries is determined mainly by the interaction of temperature and precipitation; it increases with increasing precipitation but only in climatically warm and moderately warm regions. Of the nowadays naturalized neophytes alien to Europe, 50% arrived after 1899, 25% after 1962 and 10% after 1989. At present, approximately 6.2 new species, that are capable of naturalization, are arriving each year. Most alien species have relatively restricted European distributions; half of all naturalized species occur in four or fewer countries/regions, whereas 70% of non-naturalized species occur in only one region. Alien species are drawn from 213 families, dominated by large global plant families which have a weedy tendency and have undergone major radiations in temperate regions (Asteraceae, Poaceae, Rosaceae, Fabaceae, Brassicaceae). 
There are 1567 genera, which have alien members in European countries, the commonest being globally-diverse genera comprising mainly urban and agricultural weeds (e.g., Amaranthus, Chenopodium and Solanum) or cultivated for ornamental purposes (Cotoneaster, the genus richest in alien species). Only a few large genera which have successfully invaded (e.g., Oenothera, Oxalis, Panicum, Helianthus) are predominantly of non-European origin. Conyza canadensis, Helianthus tuberosus and Robinia pseudoacacia are most widely distributed alien species. Of all naturalized aliens present in Europe, 64.1% occur in industrial habitats and 58.5% on arable land and in parks and gardens. Grasslands and woodlands are also highly invaded, with 37.4 and 31.5%, respectively, of all naturalized aliens in Europe present in these habitats. Mires, bogs and fens are least invaded; only approximately 10% of aliens in Europe occur there. Intentional introductions to Europe (62.8% of the total number of naturalized aliens) prevail over unintentional (37.2%). Ornamental and horticultural introductions escaped from cultivation account for the highest number of species, 52.2% of the total. Among unintentional introductions, contaminants of seed, mineral materials and other commodities are responsible for 1091 alien species introductions to Europe (76.6% of all species introduced unintentionally) and 363 species are assumed to have arrived as stowaways (directly associated with human transport but arriving independently of commodity). Most aliens in Europe have a native range in the same continent (28.6% of all donor region records are from another part of Europe where the plant is native); in terms of species numbers the contribution of Europe as a region of origin is 53.2%. Considering aliens to Europe separately, 45.8% of species have their native distribution in North and South America, 45.9% in Asia, 20.7% in Africa and 5.3% in Australasia. Based on species composition, European alien flora can be classified into five major groups: (1) north-western, comprising Scandinavia and the UK; (2) west-central, extending from Belgium and the Netherlands to Germany and Switzerland; (3) Baltic, including only the former Soviet Baltic states; (4) east-central, comprizing the remainder of central and eastern Europe; (5) southern, covering the entire Mediterranean region. The clustering patterns cut across some European bioclimatic zones; cultural factors such as regional trade links and traditional local preferences for crop, forestry and ornamental species are also important by influencing the introduced species pool. Finally, the paper evaluates a state of the art in the field of plant invasions in Europe, points to research gaps and outlines avenues of further research towards documenting alien plant invasions in Europe. The data are of varying quality and need to be further assessed with respect to the invasion status and residence time of the species included. This concerns especially the naturalized/casual status; so far, this information is available comprehensively for only 19 countries/regions of the 49 considered. Collating an integrated database on the alien flora of Europe can form a principal contribution to developing a European-wide management strategy of alien species.", "which Continent ?", "Europe", 812.0, 818.0], [": Alien plant species have rapidly invaded and successfully displaced native species in many grasslands of western North America. 
Thus, the status of alien species in the nature reserve grasslands of this region warrants special attention. This study describes alien flora in nine fescue grassland study sites adjacent to three types of transportation corridors\u2014primary roads, secondary roads, and backcountry trails\u2014in Glacier National Park, Montana (U.S.A.). Parallel transects, placed at varying distances from the adjacent road or trail, were used to determine alien species richness and frequency at individual study sites. Fifteen alien species were recorded, two Eurasian grasses, Phleum pratense and Poa pratensis, being particularly common in most of the study sites. In sites adjacent to primary and secondary roads, alien species richness declined out to the most distant transect, suggesting that alien species are successfully invading grasslands from the roadside area. In study sites adjacent to backcountry trails, absence of a comparable decline and unexpectedly high levels of alien species richness 100 m from the trailside suggest that alien species have been introduced in off-trail areas. The results of this study imply that in spite of low levels of livestock grazing and other anthropogenic disturbances, fescue grasslands in nature reserves of this region are vulnerable to invasion by alien flora. Given the prominent role that roadsides play in the establishment and dispersal of alien flora, road construction should be viewed from a biological, rather than an engineering, perspective. Nature reserve man agers should establish effective roadside vegetation management programs that include monitoring, quickly treating keystone alien species upon their initial occurrence in nature reserves, and creating buffer zones on roadside leading to nature reserves. Resumen: Especies de plantas introducidas han invadido rapidamente y desplazado exitosamente especies nativas en praderas del Oeste de America del Norte. Por lo tanto el estado de las especies introducidas en las reservas de pastizales naturales de esta region exige especial atencion. Este estudio describe la flora introducida en nueve pastizales naturales de festuca, las areas de estudios son adyacentes a tres tipos decorredores de transporte\u2014caminos primarios, caminos secundarios y senderos remotos\u2014en el Parque Nacional \u201cGlacier,\u201d Montana (EE.UU). Para determinar riqueza y frecuencia de especies introducidas, se trazaron transectas paralelas, localizadas a distancias variables del camino o sendero adyacente en las areas de estudio. Se registraron quince especies introducidas. Dos pastos eurasiaticos, Phleum pratensis y Poa pratensis, resultaron particularmente abuntes en la mayoria de las areas de estudio. En lugares adyacentes a caminos primarios y secundarios, la riqueza de especies introducidas disminuyo en la direccion de las transectas mas distantes, sugiriendo que las especies introducidas estan invadiendo exitosamente las praderas desde areas aledanas a caminos. En las areas de estudio adyacentes a senderos remotos no se encontro una disminucion comparable; inesperados altos niveles de riqueza de especies introducidas a 100 m de los senderos, sugieren que las especies foraneas han sido introducidas desde otras areas fuero de los senderos. Los resultados de este estudio implican que a pesar de los bajos niveles de pastoreo y otras perturbaciones antropogenicas, los pastizales de festuca en las reservas naturales de esta region son vulnerables a la invasion de la flora introducida. 
Dada el rol preponderante que juegan los caminos en el establecimiento y dispersion de la flora introducida, la construccion de rutas debe ser vista desde un punto de vista biologica, mas que desde una perspectiva meramente ingenieril. Los administradores de reservas naturales deberian establecer programas efectivos de manejo de vegetacion en los bordes de los caminos. Estos programas deberian incluir monitoreo, tratamiento rapido de especies introducidas y claves tan pronto como se detecten en las reservas naturales, y creacion de zonas de transicion en los caminos que conducen a las reservas naturales.", "which Continent ?", "North America", 115.0, 128.0], ["Although much of the theory on the success of invasive species has been geared at escape from specialist enemies, the impact of introduced generalist invertebrate herbivores on both native and introduced plant species has been underappreciated. The role of nocturnal invertebrate herbivores in structuring plant communities has been examined extensively in Europe, but less so in North America. Many nocturnal generalists (slugs, snails, and earwigs) have been introduced to North America, and 96% of herbivores found during a night census at our California Central Valley site were introduced generalists. We explored the role of these herbivores in the distribution, survivorship, and growth of 12 native and introduced plant species from six families. We predicted that introduced species sharing an evolutionary history with these generalists might be less vulnerable than native plant species. We quantified plant and herbivore abundances within our heterogeneous site and also established herbivore removal experiments in 160 plots spanning the gamut of microhabitats. As 18 collaborators, we checked 2000 seedling sites every day for three weeks to assess nocturnal seedling predation. Laboratory feeding trials allowed us to quantify the palatability of plant species to the two dominant nocturnal herbivores at the site (slugs and earwigs) and allowed us to account for herbivore microhabitat preferences when analyzing attack rates on seedlings. The relationship between local slug abundance and percent cover of five common plant taxa at the field site was significantly negatively associated with the mean palatability of these taxa to slugs in laboratory trials. Moreover, seedling mortality of 12 species in open-field plots was positively correlated with mean palatability of these taxa to both slugs and earwigs in laboratory trials. Counter to expectations, seedlings of native species were neither more vulnerable nor more palatable to nocturnal generalists than those of introduced species. Growth comparison of plants within and outside herbivore exclosures also revealed no differences between native and introduced plant species, despite large impacts of herbivores on growth. Cryptic nocturnal predation on seedlings was common and had large effects on plant establishment at our site. Without intensive monitoring, such predation could easily be misconstrued as poor seedling emergence.", "which Continent ?", "North America", 380.0, 393.0], ["The mechanisms underlying successful biological invasions often remain unclear. In the case of the tropical water flea Daphnia lumholtzi, which invaded North America, it has been suggested that this species possesses a high thermal tolerance, which in the course of global climate change promotes its establishment and rapid spread. However, D. 
lumholtzi has an additional remarkable feature: it is the only water flea that forms rigid head spines in response to chemicals released in the presence of fishes. These morphologically (phenotypically) plastic traits serve as an inducible defence against these predators. Here, we show in controlled mesocosm experiments that the native North American species Daphnia pulicaria is competitively superior to D. lumholtzi in the absence of predators. However, in the presence of fish predation the invasive species formed its defences and became dominant. This observation of a predator-mediated switch in dominance suggests that the inducible defence against fish predation may represent a key adaptation for the invasion success of D. lumholtzi.", "which Continent ?", "North America", 152.0, 165.0], ["1 The search for general characteristics of invasive species has not been very successful yet. A reason for this could be that current invasion patterns are mainly reflecting the introduction history (i.e. time since introduction and propagule pressure) of the species. Accurate data on the introduction history are, however, rare, particularly for introduced alien species that have not established. As a consequence, few studies that tested for the effects of species characteristics on invasiveness corrected for introduction history. 2 We tested whether the naturalization success of 582 North American woody species in Europe, measured as the proportion of European geographic regions in which each species is established, can be explained by their introduction history. For 278 of these species we had data on characteristics related to growth form, life cycle, growth, fecundity and environmental tolerance. We tested whether naturalization success can be further explained by these characteristics. In addition, we tested whether the effects of species characteristics differ between growth forms. 3 Both planting frequency in European gardens and time since introduction significantly increased naturalization success, but the effect of the latter was relatively weak. After correction for introduction history and taxonomy, six of the 26 species characteristics had significant effects on naturalization success. Leaf retention and precipitation tolerance increased naturalization success. Tree species were only 56% as likely to naturalize as non\u2010tree species (vines, shrubs and subshrubs), and the effect of planting frequency on naturalization success was much stronger for non\u2010trees than for trees. On the other hand, the naturalization success of trees, but not for non\u2010trees, increased with native range size, maximum plant height and seed spread rate. 4 Synthesis. Our results suggest that introduction history, particularly planting frequency, is an important determinant of current naturalization success of North American woody species (particularly of non\u2010trees) in Europe. Therefore, studies comparing naturalization success among species should correct for introduction history. Species characteristics are also significant determinants of naturalization success, but their effects may differ between growth forms.", "which Continent ?", "Europe", 624.0, 630.0], ["The herbivore load (abundance and species richness of herbivores) on alien plants is supposed to be one of the keys to understand the invasiveness of species. We investigate the phytophagous insect communities on cabbage plants (Brassicaceae) in Europe. 
We compare the communities of endophagous and ectophagous insects as well as of Coleoptera and Lepidoptera on native and alien cabbage plant species. Contrary to many other reports, we found no differences in the herbivore load between native and alien hosts. The majority of insect species attacked alien as well as native hosts. Across insect species, there was no difference in the patterns of host range on native and on alien hosts. Likewise the similarity of insect communities across pairs of host species was not different between natives and aliens. We conclude that the general similarity in the community patterns between native and alien cabbage plant species are due to the chemical characteristics of this plant family. All cabbage plants share glucosinolates. This may facilitate host switches from natives to aliens. Hence the presence of native congeners may influence invasiveness of alien plants.", "which Continent ?", "Europe", 246.0, 252.0], ["Aim The biotic resistance hypothesis argues that complex plant and animal communities are more resistant to invasion than simpler communities. Conversely, the biotic acceptance hypothesis states that non-native and native species richness are positively related. Most tests of these hypotheses at continental scales, typically conducted on plants, have found support for biotic acceptance. We tested these hypotheses on both amphibians and reptiles across Europe and North America. Location Continental countries in Europe and states/provinces in North America. Methods We used multiple linear regression models to determine which factors predicted successful establishment of amphibians and reptiles in Europe and North America, and additional models to determine which factors predicted native species richness. Results Successful establishment of amphibians and reptiles in Europe and reptiles in North America was positively related to native species richness. We found higher numbers of successful amphibian species in Europe than in North America. Potential evapotranspiration (PET) was positively related to non-native species richness for amphibians and reptiles in Europe and reptiles in North America. PET was also the primary factor determining native species richness for both amphibians and reptiles in Europe and North America. Main conclusions We found support for the biotic acceptance hypothesis for amphibians and reptiles in Europe and reptiles in North America, suggesting that the presence of native amphibian and reptile species generally indicates good habitat for non-native species. Our data suggest that the greater number of established amphibians per native amphibians in Europe than in North America might be explained by more introductions in Europe or climate-matching of the invaders. Areas with high native species richness should be the focus of control and management efforts, especially considering that non-native species located in areas with a high number of natives can have a large impact on biological diversity.", "which Continent ?", "North America", 467.0, 480.0], ["Widespread encroachment of the fire-intolerant species Juniperus virginiana L. into North American grasslands and savannahs where fire has largely been removed has prompted the need to identify mechanisms driving J. virginiana encroachment. We tested whether encroachment success of J. virginiana is related to plant species diversity and composi- tion across three plant communities. We predicted J. 
virginiana encroachment success would (i) decrease with increasing diversity, and (ii)J.virginiana encroachment success would be unrelated to species composition. We simulated encroachment by planting J. virginiana seedlings in tallgrass prairie, old-field grassland, and upland oak forest. We used J. virginiana survival and growth as an index of encroachment success and evaluated success as a function of plant community traits (i.e., species richness, species diversity, and species composition). Our results indicated that J. virginiana encroachment suc- cess increased with increasing plant richness and diversity. Moreover, growth and survival of J. virginiana seedlings was associated with plant species composition only in the old-field grassland and upland oak forest. These results suggest that greater plant species richness and diversity provide little resistance to J. virginiana encroachment, and the results suggest resource availability and other biotic or abiotic factors are determinants of J. virginiana encroachment success.", "which Continent ?", "North America", NaN, NaN], ["European countries in general, and England in particular, have a long history of introducing non-native fish species, but there exist no detailed studies of the introduction pathways and propagules pressure for any European country. Using the nine regions of England as a preliminary case study, the potential relationship between the occurrence in the wild of non-native freshwater fishes (from a recent audit of non-native species) and the intensity (i.e. propagule pressure) and diversity of fish imports was investigated. The main pathways of introduction were via imports of fishes for ornamental use (e.g. aquaria and garden ponds) and sport fishing, with no reported or suspected cases of ballast water or hull fouling introductions. The recorded occurrence of non-native fishes in the wild was found to be related to the time (number of years) since the decade of introduction. A shift in the establishment rate, however, was observed in the 1970s after which the ratio of established-to-introduced species declined. The number of established non-native fish species observed in the wild was found to increase significantly (P < 0\u00b705) with increasing import intensity (log10x + 1 of the numbers of fish imported for the years 2000\u20132004) and with increasing consignment diversity (log10x + 1 of the numbers of consignment types imported for the years 2000\u20132004). The implications for policy and management are discussed.", "which Continent ?", "Europe", NaN, NaN], ["Propagule pressure is recognized as a fundamental driver of freshwater fish invasions, though few studies have quantified its role. Natural experiments can be used to quantify the role of this factor relative to others in driving establishment success. An irrigation network in South Africa takes water from an inter-basin water transfer (IBWT) scheme to supply multiple small irrigation ponds. We compared fish community composition upstream, within, and downstream of the irrigation network, to show that this system is a unidirectional dispersal network with a single immigration source. We then assessed the effect of propagule pressure and biological adaptation on the colonization success of nine fish species across 30 recipient ponds of varying age. Establishing species received significantly more propagules at the source than did incidental species, while rates of establishment across the ponds displayed a saturation response to propagule pressure. 
This shows that propagule pressure is a significant driver of establishment overall. Those species that did not establish were either extremely rare at the immigration source or lacked the reproductive adaptations to breed in the ponds. The ability of all nine species to arrive at some of the ponds illustrates how long-term continuous propagule pressure from IBWT infrastructure enables range expansion of fishes. The quantitative link between propagule pressure and success and rate of population establishment confirms the driving role of this factor in fish invasion ecology.", "which Continent ?", "Africa", 284.0, 290.0], ["Introduced competitors do not share an evolutionary history that would promote coexistence mechanisms, i.e. niche partitioning. Thus, nonnative species can harm a trophically similar native species by competing with them more intensely than other native species. However, nonnative species may only be able initially to invade habitats in which resource overlap with native species is small. The nonnative slug Arion subfuscus exists in close sympatry with the native philomycid slugs Philomycus carolinianus and Megapallifera mutabilis in central Maryland forests. Resource use by most terrestrial gastropods is poorly known, but seems to suggest high dietary and macrohabitat overlap, potentially placing native gastropod species at high risk of competitive pressure from invading species. However, A. subfuscus was introduced to North America 150 years ago, supporting the possibility that A. subfuscus initially entered an empty niche. We tested the hypothesis that P. carolinianus and M. mutabilis would exhibit greater overlap in food and microhabitat use with A. subfuscus than they would with each other. We established food preferences by examining the faecal material of wild-caught slugs, distinguishing food types and quantifying them by volume on a microgrid. We determined microhabitat preferences by surveying the substrates of slugs in the field. The overlap in substrate and food resources was greater between A. subfuscus and P. carolinianus than between the two native species. However, substrate choice was correlated with local substrate availability for P. carolinianus, suggesting flexibility in habitat use, and the slight overlap in food use between A. subfuscus and P. carolinianus may be low enough to minimize competition.", "which Continent ?", "North America", 832.0, 845.0], ["The differences in phenotypic plasticity between invasive (North American) and native (German) provenances of the invasive plant Lythrum salicaria (purple loosestrife) were examined using a multivariate reaction norm approach testing two important attributes of reaction norms described by multivariate vectors of phenotypic change: the magnitude and direction of mean trait differences between environments. Data were collected for six life history traits from native and invasive plants using a split-plot design with experimentally manipulated water and nutrient levels. We found significant differences between native and invasive plants in multivariate phenotypic plasticity for comparisons between low and high water treatments within low nutrient levels, between low and high nutrient levels within high water treatments, and for comparisons that included both a water and nutrient level change. 
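The irrigation-pond study above reports establishment rates that saturate with propagule pressure. A minimal sketch of why that shape arises, assuming independent propagules and an arbitrary per-propagule success probability (both are illustrative assumptions, not estimates from the study):

```python
import numpy as np

def establishment_probability(n_propagules, p_individual=0.01):
    """Probability that at least one of n independent propagules
    establishes, given a per-propagule success probability p.
    The curve saturates toward 1 as propagule pressure grows."""
    return 1.0 - (1.0 - p_individual) ** np.asarray(n_propagules, dtype=float)

for n in (1, 10, 100, 1000):
    print(n, round(float(establishment_probability(n)), 3))
```

Under this model, extra propagules raise establishment probability steeply at low pressure but contribute almost nothing once the curve has saturated near 1, which is the qualitative pattern described above.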
The significant genotype x environment (G x E) effects support the argument that invasiveness of purple loosestrife is closely associated with the interaction of high levels of soil nutrient and flooding water regime. Our results indicate that native and invasive plants take different strategies for growth and reproduction; native plants flowered earlier and allocated more to flower production, while invasive plants exhibited an extended period of vegetative growth before flowering to increase height and allocation to clonal reproduction, which may contribute to increased fitness and invasiveness in subsequent years.", "which Continent ?", "North America", NaN, NaN], ["Although some plant traits have been linked to invasion success, the possible effects of regional factors, such as diversity, habitat suitability, and human activity are not well understood. Each of these mechanisms predicts a different pattern of distribution at the regional scale. Thus, where climate and soils are similar, predictions based on regional hypotheses for invasion success can be tested by comparisons of distributions in the source and receiving regions. Here, we analyse the native and alien geographic ranges of all 1567 plant species that have been introduced between eastern Asia and North America or have been introduced to both regions from elsewhere. The results reveal correlations between the spread of exotics and both the native species richness and transportation networks of recipient regions. This suggests that both species interactions and human-aided dispersal influence exotic distributions, although further work on the relative importance of these processes is needed.", "which Continent ?", "North America", 605.0, 618.0], ["Species that are frequently introduced to an exotic range have a high potential of becoming invasive. Besides propagule pressure, however, no other generally strong determinant of invasion success is known. Although evidence has accumulated that human affiliates (domesticates, pets, human commensals) also have high invasion success, existing studies do not distinguish whether this success can be completely explained by or is partly independent of propagule pressure. Here, we analyze both factors independently, propagule pressure and human affiliation. We also consider a third factor directly related to humans, hunting, and 17 traits on each species' population size and extent, diet, body size, and life history. Our dataset includes all 2362 freshwater fish, mammals, and birds native to Europe or North America. In contrast to most previous studies, we look at the complete invasion process consisting of (1) introduction, (2) establishment, and (3) spread. In this way, we not only consider which of the introduced species became invasive but also which species were introduced. Of the 20 factors tested, propagule pressure and human affiliation were the two strongest determinants of invasion success across all taxa and steps. This was true for multivariate analyses that account for intercorrelations among variables as well as univariate analyses, suggesting that human affiliation influenced invasion success independently of propagule pressure. Some factors affected the different steps of the invasion process antagonistically. For example, game species were much more likely to be introduced to an exotic continent than nonhunted species but tended to be less likely to establish themselves and spread. 
Such antagonistic effects show the importance of considering the complete invasion process.", "which Continent ?", "North America", 807.0, 820.0], ["Background and Aims Invasiveness of some alien plants is associated with their traits, plastic responses to environmental conditions and interpopulation differentiation. To obtain insights into the role of these processes in contributing to variation in performance, we compared congeneric species of Impatiens (Balsaminaceae) with different origin and invasion status that occur in central Europe. Methods Native I. noli-tangere and three alien species (highly invasive I. glandulifera, less invasive I. parviflora and potentially invasive I. capensis) were studied and their responses to simulated canopy shading and different nutrient and moisture levels were determined in terms of survival and seedling traits. Key Results and Conclusions Impatiens glandulifera produced high biomass in all the treatments and the control, exhibiting the \u2018Jack-and-master\u2019 strategy that makes it a strong competitor from germination onwards. The results suggest that plasticity and differentiation occurred in all the species tested and that along the continuum from plasticity to differentiation, the species at the plasticity end is the better invader. The most invasive species I. glandulifera appears to be highly plastic, whereas the other two less invasive species, I. parviflora and I. capensis, exhibited lower plasticity but rather strong population differentiation. The invasive Impatiens species were taller and exhibited higher plasticity and differentiation than native I. noli-tangere. This suggests that even within one genus, the relative importance of the phenomena contributing to invasiveness appears to be species'specific.", "which Continent ?", "Europe", 391.0, 397.0], ["One explanation for the higher abundance of invasive species in their non-native than native ranges is the escape from natural enemies. But there are few experimental studies comparing the parallel impact of enemies (or competitors and mutualists) on a plant species in its native and invaded ranges, and release from soil pathogens has been rarely investigated. Here we present evidence showing that the invasion of black cherry (Prunus serotina) into north-western Europe is facilitated by the soil community. In the native range in the USA, the soil community that develops near black cherry inhibits the establishment of neighbouring conspecifics and reduces seedling performance in the greenhouse. In contrast, in the non-native range, black cherry readily establishes in close proximity to conspecifics, and the soil community enhances the growth of its seedlings. Understanding the effects of soil organisms on plant abundance will improve our ability to predict and counteract plant invasions.", "which Continent ?", "Europe", 467.0, 473.0], ["Summary Biological invasions threaten ecosystem integrity and biodiversity, with numerous adverse implications for native flora and fauna. Established populations of two notorious freshwater invaders, the snail Tarebia granifera and the fish Pterygoplichthys disjunctivus, have been reported on three continents and are frequently predicted to be in direct competition with native species for dietary resources. Using comparisons of species' isotopic niche widths and stable isotope community metrics, we investigated whether the diets of the invasive T. granifera and P. disjunctivus overlapped with those of native species in a highly invaded river. 
We also attempted to resolve diet composition for both species, providing some insight into the original pathway of invasion in the Nseleni River, South Africa. Stable isotope metrics of the invasive species were similar to or consistently mid-range in comparison with their native counterparts, with the exception of markedly more uneven spread in isotopic space relative to indigenous species. Dietary overlap between the invasive P. disjunctivus and native fish was low, with the majority of shared food resources having overlaps of <0.26. The invasive T. granifera showed effectively no overlap with the native planorbid snail. However, there was a high degree of overlap between the two invasive species (\u02dc0.86). Bayesian mixing models indicated that detrital mangrove Barringtonia racemosa leaves contributed the largest proportion to P. disjunctivus diet (0.12\u20130.58), while the diet of T. granifera was more variable with high proportions of detrital Eichhornia crassipes (0.24\u20130.60) and Azolla filiculoides (0.09\u20130.33) as well as detrital Barringtonia racemosa leaves (0.00\u20130.30). Overall, although the invasive T. granifera and P. disjunctivus were not in direct competition for dietary resources with native species in the Nseleni River system, their spread in isotopic space suggests they are likely to restrict energy available to higher consumers in the food web. Establishment of these invasive populations in the Nseleni River is thus probably driven by access to resources unexploited or unavailable to native residents.", "which Continent ?", "Africa", 805.0, 811.0], ["Phalaris arundinacea L. (reed canary grass) is a major invader of wetlands in temperate North America; it creates monotypic stands and displaces native vegetation. In this study, the effect of plant canopies on the establishment of P. arundinacea from seed in a fen, fen-like mesocosms, and a fen restoration site was assessed. In Wingra Fen, canopies that were more resistant to P. arundinacea establishment had more species (eight or nine versus four to six species) and higher cover of Aster firmus. In mesocosms planted with Glyceria striata plus 1, 6, or 15 native species, all canopies closed rapidly and prevented P. arundinacea establishment from seed, regardless of the density of the matrix species or the number of added species. Only after gaps were created in the canopy was P. arundinacea able to establish seedlings; then, the 15-species treatment reduced establishment to 48% of that for single-species canopies. A similar experiment in the restoration site produced less cover of native plants, and P. a...", "which Continent ?", "North America", 88.0, 101.0], ["Species become invasive if they (i) are introduced to a new range, (ii) establish themselves, and (iii) spread. To address the global problems caused by invasive species, several studies investigated steps ii and iii of this invasion process. However, only one previous study looked at step i and examined the proportion of species that have been introduced beyond their native range. We extend this research by investigating all three steps for all freshwater fish, mammals, and birds native to Europe or North America. A higher proportion of European species entered North America than vice versa. However, the introduction rate from Europe to North America peaked in the late 19th century, whereas it is still rising in the other direction. 
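The stable-isotope study above quantifies pairwise dietary overlap (about 0.86 between the two invaders, mostly below 0.26 against natives). One standard proportional-similarity measure for such comparisons is Schoener's index; whether that study used exactly this metric is not stated here, and the diet proportions below are invented for illustration:

```python
import numpy as np

def schoener_overlap(p, q):
    """Schoener's overlap index for two diet-proportion vectors:
    1 = identical resource use, 0 = no shared resources."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p /= p.sum()
    q /= q.sum()
    return 1.0 - 0.5 * np.abs(p - q).sum()

# Hypothetical diet proportions over four shared food resources.
invader_a = [0.50, 0.30, 0.15, 0.05]
invader_b = [0.45, 0.35, 0.15, 0.05]
print(round(schoener_overlap(invader_a, invader_b), 2))
```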
There is no clear difference in invasion success between the two directions, so neither the imperialism dogma (that Eurasian species are exceptionally successful invaders) is supported, nor is the contradictory hypothesis that North America offers more biotic resistance to invaders than Europe because of its less disturbed and richer biota. Our results do not support the tens rule either: that approximately 10% of all introduced species establish themselves and that approximately 10% of established species spread. We find a success of approximately 50% at each step. In comparison, only approximately 5% of native vertebrates were introduced in either direction. These figures show that, once a vertebrate is introduced, it has a high potential to become invasive. Thus, it is crucial to minimize the number of species introductions to effectively control invasive vertebrates.", "which Continent ?", "Europe", 496.0, 502.0], ["Species that are frequently introduced to an exotic range have a high potential of becoming invasive. Besides propagule pressure, however, no other generally strong determinant of invasion success is known. Although evidence has accumulated that human affiliates (domesticates, pets, human commensals) also have high invasion success, existing studies do not distinguish whether this success can be completely explained by or is partly independent of propagule pressure. Here, we analyze both factors independently, propagule pressure and human affiliation. We also consider a third factor directly related to humans, hunting, and 17 traits on each species' population size and extent, diet, body size, and life history. Our dataset includes all 2362 freshwater fish, mammals, and birds native to Europe or North America. In contrast to most previous studies, we look at the complete invasion process consisting of (1) introduction, (2) establishment, and (3) spread. In this way, we not only consider which of the introduced species became invasive but also which species were introduced. Of the 20 factors tested, propagule pressure and human affiliation were the two strongest determinants of invasion success across all taxa and steps. This was true for multivariate analyses that account for intercorrelations among variables as well as univariate analyses, suggesting that human affiliation influenced invasion success independently of propagule pressure. Some factors affected the different steps of the invasion process antagonistically. For example, game species were much more likely to be introduced to an exotic continent than nonhunted species but tended to be less likely to establish themselves and spread. Such antagonistic effects show the importance of considering the complete invasion process.", "which Continent ?", "Europe", 797.0, 803.0], ["Habitat heterogeneity is predicted to profoundly influence the dynamics of indirect interspecific interactions; however, despite potentially significant consequences for multi-species persistence, this remains almost completely unexplored in large-scale natural landscapes. Moreover, how spatial habitat heterogeneity affects the persistence of interacting invasive and native species is also poorly understood. Here we show how the persistence of a native prey (water vole, Arvicola terrestris) is determined by the spatial distribution of an invasive prey (European rabbit, Oryctolagus cuniculus) and directly infer how this is defined by the mobility of a shared invasive predator (American mink, Neovison vison). 
This study uniquely demonstrates that variation in habitat connectivity in large-scale natural landscapes creates spatial asynchrony, enabling coexistence between apparent competitive native and invasive species. These findings highlight that unexpected interactions may be involved in species declines, and also that in such cases habitat heterogeneity should be considered in wildlife management decisions.", "which Continent ?", "Europe", NaN, NaN], ["To assess the validity of previously developed risk assessment schemes in the conditions of Central Europe, we tested (1) Australian weed risk assessment scheme (WRA; Pheloung et al. 1999); (2) WRA with additional analysis by Daehler et al. (2004); and (3) decision tree scheme of Reichard and Hamilton (1997) developed in North America, on a data set of 180 alien woody species commonly planted in the Czech Republic. This list included 17 invasive species, 9 naturalized but non\u2010invasive, 31 casual aliens, and 123 species not reported to escape from cultivation. The WRA model with additional analysis provided best results, rejecting 100% of invasive species, accepting 83.8% of non\u2010invasive, and recommending further 13.0% for additional analysis. Overall accuracy of the WRA model with additional analysis was 85.5%, higher than that of the basic WRA scheme (67.9%) and the Reichard\u2013Hamilton model (61.6%). Only the Reichard\u2013Hamilton scheme accepted some invaders. The probability that an accepted species will become an invader was zero for both WRA models and 3.2% for the Reichard\u2013Hamilton model. The probability that a rejected species would have been an invader was 77.3% for both WRA models and 24.0% for the Reichard\u2013Hamilton model. It is concluded that the WRA model, especially with additional analysis, appears to be a promising template for building a widely applicable system for screening out invasive plant introductions.", "which Continent ?", "Europe", 100.0, 106.0], ["Recent comprehensive data provided through the DAISIE project (www.europe-aliens.org) have facilitated the development of the first pan-European assessment of the impacts of alien plants, vertebrates, and invertebrates \u2013 in terrestrial, freshwater, and marine environments \u2013 on ecosystem services. There are 1094 species with documented ecological impacts and 1347 with economic impacts. The two taxonomic groups with the most species causing impacts are terrestrial invertebrates and terrestrial plants. The North Sea is the maritime region that suffers the most impacts. Across taxa and regions, ecological and economic impacts are highly correlated. Terrestrial invertebrates create greater economic impacts than ecological impacts, while the reverse is true for terrestrial plants. Alien species from all taxonomic groups affect \u201csupporting\u201d, \u201cprovisioning\u201d, \u201cregulating\u201d, and \u201ccultural\u201d services and interfere with human well-being. Terrestrial vertebrates are responsible for the greatest range of impacts, and these are widely distributed across Europe. Here, we present a review of the financial costs, as the first step toward calculating an estimate of the economic consequences of alien species in Europe.", "which Continent ?", "Europe", 67.0, 73.0], ["The ability to succeed in diverse conditions is a key factor allowing introduced species to successfully invade and spread across new areas. 
Two non-exclusive factors have been suggested to promote this ability: adaptive phenotypic plasticity of individuals, and the evolution of locally adapted populations in the new range. We investigated these individual and population-level factors in Polygonum cespitosum, an Asian annual that has recently become invasive in northeastern North America. We characterized individual fitness, life-history, and functional plasticity in response to two contrasting glasshouse habitat treatments (full sun/dry soil and understory shade/moist soil) in 165 genotypes sampled from nine geographically separate populations representing the range of light and soil moisture conditions the species inhabits in this region. Polygonum cespitosum genotypes from these introduced-range populations expressed broadly similar plasticity patterns. In response to full sun, dry conditions, genotypes from all populations increased photosynthetic rate, water use efficiency, and allocation to root tissues, dramatically increasing reproductive fitness compared to phenotypes expressed in simulated understory shade. Although there were subtle among-population differences in mean trait values as well as in the slope of plastic responses, these population differences did not reflect local adaptation to environmental conditions measured at the population sites of origin. Instead, certain populations expressed higher fitness in both glasshouse habitat treatments. We also compared the introduced-range populations to a single population from the native Asian range, and found that the native population had delayed phenology, limited functional plasticity, and lower fitness in both experimental environments compared with the introduced-range populations. Our results indicate that the future spread of P. cespitosum in its introduced range will likely be fueled by populations consisting of individuals able to express high fitness across diverse light and moisture conditions, rather than by the evolution of locally specialized populations.", "which Continent ?", "North America", 479.0, 492.0], ["The loss of natural enemies is a key feature of species introductions and is assumed to facilitate the increased success of species in new locales (enemy release hypothesis; ERH). The ERH is rarely tested experimentally, however, and is often assumed from observations of enemy loss. We provide a rigorous test of the link between enemy loss and enemy release by conducting observational surveys and an in situ parasitoid exclusion experiment in multiple locations in the native and introduced ranges of a gall-forming insect, Neuroterus saltatorius, which was introduced poleward, within North America. Observational surveys revealed that the gall-former experienced increased demographic success and lower parasitoid attack in the introduced range. Also, a different composition of parasitoids attacked the gall-former in the introduced range. These observational results show that enemies were lost and provide support for the ERH. Experimental results, however, revealed that, while some enemy release occurred, it was not the sole driver of demographic success. This was because background mortality in the absence of enemies was higher in the native range than in the introduced range, suggesting that factors other than parasitoids limit the species in its native range and contribute to its success in its introduced range. 
Our study demonstrates the importance of measuring the effect of enemies in the context of other community interactions in both ranges to understand what factors cause the increased demographic success of introduced species. This case also highlights that species can experience very different dynamics when introduced into ecologically similar communities.", "which Continent ?", "North America", 589.0, 602.0], ["Abstract: We present a generic scoring system that compares the impact of alien species among members of large taxonomic groups. This scoring can be used to identify the most harmful alien species so that conservation measures to ameliorate their negative effects can be prioritized. For all alien mammals in Europe, we assessed impact reports as completely as possible. Impact was classified as either environmental or economic. We subdivided each of these categories into five subcategories (environmental: impact through competition, predation, hybridization, transmission of disease, and herbivory; economic: impact on agriculture, livestock, forestry, human health, and infrastructure). We assigned all impact reports to one of these 10 categories. All categories had impact scores that ranged from zero (minimal) to five (maximal possible impact at a location). We summed all impact scores for a species to calculate \"potential impact\" scores. We obtained \"actual impact\" scores by multiplying potential impact scores by the percentage of area occupied by the respective species in Europe. Finally, we correlated species\u2019 ecological traits with the derived impact scores. Alien mammals from the orders Rodentia, Artiodactyla, and Carnivora caused the highest impact. In particular, the brown rat (Rattus norvegicus), muskrat (Ondathra zibethicus), and sika deer (Cervus nippon) had the highest overall scores. Species with a high potential environmental impact also had a strong potential economic impact. Potential impact also correlated with the distribution of a species in Europe. Ecological flexibility (measured as number of different habitats a species occupies) was strongly related to impact. The scoring system was robust to uncertainty in knowledge of impact and could be adjusted with weight scores to account for specific value systems of particular stakeholder groups (e.g., agronomists or environmentalists). Finally, the scoring system is easily applicable and adaptable to other taxonomic groups.", "which Continent ?", "Europe", 309.0, 315.0], ["1. Biological invasion theory predicts that the introduction and establishment of non-native species is positively correlated with propagule pressure. Releases of pet and aquarium fishes to inland waters has a long history; however, few studies have examined the demographic basis of their importation and incidence in the wild. 2. For the 1500 grid squares (10\u00d710 km) that make up England, data on human demographics (population density, numbers of pet shops, garden centres and fish farms), the numbers of non-native freshwater fishes (from consented licences) imported in those grid squares (i.e. propagule pressure), and the reported incidences (in a national database) of non-native fishes in the wild were used to examine spatial relationships between the occurrence of non-native fishes and the demographic factors associated with propagule pressure, as well as to test whether the demographic factors are statistically reliable predictors of the incidence of non-native fishes, and as such surrogate estimators of propagule pressure. 3. 
Principal coordinates of neighbour matrices analyses, used to generate spatially explicit models, and confirmatory factor analysis revealed that spatial distributions of non-native species in England were significantly related to human population density, garden centre density and fish farm density. Human population density and the number of fish imports were identified as the best predictors of propagule pressure. 4. Human population density is an effective surrogate estimator of non-native fish propagule pressure and can be used to predict likely areas of non-native fish introductions. In conjunction with fish movements, where available, human population densities can be used to support biological invasion monitoring programmes across Europe (and perhaps globally) and to inform management decisions as regards the prioritization of areas for the control of non-native fish introductions. \u00a9 Crown copyright 2010. Reproduced with the permission of her Majesty's Stationery Office. Published by John Wiley & Sons, Ltd.", "which Continent ?", "Europe", 1792.0, 1798.0], ["Semiarid ecosystems such as grasslands are characterized by high temporal variability in abiotic factors, which has led to suggestions that management actions may be more effective in some years than others. Here we examine this hypothesis in the context of grassland restoration, which faces two major obstacles: the contingency of native grass establishment on unpredictable precipitation, and competition from introduced species. We established replicated restoration experiments over three years at two sites in the northern Great Plains in order to examine the extent to which the success of several restoration strategies varied between sites and among years. We worked in 50-yr-old stands of crested wheatgrass (Agropyron cristatum), an introduced perennial grass that has been planted on >10 \u00d7 106 ha in western North America. Establishment of native grasses was highly contingent on local conditions, varying fourfold among years and threefold between sites. Survivorship also varied greatly and increased signi...", "which Continent ?", "North America", 820.0, 833.0], ["1 Acer platanoides (Norway maple) is an important non\u2010native invasive canopy tree in North American deciduous forests, where native species diversity and abundance are greatly reduced under its canopy. We conducted a field experiment in North American forests to compare planted seedlings of A. platanoides and Acer saccharum (sugar maple), a widespread, common native that, like A. platanoides, is shade tolerant. Over two growing seasons in three forests we compared multiple components of seedling success: damage from natural enemies, ecophysiology, growth and survival. We reasoned that equal or superior performance by A. platanoides relative to A. saccharum indicates seedling characteristics that support invasiveness, while inferior performance indicates potential barriers to invasion. 2 Acer platanoides seedlings produced more leaves and allocated more biomass to roots, A. saccharum had greater water use efficiency, and the two species exhibited similar photosynthesis and first\u2010season mortality rates. Acer platanoides had greater winter survival and earlier spring leaf emergence, but second\u2010season mortality rates were similar. 3 The success of A. platanoides seedlings was not due to escape from natural enemies, contrary to the enemy release hypothesis. 
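The generic impact-scoring system for alien European mammals summarized a few entries above is simple arithmetic: each of ten subcategories (five environmental, five economic) is scored from 0 to 5, the "potential impact" is the sum, and the "actual impact" multiplies that sum by the fraction of area the species occupies in Europe. A minimal sketch; the category keys mirror the abstract, while the example species' scores are hypothetical:

```python
# Category keys follow the abstract; the example scores are hypothetical.
ENVIRONMENTAL = ("competition", "predation", "hybridization", "disease", "herbivory")
ECONOMIC = ("agriculture", "livestock", "forestry", "human_health", "infrastructure")

def potential_impact(scores):
    """Sum of the ten 0-5 subcategory scores (0 = minimal, 5 = maximal)."""
    assert set(scores) == set(ENVIRONMENTAL + ECONOMIC)
    assert all(0 <= s <= 5 for s in scores.values())
    return sum(scores.values())

def actual_impact(scores, fraction_area_occupied):
    """Potential impact weighted by the species' occupied share of Europe."""
    return potential_impact(scores) * fraction_area_occupied

hypothetical_species = {**{c: 3 for c in ENVIRONMENTAL}, **{c: 2 for c in ECONOMIC}}
print(potential_impact(hypothetical_species))     # 25 out of a possible 50
print(actual_impact(hypothetical_species, 0.40))  # 10.0
```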
Foliar insect herbivory and disease symptoms were similarly high for both native and non\u2010native, and seedling biomass did not differ. Rather, A. platanoides compared well with A. saccharum because of its equivalent ability to photosynthesize in the low light herb layer, its higher leaf production and greater allocation to roots, and its lower winter mortality coupled with earlier spring emergence. Its only potential barrier to seedling establishment, relative to A. saccharum, was lower water use efficiency, which possibly could hinder its invasion into drier forests. 4 The spread of non\u2010native canopy trees poses an especially serious problem for native forest communities, because canopy trees strongly influence species in all forest layers. Success at reaching the canopy depends on a tree's ecology in previous life\u2010history stages, particularly as a vulnerable seedling, but little is known about seedling characteristics that promote non\u2010native tree invasion. Experimental field comparison with ecologically successful native trees provides insight into why non\u2010native trees succeed as seedlings, which is a necessary stage on their journey into the forest canopy.", "which Continent ?", "North America", NaN, NaN], ["It is commonly assumed that invasive plants grow more vigorously in their introduced than in their native range, which is then attributed to release from natural enemies or to microevolutionary changes, or both. However, few studies have tested this assumption by comparing the performance of invasive species in their native vs. introduced ranges. Here, we studied abundance, growth, reproduction, and herbivory in 10 native Chinese and 10 invasive German populations of the invasive shrub Buddleja davidii (Scrophulariaceae; butterfly bush). We found strong evidence for increased plant vigour in the introduced range: plants in invasive populations were significantly taller and had thicker stems, larger inflorescences, and heavier seeds than plants in native populations. These differences in plant performance could not be explained by a more benign climate in the introduced range. Since leaf herbivory was substantially reduced in invasive populations, our data rather suggest that escape from natural enemies, associated with increased plant growth and reproduction, contributes to the invasion success of B. davidii in Central Europe.", "which Continent ?", "Europe", 1137.0, 1143.0], ["Factors that influence the early stages of invasion can be critical to invasion success, yet are seldom studied. In particular, broad pre-adaptation to recipient climate may importantly influence early colonization success, yet few studies have explicitly examined this. I performed an experiment to determine how similarity between seed source and transplant site latitude, as a general indicator of pre-adaptation to climate, interacts with propagule pressure (100, 200 and 400 seeds/pot) to influence early colonization success of the widespread North American weed, St. John's wort Hypericum perforatum. Seeds originating from seven native European source populations were sown in pots buried in the ground in a field in western Montana. Seed source populations were either similar or divergent in latitude to the recipient transplant site. Across seed density treatments, the match between seed source and recipient latitude did not affect the proportion of pots colonized or the number of individual colonists per pot. 
In contrast, propagule pressure had a significant and positive effect on colonization. These results suggest that propagules from many climatically divergent source populations can be viable invaders.", "which Continent ?", "North America", NaN, NaN], ["Summary 1. The exotic cladoceran Daphnia lumholtzi has recently invaded freshwater systems throughout the United States. Daphnia lumholtzi possesses extravagant head spines that are longer than those found on any other North American Daphnia. These spines are effective at reducing predation from many of the predators that are native to newly invaded habitats; however, they are plastic both in nature and in laboratory cultures. The purpose of this experiment was to better understand what environmental cues induce and maintain these effective predator-deterrent spines. We conducted life-table experiments on individual D. lumholtzi grown in water conditioned with an invertebrate insect predator, Chaoborus punctipennis, and water conditioned with a vertebrate fish predator, Lepomis macrochirus. 2. Daphnia lumholtzi exhibited morphological plasticity in response to kairomones released by both predators. However, direct exposure to predator kairomones during postembryonic development did not induce long spines in D. lumholtzi. In contrast, neonates produced from individuals exposed to Lepomis kairomones had significantly longer head and tail spines than neonates produced from control and Chaoborus individuals. These results suggest that there may be a maternal, or pre-embryonic, effect of kairomone exposure on spine development in D. lumholtzi. 3. Independent of these morphological shifts, D. lumholtzi also exhibited plasticity in life history characteristics in response to predator kairomones. For example, D. lumholtzi exhibited delayed reproduction in response to Chaoborus kairomones, and significantly more individuals produced resting eggs, or ephippia, in the presence of Lepomis kairomones.", "which Continent ?", "North America", NaN, NaN], ["We surveyed naturally occurring leaf herbivory in nine invasive and nine non-invasive exotic plant species sampled in natural areas in Ontario, New York and Massachusetts, and found that invasive plants experienced, on average, 96% less leaf damage than non-invasive species. Invasive plants were also more taxonomically isolated than non-invasive plants, belonging to families with 75% fewer native North American genera. However, the relationship between taxonomic isolation at the family level and herbivory was weak. We suggest that invasive plants may possess novel phytochemicals with anti-herbivore properties in addition to allelopathic and anti-microbial characteristics. Herbivory could be employed as an easily measured predictor of the likelihood that recently introduced exotic plants may become invasive.", "which Continent ?", "North America", NaN, NaN], ["Providing of sustainability is one of the main priorities in normative documents in various countries. Factors affecting regional competitiveness is seen as close to them determining sustainability in many researches. The aim of this research was to identify and evaluate main factors of competitiveness for statistical regions of Latvia to promote sustainable development of the country, applying the complex regional competitiveness assessment system developed by the author. The analysis of the Regional Competitiveness Index (RCI) and its sub-indexes showed that each statistical region has both: factors promoting and hindering competitiveness. 
Overall the most competitive is Riga statistical region, but the last place takes Latgale statistical region. It is possible to promote equal regional development and sustainability of Latvia by implementing well-developed regional development strategy and National Action Plan. To develop such strategies, it is necessary to understand the concept of sustainable competitiveness. To evaluate sustainable competitiveness of Latvia and its regions it is necessary to develop further the methodology of regional competitiveness evaluation.", "which State ?", "Latvia", 331.0, 337.0], ["Abstract In this article we address a gap in our understanding of how household agricultural production diversity affects the diets and nutrition of young children living in rural farming communities in sub-Saharan Africa. The specific objectives of this article are to assess: (1) the association between household agricultural production diversity and child dietary diversity; and (2) the association between household agricultural production diversity and child nutritional status. We use household survey data collected from 3,040 households as part of the Realigning Agriculture for Improved Nutrition (RAIN) intervention in Zambia. The data indicate low agricultural diversity, low dietary diversity and high levels of chronic malnutrition overall in this area. We find a strong positive association between production diversity and dietary diversity among younger children aged 6\u201323 months, and significant positive associations between production diversity and height for age Z-scores and stunting among older children aged 24\u201359 months.", "which Location ?", "Zambia", 630.0, 636.0], ["Background Nutrition education is crucial for improved nutrition outcomes. However, there are no studies to the best of our knowledge that have jointly analysed the roles of nutrition education, farm production diversity and commercialization on household, women and child dietary diversity. Objective This article jointly analyses the role of nutrition education, farm production diversity and commercialization on household, women and children dietary diversity in Zimbabwe. In addition, we analyze separately the roles of crop and livestock diversity and individual agricultural practices on dietary diversity. Design Data were collected from 2,815 households randomly selected in eight districts. Negative binomial regression was used for model estimations. Results Nutrition education increased household, women, and child dietary diversity by 3, 9 and 24%, respectively. Farm production diversity had a strong and positive association with household and women dietary diversity. Crop diversification led to a 4 and 5% increase in household and women dietary diversity, respectively. Furthermore, livestock diversification and market participation were positively associated with household, women, and children dietary diversity. The cultivation of pulses and fruits increased household, women, and children dietary diversity. Vegetable production and goat rearing increased household and women dietary diversity. Conclusions Nutrition education and improving access to markets are promising strategies to improve dietary diversity at both household and individual level. 
Results demonstrate the value of promoting nutrition education; farm production diversity; small livestock; pulses, vegetables and fruits; crop-livestock integration; and market access for improved nutrition.", "which Location ?", "Zimbabwe", 467.0, 475.0], ["Households in low-income settings are vulnerable to seasonal changes in dietary diversity because of fluctuations in food availability and access. We assessed seasonal differences in household dietary diversity in Burkina Faso, and determined the extent to which household socioeconomic status and crop production diversity modify changes in dietary diversity across seasons, using data from the nationally representative 2014 Burkina Faso Continuous Multisectoral Survey (EMC). A household dietary diversity score based on nine food groups was created from household food consumption data collected during four rounds of the 2014 EMC. Plot-level crop production data, and data on household assets and education were used to create variables on crop diversity and household socioeconomic status, respectively. Analyses included data for 10,790 households for which food consumption data were available for at least one round. Accounting for repeated measurements and controlling for the complex survey design and confounding covariates using a weighted multi-level model, household dietary diversity was significantly higher during both lean seasons periods, and higher still during the harvest season as compared to the post-harvest season (mean: post-harvest: 4.76 (SE 0.04); beginning of lean: 5.13 (SE 0.05); end of lean: 5.21 (SE 0.05); harvest: 5.72 (SE 0.04)), but was not different between the beginning and the end of lean season. Seasonal differences in household dietary diversity were greater among households with higher food expenditures, greater crop production, and greater monetary value of crops sale (P<0.05). Seasonal changes in household dietary diversity in Burkina Faso may reflect nutritional differences among agricultural households, and may be modified both by households\u2019 socioeconomic status and agricultural characteristics.", "which Location ?", "Burkina Faso", 214.0, 226.0], ["Short-term traffic prediction (STTP) is one of the most critical capabilities in Intelligent Transportation Systems (ITS), which can be used to support driving decisions, alleviate traffic congestion and improve transportation efficiency. However, STTP of large-scale road networks remains challenging due to the difficulties of effectively modeling the diverse traffic patterns by high-dimensional time series. Therefore, this paper proposes a framework that involves a deep clustering method for STTP in large-scale road networks. The deep clustering method is employed to supervise the representation learning in a visualized way from the large unlabeled dataset. More specifically, to fully exploit the traffic periodicity, the raw series is first divided into a number of sub-series for triplet generation. The convolutional neural networks (CNNs) with triplet loss are utilized to extract the features of shape by transforming the series into visual images. The shape-based representations are then used to cluster road segments into groups. Thereafter, a model sharing strategy is further proposed to build recurrent NNs-based predictions through group-based models (GBMs). GBM is built for a type of traffic patterns, instead of one road segment exclusively or all road segments uniformly. 
Our framework can not only significantly reduce the number of prediction models, but also improve their generalization by virtue of being trained on more diverse examples. Furthermore, the proposed framework over a selected road network in Beijing is evaluated. Experiment results show that the deep clustering method can effectively cluster the road segments and GBM can achieve comparable prediction accuracy against the IBM with less number of prediction models.", "which Location ?", "Beijing", 1538.0, 1545.0], ["Background: Recent literature, largely from Africa, shows mixed effects of own-production on diet diversity. However, the role of own-production, relative to markets, in influencing food consumption becomes more pronounced as market integration increases. Objective: This paper investigates the relative importance of two factors - production diversity and household market integration - for the intake of a nutritious diet by women and households in rural India. Methods: Data analysis is based on primary data from an extensive agriculture-nutrition survey of 3600 Indian households that was collected in 2017. Dietary diversity scores are constructed for women and households is based on 24-hour and 7-day recall periods. Household market integration is measured as monthly household expenditure on key non-staple food groups. We measure production diversity in two ways - field-level and on-farm production diversity - in order to account for the cereal centric rice-wheat cropping system found in our study locations. The analysis is based on Ordinary Least Squares regressions where we control for a variety of village, household, and individual level covariates that affect food consumption, and village fixed effects. Robustness checks are done by way of using a Poisson regression specifications and 7-day recall period. Results: Conventional measures of field-level production diversity, like the number of crops or food groups grown, have no significant association with diet diversity. In contrast, it is on-farm production diversity (the field-level cultivation of pulses and on-farm livestock management, and kitchen gardens in the longer run) that is significantly associated with improved dietary diversity scores, thus suggesting the importance of non-staples in improving both individual and household dietary diversity. Furthermore, market purchases of non-staples like pulses and dairy products are associated with a significantly higher dietary diversity. Other significant determinants of dietary diversity include women\u2019s literacy and awareness of nutrition. These results mostly remain robust to changes in the recall period of the diet diversity measure and the nature of the empirical specification. Conclusions: This study contributes to the scarce empirical evidence related to diets in India. Additionally, our results indicate some key intervention areas - promoting livestock rearing, strengthening households\u2019 market integration (for purchase of non-staples) and increasing women\u2019s awareness about nutrition. These are more impactful than raising production diversity. ", "which Location ?", "India", 563.0, 568.0], ["ABSTRACT The role of agroecological practices in addressing food security has had limited investigation, particularly in Sub-Saharan Africa. Quasi-experimental methods were used to assess the role of agroecological practices in reducing food insecurity in smallholder households in Malawi. 
Two key practices \u2013 crop diversification and the incorporation of organic matter into soil \u2013 were examined. The quasi-experimental study of an agroecological intervention included survey data from 303 households and in-depth interviews with 33 households. The survey sampled 210 intervention households participating in the agroecological intervention, and 93 control households in neighboring villages. Regression analysis of food security indicators found that both agroecological practices significantly predicted higher food security and dietary diversity for smallholder households: the one-third of farming households who incorporated legume residue soon after harvest were almost three times more likely to be food secure than those who had not incorporated crop residue. Qualitative semi-structured interviews with 33 households identified several pathways through which crop diversification and crop residue incorporation contributed to household food security: direct consumption, agricultural income, and changes in underlying production relations. These findings provide evidence of agroecology\u2019s potential to address food insecurity while supporting sustainable food systems.", "which Location ?", "Malawi", 282.0, 288.0], ["Agriculture is fundamental to achieving nutrition goals; it provides the food, energy, and nutrients essential for human health and well-being. This paper has examined crop diversity and dietary diversity in six villages using the ICRISAT Village Level Studies (VLS) data from the Telangana and Maharashtra states of India. The study has used the data of cultivating households for constructing the crop diversity index while dietary diversity data is from the special purpose nutritional surveys conducted by ICRISAT in the six villages. The study has revealed that the cropping pattern is not uniform across the six study villages with dominance of mono cropping in Telangana villages and of mixed cropping in Maharashtra villages. The analysis has indicated a positive and significant correlation between crop diversity and household dietary diversity at the bivariate level. In multiple linear regression model, controlling for the other covariates, crop diversity has not shown a significant association with household dietary diversity. However, other covariates have shown strong association with dietary diversity. The regression results have revealed that households which cultivated minimum one food crop in a single cropping year have a significant and positive relationship with dietary diversity. From the study it can be inferred that crop diversity alone does not affect the household dietary diversity in the semi-arid tropics. Enhancing the evidence base and future research, especially in the fragile environment of semi-arid tropics, is highly recommended.", "which Location ?", "India", 317.0, 322.0], ["BACKGROUND On-farm crop species richness (CSR) may be important for maintaining the diversity and quality of diets of smallholder farming households. OBJECTIVES The objectives of this study were to 1) determine the association of CSR with the diversity and quality of household diets in Malawi and 2) assess hypothesized mechanisms for this association via both subsistence- and market-oriented pathways. METHODS Longitudinal data were assessed from nationally representative household surveys in Malawi between 2010 and 2013 (n = 3000 households). 
A household diet diversity score (DDS) and daily intake per adult equivalent of energy, protein, iron, vitamin A, and zinc were calculated from 7-d household consumption data. CSR was calculated from plot-level data on all crops cultivated during the 2009-2010 and 2012-2013 agricultural seasons in Malawi. Adjusted generalized estimating equations were used to assess the longitudinal relation of CSR with household diet quality and diversity. RESULTS CSR was positively associated with DDS (\u03b2: 0.08; 95% CI: 0.06, 0.12; P < 0.001), as well as daily intake per adult equivalent of energy (kilocalories) (\u03b2: 41.6; 95% CI: 20.9, 62.2; P < 0.001), protein (grams) (\u03b2: 1.78; 95% CI: 0.80, 2.75; P < 0.001), iron (milligrams) (\u03b2: 0.30; 95% CI: 0.16, 0.44; P < 0.001), vitamin A (micrograms of retinol activity equivalent) (\u03b2: 25.8; 95% CI: 12.7, 38.9; P < 0.001), and zinc (milligrams) (\u03b2: 0.26; 95% CI: 0.13, 0.38; P < 0.001). Neither proportion of harvest sold nor distance to nearest population center modified the relation between CSR and household diet diversity or quality (P \u2265 0.05). Households with greater CSR were more commercially oriented (least-squares mean proportion of harvest sold \u00b1 SE, highest tertile of CSR: 17.1 \u00b1 0.52; lowest tertile of CSR: 8.92 \u00b1 1.09) (P < 0.05). CONCLUSION Promoting on-farm CSR may be a beneficial strategy for simultaneously supporting enhanced diet quality and diversity while also creating opportunities for smallholder farmers to engage with markets in subsistence agricultural contexts.", "which Location ?", "Malawi", 287.0, 293.0], ["Some rural areas of Ecuador, including the Imbabura Province of the Andes Highlands, are experiencing a double burden of malnutrition where micronutrient deficiencies persist at the same time obesity is increasing as many traditional home-grown foods are being replaced with more commercially prepared convenience foods. Thus, the relationships among agricultural food production diversity (FPD), dietary diversity (DD), and household food insecurity (HFI) of the rural small holder farmers need further study. Therefore, we examined these associations in small holder farmers residing in this Province in the Andes Highlands (elevation > 2500 m). Non-pregnant maternal home managers (n = 558, x\u0304 age = 44.1, SD = 16.5 y) were interviewed regarding the number of different agricultural food crops cultivated and domestic animals raised in their family farm plots. DD was determined using the Minimum Dietary Diversity for Women Score (MDD-W) based on the number of 10 different food groups consumed, and household food insecurity (HFI) was determined using the 8-item Household Food Insecurity Experience Scale. The women reported consuming an average of 53% of their total food from what they cultivated or raised. Women with higher DD [MMD-W score \u2265 5 food groups (79% of total sample)] were on farms that cultivated a greater variety of crops (x\u0304 = 8.7 vs. 6.7), raised more animals (x\u0304 = 17.9 vs. 12.7, p < 0.05), and reported lower HFI and significantly higher intakes of energy, protein, iron, zinc, and vitamin A (all p < 0.05). Multiple regression analyses demonstrated that FPD was only modestly related to DD, which together with years of education, per capita family income, and HFI accounted for 26% of DD variance. 
In rural areas of the Imbabura Province, small holder farmers still rely heavily on consumption of self-cultivated foods, but greater diversity of crops grown in family farm plots is only weakly associated with greater DD and lower HFI among the female caretakers.", "which Location ?", "Ecuador", 20.0, 27.0], ["Abstract Objective The association between farm production diversity and dietary diversity in rural smallholder households was recently analysed. Most existing studies build on household-level dietary diversity indicators calculated from 7d food consumption recalls. Herein, this association is revisited with individual-level 24 h recall data. The robustness of the results is tested by comparing household- and individual-level estimates. The role of other factors that may influence dietary diversity, such as market access and agricultural technology, is also analysed. Design A survey of smallholder farm households was carried out in Malawi in 2014. Dietary diversity scores are calculated from 24 h recall data. Production diversity scores are calculated from farm production data covering a period of 12 months. Individual- and household-level regression models are developed and estimated. Setting Data were collected in sixteen districts of central and southern Malawi. Subjects Smallholder farm households (n 408), young children (n 519) and mothers (n 408). Results Farm production diversity is positively associated with dietary diversity. However, the estimated effects are small. Access to markets for buying food and selling farm produce and use of chemical fertilizers are shown to be more important for dietary diversity than diverse farm production. Results with household- and individual-level dietary data are very similar. Conclusions Further increasing production diversity may not be the most effective strategy to improve diets in smallholder farm households. Improving access to markets, productivity-enhancing inputs and technologies seems to be more promising.", "which Location ?", "Malawi", 640.0, 646.0], ["The linkage between agriculture and nutrition is complex and often debated in the policy discourse in India. The enigma of fastest growing economy and yet the largest home of under- and mal-nourished population takes away the sheen from the glory of economic achievements of India. In this context, the study has examined the food consumption patterns, assessed the relationship between agricultural production and dietary diversity, and analysed the impact of dietary diversity on nutritional intake. The study is based on a household level panel data from 12 villages of Bihar, Jharkhand and Odisha in eastern India. The study has shown that agricultural production diversity is a major determinant of dietary diversity which in turn has a strong effect on calorie and protein intake. The study has suggested that efforts to promote agricultural diversification will be helpful to enhance food and nutrition security in the country. Agricultural programmes and policies oriented towards reducing under-nutrition should promote diversity in agricultural production rather than emphasizing on increasing production through focusing on selected staple crops as has been observed in several states of India. 
The huge fertilizer subsidies and government procurement schemes limited to a few crops provide little incentives for farmers to diversity their production portfolio.", "which Location ?", "India", 102.0, 107.0], ["Abstract AIMS: To study the occurrence and spatial distribution of Shiga toxin-producing Escherichia coli (STEC) O157 in calves less than 1-week-old (bobby calves) born on dairy farms in the North Island of New Zealand, and to determine the association of concentration of IgG in serum, carcass weight, gender and breed with occurrence of E. coli O157 in these calves. METHODS: In total, 309 recto-anal mucosal swabs and blood samples were collected from bobby calves at two slaughter plants in the North Island of New Zealand. The address of the farm, tag number, carcass weight, gender and breed of the sampled animals were recorded. Swabs were tested for the presence of E. coli O157 using real time PCR (RT-PCR). All the farms were mapped geographically to determine the spatial distribution of farms positive for E. coli O157. K function analysis was used to test for clustering of these farms. Multiplex PCR was used for the detection of Shiga toxin 1 (stx1), Shiga toxin 2 (stx2), E. coli attaching and effacing (eae) and Enterohaemolysin (ehxA) genes in E. coli O157 isolates. Genotypes of isolates from this study (n = 10) along with human (n = 18) and bovine isolates (n = 4) obtained elsewhere were determined using bacteriophage insertion typing for stx encoding. RESULTS: Of the 309 samples, 55 (17.7%) were positive for E. coli O157 by RT-PCR and originated from 47/197 (23.8%) farms. E. coli O157 was isolated from 10 samples of which seven isolates were positive for stx2, eae and ehxA genes and the other three isolates were positive for stx1, stx2, eae and ehxA. Bacteriophage insertion typing for stx encoding revealed that 12/18 (67%) human and 13/14 (93%) bovine isolates belonged to genotypes 1 and 3. K function analysis showed some clustering of farms positive for E. coli O157. There was no association between concentration of IgG in serum, carcass weight and gender of the calves, and samples positive for E. coli O157, assessed using linear mixed-effects models. However, Jersey calves were less likely to be positive for E. coli O157 by RT-PCR than Friesian calves (p = 0.055). CONCLUSIONS: Healthy bobby calves are an asymptomatic reservoir of E. coli O157 in New Zealand and may represent an important source of infection for humans. Carriage was not associated with concentration of IgG in serum, carcass weight or gender.", "which Publication location ?", "New Zealand", 207.0, 218.0], ["The first case of coronavirus disease (COVID-19) in Finland was confirmed on 29 January 2020. No secondary cases were detected. We describe the clinical picture and laboratory findings 3\u201323 days since the first symptoms. The SARS-CoV-2/Finland/1/2020 virus strain was isolated, the genome showing a single nucleotide substitution to the reference strain from Wuhan. Neutralising antibody response appeared within 9 days along with specific IgM and IgG response, targeting particularly nucleocapsid and spike proteins.", "which has location ?", "Finland", 52.0, 59.0], ["Background & objectives: Since December 2019, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has globally affected 195 countries. In India, suspected cases were screened for SARS-CoV-2 as per the advisory of the Ministry of Health and Family Welfare. 
The objective of this study was to characterize SARS-CoV-2 sequences from three identified positive cases as on February 29, 2020. Methods: Throat swab/nasal swab specimens for a total of 881 suspected cases were screened by E gene and confirmed by RdRp (1), RdRp (2) and N gene real-time reverse transcription-polymerase chain reactions and next-generation sequencing. Phylogenetic analysis, molecular characterization and prediction of B- and T-cell epitopes for Indian SARS-CoV-2 sequences were undertaken. Results: Three cases with a travel history from Wuhan, China, were confirmed positive for SARS-CoV-2. Almost complete (29,851 nucleotides) genomes of case 1, case 3 and a fragmented genome for case 2 were obtained. The sequences of Indian SARS-CoV-2 though not identical showed high (~99.98%) identity with Wuhan seafood market pneumonia virus (accession number: NC 045512). Phylogenetic analysis showed that the Indian sequences belonged to different clusters. Predicted linear B-cell epitopes were found to be concentrated in the S1 domain of spike protein, and a conformational epitope was identified in the receptor-binding domain. The predicted T-cell epitopes showed broad human leucocyte antigen allele coverage of A and B supertypes predominant in the Indian population. Interpretation & conclusions: The two SARS-CoV-2 sequences obtained from India represent two different introductions into the country. The genetic heterogeneity is as noted globally. The identified B- and T-cell epitopes may be considered suitable for future experiments towards the design of vaccines and diagnostics. Continuous monitoring and analysis of the sequences of new cases from India and the other affected countries would be vital to understand the genetic evolution and rates of substitution of the SARS-CoV-2.", "which has location ?", "India", 147.0, 152.0], ["The ongoing COVID-19 pandemic has stimulated a search for reservoirs and species potentially involved in back and forth transmission. Studies have postulated bats as one of the key reservoirs of coronaviruses (CoVs), and different CoVs have been detected in bats. So far, CoVs have not been found in bats in Sweden and we therefore tested whether they carry CoVs. In summer 2020, we sampled a total of 77 adult bats comprising 74 Myotis daubentonii, 2 Pipistrellus pygmaeus, and 1 M. mystacinus bats in southern Sweden. Blood, saliva and feces were sampled, processed and subjected to a virus next-generation sequencing target enrichment protocol. An Alphacoronavirus was detected and sequenced from feces of a M. daubentonii adult female bat. Phylogenetic analysis of the almost complete virus genome revealed a close relationship with Finnish and Danish strains. This was the first finding of a CoV in bats in Sweden, and bats may play a role in the transmission cycle of CoVs in Sweden. Focused and targeted surveillance of CoVs in bats is warranted, with consideration of potential conflicts between public health and nature conservation required as many bat species in Europe are threatened and protected.", "which has location ?", "Sweden", 308.0, 314.0], ["Abstract Background During the current worldwide pandemic, coronavirus disease 2019 (Covid-19) was first diagnosed in Iceland at the end of February. However, data are limited on how SARS-CoV-2, the virus that causes Covid-19, enters and spreads in a population. 
Methods We targeted testing to persons living in Iceland who were at high risk for infection (mainly those who were symptomatic, had recently traveled to high-risk countries, or had contact with infected persons). We also carried out population screening using two strategies: issuing an open invitation to 10,797 persons and sending random invitations to 2283 persons. We sequenced SARS-CoV-2 from 643 samples. Results As of April 4, a total of 1221 of 9199 persons (13.3%) who were recruited for targeted testing had positive results for infection with SARS-CoV-2. Of those tested in the general population, 87 (0.8%) in the open-invitation screening and 13 (0.6%) in the random-population screening tested positive for the virus. In total, 6% of the population was screened. Most persons in the targeted-testing group who received positive tests early in the study had recently traveled internationally, in contrast to those who tested positive later in the study. Children under 10 years of age were less likely to receive a positive result than were persons 10 years of age or older, with percentages of 6.7% and 13.7%, respectively, for targeted testing; in the population screening, no child under 10 years of age had a positive result, as compared with 0.8% of those 10 years of age or older. Fewer females than males received positive results both in targeted testing (11.0% vs. 16.7%) and in population screening (0.6% vs. 0.9%). The haplotypes of the sequenced SARS-CoV-2 viruses were diverse and changed over time. The percentage of infected participants that was determined through population screening remained stable for the 20-day duration of screening. Conclusions In a population-based study in Iceland, children under 10 years of age and females had a lower incidence of SARS-CoV-2 infection than adolescents or adults and males. The proportion of infected persons identified through population screening did not change substantially during the screening period, which was consistent with a beneficial effect of containment efforts. (Funded by deCODE Genetics\u2013Amgen.)", "which has location ?", "Iceland", 118.0, 125.0], ["The ecology of ebolaviruses is still poorly understood and the role of bats in outbreaks needs to be further clarified. Straw-colored fruit bats (Eidolon helvum) are the most common fruit bats in Africa and antibodies to ebolaviruses have been documented in this species. Between December 2018 and November 2019, samples were collected at approximately monthly intervals in roosting and feeding sites from 820 bats from an Eidolon helvum colony. Dried blood spots (DBS) were tested for antibodies to Zaire, Sudan, and Bundibugyo ebolaviruses. The proportion of samples reactive with GP antigens increased significantly with age from 0\u20139/220 (0\u20134.1%) in juveniles to 26\u2013158/225 (11.6\u201370.2%) in immature adults and 10\u2013225/372 (2.7\u201360.5%) in adult bats. Antibody responses were lower in lactating females. Viral RNA was not detected in 456 swab samples collected from 152 juvenile and 214 immature adult bats. Overall, our study shows that antibody levels increase in young bats suggesting that seroconversion to Ebola or related viruses occurs in older juvenile and immature adult bats. Multiple year monitoring would be needed to confirm this trend. 
Knowledge of the periods of the year with the highest risk of Ebolavirus circulation can guide the implementation of strategies to mitigate spill-over events.", "which has location ?", "Sudan", 507.0, 512.0], ["Background & objectives: Since December 2019, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has globally affected 195 countries. In India, suspected cases were screened for SARS-CoV-2 as per the advisory of the Ministry of Health and Family Welfare. The objective of this study was to characterize SARS-CoV-2 sequences from three identified positive cases as on February 29, 2020. Methods: Throat swab/nasal swab specimens for a total of 881 suspected cases were screened by E gene and confirmed by RdRp (1), RdRp (2) and N gene real-time reverse transcription-polymerase chain reactions and next-generation sequencing. Phylogenetic analysis, molecular characterization and prediction of B- and T-cell epitopes for Indian SARS-CoV-2 sequences were undertaken. Results: Three cases with a travel history from Wuhan, China, were confirmed positive for SARS-CoV-2. Almost complete (29,851 nucleotides) genomes of case 1, case 3 and a fragmented genome for case 2 were obtained. The sequences of Indian SARS-CoV-2 though not identical showed high (~99.98%) identity with Wuhan seafood market pneumonia virus (accession number: NC 045512). Phylogenetic analysis showed that the Indian sequences belonged to different clusters. Predicted linear B-cell epitopes were found to be concentrated in the S1 domain of spike protein, and a conformational epitope was identified in the receptor-binding domain. The predicted T-cell epitopes showed broad human leucocyte antigen allele coverage of A and B supertypes predominant in the Indian population. Interpretation & conclusions: The two SARS-CoV-2 sequences obtained from India represent two different introductions into the country. The genetic heterogeneity is as noted globally. The identified B- and T-cell epitopes may be considered suitable for future experiments towards the design of vaccines and diagnostics. Continuous monitoring and analysis of the sequences of new cases from India and the other affected countries would be vital to understand the genetic evolution and rates of substitution of the SARS-CoV-2.", "which has location ?", "China", 830.0, 835.0], ["ABSTRACT This study analyses the consequences of the Covid-19 crisis on stress and well-being in Switzerland. In particular, we assess whether vulnerable groups in terms of social isolation, increased workload and limited socioeconomic resources are affected more than others. Using longitudinal data from the Swiss Household Panel, including a specific Covid-19 study, we estimate change score models to predict changes in perceived stress and life satisfaction at the end of the semi-lockdown in comparison to before the crisis. We find no general change in life satisfaction and a small decrease in stress. Yet, in line with our expectations, more vulnerable groups in terms of social isolation (young adults, Covid-19 risk group members, individuals without a partner), workload (women) and socioeconomic resources (unemployed and those who experienced a deteriorating financial situation) reported a decrease in life satisfaction. 
Stress levels decreased most strongly among high earners, workers on short-time work and the highly educated.", "which has location ?", "Switzerland", 97.0, 108.0], ["During the annual hunt in a privately owned Austrian game population in fall 2019 and 2020, 64 red deer (Cervus elaphus), 5 fallow deer (Dama dama), 6 mouflon (Ovis gmelini musimon), and 95 wild boars (Sus scrofa) were shot and sampled for PCR testing. Pools of spleen, lung, and tonsillar swabs were screened for specific nucleic acids of porcine circoviruses. Wild ruminants were additionally tested for herpesviruses and pestiviruses, and wild boars were screened for pseudorabies virus (PrV) and porcine lymphotropic herpesviruses (PLHV-1-3). PCV2 was detectable in 5% (3 of 64) of red deer and 75% (71 of 95) of wild boar samples. In addition, 24 wild boar samples (25%) but none of the ruminants tested positive for PCV3 specific nucleic acids. Herpesviruses were detected in 15 (20%) ruminant samples. Sequence analyses showed the closest relationships to fallow deer herpesvirus and elk gammaherpesvirus. In wild boars, PLHV-1 was detectable in 10 (11%), PLHV-2 in 44 (46%), and PLHV-3 in 66 (69%) of animals, including 36 double and 3 triple infections. No pestiviruses were detectable in any ruminant samples, and all wild boar samples were negative in PrV-PCR. Our data demonstrate a high prevalence of PCV2 and PLHVs in an Austrian game population, confirm the presence of PCV3 in Austrian wild boars, and indicate a low risk of spillover of notifiable animal diseases into the domestic animal population.", "which has location ?", "Austria", NaN, NaN], ["The COVID-19 pandemic has considerably impacted many people's lives. This study examined changes in subjective wellbeing between December 2019 and May 2020 and how stress appraisals and coping strategies relate to individual differences and changes in subjective wellbeing during the early stages of the pandemic. Data were collected at 4 time points from 979 individuals in Germany. Results showed that, on average, life satisfaction, positive affect, and negative affect did not change significantly between December 2019 and March 2020 but decreased between March and May 2020. Across the latter timespan, individual differences in life satisfaction were positively related to controllability appraisals, active coping, and positive reframing, and negatively related to threat and centrality appraisals and planning. Positive affect was positively related to challenge and controllable-by-self appraisals, active coping, using emotional support, and religion, and negatively related to threat appraisal and humor. Negative affect was positively related to threat and centrality appraisals, denial, substance use, and self-blame, and negatively related to controllability appraisals and emotional support. Contrary to expectations, the effects of stress appraisals and coping strategies on changes in subjective wellbeing were small and mostly nonsignificant. These findings imply that the COVID-19 pandemic represents not only a major medical and economic crisis, but also has a psychological dimension, as it can be associated with declines in key facets of people's subjective wellbeing. Psychological practitioners should address potential declines in subjective wellbeing with their clients and attempt to enhance clients' general capability to use functional stress appraisals and effective coping strategies. 
(PsycInfo Database Record (c) 2020 APA, all rights reserved).", "which has location ?", "Germany", 375.0, 382.0], ["Background & objectives: Since December 2019, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has globally affected 195 countries. In India, suspected cases were screened for SARS-CoV-2 as per the advisory of the Ministry of Health and Family Welfare. The objective of this study was to characterize SARS-CoV-2 sequences from three identified positive cases as on February 29, 2020. Methods: Throat swab/nasal swab specimens for a total of 881 suspected cases were screened by E gene and confirmed by RdRp (1), RdRp (2) and N gene real-time reverse transcription-polymerase chain reactions and next-generation sequencing. Phylogenetic analysis, molecular characterization and prediction of B- and T-cell epitopes for Indian SARS-CoV-2 sequences were undertaken. Results: Three cases with a travel history from Wuhan, China, were confirmed positive for SARS-CoV-2. Almost complete (29,851 nucleotides) genomes of case 1, case 3 and a fragmented genome for case 2 were obtained. The sequences of Indian SARS-CoV-2 though not identical showed high (~99.98%) identity with Wuhan seafood market pneumonia virus (accession number: NC 045512). Phylogenetic analysis showed that the Indian sequences belonged to different clusters. Predicted linear B-cell epitopes were found to be concentrated in the S1 domain of spike protein, and a conformational epitope was identified in the receptor-binding domain. The predicted T-cell epitopes showed broad human leucocyte antigen allele coverage of A and B supertypes predominant in the Indian population. Interpretation & conclusions: The two SARS-CoV-2 sequences obtained from India represent two different introductions into the country. The genetic heterogeneity is as noted globally. The identified B- and T-cell epitopes may be considered suitable for future experiments towards the design of vaccines and diagnostics. Continuous monitoring and analysis of the sequences of new cases from India and the other affected countries would be vital to understand the genetic evolution and rates of substitution of the SARS-CoV-2.", "which has location ?", "Wuhan, China", 823.0, 835.0], ["The ecology of ebolaviruses is still poorly understood and the role of bats in outbreaks needs to be further clarified. Straw-colored fruit bats (Eidolon helvum) are the most common fruit bats in Africa and antibodies to ebolaviruses have been documented in this species. Between December 2018 and November 2019, samples were collected at approximately monthly intervals in roosting and feeding sites from 820 bats from an Eidolon helvum colony. Dried blood spots (DBS) were tested for antibodies to Zaire, Sudan, and Bundibugyo ebolaviruses. The proportion of samples reactive with GP antigens increased significantly with age from 0\u20139/220 (0\u20134.1%) in juveniles to 26\u2013158/225 (11.6\u201370.2%) in immature adults and 10\u2013225/372 (2.7\u201360.5%) in adult bats. Antibody responses were lower in lactating females. Viral RNA was not detected in 456 swab samples collected from 152 juvenile and 214 immature adult bats. Overall, our study shows that antibody levels increase in young bats suggesting that seroconversion to Ebola or related viruses occurs in older juvenile and immature adult bats. Multiple year monitoring would be needed to confirm this trend. 
Knowledge of the periods of the year with the highest risk of Ebolavirus circulation can guide the implementation of strategies to mitigate spill-over events.", "which has location ?", "Africa", 196.0, 202.0], ["The ongoing COVID-19 pandemic has stimulated a search for reservoirs and species potentially involved in back and forth transmission. Studies have postulated bats as one of the key reservoirs of coronaviruses (CoVs), and different CoVs have been detected in bats. So far, CoVs have not been found in bats in Sweden and we therefore tested whether they carry CoVs. In summer 2020, we sampled a total of 77 adult bats comprising 74 Myotis daubentonii, 2 Pipistrellus pygmaeus, and 1 M. mystacinus bats in southern Sweden. Blood, saliva and feces were sampled, processed and subjected to a virus next-generation sequencing target enrichment protocol. An Alphacoronavirus was detected and sequenced from feces of a M. daubentonii adult female bat. Phylogenetic analysis of the almost complete virus genome revealed a close relationship with Finnish and Danish strains. This was the first finding of a CoV in bats in Sweden, and bats may play a role in the transmission cycle of CoVs in Sweden. Focused and targeted surveillance of CoVs in bats is warranted, with consideration of potential conflicts between public health and nature conservation required as many bat species in Europe are threatened and protected.", "which has location ?", "Europe", 1174.0, 1180.0], ["The ecology of ebolaviruses is still poorly understood and the role of bats in outbreaks needs to be further clarified. Straw-colored fruit bats (Eidolon helvum) are the most common fruit bats in Africa and antibodies to ebolaviruses have been documented in this species. Between December 2018 and November 2019, samples were collected at approximately monthly intervals in roosting and feeding sites from 820 bats from an Eidolon helvum colony. Dried blood spots (DBS) were tested for antibodies to Zaire, Sudan, and Bundibugyo ebolaviruses. The proportion of samples reactive with GP antigens increased significantly with age from 0\u20139/220 (0\u20134.1%) in juveniles to 26\u2013158/225 (11.6\u201370.2%) in immature adults and 10\u2013225/372 (2.7\u201360.5%) in adult bats. Antibody responses were lower in lactating females. Viral RNA was not detected in 456 swab samples collected from 152 juvenile and 214 immature adult bats. Overall, our study shows that antibody levels increase in young bats suggesting that seroconversion to Ebola or related viruses occurs in older juvenile and immature adult bats. Multiple year monitoring would be needed to confirm this trend. Knowledge of the periods of the year with the highest risk of Ebolavirus circulation can guide the implementation of strategies to mitigate spill-over events.", "which has location ?", "Zaire", 500.0, 505.0], ["We have studied the elastic and inelastic light scattering of twelve lunar surface rocks and eleven lunar soil samples from Apollo 11, 12, 14, and 15, over the range 20-2000 cm-1. The phonons occurring in this frequency region have been associated with the different chemical constituents and are used to determine the mineralogical abundances by comparison with the spectra of a wide variety of terrestrial minerals and rocks. Kramers-Kronig analyses of the infrared reflectance spectra provided the dielectric dispersion (\u03b5' and \u03b5\") and the optical constants (n and k). 
The dielectric constants at \u2248 1011 Hz have been obtained for each sample and are compared with the values reported in the 10-10 Hz range. The emissivity peak at the Christianson frequencies for all the lunar samples lie within the range 1195-1250 cm-1; such values are characteristic of terrestrial basalts. The Raman light scattering spectra provided investigation of small individual grains or inclusions and gave unambiguous interpretation of some of the characteristic mineralogical components.", "which Rock type ?", "Basalt", NaN, NaN], ["Abstract Granitic lunar samples largely consist of granophyric intergrowths of silica and K-feldspar. The identification of the silica polymorph present in the granophyre can clarify the petrogenesis of the lunar granites. The presence of tridymite or cristobalite would indicate rapid crystallization at high temperature. Quartz would indicate crystallization at low temperature or perhaps intrusive, slow crystallization, allowing for the orderly transformation from high-temperature silica polymorphs (tridymite or cristobalite). We identify the silica polymorphs present in four granitic lunar samples from the Apollo 12 regolith using laser Raman spectroscopy. Typically, lunar silica occurs with a hackle fracture pattern. We did an initial density calculation on the hackle fracture pattern of quartz and determined that the volume of quartz and fracture space is consistent with a molar volume contraction from tridymite or cristobalite, both of which are less dense than quartz. Moreover, we analyzed the silica in the granitic fragments from Apollo 12 by electron-probe microanalysis and found it contains up to 0.7 wt% TiO2, consistent with initial formation as the high-temperature silica polymorphs, which have more open crystal structures that can more readily accommodate cations other than Si. The silica in Apollo 12 granitic samples crystallized rapidly as tridymite or cristobalite, consistent with extrusive volcanism. The silica then inverted to quartz at a later time, causing it to contract and fracture. A hackle fracture pattern is common in silica occurring in extrusive lunar lithologies (e.g., mare basalt). The extrusive nature of these granitic samples makes them excellent candidates to be similar to the rocks that compose positive relief silicic features such as the Gruithuisen Domes.", "which Rock type ?", "Granite", NaN, NaN], ["Quantification of mineral proportions in rocks and soils by Raman spectroscopy on a planetary surface is best done by taking many narrow-beam spectra from different locations on the rock or soil, with each spectrum yielding peaks from only one or two minerals. The proportion of each mineral in the rock or soil can then be determined from the fraction of the spectra that contain its peaks, in analogy with the standard petrographic technique of point counting. The method can also be used for nondestructive laboratory characterization of rock samples. Although Raman peaks for different minerals seldom overlap each other, it is impractical to obtain proportions of constituent minerals by Raman spectroscopy through analysis of peak intensities in a spectrum obtained by broad-beam sensing of a representative area of the target material. 
That is because the Raman signal strength produced by a mineral in a rock or soil is not related in a simple way through the Raman scattering cross section of that mineral to its proportion in the rock, and the signal-to-noise ratio of a Raman spectrum is poor when a sample is stimulated by a low-power laser beam of broad diameter. Results obtained by the Raman point-count method are demonstrated for a lunar thin section (14161,7062) and a rock fragment (15273,7039). Major minerals (plagioclase and pyroxene), minor minerals (cristobalite and K-feldspar), and accessory minerals (whitlockite, apatite, and baddeleyite) were easily identified. Identification of the rock types, KREEP basalt or melt rock, from the 100-location spectra was straightforward.", "which Rock type ?", "KREEP basalt", 1525.0, 1537.0], ["The restrictive measures implemented in response to the COVID-19 pandemic have triggered sudden massive changes to travel behaviors of people all around the world. This study examines the individual mobility patterns for all transport modes (walk, bicycle, motorcycle, car driven alone, car driven in company, bus, subway, tram, train, airplane) before and during the restrictions adopted in ten countries on six continents: Australia, Brazil, China, Ghana, India, Iran, Italy, Norway, South Africa and the United States. This cross-country study also aims at understanding the predictors of protective behaviors related to the transport sector and COVID-19. Findings hinge upon an online survey conducted in May 2020 (N = 9,394). The empirical results quantify tremendous disruptions for both commuting and non-commuting travels, highlighting substantial reductions in the frequency of all types of trips and use of all modes. In terms of potential virus spread, airplanes and buses are perceived to be the riskiest transport modes, while avoidance of public transport is consistently found across the countries. According to the Protection Motivation Theory, the study sheds new light on the fact that two indicators, namely income inequality, expressed as Gini index, and the reported number of deaths due to COVID-19 per 100,000 inhabitants, aggravate respondents\u2019 perceptions. This research indicates that socio-economic inequality and morbidity are not only related to actual health risks, as well documented in the relevant literature, but also to the perceived risks. These findings document the global impact of the COVID-19 crisis as well as provide guidance for transportation practitioners in developing future strategies.", "which Country ?", "Ghana", 451.0, 456.0], ["The restrictive measures implemented in response to the COVID-19 pandemic have triggered sudden massive changes to travel behaviors of people all around the world. This study examines the individual mobility patterns for all transport modes (walk, bicycle, motorcycle, car driven alone, car driven in company, bus, subway, tram, train, airplane) before and during the restrictions adopted in ten countries on six continents: Australia, Brazil, China, Ghana, India, Iran, Italy, Norway, South Africa and the United States. This cross-country study also aims at understanding the predictors of protective behaviors related to the transport sector and COVID-19. Findings hinge upon an online survey conducted in May 2020 (N = 9,394). The empirical results quantify tremendous disruptions for both commuting and non-commuting travels, highlighting substantial reductions in the frequency of all types of trips and use of all modes. 
In terms of potential virus spread, airplanes and buses are perceived to be the riskiest transport modes, while avoidance of public transport is consistently found across the countries. According to the Protection Motivation Theory, the study sheds new light on the fact that two indicators, namely income inequality, expressed as Gini index, and the reported number of deaths due to COVID-19 per 100,000 inhabitants, aggravate respondents\u2019 perceptions. This research indicates that socio-economic inequality and morbidity are not only related to actual health risks, as well documented in the relevant literature, but also to the perceived risks. These findings document the global impact of the COVID-19 crisis as well as provide guidance for transportation practitioners in developing future strategies.", "which Country ?", "Italy", 471.0, 476.0], ["On Saturday, 13 January 2018, residents of Hawaii received a chilling message through their smartphones. It read, in all caps, BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL. The message was mistakenly sent, but many residents lived in a threatened state of mind for the 38 minutes it took before a retraction was made. This study is based on a survey of 418 people who experienced the alert, recollecting their immediate responses, including how they attempted to verify the alert and how they used their mobile devices and social media for expressive interactions during the alert period. With the ongoing testing in the United States of nationwide Wireless Emergency Alerts, along with similar expansions of these systems in other countries, the event in Hawaii serves to illuminate how people understand and respond to mobile-based alerts. It shows the extreme speed that information\u2014including misinformation\u2014can flow in an emergency, and, for many, expressive communication affects people\u2019s reactions.", "which Country ?", "Hawaii", 43.0, 49.0], ["The restrictive measures implemented in response to the COVID-19 pandemic have triggered sudden massive changes to travel behaviors of people all around the world. This study examines the individual mobility patterns for all transport modes (walk, bicycle, motorcycle, car driven alone, car driven in company, bus, subway, tram, train, airplane) before and during the restrictions adopted in ten countries on six continents: Australia, Brazil, China, Ghana, India, Iran, Italy, Norway, South Africa and the United States. This cross-country study also aims at understanding the predictors of protective behaviors related to the transport sector and COVID-19. Findings hinge upon an online survey conducted in May 2020 (N = 9,394). The empirical results quantify tremendous disruptions for both commuting and non-commuting travels, highlighting substantial reductions in the frequency of all types of trips and use of all modes. In terms of potential virus spread, airplanes and buses are perceived to be the riskiest transport modes, while avoidance of public transport is consistently found across the countries. According to the Protection Motivation Theory, the study sheds new light on the fact that two indicators, namely income inequality, expressed as Gini index, and the reported number of deaths due to COVID-19 per 100,000 inhabitants, aggravate respondents\u2019 perceptions. This research indicates that socio-economic inequality and morbidity are not only related to actual health risks, as well documented in the relevant literature, but also to the perceived risks. 
These findings document the global impact of the COVID-19 crisis as well as provide guidance for transportation practitioners in developing future strategies.", "which Country ?", "China", 444.0, 449.0], ["In recent times, social media has been increasingly playing a critical role in response actions following natural catastrophes. From facilitating the recruitment of volunteers during an earthquake to supporting emotional recovery after a hurricane, social media has demonstrated its power in serving as an effective disaster response platform. Based on a case study of Thailand flooding in 2011 \u2013 one of the worst flooding disasters in more than 50 years that left the country severely impaired \u2013 this paper provides an in\u2010depth understanding on the emergent roles of social media in disaster response. Employing the perspective of boundary object, we shed light on how different boundary spanning competences of social media emerged in practice to facilitate cross\u2010boundary response actions during a disaster, with an aim to promote further research in this area. We conclude this paper with guidelines for response agencies and impacted communities to deploy social media for future disaster response.", "which Country ?", "Thailand", 369.0, 377.0], ["In this paper, we examine the emerging use of ICT in social phenomena such as natural disasters. Researchers have acknowledged that a community possesses the capacity to manage the challenges in crisis response on its own. However, extant IS studies focus predominantly on IS use from the crisis response agency\u2019s perspective, which undermines communities\u2019 role. By adopting an empowerment perspective, we focus on understanding how social media empowers communities during crisis response. As such, we present a qualitative case study of the 2011 Thailand flooding. Using an interpretive approach, we show how social media can empower the community from three dimensions of empowerment process (structural, psychological, and resource empowerment) to achieve collective participation, shared identification, and collaborative control in the community. We make two contributions: 1) we explore an emerging social consequence of ICT by illustrating the roles of social media in empowering communities when responding to crises, and 2) we address the literature gap in empowerment by elucidating the actualization process of empowerment that social media as a mediating structure enables.", "which Country ?", "Thailand", 548.0, 556.0], ["The devastating 2011 Great East Japan Earthquake made people aware of the importance of Information and Communication Technology (ICT) for sustaining life during and soon after a disaster. The difficulty in recovering information systems, because of the failure of ICT, hindered all recovery processes. The paper explores ways to make information systems resilient in disaster situations. Resilience is defined as quickly regaining essential capabilities to perform critical post disaster missions and to smoothly return to fully stable operations thereafter. From case studies and the literature, we propose that a frugal IS design that allows creative responses will make information systems resilient in disaster situations. A three-stage model based on a chronological sequence was employed in structuring the proposed design principles.", "which Country ?", "Japan", 32.0, 37.0], ["In this paper, we examine the emerging use of ICT in social phenomena such as natural disasters. 
["Abstract Background The COVID-19 pandemic and the measures taken to combat it led to severe constraints for various areas of life, including mobility. To study the effects of this disruptive situation on the mobility behaviour of entire subgroups, and how they shape their mobility in reaction to the special circumstances, can help to better understand, how people react to external changes. Methodology Aim of the study presented in this article was to investigate to what extent, how and in what areas mobility behaviour has changed during the outbreak of SARS-CoV-2 in Germany. In addition, a focus was put on the comparison of federal states with and without lockdown in order to investigate a possible contribution of this measure to changes in mobility. We asked respondents via an online survey about their trip purposes and trip frequency, their choice of transport mode and the reasons for choosing it in the context of the COVID-19 crisis. For the analyses presented in this paper, we used the data of 4157 survey participants (2512 without lockdown, 1645 with lockdown). Results The data confirmed a profound impact on the mobility behaviour with a shift away from public transport and increases in car usage, walking and cycling. Comparisons of federal states with and without lockdown revealed only isolated differences. It seems that, even if the lockdown had some minor effects, its role in the observed behavioural changes was minimal.", "which Country ?", "Germany", 573.0, 580.0], ["This study explores the role of social media in social change by analyzing Twitter data collected during the 2011 Egypt Revolution. Particular attention is paid to the notion of collective sense making, which is considered a critical aspect for the emergence of collective action for social change. We suggest that collective sense making through social media can be conceptualized as human-machine collaborative information processing that involves an interplay of signs, Twitter grammar, humans, and social technologies. We focus on the occurrences of hashtags among a high volume of tweets to study the collective sense-making phenomena of milling and keynoting. A quantitative Markov switching analysis is performed to understand how the hashtag frequencies vary over time, suggesting structural changes that depict the two phenomena. 
We further explore different hashtags through a qualitative content analysis and find that, although many hashtags were used as symbolic anchors to funnel online users' attention to the Egypt Revolution, other hashtags were used as part of tweet sentences to share changing situational information. We suggest that hashtags functioned as a means to collect information and maintain situational awareness during the unstable political situation of the Egypt Revolution.", "which Country ?", "Egypt", 114.0, 119.0], ["ABSTRACT Psychodrama was first introduced in the Korean literature in 1972, but its generalization to college students did not occur until the 1990s. Despite findings from psychodrama studies with Korean college students supporting psychodrama as effective for developing and maintaining good interpersonal relationships, as well as decreasing anxiety and stress, it is still underutilized in South Korea. Accordingly, the current study looked at implementing a psychodrama program in South Korean universities. The positive results of the program implementation suggest that psychodrama is a useful technique for improving Korean college students\u2019 general development and mental stability including secure attachment.", "which Country ?", "South Korea", 393.0, 404.0], ["The restrictive measures implemented in response to the COVID-19 pandemic have triggered sudden massive changes to travel behaviors of people all around the world. This study examines the individual mobility patterns for all transport modes (walk, bicycle, motorcycle, car driven alone, car driven in company, bus, subway, tram, train, airplane) before and during the restrictions adopted in ten countries on six continents: Australia, Brazil, China, Ghana, India, Iran, Italy, Norway, South Africa and the United States. This cross-country study also aims at understanding the predictors of protective behaviors related to the transport sector and COVID-19. Findings hinge upon an online survey conducted in May 2020 (N = 9,394). The empirical results quantify tremendous disruptions for both commuting and non-commuting travels, highlighting substantial reductions in the frequency of all types of trips and use of all modes. In terms of potential virus spread, airplanes and buses are perceived to be the riskiest transport modes, while avoidance of public transport is consistently found across the countries. According to the Protection Motivation Theory, the study sheds new light on the fact that two indicators, namely income inequality, expressed as Gini index, and the reported number of deaths due to COVID-19 per 100,000 inhabitants, aggravate respondents\u2019 perceptions. This research indicates that socio-economic inequality and morbidity are not only related to actual health risks, as well documented in the relevant literature, but also to the perceived risks. These findings document the global impact of the COVID-19 crisis as well as provide guidance for transportation practitioners in developing future strategies.", "which Country ?", "Iran", 465.0, 469.0], ["Abstract Background As a reaction to the novel coronavirus disease (COVID-19), countries around the globe have implemented various measures to reduce the spread of the virus. The transportation sector is particularly affected by the pandemic situation. 
The current study aims to contribute to the empirical knowledge regarding the effects of the coronavirus situation on the mobility of people by (1) broadening the perspective to the mobility rural area\u2019s residents and (2) providing subjective data concerning the perceived changes of affected persons\u2019 mobility practices, as these two aspects have scarcely been considered in research so far. Methods To address these research gaps, a mixed-methods study was conducted that integrates a qualitative telephone interview study ( N = 15) and a quantitative household survey ( N = 301). The rural district of Altmarkkreis Salzwedel in Northern Germany was chosen as a model region. Results The results provide in-depth insights into the changing mobility practices of residents of a rural area during the legal restrictions to stem the spread of the virus. A high share of respondents (62.6%) experienced no changes in their mobility behavior due to the COVID-19 pandemic situation. However, nearly one third of trips were also cancelled overall. A modal shift was observed towards the reduction of trips by car and bus, and an increase of trips by bike. The share of trips by foot was unchanged. The majority of respondents did not predict strong long-term effects of the corona pandemic on their mobility behavior.", "which Country ?", "Germany", 893.0, 900.0], ["The restrictive measures implemented in response to the COVID-19 pandemic have triggered sudden massive changes to travel behaviors of people all around the world. This study examines the individual mobility patterns for all transport modes (walk, bicycle, motorcycle, car driven alone, car driven in company, bus, subway, tram, train, airplane) before and during the restrictions adopted in ten countries on six continents: Australia, Brazil, China, Ghana, India, Iran, Italy, Norway, South Africa and the United States. This cross-country study also aims at understanding the predictors of protective behaviors related to the transport sector and COVID-19. Findings hinge upon an online survey conducted in May 2020 (N = 9,394). The empirical results quantify tremendous disruptions for both commuting and non-commuting travels, highlighting substantial reductions in the frequency of all types of trips and use of all modes. In terms of potential virus spread, airplanes and buses are perceived to be the riskiest transport modes, while avoidance of public transport is consistently found across the countries. According to the Protection Motivation Theory, the study sheds new light on the fact that two indicators, namely income inequality, expressed as Gini index, and the reported number of deaths due to COVID-19 per 100,000 inhabitants, aggravate respondents\u2019 perceptions. This research indicates that socio-economic inequality and morbidity are not only related to actual health risks, as well documented in the relevant literature, but also to the perceived risks. These findings document the global impact of the COVID-19 crisis as well as provide guidance for transportation practitioners in developing future strategies.", "which Country ?", "South Africa", 486.0, 498.0], ["Abstract Introduction The first wave of COVID-19 pandemic period has drastically changed people\u2019s lives all over the world. To cope with the disruption, digital solutions have become more popular. However, the ability to adopt digitalised alternatives is different across socio-economic and socio-demographic groups. 
Objective This study investigates how individuals have changed their activity-travel patterns and internet usage during the first wave of the COVID-19 pandemic period, and which of these changes may be kept. Methods An empirical data collection was deployed through online forms. 781 responses from different countries (Italy, Sweden, India and others) have been collected, and a series of multivariate analyses was carried out. Two linear regression models are presented, related to the change of travel activities and internet usage, before and during the pandemic period. Furthermore, a binary regression model is used to examine the likelihood of the respondents to adopt and keep their behaviours beyond the pandemic period. Results The results show that the possibility to change the behaviour matter. External restrictions and personal characteristics are the driving factors of the reduction in ones' daily trips. However, the estimation results do not show a strong correlation between the countries' restriction policy and the respondents' likelihood to adopt the new and online-based behaviours for any of the activities after the restriction period. Conclusion The acceptance and long-term adoption of the online alternatives for activities are correlated with the respondents' personality and socio-demographic group, highlighting the importance of promoting alternatives as a part of longer-term behavioural and lifestyle changes.", "which Country ?", "Italy", 637.0, 642.0], ["Abstract Introduction The first wave of COVID-19 pandemic period has drastically changed people\u2019s lives all over the world. To cope with the disruption, digital solutions have become more popular. However, the ability to adopt digitalised alternatives is different across socio-economic and socio-demographic groups. Objective This study investigates how individuals have changed their activity-travel patterns and internet usage during the first wave of the COVID-19 pandemic period, and which of these changes may be kept. Methods An empirical data collection was deployed through online forms. 781 responses from different countries (Italy, Sweden, India and others) have been collected, and a series of multivariate analyses was carried out. Two linear regression models are presented, related to the change of travel activities and internet usage, before and during the pandemic period. Furthermore, a binary regression model is used to examine the likelihood of the respondents to adopt and keep their behaviours beyond the pandemic period. Results The results show that the possibility to change the behaviour matter. External restrictions and personal characteristics are the driving factors of the reduction in ones' daily trips. However, the estimation results do not show a strong correlation between the countries' restriction policy and the respondents' likelihood to adopt the new and online-based behaviours for any of the activities after the restriction period. Conclusion The acceptance and long-term adoption of the online alternatives for activities are correlated with the respondents' personality and socio-demographic group, highlighting the importance of promoting alternatives as a part of longer-term behavioural and lifestyle changes.", "which Country ?", "Sweden", 644.0, 650.0], ["The restrictive measures implemented in response to the COVID-19 pandemic have triggered sudden massive changes to travel behaviors of people all around the world. 
This study examines the individual mobility patterns for all transport modes (walk, bicycle, motorcycle, car driven alone, car driven in company, bus, subway, tram, train, airplane) before and during the restrictions adopted in ten countries on six continents: Australia, Brazil, China, Ghana, India, Iran, Italy, Norway, South Africa and the United States. This cross-country study also aims at understanding the predictors of protective behaviors related to the transport sector and COVID-19. Findings hinge upon an online survey conducted in May 2020 (N = 9,394). The empirical results quantify tremendous disruptions for both commuting and non-commuting travels, highlighting substantial reductions in the frequency of all types of trips and use of all modes. In terms of potential virus spread, airplanes and buses are perceived to be the riskiest transport modes, while avoidance of public transport is consistently found across the countries. According to the Protection Motivation Theory, the study sheds new light on the fact that two indicators, namely income inequality, expressed as Gini index, and the reported number of deaths due to COVID-19 per 100,000 inhabitants, aggravate respondents\u2019 perceptions. This research indicates that socio-economic inequality and morbidity are not only related to actual health risks, as well documented in the relevant literature, but also to the perceived risks. These findings document the global impact of the COVID-19 crisis as well as provide guidance for transportation practitioners in developing future strategies.", "which Country ?", "Brazil", 436.0, 442.0], ["The restrictive measures implemented in response to the COVID-19 pandemic have triggered sudden massive changes to travel behaviors of people all around the world. This study examines the individual mobility patterns for all transport modes (walk, bicycle, motorcycle, car driven alone, car driven in company, bus, subway, tram, train, airplane) before and during the restrictions adopted in ten countries on six continents: Australia, Brazil, China, Ghana, India, Iran, Italy, Norway, South Africa and the United States. This cross-country study also aims at understanding the predictors of protective behaviors related to the transport sector and COVID-19. Findings hinge upon an online survey conducted in May 2020 (N = 9,394). The empirical results quantify tremendous disruptions for both commuting and non-commuting travels, highlighting substantial reductions in the frequency of all types of trips and use of all modes. In terms of potential virus spread, airplanes and buses are perceived to be the riskiest transport modes, while avoidance of public transport is consistently found across the countries. According to the Protection Motivation Theory, the study sheds new light on the fact that two indicators, namely income inequality, expressed as Gini index, and the reported number of deaths due to COVID-19 per 100,000 inhabitants, aggravate respondents\u2019 perceptions. This research indicates that socio-economic inequality and morbidity are not only related to actual health risks, as well documented in the relevant literature, but also to the perceived risks. 
These findings document the global impact of the COVID-19 crisis as well as provide guidance for transportation practitioners in developing future strategies.", "which Country ?", "Norway", 478.0, 484.0], ["Since the early 1990s, Argentinean grain production underwent a dramatic increase in grains production (from 26 million tons in 1988/89 to over 75 million tons in 2002/2003). Several factors contributed to this \"revolution,\" but probably one of the most important was the introduction of new genetic modification (GM) technologies, specifically herbicide-tolerant soybeans. This article analyses this process, reporting on the economic benefits accruing to producers and other participating actors as well as some of the environmental and social impacts that could be associated with the introduction of the new technologies. In doing so, it lends attention to the synergies between GM soybeans and reduced-tillage technologies and also explores some of the institutional factors that shed light on the success of this case, including aspects such as the early availability of a reliable biosafety mechanism and a special intellectual property rights (IPR) situation. In its concluding comments, this article also posts a number of questions about the replicability of the experience and some pending policy issues regarding the future exploitation of GM technologies in Argentina.", "which [7]Region ?", "Argentina", 1171.0, 1180.0], ["This article analyzes adoption and impacts of Bt cotton in Argentina against the background of monopoly pricing. Based on survey data, it is shown that the technology significantly reduces insecticide applications and increases yields; however, these advantages are curbed by the high price charged for genetically modified seeds. Using the contingent valuation method, it is shown that farmers' average willingness to pay is less than half the actual technology price. A lower price would not only increase benefits for growers, but could also multiply company profits, thus, resulting in a Pareto improvement. Implications of the sub-optimal pricing strategy are discussed.", "which [7]Region ?", "Argentina", 59.0, 68.0], ["The study analyzes ex ante the socioeconomic effects of transgenic virus resistance technology for potatoes in Mexico. All groups of potato growers could significantly gain from the transgenic varieties to be introduced, and the technology could even improve income distribution. Nonetheless, public support is needed to fully harness this potential. Different policy alternatives are tested within scenario calculations in order to supply information on how to optimize the technological outcome, both from an efficiency and an equity perspective. Transgenic disease resistance is a promising technology for developing countries. Providing these countries with better access to biotechnology should be given higher political priority.", "which [7]Region ?", "Mexico", 111.0, 117.0], ["Summary In the present paper we build a bio-economic model to estimate the impact of a biotechnology innovation in EU agriculture. Transgenic Bt maize offers the potential to efficiently control corn borers, that cause economically important losses in maize growing in Spain. Since 1998, Syngenta has commercialised the variety Compa CB, equivalent to an annual maize area of about 25,000 ha. 
During the six-year period 1998-2003 a total welfare gain of \u20ac15.5 million is estimated from the adoption of Bt maize, of which Spanish farmers captured two thirds, the rest accruing to the seed industry.", "which [7]Region ?", "Spain", 269.0, 274.0], ["In this study, the authors propose a mobility-based clustering (MBC) protocol for wireless sensor networks with mobile nodes. In the proposed clustering protocol, a sensor node elects itself as a cluster-head based on its residual energy and mobility. A non-cluster-head node aims at its link stability with a cluster head during clustering according to the estimated connection time. Each non-cluster-head node is allocated a timeslot for data transmission in ascending order in a time division multiple address (TDMA) schedule based on the estimated connection time. In the steady-state phase, a sensor node transmits its sensed data in its timeslot and broadcasts a joint request message to join in a new cluster and avoid more packet loss when it has lost or is going to lose its connection with its cluster head. Simulation results show that the MBC protocol can reduce the packet loss by 25% compared with the cluster-based routing (CBR) protocol and 50% compared with the low-energy adaptive clustering hierarchy-mobile (LEACH-mobile) protocol. Moreover, it outperforms both the CBR protocol and the LEACH-mobile protocol in terms of average energy consumption and average control overhead, and can better adapt to a highly mobile environment.", "which CH Properties Mobility ?", "Mobile", 112.0, 118.0], ["Abstract Belarusian occupies a very specific position among the Slavic languages. In spite of the fact that it can be characterized as a \u201cmiddle-sized\u201d Slavic language, the contiguity and all-embracing rivalry with the \u201cstrong\u201d Russian language make it into an eternally \u201csmall\u201d language. The modern Belarusian standard language was elaborated in the early twentieth century. There was a brief but fruitful period of its promotion in the 1920s, but then Russification became a relevant factor in its development for the following decades. Political factors have always held great significance in the development of Belarusian. The linguistic affinity of Belarusian and Russian in combination with other factors is an insurmountable obstacle for the spread of the Belarusian language. On the one hand, Russian speakers living in Belarus, as a rule, understand Belarusian but do not make the effort to acquire it as an active medium of communication. On the other hand, Belarusian speakers proficient in Russian do not have enough motivation to use Belarusian routinely, on account of the pervading presence of Russian in Belarusian society. As a result, they often lose their Belarusian language skills. There is considerable dissent as to the perspectives of Belarusian. Though it is the \u201ctitular\u201d language, which determines its importance in Belarus, it is also a minority language and thus faces the corresponding challenges.", "which Population under analysis ?", "Belarus", 828.0, 835.0], ["Abstract This article reports findings from a survey on language usage in Belarus, which encompasses bilingual Belarusian and Russian. First, the distribution of language usage is discussed. Then the dependency of language usage on some sociocultural conditions is explored. Finally, the changes in language usage over three generations are discussed. 
We find that a mixed Belarusian\u2013Russian form of speech is widely used in the cities studied and that it is spoken across all educational levels. However, it seems to be predominantly utilized in informal communication, especially among friends and family members, leaving Russian and Belarusian to more formal or public venues.", "which Population under analysis ?", "Belarus", 74.0, 81.0], ["Previous work on over-education has assumed homogeneity of workers and jobs. Relaxing these assumptions, we find that over-educated workers have lower education credentials than matched graduates. Among the over-educated graduates we distinguish between the apparently over-educated workers, who have similar unobserved skills as matched graduates, and the genuinely over-educated workers, who have a much lower skill endowment. Over-education is associated with a pay penalty of 5%-11% for apparently over-educated workers compared with matched graduates and of 22%-26% for the genuinely over-educated. Over-education originates from the lack of skills of graduates. This should be taken into consideration in the current debate on the future of higher education in the UK. Copyright The London School of Economics and Political Science 2003.", "which Population under analysis ?", "Graduates", 186.0, 195.0], ["Summary. During the early 1990s the proportion of a cohort entering higher education in the UK doubled over a short period of time. The paper investigates the effect of the expansion on graduates\u2019 early labour market attainment, focusing on overeducation. We define overeducation by combining occupation codes and a self\u2010reported measure for the appropriateness of the match between qualification and the job. We therefore define three groups of graduates: matched, apparently overeducated and genuinely overeducated. This measure is well correlated with alternative definitions of overeducation. Comparing pre\u2010 and post\u2010expansion cohorts of graduates, we find with this measure that the proportion of overeducated graduates has doubled, even though overeducation wage penalties have remained stable. We do not find that type of institution affects the probability of genuine overeducation. Apparently overeducated graduates are mostly indistinguishable from matched graduates, whereas genuinely overeducated graduates principally lack non\u2010academic skills and suffer a large wage penalty. Individual unobserved heterogeneity differs between the three groups of graduates but controlling for it does not alter these conclusions.", "which Population under analysis ?", "Graduates", 186.0, 195.0], ["This study employs national survey data to estimate the extent of overeducation in the U.S. labor force and its impact on a variety of worker attitudes. Estimates are made of the extent of overeducation and its distribution among different categories of workers, according to sex, race, age, and class background. The effects of overeducation are examined in four areas of worker attitudes: job satisfaction, political leftism, political alienation, and stratification ideology. Evidence is found of significant effects of overeducation on job satisfaction and several aspects of stratification ideology. The magnitude of these effects is small, however, and they are concentrated almost exclusively among very highly overeducated workers. No evidence is found of generalized political effects of overeducation, either in the form of increased political leftism or in the form of increased political alienation. 
These findings fail to support the common prediction of major political repercussions of overeducation and suggest the likelihood of alternative forms of adaptation among overeducated workers.", "which Population under analysis ?", "General", NaN, NaN], ["The research question addressed in this article concerns whether unemployment persistency can be regarded as a phenomenon that increases employment difficulties for the less educated and, if so, whether their employment chances are reduced by an overly rapid reduction in the number of jobs with low educational requirements. The empirical case is Sweden and the data covers the period 1976-2000. The empirical analyses point towards a negative response to both questions. First, it is shown that jobs with low educational requirements have declined but still constitute a substantial share of all jobs. Secondly, educational attainment has changed at a faster rate than the job structure with increasing over-education in jobs with low educational requirements as a result. This, together with changed selection patterns into the low education group, are the main reasons for the poor employment chances of the less educated in periods with low general demand for labour.", "which Population under analysis ?", "General", 946.0, 953.0], ["Abstract Fair employment policies constrain employee selection: specifically, applicants\u2019 professional experience can be a substitute for formal education. However, reflecting firm-specific job requirements, this substitution rule applies less strictly to applicants from outside the firm. Further, setting low educational job requirements decreases the risk of disparate impact charges. Using data from a large US public employer, we show that successful outsider candidates exhibit higher levels of formal education than insiders. Also, this gap in educational attainments between outsiders and insiders widens with lower advertised degree requirements. More generally, we find strong insider\u2013outsider effects on hiring decisions.", "which Population under analysis ?", "General", NaN, NaN], ["The relationship between Information and Communication Technology (ICT) and science performance has been the focus of much recent research, especially due to the prevalence of ICT in our digital society. However, the exploration of this relationship has yielded mixed results. Thus, the current study aims to uncover the learning processes that are linked to students\u2019 science performance by investigating the effect of ICT variables on science for 15-year-old students in two countries with contrasting levels of technology implementation (Bulgaria n = 5,928 and Finland n = 5,882). The study analyzed PISA 2015 data using structural equation modeling to assess the impact of ICT use, availability, and comfort on students\u2019 science scores, controlling for students\u2019 socio-economic status. In both countries, results revealed that (1) ICT use and availability were associated with lower science scores and (2) students who were more comfortable with ICT performed better in science. 
This study can inform practical implementations of ICT in classrooms that consider the differential effect of ICT and it can advance theoretical knowledge around technology, learning, and cultural context.", "which has countries ?", "Bulgaria", 541.0, 549.0], ["The present study investigated the factor structure of and measurement invariance in the information and communication technology (ICT) engagement construct, and the relationship between ICT engagement and students' performance on science, mathematics and reading in China and Germany. Samples were derived from the Programme for International Student Assessment (PISA) 2015 survey. Configural, metric and scalar equivalence were found in a multigroup exploratory structural equation model. In the regression model, a significantly positive association between interest in ICT and student achievement was found in China, in contrast to a significantly negative association in Germany. All achievement scores were negatively and significantly correlated with perceived ICT competence scores in China, whereas science and mathematics achievement scores were not predicted by scores on ICT competence in Germany. Similar patterns were found in China and Germany in terms of perceived autonomy in using ICT and social relatedness in using ICT to predict students' achievement. The implications of all the findings were discussed.", "which has countries ?", "China", 267.0, 272.0], ["The aim of the present study is twofold: (1) to identify a factor structure between variables-interest in broad science topics, perceived information and communications technology (ICT) competence, environmental awareness and optimism; and (2) to explore the relations between these variables at the country level. The first part of the aim is addressed using exploratory factor analysis with data from the Program for International Student Assessment (PISA) for 15-year-old students from Singapore and Finland. The results show that a comparable structure with four factors was verified in both countries. Correlation analyses and linear regression were used to address the second part of the aim. The results show that adolescents\u2019 interest in broad science topics can predict perceived ICT competence. Their interest in broad science topics and perceived ICT competence can predict environmental awareness in both countries. However, there is difference in predicting environmental optimism. Singaporean students\u2019 interest in broad science topics and their perceived ICT competences are positive predictors, whereas environmental awareness is a negative predictor. Finnish students\u2019 environmental awareness negatively predicted environmental optimism.", "which has countries ?", "Finland", 511.0, 518.0], ["The relationship between Information and Communication Technology (ICT) and science performance has been the focus of much recent research, especially due to the prevalence of ICT in our digital society. However, the exploration of this relationship has yielded mixed results. Thus, the current study aims to uncover the learning processes that are linked to students\u2019 science performance by investigating the effect of ICT variables on science for 15-year-old students in two countries with contrasting levels of technology implementation (Bulgaria n = 5,928 and Finland n = 5,882). 
The study analyzed PISA 2015 data using structural equation modeling to assess the impact of ICT use, availability, and comfort on students\u2019 science scores, controlling for students\u2019 socio-economic status. In both countries, results revealed that (1) ICT use and availability were associated with lower science scores and (2) students who were more comfortable with ICT performed better in science. This study can inform practical implementations of ICT in classrooms that consider the differential effect of ICT and it can advance theoretical knowledge around technology, learning, and cultural context.", "which has countries ?", "Finland", 564.0, 571.0], ["Abstract As a relevant cognitive-motivational aspect of ICT literacy, a new construct ICT Engagement is theoretically based on self-determination theory and involves the factors ICT interest, Perceived ICT competence, Perceived autonomy related to ICT use, and ICT as a topic in social interaction. In this manuscript, we present different sources of validity supporting the construct interpretation of test scores in the ICT Engagement scale, which was used in PISA 2015. Specifically, we investigated the internal structure by dimensional analyses and investigated the relation of ICT Engagement aspects to other variables. The analyses are based on public data from PISA 2015 main study from Switzerland ( n = 5860) and Germany ( n = 6504). First, we could confirm the four-dimensional structure of ICT Engagement for the Swiss sample using a structural equation modelling approach. Second, ICT Engagement scales explained the highest amount of variance in ICT Use for Entertainment, followed by Practical use. Third, we found significantly lower values for girls in all ICT Engagement scales except ICT Interest. Fourth, we found a small negative correlation between the scores in the subscale \u201cICT as a topic in social interaction\u201d and reading performance in PISA 2015. We could replicate most results for the German sample. Overall, the obtained results support the construct interpretation of the four ICT Engagement subscales.", "which has countries ?", "Germany", 723.0, 730.0], ["The present study investigated the factor structure of and measurement invariance in the information and communication technology (ICT) engagement construct, and the relationship between ICT engagement and students' performance on science, mathematics and reading in China and Germany. Samples were derived from the Programme for International Student Assessment (PISA) 2015 survey. Configural, metric and scalar equivalence were found in a multigroup exploratory structural equation model. In the regression model, a significantly positive association between interest in ICT and student achievement was found in China, in contrast to a significantly negative association in Germany. All achievement scores were negatively and significantly correlated with perceived ICT competence scores in China, whereas science and mathematics achievement scores were not predicted by scores on ICT competence in Germany. Similar patterns were found in China and Germany in terms of perceived autonomy in using ICT and social relatedness in using ICT to predict students' achievement. The implications of all the findings were discussed. 
", "which has countries ?", "Germany", 277.0, 284.0], ["The aim of the present study is twofold: (1) to identify a factor structure between variables-interest in broad science topics, perceived information and communications technology (ICT) competence, environmental awareness and optimism; and (2) to explore the relations between these variables at the country level. The first part of the aim is addressed using exploratory factor analysis with data from the Program for International Student Assessment (PISA) for 15-year-old students from Singapore and Finland. The results show that a comparable structure with four factors was verified in both countries. Correlation analyses and linear regression were used to address the second part of the aim. The results show that adolescents\u2019 interest in broad science topics can predict perceived ICT competence. Their interest in broad science topics and perceived ICT competence can predict environmental awareness in both countries. However, there is difference in predicting environmental optimism. Singaporean students\u2019 interest in broad science topics and their perceived ICT competences are positive predictors, whereas environmental awareness is a negative predictor. Finnish students\u2019 environmental awareness negatively predicted environmental optimism.", "which has countries ?", "Singapore", 497.0, 506.0], ["Abstract As a relevant cognitive-motivational aspect of ICT literacy, a new construct ICT Engagement is theoretically based on self-determination theory and involves the factors ICT interest, Perceived ICT competence, Perceived autonomy related to ICT use, and ICT as a topic in social interaction. In this manuscript, we present different sources of validity supporting the construct interpretation of test scores in the ICT Engagement scale, which was used in PISA 2015. Specifically, we investigated the internal structure by dimensional analyses and investigated the relation of ICT Engagement aspects to other variables. The analyses are based on public data from PISA 2015 main study from Switzerland ( n = 5860) and Germany ( n = 6504). First, we could confirm the four-dimensional structure of ICT Engagement for the Swiss sample using a structural equation modelling approach. Second, ICT Engagement scales explained the highest amount of variance in ICT Use for Entertainment, followed by Practical use. Third, we found significantly lower values for girls in all ICT Engagement scales except ICT Interest. Fourth, we found a small negative correlation between the scores in the subscale \u201cICT as a topic in social interaction\u201d and reading performance in PISA 2015. We could replicate most results for the German sample. Overall, the obtained results support the construct interpretation of the four ICT Engagement subscales.", "which has countries ?", "Switzerland", 695.0, 706.0], ["Background: In December 2019, an outbreak of respiratory illness caused by a novel coronavirus (2019-nCoV) emerged in Wuhan, China and has swiftly spread to other parts of China and a number of foreign countries. The 2019-nCoV cases might have been under-reported roughly from 1 to 15 January 2020, and thus we estimated the number of unreported cases and the basic reproduction number, R0, of 2019-nCoV. Methods: We modelled the epidemic curve of 2019-nCoV cases, in mainland China from 1 December 2019 to 24 January 2020 through the exponential growth. The number of unreported cases was determined by the maximum likelihood estimation. 
We used the serial intervals (SI) of infection caused by two other well-known coronaviruses (CoV), Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS) CoVs, as approximations of the unknown SI for 2019-nCoV to estimate R0. Results: We confirmed that the initial growth phase followed an exponential growth pattern. The under-reporting was likely to have resulted in 469 (95% CI: 403\u2013540) unreported cases from 1 to 15 January 2020. The reporting rate after 17 January 2020 was likely to have increased 21-fold (95% CI: 18\u201325) in comparison to the situation from 1 to 17 January 2020 on average. We estimated the R0 of 2019-nCoV at 2.56 (95% CI: 2.49\u20132.63). Conclusion: The under-reporting was likely to have occurred during the first half of January 2020 and should be considered in future investigation.", "which location ?", "mainland China", 468.0, 482.0], ["Abstract Introduction Since December 29, 2019 a pandemic of new novel coronavirus-infected pneumonia named COVID-19 has started from Wuhan, China, has led to 254 996 confirmed cases until midday March 20, 2020. Sporadic cases have been imported worldwide, in Algeria, the first case reported on February 25, 2020 was imported from Italy, and then the epidemic has spread to other parts of the country very quickly with 139 confirmed cases until March 21, 2020. Methods It is crucial to estimate the cases number growth in the early stages of the outbreak, to this end, we have implemented the Alg-COVID-19 Model which allows to predict the incidence and the reproduction number R0 in the coming months in order to help decision makers. The Alg-COVIS-19 Model initial equation 1, estimates the cumulative cases at t prediction time using two parameters: the reproduction number R0 and the serial interval SI. Results We found R0=2.55 based on actual incidence at the first 25 days, using the serial interval SI= 4,4 and the prediction time t=26. The herd immunity HI estimated is HI=61%. Also, The Covid-19 incidence predicted with the Alg-COVID-19 Model fits closely the actual incidence during the first 26 days of the epidemic in Algeria Fig. 1.A. which allows us to use it. According to Alg-COVID-19 Model, the number of cases will exceed 5000 on the 42 th day (April 7 th ) and it will double to 10000 on 46th day of the epidemic (April 11 th ), thus, exponential phase will begin (Table 1; Fig.1.B) and increases continuously until reaching \u00e0 herd immunity of 61% unless serious preventive measures are considered. Discussion This model is valid only when the majority of the population is vulnerable to COVID-19 infection, however, it can be updated to fit the new parameters values.", "which location ?", "Algeria", 259.0, 266.0], ["The outbreak of coronavirus disease 2019 (COVID-19) which originated in Wuhan, China, constitutes a public health emergency of international concern with a very high risk of spread and impact at the global level. We developed data-driven susceptible-exposed-infectious-quarantine-recovered (SEIQR) models to simulate the epidemic with the interventions of social distancing and epicenter lockdown. Population migration data combined with officially reported data were used to estimate model parameters, and then calculated the daily exported infected individuals by estimating the daily infected ratio and daily susceptible population size. As of Jan 01, 2020, the estimated initial number of latently infected individuals was 380.1 (95%-CI: 379.8~381.0). 
With 30 days of substantial social distancing, the reproductive number in Wuhan and Hubei was reduced from 2.2 (95%-CI: 1.4~3.9) to 1.58 (95%-CI: 1.34~2.07), and in other provinces from 2.56 (95%-CI: 2.43~2.63) to 1.65 (95%-CI: 1.56~1.76). We found that earlier intervention of social distancing could significantly limit the epidemic in mainland China. The number of infections could be reduced up to 98.9%, and the number of deaths could be reduced by up to 99.3% as of Feb 23, 2020. However, earlier epicenter lockdown would partially neutralize this favorable effect. Because it would cause in situ deteriorating, which overwhelms the improvement out of the epicenter. To minimize the epidemic size and death, stepwise implementation of social distancing in the epicenter city first, then in the province, and later the whole nation without the epicenter lockdown would be practical and cost-effective.", "which location ?", "Wuhan, China", 72.0, 84.0], ["Water scarcity in mountain regions such as the Himalaya has been studied with a pre-existing notion of scarcity justified by decades of communities' suffering from physical water shortages combined by difficulties of access. The Eastern Himalayan Region (EHR) of India receives significantly high amounts of annual precipitation. Studies have nonetheless shown that this region faces a strange dissonance: an acute water scarcity in a supposedly \u2018water-rich\u2019 region. The main objective of this paper is to decipher various drivers of water scarcity by locating the contemporary history of water institutions within the development trajectory of the Darjeeling region, particularly Darjeeling Municipal Town in West Bengal, India. A key feature of the region's urban water governance that defines the water scarcity narrative is the multiplicity of water institutions and the intertwining of formal and informal institutions at various scales. These factors affect the availability of and basic access to domestic water by communities in various ways resulting in the creation of a preferred water bundle consisting of informal water markets over and above traditional sourcing from springs and the formal water supply from the town municipality.", "which location ?", "Darjeeling", 649.0, 659.0], ["Abstract: The 2019-nCoV outbreak has raised concern of global spread. While person-to-person transmission within the Wuhan district has led to a large outbreak, the transmission potential outside of the region remains unclear. Here we present a simple approach for determining whether the upper limit of the confidence interval for the reproduction number exceeds one for transmission in the United States, which would allow endemic transmission. As of February 7, 2020, the number of cases in the United states support subcritical transmission, rather than ongoing transmission. However, this conclusion can change if pre-symptomatic cases resulting from human-to-human transmission have not yet been identified.", "which location ?", "United States", 392.0, 405.0], ["Between December 1, 2019 and January 26, 2020, nearly 3000 cases of respiratory illness caused by a novel coronavirus originating in Wuhan, China have been reported. In this short analysis, we combine publicly available cumulative case data from the ongoing outbreak with phenomenological modeling methods to conduct an early transmissibility assessment. Our model suggests that the basic reproduction number associated with the outbreak (at time of writing) may range from 2.0 to 3.1. 
Though these estimates are preliminary and subject to change, they are consistent with previous findings regarding the transmissibility of the related SARS-Coronavirus and indicate the possibility of epidemic potential.", "which location ?", "Wuhan", 133.0, 138.0], ["Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. Shorter serial intervals lead to lower estimates of R0, which suggest that half of all secondary infections should be prevented to control spread.", "which location ?", "Tianjin, China", 480.0, 494.0], ["Background: On January 23, 2020, a quarantine was imposed on travel in and out of Wuhan, where the 2019 novel coronavirus (2019-nCoV) outbreak originated from. Previous analyses estimated the basic epidemiological parameters using symptom onset dates of the confirmed cases in Wuhan and outside China. Methods: We obtained information on the 46 coronavirus cases who traveled from Wuhan before January 23 and have been subsequently confirmed in Hong Kong, Japan, Korea, Macau, Singapore, and Taiwan as of February 5, 2020. Most cases have detailed travel history and disease progress. Compared to previous analyses, an important distinction is that we used this data to informatively simulate the infection time of each case using the symptom onset time, previously reported incubation interval, and travel history. We then fitted a simple exponential growth model with adjustment for the January 23 travel ban to the distribution of the simulated infection time. We used a Bayesian analysis with diffuse priors to quantify the uncertainty of the estimated epidemiological parameters. We performed sensitivity analysis to different choices of incubation interval and the hyperparameters in the prior specification. Results: We found that our model provides good fit to the distribution of the infection time. Assuming the travel rate to the selected countries and regions is constant over the study period, we found that the epidemic was doubling in size every 2.9 days (95% credible interval [CrI], 2 days--4.1 days). Using previously reported serial interval for 2019-nCoV, the estimated basic reproduction number is 5.7 (95% CrI, 3.4--9.2). 
The estimates did not change substantially if we assumed the travel rate doubled in the last 3 days before January 23, when we used previously reported incubation interval for severe acute respiratory syndrome (SARS), or when we changed the hyperparameters in our prior specification. Conclusions: Our estimated epidemiological parameters are higher than an earlier report using confirmed cases in Wuhan. This indicates the 2019-nCoV could have been spreading faster than previous estimates.", "which location ?", "Hong Kong, Japan, Korea, Macau, Singapore, and Taiwan", 445.0, 498.0], ["Background: Estimating key infectious disease parameters from the COVID-19 outbreak is quintessential for modelling studies and guiding intervention strategies. Whereas different estimates for the incubation period distribution and the serial interval distribution have been reported, estimates of the generation interval for COVID-19 have not been provided. Methods: We used outbreak data from clusters in Singapore and Tianjin, China to estimate the generation interval from symptom onset data while acknowledging uncertainty about the incubation period distribution and the underlying transmission network. From those estimates we obtained the proportions pre-symptomatic transmission and reproduction numbers. Results: The mean generation interval was 5.20 (95%CI 3.78-6.78) days for Singapore and 3.95 (95%CI 3.01-4.91) days for Tianjin, China when relying on a previously reported incubation period with mean 5.2 and SD 2.8 days. The proportion of pre-symptomatic transmission was 48% (95%CI 32-67%) for Singapore and 62% (95%CI 50-76%) for Tianjin, China. Estimates of the reproduction number based on the generation interval distribution were slightly higher than those based on the serial interval distribution. Conclusions: Estimating generation and serial interval distributions from outbreak data requires careful investigation of the underlying transmission network. Detailed contact tracing information is essential for correctly estimating these quantities.", "which location ?", "Tianjin, China", 421.0, 435.0], ["ABSTRACT Worldwide demand for gold has accelerated unregulated, small-scale artisanal gold mining (AGM) activities, which are responsible for widespread environmental pollution in Ghana. This study was conducted to assess the impact of AGM activities, namely the heavy metals pollution of soil-water-vegetative ecosystems in southern Ghana. Composite soil, stream sediments and water, well water, and plant samples were randomly collected in replicates from adjoining AGM areas, analyzed for soluble and total Fe, Cu, Zn, Pb, Cd, Hg contents and other properties, and calculated for indices to evaluate the extent of environmental pollution and degradation. Results indicated that both well and stream waters were contaminated with heavy metals and were unsuitable for drinking due to high levels of Pb (0.36\u20130.03 mg/L), Cd (0.01\u20130.02 mg/L), and Hg (<0.01 mg/L). Enrichment factor and geo-accumulation index showed that the soil and sediments were polluted with Cd and Hg. The soil, which could have acted as a source of the Hg pollutant for natural vegetation and food crops grown near AGM areas, was loaded with 2.3 times more Hg than the sediments. The concentration of heavy metals in fern was significantly higher than in corn, which exceeded the maximum permissible limits of WHO/FAO guidelines. Biocontamination factor suggested that the contamination of plants with Hg was high compared to other heavy metals. 
Further studies are needed for extensive sampling and monitoring of soil-water-vegetative ecosystems to remediate and control heavy metals pollution in response to AGM activities in Ghana.", "which location ?", "Ghana", 180.0, 185.0], ["The first cases of COVID-19 in France were detected on January 24, 2020. The number of screening tests carried out and the methodology used to target the patients tested do not allow for a direct computation of the real number of cases and the mortality rate. In this report, we develop a 'mechanistic-statistical' approach coupling a SIR ODE model describing the unobserved epidemiological dynamics, a probabilistic model describing the data acquisition process and a statistical inference method. The objective of this model is not to make forecasts but to estimate the real number of people infected with COVID-19 during the observation window in France and to deduce the mortality rate associated with the epidemic. Main results. The actual number of infected cases in France is probably much higher than the observations: we find here a factor x 15 (95%-CI: 1.5-11.7), which leads to a 5.2/1000 mortality rate (95%-CI: 1.5/1000-11.7/1000) at the end of the observation period. We find a R0 of 4.8, a high value which may be linked to the long viral shedding period of 20 days.", "which location ?", "France", 31.0, 37.0], ["The exported cases of 2019 novel coronavirus (COVID-19) infection that were confirmed outside China provide an opportunity to estimate the cumulative incidence and confirmed case fatality risk (cCFR) in mainland China. Knowledge of the cCFR is critical to characterize the severity and understand the pandemic potential of COVID-19 in the early stage of the epidemic. Using the exponential growth rate of the incidence, the present study statistically estimated the cCFR and the basic reproduction number\u2014the average number of secondary cases generated by a single primary case in a na\u00efve population. We modeled epidemic growth either from a single index case with illness onset on 8 December 2019 (Scenario 1), or using the growth rate fitted along with the other parameters (Scenario 2) based on data from 20 exported cases reported by 24 January 2020. The cumulative incidence in China by 24 January was estimated at 6924 cases (95% confidence interval [CI]: 4885, 9211) and 19,289 cases (95% CI: 10,901, 30,158), respectively. The latest estimated values of the cCFR were 5.3% (95% CI: 3.5%, 7.5%) for Scenario 1 and 8.4% (95% CI: 5.3%, 12.3%) for Scenario 2. The basic reproduction number was estimated to be 2.1 (95% CI: 2.0, 2.2) and 3.2 (95% CI: 2.7, 3.7) for Scenarios 1 and 2, respectively. Based on these results, we argued that the current COVID-19 epidemic has a substantial potential for causing a pandemic. The proposed approach provides insights in early risk assessment using publicly available data.", "which location ?", "mainland China", 203.0, 217.0], ["Since first identified, the epidemic scale of the recently emerged novel coronavirus (2019-nCoV) in Wuhan, China, has increased rapidly, with cases arising across China and other countries and regions. 
using a transmission model, we estimate a basic reproductive number of 3.11 (95%CI, 2.39-4.13); 58-76% of transmissions must be prevented to stop increasing; Wuhan case ascertainment of 5.0% (3.6-7.4); 21022 (11090-33490) total infections in Wuhan 1 to 22 January.", "which location ?", "China", 107.0, 112.0], ["Background: In December 2019, an outbreak of coronavirus disease (COVID-19) was identified in Wuhan, China and, later on, detected in other parts of China. Our aim is to evaluate the effectiveness of the evolution of interventions and self-protection measures, estimate the risk of partial lifting control measures and predict the epidemic trend of the virus in mainland China excluding Hubei province based on the published data and a novel mathematical model. Methods: A novel COVID-19 transmission dynamic model incorporating the intervention measures implemented in China is proposed. We parameterize the model by using the Markov Chain Monte Carlo (MCMC) method and estimate the control reproduction number Rc, as well as the effective daily reproduction ratio Re(t), of the disease transmission in mainland China excluding Hubei province. Results: The estimation outcomes indicate that the control reproduction number is 3.36 (95% CI 3.20-3.64) and Re(t) has dropped below 1 since January 31st, 2020, which implies that the containment strategies implemented by the Chinese government in mainland China excluding Hubei province are indeed effective and magnificently suppressed COVID-19 transmission. Moreover, our results show that relieving personal protection too early may lead to the spread of disease for a longer time and more people would be infected, and may even cause epidemic or outbreak again. By calculating the effective reproduction ratio, we proved that the contact rate should be kept at least less than 30% of the normal level by April, 2020. Conclusions: To ensure the epidemic ending rapidly, it is necessary to maintain the current integrated restrict interventions and self-protection measures, including travel restriction, quarantine of entry, contact tracing followed by quarantine and isolation and reduction of contact, like wearing masks, etc. People should be fully aware of the real-time epidemic situation and keep sufficient personal protection until April. If all the above conditions are met, the outbreak is expected to be ended by April in mainland China apart from Hubei province.", "which location ?", "mainland China excluding Hubei province", 362.0, 401.0], ["We estimate the effective reproduction number for 2019-nCoV based on the daily reported cases from China CDC. The results indicate that 2019-nCoV has a higher effective reproduction number than SARS with a comparable fatality rate.", "which location ?", "China", 99.0, 104.0], ["Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. 
Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. Shorter serial intervals lead to lower estimates of R0, which suggest that half of all secondary infections should be prevented to control spread.", "which location ?", "Singapore", 297.0, 306.0], ["We conducted a comparative study of COVID-19 epidemic in three different settings: mainland China, the Guangdong province of China and South Korea, by formulating two disease transmission dynamics models incorporating epidemic characteristics and setting-specific interventions, and fitting the models to multi-source data to identify initial and effective reproduction numbers and evaluate effectiveness of interventions. We estimated the initial basic reproduction number for South Korea, the Guangdong province and mainland China as 2.6 (95% confidence interval (CI): (2.5, 2.7)), 3.0 (95%CI: (2.6, 3.3)) and 3.8 (95%CI: (3.5,4.2)), respectively, given a serial interval with mean of 5 days with standard deviation of 3 days. We found that the effective reproduction number for the Guangdong province and mainland China has fallen below the threshold 1 since February 8th and 18th respectively, while the effective reproduction number for South Korea remains high, suggesting that the interventions implemented need to be enhanced in order to halt further infections. We also project the epidemic trend in South Korea under different scenarios where a portion or the entirety of the integrated package of interventions in China is used. We show that a coherent and integrated approach with stringent public health interventions is the key to the success of containing the epidemic in China and specially its provinces outside its epicenter, and we show that this approach can also be effective to mitigate the burden of the COVID-19 epidemic in South Korea. The experience of outbreak control in mainland China should be a guiding reference for the rest of the world including South Korea.", "which location ?", "South Korea", 135.0, 146.0], ["The exported cases of 2019 novel coronavirus (COVID-19) infection that were confirmed outside China provide an opportunity to estimate the cumulative incidence and confirmed case fatality risk (cCFR) in mainland China. Knowledge of the cCFR is critical to characterize the severity and understand the pandemic potential of COVID-19 in the early stage of the epidemic. Using the exponential growth rate of the incidence, the present study statistically estimated the cCFR and the basic reproduction number\u2014the average number of secondary cases generated by a single primary case in a na\u00efve population. We modeled epidemic growth either from a single index case with illness onset on 8 December 2019 (Scenario 1), or using the growth rate fitted along with the other parameters (Scenario 2) based on data from 20 exported cases reported by 24 January 2020. 
The cumulative incidence in China by 24 January was estimated at 6924 cases (95% confidence interval [CI]: 4885, 9211) and 19,289 cases (95% CI: 10,901, 30,158), respectively. The latest estimated values of the cCFR were 5.3% (95% CI: 3.5%, 7.5%) for Scenario 1 and 8.4% (95% CI: 5.3%, 12.3%) for Scenario 2. The basic reproduction number was estimated to be 2.1 (95% CI: 2.0, 2.2) and 3.2 (95% CI: 2.7, 3.7) for Scenarios 1 and 2, respectively. Based on these results, we argued that the current COVID-19 epidemic has a substantial potential for causing a pandemic. The proposed approach provides insights in early risk assessment using publicly available data.", "which location ?", "mainland China", 203.0, 217.0], ["We conducted a comparative study of COVID-19 epidemic in three different settings: mainland China, the Guangdong province of China and South Korea, by formulating two disease transmission dynamics models incorporating epidemic characteristics and setting-specific interventions, and fitting the models to multi-source data to identify initial and effective reproduction numbers and evaluate effectiveness of interventions. We estimated the initial basic reproduction number for South Korea, the Guangdong province and mainland China as 2.6 (95% confidence interval (CI): (2.5, 2.7)), 3.0 (95%CI: (2.6, 3.3)) and 3.8 (95%CI: (3.5,4.2)), respectively, given a serial interval with mean of 5 days with standard deviation of 3 days. We found that the effective reproduction number for the Guangdong province and mainland China has fallen below the threshold 1 since February 8th and 18th respectively, while the effective reproduction number for South Korea remains high, suggesting that the interventions implemented need to be enhanced in order to halt further infections. We also project the epidemic trend in South Korea under different scenarios where a portion or the entirety of the integrated package of interventions in China is used. We show that a coherent and integrated approach with stringent public health interventions is the key to the success of containing the epidemic in China and specially its provinces outside its epicenter, and we show that this approach can also be effective to mitigate the burden of the COVID-19 epidemic in South Korea. The experience of outbreak control in mainland China should be a guiding reference for the rest of the world including South Korea.", "which location ?", "mainland China", 83.0, 97.0], ["Abstract Background The initial cases of novel coronavirus (2019-nCoV)\u2013infected pneumonia (NCIP) occurred in Wuhan, Hubei Province, China, in December 2019 and January 2020. We analyzed data on the first 425 confirmed cases in Wuhan to determine the epidemiologic characteristics of NCIP. Methods We collected information on demographic characteristics, exposure history, and illness timelines of laboratory-confirmed cases of NCIP that had been reported by January 22, 2020. We described characteristics of the cases and estimated the key epidemiologic time-delay distributions. In the early period of exponential growth, we estimated the epidemic doubling time and the basic reproductive number. Results Among the first 425 patients with confirmed NCIP, the median age was 59 years and 56% were male. The majority of cases (55%) with onset before January 1, 2020, were linked to the Huanan Seafood Wholesale Market, as compared with 8.6% of the subsequent cases. 
The mean incubation period was 5.2 days (95% confidence interval [CI], 4.1 to 7.0), with the 95th percentile of the distribution at 12.5 days. In its early stages, the epidemic doubled in size every 7.4 days. With a mean serial interval of 7.5 days (95% CI, 5.3 to 19), the basic reproductive number was estimated to be 2.2 (95% CI, 1.4 to 3.9). Conclusions On the basis of this information, there is evidence that human-to-human transmission has occurred among close contacts since the middle of December 2019. Considerable efforts to reduce transmission will be required to control outbreaks if similar dynamics apply elsewhere. Measures to prevent or reduce transmission should be implemented in populations at risk. (Funded by the Ministry of Science and Technology of China and others.)", "which location ?", "China", 132.0, 137.0], ["Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. Shorter serial intervals lead to lower estimates of R0, which suggest that half of all secondary infections should be prevented to control spread.", "which location ?", "Tianjin, China", 480.0, 494.0], ["Since first identified, the epidemic scale of the recently emerged novel coronavirus (2019-nCoV) in Wuhan, China, has increased rapidly, with cases arising across China and other countries and regions. using a transmission model, we estimate a basic reproductive number of 3.11 (95%CI, 2.39-4.13); 58-76% of transmissions must be prevented to stop increasing; Wuhan case ascertainment of 5.0% (3.6-7.4); 21022 (11090-33490) total infections in Wuhan 1 to 22 January.", "which location ?", "China", 107.0, 112.0], ["Background: Estimating key infectious disease parameters from the COVID-19 outbreak is quintessential for modelling studies and guiding intervention strategies. Whereas different estimates for the incubation period distribution and the serial interval distribution have been reported, estimates of the generation interval for COVID-19 have not been provided. 
Methods: We used outbreak data from clusters in Singapore and Tianjin, China to estimate the generation interval from symptom onset data while acknowledging uncertainty about the incubation period distribution and the underlying transmission network. From those estimates we obtained the proportions pre-symptomatic transmission and reproduction numbers. Results: The mean generation interval was 5.20 (95%CI 3.78-6.78) days for Singapore and 3.95 (95%CI 3.01-4.91) days for Tianjin, China when relying on a previously reported incubation period with mean 5.2 and SD 2.8 days. The proportion of pre-symptomatic transmission was 48% (95%CI 32-67%) for Singapore and 62% (95%CI 50-76%) for Tianjin, China. Estimates of the reproduction number based on the generation interval distribution were slightly higher than those based on the serial interval distribution. Conclusions: Estimating generation and serial interval distributions from outbreak data requires careful investigation of the underlying transmission network. Detailed contact tracing information is essential for correctly estimating these quantities.", "which location ?", "Singapore", 407.0, 416.0], ["English Abstract: Background: Since the emergence of the first pneumonia cases in Wuhan, China, the novel coronavirus (2019-nCov) infection has been quickly spreading out to other provinces and neighbouring countries. Estimation of the basic reproduction number by means of mathematical modelling can be helpful for determining the potential and severity of an outbreak, and providing critical information for identifying the type of disease interventions and intensity. Methods: A deterministic compartmental model was devised based on the clinical progression of the disease, epidemiological status of the individuals, and the intervention measures. Findings: The estimation results based on likelihood and model analysis reveal that the control reproduction number may be as high as 6.47 (95% CI 5.71-7.23). Sensitivity analyses reveal that interventions, such as intensive contact tracing followed by quarantine and isolation, can effectively reduce the control reproduction number and transmission risk, with the effect of travel restriction of Wuhan on 2019-nCov infection in Beijing being almost equivalent to increasing quarantine by 100-thousand baseline value. Interpretation: It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities can contribute to the prevention and control of the 2019-nCov infection, and how long should be maintained. Under the most restrictive measures, the outbreak is expected to peak within two weeks (since January 23rd 2020) with significant low peak value. With travel restriction (no imported exposed individuals to Beijing), the number of infected individuals in 7 days will decrease by 91.14% in Beijing, compared with the scenario of no travel restriction. 
Mandarin Abstract: Background: Since the emergence of the first pneumonia cases in Wuhan, China, the novel coronavirus (2019-nCov) infection has spread rapidly to other provinces and neighbouring countries. Estimating the basic reproduction number by means of mathematical modelling can help determine the potential and severity of an outbreak and provide critical information for identifying the type and intensity of disease interventions. Methods: A deterministic compartmental model was designed based on the clinical progression of the disease, the epidemiological status of the individuals, and the intervention measures. Results: Estimates based on likelihood and model analysis indicate that the control reproduction number may be as high as 6.47 (95% CI 5.71-7.23). Sensitivity analyses show that interventions such as intensive contact tracing followed by quarantine and isolation can effectively reduce the control reproduction number and transmission risk; the effect of the Wuhan lockdown on 2019-nCov infection in Beijing is almost equivalent to increasing quarantine by a baseline value of 100 thousand. Interpretation: It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities can contribute to the prevention and control of the 2019-nCov infection, and how long they should be maintained. Under the most restrictive measures, the outbreak is expected to peak within two weeks (from 23 January 2020) with a relatively low peak value. With travel restriction (i.e., no imported exposed individuals entering Beijing), the number of infected individuals in Beijing over 7 days will decrease by 91.14%, compared with the scenario of no travel restriction.", "which location ?", "China", 89.0, 94.0], ["Self-sustaining human-to-human transmission of the novel coronavirus (2019-nCov) is the only plausible explanation of the scale of the outbreak in Wuhan. We estimate that, on average, each case infected 2.6 (uncertainty range: 1.5-3.5) other people up to 18 January 2020, based on an analysis combining our past estimates of the size of the outbreak in Wuhan with computational modelling of potential epidemic trajectories. This implies that control measures need to block well over 60% of transmission to be effective in controlling the outbreak. It is likely, based on the experience of SARS and MERS-CoV, that the number of secondary cases caused by a case of 2019-nCoV is highly variable – with many cases causing no secondary infections, and a few causing many. Whether transmission is continuing at the same rate currently depends on the effectiveness of current control measures implemented in China and the extent to which the populations of affected areas have adopted risk-reducing behaviours. In the absence of antiviral drugs or vaccines, control relies upon the prompt detection and isolation of symptomatic cases. 
It is unclear at the current time whether this outbreak can be contained within China; uncertainties include the severity spectrum of the disease caused by this virus and whether cases with relatively mild symptoms are able to transmit the virus efficiently. Identification and testing of potential cases need to be as extensive as is permitted by healthcare and diagnostic testing capacity \u2013 including the identification, testing and isolation of suspected cases with only mild to moderate disease (e.g. influenza-like illness), when logistically feasible.", "which location ?", "Wuhan", 147.0, 152.0], ["Background: Estimating key infectious disease parameters from the COVID-19 outbreak is quintessential for modelling studies and guiding intervention strategies. Whereas different estimates for the incubation period distribution and the serial interval distribution have been reported, estimates of the generation interval for COVID-19 have not been provided. Methods: We used outbreak data from clusters in Singapore and Tianjin, China to estimate the generation interval from symptom onset data while acknowledging uncertainty about the incubation period distribution and the underlying transmission network. From those estimates we obtained the proportions pre-symptomatic transmission and reproduction numbers. Results: The mean generation interval was 5.20 (95%CI 3.78-6.78) days for Singapore and 3.95 (95%CI 3.01-4.91) days for Tianjin, China when relying on a previously reported incubation period with mean 5.2 and SD 2.8 days. The proportion of pre-symptomatic transmission was 48% (95%CI 32-67%) for Singapore and 62% (95%CI 50-76%) for Tianjin, China. Estimates of the reproduction number based on the generation interval distribution were slightly higher than those based on the serial interval distribution. Conclusions: Estimating generation and serial interval distributions from outbreak data requires careful investigation of the underlying transmission network. Detailed contact tracing information is essential for correctly estimating these quantities.", "which location ?", "Singapore", 407.0, 416.0], ["Between December 1, 2019 and January 26, 2020, nearly 3000 cases of respiratory illness caused by a novel coronavirus originating in Wuhan, China have been reported. In this short analysis, we combine publicly available cumulative case data from the ongoing outbreak with phenomenological modeling methods to conduct an early transmissibility assessment. Our model suggests that the basic reproduction number associated with the outbreak (at time of writing) may range from 2.0 to 3.1. Though these estimates are preliminary and subject to change, they are consistent with previous findings regarding the transmissibility of the related SARS-Coronavirus and indicate the possibility of epidemic potential.", "which location ?", "Wuhan", 133.0, 138.0], ["150 Impact of deforestation and subsequent land-use change on soil quality Emmanuel Amoakwah a, Mohammad A. Rahman b, Kwabena A. Nketia a, Rousseau Djouaka c, Nataliia Oleksandrivna Didenko d, Khandakar R. 
Islam b,* a CSIR \u2013 Soil Research Institute, Academy Post Office, Kwadaso-Kumasi, Ghana b Ohio State University South Centers, Piketon, Ohio, USA c International Institute of Tropical Agriculture, Benin d Institute of Water Problems and Land Reclamation, Kyiv, Ukraine", "which location ?", "Benin", 402.0, 407.0], ["Abstract Background The initial cases of novel coronavirus (2019-nCoV)\u2013infected pneumonia (NCIP) occurred in Wuhan, Hubei Province, China, in December 2019 and January 2020. We analyzed data on the first 425 confirmed cases in Wuhan to determine the epidemiologic characteristics of NCIP. Methods We collected information on demographic characteristics, exposure history, and illness timelines of laboratory-confirmed cases of NCIP that had been reported by January 22, 2020. We described characteristics of the cases and estimated the key epidemiologic time-delay distributions. In the early period of exponential growth, we estimated the epidemic doubling time and the basic reproductive number. Results Among the first 425 patients with confirmed NCIP, the median age was 59 years and 56% were male. The majority of cases (55%) with onset before January 1, 2020, were linked to the Huanan Seafood Wholesale Market, as compared with 8.6% of the subsequent cases. The mean incubation period was 5.2 days (95% confidence interval [CI], 4.1 to 7.0), with the 95th percentile of the distribution at 12.5 days. In its early stages, the epidemic doubled in size every 7.4 days. With a mean serial interval of 7.5 days (95% CI, 5.3 to 19), the basic reproductive number was estimated to be 2.2 (95% CI, 1.4 to 3.9). Conclusions On the basis of this information, there is evidence that human-to-human transmission has occurred among close contacts since the middle of December 2019. Considerable efforts to reduce transmission will be required to control outbreaks if similar dynamics apply elsewhere. Measures to prevent or reduce transmission should be implemented in populations at risk. (Funded by the Ministry of Science and Technology of China and others.)", "which location ?", "China", 132.0, 137.0], ["The B.1.1.7 variant of concern (VOC) is increasing in prevalence across Europe. Accurate estimation of disease severity associated with this VOC is critical for pandemic planning. We found increased risk of death for VOC compared with non-VOC cases in England (HR: 1.67 (95% CI: 1.34 - 2.09; P<.0001). Absolute risk of death by 28-days increased with age and comorbidities. VOC has potential to spread faster with higher mortality than the pandemic to date.", "which location ?", "England", 252.0, 259.0], ["Since the emergence of the first cases in Wuhan, China, the novel coronavirus (2019-nCoV) infection has been quickly spreading out to other provinces and neighboring countries. Estimation of the basic reproduction number by means of mathematical modeling can be helpful for determining the potential and severity of an outbreak and providing critical information for identifying the type of disease interventions and intensity. A deterministic compartmental model was devised based on the clinical progression of the disease, epidemiological status of the individuals, and intervention measures. The estimations based on likelihood and model analysis show that the control reproduction number may be as high as 6.47 (95% CI 5.71\u20137.23). 
Sensitivity analyses show that interventions, such as intensive contact tracing followed by quarantine and isolation, can effectively reduce the control reproduction number and transmission risk, with the effect of travel restriction adopted by Wuhan on 2019-nCoV infection in Beijing being almost equivalent to increasing quarantine by a 100 thousand baseline value. It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities can contribute to the prevention and control of the 2019-nCoV infection, and how long they should be maintained. Under the most restrictive measures, the outbreak is expected to peak within two weeks (since 23 January 2020) with a significant low peak value. With travel restriction (no imported exposed individuals to Beijing), the number of infected individuals in seven days will decrease by 91.14% in Beijing, compared with the scenario of no travel restriction.", "which location ?", "China", 49.0, 54.0], ["Abstract Backgrounds An ongoing outbreak of a novel coronavirus (2019-nCoV) pneumonia hit a major city of China, Wuhan, December 2019 and subsequently reached other provinces/regions of China and countries. We present estimates of the basic reproduction number, R0, of 2019-nCoV in the early phase of the outbreak. Methods Accounting for the impact of the variations in disease reporting rate, we modelled the epidemic curve of 2019-nCoV cases time series, in mainland China from January 10 to January 24, 2020, through the exponential growth. With the estimated intrinsic growth rate (γ), we estimated R0 by using the serial intervals (SI) of two other well-known coronavirus diseases, MERS and SARS, as approximations for the true unknown SI. Findings The early outbreak data largely follows the exponential growth. We estimated that the mean R0 ranges from 2.24 (95%CI: 1.96-2.55) to 3.58 (95%CI: 2.89-4.39) associated with 8-fold to 2-fold increase in the reporting rate. We demonstrated that changes in reporting rate substantially affect estimates of R0. Conclusion The mean estimate of R0 for the 2019-nCoV ranges from 2.24 to 3.58, and significantly larger than 1. Our findings indicate the potential of 2019-nCoV to cause outbreaks.", "which location ?", "China", 106.0, 111.0], ["Background To control the COVID-19 outbreak in Japan, sports and entertainment events were canceled and schools were closed throughout Japan from February 26 through March 19. That policy has been designated as voluntary event cancellation and school closure (VECSC). Object This study assesses VECSC effectiveness based on predicted outcomes. Methods A simple susceptible–infected–recovered model was applied to data of patients with symptoms in Japan during January 14 through March 26. The respective reproduction numbers for periods before VECSC (R0), during VECSC (Re), and after VECSC (Ra) were estimated. Results Results suggest R0 before VECSC as 2.534 [2.449, 2.598], Re during VECSC as 1.077 [0.948, 1.228], and Ra after VECSC as 4.455 [3.615, 5.255]. Discussion and conclusion Results demonstrated that VECSC can reduce COVID-19 infectiousness considerably, but after VECSC, the value of the reproduction number rose to exceed 4.0.", "which location ?", "Japan", 47.0, 52.0], ["Background. The epidemic outbreak caused by coronavirus 2019-nCoV is of great interest to researchers because of the high rate of spread of the infection and the significant number of fatalities. 
A detailed scientific analysis of the phenomenon is yet to come, but the public is already interested in the questions of the duration of the epidemic, the expected number of patients and deaths. For long time predictions, the complicated mathematical models are necessary which need many efforts for unknown parameters identification and calculations. In this article, some preliminary estimates will be presented. Objective. Since the reliable long time data are available only for mainland China, we will try to predict the epidemic characteristics only in this area. We will estimate some of the epidemic characteristics and present the most reliable dependences for victim numbers, infected and removed persons versus time. Methods. In this study we use the known SIR model for the dynamics of an epidemic, the known exact solution of the linear equations and statistical approach developed before for investigation of the childhood disease, which occurred in Chernivtsi (Ukraine) in 1988-1989. Results. The optimal values of the SIR model parameters were identified with the use of the statistical approach. The numbers of infected, susceptible and removed persons versus time were predicted. Conclusions. A simple mathematical model was used to predict the characteristics of the epidemic caused by coronavirus 2019-nCoV in mainland China. Further research should focus on updating the predictions with the use of fresh data and using more complicated mathematical models.", "which location ?", "China", 689.0, 694.0], ["Background: In December 2019, an outbreak of respiratory illness caused by a novel coronavirus (2019-nCoV) emerged in Wuhan, China and has swiftly spread to other parts of China and a number of foreign countries. The 2019-nCoV cases might have been under-reported roughly from 1 to 15 January 2020, and thus we estimated the number of unreported cases and the basic reproduction number, R0, of 2019-nCoV. Methods: We modelled the epidemic curve of 2019-nCoV cases, in mainland China from 1 December 2019 to 24 January 2020 through the exponential growth. The number of unreported cases was determined by the maximum likelihood estimation. We used the serial intervals (SI) of infection caused by two other well-known coronaviruses (CoV), Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS) CoVs, as approximations of the unknown SI for 2019-nCoV to estimate R0. Results: We confirmed that the initial growth phase followed an exponential growth pattern. The under-reporting was likely to have resulted in 469 (95% CI: 403–540) unreported cases from 1 to 15 January 2020. The reporting rate after 17 January 2020 was likely to have increased 21-fold (95% CI: 18–25) in comparison to the situation from 1 to 17 January 2020 on average. We estimated the R0 of 2019-nCoV at 2.56 (95% CI: 2.49–2.63). Conclusion: The under-reporting was likely to have occurred during the first half of January 2020 and should be considered in future investigation.", "which location ?", "mainland China", 468.0, 482.0], ["Since the emergence of the first cases in Wuhan, China, the novel coronavirus (2019-nCoV) infection has been quickly spreading out to other provinces and neighboring countries. Estimation of the basic reproduction number by means of mathematical modeling can be helpful for determining the potential and severity of an outbreak and providing critical information for identifying the type of disease interventions and intensity. 
A deterministic compartmental model was devised based on the clinical progression of the disease, epidemiological status of the individuals, and intervention measures. The estimations based on likelihood and model analysis show that the control reproduction number may be as high as 6.47 (95% CI 5.71\u20137.23). Sensitivity analyses show that interventions, such as intensive contact tracing followed by quarantine and isolation, can effectively reduce the control reproduction number and transmission risk, with the effect of travel restriction adopted by Wuhan on 2019-nCoV infection in Beijing being almost equivalent to increasing quarantine by a 100 thousand baseline value. It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities can contribute to the prevention and control of the 2019-nCoV infection, and how long they should be maintained. Under the most restrictive measures, the outbreak is expected to peak within two weeks (since 23 January 2020) with a significant low peak value. With travel restriction (no imported exposed individuals to Beijing), the number of infected individuals in seven days will decrease by 91.14% in Beijing, compared with the scenario of no travel restriction.", "which location ?", "China", 49.0, 54.0], ["Previous research explored workplace climate as a factor of workplace bullying and coping with workplace bullying, but these concepts were not closely related to workplace bullying behaviors (WBBs). To examine whether the perceived exposure to bullying mediates the relationship between the climate of accepting WBBs and job satisfaction under the condition of different levels of WBBs coping self-efficacy beliefs, we performed moderated mediation analysis. The Negative Acts Questionnaire \u2013 Revised was given to 329 employees from Serbia for assessing perceived exposure to bullying. Leaving the original scale items, the instruction of the original Negative Acts Questionnaire \u2013 Revised was modified for assessing (1) the climate of accepting WBBs and (2) WBBs coping self-efficacy beliefs. There was a significant negative relationship between exposure to bullying and job satisfaction. WBB acceptance climate was positively related to exposure to workplace bullying and negatively related to job satisfaction. WBB acceptance climate had an indirect relationship with job satisfaction through bullying exposure, and the relationship between WBB acceptance and exposure to bullying was weaker among those who believed that they were more efficient in coping with workplace bullying. Workplace bullying could be sustained by WBB acceptance climate which threatens the job-related outcomes. WBBs coping self-efficacy beliefs have some buffering effects.", "which Country ?", "Serbia", 533.0, 539.0], ["In this work, Cu2O nanorods modified by reduced graphene oxide (rGO) were produced via a two-step synthesis method. CuO rods were firstly prepared in graphene oxide (GO) solution using cetyltrimethyl ammonium bromide (CTAB) as a soft template by the microwave-assisted hydrothermal method, accompanied with the reduction of GO. The complexes were subsequently annealed and Cu2O nanorods/rGO composites were obtained. The as-prepared composites were evaluated using various characterization methods, and were utilized as sensing materials. The room-temperature NH3 sensing properties of a sensor based on the Cu2O nanorods/rGO composites were systematically investigated. 
The sensor exhibited an excellent sensitivity and linear response toward NH3 at room temperature. Furthermore, the sensor could be easily recovered to its initial state in a short time after exposure to fresh air. The sensor also showed excellent repeatability and selectivity to NH3. The remarkably enhanced NH3-sensing performances could be attributed to the improved conductivity, catalytic activity for the oxygen reduction reaction and increased gas adsorption in the unique hybrid composites. Such composites showed great potential for manufacturing a new generation of low-power and portable ammonia sensors.", "which Sensing environment ?", "Air", 880.0, 883.0], ["The utility of electronically conductive metal-organic frameworks (EC-MOFs) in high-performance devices has been limited to date by a lack of high-quality thin film. The controllable thin-film fabrication of an EC-MOF, Cu3 (HHTP)2 , (HHTP=2,3,6,7,10,11-hexahydroxytriphenylene), by a spray layer-by-layer liquid-phase epitaxial method is reported. The Cu3 (HHTP)2 thin film can not only be precisely prepared with thickness increment of about 2 nm per growing cycle, but also shows a smooth surface, good crystallinity, and high orientation. The chemiresistor gas sensor based on this high-quality thin film is one of the best room-temperature sensors for NH3 among all reported sensors based on various materials.", "which Architecture ?", "Chemiresistor", 546.0, 559.0], ["Hydrogen Sensing Using Pd-Functionalized Multi-Layer Graphene Nanoribbon Networks. By Jason L. Johnson, Ashkan Behnam, S. J. Pearton, and Ant Ural. Sensing of gas molecules is critical in many fields including environmental monitoring, transportation, defense, space missions, energy, agriculture, and medicine. Solid state gas sensors have been developed for many of these applications. [1–3] More recently, chemical gas sensors based on nanoscale materials, such as carbon nanotubes and semiconductor nanowires, have attracted significant research attention due to their naturally small size, large surface-to-volume ratio, low power consumption, room temperature operation, and simple fabrication. [4–6]", "which Analyte ?", "Hydrogen", 0.0, 8.0], ["Abstract Flower-like palladium nanoclusters (FPNCs) are electrodeposited onto graphene electrode that are prepared by chemical vapor deposition (CVD). The CVD graphene layer is transferred onto a poly(ethylene naphthalate) (PEN) film to provide a mechanical stability and flexibility. The surface of the CVD graphene is functionalized with diaminonaphthalene (DAN) to form flower shapes. Palladium nanoparticles act as templates to mediate the formation of FPNCs, which increase in size with reaction time. The population of FPNCs can be controlled by adjusting the DAN concentration as functionalization solution. These FPNCs_CG electrodes are sensitive to hydrogen gas at room temperature. The sensitivity and response time as a function of the FPNCs population are investigated, resulting in improved performance with increasing population. Furthermore, the minimum detectable level (MDL) of hydrogen is 0.1 ppm, which is at least 2 orders of magnitude lower than that of chemical sensors based on other Pd-based hybrid materials.", "which Analyte ?", "Hydrogen", 658.0, 666.0], ["In this work, in order to enhance the performance of graphene gas sensors, graphene and metal oxide nanoparticles (NPs) are combined to be utilized for high selectivity and fast response gas detection. 
Whether at the relatively optimal temperature or even room temperature, our gas sensors based on graphene transistors, decorated with SnO2 NPs, exhibit fast response and short recovery times (\u223c1 seconds) at 50 \u00b0C when the hydrogen concentration is 100 ppm. Specifically, X-ray photoelectron spectroscopy and conductive atomic force microscopy are employed to explore the interface properties between graphene and SnO2 NPs. Through the complimentary characterization, a mechanism based on charge transfer and band alignment is elucidated to explain the physical originality of these graphene gas sensors: high carrier mobility of graphene and small energy barrier between graphene and SnO2 NPs have ensured a fast response and a high sensitivity and selectivity of the devices. Generally, these gas sensors will facilitate the rapid development of next-generation hydrogen gas detection.", "which Analyte ?", "Hydrogen", 424.0, 432.0], ["The utilization of black phosphorus and its monolayer (phosphorene) and few-layers in field-effect transistors has attracted a lot of attention to this elemental two-dimensional material. Various studies on optimization of black phosphorus field-effect transistors, PN junctions, photodetectors, and other applications have been demonstrated. Although chemical sensing based on black phosphorus devices was theoretically predicted, there is still no experimental verification of such an important study of this material. In this article, we report on chemical sensing of nitrogen dioxide (NO2) using field-effect transistors based on multilayer black phosphorus. Black phosphorus sensors exhibited increased conduction upon NO2 exposure and excellent sensitivity for detection of NO2 down to 5 ppb. Moreover, when the multilayer black phosphorus field-effect transistor was exposed to NO2 concentrations of 5, 10, 20, and 40 ppb, its relative conduction change followed the Langmuir isotherm for molecules adsorbed on a surface. Additionally, on the basis of an exponential conductance change, the rate constants for adsorption and desorption of NO2 on black phosphorus were extracted for different NO2 concentrations, and they were in the range of 130-840 s. These results shed light on important electronic and sensing characteristics of black phosphorus, which can be utilized in future studies and applications.", "which Analyte ?", "Nitrogen dioxide", 571.0, 587.0], ["Nitrogen dioxide (NO2) is a gas species that plays an important role in certain industrial, farming, and healthcare sectors. However, there are still significant challenges for NO2 sensing at low detection limits, especially in the presence of other interfering gases. The NO2 selectivity of current gas-sensing technologies is significantly traded-off with their sensitivity and reversibility as well as fabrication and operating costs. In this work, we present an important progress for selective and reversible NO2 sensing by demonstrating an economical sensing platform based on the charge transfer between physisorbed NO2 gas molecules and two-dimensional (2D) tin disulfide (SnS2) flakes at low operating temperatures. The device shows high sensitivity and superior selectivity to NO2 at operating temperatures of less than 160 \u00b0C, which are well below those of chemisorptive and ion conductive NO2 sensors with much poorer selectivity. At the same time, excellent reversibility of the sensor is demonstrated, which has rarely been observed in other 2D material counterparts. 
Such impressive features originate from the planar morphology of 2D SnS2 as well as unique physical affinity and favorable electronic band positions of this material that facilitate the NO2 physisorption and charge transfer at parts per billion levels. The 2D SnS2-based sensor provides a real solution for low-cost and selective NO2 gas sensing.", "which Analyte ?", "Nitrogen dioxide", 0.0, 16.0], ["We describe a rapid immunochromatographic method for the quantitation of progesterone in bovine milk. The method is based on a 'competitive' assay format using the monoclonal antibody to progesterone and a progesterone-protein conjugate labelled with colloidal gold particles. The monoclonal antibody to progesterone is immobilized as a narrow detection zone on a porous membrane. The sample is mixed with colloidal gold particles coated with progesterone-protein conjugate, and the mixture is allowed to migrate past the detection zone. Migration is facilitated by capillary forces. The amount of labelled progesterone-protein conjugate bound to the detection zone, as detected by photometric scanning, is inversely proportional to the amount of progesterone present in the sample. Analysis is complete in less than 10 min. The method has a practical detection limit of 5 ng of progesterone per ml of bovine milk.", "which Analyte ?", "Progesterone in bovine milk", 73.0, 100.0], ["OBJECTIVE To establish a dosing regimen for potassium bromide and evaluate use of bromide to treat spontaneous seizures in cats. DESIGN Prospective and retrospective studies. ANIMALS 7 healthy adult male cats and records of 17 cats with seizures. PROCEDURE Seven healthy cats were administered potassium bromide (15 mg/kg [6.8 mg/lb], p.o., q 12 h) until steady-state concentrations were reached. Serum samples for pharmacokinetic analysis were obtained weekly until bromide concentrations were not detectable. Clinical data were obtained from records of 17 treated cats. RESULTS In the prospective study, maximum serum bromide concentration was 1.1 +/- 0.2 mg/mL at 8 weeks. Mean disappearance half-life was 1.6 +/- 0.2 weeks. Steady state was achieved at a mean of 5.3 +/-1.1 weeks. No adverse effects were detected and bromide was well tolerated. In the retrospective study, administration of bromide (n = 4) or bromide and phenobarbital (3) was associated with eradication of seizures in 7 of 15 cats (serum bromide concentration range, 1.0 to 1.6 mg/mL); however, bromide administration was associated with adverse effects in 8 of 16 cats. Coughing developed in 6 of these cats, leading to euthanasia in 1 cat and discontinuation of bromide administration in 2 cats. CONCLUSIONS AND CLINICAL RELEVANCE Therapeutic concentrations of bromide are attained within 2 weeks in cats that receive 30 mg/kg/d (13.6 mg/lb/d) orally. Although somewhat effective in seizure control, the incidence of adverse effects may not warrant routine use of bromide for control of seizures in cats.", "which Most common adverse effects ?", "Cough", NaN, NaN], ["Seven cats were presented for mild-to-moderate cough and/or dyspnoea after starting bromide (Br) therapy for neurological diseases. The thoracic auscultation was abnormal in three cats showing increased respiratory sounds and wheezes. Haematology revealed mild eosinophilia in one cat. The thoracic radiographs showed bronchial patterns with peribronchial cuffing in most of them. Bronchoalveolar lavage performed in two cats revealed neutrophilic and eosinophilic inflammation. 
Histopathology conducted in one cat showed endogenous lipid pneumonia (EnLP). All cats improved with steroid therapy after Br discontinuation. Five cats were completely weaned off steroids, with no recurrence of clinical signs. In one cat, the treatment was discontinued despite persistent clinical signs. The cat presenting with EnLP developed secondary pneumothorax and did not recover. Br-associated lower airway disease can appear in cats after months of treatment and clinical improvement occurs only after discontinuing Br therapy.", "which Most common adverse effects ?", "Cough", 47.0, 52.0], ["Acute fulminant hepatic necrosis was associated with repeated oral administration of diazepam (1.25 to 2 mg, PO, q 24 or 12 h), prescribed for behavioral modification or to facilitate urination. Five of 11 cats became lethargic, atactic, and anorectic within 96 hours of initial treatment. All cats became jaundiced during the first 11 days of illness. Serum biochemical analysis revealed profoundly high alanine transaminase and aspartate transaminase activities. Results of coagulation tests in 3 cats revealed marked abnormalities. Ten cats died or were euthanatized within 15 days of initial drug administration, and only 1 cat survived. Histologic evaluation of hepatic tissue specimens from each cat revealed florid centrilobular hepatic necrosis, profound biliary ductule proliferation and hyperplasia, and suppurative intraductal inflammation. Idiosyncratic hepatotoxicosis was suspected because of the rarity of this condition. Prior sensitization to diazepam was possible in only 1 cat, and consistent risk factors that could explain susceptibility to drug toxicosis were not identified. On the basis of the presumption that diazepam was hepatotoxic in these cats, an increase in serum transaminase activity within 5 days of treatment initiation indicates a need to suspend drug administration and to provide supportive care.", "which AED evaluated ?", "Diazepam", 85.0, 93.0], ["With the eventual goal of making zonisamide (ZNS), a relatively new antiepileptic drug, available for the treatment of epilepsy in cats, the pharmacokinetics after a single oral administration at 10 mg/kg and the toxicity after 9-week daily administration of 20 mg/kg/day of ZNS were studied in healthy cats. Pharmacokinetic parameters obtained with a single administration of ZNS at 10 mg/day were as follows: C max =13.1 \u03bcg/ml; T max =4.0 h; T 1/2 =33.0 h; areas under the curves (AUCs)=720.3 \u03bcg/mlh (values represent the medians). The study with daily administrations revealed that the toxicity of ZNS was comparatively low in cats, suggesting that it may be an available drug for cats. However, half of the cats that were administered 20 mg/kg/day daily showed adverse reactions such as anorexia, diarrhoea, vomiting, somnolence and locomotor ataxia.", "which AED evaluated ?", "Zonisamide", 33.0, 43.0], ["We describe a trainable and scalable summarization system which utilizes features derived from information retrieval, information extraction, and NLP techniques and on-line resources. The system combines these features using a trainable feature combiner learned from summary examples through a machine learning algorithm. We demonstrate system scalability by reporting results on the best combination of summarization features for different document sources. 
We also present preliminary results from a task-based evaluation on summarization output usability.", "which evaluation ?", "Evaluation", 513.0, 523.0], ["Researchers in computational linguistics have long speculated that the nuclei of the rhetorical structure tree of a text form an adequate \\summary\" of the text for which that tree was built. However, to my knowledge, there has been no experiment to connrm how valid this speculation really is. In this paper, I describe a psycholinguistic experiment that shows that the concepts of discourse structure and nuclearity can be used eeectively in text summarization. More precisely, I show that there is a strong correlation between the nuclei of the discourse structure of a text and what readers perceive to be the most important units in that text. In addition, I propose and evaluate the quality of an automatic, discourse-based summa-rization system that implements the methods that were validated by the psycholinguistic experiment. The evaluation indicates that although the system does not match yet the results that would be obtained if discourse trees had been built manually, it still signiicantly outperforms both a baseline algorithm and Microsoft's OOce97 summarizer. 1 Motivation Traditionally, previous approaches to automatic text summarization have assumed that the salient parts of a text can be determined by applying one or more of the following assumptions: important sentences in a text contain words that are used frequently (Luhn 1958; Edmundson 1968); important sentences contain words that are used in the title and section headings (Edmundson 1968); important sentences are located at the beginning or end of paragraphs (Baxendale 1958); important sentences are located at positions in a text that are genre dependent, and these positions can be determined automatically, through training important sentences use bonus words such as \\greatest\" and \\signiicant\" or indicator phrases such as \\the main aim of this paper\" and \\the purpose of this article\", while unimportant sentences use stigma words such as \\hardly\" and \\im-possible\" important sentences and concepts are the highest connected entities in elaborate semantic struc-important and unimportant sentences are derivable from a discourse representation of the text (Sparck Jones 1993b; Ono, Sumita, & Miike 1994). In determining the words that occur most frequently in a text or the sentences that use words that occur in the headings of sections, computers are accurate tools. Therefore, in testing the validity of using these indicators for determining the most important units in a text, it is adequate to compare the direct output of a summarization program that implements the assump-tion(s) under scrutiny with a human-made \u2026", "which evaluation ?", "Evaluation", 839.0, 849.0], ["This paper describes the multi-document text summarization system NeATS. Using a simple algorithm, NeATS was among the top two performers of the DUC-01 evaluation.", "which evaluation ?", "Evaluation", 152.0, 162.0], ["We present ERSS 2005, our entry to this year\u2019s DUC competition. With only slight modifications from last year\u2019s version to accommodate the more complex context information present in DUC 2005, we achieved a similar performance to last year\u2019s entry, ranking roughly in the upper third when examining the ROUGE-1 and Basic Element score. 
We also participated in the additional manual evaluation based on the new Pyramid method and performed further evaluations based on the Basic Elements method and the automatic generation of Pyramids. Interestingly, the ranking of our system differs greatly between the different measures; we attempt to analyse this effect based on correlations between the different results using the Spearman coefficient.", "which evaluation ?", "Evaluation", 382.0, 392.0], ["The multi-document summarizer using genetic algorithm-based sentence extraction (MSBGA) regards summarization process as an optimization problem where the optimal summary is chosen among a set of summaries formed by the conjunction of the original articles sentences. To solve the NP hard optimization problem, MSBGA adopts genetic algorithm, which can choose the optimal summary on global aspect. The evaluation function employs four features according to the criteria of a good summary: satisfied length, high coverage, high informativeness and low redundancy. To improve the accuracy of term frequency, MSBGA employs a novel method TFS, which takes word sense into account while calculating term frequency. The experiments on DUC04 data show that our strategy is effective and the ROUGE-1 score is only 0.55% lower than the best participant in DUC04", "which evaluation ?", "Evaluation", 402.0, 412.0], ["This paper presents a novel multi-document summarization approach based on personalized pagerank (PPRSum). In this algorithm, we uniformly integrate various kinds of information in the corpus. At first, we train a salience model of sentence global features based on Naive Bayes Model. Secondly, we generate a relevance model for each corpus utilizing the query of it. Then, we compute the personalized prior probability for each sentence in the corpus utilizing the salience model and the relevance model both. With the help of personalized prior probability, a Personalized PageRank ranking process is performed depending on the relationships among all sentences in the corpus. Additionally, the redundancy penalty is imposed on each sentence. The summary is produced by choosing the sentences with both high query-focused information richness and high information novelty. Experiments on DUC2007 are performed and the ROUGE evaluation results show that PPRSum ranks between the 1st and the 2nd systems on DUC2007 main task.", "which evaluation ?", "Evaluation", 926.0, 936.0], ["We propose a new dataset for the evaluation of food recognition algorithms that can be used in dietary monitoring applications. Each image depicts a real canteen tray with dishes and foods arranged in different ways. Each tray contains multiple instances of food classes. The dataset contains 1027 canteen trays for a total of 3616 food instances belonging to 73 food classes. The food on the tray images has been manually segmented using carefully drawn polygonal boundaries. We have benchmarked the dataset by designing an automatic tray analysis pipeline that takes a tray image as input, finds the regions of interest, and predicts for each region the corresponding food class. We have experimented with three different classification strategies using also several visual descriptors. We achieve about 79% of food and tray recognition accuracy using convolutional-neural-networks-based features. 
The dataset, as well as the benchmark framework, are available to the research community.", "which Acquisition ?", "Canteen", 154.0, 161.0], ["We introduce the first visual dataset of fast foods with a total of 4,545 still images, 606 stereo pairs, 303 360\u00b0 videos for structure from motion, and 27 privacy-preserving videos of eating events of volunteers. This work was motivated by research on fast food recognition for dietary assessment. The data was collected by obtaining three instances of 101 foods from 11 popular fast food chains, and capturing images and videos in both restaurant conditions and a controlled lab setting. We benchmark the dataset using two standard approaches, color histogram and bag of SIFT features in conjunction with a discriminative classifier. Our dataset and the benchmarks are designed to stimulate research in this area and will be released freely to the research community.", "which Acquisition ?", "Lab", 477.0, 480.0], ["The Italian Public Administration Services (IPAS) is a registry of services provided to Italian citizens likewise the Local Government Service List (UK), or the European Service List for local authorities from other nations. Unlike existing registries, IPAS presents the novelty of modelling public services from the view point of the value they have for the consumers and the providers. A value-added-service (VAS) is linked to a life event that requires its fruition, addresses consumer categories to identify market opportunities for private providers, and is described by non-functional-properties such as price and time of fruition. Where Italian local authorities leave the citizen-users in a daedalus of references to understand whether they can/have to apply for a service, the IPAS model captures the necessary background knowledge about the connection between administrative legislation and service specifications, life events, and application contexts to support the citizen-users to fulfill their needs. As a proof of concept, we developed an operational Web environment named ASSO, designed to assist the citizen-user to intuitively create bundles of mandatory-by-legislation and recommended services, to accomplish his bureaucratic fulfillments. Although ASSO is an ongoing project, domain experts gave preliminary positive feedback on the innovativeness and effectiveness of the proposed approach.", "which has Target users ?", "Citizens", 96.0, 104.0], ["In e-participation platforms, citizens suggest, discuss and vote online for initiatives aimed to address a wide range of issues and problems in a city, such as economic development, public safety, budgets, infrastructure, housing, environment, social rights, and health care. For a particular citizen, the number of proposals and debates may be overwhelming, and recommender systems could help filtering and ranking those that are more relevant. Focusing on a particular case, the `Decide Madrid' platform, in this paper we empirically investigate which sources of user preferences and recommendation approaches could be more effective, in terms of several aspects, namely precision, coverage and diversity.", "which has Target users ?", "Citizens", 30.0, 38.0], ["This paper aims at studying the exploitation of intelligent agents for supporting citizens to access e-government services. 
To this purpose, it proposes a multi-agent system capable of suggesting to the users the most interesting services for them; specifically, these suggestions are computed by taking into account both their exigencies/preferences and the capabilities of the devices they are currently exploiting. The paper first describes the proposed system and, then, reports various experimental results. Finally, it presents a comparison between our system and other related ones already presented in the literature.", "which has Target users ?", "Citizens", 82.0, 90.0], ["Deep neural networks (DNNs) have become the gold standard for solving challenging classification problems, especially given complex sensor inputs (e.g., images and video). While DNNs are powerful, they are also brittle, and their inner workings are not fully understood by humans, leading to their use as \u201cblack-box\u201d models. DNNs often generalize poorly when provided new data sampled from slightly shifted distributions; DNNs are easily manipulated by adversarial examples; and the decision-making process of DNNs can be difficult for humans to interpret. To address these challenges, we propose integrating DNNs with external sources of semantic knowledge. Large quantities of meaningful, formalized knowledge are available in knowledge graphs and other databases, many of which are publicly obtainable. But at present, these sources are inaccessible to deep neural methods, which can only exploit patterns in the signals they are given to classify. In this work, we conduct experiments on the ADE20K dataset, using scene classification as an example task where combining DNNs with external knowledge graphs can result in more robust and explainable models. We align the atomic concepts present in ADE20K (i.e., objects) to WordNet, a hierarchically-organized lexical database. Using this knowledge graph, we expand the concept categories which can be identified in ADE20K and relate these concepts in a hierarchical manner. The neural architecture we present performs scene classification using these concepts, illuminating a path toward DNNs which can efficiently exploit high-level knowledge in place of excessive quantities of direct sensory input. We hypothesize and experimentally validate that incorporating background knowledge via an external knowledge graph into a deep learning-based model should improve the explainability and robustness of the model.", "which Machine Learning Task ?", "Classification", 82.0, 96.0], ["One characteristic that sets humans apart from modern learning-based computer vision algorithms is the ability to acquire knowledge about the world and use that knowledge to reason about the visual world. Humans can learn about the characteristics of objects and the relationships that occur between them to learn a large variety of visual concepts, often with few examples. This paper investigates the use of structured prior knowledge in the form of knowledge graphs and shows that using this knowledge improves performance on image classification. We build on recent work on end-to-end learning on graphs, introducing the Graph Search Neural Network as a way of efficiently incorporating large knowledge graphs into a vision classification pipeline. 
We show in a number of experiments that our method outperforms standard neural network baselines for multi-label classification.", "which Machine Learning Task ?", "Classification", 535.0, 549.0], ["The linkage of ImageNet WordNet synsets to Wikidata items will leverage deep learning algorithm with access to a rich multilingual knowledge graph. Here I will describe our on-going efforts in linking the two resources and issues faced in matching the Wikidata and WordNet knowledge graphs. I show an example on how the linkage can be used in a deep learning setting with real-time image classification and labeling in a non-English language and discuss what opportunities lies ahead.", "which Machine Learning Task ?", "Classification", 388.0, 402.0], ["The ability to promptly recognise new research trends is strategic for many stakeholders, including universities, institutional funding bodies, academic publishers and companies. While the literature describes several approaches which aim to identify the emergence of new research topics early in their lifecycle, these rely on the assumption that the topic in question is already associated with a number of publications and consistently referred to by a community of researchers. Hence, detecting the emergence of a new research area at an embryonic stage, i.e., before the topic has been consistently labelled by a community of researchers and associated with a number of publications, is still an open challenge. In this paper, we begin to address this challenge by performing a study of the dynamics preceding the creation of new topics. This study indicates that the emergence of a new topic is anticipated by a significant increase in the pace of collaboration between relevant research areas, which can be seen as the \u2018parents\u2019 of the new topic. These initial findings (i) confirm our hypothesis that it is possible in principle to detect the emergence of a new topic at the embryonic stage, (ii) provide new empirical evidence supporting relevant theories in Philosophy of Science, and also (iii) suggest that new topics tend to emerge in an environment in which weakly interconnected research areas begin to cross-fertilise.", "which users ?", "companies", 176.0, 185.0], ["The ability to promptly recognise new research trends is strategic for many stakeholders, including universities, institutional funding bodies, academic publishers and companies. While the literature describes several approaches which aim to identify the emergence of new research topics early in their lifecycle, these rely on the assumption that the topic in question is already associated with a number of publications and consistently referred to by a community of researchers. Hence, detecting the emergence of a new research area at an embryonic stage, i.e., before the topic has been consistently labelled by a community of researchers and associated with a number of publications, is still an open challenge. In this paper, we begin to address this challenge by performing a study of the dynamics preceding the creation of new topics. This study indicates that the emergence of a new topic is anticipated by a significant increase in the pace of collaboration between relevant research areas, which can be seen as the \u2018parents\u2019 of the new topic. 
These initial findings (i) confirm our hypothesis that it is possible in principle to detect the emergence of a new topic at the embryonic stage, (ii) provide new empirical evidence supporting relevant theories in Philosophy of Science, and also (iii) suggest that new topics tend to emerge in an environment in which weakly interconnected research areas begin to cross-fertilise.", "which users ?", "universities", 108.0, 120.0], ["The ability to promptly recognise new research trends is strategic for many stakeholders, including universities, institutional funding bodies, academic publishers and companies. While the literature describes several approaches which aim to identify the emergence of new research topics early in their lifecycle, these rely on the assumption that the topic in question is already associated with a number of publications and consistently referred to by a community of researchers. Hence, detecting the emergence of a new research area at an embryonic stage, i.e., before the topic has been consistently labelled by a community of researchers and associated with a number of publications, is still an open challenge. In this paper, we begin to address this challenge by performing a study of the dynamics preceding the creation of new topics. This study indicates that the emergence of a new topic is anticipated by a significant increase in the pace of collaboration between relevant research areas, which can be seen as the \u2018parents\u2019 of the new topic. These initial findings (i) confirm our hypothesis that it is possible in principle to detect the emergence of a new topic at the embryonic stage, (ii) provide new empirical evidence supporting relevant theories in Philosophy of Science, and also (iii) suggest that new topics tend to emerge in an environment in which weakly interconnected research areas begin to cross-fertilise.", "which users ?", "academic publishers", 152.0, 171.0], ["The ability to promptly recognise new research trends is strategic for many stakeholders, including universities, institutional funding bodies, academic publishers and companies. While the literature describes several approaches which aim to identify the emergence of new research topics early in their lifecycle, these rely on the assumption that the topic in question is already associated with a number of publications and consistently referred to by a community of researchers. Hence, detecting the emergence of a new research area at an embryonic stage, i.e., before the topic has been consistently labelled by a community of researchers and associated with a number of publications, is still an open challenge. In this paper, we begin to address this challenge by performing a study of the dynamics preceding the creation of new topics. This study indicates that the emergence of a new topic is anticipated by a significant increase in the pace of collaboration between relevant research areas, which can be seen as the \u2018parents\u2019 of the new topic. These initial findings (i) confirm our hypothesis that it is possible in principle to detect the emergence of a new topic at the embryonic stage, (ii) provide new empirical evidence supporting relevant theories in Philosophy of Science, and also (iii) suggest that new topics tend to emerge in an environment in which weakly interconnected research areas begin to cross-fertilise.", "which users ?", "institutional funding bodies", 122.0, 150.0], ["By now, XML has reached a wide acceptance as data exchange format in E-Business. 
An efficient collaboration between different participants in E-Business thus, is only possible, when business partners agree on a common syntax and have a common understanding of the basic concepts in the domain. XML covers the syntactic level, but lacks support for efficient sharing of conceptualizations. The Web Ontology Language (OWL [Bec04]) in turn supports the representation of domain knowledge using classes, properties and instances for the use in a distributed environment as the WorldWideWeb. We present in this paper a mapping between the data model elements of XML and OWL. We give account about its implementation within a ready-to-use XSLT framework, as well as its evaluation for common use cases.", "which Prototype extraction tool ?", "Framework", 738.0, 747.0], ["Abstract Ontologies have become a key element since many decades in information systems such as in epidemiological surveillance domain. Building domain ontologies requires the access to domain knowledge owned by domain experts or contained in knowledge sources. However, domain experts are not always available for interviews. Therefore, there is a lot of value in using ontology learning which consists in automatic or semi-automatic extraction of ontological knowledge from structured or unstructured knowledge sources such as texts, databases, etc. Many techniques have been used but they all are limited in concepts, properties and terminology extraction leaving behind axioms and rules. Source code which naturally embed domain knowledge is rarely used. In this paper, we propose an approach based on Hidden Markov Models (HMMs) for concepts, properties, axioms and rules learning from Java source code. This approach is experimented with the source code of EPICAM, an epidemiological platform developed in Java and used in Cameroon for tuberculosis surveillance. Domain experts involved in the evaluation estimated that knowledge extracted was relevant to the domain. In addition, we performed an automatic evaluation of the relevance of the terms extracted to the medical domain by aligning them with ontologies hosted on Bioportal platform through the Ontology Recommender tool. The results were interesting since the terms extracted were covered at 82.9% by many biomedical ontologies such as NCIT, SNOWMEDCT and ONTOPARON.", "which programming language ?", "Java", 891.0, 895.0], ["The total number of scholarly publications grows day by day, making it necessary to explore and use simple yet effective ways to expose their metadata. Schema.org supports adding structured metadata to web pages via markup, making it easier for data providers but also for search engines to provide the right search results. Bioschemas is based on the standards of schema.org, providing new types, properties and guidelines for metadata, i.e., providing metadata profiles tailored to the Life Sciences domain. Here we present our proposed contribution to Bioschemas (from the project \u201cBiotea\u201d), which supports metadata contributions for scholarly publications via profiles and web components. Biotea comprises a semantic model to represent publications together with annotated elements recognized from the scientific text; our Biotea model has been mapped to schema.org following Bioschemas standards.", "which programming language ?", "Web components", 677.0, 691.0], ["Software-intensive systems often consist of cooperating reactive components. 
In mobile and reconfigurable systems, their topology changes at run-time, which influences how the components must cooperate. The Scenario Modeling Language (SML) offers a formal approach for specifying the reactive behavior of such systems that aligns with how humans conceive and communicate behavioral requirements. Simulation and formal checks can find specification flaws early. We present a framework for the Scenario-based Programming (SBP) that reflects the concepts of SML in Java and makes the scenario modeling approach available for programming. SBP code can also be generated from SML and extended with platform-specific code, thus streamlining the transition from design to implementation. As an example serves a car-to-x communication system. Demo video and artifact: http://scenariotools.org/esecfse-2017-tool-demo/", "which programming language ?", "Scenario Modeling Language", 207.0, 233.0], ["This paper presents the SCENARIOTOOLS solution for developing a cleaning robot system, an instance of the rover problem of the MDE Tools Challenge 2017. We present an MDE process that consists of (1) the modeling of the system behavior as a scenario-based assume-guarantee specification with SML (Scenario Modeling Language), (2) the formal realizability checking and verification of the specification, (3) the generation of SBP (Scenario-Based Programming) Java code from the SML specification, and, finally, (4) adding platform-specific code to connect specification-level events with platform-level sensor- and actuator-events. The resulting code can be executed on a RaspberryPi-based robot. The approach is suited for developing reactive systems with multiple cooperating components. Its strength is that the scenario-based modeling corresponds closely to how humans conceive and communicate behavioral requirements. SML in particular supports the modeling of environment assumptions and dynamic component structures. The formal checks ensure that the system satisfies its specification.", "which programming language ?", "Scenario Modeling Language", 297.0, 323.0], ["Purpose To evaluate the benefits of an artificial intelligence (AI)-based tool for two-dimensional mammography in the breast cancer detection process. Materials and Methods In this multireader, multicase retrospective study, 14 radiologists assessed a dataset of 240 digital mammography images, acquired between 2013 and 2016, using a counterbalance design in which half of the dataset was read without AI and the other half with the help of AI during a first session and vice versa during a second session, which was separated from the first by a washout period. Area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and reading time were assessed as endpoints. Results The average AUC across readers was 0.769 (95% CI: 0.724, 0.814) without AI and 0.797 (95% CI: 0.754, 0.840) with AI. The average difference in AUC was 0.028 (95% CI: 0.002, 0.055, P = .035). Average sensitivity was increased by 0.033 when using AI support (P = .021). Reading time changed dependently to the AI-tool score. For low likelihood of malignancy (< 2.5%), the time was about the same in the first reading session and slightly decreased in the second reading session. For higher likelihood of malignancy, the reading time was on average increased with the use of AI. 
Conclusion This clinical investigation demonstrated that the concurrent use of this AI tool improved the diagnostic performance of radiologists in the detection of breast cancer without prolonging their workflow. Supplemental material is available for this article. \u00a9 RSNA, 2020.", "which Imaging modality ?", "Mammography ", 99.0, 111.0], ["We aimed to evaluate an artificial intelligence (AI) system that can detect and diagnose lesions of maximum intensity projection (MIP) in dynamic contrast-enhanced (DCE) breast magnetic resonance imaging (MRI). We retrospectively gathered MIPs of DCE breast MRI for training and validation data from 30 and 7 normal individuals, 49 and 20 benign cases, and 135 and 45 malignant cases, respectively. Breast lesions were indicated with a bounding box and labeled as benign or malignant by a radiologist, while the AI system was trained to detect and calculate possibilities of malignancy using RetinaNet. The AI system was analyzed using test sets of 13 normal, 20 benign, and 52 malignant cases. Four human readers also scored these test data with and without the assistance of the AI system for the possibility of a malignancy in each breast. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were 0.926, 0.828, and 0.925 for the AI system; 0.847, 0.841, and 0.884 for human readers without AI; and 0.889, 0.823, and 0.899 for human readers with AI using a cutoff value of 2%, respectively. The AI system showed better diagnostic performance compared to the human readers (p = 0.002), and because of the increased performance of human readers with the assistance of the AI system, the AUC of human readers was significantly higher with than without the AI system (p = 0.039). Our AI system showed a high performance ability in detecting and diagnosing lesions in MIPs of DCE breast MRI and increased the diagnostic performance of human readers.", "which Imaging modality ?", "Magnetic resonance imaging", 177.0, 203.0], ["Today most of the data exchanged between information systems is done with the help of the XML syntax. Unfortunately when these data have to be integrated, the integration becomes difficult because of the semantics' heterogeneity. Consequently, leading researches in the domain of database systems are moving to semantic model in order to store data and its semantics definition. To benefit from these new systems and technologies, and to integrate different data sources, a flexible method consists in populating an existing OWL ontology from XML data. In this paper we present such a method based on the definition of a graph which represents rules that drive the populating process. The graph of rules facilitates the mapping definition that consists in mapping elements from an XSD schema to the elements of the OWL schema.", "which Extraction methods ?", "Mapping", 715.0, 722.0], ["By now, XML has reached a wide acceptance as data exchange format in E-Business. An efficient collaboration between different participants in E-Business thus, is only possible, when business partners agree on a common syntax and have a common understanding of the basic concepts in the domain. XML covers the syntactic level, but lacks support for efficient sharing of conceptualizations. The Web Ontology Language (OWL [Bec04]) in turn supports the representation of domain knowledge using classes, properties and instances for the use in a distributed environment as the WorldWideWeb. 
We present in this paper a mapping between the data model elements of XML and OWL. We give account about its implementation within a ready-to-use XSLT framework, as well as its evaluation for common use cases.", "which Extraction methods ?", "Mapping", 614.0, 621.0], ["Research on definition extraction has been conducted for well over a decade, largely with significant constraints on the type of definitions considered. In this work, we present DeftEval, a SemEval shared task in which participants must extract definitions from free text using a term-definition pair corpus that reflects the complex reality of definitions in natural language. Definitions and glosses in free text often appear without explicit indicators, across sentences boundaries, or in an otherwise complex linguistic manner. DeftEval involved 3 distinct subtasks: 1) Sentence classification, 2) sequence labeling, and 3) relation extraction.", "which Data annotation granularities ?", "Sentences", 464.0, 473.0], ["Lexical Semantic Change detection, i.e., the task of identifying words that change meaning over time, is a very active research area, with applications in NLP, lexicography, and linguistics. Evaluation is currently the most pressing problem in Lexical Semantic Change detection, as no gold standards are available to the community, which hinders progress. We present the results of the first shared task that addresses this gap by providing researchers with an evaluation framework and manually annotated, high-quality datasets for English, German, Latin, and Swedish. 33 teams submitted 186 systems, which were evaluated on two subtasks.", "which Data annotation granularities ?", "words", 65.0, 70.0], ["There is currently a gap between the natural language expression of scholarly publications and their structured semantic content modeling to enable intelligent content search. With the volume of research growing exponentially every year, a search feature operating over semantically structured content is compelling. The SemEval-2021 Shared Task NLPContributionGraph (a.k.a. \u2018the NCG task\u2019) tasks participants to develop automated systems that structure contributions from NLP scholarly articles in the English language. Being the first-of-its-kind in the SemEval series, the task released structured data from NLP scholarly articles at three levels of information granularity, i.e. at sentence-level, phrase-level, and phrases organized as triples toward Knowledge Graph (KG) building. The sentence-level annotations comprised the few sentences about the article\u2019s contribution. The phrase-level annotations were scientific term and predicate phrases from the contribution sentences. Finally, the triples constituted the research overview KG. For the Shared Task, participating systems were then expected to automatically classify contribution sentences, extract scientific terms and relations from the sentences, and organize them as KG triples. Overall, the task drew a strong participation demographic of seven teams and 27 participants. The best end-to-end task system classified contribution sentences at 57.27% F1, phrases at 46.41% F1, and triples at 22.28% F1. 
While the absolute performance to generate triples remains low, as conclusion to the article, the difficulty of producing such data and as a consequence of modeling it is highlighted.", "which Information Units ?", "Tasks", 391.0, 396.0], ["The WISARD recognition system invented at Brunel University has been developed into an industrialised product by Computer Recognition Systems under licence from the British Technology Group. Using statistical pattern classification it already shows great potential in rapid sorting, and research indicates that it will track objects with positional feedback, rather like the human eye.", "which utilizes ?", "WiSARD", 4.0, 10.0], ["Information systems that build on sensor networks often process data produced by measuring physical properties. These data can serve in the acquisition of knowledge for real-world situations that are of interest to information services and, ultimately, to people. Such systems face a common challenge, namely the considerable gap between the data produced by measurement and the abstract terminology used to describe real-world situations. We present and discuss the architecture of a software system that utilizes sensor data, digital signal processing, machine learning, and knowledge representation and reasoning to acquire, represent, and infer knowledge about real-world situations observable by a sensor network. We demonstrate the application of the system to vehicle detection and classification by measurement of road pavement vibration. Thus, real-world situations involve vehicles and information for their type, speed, and driving direction.", "which utilizes ?", "A sensor network", 701.0, 717.0], ["This practice paper describes an ongoing research project to test the effectiveness and relevance of the FAIR Data Principles. Simultaneously, it will analyse how easy it is for data archives to adhere to the principles. The research took place from November 2016 to January 2017, and will be underpinned with feedback from the repositories. The FAIR Data Principles feature 15 facets corresponding to the four letters of FAIR - Findable, Accessible, Interoperable, Reusable. These principles have already gained traction within the research world. The European Commission has recently expanded its demand for research to produce open data. The relevant guidelines are explicitly written in the context of the FAIR Data Principles. Given an increasing number of researchers will have exposure to the guidelines, understanding their viability and suggesting where there may be room for modification and adjustment is of vital importance. This practice paper is connected to a dataset (Dunning et al., 2017) containing the original overview of the sample group statistics and graphs, in an Excel spreadsheet. Over the course of two months, the web-interfaces, help-pages and metadata-records of over 40 data repositories have been examined, to score the individual data repository against the FAIR principles and facets. The traffic-light rating system enables colour-coding according to compliance and vagueness. The statistical analysis provides overall, categorised, on the principles focussing, and on the facet focussing results. 
The analysis includes the statistical and descriptive evaluation, followed by elaborations on Elements of the FAIR Data Principles, the subject specific or repository specific differences, and subsequently what repositories can do to improve their information architecture.", "which Publication type ?", "paper", 14.0, 19.0], ["The total number of scholarly publications grows day by day, making it necessary to explore and use simple yet effective ways to expose their metadata. Schema.org supports adding structured metadata to web pages via markup, making it easier for data providers but also for search engines to provide the right search results. Bioschemas is based on the standards of schema.org, providing new types, properties and guidelines for metadata, i.e., providing metadata profiles tailored to the Life Sciences domain. Here we present our proposed contribution to Bioschemas (from the project \u201cBiotea\u201d), which supports metadata contributions for scholarly publications via profiles and web components. Biotea comprises a semantic model to represent publications together with annotated elements recognized from the scientific text; our Biotea model has been mapped to schema.org following Bioschemas standards.", "which Related Resource ?", "Bioschemas", 325.0, 335.0], ["The total number of scholarly publications grows day by day, making it necessary to explore and use simple yet effective ways to expose their metadata. Schema.org supports adding structured metadata to web pages via markup, making it easier for data providers but also for search engines to provide the right search results. Bioschemas is based on the standards of schema.org, providing new types, properties and guidelines for metadata, i.e., providing metadata profiles tailored to the Life Sciences domain. Here we present our proposed contribution to Bioschemas (from the project \u201cBiotea\u201d), which supports metadata contributions for scholarly publications via profiles and web components. Biotea comprises a semantic model to represent publications together with annotated elements recognized from the scientific text; our Biotea model has been mapped to schema.org following Bioschemas standards.", "which Related Resource ?", "Biotea", 585.0, 591.0], ["The total number of scholarly publications grows day by day, making it necessary to explore and use simple yet effective ways to expose their metadata. Schema.org supports adding structured metadata to web pages via markup, making it easier for data providers but also for search engines to provide the right search results. Bioschemas is based on the standards of schema.org, providing new types, properties and guidelines for metadata, i.e., providing metadata profiles tailored to the Life Sciences domain. Here we present our proposed contribution to Bioschemas (from the project \u201cBiotea\u201d), which supports metadata contributions for scholarly publications via profiles and web components. Biotea comprises a semantic model to represent publications together with annotated elements recognized from the scientific text; our Biotea model has been mapped to schema.org following Bioschemas standards.", "which Related Resource ?", "schema.org", 152.0, 162.0], ["The slime mould Physarum polycephalum, an aneural organism, uses information from previous experiences to adjust its behaviour, but the mechanisms by which this is accomplished remain unknown. This article examines the possible role of oscillations in learning and memory in slime moulds. 
Slime moulds share surprising similarities with the network of synaptic connections in animal brains. First, their topology derives from a network of interconnected, vein-like tubes in which signalling molecules are transported. Second, network motility, which generates slime mould behaviour, is driven by distinct oscillations that organize into spatio-temporal wave patterns. Likewise, neural activity in the brain is organized in a variety of oscillations characterized by different frequencies. Interestingly, the oscillating networks of slime moulds are not precursors of nervous systems but, rather, an alternative architecture. Here, we argue that comparable information-processing operations can be realized on different architectures sharing similar oscillatory properties. After describing learning abilities and oscillatory activities of P. polycephalum, we explore the relation between network oscillations and learning, and evaluate the organism's global architecture with respect to information-processing potential. We hypothesize that, as in the brain, modulation of spontaneous oscillations may sustain learning in slime mould. This article is part of the theme issue \u2018Basal cognition: conceptual tools and the view from the single cell\u2019.", "which Has information processing paradigm ?", "Oscillations", 236.0, 248.0], ["Sn4+ ion doped TiO2 (TiO2\u2013Sn4+) nanoparticulate films with a doping ratio of about 7\u2236100 [(Sn)\u2236(Ti)] were prepared by the plasma-enhanced chemical vapor deposition (PCVD) method. The doping mode (lattice Ti substituted by Sn4+ ions) and the doping energy level of Sn4+ were determined by X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), surface photovoltage spectroscopy (SPS) and electric field induced surface photovoltage spectroscopy (EFISPS). It is found that the introduction of a doping energy level of Sn4+ ions is profitable to the separation of photogenerated carriers under both UV and visible light excitation. Characterization of the films with XRD and SPS indicates that after doping by Sn, more surface defects are present on the surface. Consequently, the photocatalytic activity for photodegradation of phenol in the presence of the TiO2\u2013Sn4+ film is higher than that of the pure TiO2 film under both UV and visible light irradiation.", "which doping elements ?", "Sn4+", 0.0, 3.0], ["Transparency, flexibility, and especially ultralow oxygen (OTR) and water vapor (WVTR) transmission rates are the key issues to be addressed for packaging of flexible organic photovoltaics and organic light-emitting diodes. Concomitant optimization of all essential features is still a big challenge. Here we present a thin (1.5 \u03bcm), highly transparent, and at the same time flexible nanocomposite coating with an exceptionally low OTR and WVTR (1.0 \u00d7 10(-2) cm(3) m(-2) day(-1) bar(-1) and <0.05 g m(-2) day(-1) at 50% RH, respectively). A commercially available polyurethane (Desmodur N 3600 and Desmophen 670 BA, Bayer MaterialScience AG) was filled with a delaminated synthetic layered silicate exhibiting huge aspect ratios of about 25,000. Functional films were prepared by simple doctor-blading a suspension of the matrix and the organophilized clay. 
This preparation procedure is technically benign, is easy to scale up, and may readily be applied for encapsulation of sensitive flexible electronics.", "which Polymer material ?", "Polyurethane", 564.0, 576.0], ["Flexible transparent barrier films are required in various fields of application ranging from flexible, transparent food packaging to display encapsulation. Environmentally friendly, waterborne polymer\u2013clay nanocomposites would be preferred but fail to meet in particular requirements for ultra high water vapor barriers. Here we show that self-assembly of nanocomposite films into one-dimensional crystalline (smectic) polymer\u2013clay domains is a so-far overlooked key-factor capable of suppressing water vapor diffusivity despite appreciable swelling at elevated temperatures and relative humidity (R.H.). Moreover, barrier performance was shown to improve with quality of the crystalline order. In this respect, spray coating is superior to doctor blading because it yields significantly better ordered structures. For spray-coated waterborne nanocomposite films (21.4 \u03bcm) ultra high barrier specifications are met at 23 \u00b0C and 50% R.H. with oxygen transmission rates (OTR) < 0.0005 cm3 m\u20132 day\u20131 bar\u20131 and water vapor ...", "which Solvent ?", "water", 300.0, 305.0], ["We present a new method for the preparation of cobinamide (CN)2Cbi, a vitamin B12 precursor, that should allow its broader utility. Treatment of vitamin B12 with only NaCN and heating in a microwave reactor affords (CN)2Cbi as the sole product. The purification procedure was greatly simplified, allowing for easy isolation of the product in 94% yield. The use of microwave heating proved beneficial also for (CN)2Cbi(c-lactone) synthesis. Treatment of (CN)2Cbi with triethanolamine led to (CN)2Cbi(c-lactam).", "which Solvent ?", "Ethanol", NaN, NaN], ["This short communication describes the screening of various metal salts for the preparation of cyano-aqua cobinamides from vitamin B12 in methanol. ZnCl2 and Cu(NO3)2\u00b73H2O have been identified as most active for this purpose and represent useful alternatives to the widely applied Ce(III) method that requires excess cyanide.", "which Solvent ?", "Methanol", 138.0, 146.0], ["Systematic studies on the influence of crystalline vs disordered nanocomposite structures on barrier properties and water vapor sensitivity are scarce as it is difficult to switch between the two morphologies without changing other critical parameters. By combining water-soluble poly(vinyl alcohol) (PVOH) and ultrahigh aspect ratio synthetic sodium fluorohectorite (Hec) as filler, we were able to fabricate nanocomposites from a single nematic aqueous suspension by slot die coating that, depending on the drying temperature, forms different desired morphologies. Increasing the drying temperature from 20 to 50 \u00b0C for the same formulation triggers phase segregation and disordered nanocomposites are obtained, while at room temperature, one-dimensional (1D) crystalline, intercalated hybrid Bragg Stacks form. The onset of swelling of the crystalline morphology is pushed to significantly higher relative humidity (RH). This disorder-order transition renders PVOH/Hec a promising barrier material at RH of up to 65%, which is relevant for food packaging. The oxygen permeability (OP) of the 1D crystalline PVOH/Hec is an order of magnitude lower compared to the OP of the disordered nanocomposite at this elevated RH (OP = 0.007 cm3 \u03bcm m-2 day-1 bar-1 cf. 
OP = 0.047 cm3 \u03bcm m-2 day-1 bar-1 at 23 \u00b0C and 65% RH).", "which Solvent ?", "water", 116.0, 121.0], ["Summaries of meetings are very important as they convey the essential content of discussions in a concise form. Both participants and non-participants are interested in the summaries of meetings to plan for their future work. Generally, it is time consuming to read and understand the whole documents. Therefore, summaries play an important role as the readers are interested in only the important context of discussions. In this work, we address the task of meeting document summarization. Automatic summarization systems on meeting conversations developed so far have been primarily extractive, resulting in unacceptable summaries that are hard to read. The extracted utterances contain disfluencies that affect the quality of the extractive summaries. To make summaries much more readable, we propose an approach to generating abstractive summaries by fusing important content from several utterances. We first separate meeting transcripts into various topic segments, and then identify the important utterances in each segment using a supervised learning approach. The important utterances are then combined together to generate a one-sentence summary. In the text generation step, the dependency parses of the utterances in each segment are combined together to create a directed graph. The most informative and well-formed sub-graph obtained by integer linear programming (ILP) is selected to generate a one-sentence summary for each topic segment. The ILP formulation reduces disfluencies by leveraging grammatical relations that are more prominent in non-conversational style of text, and therefore generates summaries that is comparable to human-written abstractive summaries. Experimental results show that our method can generate more informative summaries than the baselines. In addition, readability assessments by human judges as well as log-likelihood estimates obtained from the dependency parser show that our generated summaries are significantly readable and well-formed.", "which Human Evaluation Aspects ?", "Readability", 1801.0, 1812.0], ["Knowledge base question answering (KBQA) is an important task in Natural Language Processing. Existing approaches face significant challenges including complex question understanding, necessity for reasoning, and lack of large end-to-end training datasets. In this work, we propose Neuro-Symbolic Question Answering (NSQA), a modular KBQA system, that leverages (1) Abstract Meaning Representation (AMR) parses for task-independent question understanding; (2) a simple yet effective graph transformation approach to convert AMR parses into candidate logical queries that are aligned to the KB; (3) a pipeline-based approach which integrates multiple, reusable modules that are trained specifically for their individual tasks (semantic parser, entity and relationship linkers, and neuro-symbolic reasoner) and do not require end-to-end training data. NSQA achieves state-of-the-art performance on two prominent KBQA datasets based on DBpedia (QALD-9 and LC-QuAD 1.0). Furthermore, our analysis emphasizes that AMR is a powerful tool for KBQA systems.", "which Techniques/Methods ?", "Abstract Meaning Representation", 366.0, 397.0], ["Several classification algorithms for pattern recognition had been tested in the mapping of tropical forest cover using airborne hyperspectral data. 
Results from the use of Maximum Likelihood (ML), Spectral Angle Mapper (SAM), Artificial Neural Network (ANN) and Decision Tree (DT) classifiers were compared and evaluated. It was found that ML performed the best followed by ANN, DT and SAM with accuracies of 86%, 84%, 51% and 49% respectively.", "which Techniques/Methods ?", "Decision Tree", 263.0, 276.0], ["Advanced techniques using high resolution hyperspectral remote sensing data has recently evolved as an emerging tool with potential to aid mineral exploration. In this study, pertinently, five mosaicked scenes of Airborne Visible InfraRed Imaging Spectrometer-Next Generation (AVIRIS-NG) hyperspectral data of southeastern parts of the Aravalli Fold belt in Jahazpur area, Rajasthan, were processed. The exposed Proterozoic rocks in this area is of immense economic and scientific interest because of richness of poly-metallic mineral resources and their unique metallogenesis. Analysis of high resolution multispectral satellite image reveals that there are many prominent lineaments which acted as potential conduits of hydrothermal fluid emanation, some of which resulted in altering the country rock. This study takes cues from studying those altered minerals to enrich our knowledge base on mineralized zones. In this imaging spectroscopic study we have identified different hydrothermally altered minerals consisting of hydroxyl, carbonate and iron-bearing species. Spectral signatures (image based) of minerals such as Kaosmec, Talc, Kaolinite, Dolomite, and Montmorillonite were derived in SWIR (Short wave infrared) region while Iron bearing minerals such as Goethite and Limonite were identified in the VNIR (Visible and Near Infrared) region of electromagnetic spectrum. Validation of the target minerals was done by subsequent ground truthing and X-ray diffractogram (XRD) analysis. The altered end members were further mapped by Spectral Angle Mapper (SAM) and Adaptive Coherence Estimator (ACE) techniques to detect target minerals. Accuracy assessment was reported to be 86.82% and 77.75% for SAM and ACE respectively. This study confirms that the AVIRIS-NG hyperspectral data provides better solution for identification of endmember minerals.", "which Techniques/Methods ?", "Adaptive Coherence Estimator (ACE)", NaN, NaN], ["In this study, we test the potential of two different classification algorithms, namely the spectral angle mapper (SAM) and object-based classifier for mapping the land use/cover characteristics using a Hyperion imagery. We chose a study region that represents a typical Mediterranean setting in terms of landscape structure, composition and heterogeneous land cover classes. Accuracy assessment of the land cover classes was performed based on the error matrix statistics. Validation points were derived from visual interpretation of multispectral high resolution QuickBird-2 satellite imagery. Results from both the classifiers yielded more than 70% classification accuracy. However, the object-based classification clearly outperformed the SAM by 7.91% overall accuracy (OA) and a relatively high kappa coefficient. Similar results were observed in the classification of the individual classes. 
Our results highlight the potential of hyperspectral remote sensing data as well as object-based classification approach for mapping heterogeneous land use/cover in a typical Mediterranean setting.", "which Techniques/Methods ?", "Accuracy Assessment", 376.0, 395.0], ["The larger synoptic view and contiguous channels arrangement of Hyperion hyperspectral remote sensing data enhance the minor spectral identification of earth\u2019s features such as minerals, atmospheric gasses, vegetation and so on. Hydrothermal alteration minerals mostly associated with vicinity of geological structural features such as lineaments and fractures. In this study Hyperion data is used for identification of hydrothermally altered minerals and alteration facies near Chhabadiya village of Jahajpur area, Bhilwara, Rajasthan. There are some minerals such as talc minerals identified through Hyperion imagery. The identified talc minerals correlated and evaluated through petrographic analysis, XRD analysis and spectroscopic analysis. The validation of identified minerals completed by field survey, field sample spectra and USGS spectral library talc mineral spectra. The conclusion is that Hyperion hyperspectral remote sensing data have capability to identify the minerals, mineral assemblage, alteration minerals and alteration facies.", "which Techniques/Methods ?", "Petrographic analysis", 682.0, 703.0], ["Wetlands mapping using multispectral imagery from Landsat multispectral scanner (MSS) and thematic mapper (TM) and Syst\u00e8me pour l'observation de la Terre (SPOT) does not in general provide high classification accuracies because of poor spectral and spatial resolutions. This study tests the feasibility of using high-resolution hyperspectral imagery to map wetlands in Iowa with two nontraditional classification techniques: the spectral angle mapper (SAM) method and a new nonparametric object-oriented (OO) classification. The software programs used were ENVI and eCognition. Accuracies of these classified images were assessed by using the information collected through a field survey with a global positioning system and high-resolution color infrared images. Wetlands were identified more accurately with the OO method (overall accuracy 92.3%) than with SAM (63.53%). This paper also discusses the limitations of these classification techniques for wetlands, as well as discussing future directions for study.", "which Techniques/Methods ?", " nonparametric object-oriented (OO) classification", NaN, NaN], ["Knowledge base question answering (KBQA) is an important task in Natural Language Processing. Existing approaches face significant challenges including complex question understanding, necessity for reasoning, and lack of large end-to-end training datasets. In this work, we propose Neuro-Symbolic Question Answering (NSQA), a modular KBQA system, that leverages (1) Abstract Meaning Representation (AMR) parses for task-independent question understanding; (2) a simple yet effective graph transformation approach to convert AMR parses into candidate logical queries that are aligned to the KB; (3) a pipeline-based approach which integrates multiple, reusable modules that are trained specifically for their individual tasks (semantic parser, entity and relationship linkers, and neuro-symbolic reasoner) and do not require end-to-end training data. NSQA achieves state-of-the-art performance on two prominent KBQA datasets based on DBpedia (QALD-9 and LC-QuAD 1.0). 
Furthermore, our analysis emphasizes that AMR is a powerful tool for KBQA systems.", "which Techniques/Methods ?", "reasoner", 795.0, 803.0], ["Hyperion data acquired over Dongargarh area, Chattisgarh (India), in December 2006 have been analysed to identify dominant mineral types present in the area, with special emphasis on mapping the altered/weathered and clay minerals present in the rocks and soils. Various advanced spectral processes such as reflectance calibration of the Hyperion data, minimum noise fraction transformation, spectral feature fitting (SFF) and spectral angle mapper (SAM) have been used for comparison/mapping in conjunction with spectra of rocks and soils that have been collected in the field using Analytical Spectral Devices's FieldSpec instrument. In this study, 40 shortwave infrared channels ranging from 2.0 to 2.4 \u03bcm were analysed mainly to identify and map the major altered/weathered and clay minerals by studying the absorption bands around the 2.2 and 2.3 \u03bcm wavelength regions. The absorption characteristics were the results of O\u2013H stretching in the lattices of various hydrous minerals, in particular, clay minerals, constituting altered/weathered rocks and soils. SAM and SFF techniques implemented in Spectral Analyst were applied to identify the minerals present in the scene. A score of 0\u20131 was generated for both SAM and SFF, where a value of 1 indicated a perfect match showing the exact mineral type. Endmember spectra were matched with those of the minerals as available in the United States Geological Survey Spectral Library. Four minerals, oligoclase, rectorite, kaolinite and desert varnish, have been identified in the studied area. The SAM classifier was then applied to produce a mineral map over a subset of the Hyperion scene. The dominant lithology of the area included Dongargarh granite, Bijli rhyolite and Pitepani volcanics of Palaeo-Proterozoic age. Feldspar is one of the most dominant mineral constituents of all the above-mentioned rocks, which is highly susceptible to chemical weathering and produces various types of clay minerals. Oligoclase (a feldspar) was found in these areas where mostly rock outcrops were encountered. Kaolinite was also found mainly near exposed rocks, as it was formed due to the weathering of feldspar. Rectorite is the other clay mineral type that is observed mostly in the southern part of the studied area, where Bijli rhyolite dominates the lithology. However, the most predominant mineral type coating observed in this study is desert varnish, which is nothing but an assemblage of very fine clay minerals and forms a thin veneer on rock/soil surfaces, rendering a dark appearance to the latter. Thus, from this study, it could be inferred that Hyperion data can be well utilized to identify and map altered/weathered and clay minerals based on the study of the shape, size and position of spectral absorption features, which were otherwise absent in the signatures of the broadband sensors.", "which Techniques/Methods ?", "Spectral Angle Mapper (SAM)", NaN, NaN], ["Human motion is difficult to create and manipulate because of the high dimensionality and spatiotemporal nature of human motion data. Recently, the use of large collections of captured motion data has added increased realism in character animation. 
In order to make the synthesis and analysis of motion data tractable, we present a low\u2010dimensional motion space in which high\u2010dimensional human motion can be effectively visualized, synthesized, edited, parameterized, and interpolated in both spatial and temporal domains. Our system allows users to create and edit the motion of animated characters in several ways: The user can sketch and edit a curve on low\u2010dimensional motion space, directly manipulate the character's pose in three\u2010dimensional object space, or specify key poses to create in\u2010between motions. Copyright \u00a9 2006 John Wiley & Sons, Ltd.", "which Publisher ?", "Wiley", 835.0, 840.0], ["In this paper, we describe the successful results of an international research project focused on the use of Web technology in the educational context. The article explains how this international project, funded by public organizations and developed over the last two academic years, focuses on the area of open educational resources (OER) and particularly the educational content of the OpenCourseWare (OCW) model. This initiative has been developed by a research group composed of researchers from three countries. The project was enabled by the Universidad Politecnica de Madrid OCW Office's leadership of the Consortium of Latin American Universities and the distance education know-how of the Universidad Tecnica Particular de Loja (UTPL, Ecuador). We give a full account of the project, methodology, main outcomes and validation. The project results have further consolidated the group, and increased the maturity of group members and networking with other groups in the area. The group is now participating in other research projects that continue the lines developed here.", "which Document type ?", "Article", 156.0, 163.0], ["Thanks to the proliferation of academic services on the Web and the opening of educational content, today, students can access a large number of free learning resources, and interact with value-added services. In this context, Learning Analytics can be carried out on a large scale thanks to the proliferation of open practices that promote the sharing of datasets. However, the opening or sharing of data managed through platforms and educational services, without considering the protection of users' sensitive data, could cause some privacy issues. Data anonymization is a strategy that should be adopted during lifecycle of data processing to reduce security risks. In this research, we try to characterize how much and how the anonymization techniques have been used in learning analytics proposals. From an initial exploration made in the Scopus database, we found that less than 6% of the papers focused on LA have also covered the privacy issue. Finally, through a specific case, we applied data anonymization and learning analytics to demonstrate that both technique can be integrated, in a reliably and effectively way, to support decision making in educational institutions.", "which Source ?", "Scopus", 845.0, 851.0], ["Serious concerns on privacy protection in social networks have been raised in recent years; however, research in this area is still in its infancy. The problem is challenging due to the diversity and complexity of graph data, on which an adversary can use many types of background knowledge to conduct an attack. One popular type of attacks as studied by pioneer work [2] is the use of embedding subgraphs. 
We follow this line of work and identify two realistic targets of attacks, namely, NodeInfo and LinkInfo. Our investigations show that k-isomorphism, or anonymization by forming k pairwise isomorphic subgraphs, is both sufficient and necessary for the protection. The problem is shown to be NP-hard. We devise a number of techniques to enhance the anonymization efficiency while retaining the data utility. A compound vertex ID mechanism is also introduced for privacy preservation over multiple data releases. The satisfactory performance on a number of real datasets, including HEP-Th, EUemail and LiveJournal, illustrates that the high symmetry of social networks is very helpful in mitigating the difficulty of the problem.", "which Anonymistion algorithm/method ?", "k-isomorphism", 542.0, 555.0], ["Publishing data about individuals without revealing sensitive information about them is an important problem. In recent years, a new definition of privacy called \\kappa-anonymity has gained popularity. In a \\kappa-anonymized dataset, each record is indistinguishable from at least k\u20141 other records with respect to certain \"identifying\" attributes. In this paper we show with two simple attacks that a \\kappa-anonymized dataset has some subtle, but severe privacy problems. First, we show that an attacker can discover the values of sensitive attributes when there is little diversity in those sensitive attributes. Second, attackers often have background knowledge, and we show that \\kappa-anonymity does not guarantee privacy against attackers using background knowledge. We give a detailed analysis of these two attacks and we propose a novel and powerful privacy definition called \\ell-diversity. In addition to building a formal foundation for \\ell-diversity, we show in an experimental evaluation that \\ell-diversity is practical and can be implemented efficiently.", "which Anonymistion algorithm/method ?", "l-diversity", NaN, NaN], [" Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, \u03bc-Argus and k-Similar provide guarantees of privacy protection. ", "which Anonymistion algorithm/method ?", "k-anonymity", 506.0, 517.0], ["While privacy preservation of data mining approaches has been an important topic for a number of years, privacy of social network data is a relatively new area of interest. Previous research has shown that anonymization alone may not be sufficient for hiding identity information on certain real world data sets. 
In this paper, we focus on understanding the impact of network topology and node substructure on the level of anonymity present in the network. We present a new measure, topological anonymity, that quantifies the amount of privacy preserved in different topological structures. The measure uses a combination of known social network metrics and attempts to identify when node and edge inference breeches arise in these graphs.", "which Anonymistion algorithm/method ?", "Topological anonymity", 483.0, 504.0], ["In this paper, we present the implementation of a Haar\u2013Cascade based classifier for car detection. This constitutes the detection sub-module of the framework for a Smart Parking System (SParkSys). In high density cities, finding available parking can be time consuming and results in traffic congestions as drivers cruise to find parking. The computer vision based smart parking solution presented in this paper has the advantage of being the least intrusive of most car sensing technologies. It is scalable for use with large parking facilities. Functional code from OpenCV was used in conjunction with custom Python code to implement the algorithm. Our tests show mixed results, with excellent true positive detections along with some with false negatives. Remarkable is that the classification algorithm learnt features that are common across a wide range of objects of interest.", "which Software platform ?", "OpenCV", 568.0, 574.0], ["In this paper, we present the implementation of a Haar\u2013Cascade based classifier for car detection. This constitutes the detection sub-module of the framework for a Smart Parking System (SParkSys). In high density cities, finding available parking can be time consuming and results in traffic congestions as drivers cruise to find parking. The computer vision based smart parking solution presented in this paper has the advantage of being the least intrusive of most car sensing technologies. It is scalable for use with large parking facilities. Functional code from OpenCV was used in conjunction with custom Python code to implement the algorithm. Our tests show mixed results, with excellent true positive detections along with some with false negatives. Remarkable is that the classification algorithm learnt features that are common across a wide range of objects of interest.", "which Software platform ?", "Python", 611.0, 617.0], ["A diverse and changing array of digital media have been used to present heritage online. While websites have been created for online heritage outreach for nearly two decades, social media is employed increasingly to complement and in some cases replace the use of websites. These same social media are used by stakeholders as a form of participatory culture, to create communities and to discuss heritage independently of narratives offered by official institutions such as museums, memorials, and universities. With difficult or \u201cdark\u201d heritage\u2014places of memory centering on deaths, disasters, and atrocities\u2014these online representations and conversations can be deeply contested. 
Examining the websites and social media of difficult heritage, with an emphasis on the trans-Atlantic slave trade provides insights into the efficacy of online resources provided by official institutions, as well as the unofficial, participatory communities of stakeholders who use social media for collective memories.", "which has communication channel ?", "websites", 95.0, 103.0], ["A diverse and changing array of digital media have been used to present heritage online. While websites have been created for online heritage outreach for nearly two decades, social media is employed increasingly to complement and in some cases replace the use of websites. These same social media are used by stakeholders as a form of participatory culture, to create communities and to discuss heritage independently of narratives offered by official institutions such as museums, memorials, and universities. With difficult or \u201cdark\u201d heritage\u2014places of memory centering on deaths, disasters, and atrocities\u2014these online representations and conversations can be deeply contested. Examining the websites and social media of difficult heritage, with an emphasis on the trans-Atlantic slave trade provides insights into the efficacy of online resources provided by official institutions, as well as the unofficial, participatory communities of stakeholders who use social media for collective memories.", "which has communication channel ?", "social media", 175.0, 187.0], ["Abstract Despite rapid progress, most of the educational technologies today lack a strong instructional design knowledge basis leading to questionable quality of instruction. In addition, a major challenge is to customize these educational technologies for a wide range of customizable instructional designs. Ontologies are one of the pertinent mechanisms to represent instructional design in the literature. However, existing approaches do not support modeling of flexible instructional designs. To address this problem, in this paper, we propose an ontology based framework for systematic modeling of different aspects of instructional design knowledge based on domain patterns. As part of the framework, we present ontologies for modeling goals , instructional processes and instructional material . We demonstrate the ontology framework by presenting instances of the ontology for the large scale case study of adult literacy in India (287 million learners spread across 22 Indian Languages), which requires creation of hundreds of similar but varied e Learning Systems based on flexible instructional designs. The implemented framework is available at http://rice.iiit.ac.in and is transferred to National Literacy Mission Authority of Government of India . The proposed framework could be potentially used for modeling instructional design knowledge for school education, vocational skills and beyond.", "which Personalisation features ?", "Goals", 742.0, 747.0], ["Abstract With the increased dependence on online learning platforms and educational resource repositories, a unified representation of digital learning resources becomes essential to support a dynamic and multi-source learning experience. We introduce the EduCOR ontology, an educational, career-oriented ontology that provides a foundation for representing online learning resources for personalised learning systems. 
The ontology is designed to enable learning material repositories to offer learning path recommendations, which correspond to the user\u2019s learning goals and preferences, academic and psychological parameters, and labour-market skills. We present the multiple patterns that compose the EduCOR ontology, highlighting its cross-domain applicability and integrability with other ontologies. A demonstration of the proposed ontology on the real-life learning platform eDoer is discussed as a use case. We evaluate the EduCOR ontology using both gold standard and task-based approaches. The comparison of EduCOR to three gold schemata, and its application in two use-cases, shows its coverage and adaptability to multiple OER repositories, which allows generating user-centric and labour-market oriented recommendations. Resource : https://tibonto.github.io/educor/.", "which Personalisation features ?", "recommendation", NaN, NaN], ["Choosing a higher education course at university is not an easy task for students. A wide range of courses are offered by the individual universities whose delivery mode and entry requirements differ. A personalized recommendation system can be an effective way of suggesting the relevant courses to the prospective students. This paper introduces a novel approach that personalizes course recommendations that will match the individual needs of users. The proposed approach developed a framework of an ontology-based hybrid-filtering system called the ontology-based personalized course recommendation (OPCR). This approach aims to integrate the information from multiple sources based on the hierarchical ontology similarity with a view to enhancing the efficiency and the user satisfaction and to provide students with appropriate recommendations. The OPCR combines collaborative-based filtering with content-based filtering. It also considers familiar related concepts that are evident in the profiles of both the student and the course, determining the similarity between them. Furthermore, OPCR uses an ontology mapping technique, recommending jobs that will be available following the completion of each course. This method can enable students to gain a comprehensive knowledge of courses based on their relevance, using dynamic ontology mapping to link the course profiles and student profiles with job profiles. Results show that a filtering algorithm that uses hierarchically related concepts produces better outcomes compared to a filtering method that considers only keyword similarity. In addition, the quality of the recommendations is improved when the ontology similarity between the items\u2019 and the users\u2019 profiles were utilized. This approach, using a dynamic ontology mapping, is flexible and can be adapted to different domains. The proposed framework can be used to filter the items for both postgraduate courses and items from other domains.", "which Personalisation features ?", "courses", 99.0, 106.0], ["Abstract With the increased dependence on online learning platforms and educational resource repositories, a unified representation of digital learning resources becomes essential to support a dynamic and multi-source learning experience. We introduce the EduCOR ontology, an educational, career-oriented ontology that provides a foundation for representing online learning resources for personalised learning systems. 
The ontology is designed to enable learning material repositories to offer learning path recommendations, which correspond to the user\u2019s learning goals and preferences, academic and psychological parameters, and labour-market skills. We present the multiple patterns that compose the EduCOR ontology, highlighting its cross-domain applicability and integrability with other ontologies. A demonstration of the proposed ontology on the real-life learning platform eDoer is discussed as a use case. We evaluate the EduCOR ontology using both gold standard and task-based approaches. The comparison of EduCOR to three gold schemata, and its application in two use-cases, shows its coverage and adaptability to multiple OER repositories, which allows generating user-centric and labour-market oriented recommendations. Resource : https://tibonto.github.io/educor/.", "which Personalisation features ?", "Learning Goal", NaN, NaN], ["In recent years, the development of recommender systems has attracted increased interest in several domains, especially in e-learning. Massive Open Online Courses have brought a revolution. However, deficiency in support and personalization in this context drive learners to lose their motivation and leave the learning process. To overcome this problem we focus on adapting learning activities to learners' needs using a recommender system.This paper attempts to provide an introduction to different recommender systems for e-learning settings, as well as to present our proposed recommender system for massive learning activities in order to provide learners with the suitable learning activities to follow the learning process and maintain their motivation. We propose a hybrid knowledge-based recommender system based on ontology for recommendation of e-learning activities to learners in the context of MOOCs. In the proposed recommendation approach, ontology is used to model and represent the knowledge about the domain model, learners and learning activities.", "which Personalisation features ?", "learning activities", 375.0, 394.0], ["Name ambiguity stems from the fact that many people or objects share identical names in the real world. Such name ambiguity decreases the performance of document retrieval, Web search, information integration, and may cause confusion in other applications. Due to the same name spellings and lack of information, it is a nontrivial task to distinguish them accurately. In this article, we focus on investigating the problem in digital libraries to distinguish publications written by authors with identical names. We present an effective framework named GHOST (abbreviation for GrapHical framewOrk for name diSambiguaTion), to solve the problem systematically. We devise a novel similarity metric, and utilize only one type of attribute (i.e., coauthorship) in GHOST. Given the similarity matrix, intermediate results are grouped into clusters with a recently introduced powerful clustering algorithm called Affinity Propagation . In addition, as a complementary technique, user feedback can be used to enhance the performance. We evaluated the framework on the real DBLP and PubMed datasets, and the experimental results show that GHOST can achieve both high precision and recall .", "which Evidence ?", "User feedback", 974.0, 987.0], ["This paper addresses the problem of name disambiguation in the context of digital libraries that administer bibliographic citations. 
The problem occurs when multiple authors share a common name or when multiple name variations for an author appear in citation records. Name disambiguation is not a trivial task, and most digital libraries do not provide an efficient way to accurately identify the citation records for an author. Furthermore, lack of complete meta-data information in digital libraries hinders the development of a generic algorithm that can be applicable to any dataset. We propose a heuristic-based, unsupervised and adaptive method that also examines users\u2019 interactions in order to include users\u2019 feedback in the disambiguation process. Moreover, the method exploits important features associated with author and citation records, such as co-authors, affiliation, publication title, venue, etc., creating a multilayered hierarchical clustering algorithm which transforms itself according to the available information, and forms clusters of unambiguous records. Our experiments on a set of researchers\u2019 names considered to be highly ambiguous produced high precision and recall results, and decisively affirmed the viability of our algorithm.", "which Evidence ?", "Authors", 166.0, 173.0], ["We present a new, two-stage, self-supervised algorithm for author disambiguation in large bibliographic databases. In the first \u201cbootstrap\u201d stage, a collection of high-precision features is used to bootstrap a training set with positive and negative examples of coreferring authors. A supervised feature-based classifier is then trained on the bootstrap clusters and used to cluster the authors in a larger unlabeled dataset. Our self-supervised approach shares the advantages of unsupervised approaches (no need for expensive hand labels) as well as supervised approaches (a rich set of features that can be discriminatively trained). The algorithm disambiguates 54,000,000 author instances in Thomson Reuters' Web of Knowledge with B3 F1 of .807. We analyze parameters and features, particularly those from citation networks, which have not been deeply investigated in author disambiguation. The most important citation feature is self-citation, which can be approximated without expensive extraction of the full network. For the supervised stage, the minor improvement due to other citation features (increasing F1 from .748 to .767) suggests they may not be worth the trouble of extracting from databases that don't already have them. A lean feature set without expensive abstract and title features performs 130 times faster with about equal F1. \u00a9 2012 Wiley Periodicals, Inc.", "which Evidence ?", "Authors", 274.0, 281.0], ["This paper proposes a methodology which discriminates the articles by the target authors (\u201ctrue\u201d articles) from those by other homonymous authors (\u201cfalse\u201d articles). Author name searches for 2,595 \u201csource\u201d authors in six subject fields retrieved about 629,000 articles. In order to extract true articles from the large amount of the retrieved articles, including many false ones, two filtering stages were applied. At the first stage any retrieved article was eliminated as false if either its affiliation addresses had little similarity to those of its source article or there was no citation relationship between the journal of the retrieved article and that of its source article. At the second stage, a sample of retrieved articles was subjected to manual judgment, and utilizing the judgment results, discrimination functions based on logistic regression were defined. 
These discrimination functions demonstrated both the recall ratio and the precision of about 95% and the accuracy (correct answer ratio) of 90\u201395%. Existence of common coauthor(s), address similarity, title words similarity, and interjournal citation relationships between the retrieved and source articles were found to be the effective discrimination predictors. Whether or not the source author was from a specific country was also one of the important predictors. Furthermore, it was shown that a retrieved article is almost certainly true if it was cited by, or cocited with, its source article. The method proposed in this study would be effective when dealing with a large number of articles whose subject fields and affiliation addresses vary widely. \u00a9 2011 Wiley Periodicals, Inc.", "which Evidence ?", "Country", 1292.0, 1299.0], ["With the rapid development of digital libraries, name disambiguation becomes more and more important technique to distinguish authors with same names from physical persons. Many algorithms have been developed to accomplish the task. However, they are usually based on some restricted preconditions and rarely concern how to be incorporated into a practical application. In this paper, name disambiguation is regarded as the technique of learning module integrated with a knowledge base. A network is defined for the modeling of publication information, which facilitates the representation of differenttypes of relations among the attributes. The knowledge base component serves as the user interface for domain knowledge input. Furthermore, this paper exploits a random walk with restart algorithm and affinity propagation clustering algorithm to finally output name disambiguation results.", "which Evidence ?", "Authors", 126.0, 133.0], ["This paper proposes a methodology which discriminates the articles by the target authors (\u201ctrue\u201d articles) from those by other homonymous authors (\u201cfalse\u201d articles). Author name searches for 2,595 \u201csource\u201d authors in six subject fields retrieved about 629,000 articles. In order to extract true articles from the large amount of the retrieved articles, including many false ones, two filtering stages were applied. At the first stage any retrieved article was eliminated as false if either its affiliation addresses had little similarity to those of its source article or there was no citation relationship between the journal of the retrieved article and that of its source article. At the second stage, a sample of retrieved articles was subjected to manual judgment, and utilizing the judgment results, discrimination functions based on logistic regression were defined. These discrimination functions demonstrated both the recall ratio and the precision of about 95% and the accuracy (correct answer ratio) of 90\u201395%. Existence of common coauthor(s), address similarity, title words similarity, and interjournal citation relationships between the retrieved and source articles were found to be the effective discrimination predictors. Whether or not the source author was from a specific country was also one of the important predictors. Furthermore, it was shown that a retrieved article is almost certainly true if it was cited by, or cocited with, its source article. The method proposed in this study would be effective when dealing with a large number of articles whose subject fields and affiliation addresses vary widely. 
\u00a9 2011 Wiley Periodicals, Inc.", "which Evidence ?", "Affiliation", 494.0, 505.0], ["Name ambiguity in the context of bibliographic citation affects the quality of services in digital libraries. Previous methods are not widely applied in practice because of their high computational complexity and their strong dependency on excessive attributes, such as institutional affiliation, research area, address, etc., which are difficult to obtain in practice. To solve this problem, we propose a novel coarse\u2010to\u2010fine framework for name disambiguation which sequentially employs 3 common and easily accessible attributes (i.e., coauthor name, article title, and publication venue). Our proposed framework is based on multiple clustering and consists of 3 steps: (a) clustering articles by coauthorship and obtaining rough clusters, that is fragments; (b) clustering fragments obtained in step 1 by title information and getting bigger fragments; (c) and clustering fragments obtained in step 2 by the latent relations among venues. Experimental results on a Digital Bibliography and Library Project (DBLP) data set show that our method outperforms the existing state\u2010of\u2010the\u2010art methods by 2.4% to 22.7% on the average pairwise F1 score and is 10 to 100 times faster in terms of execution time.", "which Evidence ?", "Venues", 933.0, 939.0], ["We present a novel 3\u2010step self\u2010training method for author name disambiguation\u2014SAND (self\u2010training associative name disambiguator)\u2014which requires no manual labeling, no parameterization (in real\u2010world scenarios) and is particularly suitable for the common situation in which only the most basic information about a citation record is available (i.e., author names, and work and venue titles). During the first step, real\u2010world heuristics on coauthors are able to produce highly pure (although fragmented) clusters. The most representative of these clusters are then selected to serve as training data for the third supervised author assignment step. The third step exploits a state\u2010of\u2010the\u2010art transductive disambiguation method capable of detecting unseen authors not included in any training example and incorporating reliable predictions to the training data. Experiments conducted with standard public collections, using the minimum set of attributes present in a citation, demonstrate that our proposed method outperforms all representative unsupervised author grouping disambiguation methods and is very competitive with fully supervised author assignment methods. Thus, different from other bootstrapping methods that explore privileged, hard to obtain information such as self\u2010citations and personal information, our proposed method produces topnotch performance with no (manual) training data or parameterization and in the presence of scarce information.", "which Evidence ?", "Authors", 755.0, 762.0], [" Open educational resources are currently becoming increasingly available from a multitude of sources and are consequently annotated in many diverse ways. Interoperability concerns that naturally arise can often be resolved through the semantification of metadata descriptions, while at the same time strengthening the knowledge value of resources. SKOS can be a solid linking point offering a standard vocabulary for thematic descriptions, by referencing semantic thesauri. 
We propose the enhancement and maintenance of educational resources\u2019 metadata in the form of learning object ontologies and introduce the notion of a learning object ontology repository that can help towards their publication, discovery and reuse. At the same time, linking to thesauri datasets and contextualized sources interrelates learning objects with linked data and exposes them to the Web of Data. We build a set of extensions and workflows on top of contemporary ontology management tools, such as WebProt\u00e9g\u00e9, that can make it suitable as a learning object ontology repository. The proposed approach and implementation can help libraries and universities in discovering, managing and incorporating open educational resources and enhancing current curricula. ", "which Reuse of existing vocabularies ?", "thesauri", 473.0, 481.0], ["Raman spectroscopy, complemented by infrared spectroscopy has been used to characterise the ferroaxinite minerals of theoretical formula Ca2Fe2+Al2BSi4O15(OH), a ferrous aluminium borosilicate. The Raman spectra are complex but are subdivided into sections based upon the vibrating units. The Raman spectra are interpreted in terms of the addition of borate and silicate spectra. Three characteristic bands of ferroaxinite are observed at 1082, 1056 and 1025 cm-1 and are attributed to BO4 stretching vibrations. Bands at 1003, 991, 980 and 963 cm-1 are assigned to SiO4 stretching vibrations. Bands are found in these positions for each of the ferroaxinites studied. No Raman bands were found above 1100 cm-1 showing that ferroaxinites contain only tetrahedral boron. The hydroxyl stretching region of ferroaxinites is characterised by a single Raman band between 3368 and 3376 cm-1, the position of which is sample dependent. Bands for ferroaxinite at 678, 643, 618, 609, 588, 572, 546 cm-1 may be attributed to the \u03bd4 bending modes and the three bands at 484, 444 and 428 cm-1 may be attributed to the \u03bd2 bending modes of the (SiO4)2-.", "which Minerals in consideration ?", " Ferroaxinite", 91.0, 104.0], ["Selected joaquinite minerals have been studied by Raman spectroscopy. The minerals are categorised into two groups depending upon whether bands occur in the 3250 to 3450 cm\u22121 region and in the 3450 to 3600 cm\u22121 region, or in the latter region only. The first set of bands is attributed to water stretching vibrations and the second set to OH stretching bands. In the literature, X-ray diffraction could not identify the presence of OH units in the structure of joaquinite. Raman spectroscopy proves that the joaquinite mineral group contains OH units in their structure, and in some cases both water and OH units. A series of bands at 1123, 1062, 1031, 971, 912 and 892 cm\u22121 are assigned to SiO stretching vibrations. Bands above 1000 cm\u22121 are attributable to the \u03bdas modes of the (SiO4)4\u2212 and (Si2O7)6\u2212 units. Bands that are observed at 738, around 700, 682 and around 668, 621 and 602 cm\u22121 are attributed to OSiO bending modes. The patterns do not appear to match the published infrared spectral patterns of either (SiO4)4\u2212 or (Si2O7)6\u2212 units. The reason is attributed to the actual formulation of the joaquinite mineral, in which significant amounts of Ti or Nb and Fe are found. 
Copyright \u00a9 2007 John Wiley & Sons, Ltd.", "which Minerals in consideration ?", " Joaquinite", 8.0, 19.0], ["Raman spectroscopy has been used to study the structure of the humite mineral group ((A2SiO4)n\u2013A(OH, F)2 where n represents the number of olivine and brucite layers in the structure and is 1, 2, 3 or 4 and A2+ is Mg, Mn, Fe or some mix of these cations). The humite group of minerals forms a morphotropic series with the minerals olivine and brucite. The members of the humite group contain layers of the olivine structure that alternate with layers of the brucite-like sheets. The minerals are characterized by a complex set of bands in the 800\u20131000 cm\u22121 region attributed to the stretching vibrations of the olivine (SiO4)4\u2212 units. The number of bands in this region is influenced by the number of olivine layers. Characteristic bending modes of the (SiO4)4\u2212 units are observed in the 500\u2013650 cm\u22121 region. The brucite sheets are characterized by the OH stretching vibrations in the 3475\u20133625 cm\u22121 wavenumber region. The position of the OH stretching vibrations is determined by the strength of the hydrogen bond formed between the brucite-like OH units and the olivine silica layer. The number of olivine sheets and not the chemical composition determines the strength of the hydrogen bonds. Copyright \u00a9 2006 John Wiley & Sons, Ltd.", "which Minerals in consideration ?", " Humite Group", 258.0, 271.0], ["The Naipa and Muiane mines are located on the Nampula complex, a stratigraphic tectonic subdivision of the Mozambique Belt, in the Alto Ligonha region. The pegmatites are of the Li-Cs-Ta type, intrude a chlorite phyllite and gneisses with amphibole and biotite. The mines are still active. The main objective of this work was to analyze the pegmatite\u2019s spectral behavior considering ASTER and Landsat 8 OLI data. An ASTER image from 27/05/2005, and an image Landsat OLI image from 02/02/2018 were considered. The data were radiometric calibrated and after atmospheric corrected considered the Dark Object Subtraction algorithm available in the Semi-Automatic Classification Plugin accessible in QGIS software. In the field, samples were collected from lepidolite waste pile in Naipa and Muaine mines. A spectroadiometer was used in order to analyze the spectral behavior of several pegmatite\u2019s samples collected in the field in Alto Ligonha (Naipa and Muiane mines). In addition, QGIS software was also used for the spectral mapping of the hypothetical hydrothermal alterations associated with occurrences of basic metals, beryl gemstones, tourmalines, columbite-tantalites, and lithium minerals. A supervised classification algorithm was employed - Spectral Angle Mapper for the data processing, and the overall accuracy achieved was 80%. The integration of ASTER and Landsat 8 OLI data have proved very useful for pegmatite\u2019s mapping. From the results obtained, we can conclude that: (i) the combination of ASTER and Landsat 8 OLI data allows us to obtain more information about mineral composition than just one sensor, i.e., these two sensors are complementary; (ii) the alteration spots identified in the mines area are composed of clay minerals. 
In the future, more data and others image processing algorithms can be applied in order to identify the different Lithium minerals, as spodumene, petalite, amblygonite and lepidolite.", "which Supplementary information ?", " lepidolite", 751.0, 762.0], ["Detection, and monitoring of surface mining region have various research aspects. Coal surface mining has severe social, ecological, environmental adverse effects. In the past, semisupervised and supervised clustering techniques have been used to detect such regions. Coal has lower reflectance values in short wave infrared I (SWIR-I) than short wave infrared II (SWIR-II). The proposed method presents a novel approach to detect coal mine regions without manual intervention using this cue. Clay mineral ratio is defined as a ratio of SWIR-I to SWIR-II. Here, unsupervised K-Means clustering has been used in a hierarchical fashion over a variant of clay mineral ratio to detect opencast coal mine regions in the Jharia coal field (JCF), India. The proposed method has average precision, and recall of 76.43%, and 62.75%, respectively.", "which Supplementary information ?", "Coal mine region", NaN, NaN], ["Basin\u2010forming impacts expose material from deep within the interior of the Moon. Given the number of lunar basins, one would expect to find samples of the lunar mantle among those returned by the Apollo or Luna missions or within the lunar meteorite collection. However, only a few candidate mantle samples have been identified. Some remotely detected locations have been postulated to contain mantle\u2010derived material, but none are mineralogically consistent upon study with multiple techniques. To locate potential remnants of the lunar mantle, we searched for early\u2010crystallizing minerals using data from the Moon Mineralogy Mapper (M3) and the Diviner Lunar Radiometer (Diviner). While the lunar crust is largely composed of plagioclase, the mantle should contain almost none. M3 spectra were used to identify massifs bearing mafic minerals and Diviner was used to constrain the relative abundance of plagioclase. Of the sites analyzed, only Mons Wolff was found to potentially contain mantle material.", "which Supplementary information ?", "Diviner Lunar Radiometer", 647.0, 671.0], ["The visible and near\u2010infrared imaging spectrometer on board the Yutu Rover of Chinese Chang'E\u20103 mission measured the lunar surface reflectance at a close distance (~1 m) and collected four spectra at four different sites. These in situ lunar spectra have revealed less mature features than that measured remotely by spaceborne sensors such as the Moon Mineralogy Mapper instrument on board the Chandrayaan\u20101 mission and the Spectral Profiler on board the Kaguya over the same region. Mineral composition analysis using a spectral lookup table populated with a radiative transfer mixing model has shown that the regolith at the landing site contains high abundance of olivine. The mineral abundance results are consistent with that inferred from the compound measurement made by the on board alpha\u2010particle X\u2010ray spectrometer.", "which Supplementary information ?", "Moon Mineralogy mapper", 347.0, 369.0], ["The visible and near\u2010infrared imaging spectrometer on board the Yutu Rover of Chinese Chang'E\u20103 mission measured the lunar surface reflectance at a close distance (~1 m) and collected four spectra at four different sites. 
These in situ lunar spectra have revealed less mature features than that measured remotely by spaceborne sensors such as the Moon Mineralogy Mapper instrument on board the Chandrayaan\u20101 mission and the Spectral Profiler on board the Kaguya over the same region. Mineral composition analysis using a spectral lookup table populated with a radiative transfer mixing model has shown that the regolith at the landing site contains high abundance of olivine. The mineral abundance results are consistent with that inferred from the compound measurement made by the on board alpha\u2010particle X\u2010ray spectrometer.", "which Supplementary information ?", "spectral profiler", 424.0, 441.0], ["Several classification algorithms for pattern recognition had been tested in the mapping of tropical forest cover using airborne hyperspectral data. Results from the use of Maximum Likelihood (ML), Spectral Angle Mapper (SAM), Artificial Neural Network (ANN) and Decision Tree (DT) classifiers were compared and evaluated. It was found that ML performed the best followed by ANN, DT and SAM with accuracies of 86%, 84%, 51% and 49% respectively.", "which Features classified ?", "Tree", 272.0, 276.0], ["Hyperspectral Imaging (HSI) is used to provide a wealth of information which can be used to address a variety of problems in different applications. The main requirement in all applications is the classification of HSI data. In this paper, supervised HSI classification algorithms are used to extract agriculture areas that specialize in wheat growing and get a classified image. In particular, Parallelepiped and Spectral Angel Mapper (SAM) algorithms are used. They are implemented by a software tool used to analyse and process geospatial images that is an Environment of Visualizing Images (ENVI). They are applied on Al-Kharj, Saudi Arabia as the study area. The overall accuracy after applying the algorithms on the image of the study area for SAM classification was 66.67%, and 33.33% for Parallelepiped classification. Therefore, SAM algorithm has provided a better a study area image classification.", "which Features classified ?", "Wheat", 338.0, 343.0], ["Abstract In this study, we analysed the applicability of DNA barcodes for delimitation of 79 specimens of 13 species of nonbiting midges in the subfamily Tanypodinae (Diptera: Chironomidae) from S\u00e3o Paulo State, Brazil. Our results support DNA barcoding as an excellent tool for species identification and for solving taxonomic conflicts in genus Labrundinia. Molecular analysis of cytochrome c oxidase subunit I (COI) gene sequences yielded taxon identification trees, supporting 13 cohesive species clusters, of which three similar groups were subsequently linked to morphological variation at the larval and pupal stage. Additionally, another cluster previously described by means of morphology was linked to molecular markers. We found a distinct barcode gap, and in some species substantial interspecific pairwise divergences (up to 19.3%) were observed, which permitted identification of all analysed species. The results also indicated that barcodes can be used to associate life stages of chironomids since COI was easily amplified and sequenced from different life stages with universal barcode primers. 
R\u00e9sum\u00e9 Notre \u00e9tude \u00e9value l'utilit\u00e9 des codes \u00e0 barres d'ADN pour d\u00e9limiter 79 sp\u00e9cimens de 13 esp\u00e8ces de moucherons de la sous-famille des Tanypodinae (Diptera: Chironomidae) provenant de l\u2019\u00e9tat de S\u00e3o Paulo, Br\u00e9sil. Notre \u00e9tude confirme l'utilisation des codes \u00e0 barres d'ADN comme un excellent outil pour l'identification des esp\u00e8ces et la solution de probl\u00e8mes taxonomiques dans genre Labrundinia. Une analyse mol\u00e9culaire des s\u00e9quences des g\u00e8nes COI fournit des arbres d'identification des taxons, d\u00e9limitant 13 groupes coh\u00e9rents d'esp\u00e8ces, dont trois groupes similaires ont \u00e9t\u00e9 reli\u00e9s subs\u00e9quemment \u00e0 une variation morphologique des stades larvaires et nymphal. De plus, un autre groupe d\u00e9crit ant\u00e9rieurement \u00e0 partir de caract\u00e8res morphologiques a \u00e9t\u00e9 reli\u00e9 \u00e0 des marqueurs mol\u00e9culaires. Il existe un \u00e9cart net entre les codes \u00e0 barres et, chez certaines esp\u00e8ces, d'importantes divergences entre les esp\u00e8ces consid\u00e9r\u00e9es deux par deux (jusqu\u2019\u00e0 19,3%), ce qui a permis l'identification de toutes les esp\u00e8ces examin\u00e9es. Nos r\u00e9sultats montrent aussi que les codes \u00e0 barres peuvent servir \u00e0 associer les diff\u00e9rents stades de vie des chironomides, car il est facile d'amplifier et de s\u00e9quencer le g\u00e8ne COI provenant des diff\u00e9rents stades avec les amorces universelles des codes \u00e0 barres.", "which Studied taxonomic group (Biology) ?", "Chironomidae", 176.0, 188.0], ["Mosquitoes are insects of the Diptera, Nematocera, and Culicidae families, some species of which are important disease vectors. Identifying mosquito species based on morphological characteristics is difficult, particularly the identification of specimens collected in the field as part of disease surveillance programs. Because of this difficulty, we constructed DNA barcodes of the cytochrome c oxidase subunit 1, the COI gene, for the more common mosquito species in China, including the major disease vectors. A total of 404 mosquito specimens were collected and assigned to 15 genera and 122 species and subspecies on the basis of morphological characteristics. Individuals of the same species grouped closely together in a Neighborhood-Joining tree based on COI sequence similarity, regardless of collection site. COI gene sequence divergence was approximately 30 times higher for species in the same genus than for members of the same species. Divergence in over 98% of congeneric species ranged from 2.3% to 21.8%, whereas divergence in conspecific individuals ranged from 0% to 1.67%. Cryptic species may be common and a few pseudogenes were detected.", "which Studied taxonomic group (Biology) ?", "Culicidae", 55.0, 64.0], ["DNA barcoding has gained increased recognition as a molecular tool for species identification in various groups of organisms. In this preliminary study, we tested the efficacy of a 615\u2010bp fragment of the cytochrome c oxidase I (COI) as a DNA barcode in the medically important family Simuliidae, or black flies. A total of 65 (25%) morphologically distinct species and sibling species in species complexes of the 255 recognized Nearctic black fly species were used to create a preliminary barcode profile for the family. 
Genetic divergence among congeners averaged 14.93% (range 2.83\u201315.33%), whereas intraspecific genetic divergence between morphologically distinct species averaged 0.72% (range 0\u20133.84%). DNA barcodes correctly identified nearly 100% of the morphologically distinct species (87% of the total sampled taxa), whereas in species complexes (13% of the sampled taxa) maximum values of divergence were comparatively higher (max. 4.58\u20136.5%), indicating cryptic diversity. The existence of sibling species in Prosimulium travisi and P. neomacropyga was also demonstrated, thus confirming previous cytological evidence about the existence of such cryptic diversity in these two taxa. We conclude that DNA barcoding is an effective method for species identification and discovery of cryptic diversity in black flies.", "which Studied taxonomic group (Biology) ?", "Simuliidae", 284.0, 294.0], ["The proliferation of DNA data is revolutionizing all fields of systematic research. DNA barcode sequences, now available for millions of specimens and several hundred thousand species, are increasingly used in algorithmic species delimitations. This is complicated by occasional incongruences between species and gene genealogies, as indicated by situations where conspecific individuals do not form a monophyletic cluster in a gene tree. In two previous reviews, non-monophyly has been reported as being common in mitochondrial DNA gene trees. We developed a novel web service \u201cMonophylizer\u201d to detect non-monophyly in phylogenetic trees and used it to ascertain the incidence of species non-monophyly in COI (a.k.a. cox1) barcode sequence data from 4977 species and 41,583 specimens of European Lepidoptera, the largest data set of DNA barcodes analyzed from this regard. Particular attention was paid to accurate species identification to ensure data integrity. We investigated the effects of tree-building method, sampling effort, and other methodological issues, all of which can influence estimates of non-monophyly. We found a 12% incidence of non-monophyly, a value significantly lower than that observed in previous studies. Neighbor joining (NJ) and maximum likelihood (ML) methods yielded almost equal numbers of non-monophyletic species, but 24.1% of these cases of non-monophyly were only found by one of these methods. Non-monophyletic species tend to show either low genetic distances to their nearest neighbors or exceptionally high levels of intraspecific variability. Cases of polyphyly in COI trees arising as a result of deep intraspecific divergence are negligible, as the detected cases reflected misidentifications or methodological errors. Taking into consideration variation in sampling effort, we estimate that the true incidence of non-monophyly is \u223c23%, but with operational factors still being included. Within the operational factors, we separately assessed the frequency of taxonomic limitations (presence of overlooked cryptic and oversplit species) and identification uncertainties. We observed that operational factors are potentially present in more than half (58.6%) of the detected cases of non-monophyly. Furthermore, we observed that in about 20% of non-monophyletic species and entangled species, the lineages involved are either allopatric or parapatric\u2014conditions where species delimitation is inherently subjective and particularly dependent on the species concept that has been adopted. 
These observations suggest that species-level non-monophyly in COI gene trees is less common than previously supposed, with many cases reflecting misidentifications, the subjectivity of species delimitation or other operational factors.", "which Studied taxonomic group (Biology) ?", "Lepidoptera", 797.0, 808.0], ["Background The State of Bavaria is involved in a research program that will lead to the construction of a DNA barcode library for all animal species within its territorial boundaries. The present study provides a comprehensive DNA barcode library for the Geometridae, one of the most diverse of insect families. Methodology/Principal Findings This study reports DNA barcodes for 400 Bavarian geometrid species, 98 per cent of the known fauna, and approximately one per cent of all Bavarian animal species. Although 98.5% of these species possess diagnostic barcode sequences in Bavaria, records from neighbouring countries suggest that species-level resolution may be compromised in up to 3.5% of cases. All taxa which apparently share barcodes are discussed in detail. One case of modest divergence (1.4%) revealed a species overlooked by the current taxonomic system: Eupithecia goossensiata Mabille, 1869 stat.n. is raised from synonymy with Eupithecia absinthiata (Clerck, 1759) to species rank. Deep intraspecific sequence divergences (>2%) were detected in 20 traditionally recognized species. Conclusions/Significance The study emphasizes the effectiveness of DNA barcoding as a tool for monitoring biodiversity. Open access is provided to a data set that includes records for 1,395 geometrid specimens (331 species) from Bavaria, with 69 additional species from neighbouring regions. Taxa with deep intraspecific sequence divergences are undergoing more detailed analysis to ascertain if they represent cases of cryptic diversity.", "which Studied taxonomic group (Biology) ?", "Geometridae", 255.0, 266.0], ["Abstract A short fragment of mt DNA from the cytochrome c oxidase 1 (CO1) region was used to provide the first CO1 barcodes for 37 species of Canadian mosquitoes (Diptera: Culicidae) from the provinces Ontario and New Brunswick. Sequence variation was analysed in a 617\u2010bp fragment from the 5\u2032 end of the CO1 region. Sequences of each mosquito species formed barcode clusters with tight cohesion that were usually clearly distinct from those of allied species. CO1 sequence divergences were, on average, nearly 20 times higher for congeneric species than for members of a species; divergences between congeneric species averaged 10.4% (range 0.2\u201317.2%), whereas those for conspecific individuals averaged 0.5% (range 0.0\u20133.9%).", "which Studied taxonomic group (Biology) ?", "Culicidae", 172.0, 181.0], ["This data release provides COI barcodes for 366 species of parasitic flies (Diptera: Tachinidae), enabling the DNA based identification of the majority of northern European species and a large proportion of Palearctic genera, regardless of the developmental stage. The data will provide a tool for taxonomists and ecologists studying this ecologically important but challenging parasitoid family. A comparison of minimum distances between the nearest neighbors revealed the mean divergence of 5.52% that is approximately the same as observed earlier with comparable sampling in Lepidoptera, but clearly less than in Coleoptera. Full barcode-sharing was observed between 13 species pairs or triplets, equaling to 7.36% of all species. 
Delimitation based on Barcode Index Number (BIN) system was compared with traditional classification of species and interesting cases of possible species oversplits and cryptic diversity are discussed. Overall, DNA barcodes are effective in separating tachinid species and provide novel insight into the taxonomy of several genera.", "which Studied taxonomic group (Biology) ?", "Tachinidae", 85.0, 95.0], ["Cryptic biological diversity has generated ambiguity in taxonomic and evolutionary studies. Single-locus methods and other approaches for species delimitation are useful for addressing this challenge, enabling the practical processing of large numbers of samples for identification and inventory purposes. This study analyzed one assemblage of high Andean butterflies using DNA barcoding and compared the identifications based on the current morphological taxonomy with three methods of species delimitation (automatic barcode gap discovery, generalized mixed Yule coalescent model, and Poisson tree processes). Sixteen potential cryptic species were recognized using these three methods, representing a net richness increase of 11.3% in the assemblage. A well-studied taxon of the genus Vanessa, which has a wide geographical distribution, appeared with the potential cryptic species that had a higher genetic differentiation at the local level than at the continental level. The analyses were useful for identifying the potential cryptic species in Pedaliodes and Forsterinaria complexes, which also show differentiation along altitudinal and latitudinal gradients. This genetic assessment of an entire assemblage of high Andean butterflies (Papilionoidea), provides baseline information for future research in a region characterized by high rates of endemism and population isolation.", "which Studied taxonomic group (Biology) ?", "Papilionoidea", 1244.0, 1257.0], ["This study reports the assembly of a DNA barcode reference library for species in the lepidopteran superfamily Noctuoidea from Canada and the USA. Based on the analysis of 69,378 specimens, the library provides coverage for 97.3% of the noctuoid fauna (3565 of 3664 species). In addition to verifying the strong performance of DNA barcodes in the discrimination of these species, the results indicate close congruence between the number of species analyzed (3565) and the number of sequence clusters (3816) recognized by the Barcode Index Number (BIN) system. Distributional patterns across 12 North American ecoregions are examined for the 3251 species that have GPS data while BIN analysis is used to quantify overlap between the noctuoid faunas of North America and other zoogeographic regions. This analysis reveals that 90% of North American noctuoids are endemic and that just 7.5% and 1.8% of BINs are shared with the Neotropics and with the Palearctic, respectively. One third (29) of the latter species are recent introductions and, as expected, they possess low intraspecific divergences.", "which Studied taxonomic group (Biology) ?", "Noctuoidea", 111.0, 121.0], ["This paper reports the first tests of the suitability of the standardized mitochondrial cytochrome c oxidase subunit I (COI) barcoding system for the identification of Canadian deerflies and horseflies. Two additional mitochondrial molecular markers were used to determine whether unambiguous species recognition in tabanids can be achieved. Our 332 Canadian tabanid samples yielded 650 sequences from five genera and 42 species. 
Standard COI barcodes demonstrated a strong A + T bias (mean 68.1%), especially at third codon positions (mean 93.0%). Our preliminary test of this system showed that the standard COI barcode worked well for Canadian Tabanidae: the target DNA can be easily recovered from small amounts of insect tissue and aligned for all tabanid taxa. Each tabanid species possessed distinctive sets of COI haplotypes which discriminated well among species. Average conspecific Kimura two\u2010parameter (K2P) divergence (0.49%) was 12 times lower than the average divergence within species. Both the neighbour\u2010joining and the Bayesian methods produced trees with identical monophyletic species groups. Two species, Chrysops dawsoni Philip and Chrysops montanus Osten Sacken (Diptera: Tabanidae), showed relatively deep intraspecific sequence divergences (\u223c10 times the average) for all three mitochondrial gene regions analysed. We suggest provisional differentiation of Ch. montanus into two haplotypes, namely, Ch. montanus haplomorph 1 and Ch. montanus haplomorph 2, both defined by their molecular sequences and by newly discovered differences in structural features near their ocelli.", "which Studied taxonomic group (Biology) ?", "Tabanidae", 647.0, 656.0], ["Abstract The DNA barcode reference library for Lepidoptera holds much promise as a tool for taxonomic research and for providing the reliable identifications needed for conservation assessment programs. We gathered sequences for the barcode region of the mitochondrial cytochrome c oxidase subunit I gene from 160 of the 176 nominal species of Erebidae moths (Insecta: Lepidoptera) known from the Iberian Peninsula. These results arise from a research project which is constructing a DNA barcode library for the insect species of Spain. New records for 271 specimens (122 species) are coupled with preexisting data for 38 species from the Iberian fauna. Mean interspecific distance was 12.1%, while the mean nearest neighbour divergence was 6.4%. All 160 species possessed diagnostic barcode sequences, but one pair of congeneric taxa (Eublemma rosea and Eublemma rietzi) were assigned to the same BIN. As well, intraspecific sequence divergences higher than 1.5% were detected in four species which likely represent species complexes. This study reinforces the effectiveness of DNA barcoding as a tool for monitoring biodiversity in particular geographical areas and the strong correspondence between sequence clusters delineated by BINs and species recognized through detailed taxonomic analysis.", "which Studied taxonomic group (Biology) ?", "Erebidae", 344.0, 352.0], ["Abstract Background Various methods have been proposed to assign unknown specimens to known species using their DNA barcodes, while others have focused on using genetic divergence thresholds to estimate \u201cspecies\u201d diversity for a taxon, without a well-developed taxonomy and/or an extensive reference library of DNA barcodes. The major goals of the present work were to: a) conduct the largest species-level barcoding study of the Muscidae to date and characterize the range of genetic divergence values in the northern Nearctic fauna; b) evaluate the correspondence between morphospecies and barcode groupings defined using both clustering-based and threshold-based approaches; and c) use the reference library produced to address taxonomic issues. 
Results Our data set included 1114 individuals and their COI sequences (951 from Churchill, Manitoba), representing 160 morphologically-determined species from 25 genera, covering 89% of the known fauna of Churchill and 23% of the Nearctic fauna. Following an iterative process through which all specimens belonging to taxa with anomalous divergence values and/or monophyly issues were re-examined, identity was modified for 9 taxa, including the reinstatement of Phaonia luteva (Walker) stat. nov. as a species distinct from Phaonia errans (Meigen). In the post-reassessment data set, no distinct gap was found between maximum pairwise intraspecific distances (range 0.00-3.01%) and minimum interspecific distances (range: 0.77-11.33%). Nevertheless, using a clustering-based approach, all individuals within 98% of species grouped with their conspecifics with high (>95%) bootstrap support; in contrast, a maximum species discrimination rate of 90% was obtained at the optimal threshold of 1.2%. DNA barcoding enabled the determination of females from 5 ambiguous species pairs and confirmed that 16 morphospecies were genetically distinct from named taxa. There were morphological differences among all distinct genetic clusters; thus, no cases of cryptic species were detected. Conclusions Our findings reveal the great utility of building a well-populated, species-level reference barcode database against which to compare unknowns. When such a library is unavailable, it is still possible to obtain a fairly accurate (within ~10%) rapid assessment of species richness based upon a barcode divergence threshold alone, but this approach is most accurate when the threshold is tuned to a particular taxon.", "which Studied taxonomic group (Biology) ?", "Muscidae", 430.0, 438.0], ["Background With about 1,000 species in the Neotropics, the Eumaeini (Theclinae) are one of the most diverse butterfly tribes. Correct morphology-based identifications are challenging in many genera due to relatively little interspecific differences in wing patterns. Geographic infraspecific variation is sometimes more substantial than variation between species. In this paper we present a large DNA barcode dataset of South American Lycaenidae. We analyze how well DNA barcode BINs match morphologically delimited species. Methods We compare morphology-based species identifications with the clustering of molecular operational taxonomic units (MOTUs) delimitated by the RESL algorithm in BOLD, which assigns Barcode Index Numbers (BINs). We examine intra- and interspecific divergences for genera represented by at least four morphospecies. We discuss the existence of local barcode gaps in a genus by genus analysis. We also note differences in the percentage of species with barcode gaps in groups of lowland and high mountain genera. Results We identified 2,213 specimens and obtained 1,839 sequences of 512 species in 90 genera. Overall, the mean intraspecific divergence value of CO1 sequences was 1.20%, while the mean interspecific divergence between nearest congeneric neighbors was 4.89%, demonstrating the presence of a barcode gap. However, the gap seemed to disappear from the entire set when comparing the maximum intraspecific distance (8.40%) with the minimum interspecific distance (0.40%). Clear barcode gaps are present in many genera but absent in others. 
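The Muscidae study above contrasts clustering-based groupings with a fixed divergence threshold (optimal at 1.2%). A minimal sketch of the threshold-based approach, assuming a precomputed pairwise distance matrix (the values below are hypothetical), is single-linkage clustering via union-find:

# Single-linkage clustering at a distance cutoff (1.2% in the study above).
def threshold_clusters(n, dist, cutoff=0.012):
    """Union-find single-linkage clustering; dist is a dict {(i, j): distance}."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for (i, j), d in dist.items():
        if d <= cutoff:
            parent[find(i)] = find(j)  # merge the two clusters
    groups = {}
    for k in range(n):
        groups.setdefault(find(k), []).append(k)
    return list(groups.values())

dist = {(0, 1): 0.004, (0, 2): 0.030, (1, 2): 0.028,
        (2, 3): 0.009, (0, 3): 0.031, (1, 3): 0.029}
print(threshold_clusters(4, dist))  # -> two clusters: [0, 1] and [2, 3]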
From the set of specimens that yielded COI fragment lengths of at least 650 bp, 75% of the a priori morphology-based identifications were unambiguously assigned to a single Barcode Index Number (BIN). However, after a taxonomic a posteriori review, the percentage of matched identifications rose to 85%. BIN splitting was observed for 17% of the species and BIN sharing for 9%. We found that genera that contain primarily lowland species show higher percentages of local barcode gaps and congruence between BINs and morphology than genera that contain exclusively high montane species. The divergence values to the nearest neighbors were significantly lower in high Andean species while the intra-specific divergence values were significantly lower in the lowland species. These results raise questions regarding the causes of observed low inter- and high intraspecific genetic variation. We discuss incomplete lineage sorting and hybridization as most likely causes of this phenomenon, as the montane species concerned are relatively young and hybridization is probable. The release of our data set represents an essential baseline for a reference library for biological assessment studies of butterflies in megadiverse countries using modern high-throughput technologies and highlights the necessity of taxonomic revisions for various genera combining both molecular and morphological data.", "which Studied taxonomic group (Biology) ?", "Eumaeini", 59.0, 67.0], ["Abstract A feasibility test of molecular identification of European fruit flies (Diptera: Tephritidae) based on COI barcode sequences has been executed. A dataset containing 555 sequences of 135 ingroup species from three subfamilies and 42 genera and one single outgroup species has been analysed. 73.3% of all included species could be identified from their COI barcode gene based on similarity and distances. The low success rate is caused by singletons as well as some problematic groups: several species groups within the genus Terellia and especially the genus Urophora. With slightly more than 100 sequences – almost 20% of the total – this genus alone constitutes the larger part of the failure for molecular identification for this dataset. Deleting the singletons and Urophora results in a success-rate of 87.1% of all queries and 93.23% of the not discarded queries as correctly identified. Urophora is of special interest due to its economic importance as beneficial species for weed control, therefore it is desirable to have alternative markers for molecular identification. We demonstrate that the success of DNA barcoding for identification purposes strongly depends on the contents of the database used to BLAST against. Especially the necessity of including multiple specimens per species of geographically distinct populations and different ecologies for the understanding of the intra- versus interspecific variation is demonstrated. Furthermore, thresholds and the distinction between true and false positives and negatives should not only be used to increase the reliability of the success of molecular identification but also to point out problematic groups, which should then be flagged in the reference database suggesting alternative methods for identification.", "which Studied taxonomic group (Biology) ?", "Tephritidae", 90.0, 101.0], ["Although members of the crambid subfamily Pyraustinae are frequently important crop pests, their identification is often difficult because many species lack conspicuous diagnostic morphological characters.
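Both the Eumaeini and Tephritidae entries turn on whether a "local barcode gap" exists, that is, whether each species' maximum intraspecific distance stays below its minimum distance to any other species (its nearest neighbour). A toy sketch of that per-species check, with invented labels and distances:

from collections import defaultdict

def local_barcode_gaps(labels, dist):
    """labels: species name per specimen index; dist: {(i, j): distance}.
    Returns {species: True} when max intraspecific < min interspecific distance."""
    max_intra = defaultdict(float)
    min_inter = defaultdict(lambda: float("inf"))
    for (i, j), d in dist.items():
        if labels[i] == labels[j]:
            max_intra[labels[i]] = max(max_intra[labels[i]], d)
        else:
            for sp in (labels[i], labels[j]):
                min_inter[sp] = min(min_inter[sp], d)
    return {sp: max_intra[sp] < min_inter[sp] for sp in set(labels)}

labels = ["sp_A", "sp_A", "sp_B", "sp_B"]  # hypothetical specimens
dist = {(0, 1): 0.006, (2, 3): 0.011, (0, 2): 0.048,
        (0, 3): 0.052, (1, 2): 0.047, (1, 3): 0.050}
print(local_barcode_gaps(labels, dist))  # both toy species show a gap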
DNA barcoding employs sequence diversity in a short standardized gene region to facilitate specimen identifications and species discovery. This study provides a DNA barcode reference library for North American pyraustines based upon the analysis of 1589 sequences recovered from 137 nominal species, 87% of the fauna. Data from 125 species were barcode compliant (>500bp, <1% n), and 99 of these taxa formed a distinct cluster that was assigned to a single BIN. The other 26 species were assigned to 56 BINs, reflecting frequent cases of deep intraspecific sequence divergence and a few instances of barcode sharing, creating a total of 155 BINs. Two systems for OTU designation, ABGD and BIN, were examined to check the correspondence between current taxonomy and sequence clusters. The BIN system performed better than ABGD in delimiting closely related species, while OTU counts with ABGD were influenced by the value employed for relative gap width. Different species with low or no interspecific divergence may represent cases of unrecognized synonymy, whereas those with high intraspecific divergence require further taxonomic scrutiny as they may involve cryptic diversity. The barcode library developed in this study will also help to advance understanding of relationships among species of Pyraustinae.", "which Studied taxonomic group (Biology) ?", "Pyraustinae", 42.0, 53.0], ["Biting midges of the genus Culicoides (Diptera: Ceratopogonidae) are insect vectors of economically important veterinary diseases such as African horse sickness virus and bluetongue virus. However, the identification of Culicoides based on morphological features is difficult. The sequencing of mitochondrial cytochrome oxidase subunit I (COI), referred to as DNA barcoding, has been proposed as a tool for rapid identification to species. Hence, a study was undertaken to establish DNA barcodes for all morphologically determined Culicoides species in Swedish collections. In total, 237 specimens of Culicoides representing 37 morphologically distinct species were used. The barcoding generated 37 supported clusters, 31 of which were in agreement with the morphological determination. However, two pairs of closely related species could not be separated using the DNA barcode approach. Moreover, Culicoides obsoletus Meigen and Culicoides newsteadi Austen showed relatively deep intraspecific divergence (more than 10 times the average), which led to the creation of two cryptic species within each of C. obsoletus and C. newsteadi. The use of COI barcodes as a tool for the species identification of biting midges can differentiate 95% of species studied. Identification of some closely related species should employ a less conserved region, such as a ribosomal internal transcribed spacer.", "which Studied taxonomic group (Biology) ?", "Ceratopogonidae", 48.0, 63.0], ["Abstract. Carrion-breeding Sarcophagidae (Diptera) can be used to estimate the post-mortem interval in forensic cases. Difficulties with accurate morphological identifications at any life stage and a lack of documented thermobiological profiles have limited their current usefulness. The molecular-based approach of DNA barcoding, which utilises a 648-bp fragment of the mitochondrial cytochrome oxidase subunit I gene, was evaluated in a pilot study for discrimination between 16 Australian sarcophagids. The current study comprehensively evaluated barcoding for a larger taxon set of 588 Australian sarcophagids. 
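The Pyraustinae entry above tallies species whose specimens all fall into a single BIN against cases of BIN splitting and BIN sharing. A small sketch of that bookkeeping, given hypothetical (species, BIN) records per specimen; the category names follow common usage, not any single study's exact definitions:

from collections import defaultdict

def classify(records):
    """records: list of (species, bin_id) pairs, one per specimen."""
    bins_of = defaultdict(set)     # BINs observed per species
    species_of = defaultdict(set)  # species observed per BIN
    for sp, b in records:
        bins_of[sp].add(b)
        species_of[b].add(sp)
    result = {}
    for sp, bins in bins_of.items():
        split = len(bins) > 1                                # species spans >1 BIN
        merged = any(len(species_of[b]) > 1 for b in bins)   # shares a BIN
        result[sp] = ("mixture" if split and merged else
                      "split" if split else "merge" if merged else "match")
    return result

records = [("sp_A", "BIN1"), ("sp_A", "BIN1"), ("sp_B", "BIN2"),
           ("sp_B", "BIN3"), ("sp_C", "BIN4"), ("sp_D", "BIN4")]
print(classify(records))  # sp_A: match, sp_B: split, sp_C/sp_D: merge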
In total, 39 of the 84 known Australian species were represented by 580 specimens, which includes 92% of potentially forensically important species. A further eight specimens could not be identified, but were included nonetheless as six unidentifiable taxa. A neighbour-joining tree was generated and nucleotide sequence divergences were calculated. All species except Sarcophaga (Fergusonimyia) bancroftorum, known for high morphological variability, were resolved as monophyletic (99.2% of cases), with bootstrap support of 100. Excluding S. bancroftorum, the mean intraspecific and interspecific variation were 1.12% and 2.81–11.23%, respectively, allowing for species discrimination. DNA barcoding was therefore validated as a suitable method for molecular identification of Australian Sarcophagidae, which will aid in the implementation of this fauna in forensic entomology.", "which Order (Taxonomy - biology) ?", "Diptera", 42.0, 49.0], ["Dasysyrphus Enderlein (Diptera: Syrphidae) has posed taxonomic challenges to researchers in the past, primarily due to their lack of interspecific diagnostic characters. In the present study, DNA data (mitochondrial cytochrome c oxidase sub-unit I—COI) were combined with morphology to help delimit species. This led to two species being resurrected from synonymy (D. laticaudus and D. pacificus) and the discovery of one new species (D. occidualis sp. nov.). An additional new species was described based on morphology alone (D. richardi sp. nov.), as the specimens were too old to obtain COI. Part of the taxonomic challenge presented by this group arises from missing type specimens. Neotypes are designated here for D. pauxillus and D. pinastri to bring stability to these names. An illustrated key to 13 Nearctic species is presented, along with descriptions, maps and supplementary data. A phylogeny based on COI is also presented and discussed.", "which Order (Taxonomy - biology) ?", "Diptera", 23.0, 30.0], ["This paper reports the first tests of the suitability of the standardized mitochondrial cytochrome c oxidase subunit I (COI) barcoding system for the identification of Canadian deerflies and horseflies. Two additional mitochondrial molecular markers were used to determine whether unambiguous species recognition in tabanids can be achieved. Our 332 Canadian tabanid samples yielded 650 sequences from five genera and 42 species. Standard COI barcodes demonstrated a strong A + T bias (mean 68.1%), especially at third codon positions (mean 93.0%). Our preliminary test of this system showed that the standard COI barcode worked well for Canadian Tabanidae: the target DNA can be easily recovered from small amounts of insect tissue and aligned for all tabanid taxa. Each tabanid species possessed distinctive sets of COI haplotypes which discriminated well among species. Average conspecific Kimura two-parameter (K2P) divergence (0.49%) was 12 times lower than the average divergence among congeneric species. Both the neighbour-joining and the Bayesian methods produced trees with identical monophyletic species groups. Two species, Chrysops dawsoni Philip and Chrysops montanus Osten Sacken (Diptera: Tabanidae), showed relatively deep intraspecific sequence divergences (∼10 times the average) for all three mitochondrial gene regions analysed. We suggest provisional differentiation of Ch. montanus into two haplotypes, namely, Ch. montanus haplomorph 1 and Ch.
montanus haplomorph 2, both defined by their molecular sequences and by newly discovered differences in structural features near their ocelli.", "which Order (Taxonomy - biology) ?", "Diptera", 1186.0, 1193.0], ["Abstract A feasibility test of molecular identification of European fruit flies (Diptera: Tephritidae) based on COI barcode sequences has been executed. A dataset containing 555 sequences of 135 ingroup species from three subfamilies and 42 genera and one single outgroup species has been analysed. 73.3% of all included species could be identified based on their COI barcode gene, based on similarity and distances. The low success rate is caused by singletons as well as some problematic groups: several species groups within the genus Terellia and especially the genus Urophora. With slightly more than 100 sequences \u2013 almost 20% of the total \u2013 this genus alone constitutes the larger part of the failure for molecular identification for this dataset. Deleting the singletons and Urophora results in a success-rate of 87.1% of all queries and 93.23% of the not discarded queries as correctly identified. Urophora is of special interest due to its economic importance as beneficial species for weed control, therefore it is desirable to have alternative markers for molecular identification. We demonstrate that the success of DNA barcoding for identification purposes strongly depends on the contents of the database used to BLAST against. Especially the necessity of including multiple specimens per species of geographically distinct populations and different ecologies for the understanding of the intra- versus interspecific variation is demonstrated. Furthermore thresholds and the distinction between true and false positives and negatives should not only be used to increase the reliability of the success of molecular identification but also to point out problematic groups, which should then be flagged in the reference database suggesting alternative methods for identification.", "which Order (Taxonomy - biology) ?", "Diptera", 81.0, 88.0], ["Although central to much biological research, the identification of species is often difficult. The use of DNA barcodes, short DNA sequences from a standardized region of the genome, has recently been proposed as a tool to facilitate species identification and discovery. However, the effectiveness of DNA barcoding for identifying specimens in species-rich tropical biotas is unknown. Here we show that cytochrome c oxidase I DNA barcodes effectively discriminate among species in three Lepidoptera families from Area de Conservaci\u00f3n Guanacaste in northwestern Costa Rica. We found that 97.9% of the 521 species recognized by prior taxonomic work possess distinctive cytochrome c oxidase I barcodes and that the few instances of interspecific sequence overlap involve very similar species. We also found two or more barcode clusters within each of 13 supposedly single species. Covariation between these clusters and morphological and/or ecological traits indicates overlooked species complexes. If these results are general, DNA barcoding will significantly aid species identification and discovery in tropical settings.", "which Order (Taxonomy - biology) ?", "Lepidoptera", 488.0, 499.0], ["Biting midges of the genus Culicoides (Diptera: Ceratopogonidae) are insect vectors of economically important veterinary diseases such as African horse sickness virus and bluetongue virus. 
However, the identification of Culicoides based on morphological features is difficult. The sequencing of mitochondrial cytochrome oxidase subunit I (COI), referred to as DNA barcoding, has been proposed as a tool for rapid identification to species. Hence, a study was undertaken to establish DNA barcodes for all morphologically determined Culicoides species in Swedish collections. In total, 237 specimens of Culicoides representing 37 morphologically distinct species were used. The barcoding generated 37 supported clusters, 31 of which were in agreement with the morphological determination. However, two pairs of closely related species could not be separated using the DNA barcode approach. Moreover, Culicoides obsoletus Meigen and Culicoides newsteadi Austen showed relatively deep intraspecific divergence (more than 10 times the average), which led to the creation of two cryptic species within each of C. obsoletus and C. newsteadi. The use of COI barcodes as a tool for the species identification of biting midges can differentiate 95% of species studied. Identification of some closely related species should employ a less conserved region, such as a ribosomal internal transcribed spacer.", "which Order (Taxonomy - biology) ?", "Diptera", 39.0, 46.0], ["This study provides a first, comprehensive, diagnostic use of DNA barcodes for the Canadian fauna of noctuoids or \u201cowlet\u201d moths (Lepidoptera: Noctuoidea) based on vouchered records for 1,541 species (99.1% species coverage), and more than 30,000 sequences. When viewed from a Canada-wide perspective, DNA barcodes unambiguously discriminate 90% of the noctuoid species recognized through prior taxonomic study, and resolution reaches 95.6% when considered at a provincial scale. Barcode sharing is concentrated in certain lineages with 54% of the cases involving 1.8% of the genera. Deep intraspecific divergence exists in 7.7% of the species, but further studies are required to clarify whether these cases reflect an overlooked species complex or phylogeographic variation in a single species. Non-native species possess higher Nearest-Neighbour (NN) distances than native taxa, whereas generalist feeders have lower NN distances than those with more specialized feeding habits. We found high concordance between taxonomic names and sequence clusters delineated by the Barcode Index Number (BIN) system with 1,082 species (70%) assigned to a unique BIN. The cases of discordance involve both BIN mergers and BIN splits with 38 species falling into both categories, most likely reflecting bidirectional introgression. One fifth of the species are involved in a BIN merger reflecting the presence of 158 species sharing their barcode sequence with at least one other taxon, and 189 species with low, but diagnostic COI divergence. A very few cases (13) involved species whose members fell into both categories. Most of the remaining 140 species show a split into two or three BINs per species, while Virbia ferruginosa was divided into 16. The overall results confirm that DNA barcodes are effective for the identification of Canadian noctuoids. 
This study also affirms that BINs are a strong proxy for species, providing a pathway for a rapid, accurate estimation of animal diversity.", "which Order (Taxonomy - biology) ?", "Lepidoptera", 129.0, 140.0], ["This data release provides COI barcodes for 366 species of parasitic flies (Diptera: Tachinidae), enabling the DNA based identification of the majority of northern European species and a large proportion of Palearctic genera, regardless of the developmental stage. The data will provide a tool for taxonomists and ecologists studying this ecologically important but challenging parasitoid family. A comparison of minimum distances between the nearest neighbors revealed the mean divergence of 5.52% that is approximately the same as observed earlier with comparable sampling in Lepidoptera, but clearly less than in Coleoptera. Full barcode-sharing was observed between 13 species pairs or triplets, equaling to 7.36% of all species. Delimitation based on Barcode Index Number (BIN) system was compared with traditional classification of species and interesting cases of possible species oversplits and cryptic diversity are discussed. Overall, DNA barcodes are effective in separating tachinid species and provide novel insight into the taxonomy of several genera.", "which Order (Taxonomy - biology) ?", "Diptera", 76.0, 83.0], ["This study summarizes results of a DNA barcoding campaign on German Diptera, involving analysis of 45,040 specimens. The resultant DNA barcode library includes records for 2,453 named species comprising a total of 5,200 barcode index numbers (BINs), including 2,700 COI haplotype clusters without species\u2010level assignment, so called \u201cdark taxa.\u201d Overall, 88 out of 117 families (75%) recorded from Germany were covered, representing more than 50% of the 9,544 known species of German Diptera. Until now, most of these families, especially the most diverse, have been taxonomically inaccessible. By contrast, within a few years this study provided an intermediate taxonomic system for half of the German Dipteran fauna, which will provide a useful foundation for subsequent detailed, integrative taxonomic studies. Using DNA extracts derived from bulk collections made by Malaise traps, we further demonstrate that species delineation using BINs and operational taxonomic units (OTUs) constitutes an effective method for biodiversity studies using DNA metabarcoding. As the reference libraries continue to grow, and gaps in the species catalogue are filled, BIN lists assembled by metabarcoding will provide greater taxonomic resolution. The present study has three main goals: (a) to provide a DNA barcode library for 5,200 BINs of Diptera; (b) to demonstrate, based on the example of bulk extractions from a Malaise trap experiment, that DNA barcode clusters, labelled with globally unique identifiers (such as OTUs and/or BINs), provide a pragmatic, accurate solution to the \u201ctaxonomic impediment\u201d; and (c) to demonstrate that interim names based on BINs and OTUs obtained through metabarcoding provide an effective method for studies on species\u2010rich groups that are usually neglected in biodiversity research projects because of their unresolved taxonomy.", "which Order (Taxonomy - biology) ?", "Diptera", 68.0, 75.0], ["Abstract A short fragment of mt DNA from the cytochrome c oxidase 1 (CO1) region was used to provide the first CO1 barcodes for 37 species of Canadian mosquitoes (Diptera: Culicidae) from the provinces Ontario and New Brunswick. 
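The German Diptera study above builds OTUs from bulk Malaise-trap samples by metabarcoding. As a toy illustration only: a greedy centroid-style OTU picker that admits a read into the first existing cluster within roughly 2% p-distance (about 98% similarity); production pipelines, and the RESL algorithm behind BINs, are far more sophisticated, and the reads below are fabricated.

# Greedy OTU picking sketch; each read joins the first centroid within `radius`,
# otherwise it seeds a new OTU.
def p_distance(a, b):
    return sum(x != y for x, y in zip(a, b)) / min(len(a), len(b))

def pick_otus(reads, radius=0.02):
    centroids, otus = [], []
    for r in reads:
        for k, c in enumerate(centroids):
            if p_distance(r, c) <= radius:
                otus[k].append(r)
                break
        else:
            centroids.append(r)  # no centroid close enough: new OTU
            otus.append([r])
    return otus

reads = ["ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTAC",
         "ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTAT",
         "TTGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTAC"]
print(len(pick_otus(reads)))  # -> 2 OTUs for these toy reads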
Sequence variation was analysed in a 617-bp fragment from the 5′ end of the CO1 region. Sequences of each mosquito species formed barcode clusters with tight cohesion that were usually clearly distinct from those of allied species. CO1 sequence divergences were, on average, nearly 20 times higher for congeneric species than for members of a species; divergences between congeneric species averaged 10.4% (range 0.2–17.2%), whereas those for conspecific individuals averaged 0.5% (range 0.0–3.9%).", "which Order (Taxonomy - biology) ?", "Diptera", 163.0, 170.0], ["Abstract In this study, we analysed the applicability of DNA barcodes for delimitation of 79 specimens of 13 species of nonbiting midges in the subfamily Tanypodinae (Diptera: Chironomidae) from São Paulo State, Brazil. Our results support DNA barcoding as an excellent tool for species identification and for solving taxonomic conflicts in the genus Labrundinia. Molecular analysis of cytochrome c oxidase subunit I (COI) gene sequences yielded taxon identification trees, supporting 13 cohesive species clusters, of which three similar groups were subsequently linked to morphological variation at the larval and pupal stage. Additionally, another cluster previously described by means of morphology was linked to molecular markers. We found a distinct barcode gap, and in some species substantial interspecific pairwise divergences (up to 19.3%) were observed, which permitted identification of all analysed species. The results also indicated that barcodes can be used to associate life stages of chironomids since COI was easily amplified and sequenced from different life stages with universal barcode primers. Résumé Our study evaluates the usefulness of DNA barcodes for delimiting 79 specimens of 13 species of nonbiting midges of the subfamily Tanypodinae (Diptera: Chironomidae) from the state of São Paulo, Brazil. Our study confirms the use of DNA barcodes as an excellent tool for species identification and for resolving taxonomic problems in the genus Labrundinia. Molecular analysis of COI gene sequences yielded taxon identification trees delimiting 13 cohesive species groups, of which three similar groups were subsequently linked to morphological variation in the larval and pupal stages. In addition, another group previously described from morphological characters was linked to molecular markers. A distinct barcode gap exists and, in some species, there are substantial pairwise interspecific divergences (up to 19.3%), which permitted the identification of all the species examined. Our results also show that barcodes can be used to associate the different life stages of chironomids, since the COI gene is easily amplified and sequenced from the different stages with universal barcode primers.", "which Order (Taxonomy - biology) ?", "Diptera", 167.0, 174.0], ["Abstract The DNA barcode reference library for Lepidoptera holds much promise as a tool for taxonomic research and for providing the reliable identifications needed for conservation assessment programs.
We gathered sequences for the barcode region of the mitochondrial cytochrome c oxidase subunit I gene from 160 of the 176 nominal species of Erebidae moths (Insecta: Lepidoptera) known from the Iberian Peninsula. These results arise from a research project constructing a DNA barcode library for the insect species of Spain. New records for 271 specimens (122 species) are coupled with preexisting data for 38 species from the Iberian fauna. Mean interspecific distance was 12.1%, while the mean nearest neighbour divergence was 6.4%. All 160 species possessed diagnostic barcode sequences, but one pair of congeneric taxa (Eublemma rosea and Eublemma rietzi) were assigned to the same BIN. As well, intraspecific sequence divergences higher than 1.5% were detected in four species which likely represent species complexes. This study reinforces the effectiveness of DNA barcoding as a tool for monitoring biodiversity in particular geographical areas and the strong correspondence between sequence clusters delineated by BINs and species recognized through detailed taxonomic analysis.", "which Class (Taxonomy - biology) ?", "Insecta", 360.0, 367.0], [". Changes in disturbance due to fire regime in southwestern Pinus ponderosa forests over the last century have led to dense forests that are threatened by widespread fire. It has been shown in other studies that a pulse of native, early-seral opportunistic species typically follows such disturbance events. With the growing importance of exotic plants in local flora, however, these exotics often fill this opportunistic role in recovery. We report the effects of fire severity on exotic plant species following three widespread fires of 1996 in northern Arizona P. ponderosa forests. Species richness and abundance of all vascular plant species, including exotics, were higher in burned than nearby unburned areas. Exotic species were far more important, in terms of cover, where fire severity was highest. Species present after wildfires include those of the pre-disturbed forest and new species that could not be predicted from above-ground flora of nearby unburned forests.", "which Type of disturbance ?", "Fire", 32.0, 36.0], ["Aim We tested the hypothesis that anthropogenic fires favour the successful establishment of alien annual species to the detriment of natives in the Chilean coastal matorral.", "which Type of disturbance ?", "Fire", NaN, NaN], ["Island ecosystems are notably susceptible to biological invasions (Elton 1958), and the Hawaiian islands in particular have been colonized by many introduced species (Loope and Mueller-Dombois 1989). Introduced plants now dominate extensive areas of the Hawaiian Islands, and 86 species of alien plants are presently considered to pose serious threats to Hawaiian communities and ecosystems (Smith 1985). Among the most important invasive plants are several species of tropical and subtropical grasses that use the C4 photosynthetic pathway. These grasses now dominate extensive areas of dry and seasonally dry habitats in Hawai'i. They may compete with native species, and they have also been shown to alter hydrological properties in the areas they invade (Mueller-Dombois 1973). Most importantly, alien grasses can introduce fire into areas where it was previously rare or absent (Smith 1985), thereby altering the structure and functioning of previously native-dominated ecosystems.
Many of these grasses evolved in fire-affected areas and have mechanisms for surviving and recovering rapidly from fire (Vogl 1975, Christensen 1985), while most native species in Hawai'i have little background with fire (Mueller-Dombois 1981) and hence few or no such mechanisms. Consequently, grass invasion could initiate a grass/fire cycle whereby invading grasses promote fire, which in turn favors alien grasses over native species. Such a scenario has been suggested in a number of areas, including Latin America, western North America, Australia, and Hawai'i (Parsons 1972, Smith 1985, Christensen and Burrows 1986, Mack 1986, MacDonald and Frame 1988). In most of these cases, land clearing by humans initiates colonization by alien grasses, and the grass/fire cycle then leads to their persistence. In Hawai'i and perhaps other areas, however, grass invasion occurs without any direct human intervention. Where such invasions initiate a grass/fire cycle", "which Type of disturbance ?", "Fire", 827.0, 831.0], ["Aim of study: Acacia dealbata is a naturalized tree of invasive behaviour that has expanded from small plots associated with vineyards into forest ecosystems. Our main objective is to find evidence to support the notion that disturbances, particularly forest fires, are important driving factors in the current expansion of A. dealbata. Area of study: We mapped its current distribution using three study areas and assessed the temporal changes registered in forest cover in these areas of the valley of the river Mino. Material and Methods: The analyses were based on visual interpretation of aerial photographs taken in 1985 and 2003 of three 1x1 km study areas and field work. Main result: 62.4%, 48.6% and 22.2% of the surface area was covered by A. dealbata in 2003 in pure or mixed stands. Furthermore, areas composed exclusively of A. dealbata make up 33.8%, 15.2% and 5.7% of the stands. The transition matrix analyses between the two dates support our hypothesis that the areas currently covered by A. dealbata make up a greater proportion of the forest area previously classified as unwooded or open forest than those without A. dealbata cover. Both of these surface types are the result of an important impact of fire in the region. Within each area, A. dealbata is mainly located on steeper terrain, which is more affected by fires. Research highlights: A. dealbata is becoming the dominant tree species over large areas and the invasion of this species gives rise to monospecific stands, which may have important implications for future fire regimes. Keywords: Fire regime; Mimosa; plant invasion; silver wattle.", "which Type of disturbance ?", "Fire", 1226.0, 1230.0], ["The potential of introduced species to become invasive is often linked to their ability to colonise disturbed habitats rapidly. We studied the effects of major disturbance by severe storms on the indigenous mussel Perna perna and the invasive mussel Mytilus galloprovincialis in sympatric intertidal populations on the south coast of South Africa. At the study sites, these species dominate different shore levels and co-exist in the mid mussel zone. We tested the hypotheses that in the mid-zone P. perna would suffer less dislodgment than M. galloprovincialis, because of its greater tenacity, while M. galloprovincialis would respond with a higher re-colonisation rate. We estimated the percent cover of the 2 mussels in the mid-zone from photographs, once before severe storms and 3 times afterwards. M.
galloprovincialis showed faster re-colonisation and 3 times more cover than P. perna 1 and 1.5 yr after the storms (when populations had recovered). Storm-driven dislodgment in the mid-zone was highest for the species that initially dominated at each site, conforming to the concept of compensatory mortality. This resulted in similar cover of the 2 species immediately after the storms. Thus, the storm wave forces exceeded the tenacity even of P. perna, while the higher recruitment rate of M. galloprovincialis can explain its greater colonisation ability. We predict that, because of its weaker attachment strength, M. galloprovincialis will be largely excluded from open coast sites where wave action is generally stronger, but that its greater capacity for exploitation competition through re-colonisation will allow it to outcompete P. perna in more sheltered areas (especially in bays) that are periodically disturbed by storms.", "which Type of disturbance ?", " Storm", 1209.0, 1215.0], ["This study is a comparison of the spontaneous vascular flora of five Italian cities: Milan, Ancona, Rome, Cagliari and Palermo. The aims of the study are to test the hypothesis that urbanization results in uniformity of urban floras, and to evaluate the role of alien species in the flora of settlements located in different phytoclimatic regions. To obtain comparable data, ten plots of 1 ha, each representing typical urban habitats, were analysed in each city. The results indicate a low floristic similarity between the cities, while the strongest similarity appears within each city and between each city and the seminatural vegetation of the surrounding region. In the Mediterranean settlements, even the most urbanized plots reflect the characters of the surrounding landscape and are rich in native species, while aliens are relatively few. These results differ from the reported uniformity and the high proportion of aliens which generally characterize urban floras elsewhere. To explain this trend the importance of apophytes (indigenous plants expanding into man-made habitats) is highlighted; several Mediterranean species adapted to disturbance (i.e. grazing, trampling, and human activities) are pre-adapted to the urban environment. In addition, consideration is given to the minor role played by the ‘urban heat island’ in the Mediterranean basin, and to the structure and history of several Italian settlements, where ancient walls, ruins and archaeological sites in the periphery as well as in the historical centres act as conservative habitats and provide connection with seed-sources on the outskirts.", "which Type of disturbance ?", "Urbanization", 182.0, 194.0], ["We report the impact of an extreme weather event, the October 1987 severe storm, on fragmented woodlands in southern Britain. We analysed ecological changes between 1971 and 2002 in 143 200-m2 plots in 10 woodland sites exposed to the storm with an ecologically equivalent sample of 150 plots in 16 non-exposed sites. Comparing both years, understorey plant species-richness, species composition, soil pH and woody basal area of the tree and shrub canopy were measured. We tested the hypothesis that the storm had deflected sites from the wider national trajectory of an increase in woody basal area and reduced understorey species-richness associated with ageing canopies and declining woodland management.
We also expected storm disturbance to amplify the background trend of increasing soil pH, a UK-wide response to reduced atmospheric sulphur deposition. Path analysis was used to quantify indirect effects of storm exposure on understorey species richness via changes in woody basal area and soil pH. By 2002, storm exposure was estimated to have increased mean species richness per 200 m2 by 32%. Woody basal area changes were highly variable and did not significantly differ with storm exposure. Increasing soil pH was associated with a 7% increase in richness. There was no evidence that soil pH increased more as a function of storm exposure. Changes in species richness and basal area were negatively correlated: a 3.4% decrease in richness occurred for every 0.1-m2 increase in woody basal area per plot. Despite all sites substantially exceeding the empirical critical load for nitrogen deposition, there was no evidence that in the 15 years since the storm, disturbance had triggered a eutrophication effect associated with dominance of gaps by nitrophilous species. Synthesis. Although the impacts of the 1987 storm were spatially variable in terms of impacts on woody basal area, the storm had a positive effect on understorey species richness. There was no evidence that disturbance had increased dominance of gaps by invasive species. This could change if recovery from acidification results in a soil pH regime associated with greater macronutrient availability.", "which Type of disturbance ?", "Storm", 74.0, 79.0], ["Summary 1. Wetlands in urban regions are subjected to a wide variety of anthropogenic disturbances, many of which may promote invasions of exotic plant species. In order to devise management strategies, the influence of different aspects of the urban and natural environments on invasion and community structure must be understood. 2. The roles of soil variables, anthropogenic effects adjacent to and within the wetlands, and vegetation structure on exotic species occurrence within 21 forested wetlands in north-eastern New Jersey, USA, were compared. The hypotheses were tested that different vegetation strata and different invasive species respond similarly to environmental factors, and that invasion increases with increasing direct human impact, hydrologic disturbance, adjacent residential land use and decreasing wetland area. Canonical correspondence analyses, correlation and logistic regression analyses were used to examine invasion by individual species and overall site invasion, as measured by the absolute and relative number of exotic species in the site flora. 3. Within each stratum, different sets of environmental factors separated exotic and native species. Nutrients, soil clay content and pH, adjacent land use and canopy composition were the most frequently identified factors affecting species, but individual species showed highly individualistic responses to the sets of environmental variables, often responding in opposite ways to the same factor. 4. Overall invasion increased with decreasing area but only when sites > 100 ha were included. Unexpectedly, invasion decreased with increasing proportions of industrial/commercial adjacent land use. 5. The hypotheses were only partially supported; invasion does not increase in a simple way with increasing human presence and disturbance. 6. Synthesis and applications.
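The storm study above uses path analysis to separate direct and indirect effects of exposure on richness. Below is a minimal sketch of that kind of decomposition on simulated data, where an indirect effect is the product of standardized path coefficients; all variable names and numbers are illustrative, not the study's fitted model.

import numpy as np

rng = np.random.default_rng(1)
n = 200
exposure = rng.normal(size=n)                 # storm exposure (standardized)
basal = -0.5 * exposure + rng.normal(size=n)  # mediator: woody basal area
richness = -0.6 * basal + 0.1 * exposure + rng.normal(size=n)

def std_beta(y, *xs):
    """Standardized OLS coefficients of y on predictors xs (no intercept needed
    because everything is z-scored)."""
    Z = np.column_stack([(x - x.mean()) / x.std() for x in xs])
    yz = (y - y.mean()) / y.std()
    return np.linalg.lstsq(Z, yz, rcond=None)[0]

a = std_beta(basal, exposure)[0]            # path: exposure -> basal area
b = std_beta(richness, basal, exposure)[0]  # path: basal area -> richness
print(f"indirect effect of exposure via basal area: {a * b:.3f}")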
The results suggest that a suite of environmental conditions can be identified that are associated with invasion into urban wetlands, which can be widely used for assessment and management. However, a comprehensive ecosystem approach is needed that places the remediation of physical alterations from urbanization within a landscape context. Specifically, sediment inputs and hydrologic changes need to be related to adjoining urban land use and to the overlapping requirements of individual native and exotic species.", "which Type of disturbance ?", "Urbanization", 2153.0, 2165.0], ["Fire contributes to the maintenance of species diversity in many plant communities, but few studies have compared its impacts in similar communities that vary in such attributes as soils and productivity. We compared how a wildfire affected plant diversity in chaparral vegetation on serpentine and sandstone soils. We hypothesized that because biomass and cover are lower in serpentine chaparral, space and light are less limiting, and therefore postfire increases in plant species diversity would be lower than in sandstone chaparral. In 40 pairs of burned and unburned 250-m2 plots, we measured changes in the plant community after a fire for three years. The diversity of native and exotic species increased more in response to fire in sandstone than serpentine chaparral, at both the local (plot) and regional (whole study) scales. In serpentine compared with sandstone chaparral, specialized fire-dependent species were less prevalent, mean fire severity was lower, mean time since last fire was longer, postfire shrub recruitment was lower, and regrowth of biomass was slower. Within each chaparral type, the responses of diversity to fire were positively correlated with prefire shrub cover and with a number of measures of soil fertility. Fire severity was negatively related to the postfire change in diversity in sandstone chaparral, and unimodally related to the postfire change in diversity in serpentine chaparral. Our results suggest that the effects of fire on less productive plant communities like serpentine chaparral may be less pronounced, although longer lasting, than the effects of fire on similar but more productive communities.", "which Type of disturbance ?", "Fire", 0.0, 4.0], ["Biological invasions are a global phenomenon that can accelerate disturbance regimes and facilitate colonization by other nonnative species. In a coastal grassland in northern California, we conducted a four-year exclosure experiment to assess the effects of soil disturbances by feral pigs (Sus scrofa) on plant community composition and soil nitrogen availability. Our results indicate that pig disturbances had substantial effects on the community, although many responses varied with plant functional group, geographic origin (native vs. exotic), and grassland type. ('Short patches' were dominated by annual grasses and forbs, whereas 'tall patches' were dominated by perennial bunchgrasses.) Soil disturbances by pigs increased the richness of exotic plant species by 29% and native taxa by 24%. Although native perennial grasses were unaffected, disturbances reduced the biomass of exotic perennial grasses by 52% in tall patches and had no effect in short patches. Pig disturbances led to a 69% decrease in biomass of exotic annual grasses in tall patches but caused a 62% increase in short patches.
Native, nongrass monocots exhibited the opposite biomass pattern to that seen for exotic annual grasses, with disturbance causing an 80% increase in tall patches and a 56% decrease in short patches. Native forbs were unaffected by disturbance, whereas the biomass of exotic forbs increased by 79% with disturbance in tall patches and showed no response in short patches. In contrast to these vegetation results, we found no evidence that pig disturbances affected nitrogen mineralization rates or soil moisture availability. Thus, we hypothesize that the observed vegetation changes were due to space clearing by pigs that provided greater opportunities for colonization and reduced intensity of competition, rather than changes in soil characteristics. In summary, although responses were variable, disturbances by feral pigs generally promoted the continued invasion of this coastal grassland by exotic plant taxa.", "which Type of disturbance ?", "Soil disturbance", NaN, NaN], ["Facilitation among species may promote non-native plant invasions through alteration of environmental conditions, enemies or mutualists. However, the role of non-trophic indirect facilitation in invasions has rarely been examined. We used a long-term field experiment to test for indirect facilitation by invasions of Microstegium vimineum (stiltgrass) on a secondary invasion of Alliaria petiolata (garlic mustard) by introducing Alliaria seed into replicated plots previously invaded experimentally by Microstegium. Alliaria more readily colonized control plots without Microstegium but produced almost seven times more biomass and nearly four times as many siliques per plant in Microstegium-invaded plots. Improved performance of Alliaria in Microstegium-invaded plots compared to control plots overwhelmed differences in total number of plants such that, on average, invaded plots contained 327% greater total Alliaria biomass and 234% more total siliques compared to control plots. The facilitation of Alliaria in Microstegium-invaded plots was associated with an 85% reduction in the biomass of resident species at the peak of the growing season and significantly greater light availability in Microstegium-invaded than control plots early in the growing season. Synthesis. Our results demonstrate that an initial plant invasion associated with suppression of resident species and increased resource availability can facilitate a secondary plant invasion. Such positive interactions among species with similar habitat requirements, but offset phenologies, may exacerbate invasions and their impacts on native ecosystems.", "which Investigated species ?", "Plants", 842.0, 848.0], ["Invasive exotic plants reduce the diversity of native communities by displacing native species. According to the coexistence theory, native plants are able to coexist with invaders only when their fitness is not significantly smaller than that of the exotics or when they occupy a different niche. It has therefore been hypothesized that the survival of some native species at invaded sites is due to post-invasion evolutionary changes in fitness and/or niche traits. In common garden experiments, we tested whether plants from invaded sites of two native species, Impatiens noli-tangere and Galeopsis speciosa, outperform conspecifics from non-invaded sites when grown in competition with the invader (Impatiens parviflora).
We further examined whether the expected superior performance of the plants from the invaded sites is due to changes in the plant size (fitness proxy) and/or changes in the germination phenology and phenotypic plasticity (niche proxies). Invasion history did not influence the performance of any native species when grown with the exotic competitor. In I. noli-tangere, however, we found significant trait divergence with regard to plant size, germination phenology and phenotypic plasticity. In the absence of a competitor, plants of I. noli-tangere from invaded sites were larger than plants from non-invaded sites. The former plants germinated earlier than inexperienced conspecifics or an exotic congener. Invasion experience was also associated with increased phenotypic plasticity and an improved shade-avoidance syndrome. Although these changes indicate fitness and niche differentiation of I. noli-tangere at invaded sites, future research should examine more closely the adaptive value of these changes and their genetic basis.", "which Investigated species ?", "Plants", 16.0, 22.0], ["Functional traits, their plasticity and their integration in a phenotype have profound impacts on plant performance. We developed structural equation models (SEMs) to evaluate their relative contribution to promote invasiveness in plants along resource gradients. We compared 20 invasive-native phylogenetically and ecologically related pairs. SEMs included one morphological (root-to-shoot ratio (R/S)) and one physiological (photosynthesis nitrogen-use efficiency (PNUE)) trait, their plasticities in response to nutrient and light variation, and phenotypic integration among 31 traits. Additionally, these components were related to two fitness estimators, biomass and survival. The relative contributions of traits, plasticity and integration were similar in invasive and native species. Trait means were more important than plasticity and integration for fitness. Invasive species showed higher fitness than natives because: they had lower R/S and higher PNUE values across gradients; their higher PNUE plasticity positively influenced biomass and thus survival; and they offset more the cases where plasticity and integration had a negative direct effect on fitness. Our results suggest that invasiveness is promoted by higher values in the fitness hierarchy--trait means are more important than trait plasticity, and plasticity is similar to integration--rather than by a specific combination of the three components of the functional strategy.", "which Investigated species ?", "Plants", 231.0, 237.0], ["A central question in ecology concerns how some exotic plants that occur at low densities in their native range are able to attain much higher densities where they are introduced. This question has remained unresolved in part due to a lack of experiments that assess factors that affect the population growth or abundance of plants in both ranges. We tested two hypotheses for exotic plant success: escape from specialist insect herbivores and a greater response to disturbance in the introduced range. Within three introduced populations in Montana, USA, and three native populations in Germany, we experimentally manipulated insect herbivore pressure and created small-scale disturbances to determine how these factors affect the performance of houndstongue (Cynoglossum officinale), a widespread exotic in western North America. 
Herbivores reduced plant size and fecundity in the native range but had little effect on plant performance in the introduced range. Small-scale experimental disturbances enhanced seedling recruitment in both ranges, but subsequent seedling survival was more positively affected by disturbance in the introduced range. We combined these experimental results with demographic data from each population to parameterize integral projection population models to assess how enemy escape and disturbance might differentially influence C. officinale in each range. Model results suggest that escape from specialist insects would lead to only slight increases in the growth rate (lambda) of introduced populations. In contrast, the larger response to disturbance in the introduced vs. native range had much greater positive effects on lambda. These results together suggest that, at least in the regions where the experiments were performed, the differences in response to small disturbances by C. officinale contribute more to higher abundance in the introduced range compared to at home. Despite the challenges of conducting experiments on a wide biogeographic scale and the logistical constraints of adequately sampling populations within a range, this approach is a critical step forward to understanding the success of exotic plants.", "which Investigated species ?", "Plants", 55.0, 61.0], ["1 We tested the enemy release hypothesis for invasiveness using field surveys of herbivory on 39 exotic and 30 native plant species growing in natural areas near Ottawa, Canada, and found that exotics suffered less herbivory than natives. 2 For the 39 introduced species, we also tested relationships between herbivory, invasiveness and time since introduction to North America. Highly invasive plants had significantly less herbivory than plants ranked as less invasive. Recently arrived plants also tended to be more invasive; however, there was no relationship between time since introduction and herbivory. 3 Release from herbivory may be key to the success of highly aggressive invaders. Low herbivory may also indicate that a plant possesses potent defensive chemicals that are novel to North America, which may confer resistance to pathogens or enable allelopathy in addition to deterring herbivorous insects.", "which Investigated species ?", "Plants", 395.0, 401.0], ["Abstract How introduced plants, which may be locally adapted to specific climatic conditions in their native range, cope with the new abiotic conditions that they encounter as exotics is not well understood. In particular, it is unclear what role plasticity versus adaptive evolution plays in enabling exotics to persist under new environmental circumstances in the introduced range. We determined the extent to which native and introduced populations of St. John's Wort (Hypericum perforatum) are genetically differentiated with respect to leaf-level morphological and physiological traits that allow plants to tolerate different climatic conditions. In common gardens in Washington and Spain, and in a greenhouse, we examined clinal variation in percent leaf nitrogen and carbon, leaf \u03b413C values (as an integrative measure of water use efficiency), specific leaf area (SLA), root and shoot biomass, root/shoot ratio, total leaf area, and leaf area ratio (LAR). As well, we determined whether native European H. perforatum experienced directional selection on leaf-level traits in the introduced range and we compared, across gardens, levels of plasticity in these traits. 
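The houndstongue study above parameterizes integral projection models (IPMs) and reads off the population growth rate lambda. A bare-bones IPM sketch follows: discretize a size-structured kernel K(y, x) = survival/growth + fecundity on a mesh and take the dominant eigenvalue as lambda. The vital-rate functions and mesh settings below are invented for illustration, not the study's fitted rates.

import numpy as np

m, lower, upper = 100, 0.0, 10.0
h = (upper - lower) / m
x = lower + h * (np.arange(m) + 0.5)  # midpoints of the size mesh

surv = 1 / (1 + np.exp(-(x - 3)))     # hypothetical survival, rising with size
growth = lambda y, x: np.exp(-((y - (1 + 0.9 * x)) ** 2) / 2) / np.sqrt(2 * np.pi)
fecund = lambda y, x: 0.5 * x * np.exp(-((y - 1) ** 2) / 2) / np.sqrt(2 * np.pi)

Y, X = np.meshgrid(x, x, indexing="ij")  # K[i, j] maps size x_j -> size y_i
K = h * (surv[None, :] * growth(Y, X) + fecund(Y, X))  # midpoint-rule kernel
lam = max(abs(np.linalg.eigvals(K)))     # dominant eigenvalue = growth rate
print(f"lambda = {lam:.3f}")             # >1 means a growing population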
In field gardens in both Washington and Spain, native populations formed latitudinal clines in percent leaf N. In the greenhouse, native populations formed latitudinal clines in root and shoot biomass and total leaf area, and in the Washington garden only, native populations also exhibited latitudinal clines in percent leaf C and leaf \u03b413C. Traits that failed to show consistent latitudinal clines instead exhibited significant phenotypic plasticity. Introduced St. John's Wort populations also formed significant or marginally significant latitudinal clines in percent leaf N in Washington and Spain, percent leaf C in Washington, and in root biomass and total leaf area in the greenhouse. In the Washington common garden, there was strong directional selection among European populations for higher percent leaf N and leaf \u03b413C, but no selection on any other measured trait. The presence of convergent, genetically based latitudinal clines between native and introduced H. perforatum, together with previously published molecular data, suggest that native and exotic genotypes have independently adapted to a broad-scale variation in climate that varies with latitude.", "which Investigated species ?", "Plants", 24.0, 30.0], ["Deep in the heart of a longstanding invasion, an exotic grass is still invading. Range infilling potentially has the greatest impact on native communities and ecosystem processes, but receives much less attention than range expansion. \u2018Snapshot' studies of invasive plant dispersal, habitat and propagule limitations cannot determine whether a landscape is saturated or whether a species is actively infilling empty patches. We investigate the mechanisms underlying invasive plant infilling by tracking the localized movement and expansion of Microstegium vimineum populations from 2009 to 2011 at sites along a 100-km regional gradient in eastern U.S. deciduous forests. We find that infilling proceeds most rapidly where the invasive plants occur in warm, moist habitats adjacent to roads: under these conditions they produce copious seed, the dispersal distances of which increase exponentially with proximity to roadway. Invasion then appears limited where conditions are generally dry and cool as propagule pressure tapers off. Invasion also is limited in habitats >1 m from road corridors, where dispersal distances decline precipitously. In contrast to propagule and dispersal limitations, we find little evidence that infilling is habitat limited, meaning that as long as M. vimineum seeds are available and transported, the plant generally invades quite vigorously. Our results suggest an invasive species continues to spread, in a stratified manner, within the invaded landscape long after first arriving. These dynamics conflict with traditional invasion models that emphasize an invasive edge with distinct boundaries. We find that propagule pressure and dispersal regulate infilling, providing the basis for projecting spread and landscape coverage, ecological effects and the efficacy of containment strategies.", "which Investigated species ?", "Plants", 736.0, 742.0], ["Abstract The enemy release hypothesis (ERH) frequently has been invoked to explain the naturalization and spread of introduced species. One ramification of the ERH is that invasive plants sustain less herbivore pressure than do native species. Empirical studies testing the ERH have mostly involved two-way comparisons between invasive introduced plants and their native counterparts in the invaded region. 
Testing the ERH would be more meaningful if such studies also included introduced non-invasive species because introduced plants, regardless of their abundance or impact, may support a reduced insect herbivore fauna and experience less damage. In this study, we employed a three-way comparison, in which we compared herbivore faunas among native, introduced invasive, and introduced non-invasive plants in the genus Eugenia (Myrtaceae) which all co-occur in South Florida. We observed a total of 25 insect species in 12 families and 6 orders feeding on the six species of Eugenia. Of these insect species, the majority were native (72%), polyphagous (64%), and ectophagous (68%). We found that invasive introduced Eugenia has a similar level of herbivore richness as both the native and the non-invasive introduced Eugenia. However, the numbers and percentages of oligophagous insect species were greatest on the native Eugenia, but they were not different between the invasive and non-invasive introduced Eugenia. One oligophagous endophagous insect has likely shifted from the native to the invasive, but none to the non-invasive Eugenia. In summary, the invasive Eugenia encountered equal, if not greater, herbivore pressure than the non-invasive Eugenia, including from oligophagous and endophagous herbivores. Our data only provided limited support to the ERH. We would not have been able to draw this conclusion without inclusion of the non-invasive Eugenia species in the study.", "which Investigated species ?", "Plants", 181.0, 187.0], ["Abstract Changes in morphological traits along elevation and latitudinal gradients in ectotherms are often interpreted in terms of the temperature-size rule, which states that the body size of organisms increases under low temperatures, and is therefore expected to increase with elevation and latitude. However other factors like host plant might contribute to spatial patterns in size as well, particularly for polyphagous insects. Here elevation patterns for trait size and shape in two leafminer species are examined, Liriomyza huidobrensis (Blanchard) (Diptera: Agromyzidae) and L. sativae Blanchard, along a tropical elevation gradient in Java, Indonesia. Adult leafminers were trapped from different locations in the mountainous area of Dieng in the province of Central Java. To separate environmental versus genetic effects, L. huidobrensis originating from 1378 m and 2129 m ASL were reared in the laboratory for five generations. Size variation along the elevation gradient was only found in L. huidobrensis and this followed expectations based on the temperature-size rule. There were also complex changes in wing shape along the gradient. Morphological differences were influenced by genetic and environmental effects. Findings are discussed within the context of adaptation to different elevations in the two species.", "which Investigated species ?", "Insects", 425.0, 432.0], ["Although negative relationships between diversity (frequently measured as species richness) and invasibility at neighborhood or community scales have often been reported, realistic natural diversity gradients have rarely been studied at this scale. We recreated a naturally occurring gradient in species richness to test the effects of species richness on community invasibility. In central Texas savannas, as the proportion of woody plants increases (a process known as woody plant encroachment), herbaceous habitat is both lost and fragmented, and native herbaceous species richness declines. 
We examined the effects of these species losses on invasibility in situ by removing species that occur less frequently in herbaceous patches as woody plant encroachment advances. This realistic species removal was accompanied by a parallel and equivalent removal of biomass with no changes in species richness. Over two springs, the nonnative bunchgrass Bothriochloa ischaemum germinated significantly more often in the biomass-removal treatment than in unmanipulated control plots, suggesting an effect of native plant density independent of diversity. Additionally, significantly more germination occurred in the species-removal treatment than in the biomass-removal treatment. Changes in species richness had a stronger effect on B. ischaemum germination than changes in plant density, demonstrating that niche-related processes contributed more to biotic resistance in this system than did species-neutral competitive interactions. Similar treatment effects were found on transplant growth. Thus we show that woody plant encroachment indirectly facilitates the establishment of an invasive grass by reducing native diversity. Although we found a negative relationship between species richness and invasibility at the scale of plots with similar composition and environmental conditions, we found a positive relationship between species richness and invasibility at larger scales. This apparent paradox is consistent with reports from other systems and may be the result of variation in environmental factors at larger scales similarly influencing both invasibility and richness. The habitat loss and fragmentation associated with woody plant encroachment are two of many processes that commonly threaten biodiversity, including climate change. Many of these processes are similarly likely to increase invasibility via their negative effects on native diversity.", "which Investigated species ?", "Plants", 434.0, 440.0], ["Aim To quantify the vulnerability of habitats to invasion by alien plants having accounted for the effects of propagule pressure, time and sampling effort. Location New Zealand. Methods We used spatial, temporal and habitat information taken from 9297 herbarium records of 301 alien plant species to examine the vulnerability of 11 terrestrial habitats to plant invasions. A null model that randomized species records across habitats was used to account for variation in sampling effort and to derive a relative measure of invasion based either on all records for a species or only its first record. The relative level of invasion was related to the average distance of each habitat from the nearest conurbation, which was used as a proxy for propagule pressure. The habitat in which a species was first recorded was compared to the habitats encountered for all records of that species to determine whether the initial habitat could predict subsequent habitat occupancy. Results Variation in sampling effort in space and time significantly masked the underlying vulnerability of habitats to plant invasions. Distance from the nearest conurbation had little effect on the relative level of invasion in each habitat, but the number of first records of each species significantly declined with increasing distance. While Urban, Streamside and Coastal habitats were over-represented as sites of initial invasion, there was no evidence of major invasion hotspots from which alien plants might subsequently spread. 
Rather, the data suggest that certain habitats (especially Roadsides) readily accumulate alien plants from other habitats. Main conclusions Herbarium records combined with a suitable null model provide a powerful tool for assessing the relative vulnerability of habitats to plant invasion. The first records of alien plants tend to be found near conurbations, but this pattern disappears with subsequent spread. Regardless of the habitat where a species was first recorded, ultimately most alien plants spread to Roadside and Sparse habitats. This information suggests that such habitats may be useful targets for weed surveillance and monitoring.", "which Investigated species ?", "Plants", 67.0, 73.0], ["The introduced C4 bunchgrass, Schizachyrium condensatum, is abundant in unburned, seasonally dry woodlands on the island of Hawaii, where it promotes the spread of fire. After fire, it is partially replaced by Melinis minutiflora, another invasive C4 grass. Seed bank surveys in unburned woodland showed that Melinis seed is present in locations without adult plants. Using a combination of germination tests and seedling outplant experiments, we tested the hypothesis that Melinis was unable to invade the unburned woodland because of nutrient and/or light limitation. We found that Melinis germination and seedling growth are depressed by the low light levels common under Schizachyrium in unburned woodland. Outplanted Melinis seedlings grew rapidly to flowering and persisted for several years in unburned woodland without nutrient additions, but only if Schizachyrium individuals were removed. Nutrients alone did not facilitate Melinis establishment. Competition between Melinis and Schizachyrium naturally occurs when individuals of both species emerge from the seed bank simultaneously, or when seedlings of one species emerge in sites already dominated by individuals of the other species. When both species are grown from seed, we found that Melinis consistently outcompetes Schizachyrium, regardless of light or nutrient treatments. When seeds of Melinis were added to pots with well-established Schizachyrium (and vice versa), Melinis eventually invaded and overgrew adult Schizachyrium under high, but not low, nutrients. By contrast, Schizachyrium could not invade established Melinis pots regardless of nutrient level. A field experiment demonstrated that Schizachyrium individuals are suppressed by Melinis in burned sites through competition for both light and nutrients. Overall, Melinis is a dominant competitor over Schizachyrium once it becomes established, whether in a pot or in the field. We believe that the dominance of Schizachyrium, rather than Melinis, in the unburned woodland is the result of asymmetric competition due to the prior establishment of Schizachyrium in these sites. If Schizachyrium were not present, the unburned woodland could support dense stands of Melinis. Fire disrupts the priority effect of Schizachyrium and allows the dominant competitor (Melinis) to enter the system where it eventually replaces Schizachyrium through resource competition.", "which Investigated species ?", "Plants", 360.0, 366.0], ["Differences between native and exotic species in competitive ability and susceptibility to herbivores are hypothesized to facilitate coexistence. However, little fieldwork has been conducted to determine whether these differences are present in invaded communities. 
Here, we experimentally examined whether asymmetries exist between native and exotic plants in a community invaded for over 200 years and whether removing competitors or herbivores influences coexistence. We found that natives and exotics exhibit pronounced asymmetries, as exotics are competitively superior to natives, but are more significantly impacted by herbivores. We also found that herbivore removal mediated the outcome of competitive interactions and altered patterns of dominance across our field sites. Collectively, these findings suggest that asymmetric biotic interactions between native and exotic plants can help to facilitate coexistence in invaded communities.", "which Investigated species ?", "Plants", 351.0, 357.0], ["1. Biological invasion theory predicts that the introduction and establishment of non-native species is positively correlated with propagule pressure. Releases of pet and aquarium fishes to inland waters has a long history; however, few studies have examined the demographic basis of their importation and incidence in the wild. 2. For the 1500 grid squares (10\u00d710 km) that make up England, data on human demographics (population density, numbers of pet shops, garden centres and fish farms), the numbers of non-native freshwater fishes (from consented licences) imported in those grid squares (i.e. propagule pressure), and the reported incidences (in a national database) of non-native fishes in the wild were used to examine spatial relationships between the occurrence of non-native fishes and the demographic factors associated with propagule pressure, as well as to test whether the demographic factors are statistically reliable predictors of the incidence of non-native fishes, and as such surrogate estimators of propagule pressure. 3. Principal coordinates of neighbour matrices analyses, used to generate spatially explicit models, and confirmatory factor analysis revealed that spatial distributions of non-native species in England were significantly related to human population density, garden centre density and fish farm density. Human population density and the number of fish imports were identified as the best predictors of propagule pressure. 4. Human population density is an effective surrogate estimator of non-native fish propagule pressure and can be used to predict likely areas of non-native fish introductions. In conjunction with fish movements, where available, human population densities can be used to support biological invasion monitoring programmes across Europe (and perhaps globally) and to inform management decisions as regards the prioritization of areas for the control of non-native fish introductions. \u00a9 Crown copyright 2010. Reproduced with the permission of her Majesty's Stationery Office. Published by John Wiley & Sons, Ltd.", "which Investigated species ?", "Fishes", 180.0, 186.0], ["Most introduced species apparently have little impact on native biodiversity, but the proliferation of human vectors that transport species worldwide increases the probability of a region being affected by high\u2010impact invaders \u2013 i.e. those that cause severe declines in native species populations. Our study determined whether the number of high\u2010impact invaders can be predicted from the total number of invaders in an area, after controlling for species\u2013area effects. These two variables are positively correlated in a set of 16 invaded freshwater and marine systems from around the world. 
The relationship is a simple linear function; there is no evidence of synergistic or antagonistic effects of invaders across systems. A similar relationship is found for introduced freshwater fishes across 149 regions. In both data sets, high\u2010impact invaders comprise approximately 10% of the total number of invaders. Although the mechanism driving this correlation is likely a sampling effect, it is not simply the proportional sampling of a constant number of repeat\u2010offenders; in most cases, an invader is not reported to have strong impacts on native species in the majority of regions it invades. These findings link vector activity and the negative impacts of introduced species on biodiversity, and thus justify management efforts to reduce invasion rates even where numerous invasions have already occurred.", "which Investigated species ?", "Fishes", 783.0, 789.0], ["Woodlands comprised of planted, nonnative trees are increasing in extent globally, while native woodlands continue to decline due to human activities. The ecological impacts of planted woodlands may include changes to the communities of understory plants and animals found among these nonnative trees relative to native woodlands, as well as invasion of adjacent habitat areas through spread beyond the originally planted areas. Eucalypts (Eucalyptus spp.) are among the most widely planted trees worldwide, and are very common in California, USA. The goals of our investigation were to compare the biological communities of nonnative eucalypt woodlands to native oak woodlands in coastal central California, and to examine whether planted eucalypt groves have increased in size over the past decades. We assessed site and habitat attributes and characterized biological communities using understory plant, ground-dwelling arthropod, amphibian, and bird communities as indicators. Degree of difference between native and nonnative woodlands depended on the indicator used. Eucalypts had significantly greater canopy height and cover, and significantly lower cover by perennial plants and species richness of arthropods than oaks. Community composition of arthropods also differed significantly between eucalypts and oaks. Eucalypts had marginally significantly deeper litter depth, lower abundance of native plants with ranges limited to western North America, and lower abundance of amphibians. In contrast to these differences, eucalypt and oak groves had very similar bird community composition, species richness, and abundance. We found no evidence of \"invasional meltdown,\" documenting similar abundance and richness of nonnatives in eucalypt vs. oak woodlands. Our time-series analysis revealed that planted eucalypt groves increased 271% in size, on average, over six decades, invading adjacent areas. Our results inform science-based management of California woodlands, revealing that while bird communities would probably not be affected by restoration of eucalypt to oak woodlands, such a restoration project would not only stop the spread of eucalypts into adjacent habitats but would also enhance cover by western North American native plants and perennials, enhance amphibian abundance, and increase arthropod richness.", "which Investigated species ?", "Plants", 248.0, 254.0], ["Several hypotheses proposed to explain the success of introduced species focus on altered interspecific interactions. 
One of the most prominent, the Enemy Release Hypothesis, posits that invading species benefit compared to their native counterparts if they lose their herbivores and pathogens during the invasion process. We previously reported on a common garden experiment (from 2002) in which we compared levels of herbivory between 30 taxonomically paired native and introduced old-field plants. In this phylogenetically controlled comparison, herbivore damage tended to be higher on introduced than on native plants. This striking pattern, the opposite of current theory, prompted us to further investigate herbivory and several other interspecific interactions in a series of linked experiments with the same set of species. Here we show that, in these new experiments, introduced plants, on average, received less insect herbivory and were subject to half the negative soil microbial feedback compared to natives; attack by fungal and viral pathogens also tended to be reduced on introduced plants compared to natives. Although plant traits (foliar C:N, toughness, and water content) suggested that introduced species should be less resistant to generalist consumers, they were not consistently more heavily attacked. Finally, we used meta-analysis to combine data from this study with results from our previous work to show that escape generally was inconsistent among guilds of enemies: there were few instances in which escape from multiple guilds occurred for a taxonomic pair, and more cases in which the patterns of escape from different enemies canceled out. Our examination of multiple interspecific interactions demonstrates that escape from one guild of enemies does not necessarily imply escape from other guilds. Because the effects of each guild are likely to vary through space and time, the net effect of all enemies is also likely to be variable. The net effect of these interactions may create \u201cinvasion opportunity windows\u201d: times when introduced species make advances in native communities.", "which Investigated species ?", "Plants", 493.0, 499.0], ["There is often an inverse relationship between the diversity of a plant community and the invasibility of that community by non-native plants. Native herbivores that colonize novel plants may contribute to diversity\u2013invasibility relationships by limiting the relative success of non-native plants. Here, we show that, in large collections of non-native oak trees at sites across the USA, non-native oaks introduced to regions with greater oak species richness accumulated greater leaf damage than in regions with low oak richness. Underlying this trend was the ability of herbivores to exploit non-native plants that were close relatives to their native host. In diverse oak communities, non-native trees were on average more closely related to native trees and received greater leaf damage than those in depauperate oak communities. Because insect herbivores colonize non-native plants that are similar to their native hosts, in communities with greater native plant diversity, non-natives experience greater herbivory.", "which Investigated species ?", "Plants", 135.0, 141.0], ["BACKGROUND AND AIMS The successful spread of invasive plants in new environments is often linked to multiple introductions and a diverse gene pool that facilitates local adaptation to variable environmental conditions. For clonal plants, however, phenotypic plasticity may be equally important. 
Here the primary adaptive strategy in three non-native, clonally reproducing macrophytes (Egeria densa, Elodea canadensis and Lagarosiphon major) in New Zealand freshwaters was examined and an attempt was made to link observed differences in plant morphology to local variation in habitat conditions. METHODS Field populations with a large phenotypic variety were sampled in a range of lakes and streams with different chemical and physical properties. The phenotypic plasticity of the species before and after cultivation was studied in a common garden growth experiment, and the genetic diversity of these same populations was also quantified. KEY RESULTS For all three species, greater variation in plant characteristics was found before they were grown in standardized conditions. Moreover, field populations displayed remarkably little genetic variation and there was little interaction between habitat conditions and plant morphological characteristics. CONCLUSIONS The results indicate that at the current stage of spread into New Zealand, the primary adaptive strategy of these three invasive macrophytes is phenotypic plasticity. However, while limited, the possibility that genetic diversity between populations may facilitate ecotypic differentiation in the future cannot be excluded. These results thus indicate that invasive clonal aquatic plants adapt to new introduced areas by phenotypic plasticity. Inorganic carbon, nitrogen and phosphorus were important in controlling plant size of E. canadensis and L. major, but no other relationships between plant characteristics and habitat conditions were apparent. This implies that within-species differences in plant size can be explained by local nutrient conditions. All together this strongly suggests that invasive clonal aquatic plants adapt to a wide range of habitats in introduced areas by phenotypic plasticity rather than local adaptation.", "which Investigated species ?", "Plants", 54.0, 60.0], ["communities by alien plant species that can adversely affect community structure and function. To determine how corridor establishment influences riparian vegetation of the Allegheny High Plateau of northwestern Pennsylvania, we compared the species composition and richness of the herbaceous layer (all vascular plants \u2264 1 m tall) of utility corridors and adjacent headwater riparian forests, and tested the hypothesis that utility corridors serve as foci for the invasion of adjacent riparian forest by alien vascular plants. We contrasted plant species richness and vegetative cover, cover by growth form, species richness and cover of alien plants and cover of microhabitat components (open soil, rock, leaf litter, log, bryophyte) in utility corridors and adjacent riparian forest at 17 sites. Cluster analysis revealed that herbaceous layer species assemblages in corridors and riparian forest were compositionally distinct. Herbaceous layer cover and species richness were significantly (P \u2264 0.05) greater in corridors than in riparian forest. Fern, graminoid, and forb species co-dominated herbaceous layer cover in corridors; fern cover dominated riparian forests. Cover of alien plants was significantly greater in corridors than in riparian forest. Alien plant species richness and cover were significantly and positively correlated with open soil, floodplain width, and active channel width in corridors but were significantly and negatively correlated with litter cover in riparian forest. 
Given that the majority of alien plant species we found in corridors were shade-intolerant and absent from riparian forests, we conclude that open utility corridors primarily serve as habitat refugia, rather than as invasion foci, for alien plant species in riparian forests of the Allegheny High Plateau.", "which Investigated species ?", "Plants", 313.0, 319.0], ["Biotic resistance, the ability of species in a community to limit invasion, is central to our understanding of how communities at risk of invasion assemble after disturbances, but it has yet to translate into guiding principles for the restoration of invasion\u2010resistant plant communities. We combined experimental, functional, and modelling approaches to investigate processes of community assembly contributing to biotic resistance to an introduced lineage of Phragmites australis, a model invasive species in North America. We hypothesized that (i) functional group identity would be a good predictor of biotic resistance to P. australis, while species identity effect would be redundant within functional group; and (ii) mixtures of species would be more invasion resistant than monocultures. We classified 36 resident wetland plants into four functional groups based on eight functional traits. We conducted two competition experiments based on the additive competition design with P. australis and monocultures or mixtures of wetland plants. As an indicator of biotic resistance, we calculated a relative competition index (RCIavg) based on the average performance of P. australis in competition treatment compared with control. To explain diversity effect further, we partitioned it into selection effect and complementarity effect and tested several diversity\u2013interaction models. In monoculture treatments, RCIavg of wetland plants was significantly different among functional groups, but not within each functional group. We found the highest RCIavg for fast\u2010growing annuals, suggesting priority effect. RCIavg of wetland plants was significantly greater in mixture than in monoculture mainly due to complementarity\u2013diversity effect among functional groups. In diversity\u2013interaction models, species interaction patterns in mixtures were described best by interactions between functional groups when fitted to RCIavg or biomass, implying niche partitioning. Synthesis. Functional group identity and diversity of resident plant communities are good indicators of biotic resistance to invasion by introduced Phragmites australis, suggesting niche pre\u2010emption (priority effect) and niche partitioning (diversity effect) as underlying mechanisms. Guiding principles to understand and/or manage biological invasion could emerge from advances in community theory and the use of a functional framework. Targeting widely distributed invasive plants in different contexts and scaling up to field situations will facilitate generalization.", "which Investigated species ?", "Plants", 825.0, 831.0], ["All else being equal, more isolated islands should be more susceptible to invasion because their native species are derived from a smaller pool of colonists, and isolated islands may be missing key functional groups. Although some analyses seem to support this hypothesis, previous studies have not taken into account differences in the number of plant introductions made to different islands, which will affect invasibility estimates. 
Furthermore, previous studies have not assessed invasibility in terms of the rates at which introduced plant species attain different degrees of invasion or naturalization. I compared the naturalization status of introduced plants on two pairs of Pacific island groups that are similar in most respects but that differ in their distances from a mainland. Then, to factor out differences in propagule pressure due to differing numbers of introductions, I compared the naturalization status only among shared introductions. In the first comparison, Hawai\u2018i (3700 km from a mainland) had three times more casual/weakly naturalized, naturalized and pest species than Taiwan (160 km from a mainland); however, roughly half (54%) of this difference can be attributed to a larger number of plant introductions to Hawai\u2018i. In the second comparison, Fiji (2500 km from a mainland) did not differ in susceptibility to invasion in comparison to New Caledonia (1000 km from a mainland); the latter two island groups appear to have experienced roughly similar propagule pressure, and they have similar invasibility. The rate at which naturalized species have become pests is similar for Hawai\u2018i and other island groups. The higher susceptibility of Hawai\u2018i to invasion is related to more species entering the earliest stages in the invasion process (more casual and weakly naturalized species), and these higher numbers are then maintained in the naturalized and pest pools. The number of indigenous (not endemic) species was significantly correlated with susceptibility to invasion across all four island groups. When islands share similar climates and habitat diversity, the number of indigenous species may be a better predictor of invasibility than indices of physical isolation because it is a composite measure of biological isolation.", "which Investigated species ?", "Plants", 657.0, 663.0], ["Invasive species can displace natives, and thus identifying the traits that make aliens successful is crucial for predicting and preventing biodiversity loss. Pathogens may play an important role in the invasive process, facilitating colonization of their hosts in new continents and islands. According to the Novel Weapon Hypothesis, colonizers may out-compete local native species by bringing with them novel pathogens to which native species are not adapted. In contrast, the Enemy Release Hypothesis suggests that flourishing colonizers are successful because they have left their pathogens behind. To assess the role of avian malaria and related haemosporidian parasites in the global spread of a common invasive bird, we examined the prevalence and genetic diversity of haemosporidian parasites (order Haemosporida, genera Plasmodium and Haemoproteus) infecting house sparrows (Passer domesticus). We sampled house sparrows (N = 1820) from 58 locations on 6 continents. All the samples were tested using PCR-based methods; blood films from the PCR-positive birds were examined microscopically to identify parasite species. The results show that haemosporidian parasites in the house sparrows' native range are replaced by species from local host-generalist parasite fauna in the alien environments of North and South America. Furthermore, sparrows in colonized regions displayed a lower diversity and prevalence of parasite infections. Because the house sparrow lost its native parasites when colonizing the American continents, the release from these natural enemies may have facilitated its invasion in the last two centuries. 
Our findings therefore reject the Novel Weapon Hypothesis and are concordant with the Enemy Release Hypothesis.", "which Investigated species ?", "Birds", 1063.0, 1068.0], ["Abstract Understanding how the landscape-scale replacement of indigenous plants with alien plants influences ecosystem structure and functioning is critical in a world characterized by increasing biotic homogenization. An important step in this process is to assess the impact on invertebrate communities. Here we analyse insect species richness and abundance in sweep collections from indigenous and alien (Australasian) woody plant species in South Africa's Western Cape. We use phylogenetically relevant comparisons and compare one indigenous with three Australasian alien trees within each of Fabaceae: Mimosoideae, Myrtaceae, and Proteaceae: Grevilleoideae. Although some of the alien species analysed had remarkably high abundances of herbivores, even when intentionally introduced biological control agents are discounted, overall, herbivorous insect assemblages from alien plants were slightly less abundant and less diverse compared with those from indigenous plants \u2013 in accordance with predictions from the enemy release hypothesis. However, there were no clear differences in other insect feeding guilds. We conclude that insect assemblages from alien plants are generally quite diverse, and significant differences between these and assemblages from indigenous plants are only evident for herbivorous insects.", "which Investigated species ?", "Plants", 73.0, 79.0], ["Models and observational studies have sought patterns of predictability for invasion of natural areas by nonindigenous species, but with limited success. In a field experiment using forest understory plants, we jointly manipulated three hypothesized determinants of biological invasion outcome: resident diversity, physical disturbance and abiotic conditions, and propagule pressure. The foremost constraints on net habitat invasibility were the number of propagules that arrived at a site and naturally varying resident plant density. The physical environment (flooding regime) and the number of established resident species had negligible impact on habitat invasibility as compared to propagule pressure, despite manipulations that forced a significant reduction in resident richness, and a gradient in flooding from no flooding to annual flooding. This is the first experimental study to demonstrate the primacy of propagule pressure as a determinant of habitat invasibility in comparison with other candidate controlling factors.", "which Investigated species ?", "Plants", 200.0, 206.0], ["Invasion by exotic species following clearfelling of Eucalyptus regnans F. Muell. (Mountain Ash) forest was examined in the Toolangi State Forest in the Central Highlands of Victoria. Coupes ranging in age from < 1- to 10-years-old and the spar-stage forests (1939 bushfire regrowth) adjacent to each of these coupes and a mature, 250-year-old forest were surveyed. The dispersal and establishment of weeds was facilitated by clearfelling. An influx of seeds of exotic species was detected in recently felled coupes but not in the adjacent, unlogged forests. Vehicles and frequently disturbed areas, such as roadside verges, are likely sources of the seeds of exotic species. The soil seed bank of younger coupes had a greater number and percentage of seeds of exotics than the 10-year-old coupes and the spar-stage and mature forests. 
Exotic species were a minor component (< 1% vegetation cover) in the more recently logged coupes and were not present in 10-year-old coupes and the spar-stage and mature forests. These particular exotic species did not persist in the dense regeneration nor exist in the older forests because the weeds were ruderal species (light-demanding, short-lived and short-statured plants). The degree of influence that these particular exotic species have on the regeneration and survival of native species in E. regnans forests is almost negligible. However, the current management practices may need to be addressed to prevent a more threatening exotic species from establishing in these coupes and forests.", "which Investigated species ?", "Plants", 1208.0, 1214.0], ["Due to altered ecological and evolutionary contexts, we might expect the responses of alien plants to environmental gradients, as revealed through patterns of trait variation, to differ from those of the same species in their native range. In particular, the spread of alien plant species along such gradients might be limited by their ability to establish clinal patterns of trait variation. We investigated trends in growth and reproductive traits in natural populations of eight invasive Asteraceae forbs along altitudinal gradients in their native and introduced ranges (Valais, Switzerland, and Wallowa Mountains, Oregon, USA). Plants showed similar responses to altitude in both ranges, being generally smaller and having fewer inflorescences but larger seeds at higher altitudes. However, these trends were modified by region-specific effects that were independent of species status (native or introduced), suggesting that any differential performance of alien species in the introduced range cannot be interpreted without a fully reciprocal approach to test the basis of these differences. Furthermore, we found differences in patterns of resource allocation to capitula among species in the native and the introduced areas. These suggest that the mechanisms underlying trait variation, for example, increasing seed size with altitude, might differ between ranges. The rapid establishment of clinal patterns of trait variation in the new range indicates that the need to respond to altitudinal gradients, possibly by local adaptation, has not limited the ability of these species to invade mountain regions. Studies are now needed to test the underlying mechanisms of altitudinal clines in traits of alien species.", "which Investigated species ?", "Plants", 92.0, 98.0], ["Previous studies have concluded that southern ocean islands are anomalous because past glacial extent and current temperature apparently explain most variance in their species richness. Here, the relationships between physical variables and species richness of vascular plants, insects, land and seabirds, and mammals were reexamined for these islands. Indigenous and introduced species were distinguished, and relationships between the latter and human occupancy variables were investigated. Most variance in indigenous species richness was explained by combinations of area and temperature (56%)\u2014vascular plants; distance (nearest continent) and vascular plant species richness (75%)\u2014insects; area and chlorophyll concentration (65%)\u2014seabirds; and indigenous insect species richness and age (73%)\u2014land birds. Indigenous insects and plants, along with distance (closest continent), explained most variance (70%) in introduced land bird species richness. 
A combination of area and temperature explained most variance in species richness of introduced vascular plants (73%), insects (69%), and mammals (69%). However, there was a strong relationship between area and number of human occupants. This suggested that larger islands attract more human occupants, increasing the risk of propagule transfer, while temperature increases the chance of propagule establishment. Consequently, human activities on these islands should be regulated more tightly.", "which Investigated species ?", "Mammals", 310.0, 317.0], ["Although biological invasions are of considerable concern to ecologists, relatively little attention has been paid to the potential for and consequences of indirect interactions between invasive species. Such interactions are generally thought to enhance invasives' spread and impact (i.e., the \"invasional meltdown\" hypothesis); however, exotic species might also act indirectly to slow the spread or blunt the impact of other invasives. On the east coast of the United States, the invasive hemlock woolly adelgid (Adelges tsugae, HWA) and elongate hemlock scale (Fiorinia externa, EHS) both feed on eastern hemlock (Tsuga canadensis). Of the two insects, HWA is considered far more damaging and disproportionately responsible for hemlock mortality. We describe research assessing the interaction between HWA and EHS, and the consequences of this interaction for eastern hemlock. We conducted an experiment in which uninfested hemlock branches were experimentally infested with herbivores in a 2 x 2 factorial design (either, both, or neither herbivore species). Over the 2.5-year course of the experiment, each herbivore's density was approximately 30% lower in mixed- vs. single-species treatments. Intriguingly, however, interspecific competition weakened rather than enhanced plant damage: growth was lower in the HWA-only treatment than in the HWA + EHS, EHS-only, or control treatments. Our results suggest that, for HWA-infested hemlocks, the benefit of co-occurring EHS infestations (reduced HWA density) may outweigh the cost (increased resource depletion).", "which Investigated species ?", "Insects", 648.0, 655.0], [". The invasion by non-native plant species of an urban remnant of a species-rich Themeda triandra grassland in southeastern Australia was quantified and related to abiotic influences. Richness and cover of non-native species were highest at the edges of the remnant and declined to relatively uniform levels within the remnant. Native species richness and cover were lowest at the edge adjoining a roadside but then showed little relation to distance from edge. Roadside edge quadrats were floristically distinct from most other quadrats when ordinated by Detrended Correspondence Analysis. Soil phosphorus was significantly higher at the roadside edge but did not vary within the remnant itself. All other abiotic factors measured (NH4, NO3, S, pH and % organic carbon) showed little variation across the remnant. Non-native species richness and cover were strongly correlated with soil phosphorus levels. Native species were negatively correlated with soil phosphorus levels. Canonical Correspondence Analysis identified the perennial non-native grasses of high biomass as species most dependent on high soil nutrient levels. Such species may be resource-limited in undisturbed soils. 
Three classes of non-native plants have invaded this species-rich grassland: (1) generalist species (> 50 % frequency), mostly therophytes with non-specialized habitat or germination requirements; (2) resource-limited species comprising perennial species of high biomass that are dependent on nutrient increases and/or soil disturbances before they can invade the community; and (3) species of intermediate frequency (1\u201330 %), of low to high biomass potential, that appear to have non-specialized habitat requirements but are currently limited by seed dispersal, seedling establishment or the current site management. Native species richness and cover are most negatively affected by increases in non-native cover. Declines are largely evident once the non-native cover exceeds 40 %. Widespread, generalist non-native species are numerous in intact sites and will have to be considered a permanent part of the flora of remnant grasslands. Management must aim to minimize increases in cover of any non-native species or the disturbances that favour the establishment of competitive non-native grasses if the native grassland flora is to be conserved in small, fragmented remnants.", "which Investigated species ?", "Plants", 1215.0, 1221.0], ["In recent decades the grass Phragmites australis has been aggressively invading coastal, tidal marshes of North America, and in many areas it is now considered a nuisance species. While P. australis has historically been restricted to the relatively benign upper border of brackish and salt marshes, it has been expanding seaward into more physiologically stressful regions. Here we test a leading hypothesis that the spread of P. australis is due to anthropogenic modification of coastal marshes. We did a field experiment along natural borders between stands of P. australis and the other dominant grasses and rushes (i.e., matrix vegetation) in a brackish marsh in Rhode Island, USA. We applied a pulse disturbance in one year by removing or not removing neighboring matrix vegetation and adding three levels of nutrients (specifically nitrogen) in a factorial design, and then we monitored the aboveground performance of P. australis and the matrix vegetation. Both disturbances increased the density, height, and biomass of shoots of P. australis, and the effects of fertilization were more pronounced where matrix vegetation was removed. Clearing competing matrix vegetation also increased the distance that shoots expanded and their reproductive output, both indicators of the potential for P. australis to spread within and among local marshes. In contrast, the biomass of the matrix vegetation decreased with increasing severity of disturbance. Disturbance increased the total aboveground production of plants in the marsh as matrix vegetation was displaced by P. australis. A greenhouse experiment showed that, with increasing nutrient levels, P. australis allocates proportionally more of its biomass to aboveground structures used for spread than to belowground structures used for nutrient acquisition. Therefore, disturbances that enrich nutrients or remove competitors promote the spread of P. australis by reducing belowground competition for nutrients between P. australis and the matrix vegetation, thus allowing P. australis, the largest plant in the marsh, to expand and displace the matrix vegetation. 
Reducing nutrient load and maintaining buffers of matrix vegetation along the terrestrial-marsh ecotone will, therefore, be important methods of control for this nuisance species.", "which Investigated species ?", "Plants", 1518.0, 1524.0], ["We used three congeneric annual thistles, which vary in their ability to invade California (USA) annual grasslands, to test whether invasiveness is related to differences in life history traits. We hypothesized that populations of these summer-flowering Centaurea species must pass through a demographic gauntlet of survival and reproduction in order to persist and that the most invasive species (C. solstitialis) might possess unique life history characteristics. Using the idea of a demographic gauntlet as a conceptual framework, we compared each congener in terms of (1) seed germination and seedling establishment, (2) survival of rosettes subjected to competition from annual grasses, (3) subsequent growth and flowering in adult plants, and (4) variation in breeding system. Grazing and soil disturbance is thought to affect Centaurea establishment, growth, and reproduction, so we also explored differences among congeners in their response to clipping and to different sizes of soil disturbance. We found minimal differences among congeners in either seed germination responses or seedling establishment and survival. In contrast, differential growth responses of congeners to different sizes of canopy gaps led to large differences in adult size and fecundity. Canopy-gap size and clipping affected the fecundity of each species, but the most invasive species (C. solstitialis) was unique in its strong positive response to combinations of clipping and canopy gaps. In addition, the phenology of C. solstitialis allows this species to extend its growing season into the summer\u2014a time when competition from winter annual vegetation for soil water is minimal. Surprisingly, C. solstitialis was highly self-incompatible while the less invasive species were highly self-compatible. Our results suggest that the invasiveness of C. solstitialis arises, in part, from its combined ability to persist in competition with annual grasses and its plastic growth and reproductive responses to open, disturbed habitat patches. Corresponding Editor: D. P. C. Peters.", "which Investigated species ?", "Plants", 737.0, 743.0], ["Research examining the relationship between community diversity and invasions by nonnative species has raised new questions about the theory and management of biological invasions. Ecological theory predicts, and small-scale experiments confirm, lower levels of nonnative species invasion into species-rich compared to species-poor communities, but observational studies across a wider range of scales often report positive relationships between native and nonnative species richness. This paradox has been attributed to the scale dependency of diversity-invasibility relationships and to differences between experimental and observational studies. Disturbance is widely recognized as an important factor determining invasibility of communities, but few studies have investigated the relative and interactive roles of diversity and disturbance on nonnative species invasion. Here, we report how the relationship between native and nonnative plant species richness responded to an experimentally applied disturbance gradient (from no disturbance up to clearcut) in oak-dominated forests. 
We consider whether results are consistent with various explanations of diversity-invasibility relationships including biotic resistance, resource availability, and the potential effects of scale (1 m2 to 2 ha). We found no correlation between native and nonnative species richness before disturbance except at the largest spatial scale, but a positive relationship after disturbance across scales and levels of disturbance. Post-disturbance richness of both native and nonnative species was positively correlated with disturbance intensity and with variability of residual basal area of trees. These results suggest that more nonnative plants may invade species-rich communities compared to species-poor communities following disturbance.", "which Investigated species ?", "Plants", 1724.0, 1730.0], ["Positive interactions play a widespread role in facilitating biological invasions. Here we use a landscape-scale ant exclusion experiment to show that widespread invasion of tropical rainforest by honeydew-producing scale insects on Christmas Island (Indian Ocean) has been facilitated by positive interactions with the invasive ant Anoplolepis gracilipes. Toxic bait was used to exclude A. gracilipes from large (9-35 ha) forest patches. Within 11 weeks, ant activity on the ground and on trunks had been reduced by 98-100%, while activity on control plots remained unchanged. The exclusion of ants caused a 100% decline in the density of scale insects in the canopies of three rainforest trees in 12 months (Inocarpus fagifer, Syzygium nervosum and Barringtonia racemosa), but on B. racemosa densities of scale insects also declined in control plots, resulting in no effect of ant exclusion on this species. This study demonstrates the role of positive interactions in facilitating biological invasions, and supports recent models calling for greater recognition of the role of positive interactions in structuring ecological communities.", "which Investigated species ?", "Insects", 222.0, 229.0], ["Plants with poorly attractive flowers or with little floral rewards may have inadequate pollinator service, which in turn reduces seed output. However, pollinator service of less attractive species could be enhanced when they are associated with species with highly attractive flowers (so called \u2018magnet-species\u2019). Although several studies have reported the magnet species effect, few of them have evaluated whether this positive interaction result in an enhancement of the seed output for the beneficiary species. Here, we compared pollinator visitation rates and seed output of the invasive annual species Carduus pycnocephalus when grow associated with shrubs of the invasive Lupinus arboreus and when grow alone, and hypothesized that L. arboreus acts as a magnet species for C. pycnocephalus. Results showed that C. pycnocephalus individuals associated with L. arboreus had higher pollinator visitation rates and higher seed output than individuals growing alone. The higher visitation rates of C. pycnocephalus associated to L. arboreus were maintained after accounting for flower density, which consistently supports our hypothesis on the magnet species effect of L. arboreus. Given that both species are invasives, the facilitated pollination and reproduction of C. pycnocephalus by L. arboreus could promote its naturalization in the community, suggesting a synergistic invasional process contributing to an \u2018invasional meltdown\u2019. 
The magnet effect of Lupinus on Carduus found in this study seems to be one of the first examples of indirect facilitative interactions via increased pollination among invasive species.", "which Investigated species ?", "Plants", 0.0, 6.0], ["To shed light on the process of how exotic species become invasive, it is necessary to study them both in their native and non\u2010native ranges. Our intent was to measure differences in herbivory, plant growth and the impact on other species in Fallopia japonica in its native and non\u2010native ranges. We performed a cross\u2010range full descriptive, field study in Japan (native range) and France (non\u2010native range). We assessed DNA ploidy levels, the presence of phytophagous enemies, the amount of leaf damage, several growth parameters and the co\u2010occurrence of Fallopia japonica with other plant species of herbaceous communities. Invasive Fallopia japonica plants were all octoploid, a ploidy level we did not encounter in the native range, where plants were all tetraploid. Octoploids in France harboured far less phytophagous enemies, suffered much lower levels of herbivory, grew larger and had a much stronger impact on plant communities than tetraploid conspecifics in the native range in Japan. Our data confirm that Fallopia japonica performs better \u2013 plant vigour and dominance in the herbaceous community \u2013 in its non\u2010native than its native range. Because we could not find octoploids in the native range, we cannot separate the effects of differences in ploidy from other biogeographic factors. To go further, common garden experiments would now be needed to disentangle the proper role of each factor, taking into account the ploidy levels of plants in their native and non\u2010native ranges. Synthesis. As the process by which invasive plants successfully invade ecosystems in their non\u2010native range is probably multifactorial in most cases, examining several components \u2013 plant growth, herbivory load, impact on recipient systems \u2013 of plant invasions through biogeographic comparisons is important. Our study contributes towards filling this gap in the research, and it is hoped that this method will spread in invasion ecology, making such an approach more common.", "which Investigated species ?", "Plants", 653.0, 659.0], ["Invasions of natural communities by non-indigenous species threaten native biodiversity and are currently rated as one of the most important global-scale environmental problems. The mechanisms that make communities resistant to invasions and drive the establishment success of seedlings are essential both for management and for understanding community assembly and structure. Especially in grasslands, anecic earthworms are known to function as ecosystem engineers; however, their direct effects on plant community composition and on the invasibility of plant communities via plant seed burial, ingestion and digestion are poorly understood. In a greenhouse experiment we investigated the impact of Lumbricus terrestris, plant functional group identity and seed size of plant invader species and plant functional group of the established plant community on the number and biomass of plant invaders. We set up 120 microcosms comprising four plant community treatments, two earthworm treatments and three plant invader treatments containing three seed size classes. 
Earthworm performance was influenced by an interaction between plant functional group identity of the established plant community and that of invader species. The established plant community and invader seed size affected the number of invader plants significantly, while invader biomass was only affected by the established community. Since earthworm effects on the number and biomass of invader plants varied with seed size and plant functional group identity they probably play a key role in seedling establishment and plant community composition. Seeds and germinating seedlings in earthworm burrows may significantly contribute to earthworm nutrition, but this deserves further attention. Lumbricus terrestris likely behaves like a \u2018farmer\u2019 by collecting plant seeds which cannot directly be swallowed or digested. Presumably, these seeds are left in middens and become eatable after partial microbial decay. Increased earthworm numbers in more diverse plant communities likely contribute to the positive relationship between plant species diversity and resistance against invaders.", "which Investigated species ?", "Plants", 1309.0, 1315.0], ["Questions: 1. What are the distribution and habitat associations of non-native (neophyte) species in riparian zones? 2. Are there significant differences, in terms of plant species diversity, composition, habitat condition and species attributes, between plant communities where non-natives are present or abundant and those where non-natives are absent or infrequent? 3. Are the observed differences generic to non-natives or do individual non-native species differ in their vegetation associations? Location: West Midlands Conurbation (WMC), UK. Methods: 56 sites were located randomly on four rivers across the WMC. Ten 2 m \u00d7 2 m quadrats were placed within 15 m of the river to sample vegetation within the floodplain at each site. All vascular plants were recorded along with site information such as surrounding land use and habitat types. Results: Non-native species were found in many vegetation types and on all rivers in the WMC. There were higher numbers of non-natives on more degraded, human-modified rivers. More non-native species were found in woodland, scrub and tall herb habitats than in grasslands. We distinguish two types of communities with non-natives. In communities colonized following disturbance, in comparison to quadrats containing no non-native species, those with non-natives had higher species diversity and more forbs, annuals and shortlived monocarpic perennials. Native species in quadrats containing non-natives were characteristic of conditions of higher fertility and pH, had a larger specific leaf area and were less stress tolerant or competitive. In later successional communities dominated by particular non-natives, native diversity declined with increasing cover of non-natives. Associated native species were characteristic of low light conditions. Conclusions: Communities containing non-natives can be associated with particular types of native species. Extrinsic factors (disturbance, eutrophication) affected both native and non-native species. 
In disturbed riparian habitats the key determinant of diversity is dominance by competitive invasive species regardless of their native or non-native origin.", "which Investigated species ?", "Plants", 749.0, 755.0], ["Invasive plants are hypothesized to have higher fitness in introduced areas due to their release from pathogens and herbivores and the relocation of resources to reproduction. However, few studies have tested this hypothesis in native and introduced regions. A biogeographical approach is fundamental to understanding the mechanisms involved in plant invasions and to detect rapid evolutionary changes in the introduced area. Reproduction was assessed in native and introduced ranges of two invasive Australian woody legumes, Acacia dealbata and A. longifolia. Seed production, pre\u2010dispersal seed predation, seed and elaiosome size and seedling size were assessed in 7\u201310 populations from both ranges, taking into account the effect of differences in climate. There was a significantly higher percentage of fully developed seeds per pod, a lower proportion of aborted seeds and the absence of pre\u2010dispersal predation in the introduced range for both Acacia species. Acacia longifolia produced more seeds per pod in the invaded range, whereas A. dealbata produced more seeds per tree in the invaded range. Seeds were bigger in the invaded range for both species, and elaiosome: seed ratio was smaller for A. longifolia in the invaded range. Seedlings were also larger in the invaded range, suggesting that the increase in seed size results in greater offspring growth. There were no differences in the climatic conditions of sites occupied by A. longifolia in both regions. Minimum temperature was higher in Portuguese A. dealbata populations, but this difference did not explain the increase in seed production and seed size in the introduced range. It did have, however, a positive effect on the number of pods per tree. Synthesis. Acacia dealbata and A. longifolia escape pre\u2010dispersal predation in the introduced range and display a higher production of fully developed seeds per fruit and bigger seeds. These differences may explain the invasion of both species because they result in increased seedling growth and the production of abundant soil seedbanks in the introduced area.", "which Investigated species ?", "Plants", 9.0, 15.0], ["Darwin acknowledged contrasting, plausible arguments for how species invasions are influenced by phylogenetic relatedness to the native community. These contrasting arguments persist today without clear resolution. Using data on the naturalization and abundance of exotic plants in the Auckland region, we show how different expectations can be accommodated through attention to scale, assumptions about niche overlap, and stage of invasion. Probability of naturalization was positively related to the number of native species in a genus but negatively related to native congener abundance, suggesting the importance of both niche availability and biotic resistance. Once naturalized, however, exotic abundance was not related to the number of native congeners, but positively related to native congener abundance. 
Changing the scale of analysis altered this outcome: within habitats exotic abundance was negatively related to native congener abundance, implying that native and exotic species respond similarly to broad scale environmental variation across habitats, with biotic resistance occurring within habitats.", "which Investigated species ?", "Plants", 272.0, 278.0], ["Detailed knowledge of patterns of native species richness, an important component of biodiversity, and non\u2010native species invasions is often lacking even though this knowledge is essential to conservation efforts. However, we cannot afford to wait for complete information on the distribution and abundance of native and harmful invasive species. Using information from counties well surveyed for plants across the USA, we developed models to fill data gaps in poorly surveyed areas by estimating the density (number of species km\u22122) of native and non\u2010native plant species. Here, we show that native plant species density is non\u2010random, predictable, and is the best predictor of non\u2010native plant species density. We found that eastern agricultural sites and coastal areas are among the most invaded in terms of non\u2010native plant species densities, and that the central USA appears to have the greatest ratio of non\u2010native to native species. These large\u2010scale models could also be applied to smaller spatial scales or other taxa to set priorities for conservation and invasion mitigation, prevention, and control efforts.", "which Investigated species ?", "Plants", 397.0, 403.0], ["The finding that passeriform birds introduced to the islands of Hawaii and Saint Helena were more likely to successfully invade when fewer other introduced species were present has been interpreted as strong support for the hypothesis that interspecific competition influences invasion success. I tested whether invasions were more likely to succeed when fewer species were present using the records of passeriform birds introduced to four acclimatization districts in New Zealand. I also tested whether introduction effort, measured as the number of introductions and the total number of birds released, could predict invasion outcomes, a result previously established for all birds introduced to New Zealand. I found patterns consistent with both competition and introduction effort as explanations for invasion success. However, data supporting the two explanations were confounded such that the greater success of invaders arriving when fewer other species were present could have been due to a causal relationship between invasion success and introduction effort. Hence, without data on introduction effort, previous studies may have overestimated the degree to which the number of potential competitors could independently explain invasion outcomes and may therefore have overstated the importance of competition in structuring introduced avian assemblages. Furthermore, I suggest that a second pattern in avian invasion success previously attributed to competition, the morphological overdispersion of successful invaders, could also arise as an artifact of variation in introduction effort.", "which Investigated species ?", "Birds", 29.0, 34.0], ["Invasive species richness often is negatively correlated with native species richness at the small spatial scale of sampling plots, but positively correlated in larger areas. The pattern at small scales has been interpreted as evidence that native plants can competitively exclude invasive species. 
Large-scale patterns have been understood to result from environmental heterogeneity, among other causes. We investigated species richness patterns among submerged and floating-leaved aquatic plants (87 native species and eight invasives) in 103 temperate lakes in Connecticut (northeastern USA) and found neither a consistently negative relationship at small (3-m2) scales, nor a positive relationship at large scales. Native species richness at sampling locations was uncorrelated with invasive species richness in 37 of the 60 lakes where invasive plants occurred; richness was negatively correlated in 16 lakes and positively correlated in seven. No correlation between native and invasive species richness was found at larger spatial scales (whole lakes and counties). Increases in richness with area were uncorrelated with abiotic heterogeneity. Logistic regression showed that the probability of occurrence of five invasive species increased in sampling locations (3 m2, n = 2980 samples) where native plants occurred, indicating that native plant species richness provided no resistance against invasion. However, the probability of three invasive species' occurrence declined as native plant density increased, indicating that density, if not species richness, provided some resistance with these species. Density had no effect on occurrence of three other invasive species. Based on these results, native species may resist invasion at small spatial scales only in communities where density is high (i.e., in communities where competition among individuals contributes to community structure). Most hydrophyte communities, however, appear to be maintained in a nonequilibrial condition by stress and/or disturbance. Therefore, most aquatic plant communities in temperate lakes are likely to be vulnerable to invasion.", "which Investigated species ?", "Plants", 248.0, 254.0], ["ABSTRACT One explanation for the success of exotic plants in their introduced habitats is that, upon arriving to a new continent, plants escaped their native herbivores or pathogens, resulting in less damage and lower abundance of enemies than closely related native species (enemy release hypothesis). We tested whether the three exotic plant species, Rubus phoenicolasius (wineberry), Fallopia japonica (Japanese knotweed), and Persicaria perfoliata (mile-a-minute weed), suffered less herbivory or pathogen attack than native species by comparing leaf damage and invertebrate herbivore abundance and diversity on the invasive species and their native congeners. Fallopia japonica and R. phoenicolasius received less leaf damage than their native congeners, and F. japonica also contained a lower diversity and abundance of invertebrate herbivores. If the observed decrease in damage experienced by these two plant species contributes to increased fitness, then escape from enemies may provide at least a partial explanation for their invasiveness. However, P. perfoliata actually received greater leaf damage than its native congener. Rhinoncomimus latipes, a weevil previously introduced in the United States as a biological control for P. perfoliata, accounted for the greatest abundance of insects collected from P. perfoliata. Therefore, it is likely that the biocontrol R. latipes was responsible for the greater damage on P. perfoliata, suggesting this insect may be effective at controlling P. 
perfoliata populations if its growth and reproduction are affected by the increased herbivore damage.", "which Investigated species ?", "Plants", 51.0, 57.0], ["Background Coastal landscapes are being transformed as a consequence of the increasing demand for infrastructures to sustain residential, commercial and tourist activities. Thus, intertidal and shallow marine habitats are largely being replaced by a variety of artificial substrata (e.g. breakwaters, seawalls, jetties). Understanding the ecological functioning of these artificial habitats is key to planning their design and management, in order to minimise their impacts and to improve their potential to contribute to marine biodiversity and ecosystem functioning. Nonetheless, little effort has been made to assess the role of human disturbances in shaping the structure of assemblages on marine artificial infrastructures. We tested the hypothesis that some negative impacts associated with the expansion of opportunistic and invasive species on urban infrastructures can be related to the severe human disturbances that are typical of these environments, such as those from maintenance and renovation works. Methodology/Principal Findings Maintenance caused a marked decrease in the cover of dominant space occupiers, such as mussels and oysters, and a significant enhancement of opportunistic and invasive forms, such as biofilm and macroalgae. These effects were particularly pronounced on sheltered substrata compared to exposed substrata. Experimental application of the disturbance in winter reduced the magnitude of the impacts compared to application in spring or summer. We use these results to identify possible management strategies to inform the improvement of the ecological value of artificial marine infrastructures. Conclusions/Significance We demonstrate that some of the impacts of globally expanding marine urban infrastructures, such as those related to the spread of opportunistic and invasive species, could be mitigated through ecologically-driven planning and management of long-term maintenance of these structures. Impact mitigation is a possible outcome of policies that consider the ecological features of built infrastructures and the fundamental value of controlling biodiversity in marine urban systems.", "which Investigated species ?", "Algae", NaN, NaN], ["During the past centuries, humans have introduced many plant species in areas where they do not naturally occur. Some of these species establish populations and in some cases become invasive, causing economic and ecological damage. Which factors determine the success of non-native plants is still incompletely understood, but the absence of natural enemies in the invaded area (Enemy Release Hypothesis; ERH) is one of the most popular explanations. One of the predictions of the ERH, a reduced herbivore load on non-native plants compared with native ones, has been repeatedly tested. However, many studies have either used a community approach (sampling from native and non-native species in the same community) or a biogeographical approach (sampling from the same plant species in areas where it is native and where it is non-native). Either method can sometimes lead to inconclusive results. To resolve this, we here add to the small number of studies that combine both approaches. We do so in a single study of insect herbivory on 47 woody plant species (trees, shrubs, and vines) in the Netherlands and Japan. 
We find higher herbivore diversity, higher herbivore load and more herbivory on native plants than on non-native plants, generating support for the enemy release hypothesis.", "which Investigated species ?", "Plants", 282.0, 288.0], ["Previous studies have concluded that southern ocean islands are anomalous because past glacial extent and current temperature apparently explain most variance in their species richness. Here, the relationships between physical variables and species richness of vascular plants, insects, land and seabirds, and mammals were reexamined for these islands. Indigenous and introduced species were distinguished, and relationships between the latter and human occupancy variables were investigated. Most variance in indigenous species richness was explained by combinations of area and temperature (56%)\u2014vascular plants; distance (nearest continent) and vascular plant species richness (75%)\u2014insects; area and chlorophyll concentration (65%)\u2014seabirds; and indigenous insect species richness and age (73%)\u2014land birds. Indigenous insects and plants, along with distance (closest continent), explained most variance (70%) in introduced land bird species richness. A combination of area and temperature explained most variance in species richness of introduced vascular plants (73%), insects (69%), and mammals (69%). However, there was a strong relationship between area and number of human occupants. This suggested that larger islands attract more human occupants, increasing the risk of propagule transfer, while temperature increases the chance of propagule establishment. Consequently, human activities on these islands should be regulated more tightly.", "which Investigated species ?", "Insects", 278.0, 285.0], ["Whether or not a bird species will establish a new population after invasion of uncolonized habitat depends, from theory, on its life-history attributes and initial population size. Data about initial population sizes are often unobtainable for natural and deliberate avian invasions. In New Zealand, however, contemporary documentation of introduction efforts allowed us to systematically compare unsuccessful and successful invaders without bias. We obtained data for 79 species involved in 496 introduction events and used the present-day status of each species as the dependent variable in fitting multiple logistic regression models. We found that introduction efforts for species that migrated within their endemic ranges were significantly less likely to be successful than those for nonmigratory species with similar introduction efforts. Initial population size, measured as number of releases and as the minimum number of propagules liberated in New Zealand, significantly increased the probability of translocation success. A null model showed that species released more times had a higher probability per release of successful establishment. Among 36 species for which data were available, successful invaders had significantly higher natality/mortality ratios. Successful invaders were also liberated at significantly more sites. Invasion of New Zealand by exotic birds was therefore primarily related to management, an outcome that has implications for conservation biology.", "which Investigated species ?", "Birds", 1377.0, 1382.0], ["The impacts of alien plants on native richness are usually assessed at small spatial scales and in locations where the alien is at high abundance. 
But this raises two questions: to what extent do impacts occur where alien species are at low abundance, and do local impacts translate to effects at the landscape scale? In an analysis of 47 widespread alien plant species occurring across a 1,000 km2 landscape, we examined the relationship between their local abundance and native plant species richness in 594 grassland plots. We first defined the critical abundance at which these focal alien species were associated with a decline in native \u03b1\u2010richness (plot\u2010scale species numbers), and then assessed how this local decline was translated into declines in native species \u03b3\u2010richness (landscape\u2010scale species numbers). After controlling for sampling biases and environmental gradients that might lead to spurious relationships, we found that eight out of 47 focal alien species were associated with a significant decline in native \u03b1\u2010richness as their local abundance increased. Most of these significant declines started at low to intermediate classes of abundance. For these eight species, declines in native \u03b3\u2010richness were, on average, an order of magnitude (32.0 vs. 2.2 species) greater than those found for native \u03b1\u2010richness, mostly due to spatial homogenization of native communities. The magnitude of the decrease at the landscape scale was best explained by the number of plots where an alien species was found above its critical abundance. Synthesis. Even at low abundance, alien plants may impact native plant richness at both local and landscape scales. Local impacts may result in much greater declines in native richness at larger spatial scales. Quantifying impact at the landscape scale requires consideration of not only the prevalence of an alien plant, but also its critical abundance and its effect on native community homogenization. This suggests that management approaches targeting only those locations dominated by alien plants might not mitigate impacts effectively. Our integrated approach will improve the ranking of alien species risks at a spatial scale appropriate for prioritizing management and designing conservation policies.", "which Investigated species ?", "plants", 21.0, 27.0], ["Context. Wildfire is a major driver of the structure and function of mallee eucalypt- and spinifex-dominated landscapes. Understanding how fire influences the distribution of biota in these fire-prone environments is essential for effective ecological and conservation-based management. Aims. We aimed to (1) determine the effects of an extensive wildfire (118 000 ha) on a small mammal community in the mallee shrublands of semiarid Australia and (2) assess the hypothesis that the fire-response patterns of small mammals can be predicted by their life-history characteristics. Methods. Small-mammal surveys were undertaken concurrently at 26 sites: once before the fire and on four occasions following the fire (including 14 sites that remained unburnt). We documented changes in small-mammal occurrence before and after the fire, and compared burnt and unburnt sites. In addition, key components of vegetation structure were assessed at each site. Key results. Wildfire had a strong influence on vegetation structure and on the occurrence of small mammals. The mallee ningaui, Ningaui yvonneae, a dasyurid marsupial, showed a marked decline in the immediate post-fire environment, corresponding with a reduction in hummock-grass cover in recently burnt vegetation. 
Species richness of native small mammals was positively associated with unburnt vegetation, although some species showed no clear response to wildfire. Conclusions. Our results are consistent with the contention that mammal responses to fire are associated with their known life-history traits. The species most strongly affected by wildfire, N. yvonneae, has the most specific habitat requirements and restricted life history of the small mammals in the study area. The only species positively associated with recently burnt vegetation, the introduced house mouse, Mus domesticus, has a flexible life history and non-specialised resource requirements. Implications. Maintaining sources for recolonisation after large-scale wildfires will be vital to the conservation of native small mammals in mallee ecosystems.", "which Investigated species ?", "Mammals", 515.0, 522.0], ["Hussner A (2012). Alien aquatic plant species in European countries. Weed Research 52, 297\u2013306. Summary Alien aquatic plant species cause serious ecological and economic impacts to European freshwater ecosystems. This study presents a comprehensive overview of all alien aquatic plants in Europe, their places of origin and their distribution within the 46 European countries. In total, 96 aquatic species from 30 families have been reported as aliens from at least one European country. Most alien aquatic plants are native to Northern America, followed by Asia and Southern America. Elodea canadensis is the most widespread alien aquatic plant in Europe, reported from 41 European countries. Azolla filiculoides ranks second (25), followed by Vallisneria spiralis (22) and Elodea nuttallii (20). The highest number of alien aquatic plant species has been found in Italy and France (34 species), followed by Germany (27), Belgium and Hungary (both 26) and the Netherlands (24). Even though the number of alien aquatic plants seems relatively small, the European and Mediterranean Plant Protection Organization (EPPO, http://www.eppo.org) has listed 18 of these species as invasive or potentially invasive within the EPPO region. As ornamental trade has been regarded as the major pathway for the introduction of alien aquatic plants, trading bans seem to be the most effective option to reduce the risk of further unintended entry of alien aquatic plants into Europe.", "which Investigated species ?", "Plants", 279.0, 285.0], ["Background and Aims Invasiveness of some alien plants is associated with their traits, plastic responses to environmental conditions and interpopulation differentiation. To obtain insights into the role of these processes in contributing to variation in performance, we compared congeneric species of Impatiens (Balsaminaceae) with different origin and invasion status that occur in central Europe. Methods Native I. noli-tangere and three alien species (highly invasive I. glandulifera, less invasive I. parviflora and potentially invasive I. capensis) were studied and their responses to simulated canopy shading and different nutrient and moisture levels were determined in terms of survival and seedling traits. Key Results and Conclusions Impatiens glandulifera produced high biomass in all the treatments and the control, exhibiting the \u2018Jack-and-master\u2019 strategy that makes it a strong competitor from germination onwards. 
The results suggest that plasticity and differentiation occurred in all the species tested and that along the continuum from plasticity to differentiation, the species at the plasticity end is the better invader. The most invasive species I. glandulifera appears to be highly plastic, whereas the other two less invasive species, I. parviflora and I. capensis, exhibited lower plasticity but rather strong population differentiation. The invasive Impatiens species were taller and exhibited higher plasticity and differentiation than native I. noli-tangere. This suggests that even within one genus, the relative importance of the phenomena contributing to invasiveness appears to be species-specific.", "which Investigated species ?", "Plants", 47.0, 53.0], ["The invasion of exotic species into assemblages of native plants is a pervasive and widespread phenomenon. Many theoretical and observational studies suggest that diverse communities are more resistant to invasion by exotic species than less diverse ones. However, experimental results do not always support such a relationship. Therefore, the hypothesis of diversity-community invasibility is still a focus of controversy in the field of invasion ecology. In this study, we established and manipulated communities with different species diversity and different species functional groups (16 species belong to C3, C4, forbs and legumes, respectively) to test Elton's hypothesis and other relevant hypotheses by studying the process of invasion. Alligator weed (Alternanthera philoxeroides) was chosen as the invader. We found that the correlation between the decrement of extractable soil nitrogen and biomass of alligator weed was not significant, and that species diversity, independent of functional groups diversity, did not show a significant correlation with invasibility. However, the communities with higher functional groups diversity significantly reduced the biomass of alligator weed by decreasing its resource opportunity. Functional traits of species also influenced the success of the invasion. Alternanthera sessilis, in the same morphological and functional group as alligator weed, was significantly resistant to alligator weed invasion. Because community invasibility is influenced by many factors and interactions among them, the pattern and mechanisms of community invasibility are likely to be far subtler than we found in this study. More carefully manipulated experiments coupled with theoretical modeling studies are essential steps to a more profound understanding of community invasibility.", "which Investigated species ?", "Plants", 58.0, 64.0], ["Aim: A recent upsurge of interest in the island biogeography of exotic species has followed from the argument that they may provide valuable information on the natural processes structuring island biotas. Here, we use data on the occurrence of exotic bird species across oceanic islands worldwide to demonstrate an alternative and previously untested hypothesis that these distributional patterns are a simple consequence of where humans have released such species, and hence of the number of species released. Location: Islands around the world. Methods: Statistical analysis of published information on the numbers of exotic bird species introduced to, and established on, islands around the world. Results: Established exotic birds showed very similar species-area relationships to native species, but different species-isolation relationships. 
However, in both cases the relationship for established exotics simply mimicked that for the number of exotic bird species introduced. Exotic bird introductions scaled positively with human population size and island isolation, and islands that had seen more native species extinctions had had more exotic species released. Main conclusion: The island biogeography of exotic birds is primarily a consequence of human, rather than natural, processes. \u00a9 2007 The Authors Journal compilation \u00a9 2007 Blackwell Publishing Ltd.", "which Investigated species ?", "Birds", 729.0, 734.0], ["We investigated whether plasticity in growth responses to nutrients could predict invasive potential in aquatic plants by measuring the effects of nutrients on growth of eight non\u2010invasive native and six invasive exotic aquatic plant species. Nutrients were applied at two levels, approximating those found in urbanized and relatively undisturbed catchments, respectively. To identify systematic differences between invasive and non\u2010invasive species, we compared the growth responses (total biomass, root:shoot allocation, and photosynthetic surface area) of native species with those of related invasive species after 13 weeks growth. The results were used to seek evidence of invasive potential among four recently naturalized species. There was evidence that invasive species tend to accumulate more biomass than native species (P = 0.0788). Root:shoot allocation did not differ between native and invasive plant species, nor was allocation affected by nutrient addition. However, the photosynthetic surface area of invasive species tended to increase with nutrients, whereas it did not among native species (P = 0.0658). Of the four recently naturalized species, Hydrocleys nymphoides showed the same nutrient\u2010related plasticity in photosynthetic area displayed by known invasive species. Cyperus papyrus showed a strong reduction in photosynthetic area with increased nutrients. H. nymphoides and C. papyrus also accumulated more biomass than their native relatives. H. nymphoides possesses both of the traits we found to be associated with invasiveness, and should thus be regarded as likely to be invasive.", "which Investigated species ?", "Plants", 112.0, 118.0], ["One hundred and seventy-three exotic angiosperms form 48.2% of the angiosperm flora of Lord Howe Island (31\u00b0 35'S, 159\u00b0 05'E) in the south Pacific Ocean. The families Poaceae (23%) and Asteraceae (13%) dominate the exotic flora. Some 30% are native to the Old World, 26% from the New World and 14% from Eurasia. Exotics primarily occur on heavily disturbed areas but c. 10% are widely distributed in undisturbed vegetation. Analysis of historical records, eleven species lists over the 128 years 1853-1981, shows that invasion has been a continuous process at an exponential rate. Exotics have been naturalized at the overall rate of 1.3 species y-1. Most exotics were deliberately introduced as pasture species or accidentally as contaminants although ornamental plants are increasing. Exotics show some evidence of invading progressively less disturbed habitats but the response of each species is individualistic. As introduction of exotics is a social rather than an ecological problem, the present pattern will continue.", "which Investigated species ?", "Plants", 764.0, 770.0], ["Islands can serve as model systems for understanding how biological invasions affect community structure and ecosystem function. 
Here we show invasion by the alien crazy ant Anoplolepis gracilipes causes a rapid, catastrophic shift in the rain forest ecosystem of a tropical oceanic island, affecting at least three trophic levels. In invaded areas, crazy ants extirpate the red land crab, the dominant endemic consumer on the forest floor. In doing so, crazy ants indirectly release seedling recruitment, enhance species richness of seedlings, and slow litter breakdown. In the forest canopy, new associations between this invasive ant and honeydew-secreting scale insects accelerate and diversify impacts. Sustained high densities of foraging ants on canopy trees result in high population densities of host-generalist scale insects and growth of sooty moulds, leading to canopy dieback and even deaths of canopy trees. The indirect fallout from the displacement of a native keystone species by an ant invader, itself abetted by introduced/cryptogenic mutualists, produces synergism in impacts to precipitate invasional meltdown in this system.", "which Investigated species ?", "Insects", 666.0, 673.0], ["Nonnative, invasive plant species often increase in growth, abundance, or habitat distribution in their introduced ranges. The enemy-release hypothesis, proposed to account for these changes, posits that herbivores and pathogens (natural enemies) limit growth or survival of plants in native areas, that natural enemies have less impact in the introduced than in the native range, and that the release from natural-enemy regulation in areas of introduction accounts in part for observed changes in plant abundance. We tested experimentally the enemy-release hypothesis with the invasive neotropical shrub Clidemia hirta (L.) D. Don (Melastomataceae). Clidemia hirta does not occur in forest in its native range but is a vigorous invader of tropical forest in its introduced range. Therefore, we tested the specific prediction that release from natural enemies has contributed to its expanded habitat distribution. We planted C. hirta into understory and open habitats where it is native (Costa Rica) and where it has been introduced (Hawaii) and applied pesticides to examine the effects of fungal pathogen and insect herbivore exclusion. In understory sites in Costa Rica, C. hirta survival increased by 12% if sprayed with insecticide, 19% with fungicide, and 41% with both insecticide and fungicide compared to control plants sprayed only with water. Exclusion of natural enemies had no effect on survival in open sites in Costa Rica or in either habitat in Hawaii. Fungicide application promoted relative growth rates of plants that survived to the end of the experiment in both habitats of Costa Rica but not in Hawaii, suggesting that fungal pathogens only limit growth of C. hirta where it is native. Galls, stem borers, weevils, and leaf rollers were prevalent in Costa Rica but absent in Hawaii. In addition, the standing percentage of leaf area missing on plants in the control (water only) treatment was five times greater on plants in Costa Rica than in Hawaii and did not differ between habitats. The results from this study suggest that significant effects of herbivores and fungal pathogens may be limited to particular habitats. For Clidemia hirta, its absence from forest understory in its native range likely results in part from the strong pressures of natural enemies. 
Its invasion into Hawaiian forests is apparently aided by a release from these herbivores and pathogens.", "which Investigated species ?", "Plants", 275.0, 281.0], ["The paper provides the first estimate of the composition and structure of alien plants occurring in the wild in the European continent, based on the results of the DAISIE project (2004\u20132008), funded by the 6th Framework Programme of the European Union and aimed at \u201ccreating an inventory of invasive species that threaten European terrestrial, freshwater and marine environments\u201d. The plant section of the DAISIE database is based on national checklists from 48 European countries/regions and Israel; for many of them the data were compiled during the project and for some countries DAISIE collected the first comprehensive checklists of alien species, based on primary data (e.g., Cyprus, Greece, F. Y. R. O. Macedonia, Slovenia, Ukraine). In total, the database contains records of 5789 alien plant species in Europe (including those native to a part of Europe but alien to another part), of which 2843 are alien to Europe (of extra-European origin). The research focus was on naturalized species; there are in total 3749 naturalized aliens in Europe, of which 1780 are alien to Europe. This represents a marked increase compared to 1568 alien species reported by a previous analysis of data in Flora Europaea (1964\u20131980). Casual aliens were marginally considered and are represented by 1507 species with European origins and 872 species whose native range falls outside Europe. The highest diversity of alien species is concentrated in industrialized countries with a tradition of good botanical recording or intensive recent research. The highest number of all alien species, regardless of status, is reported from Belgium (1969), the United Kingdom (1779) and Czech Republic (1378). The United Kingdom (857), Germany (450), Belgium (447) and Italy (440) are countries with the most naturalized neophytes. The number of naturalized neophytes in European countries is determined mainly by the interaction of temperature and precipitation; it increases with increasing precipitation but only in climatically warm and moderately warm regions. Of the nowadays naturalized neophytes alien to Europe, 50% arrived after 1899, 25% after 1962 and 10% after 1989. At present, approximately 6.2 new species that are capable of naturalization are arriving each year. Most alien species have relatively restricted European distributions; half of all naturalized species occur in four or fewer countries/regions, whereas 70% of non-naturalized species occur in only one region. Alien species are drawn from 213 families, dominated by large global plant families which have a weedy tendency and have undergone major radiations in temperate regions (Asteraceae, Poaceae, Rosaceae, Fabaceae, Brassicaceae). There are 1567 genera, which have alien members in European countries, the commonest being globally-diverse genera comprising mainly urban and agricultural weeds (e.g., Amaranthus, Chenopodium and Solanum) or cultivated for ornamental purposes (Cotoneaster, the genus richest in alien species). Only a few large genera which have successfully invaded (e.g., Oenothera, Oxalis, Panicum, Helianthus) are predominantly of non-European origin. Conyza canadensis, Helianthus tuberosus and Robinia pseudoacacia are the most widely distributed alien species. 
Of all naturalized aliens present in Europe, 64.1% occur in industrial habitats and 58.5% on arable land and in parks and gardens. Grasslands and woodlands are also highly invaded, with 37.4 and 31.5%, respectively, of all naturalized aliens in Europe present in these habitats. Mires, bogs and fens are least invaded; only approximately 10% of aliens in Europe occur there. Intentional introductions to Europe (62.8% of the total number of naturalized aliens) prevail over unintentional (37.2%). Ornamental and horticultural introductions escaped from cultivation account for the highest number of species, 52.2% of the total. Among unintentional introductions, contaminants of seed, mineral materials and other commodities are responsible for 1091 alien species introductions to Europe (76.6% of all species introduced unintentionally) and 363 species are assumed to have arrived as stowaways (directly associated with human transport but arriving independently of commodity). Most aliens in Europe have a native range in the same continent (28.6% of all donor region records are from another part of Europe where the plant is native); in terms of species numbers the contribution of Europe as a region of origin is 53.2%. Considering aliens to Europe separately, 45.8% of species have their native distribution in North and South America, 45.9% in Asia, 20.7% in Africa and 5.3% in Australasia. Based on species composition, European alien flora can be classified into five major groups: (1) north-western, comprising Scandinavia and the UK; (2) west-central, extending from Belgium and the Netherlands to Germany and Switzerland; (3) Baltic, including only the former Soviet Baltic states; (4) east-central, comprising the remainder of central and eastern Europe; (5) southern, covering the entire Mediterranean region. The clustering patterns cut across some European bioclimatic zones; cultural factors such as regional trade links and traditional local preferences for crop, forestry and ornamental species are also important by influencing the introduced species pool. Finally, the paper evaluates the state of the art in the field of plant invasions in Europe, points to research gaps and outlines avenues of further research towards documenting alien plant invasions in Europe. The data are of varying quality and need to be further assessed with respect to the invasion status and residence time of the species included. This concerns especially the naturalized/casual status; so far, this information is available comprehensively for only 19 countries/regions of the 49 considered. Collating an integrated database on the alien flora of Europe can form a principal contribution to developing a European-wide management strategy of alien species.", "which Investigated species ?", "Plants", 80.0, 86.0], ["Abstract Abiotic factors, particularly area, and biotic factors play important roles in determining species richness of continental islands such as cedar glades. We examined the relationship between environmental parameters and species richness on glades and the influence of native species richness on exotic invasion. Field surveys of vascular plants on 40 cedar glades in Rutherford County, Tennessee were conducted during the 2001\u20132003 growing seasons. Glades were geo-referenced to obtain area, perimeter, distance from autotour road, and degree of isolation. Amount of disturbance also was recorded. 
Two hundred thirty-two taxa were found with Andropogon virginicus, Croton monanthogynus, Juniperus virginiana, Panicum flexile, and Ulmus alata present on all glades. The exotics Ligustrum sinense, Leucanthemum vulgare, and Taraxacum officinale occurred on the majority of glades. Lobelia appendiculata var. gattingeri, Leavenworthia stylosa, and Pediomelum subacaule were the most frequent endemics. Richness of native, exotic and endemic species increased with increasing area and perimeter and decreased with increasing isolation (P \u2264 0.03); richness was unrelated to distance to road (P \u2265 0.20). Perimeter explained a greater amount of variation than area for native and exotic species, whereas area accounted for greater variation for endemic species. Slope of the relationship between area and total richness (0.17) was within the range reported for continental islands. Disturbed glades contained a higher number of exotic and native species than nondisturbed ones, but they were larger (P \u2264 0.03). Invasion of exotic species was unrelated to native species richness when glade size was statistically controlled (P = 0.88). Absence of a relationship is probably due to a lack of substantial competitive interactions. Most endemics occurred over a broad range of glade sizes emphasizing the point that glades of all sizes are worthy of protection.", "which Investigated species ?", "Plants", 346.0, 352.0], ["The idea that naturalised invading plants have fewer phytophagous insects associated with them in their new environment relative to their native range is often assumed, but quantitative data are few and mostly refer to pests on crop species. In this study, the incidence of seed-eating insect larvae in flowerheads of naturalised Asteraceae in New Zealand is compared with that in Britain where the species are native. Similar surveys were carried out in both countries by sampling 200 flowerheads of three populations of the same thirteen species. In the New Zealand populations only one seed-eating insect larva was found in 7800 flowerheads (0.013% infested flowerheads, all species combined) in contrast with the British populations which had 487 (6.24%) flowerheads infested. Possible reasons for the low colonization level of the introduced Asteraceae by native insects in New Zealand are 1) the relatively recent introduction of the plants (100-200 years), 2) their phylogenetic distance from the native flora, and 3) the specialised nature of the bud-infesting habit of the insects.", "which Investigated species ?", "Plants", 35.0, 41.0], ["A longstanding goal in the study of biological invasions is to predict why some species are successful invaders, whereas others are not. To understand this process, detailed information is required concerning the pool of species that have the opportunity to become established. Here we develop an extensive database of ant species unintentionally transported to the continental United States and use these data to test how opportunity and species-level ecological attributes affect the probability of establishment. This database includes an amount of information on failed introductions that may be unparalleled for any group of unintentionally introduced insects. We found a high diversity of species (232 species from 394 records), 12% of which have become established in the continental United States. The probability of establishment increased with the number of times a species was transported (propagule pressure) but was also influenced by nesting habit. 
Ground nesting species were more likely to become established compared with arboreal species. These results highlight the value of developing similar databases for additional groups of organisms transported by humans to obtain quantitative data on the first stages of the invasion process: opportunity and transport.", "which Investigated species ?", "Insects", 657.0, 664.0], ["It is commonly assumed that invasive plants grow more vigorously in their introduced than in their native range, which is then attributed to release from natural enemies or to microevolutionary changes, or both. However, few studies have tested this assumption by comparing the performance of invasive species in their native vs. introduced ranges. Here, we studied abundance, growth, reproduction, and herbivory in 10 native Chinese and 10 invasive German populations of the invasive shrub Buddleja davidii (Scrophulariaceae; butterfly bush). We found strong evidence for increased plant vigour in the introduced range: plants in invasive populations were significantly taller and had thicker stems, larger inflorescences, and heavier seeds than plants in native populations. These differences in plant performance could not be explained by a more benign climate in the introduced range. Since leaf herbivory was substantially reduced in invasive populations, our data rather suggest that escape from natural enemies, associated with increased plant growth and reproduction, contributes to the invasion success of B. davidii in Central Europe.", "which Investigated species ?", "Plants", 37.0, 43.0], ["Summary 1. Plastic responses to spatiotemporal environmental variation strongly influence species distribution, with widespread species expected to have high phenotypic plasticity. Theoretically, high phenotypic plasticity has been linked to plant invasiveness because it facilitates colonization and rapid spreading over large and environmentally heterogeneous new areas. 2. To determine the importance of phenotypic plasticity for plant invasiveness, we compare well-known exotic invasive species with widespread native congeners. First, we characterized the phenotype of 20 invasive\u2013native ecologically and phylogenetically related pairs from the Mediterranean region by measuring 20 different traits involved in resource acquisition, plant competition ability and stress tolerance. Second, we estimated their plasticity across nutrient and light gradients. 3. On average, invasive species had greater capacity for carbon gain and enhanced performance over a range of limiting to saturating resource availabilities than natives. However, both groups responded to environmental variations with high albeit similar levels of trait plasticity. Therefore, contrary to the theory, the extent of phenotypic plasticity was not significantly higher for invasive plants. 4. We argue that the combination of studying mean values of a trait with its plasticity can render insightful conclusions on functional comparisons of species such as those exploring the performance of species coexisting in heterogeneous and changing environments.", "which Investigated species ?", "Plants", 1257.0, 1263.0], ["Background: In temperate mountains, most non-native plant species reach their distributional limit somewhere along the elevational gradient. However, it is unclear if growth limitations can explain upper range limits and whether phenotypic plasticity or genetic changes allow species to occupy a broad elevational gradient. 
Aims: We investigated how non-native plant individuals from different elevations responded to growing season temperatures, which represented conditions at the core and margin of the elevational distributions of the species. Methods: We recorded the occurrence of nine non-native species in the Swiss Alps and subsequently conducted a climate chamber experiment to assess growth rates of plants from different elevations under different temperature treatments. Results: The elevational limit observed in the field was not related to the species' temperature response in the climate chamber experiment. Almost all species showed a similar level of reduction in growth rates under lower temperatures independent of the upper elevational limit of the species' distribution. For two species we found indications for genetic differentiation among plants from different elevations. Conclusions: We conclude that factors other than growing season temperatures, such as extreme events or winter mortality, might shape the elevational limit of non-native species, and that ecological filtering might select for genotypes that are phenotypically plastic.", "which Investigated species ?", "Plants", 711.0, 717.0], ["Previous studies have concluded that southern ocean islands are anomalous because past glacial extent and current temperature apparently explain most variance in their species richness. Here, the relationships between physical variables and species richness of vascular plants, insects, land and seabirds, and mammals were reexamined for these islands. Indigenous and introduced species were distinguished, and relationships between the latter and human occupancy variables were investigated. Most variance in indigenous species richness was explained by combinations of area and temperature (56%)\u2014vascular plants; distance (nearest continent) and vascular plant species richness (75%)\u2014insects; area and chlorophyll concentration (65%)\u2014seabirds; and indigenous insect species richness and age (73%)\u2014land birds. Indigenous insects and plants, along with distance (closest continent), explained most variance (70%) in introduced land bird species richness. A combination of area and temperature explained most variance in species richness of introduced vascular plants (73%), insects (69%), and mammals (69%). However, there was a strong relationship between area and number of human occupants. This suggested that larger islands attract more human occupants, increasing the risk of propagule transfer, while temperature increases the chance of propagule establishment. Consequently, human activities on these islands should be regulated more tightly.", "which Investigated species ?", "Birds", 804.0, 809.0], ["This study provides an updated picture of mammal invasions in Europe, based on detailed analysis of information on introductions occurring from the Neolithic to recent times. The assessment considered all information on species introductions, known extinctions and successful eradication campaigns, to reconstruct a trend of alien mammals' establishment in the region. Through a comparative analysis of the data on introduction, with the information on the impact of alien mammals on native and threatened species of Europe, the present study also provides an objective assessment of the overall impact of mammal introductions on European biodiversity, including information on impact mechanisms. 
The results of this assessment confirm the constant increase of mammal invasions in Europe, with no indication of a reduction of the rate of introduction. The study also confirms the severe impact of alien mammals, which directly threaten a significant number of native species, including many highly threatened species. The results could help to prioritize species for response, as required by international conventions and obligations.", "which Investigated species ?", "Mammals", 331.0, 338.0], ["The Enemies Hypothesis predicts that alien plants have a competitive advantage over native plants because they are often introduced with few herbivores or diseases. To investigate this hypothesis, we transplanted seedlings of the invasive alien tree, Sapium sebiferum (Chinese tallow tree) and an ecologically similar native tree, Celtis laevigata (hackberry), into mesic forest, floodplain forest, and coastal prairie sites in east Texas and manipulated foliar fungal diseases and insect herbivores with fungicidal and insecticidal sprays. As predicted by the Enemies Hypothesis, insect herbivores caused significantly greater damage to untreated Celtis seedlings than to untreated Sapium seedlings. However, contrary to predictions, suppression of insect herbivores caused significantly greater increases in survivorship and growth of Sapium seedlings compared to Celtis seedlings. Regressions suggested that Sapium seedlings compensate for damage in the first year but that this greatly increases the risk of mortality in subsequent years. Fungal diseases had no effects on seedling survival or growth. The Recruitment Limitation Hypothesis predicts that the local abundance of a species will depend more on local seed input than on competitive ability at that location. To investigate this hypothesis, we added seeds of Celtis and Sapium on and off of artificial soil disturbances at all three sites. Adding seeds increased the density of Celtis seedlings and sometimes Sapium seedlings, with soil disturbance only affecting density of Celtis. Together the results of these experiments suggest that the success of Sapium may depend on high rates of seed input into these ecosystems and high growth potential, as well as performance advantages of seedlings caused by low rates of herbivory.", "which Investigated species ?", "Plants", 43.0, 49.0], ["Escape from natural enemies is a widely held generalization for the success of exotic plants. We conducted a large-scale experiment in Hawaii (USA) to quantify impacts of ungulate removal on plant growth and performance, and to test whether elimination of an exotic generalist herbivore facilitated exotic success. Assessment of impacted and control sites before and after ungulate exclusion using airborne imaging spectroscopy and LiDAR, time series satellite observations, and ground-based field studies over nine years indicated that removal of generalist herbivores facilitated exotic success, but the abundance of native species was unchanged. Vegetation cover <1 m in height increased in ungulate-free areas from 48.7% +/- 1.5% to 74.3% +/- 1.8% over 8.4 years, corresponding to an annualized growth rate of lambda = 1.05 +/- 0.01 yr(-1) (median +/- SD). Most of the change was attributable to exotic plant species, which increased from 24.4% +/- 1.4% to 49.1% +/- 2.0% (lambda = 1.08 +/- 0.01 yr(-1)). Native plants experienced no significant change in cover (23.0% +/- 1.3% to 24.2% +/- 1.8%, lambda = 1.01 +/- 0.01 yr(-1)). 
Time series of satellite phenology were indistinguishable between the treatment and a 3.0-km2 control site for four years prior to ungulate removal, but they diverged immediately following exclusion of ungulates. Comparison of monthly EVI means before and after ungulate exclusion and between the managed and control areas indicates that EVI strongly increased in the managed area after ungulate exclusion. Field studies and airborne analyses show that the dominant invader was Senecio madagascariensis, an invasive annual forb that increased from < 0.01% to 14.7% fractional cover in ungulate-free areas (lambda = 1.89 +/- 0.34 yr(-1)), but which was nearly absent from the control site. A combination of canopy LAI, water, and fractional cover were expressed in satellite EVI time series and indicate that the invaded region maintained greenness during drought conditions. These findings demonstrate that enemy release from generalist herbivores can facilitate exotic success and suggest a plausible mechanism by which invasion occurred. They also show how novel remote-sensing technology can be integrated with conservation and management to help address exotic plant invasions.", "which Investigated species ?", "Plants", 86.0, 92.0], ["A prominent hypothesis for plant invasions is escape from the inhibitory effects of soil biota. Although the strength of these inhibitory effects, measured as soil feedbacks, has been assessed between natives and exotics in non\u2010native ranges, few studies have compared the strength of plant\u2013soil feedbacks for exotic species in soils from non\u2010native versus native ranges. We examined whether 6 perennial European forb species that are widespread invaders in North American grasslands (Centaurea stoebe, Euphorbia esula, Hypericum perforatum, Linaria vulgaris, Potentilla recta and Leucanthemum vulgare) experienced different suppressive effects of soil biota collected from 21 sites across both ranges. Four of the six species tested exhibited substantially reduced shoot biomass in \u2018live\u2019 versus sterile soil from Europe. In contrast, North American soils produced no significant feedbacks on any of the invasive species tested indicating a broad scale escape from the inhibitory effects of soil biota. Negative feedbacks generated by European soil varied idiosyncratically among sites and species. Since this variation did not correspond with the presence of the target species at field sites, it suggests that negative feedbacks can be generated from soil biota that are widely distributed in native ranges in the absence of density\u2010dependent effects. Synthesis. Our results show that for some invasives, native soils have strong suppressive potential, whereas this is not the case in soils from across the introduced range. Differences in regional\u2010scale evolutionary history among plants and soil biota could ultimately help explain why some exotics are able to occur at higher abundance in the introduced versus native range.", "which Investigated species ?", "Plants", 1585.0, 1591.0], ["In introduced organisms, dispersal propensity is expected to increase during range expansion. This prediction is based on the assumption that phenotypic plasticity is low compared to genetic diversity, and an increase in dispersal can be counteracted by the Allee effect. Empirical evidence in support of these hypotheses is however lacking. 
The present study tested for evidence of differentiation in dispersal-related traits and the Allee effect in the wind-dispersed invasive Senecio inaequidens (Asteraceae). We collected capitula from individuals in ten field populations, along an invasion route including the original introduction site in southern France. In addition, we conducted a common garden experiment from field-collected seeds and obtained capitula from individuals representing the same ten field populations. We analysed phenotypic variation in dispersal traits between field and common garden environments as a function of the distance between populations and the introduction site. Our results revealed low levels of phenotypic differentiation among populations. However, significant clinal variation in dispersal traits was demonstrated in common garden plants representing the invasion route. In field populations, similar trends in dispersal-related traits and evidence of an Allee effect were not detected. In part, our results supported expectations of increased dispersal capacity with range expansion, and emphasized the contribution of phenotypic plasticity under natural conditions.", "which Investigated species ?", "Plants", 1175.0, 1181.0], ["Enemy release of exotic plants from soil pathogens has been tested by examining plant-soil feedback effects in repetitive growth cycles. However, positive soil feedback may also be due to enhanced benefit from the local arbuscular mycorrhizal fungi (AMF). Few studies actually have tested pathogen effects, and none of them did so in arid savannas. In the Kalahari savanna in Botswana, we compared the soil feedback of the exotic grass Cenchrus biflorus with that of two dominant native grasses, Eragrostis lehmanniana and Aristida meridionalis. The exotic grass had neutral to positive soil feedback, whereas both native grasses showed neutral to negative feedback effects. Isolation and testing of root-inhabiting fungi of E. lehmanniana yielded two host-specific pathogens that did not influence the exotic C. biflorus or the other native grass, A. meridionalis. None of the grasses was affected by the fungi that were isolated from the roots of the exotic C. biflorus. We isolated and compared the AMF community of the native and exotic grasses by polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE), targeting AMF 18S rRNA. We used roots from monospecific field stands and from plants grown in pots with mixtures of soils from the monospecific field stands. Three-quarters of the root samples of the exotic grass had two nearly identical sequences, showing 99% similarity with Glomus versiforme. The two native grasses were also associated with distinct bands, but each of these bands occurred in only a fraction of the root samples. The native grasses contained a higher diversity of AMF bands than the exotic grass. Canonical correspondence analyses of the AMF band patterns revealed almost as much difference between the native and exotic grasses as between the native grasses. In conclusion, our results support the hypothesis that release from soil-borne enemies may facilitate local abundance of exotic plants, and we provide the first evidence that these processes may occur in arid savanna ecosystems. 
Pathogenicity tests implicated the involvement of soil pathogens in the soil feedback responses, and further studies should reveal the functional consequences of the observed high infection with a low diversity of AMF in the roots of exotic plants.", "which Investigated species ?", "Plants", 24.0, 30.0], ["The differences in phenotypic plasticity between invasive (North American) and native (German) provenances of the invasive plant Lythrum salicaria (purple loosestrife) were examined using a multivariate reaction norm approach testing two important attributes of reaction norms described by multivariate vectors of phenotypic change: the magnitude and direction of mean trait differences between environments. Data were collected for six life history traits from native and invasive plants using a split-plot design with experimentally manipulated water and nutrient levels. We found significant differences between native and invasive plants in multivariate phenotypic plasticity for comparisons between low and high water treatments within low nutrient levels, between low and high nutrient levels within high water treatments, and for comparisons that included both a water and nutrient level change. The significant genotype x environment (G x E) effects support the argument that invasiveness of purple loosestrife is closely associated with the interaction of high levels of soil nutrient and flooding water regime. Our results indicate that native and invasive plants take different strategies for growth and reproduction; native plants flowered earlier and allocated more to flower production, while invasive plants exhibited an extended period of vegetative growth before flowering to increase height and allocation to clonal reproduction, which may contribute to increased fitness and invasiveness in subsequent years.", "which Investigated species ?", "Plants", 482.0, 488.0], ["\n\nHolcus lanatus L. can colonise a wide range of sites within the naturalised grassland of the Humid Dominion of Chile. The objectives were to determine plant growth mechanisms and strategies that have allowed H. lanatus to colonise contrasting pastures and to determine the existence of ecotypes of H. lanatus in southern Chile. Plants of H. lanatus were collected from four geographic zones of southern Chile and established in a randomised complete block design with four replicates. Five newly emerging tillers were marked per plant and evaluated at the vegetative, pre-ear emergence, complete emerged inflorescence, end of flowering period, and mature seed stages. At each evaluation, one marked tiller was harvested per plant. The variables measured included lamina length and width, tiller height, length of the inflorescence, total number of leaves, and leaf, stem, and inflorescence mass. At each phenological stage, groups of accessions were statistically formed using cluster analysis. The grouping of accessions (cluster analysis) into statistically different groups (ANOVA and canonical variate analysis) indicated the existence of different ecotypes. The phenotypic variation within each group of the accessions suggested that each group has its own phenotypic plasticity. It is concluded that the successful colonisation by H. lanatus has resulted from diversity within the species.\n", "which Investigated species ?", "Plants", 338.0, 344.0], ["Abstract We documented microhabitat occurrence and growth of Lonicera japonica to identify factors related to its invasion into a southern Illinois shale barren. The barren was surveyed for L. 
japonica in June 2003, and the microhabitats of established L. japonica plants were compared to random points that sampled the range of available microhabitats in the barren. Vine and leaf characters were used as measurements of plant growth. Lonicera japonica occurred preferentially in areas of high litter cover and species richness, comparatively small trees, low PAR, low soil moisture and temperature, steep slopes, and shallow soils. Plant growth varied among these microhabitats. Among plots where L. japonica occurred, growth was related to soil and light conditions, and aspects of surrounding cover. Overhead canopy cover was a common variable associated with nearly all measured growth traits. Plasticity of traits to improve invader success can only affect the likelihood of invasion once constraints to establishment and persistence have been surmounted. Therefore, understanding where L. japonica invasion occurs, and microhabitat interactions with plant growth are important for estimating invasion success.", "which Investigated species ?", "Plants", 265.0, 271.0], ["Abstract Russian olive (Elaeagnus angustifolia Linnaeus; Elaeagnaceae) is an exotic shrub/tree that has become invasive in many riparian ecosystems throughout semi-arid, western North America, including southern British Columbia, Canada. Despite its prevalence and the potentially dramatic impacts it can have on riparian and aquatic ecosystems, little is known about the insect communities associated with Russian olive within its invaded range. At six sites throughout the Okanagan valley of southern British Columbia, Canada, we compared the diversity of insects associated with Russian olive plants to that of insects associated with two commonly co-occurring native plant species: Woods\u2019 rose (Rosa woodsii Lindley; Rosaceae) and Saskatoon (Amelanchier alnifolia (Nuttall) Nuttall ex Roemer; Rosaceae). Total abundance did not differ significantly among plant types. Family richness and Shannon diversity differed significantly between Woods\u2019 rose and Saskatoon, but not between either of these plant types and Russian olive. An abundance of Thripidae (Thysanoptera) on Russian olive and Tingidae (Hemiptera) on Saskatoon contributed to significant compositional differences among plant types. The families Chloropidae (Diptera), Heleomyzidae (Diptera), and Gryllidae (Orthoptera) were uniquely associated with Russian olive, albeit in low abundances. Our study provides valuable and novel information about the diversity of insects associated with an emerging plant invader of western Canada.", "which Investigated species ?", "Plants", 655.0, 661.0], ["ABSTRACT Solenopsis invicta Buren is an important invasive pest that has a negative impact on biodiversity. However, current knowledge regarding the ecological effects of its interaction with honeydew-producing hemipteran insects is inadequate. To partially address this problem, we assessed whether the interaction between the two invasive species S. invicta and Phenacoccus solenopsis Tinsley mediated predation of P. solenopsis by Propylaea japonica Thunberg lady beetles using field investigations and indoor experiments. S. invicta tending significantly reduced predation by the Pr. japonica lady beetle, and this response was more pronounced for lady beetle larvae than for adults. A field investigation showed that the species richness and quantity of lady beetle species in plots with fire ants were much lower than in those without fire ants. In an olfaction bioassay, lady beetles preferred to move toward untended rather than tended mealybugs. Overall, these results suggest that mutualism between S. invicta and P. solenopsis may have a serious impact on predation of P. solenopsis by lady beetles, which could promote growth of P. solenopsis populations.", "which Investigated species ?", "Insects", 221.0, 228.0], ["Abstract: Roads are believed to be a major contributing factor to the ongoing spread of exotic plants. We examined the effect of road improvement and environmental variables on exotic and native plant diversity in roadside verges and adjacent semiarid grassland, shrubland, and woodland communities of southern Utah ( U.S.A. ). We measured the cover of exotic and native species in roadside verges and both the richness and cover of exotic and native species in adjacent interior communities ( 50 m beyond the edge of the road cut ) along 42 roads stratified by level of road improvement ( paved, improved surface, graded, and four\u2010wheel\u2010drive track ). In roadside verges along paved roads, the cover of Bromus tectorum was three times as great ( 27% ) as in verges along four\u2010wheel\u2010drive tracks ( 9% ). 
The cover of five common exotic forb species tended to be lower in verges along four\u2010wheel\u2010drive tracks than in verges along more improved roads. The richness and cover of exotic species were both more than 50% greater, and the richness of native species was 30% lower, at interior sites adjacent to paved roads than at those adjacent to four\u2010wheel\u2010drive tracks. In addition, environmental variables relating to dominant vegetation, disturbance, and topography were significantly correlated with exotic and native species richness and cover. Improved roads can act as conduits for the invasion of adjacent ecosystems by converting natural habitats to those highly vulnerable to invasion. However, variation in dominant vegetation, soil moisture, nutrient levels, soil depth, disturbance, and topography may render interior communities differentially susceptible to invasions originating from roadside verges. Plant communities that are both physically invasible ( e.g., characterized by deep or fertile soils ) and disturbed appear most vulnerable. Decision\u2010makers considering whether to build, improve, and maintain roads should take into account the potential spread of exotic plants.", "which Investigated species ?", "Plants", 95.0, 101.0], ["ABSTRACT Solenopsis invicta Buren is an important invasive pest that has a negative impact on biodiversity. However, current knowledge regarding the ecological effects of its interaction with honeydewproducing hemipteran insects is inadequate. To partially address this problem, we assessed whether the interaction between the two invasive species S. invicta and Phenacoccus solenopsis Tinsley mediated predation of P. solenopsis by Propylaea japonica Thunbery lady beetles using field investigations and indoor experiments. S. invicta tending significantly reduced predation by the Pr. japonica lady beetle, and this response was more pronounced for lady beetle larvae than for adults. A field investigation showed that the species richness and quantity of lady beetle species in plots with fire ants were much lower than in those without fire ants. In an olfaction bioassay, lady beetles preferred to move toward untended rather than tended mealybugs. Overall, these results suggest that mutualism between S. invicta and P. solenopsis may have a serious impact on predation of P. solenopsis by lady beetles, which could promote growth of P. solenopsis populations.", "which Investigated species ?", "Insects", 221.0, 228.0], ["Summary 1. The use of off-season burns to control exotic vegetation shows promise for land managers. In California, wildfires tend to occur in the summer and autumn, when most grassland vegetation is dormant. The effects of spring fires on native bunchgrasses have been examined but their impacts on native forbs have received less attention. 2. We introduced Erodium macrophyllum, a rare native annual forb, by seeding plots in 10 different areas in a California grassland. We tested the hypotheses that E. macrophyllum would perform better (increased fecundity and germination) when competing with native grasses than with a mixture of exotic and native grasses, and fire would alter subsequent demography of E. macrophyllum and other species\u2019 abundances. We monitored the demography of E. macrophyllum for two seasons in plots manually weeded so that they were free from exotics, and in areas that were burned or not burned the spring after seeding. 3. Weeding increased E. 
macrophyllum seedling emergence, survival and fecundity during both seasons. When vegetation was burned in June 2001 (at the end of the first growing season) to kill exotic grass seeds before they dispersed, all E. macrophyllum plants had finished their life cycle and dispersed seeds, suggesting that burns at this time of year would not directly impact on fecundity. In the growing season after burning (2002), burned plots had less recruitment of E. macrophyllum but more establishment of native grass seedlings, suggesting burning may differentially affect seedling recruitment. 4. At the end of the second growing season (June 2002), burned plots had less cover of exotic and native grasses but more cover of exotic forbs. Nevertheless, E. macrophyllum plants in burned plots had greater fecundity than in non-burned plots, suggesting that exotic grasses are more competitive than exotic forbs. 5. A glasshouse study showed that exotic grasses competitively suppress E. macrophyllum to a greater extent than native grasses, indicating that the poor performance of E. macrophyllum in the non-burned plots was due to exotic grass competition. 6. Synthesis and applications. This study illustrates that fire can alter the competitive environment in grasslands with differential effects on rare forbs, and that exotic grasses strongly interfere with E. macrophyllum. For land managers, the benefits of prescribed spring burns will probably outweigh the costs of decreased E. macrophyllum establishment. Land managers can use spring burns to cause a flush of native grass recruitment and to create an environment that is, although abundant with exotic forbs, ultimately less competitive compared with non-burned areas dominated by exotic grasses.", "which Investigated species ?", "Plants", 1205.0, 1211.0], ["Species richness of native, rare native, and exotic understorey plants was recorded at 120 sites in temperate grassy vegetation in New South Wales. Linear models were used to predict the effects of environment and disturbance on the richness of each of these groups. Total native species and rare native species showed similar responses, with richness declining on sites of increasing natural fertility of parent material as well as declining under conditions of water", "which Investigated species ?", "Plants", 64.0, 70.0], ["Summary 1 It has previously been hypothesized that low rates of attack by natural enemies may contribute to the invasiveness of exotic plants. 2 We tested this hypothesis by investigating the influence of pathogens on survival during a critical life-history stage: the seed bank. We used fungicide treatments to estimate the impacts of soil fungi on buried seeds of a taxonomically broad suite of congeneric natives and exotics, in both upland and wetland meadows. 3 Seeds of both natives and exotics were recovered at lower rates in wetlands than in uplands. Fungicide addition reduced this difference by improving recovery in wetlands, indicating that the lower recovery was largely attributable to a higher level of fungal mortality. This suggests that fungal pathogens may contribute to the exclusion of upland species from wetlands. 4 The effects of fungicide on the recovery of buried seeds did not differ between natives and exotics. Seeds of exotics were recovered at a higher rate than seeds of natives in uplands, but this effect was not attributable to fungal pathogens. 5 Fungal seed pathogens may offer poor prospects for the management of most exotic species. 
The lack of consistent differences in the responses of natives vs. exotics to fungicide suggests few aliens owe their success to low seed pathogen loads, while impacts of seed-pathogenic biocontrol agents on non-target species would be frequent.", "which Investigated species ?", "Plants", 135.0, 141.0], ["We studied the relative importance of residence time, propagule pressure, and species traits in three stages of invasion of alien woody plants cultivated for about 150 years in the Czech Republic, Central Europe. The probability of escape from cultivation, naturalization, and invasion was assessed using classification trees. We compared 109 escaped-not-escaped congeneric pairs, 44 naturalized-not-naturalized, and 17 invasive-not-invasive congeneric pairs. We used the following predictors of the above probabilities: date of introduction to the target region as a measure of residence time; intensity of planting in the target area as a proxy for propagule pressure; the area of origin; and 21 species-specific biological and ecological traits. The misclassification rates of the naturalization and invasion model were low, at 19.3% and 11.8%, respectively, indicating that the variables used included the major determinants of these processes. The probability of escape increased with residence time in the Czech Republic, whereas the probability of naturalization increased with the residence time in Europe. This indicates that some species were already adapted to local conditions when introduced to the Czech Republic. Apart from residence time, the probability of escape depends on planting intensity (propagule pressure), and that of naturalization on the area of origin and fruit size; it is lower for species from Asia and those with small fruits. The probability of invasion is determined by a long residence time and the ability to tolerate low temperatures. These results indicate that a simple suite of factors determines, with a high probability, the invasion success of alien woody plants, and that the relative role of biological traits and other factors is stage dependent. High levels of propagule pressure as a result of planting lead to woody species eventually escaping from cultivation, regardless of biological traits. However, the biological traits play a role in later stages of invasion.", "which Investigated species ?", "Plants", 136.0, 142.0], ["Summary 1 We investigated factors hypothesized to influence introduction success and subsequent geographical range size in 52 species of bird that have been introduced to mainland Australia. 2 The 19 successful species had been introduced more times, at more sites and in greater overall numbers. Relative to failed species, successfully introduced species also had a greater area of climatically suitable habitat available in Australia, a larger overseas range size and were more likely to have been introduced successfully outside Australia. After controlling for phylogeny these relationships held, except that with overseas range size and, in addition, larger-bodied species had a higher probability of introduction success. There was also a marked taxonomic bias: gamebirds had a much lower probability of success than other species. A model including five of these variables explained perfectly the patterns in introduction success across-species. 
3 Of the successful species, those with larger geographical ranges in Australia had a greater area of climatically suitable habitat, traits associated with a faster population growth rate (small body size, short incubation period and more broods per season) and a larger overseas range size. The relationships between range size in Australia, the extent of climatically suitable habitat and overseas range size held after controlling for phylogeny. 4 We discuss the probable causes underlying these relationships and why, in retrospect, the outcome of bird introductions to Australia are highly predictable.", "which Investigated species ?", "Birds", NaN, NaN], ["1 Understanding why some alien plant species become invasive when others fail is a fundamental goal in invasion ecology. We used detailed historical planting records of alien plant species introduced to Amani Botanical Garden, Tanzania and contemporary surveys of their invasion status to assess the relative ability of phylogeny, propagule pressure, residence time, plant traits and other factors to explain the success of alien plant species at different stages of the invasion process. 2 Species with native ranges centred in the tropics and with larger seeds were more likely to regenerate, whereas naturalization success was explained by longer residence time, faster growth rate, fewer seeds per fruit, smaller seed mass and shade tolerance. 3 Naturalized species spreading greater distances from original plantings tended to have more seeds per fruit, whereas species dispersed by canopy\u2010feeding animals and with native ranges centred on the tropics tended to have spread more widely in the botanical garden. Species dispersed by canopy\u2010feeding animals and with greater seed mass were more likely to be established in closed forest. 4 Phylogeny alone made a relatively minor contribution to the explanatory power of statistical models, but a greater proportion of variation in spread within the botanical garden and in forest establishment was explained by phylogeny alone than for other models. Phylogeny jointly with variables also explained a greater proportion of variation in forest establishment than in other models. Phylogenetic correction weakened the importance of dispersal syndrome in explaining compartmental spread, seed mass in the forest establishment model, and all factors except for growth rate and residence time in the naturalization model. 5 Synthesis. This study demonstrates that it matters considerably how invasive species are defined when trying to understand the relative ability of multiple variables to explain invasion success. By disentangling different invasion stages and using relatively objective criteria to assess species status, this study highlights that relatively simple models can help to explain why some alien plants are able to naturalize, spread and even establish in closed tropical forests.", "which Investigated species ?", "Plants", 2162.0, 2168.0], ["1. Soil communities and their interactions with plants may play a major role in determining the success of invasive species. However, rigorous investigations of this idea using cross\u2010continental comparisons, including native and invasive plant populations, are still scarce.", "which Investigated species ?", "Plants", 48.0, 54.0], ["The paper examines the role of feral sheep (Ovis aries) in facilitating the naturalization of alien plants and degrading a formerly robust and stable ecosystem of Socorro, an isolated oceanic island in the Mexican Pacific Ocean. 
Approximately half of the island is still sheep\u2010free. The other half has been widely overgrazed and transformed into savannah and prairie\u2010like open habitats that exhibit sheet and gully erosion and are covered by a mix of native and alien invasive vegetation today. Vegetation transects in this moderately sheep\u2010impacted sector show that a significant number of native and endemic herb and shrub species exhibit sympatric distribution patterns with introduced plants. Only one alien plant species has been recorded from any undisturbed and sheep\u2010free island sector so far.", "which Investigated species ?", "Plants", 100.0, 106.0], ["Island ecosystems are notably susceptible to biological invasions (Elton 1958), and the Hawaiian islands in particular have been colonized by many introduced species (Loope and Mueller-Dombois 1989). Introduced plants now dominate extensive areas of the Hawaiian Islands, and 86 species of alien plants are presently considered to pose serious threats to Hawaiian communities and ecosystems (Smith 1985). Among the most important invasive plants are several species of tropical and subtropical grasses that use the C4 photosynthetic pathway. These grasses now dominate extensive areas of dry and seasonally dry habitats in Hawai'i. They may compete with native species, and they have also been shown to alter hydrological properties in the areas they invade (Mueller-Dombois 1973). Most importantly, alien grasses can introduce fire into areas where it was previously rare or absent (Smith 1985), thereby altering the structure and functioning of previously native-dominated ecosystems. Many of these grasses evolved in fire-affected areas and have mechanisms for surviving and recovering rapidly from fire (Vogl 1975, Christensen 1985), while most native species in Hawai'i have little background with fire (Mueller-Dombois 1981) and hence few or no such mechanisms. Consequently, grass invasion could initiate a grass/fire cycle whereby invading grasses promote fire, which in turn favors alien grasses over native species. Such a scenario has been suggested in a number of areas, including Latin America, western North America, Australia, and Hawai'i (Parsons 1972, Smith 1985, Christensen and Burrows 1986, Mack 1986, MacDonald and Frame 1988). In most of these cases, land clearing by humans initiates colonization by alien grasses, and the grass/fire cycle then leads to their persistence. In Hawai'i and perhaps other areas, however, grass invasion occurs without any direct human intervention. Where such invasions initiate a grass/fire cy-", "which Investigated species ?", "Plants", 211.0, 217.0], ["Both human-related and natural factors can affect the establishment and distribution of exotic species. Understanding the relative role of the different factors has important scientific and applied implications. Here, we examined the relative effect of human-related and natural factors in determining the richness of exotic bird species established across Europe. Using hierarchical partitioning, which controls for covariation among factors, we show that the most important factor is the human-related community-level propagule pressure (the number of exotic species introduced), which is often not included in invasion studies due to the lack of information for this early stage in the invasion process. Another, though less important, factor was the human footprint (an index that includes human population size, land use and infrastructure). 
Biotic and abiotic factors of the environment were of minor importance in shaping the number of established birds when tested at a European extent using 50\u00d750 km2 grid squares. We provide, to our knowledge, the first map of the distribution of exotic bird richness in Europe. The richest hotspot of established exotic birds is located in southeastern England, followed by areas in Belgium and The Netherlands. Community-level propagule pressure remains the major factor shaping the distribution of exotic birds also when tested for the UK separately. Thus, studies examining the patterns of establishment should aim at collecting the crucial and hard-to-find information on community-level propagule pressure or develop reliable surrogates for estimating this factor. Allowing future introductions of exotic birds into Europe should be reconsidered carefully, as the number of introduced species is basically the main factor that determines the number established.", "which Investigated species ?", "Birds", 955.0, 960.0], ["Understanding the factors that drive commonness and rarity of plant species and whether these factors differ for alien and native species are key questions in ecology. If a species is to become common in a community, incoming propagules must first be able to establish. The latter could be determined by competition with resident plants, the impacts of herbivores and soil biota, or a combination of these factors. We aimed to tease apart the roles that these factors play in determining establishment success in grassland communities of 10 alien and 10 native plant species that are either common or rare in Germany, and from four families. In a two\u2010year multisite field experiment, we assessed the establishment success of seeds and seedlings separately, under all factorial combinations of low vs. high disturbance (mowing vs mowing and tilling of the upper soil layer), suppression or not of pathogens (biocide application) and, for seedlings only, reduction or not of herbivores (net\u2010cages). Native species showed greater establishment success than alien species across all treatments, regardless of their commonness. Moreover, establishment success of all species was positively affected by disturbance. Aliens showed lower establishment success in undisturbed sites with biocide application. Release of the undisturbed resident community from pathogens by biocide application might explain this lower establishment success of aliens. These findings were consistent for establishment from either seeds or seedlings, although less significantly so for seedlings, suggesting a more important role of pathogens in very early stages of establishment after germination. Herbivore exclusion did play a limited role in seedling establishment success. Synthesis: In conclusion, we found that less disturbed grassland communities exhibited strong biotic resistance to establishment success of species, whether alien or native. However, we also found evidence that alien species may benefit weakly from soilborne enemy release, but that this advantage over native species is lost when the latter are also released by biocide application. Thus, disturbance was the major driver for plant species establishment success and effects of pathogens on alien plant establishment may only play a minor role.", "which Investigated species ?", "Plants", 330.0, 336.0], ["Abstract Enemy release is a commonly accepted mechanism to explain plant invasions. 
Both the diploid Leucanthemum vulgare and the morphologically very similar tetraploid Leucanthemum ircutianum have been introduced into North America. To verify which species is more prevalent in North America we sampled 98 Leucanthemum populations and determined their ploidy level. Although polyploidy has repeatedly been proposed to be associated with increased invasiveness in plants, only two of the populations surveyed in North America were the tetraploid L. ircutianum . We tested the enemy release hypothesis by first comparing 20 populations of L. vulgare and 27 populations of L. ircutianum in their native range in Europe, and then comparing the European L. vulgare populations with 31 L. vulgare populations sampled in North America. Characteristics of the site and associated vegetation, plant performance and invertebrate herbivory were recorded. In Europe, plant height and density of the two species were similar but L. vulgare produced more flower heads than L. ircutianum . Leucanthemum vulgare in North America was 17 % taller, produced twice as many flower heads and grew much denser compared to L. vulgare in Europe. Attack rates by root- and leaf-feeding herbivores on L. vulgare in Europe (34 and 75 %) was comparable to that on L. ircutianum (26 and 71 %) but higher than that on L. vulgare in North America (10 and 3 %). However, herbivore load and leaf damage were low in Europe. Cover and height of the co-occurring vegetation was higher in L. vulgare populations in the native than in the introduced range, suggesting that a shift in plant competition may more easily explain the invasion success of L. vulgare than escape from herbivory.", "which Investigated species ?", "Plants", 465.0, 471.0], ["At large spatial scales, exotic and native plant diversity exhibit a strong positive relationship. This may occur because exotic and native species respond similarly to processes that influence diversity over large geographical areas. To test this hypothesis, we compared exotic and native species-area relationships within six North American ecoregions. We predicted and found that within ecoregions the ratio of exotic to native species richness remains constant with increasing area. Furthermore, we predicted that areas with more native species than predicted by the species-area relationship would have proportionally more exotics as well. We did find that these exotic and native deviations were highly correlated, but areas that were good (or bad) for native plants were even better (or worse) for exotics. Similar processes appear to influence exotic and native plant diversity but the degree of this influence may differ with site quality.", "which Investigated species ?", "Plants", 766.0, 772.0], ["Background: Phenotypic plasticity and ecotypic differentiation have been suggested as the main mechanisms by which widely distributed species can colonise broad geographic areas with variable and stressful conditions. Some invasive plant species are among the most widely distributed plants worldwide. Plasticity and local adaptation could be the mechanisms for colonising new areas. Aims: We addressed if Taraxacum officinale from native (Alps) and introduced (Andes) stock responded similarly to drought treatment, in terms of photosynthesis, foliar angle, and flowering time. We also evaluated if ontogeny affected fitness and physiological responses to drought. Methods: We carried out two common garden experiments with both seedlings and adults (F2) of T. 
officinale from its native and introduced ranges in order to evaluate their plasticity and ecotypic differentiation under a drought treatment. Results: Our data suggest that the functional response of T. officinale individuals from the introduced range to drought is the result of local adaptation rather than plasticity. In addition, the individuals from the native distribution range were more sensitive to drought than those from the introduced distribution ranges at both seedling and adult stages. Conclusions: These results suggest that local adaptation may be a possible mechanism underlying the successful invasion of T. officinale in high mountain environments of the Andes.", "which Investigated species ?", "Plants", 284.0, 290.0], ["Predicting invasion potential has global significance for managing ecosystems as well as important theoretical implications for understanding community assembly. Phylogenetic relationships of introduced species to the extant community may be predictive of establishment success because of the opposing forces of competition/shared enemies (which should limit invasions by close relatives) versus environmental filtering (which should allow invasions by close relatives). We examine here the association between establishment success of introduced birds and their phylogenetic relatedness to the extant avifauna within three highly invaded regions (Florida, New Zealand, and Hawaii). Published information on both successful and failed introductions, as well as native species, was compiled for all three regions. We created a phylogeny for each avifauna including all native and introduced bird species. From the estimated branch lengths on these phylogenies, we calculated multiple measurements of relatedness between each introduced species and the extant avifauna. We used generalized linear models to test for an association between relatedness and establishment success. We found that close relatedness to the extant avifauna was significantly associated with increased establishment success for exotic birds both at the regional (Florida, Hawaii, New Zealand) and sub-regional (islands within Hawaii) levels. Our results suggest that habitat filtering may be more important than interspecific competition in avian communities assembled under high rates of anthropogenic species introductions. This work also supports the utility of community phylogenetic methods in the study of vertebrate invasions.", "which Investigated species ?", "Birds", 547.0, 552.0], ["Grasslands have been lost and degraded in the United States since Euro-American settlement due to agriculture, development, introduced invasive species, and changes in fire regimes. Fire is frequently used in prairie restoration to control invasion by trees and shrubs, but may have additional consequences. For example, fire might reduce damage by herbivore and pathogen enemies by eliminating litter, which harbors eggs and spores. Less obviously, fire might influence enemy loads differently for native and introduced plant hosts. We used a controlled burn in a Willamette Valley (Oregon) prairie to examine these questions. We expected that, without fire, introduced host plants should have less damage than native host plants because the introduced species are likely to have left many of their enemies behind when they were transported to their new range (the enemy release hypothesis, or ERH). 
If the ERH holds, then fire, which should temporarily reduce enemies on all species, should give an advantage to the natives because they should see greater total reduction in damage by enemies. Prior to the burn, we censused herbivore and pathogen attack on eight plant species (five of nonnative origin: Bromus hordaceous, Cynosuros echinatus, Galium divaricatum, Schedonorus arundinaceus (= Festuca arundinacea), and Sherardia arvensis; and three natives: Danthonia californica, Epilobium minutum, and Lomatium nudicale). The same plots were monitored for two years post-fire. Prior to the burn, native plants had more kinds of damage and more pathogen damage than introduced plants, consistent with the ERH. Fire reduced pathogen damage relative to the controls more for the native than the introduced species, but the effects on herbivory were negligible. Pathogen attack was correlated with plant reproductive fitness, whereas herbivory was not. These results suggest that fire may be useful for promoting some native plants in prairies due to its negative effects on their pathogens.", "which Investigated species ?", "Plants", 676.0, 682.0], ["Recent comprehensive data provided through the DAISIE project (www.europe-aliens.org) have facilitated the development of the first pan-European assessment of the impacts of alien plants, vertebrates, and invertebrates \u2013 in terrestrial, freshwater, and marine environments \u2013 on ecosystem services. There are 1094 species with documented ecological impacts and 1347 with economic impacts. The two taxonomic groups with the most species causing impacts are terrestrial invertebrates and terrestrial plants. The North Sea is the maritime region that suffers the most impacts. Across taxa and regions, ecological and economic impacts are highly correlated. Terrestrial invertebrates create greater economic impacts than ecological impacts, while the reverse is true for terrestrial plants. Alien species from all taxonomic groups affect \u201csupporting\u201d, \u201cprovisioning\u201d, \u201cregulating\u201d, and \u201ccultural\u201d services and interfere with human well-being. Terrestrial vertebrates are responsible for the greatest range of impacts, and these are widely distributed across Europe. Here, we present a review of the financial costs, as the first step toward calculating an estimate of the economic consequences of alien species in Europe.", "which Investigated species ?", "Plants", 180.0, 186.0], ["Hanley ME (2012). Seedling defoliation, plant growth and flowering potential in native- and invasive-range Plantago lanceolata populations. Weed Research52, 252\u2013259. Summary The plastic response of weeds to new environmental conditions, in particular the likely relaxation of herbivore pressure, is considered vital for successful colonisation and spread. However, while variation in plant anti-herbivore resistance between native- and introduced-range populations is well studied, few authors have considered herbivore tolerance, especially at the seedling stage. This study examines variation in seedling tolerance in native (European) and introduced (North American) Plantago lanceolata populations following cotyledon removal at 14 days old. Subsequent effects on plant growth were quantified at 35 days, along with effects on flowering potential at maturity. Cotyledon removal reduced early growth for all populations, with no variation between introduced- or native-range plants. 
Although more variable, the effects of cotyledon loss on flowering potential were also unrelated to range. The likelihood that generalist seedling herbivores are common throughout North America may explain why no difference in seedling tolerance was apparent. However, increased flowering potential in plants from North American P. lanceolata populations was observed. As increased flowering potential was not lost, even after severe cotyledon damage, the manifestation of phenotypic plasticity in weeds at maturity may nonetheless still be shaped by plasticity in the ability to tolerate herbivory during seedling establishment.", "which Investigated species ?", "Plants", 978.0, 984.0], ["We compared community composition, density, and species richness of herbivorous insects on the introduced plant Solidago altissima L. (Asteraceae) and the related native species Solidago virgaurea L. in Japan. We found large differences in community composition on the two Solidago species. Five hemipteran sap feeders were found only on S. altissima. Two of them, the aphid Uroleucon nigrotuberculatum Olive (Hemiptera: Aphididae) and the scale insect Parasaissetia nigra Nietner (Hemiptera: Coccidae), were exotic species, accounting for 62% of the total individuals on S. altissima. These exotic sap feeders mostly determined the difference of community composition on the two plant species. In contrast, the herbivore community on S. virgaurea consisted predominately of five native insects: two lepidopteran leaf chewers and three dipteran leaf miners. Overall species richness did not differ between the plants because the increased species richness of sap feeders was offset by the decreased richness of leaf chewers and leaf miners on S. altissima. The overall density of herbivorous insects was higher on S. altissima than on S. virgaurea, because of the high density of the two exotic sap feeding species on S. altissima. We discuss the importance of analyzing community composition in terms of feeding guilds of insect herbivores for understanding how communities of insect herbivores are organized on introduced plants in novel habitats.", "which Investigated species ?", "Plants", 910.0, 916.0], ["*Globally, exotic invaders threaten biodiversity and ecosystem function. Studies often report that invading plants are less affected by enemies in their invaded vs home ranges, but few studies have investigated the underlying mechanisms. *Here, we investigated the variation in prevalence, species composition and virulence of soil-borne Pythium pathogens associated with the tree Prunus serotina in its native US and non-native European ranges by culturing, DNA sequencing and controlled pathogenicity trials. *Two controlled pathogenicity experiments showed that Pythium pathogens from the native range caused 38-462% more root rot and 80-583% more seedling mortality, and 19-45% less biomass production than Pythium from the non-native range. DNA sequencing indicated that the most virulent Pythium taxa were sampled only from the native range. The greater virulence of Pythium sampled from the native range therefore corresponded to shifts in species composition across ranges rather than variation within a common Pythium species. *Prunus serotina still encounters Pythium in its non-native range but encounters less virulent taxa. Elucidating patterns of enemy virulence in native and nonnative ranges adds to our understanding of how invasive plants escape disease. 
Moreover, this strategy may identify resident enemies in the non-native range that could be used to manage invasive plants.", "which Investigated species ?", "Plants", 108.0, 114.0], ["Summary 1 Tests of the relationship between resident plant species richness and habitat invasibility have yielded variable results. I investigated the roles of experimental manipulation of understorey species richness and overstorey characteristics in resistance to invader establishment in a floodplain forest in south-western Virginia, USA. 2 I manipulated resident species richness in experimental plots along a flooding gradient, keeping plot densities at their original levels, and quantified the overstorey characteristics of each plot. 3 After manipulating the communities, I transplanted 10 randomly chosen invaders from widespread native and non-native forest species into the experimental plots. Success of an invasion was measured by survival and growth of the invader. 4 Native and non-native invader establishment trends were influenced by different aspects of the biotic community and these relationships depended on the site of invasion. The most significant influence on non-native invader survival in this system of streamside and upper terrace plots was the overstorey composition. Non-native species survival in the flooded plots after 2 years was significantly positively related to proximity to larger trees. However, light levels did not fully explain the overstorey effect and were unrelated to native survivorship. The effects of understorey richness on survivorship depended on the origin of the invaders and the sites they were transplanted into. Additionally, native species growth was significantly affected by understorey plot richness. 5 The direction and strength of interactions with both the overstorey (for non-native invaders) and understorey richness (for natives and non-natives) changed with the site of invasion and associated environmental conditions. Rather than supporting the hypothesis of biotic resistance to non-native invasion, my results suggest that native invaders experienced increased competition with the native understorey plants in the more benign upland habitat and facilitation in the stressful riparian zone.", "which Investigated species ?", "Plants", 1977.0, 1983.0], ["Introduced plant species can become locally dominant and threaten native flora and fauna. This dominance is often thought to be a result of release from specialist enemies in the invaded range, or the evolution of increased competitive ability. Soil borne microorganisms have often been overlooked as enemies in this context, but a less deleterious plant soil interaction in the invaded range could explain local dominance. Two plant species, Carpobrotus edulis and the hybrid Carpobrotus X cf. acinaciformis, are considered major pests in the Mediterranean basin. We tested if release from soil-borne enemies and/or evolution of increased competitive ability could explain this dominance. Comparing biomass production in non-sterile soil with that in sterilized soil, we found that inoculation with rhizosphere soil from the native range reduced biomass production by 32% while inoculation with rhizosphere soil from the invaded range did not have a significant effect on plant biomass. Genotypes from the invaded range, including a hybrid, did not perform better than plants from the native range in sterile soil. Hence evolution of increased competitive ability and hybridization do not seem to play a major role. 
We conclude that the reduced negative net impact of the soil community in the invaded range may contribute to the success of Carpobrotus species in the Mediterranean basin. \u00a9 2008 SAAB. Published by Elsevier B.V. All rights reserved.", "which Investigated species ?", "Plants", 1070.0, 1076.0], ["1 The cultivation and dissemination of alien ornamental plants increases their potential to invade. More specifically, species with bird\u2010dispersed seeds can potentially infiltrate natural nucleation processes in savannas. 2 To test (i) whether invasion depends on facilitation by host trees, (ii) whether propagule pressure determines invasion probability, and (iii) whether alien host plants are better facilitators of alien fleshy\u2010fruited species than indigenous species, we mapped the distribution of alien fleshy\u2010fruited species planted inside a military base, and compared this with the distribution of alien and native fleshy\u2010fruited species established in the surrounding natural vegetation. 3 Abundance and diversity of fleshy\u2010fruited plant species was much greater beneath tree canopies than in open grassland and, although some native fleshy\u2010fruited plants were found both beneath host trees and in the open, alien fleshy\u2010fruited plants were found only beneath trees. 4 Abundance of fleshy\u2010fruited alien species in the natural savanna was positively correlated with the number of individuals of those species planted in the grounds of the military base, while the species richness of alien fleshy\u2010fruited taxa decreased with distance from the military base, supporting the notion that propagule pressure is a fundamental driver of invasions. 5 There were more fleshy\u2010fruited species beneath native Acacia tortilis than beneath alien Prosopis sp. trees of the equivalent size. Although there were significant differences in native plant assemblages beneath these hosts, the proportion of alien to native fleshy\u2010fruited species did not differ with host. 6 Synthesis. Birds facilitate invasion of a semi\u2010arid African savanna by alien fleshy\u2010fruited plants, and this process does not require disturbance. Instead, propagule pressure and a few simple biological observations define the probability that a plant will invade, with alien species planted in gardens being a major source of propagules. Some invading species have the potential to transform this savanna by overtopping native trees, leading to ecosystem\u2010level impacts. Likewise, the invasion of the open savanna by alien host trees (such as Prosopis sp.) may change the diversity, abundance and species composition of the fleshy\u2010fruited understorey. These results illustrate the complex interplay between propagule pressure, facilitation, and a range of other factors in biological invasions.", "which Investigated species ?", "Plants", 56.0, 62.0], ["In a field experiment with 30 locally occurring old-field plant species grown in a common garden, we found that non-native plants suffer levels of attack (leaf herbivory) equal to or greater than levels suffered by congeneric native plants. 
This phylogenetically controlled analysis is in striking contrast to the recent findings from surveys of exotic organisms, and suggests that even if enemy release does accompany the invasion process, this may not be an important mechanism of invasion, particularly for plants with close relatives in the recipient flora.", "which Investigated species ?", "Plants", 123.0, 129.0], ["We sampled the understory community in an old-growth, temperate forest to test alternative hypotheses explaining the establishment of exotic plants. We quantified the individual and net importance of distance from areas of human disturbance, native plant diversity, and environmental gradients in determining exotic plant establishment. Distance from disturbed areas, both within and around the reserve, was not correlated to exotic species richness. Numbers of native and exotic species were positively correlated at large (50 m2) and small (10 m2) plot sizes, a trend that persisted when relationships to environmental gradients were controlled statistically. Both native and exotic species richness increased with soil pH and decreased along a gradient of increasing nitrate availability. Exotic species were restricted to the upper portion of the pH gradient and had individualistic responses to the availability of soil resources. These results are inconsistent with both the diversity-resistance and resource-enrichment hypotheses for invasibility. Environmental conditions favoring native species richness also favor exotic species richness, and competitive interactions with the native flora do not appear to limit the entry of additional species into the understory community at this site. It appears that exotic species with niche requirements poorly represented in the regional flora of native species may establish with relatively little resistance or consequence for native species richness.", "which Investigated species ?", "Plants", 141.0, 147.0], ["Nitrogen availability affects both plant growth and the preferences of herbivores. We hypothesized that an interaction between these two factors could affect the early establishment of native and exotic species differently, promoting invasion in natural systems. 
Taxonomically paired native and invasive species (Acer platanoides, Acer rubrum, Lonicera maackii, Diervilla lonicera, Celastrus orbiculata, Celastrus scandens, Elaeagnus umbellata, Ceanothus americanus, Ampelopsis brevipedunculata, and Vitis riparia) were grown in relatively high-resource (hardwood forests) and low-resource (pine barrens) communities on Long Island, New York, for a period of 3 months. Plants were grown in ambient and nitrogen-enhanced conditions in both communities. Nitrogen additions produced an average 12% initial increase in leaf number of all plants. By the end of the experiment, invasive species outperformed native species in nitrogen-enhanced plots in hardwood forests, where all plants experienced increased damage relative to control plots. Native species experienced higher overall amounts of damage in hardwood forests, losing, on average, 45% more leaves than exotic species, and only native species experienced a decline in growth rates (32% compared with controls). In contrast, in pine barrens, there were no differences in damage and no differences in performance between native and invasive plants. Our results suggest that unequal damage by natural enemies may play a role in determining community composition by shifting the competitive advantage to exotic species in nitrogen-enhanced environments. FOR. SCI. 53(6):701-709.", "which Investigated species ?", "Plants", 669.0, 675.0], ["Enemy release is frequently posed as a main driver of invasiveness of alien species. However, an experimental multi-species test examining performance and herbivory of invasive alien, non-invasive alien and native plant species in the presence and absence of natural enemies is lacking. In a common garden experiment in Switzerland, we manipulated exposure of seven alien invasive, eight alien non-invasive and fourteen native species from six taxonomic groups to natural enemies (invertebrate herbivores), by applying a pesticide treatment under two different nutrient levels. We assessed biomass production, herbivore damage and the major herbivore taxa on plants. Across all species, plants gained significantly greater biomass under pesticide treatment. However, invasive, non-invasive and native species did not differ in their biomass response to pesticide treatment at either nutrient level. The proportion of leaves damaged on invasive species was significantly lower compared to native species, but not when compared to non-invasive species. However, the difference was lost when plant size was accounted for. There were no differences between invasive, non-invasive and native species in herbivore abundance. Our study offers little support for invertebrate herbivore release as a driver of plant invasiveness, but suggests that future enemy release studies should account for differences in plant size among species.", "which Investigated species ?", "Plants", 659.0, 665.0], ["In The Origin of Species, Darwin (1859) drew attention to observations by Alphonse de Candolle (1855) that floras gain by naturalization far more species belonging to new genera than species belonging to native genera. Darwin (1859, p. 86) goes on to give a specific example: \u201cIn the last edition of Dr. Asa Gray\u2019s \u2018Manual of the Flora of the United States\u2019 ... 
out of the 162 naturalised genera, no less than 100 genera are not there indigenous.\u201d Darwin used these data to support his theory of intense competition between congeners, described only a few pages earlier: \u201cAs the species of the same genus usually have, though by no means invariably, much similarity in habits and constitution, and always in structure, the struggle will generally be more severe between them\u201d (1859, p. 60). Darwin\u2019s intriguing observations have recently attracted renewed interest, as comprehensive lists of naturalized plants have become available for various regions of the world. Two studies (Mack 1996; Rejmanek 1996, 1998) have concluded that naturalized floras provide some support for Darwin\u2019s hypothesis, but only one of these studies used statistical tests. Analyses of additional floras are needed to test the generality of Darwin\u2019s naturalization hypothesis. Mack (1996) tabulated data from six regional floras within the United States and noted that naturalized species more often belong to alien genera than native genera, with the curious exception of one region (New York). In addition to the possibility of strong competition between native and introduced congeners, Mack (1996) proposed that specialist native herbivores, or pathogens, may be", "which Investigated species ?", "Plants", 904.0, 910.0], ["We conducted comparisons for exotic mammal species introduced to New Zealand (28 successful, 4 failed), Australia (24, 17) and Britain (15, 16). Modelling of variables associated with establishment success was constrained by small sample sizes and phylogenetic dependence, so our results should be interpreted with caution. Successful species were subject to more release events, had higher climate matches between their overseas geographic range and their country of introduction, had larger overseas geographic range sizes and were more likely to have established an exotic population elsewhere than was the case for failed species. Of the mammals introduced to New Zealand, successful species also had larger areas of suitable habitat than did failed species. Our findings may guide risk assessments for the import of live mammals to reduce the rate new species establish in the wild.", "which Investigated species ?", "Mammals", 642.0, 649.0], ["While two cyprinid fishes introduced from nearby drainages have become widespread and abundant in the Eel River of northwestern California, a third nonindigenous cyprinid has remained largely confined to \u226425 km of one major tributary (the Van Duzen River) for at least 15 years. The downstream limit of this species, speckled dace, does not appear to correspond with any thresholds or steep gradients in abiotic conditions, but it lies near the upstream limits of three other fishes: coastrange sculpin, prickly sculpin, and nonindigenous Sacramento pikeminnow. We conducted a laboratory stream experiment to explore the potential for emergent multiple predator effects to influence biotic resistance in this situation. Sculpins in combination with Sacramento pikeminnow caused greater mortality of speckled dace than predicted based on their separate effects. In contrast to speckled dace, 99% of sculpin survived trials with Sacramento pikeminnow, in part because sculpin usually occupied benthic cover units while Sacramento pikeminnow occupied the water column. A 10-fold difference in benthic cover availability did not detectably influence biotic interactions in the experiment. 
The distribution of speckled dace in the Eel River drainage may be limited by two predator taxa with very different patterns of habitat use and a shortage of alternative habitats.", "which Investigated species ?", "Fishes", 19.0, 25.0], ["Biotic resistance, the process by which new colonists are excluded from a community by predation from and/or competition with resident species, can prevent or limit species invasions. We examined whether biotic resistance by native predators on Caribbean coral reefs has influenced the invasion success of red lionfishes (Pterois volitans and Pterois miles), piscivores from the Indo-Pacific. Specifically, we surveyed the abundance (density and biomass) of lionfish and native predatory fishes that could interact with lionfish (either through predation or competition) on 71 reefs in three biogeographic regions of the Caribbean. We recorded protection status of the reefs, and abiotic variables including depth, habitat type, and wind/wave exposure at each site. We found no relationship between the density or biomass of lionfish and that of native predators. However, lionfish densities were significantly lower on windward sites, potentially because of habitat preferences, and in marine protected areas, most likely because of ongoing removal efforts by reserve managers. Our results suggest that interactions with native predators do not influence the colonization or post-establishment population density of invasive lionfish on Caribbean reefs.", "which Investigated species ?", "Fishes", 488.0, 494.0], ["Although much of the theory on the success of invasive species has been geared at escape from specialist enemies, the impact of introduced generalist invertebrate herbivores on both native and introduced plant species has been underappreciated. The role of nocturnal invertebrate herbivores in structuring plant communities has been examined extensively in Europe, but less so in North America. Many nocturnal generalists (slugs, snails, and earwigs) have been introduced to North America, and 96% of herbivores found during a night census at our California Central Valley site were introduced generalists. We explored the role of these herbivores in the distribution, survivorship, and growth of 12 native and introduced plant species from six families. We predicted that introduced species sharing an evolutionary history with these generalists might be less vulnerable than native plant species. We quantified plant and herbivore abundances within our heterogeneous site and also established herbivore removal experiments in 160 plots spanning the gamut of microhabitats. As 18 collaborators, we checked 2000 seedling sites every day for three weeks to assess nocturnal seedling predation. Laboratory feeding trials allowed us to quantify the palatability of plant species to the two dominant nocturnal herbivores at the site (slugs and earwigs) and allowed us to account for herbivore microhabitat preferences when analyzing attack rates on seedlings. The relationship between local slug abundance and percent cover of five common plant taxa at the field site was significantly negatively associated with the mean palatability of these taxa to slugs in laboratory trials. Moreover, seedling mortality of 12 species in open-field plots was positively correlated with mean palatability of these taxa to both slugs and earwigs in laboratory trials. 
Counter to expectations, seedlings of native species were neither more vulnerable nor more palatable to nocturnal generalists than those of introduced species. Growth comparison of plants within and outside herbivore exclosures also revealed no differences between native and introduced plant species, despite large impacts of herbivores on growth. Cryptic nocturnal predation on seedlings was common and had large effects on plant establishment at our site. Without intensive monitoring, such predation could easily be misconstrued as poor seedling emergence.", "which Investigated species ?", "Plants", 2031.0, 2037.0], ["Invasive species have been hypothesized to out-compete natives through either a Jack-of-all-trades strategy, where they are able to utilize resources effectively in unfavourable environments, a master-of-some, where resource utilization is greater than its competitors in favourable environments, or a combination of the two (Jack-and-master). We examined the invasive strategy of Berberis darwinii in New Zealand compared with four co-occurring native species by examining germination, seedling survival, photosynthetic characteristics and water-use efficiency of adult plants, in sun and shade environments. Berberis darwinii seeds germinated more in shady sites than the other natives, but survival was low. In contrast, while germination of B. darwinii was the same as the native species in sunny sites, seedling survival after 18 months was nearly twice that of all the native species. The maximum photosynthetic rate of B. darwinii was nearly double that of all native species in the sun, but was similar among all species in the shade. Other photosynthetic traits (quantum yield and stomatal conductance) did not generally differ between B. darwinii and the native species, regardless of light environment. Berberis darwinii had more positive values of \u03b413C than the four native species, suggesting that it gains more carbon per unit water transpired than the competing native species. These results suggest that the invasion success of B. darwinii may be partially explained by a combination of a Jack-of-all-trades scenario of widespread germination with a master-of-some scenario through its ability to photosynthesize at higher rates in the sun and, hence, gain a rapid height and biomass advantage over native species in favourable environments.", "which Investigated species ?", "Plants", 570.0, 576.0], ["Galliformes, after Passeriformes, is the group of birds that has been most introduced to oceanic islands. Among Passeriformes, whether the species\u2019 plumage is sexually monochromatic or dichromatic, along with other factors such as introduction effort and interspecific competition, has been identified as a factor that limits introduction success. In this study, we tested the hypothesis that sexually dichromatic plumage reduces the probability of success for 51 species from 26 genera of game birds that were introduced onto 12 oceanic islands. Analyses revealed no significant differences in probability of introduction success between monochromatic and dichromatic species at either the generic or specific levels. We also found no significant difference between these two groups in size of native geographic range, wing length or human introduction effort. Our results do not support the hypothesis that sexually dichromatic plumage (probably a response to sexual selection) predicts introduction outcomes of game birds as has been reported for passerine birds.
These findings suggest that passerine and non-passerine birds differ fundamentally in terms of factors that could influence introduction outcome, and should therefore be evaluated separately as opposed to lumping these two groups as \u2018land birds\u2019.", "which Investigated species ?", "Birds", 50.0, 55.0], ["Numerous studies have shown how interactions between nonindigenous species (NIS) can accelerate the rate at which they establish and spread in invaded habitats, leading to an \u201cinvasional meltdown.\u201d We investigated facilitation at an earlier stage in the invasion process: during entrainment of propagules in a transport pathway. The introduced bryozoan Watersipora subtorquata is tolerant of several antifouling biocides and a common component of hull-fouling assemblages, a major transport pathway for aquatic NIS. We predicted that colonies of W. subtorquata act as nontoxic refugia for other, less tolerant species to settle on. We compared rates of recruitment of W. subtorquata and other fouling organisms to surfaces coated with three antifouling paints and a nontoxic primer in coastal marinas in Queensland, Australia. Diversity and abundance of fouling taxa were compared between bryozoan colonies and adjacent toxic or nontoxic paint surfaces. After 16 weeks immersion, W. subtorquata covered up to 64% of the tile surfaces coated in antifouling paint. Twenty-two taxa occurred exclusively on W. subtorquata and were not found on toxic surfaces. Other fouling taxa present on toxic surfaces were up to 248 times more abundant on W. subtorquata. Because biocides leach from the paint surface, we expected a positive relationship between the size of W. subtorquata colonies and the abundance and diversity of epibionts. To test this, we compared recruitment of fouling organisms to mimic W. subtorquata colonies of three different sizes that had the same total surface area. Secondary recruitment to mimic colonies was greater when the surrounding paint surface contained biocides. Contrary to our predictions, epibionts were most abundant on small mimic colonies with a large total perimeter. This pattern was observed in encrusting and erect bryozoans, tubiculous amphipods, and serpulid and sabellid polychaetes, but only in the presence of toxic paint. Our results show that W. subtorquata acts as a foundation species for fouling assemblages on ship hulls and facilitates the transport of other species at greater abundance and frequency than would otherwise be possible. Invasion success may be increased by positive interactions between NIS that enhance the delivery of propagules by human transport vectors.", "which Investigated species ?", "Bryozoan", 346.0, 354.0], ["Life-history traits of invasive exotic plants are typically considered to be exceptional vis-a-vis native species. In particular, hyper-fecundity and long range dispersal are regarded as invasive traits, but direct comparisons with native species are needed to identify the life-history stages behind invasiveness. Until recently, this task was particularly problematic in forests as tree fecundity and dispersal were difficult to characterize in closed stands. We used inverse modelling to parameterize fecundity, seed dispersal and seedling dispersion functions for two exotic and eight native tree species in closed-canopy forests in Connecticut, USA. Interannual variation in seed production was dramatic for all species, with complete seed crop failures in at least one year for six native species.
However, the average per capita seed production of the exotic Ailanthus altissima was extraordinary: > 40 times higher than the next highest species. Seed production of the shade tolerant exotic Acer platanoides was average, but much higher than the native shade tolerant species, and the density of its established seedlings (\u2265 3 years) was higher than any other species. Overall, the data supported a model in which adults of native and exotic species must reach a minimum size before seed production occurred. Once reached, the relationship between tree diameter and seed production was fairly flat for seven species, including both exotics. Seed dispersal was highly localized and usually showed a steep decline with increasing distance from parent trees: only Ailanthus altissima and Fraxinus americana had mean dispersal distances > 10 m. Janzen-Connell patterns were clearly evident for both native and exotic species, as the mode and mean dispersion distance of seedlings were further from potential parent trees than seeds. The comparable intensity of Janzen-Connell effects between native and exotic species suggests that the enemy escape hypothesis alone cannot explain the invasiveness of these exotics. Our study confirms the general importance of colonization processes in invasions, yet demonstrates how invasiveness can occur via divergent colonization strategies. Dispersal limitation of Acer platanoides and recruitment limitation of Ailanthus altissima will likely constitute some limit on their invasiveness in closed-canopy forests.", "which Investigated species ?", "Plants", 39.0, 45.0], ["In a California riparian system, the most diverse natural assemblages are the most invaded by exotic plants. A direct in situ manipulation of local diversity and a seed addition experiment showed that these patterns emerge despite the intrinsic negative effects of diversity on invasions. The results suggest that species loss at small scales may reduce invasion resistance. At community-wide scales, the overwhelming effects of ecological factors spatially covarying with diversity, such as propagule supply, make the most diverse communities most likely to be invaded.", "which Investigated species ?", "Plants", 101.0, 107.0], ["Phalaris arundinacea L. (reed canary grass) is a major invader of wetlands in temperate North America; it creates monotypic stands and displaces native vegetation. In this study, the effect of plant canopies on the establishment of P. arundinacea from seed in a fen, fen-like mesocosms, and a fen restoration site was assessed. In Wingra Fen, canopies that were more resistant to P. arundinacea establishment had more species (eight or nine versus four to six species) and higher cover of Aster firmus. In mesocosms planted with Glyceria striata plus 1, 6, or 15 native species, all canopies closed rapidly and prevented P. arundinacea establishment from seed, regardless of the density of the matrix species or the number of added species. Only after gaps were created in the canopy was P. arundinacea able to establish seedlings; then, the 15-species treatment reduced establishment to 48% of that for single-species canopies. A similar experiment in the restoration site produced less cover of native plants, and P.
a...", "which Investigated species ?", "Plants", 1004.0, 1010.0], ["Darwin proposed two contradictory hypotheses to explain the influence of congeners on the outcomes of invasion: the naturalization hypothesis, which predicts a negative relationship between the presence of congeners and invasion success, and the pre-adaptation hypothesis, which predicts a positive relationship between the presence of congeners and invasion success. Studies testing these hypotheses have shown mixed support. We tested these hypotheses using the establishment success of non-native reptiles and congener presence/absence and richness across the globe. Our results demonstrated support for the pre-adaptation hypothesis. We found that globally, both on islands and continents, establishment success was higher in the presence than in the absence of congeners and that establishment success increased with increasing congener richness. At the life form level, establishment success was higher for lizards, marginally higher for snakes, and not different for turtles in the presence of congeners; data were insufficient to test the hypotheses for crocodiles. There was no relationship between establishment success and congener richness for any life form. We suggest that we found support for the pre-adaptation hypothesis because, at the scale of our analysis, native congeners represent environmental conditions appropriate for the species rather than competition for niche space. Our results imply that areas to target for early detection of non-native reptiles are those that host closely related species.", "which Investigated species ?", "Reptiles", 508.0, 516.0], ["Experiments that have manipulated species richness with random draws of species from a larger species pool have usually found that invasibility declines as richness increases. These results have usually been attributed to niche complementarity, and interpreted to mean that communities will become less resistant to invaders as species go locally extinct. However, it is not clear how relevant these studies are to real-world situations where species extinctions are non-random, and where species diversity declines due to increased rarity (i.e. reduced evenness) without having local extinctions. We experimentally varied species richness from 1 to 4, and evenness from 0.44 to 0.97 with two different extinction scenarios in two-year old plantings using seedling transplants in western Iowa. In both scenarios, evenness was varied by changing the level of dominance of the tall grass Andropogon gerardii. In one scenario, which simulated a loss of short species from Andropogon communities, we directly tested for complementarity in light capture due to having species in mixtures with dissimilar heights. We contrasted this scenario with a second set of mixtures that contained all tall species. In both cases, we controlled for factors such as rooting depth and planting density. Mean invader biomass was higher in monocultures (5.4 g m 2 week 1 ) than in 4-species mixtures (3.2 g m 2 week 1 ). Reduced evenness did not affect invader biomass in mixtures with dissimilar heights. However, the amount of invader biomass decreased by 60% as evenness increased across mixtures with all tall species. This difference was most pronounced early in the growing season when high evenness plots had greater light capture than low evenness plots. 
These results suggest that the effects of reduced species diversity on invasibility are 1) not related to complementarity through height dissimilarity, and 2) variable depending on the phenological traits of the species that are becoming rare or going locally extinct.", "which Investigated species ?", "Plants", null, null], ["A method of predicting weed status was developed for southern African plants naturalized in Australia, based upon information on extra-Australian weed status, distribution and taxonomy. Weed status in Australia was associated with being geographically widespread in southern Africa, being found in a wide range of climates in southern Africa, being described as a weed or targeted by herbicides in southern Africa, with early introduction and establishment in Australia, and with weediness in regions other than southern Africa. Multiple logistic regressions were used to identify the variables that best predicted weed status. The best fitting regressions were for weeds present for a long time in Australia (more than 140 years). They utilized three variables, namely weed status, climatic range in southern Africa and the existence of congeneric weeds in southern Africa. The highest level of variation explained (43%) was obtained for agricultural weeds using a single variable, weed status in southern Africa. Being recorded as a weed in Australia was related to climatic range and the existence of congeneric weeds in southern Africa (40% of variation explained). No variables were suitable predictors of non-agricultural (environmental) weeds. The regressions were used to predict future weed status of plants either not introduced or recently arrived in Australia. Recently-arrived species which were predicted to become weeds are Acacia karroo Hayne (Mimosaceae), Arctotis venustra T. Norl. (Asteraceae), Sisymbrium thellungii O.E. Schulz (Brassicaceae) and Solanum retroflexum Dun. (Solanaceae). Twenty species not yet arrived in Australia were predicted to have a high likelihood of becoming weeds. Analysis of the residuals of the regressions indicated two long-established species which might prove to be good targets for biological control: Mesembryanthemum crystallinum L. (Aizoaceae) and Watsonia meriana (L.) Mill. (Iridaceae).", "which Investigated species ?", "Plants", 70.0, 76.0], ["Phenotypic plasticity and rapid evolution are two important strategies by which invasive species adapt to a wide range of environments and consequently are closely associated with plant invasion. To test their importance in invasion success of Crofton weed, we examined the phenotypic response and genetic variation of the weed by conducting a field investigation, common garden experiments, and intersimple sequence repeat (ISSR) marker analysis on 16 populations in China. Molecular markers revealed low genetic variation among and within the sampled populations. There were significant differences in leaf area (LA), specific leaf area (SLA), and seed number (SN) among field populations, and plasticity index (PIv) for LA, SLA, and SN were 0.62, 0.46 and 0.85, respectively. Regression analyses revealed a significant quadratic effect of latitude of population origin on LA, SLA, and SN based on field data but not on traits in the common garden experiments (greenhouse and open air). Plants from different populations showed similar reaction norms across the two common gardens for functional traits.
LA, SLA, aboveground biomass, plant height at harvest, first flowering day, and life span were higher in the greenhouse than in the open-air garden, whereas SN was lower. Growth conditions (greenhouse vs. open air) and the interactions between growth condition and population origin significantly affect plant traits. The combined evidence suggests high phenotypic plasticity but low genetically based variation for functional traits of Crofton weed in the invaded range. Therefore, we suggest that phenotypic plasticity is the primary strategy for Crofton weed as an aggressive invader that can adapt to diverse environments in China.", "which Investigated species ?", "Plants", 1018.0, 1024.0], ["Numerous hypotheses suggest that natural enemies can influence the dynamics of biological invasions. Here, we use a group of 12 related native, invasive, and naturalized vines to test the relative importance of resistance and tolerance to herbivory in promoting biological invasions. In a field experiment in Long Island, New York, we excluded mammal and insect herbivores and examined plant growth and foliar damage over two growing seasons. This novel approach allowed us to compare the relative damage from mammal and insect herbivores and whether damage rates were related to invasion. In a greenhouse experiment, we simulated herbivory through clipping and measured growth response. After two seasons of excluding herbivores, there was no difference in relative growth rates among invasive, naturalized, and native woody vines, and all vines were susceptible to damage from mammal and insect herbivores. Thus, differential attack by herbivores and plant resistance to herbivory did not explain invasion success of these species. In the field, where damage rates were high, none of the vines were able to fully compensate for damage from mammals. However, in the greenhouse, we found that invasive vines were more tolerant of simulated herbivory than native and naturalized relatives. Our results indicate that invasive vines are not escaping herbivory in the novel range, rather they are persisting despite high rates of herbivore damage in the field. While most studies of invasive plants and natural enemies have focused on resistance, this work suggests that tolerance may also play a large role in facilitating invasions.", "which Investigated species ?", "Plants", 1488.0, 1494.0], ["ABSTRACT: Despite mounting evidence of invasive species\u2019 impacts on the environment and society, our ability to predict invasion establishment, spread, and impact is inadequate. Efforts to explain and predict invasion outcomes have been limited primarily to terrestrial and freshwater ecosystems. Invasions are also common in coastal marine ecosystems, yet to date predictive marine invasion models are absent. Here we present a model based on biological attributes associated with invasion success (establishment) of marine molluscs that compares successful and failed invasions from a group of 93 species introduced to San Francisco Bay (SFB) in association with commercial oyster transfers from eastern North America (ca. 1869 to 1940). A multiple logistic regression model correctly classified 83% of successful and 80% of failed invaders according to their source region abundance at the time of oyster transfers, tolerance of low salinity, and developmental mode.
We tested the generality of the SFB invasion model by applying it to 3 coastal locations (2 in North America and 1 in Europe) that received oyster transfers from the same source and during the same time as SFB. The model correctly predicted 100, 75, and 86% of successful invaders in these locations, indicating that abundance, environmental tolerance (ability to withstand low salinity), and developmental mode not only explain patterns of invasion success in SFB, but more importantly, predict invasion success in geographically disparate marine ecosystems. Finally, we demonstrate that the proportion of marine molluscs that succeeded in the latter stages of invasion (i.e. that establish self-sustaining populations, spread and become pests) is much greater than has been previously predicted or shown for other animals and plants. KEY WORDS: Invasion \u00b7 Bivalve \u00b7 Gastropod \u00b7 Mollusc \u00b7 Marine \u00b7 Oyster \u00b7 Vector \u00b7 Risk assessment", "which Investigated species ?", "Molluscs", 525.0, 533.0], ["Abstract In hardwood subtropical forests of southern Florida, nonnative vines have been hypothesized to be detrimental, as many species form dense \u201cvine blankets\u201d that shroud the forest. To investigate the effects of nonnative vines in post-hurricane regeneration, we set up four large (two pairs of 30 \u00d7 60 m) study areas in each of three study sites. One of each pair was unmanaged and the other was managed by removal of nonnative plants, predominantly vines. Within these areas, we sampled vegetation in 5 \u00d7 5 m plots for stems 2 cm DBH (diameter at breast height) or greater and in 2 \u00d7 0.5 m plots for stems of all sizes. For five years, at annual censuses, we tagged and measured stems of vines, trees, shrubs and herbs in these plots. For each 5 \u00d7 5 m plot, we estimated percent coverage by individual vine species, using native and nonnative vines as classes. We investigated the hypotheses that: (1) plot coverage, occurrence and recruitment of nonnative vines were greater than that of native vines in unmanaged plots; (2) the management program was effective at reducing cover by nonnative vines; and (3) reduction of cover by nonnative vines improved recruitment of seedlings and saplings of native trees, shrubs, and herbs. In unmanaged plots, nonnative vines recruited more seedlings and had a significantly higher plot-cover index, but not a higher frequency of occurrence. Management significantly reduced cover by nonnative vines and had a significant overall positive effect on recruitment of seedlings and saplings of native trees, shrubs and herbs. Management also affected the seedling community (which included vines, trees, shrubs, and herbs) in some unanticipated ways, favoring early successional species for a longer period of time. The vine species with the greatest potential to \u201cstrangle\u201d gaps were those that rapidly formed dense cover, had shade tolerant seedling recruitment, and were animal-dispersed. This suite of traits was more common in the nonnative vines than in the native vines. Our results suggest that some vines may alter the spatiotemporal pattern of recruitment sites in a forest ecosystem following a natural disturbance by creating many very shady spots very quickly.", "which Investigated species ?", "Plants", 434.0, 440.0], ["Over 27,000 exotic plant species have been introduced to Australia, predominantly for use in gardening, agriculture and forestry. Less than 1% of such introductions have been solely accidental.
Plant introductions also occur within Australia, as exotic and native species are moved across the country. Plant-based industries contribute around $50 billion to Australia\u2019s economy each year, play a significant social role and can also provide environmental benefits such as mitigating dryland salinity. However, one of the downsides of a new plant introduction is the potential to become a new weed. Overall, 10% of exotic plant species introduced since European settlement have naturalised, but this rate is higher for agricultural and forestry plants. Exotic plant species have become agricultural, noxious and natural ecosystem weeds at rates of 4%, 1% and 7% respectively. Whilst garden plants have the lowest probability of becoming weeds, this is more than compensated by their vast numbers of introductions, such that gardening is the greatest source of weeds in Australia. Resolving conflicts of interest with plant introductions needs a collaborative effort between those stakeholders who would benefit (i.e. grow the plant) and those who would potentially lose (i.e. gain a weed) to compare the weed risk, feasibility of management and benefits of the species in question. For proposed plant imports to Australia, weed risk is presently the single consideration under international trade rules. Hence the focus is on ensuring the optimal performance of the border Weed Risk Assessment System. For plant species already present in Australia there are inconsistencies in managing weed risk between the States/Territories. This is being addressed with the development of a national standard for weed risk management. For agricultural and forestry species of high economic value but significant weed risk, the feasibility of standard risk management approaches needs to be investigated. Invasive garden plants need national action.", "which Investigated species ?", "Plants", 746.0, 752.0], ["Abstract: Concern over the impact of invaders on biodiversity and on the functioning of ecosystems has generated a rising tide of comparative analyses aiming to unveil the factors that shape the success of introduced species across different regions. One limitation of these studies is that they often compare geographically rather than ecologically defined regions. We propose an approach that can help address this limitation: comparison of invasions across convergent ecosystems that share similar climates. We compared avian invasions in five convergent mediterranean climate systems around the globe. Based on a database of 180 introductions representing 121 avian species, we found that the proportion of bird species successfully established was high in all mediterranean systems (more than 40% for all five regions). Species differed in their likelihood to become established, although success was not higher for those originating from mediterranean systems than for those from nonmediterranean regions. Controlling for this taxonomic effect with generalized linear mixed models, species introduced into mediterranean islands did not show higher establishment success than those introduced to the mainland. Susceptibility to avian invaders, however, differed substantially among the different mediterranean regions. The probability that a species will become established was highest in the Mediterranean Basin and lowest in mediterranean Australia and the South African Cape.
Our results suggest that many of the birds recently introduced into mediterranean systems, and especially into the Mediterranean Basin, have a high potential to establish self\u2010sustaining populations. This finding has important implications for conservation in these biologically diverse hotspots.", "which Investigated species ?", "Birds", 1521.0, 1526.0], ["What are the relative roles of mechanisms underlying plant responses in grassland communities invaded by both plants and mammals? What type of community can we expect in the future given current or novel conditions? We address these questions by comparing Markov chain community models among treatments from a field experiment on invasive species on Robinson Crusoe Island, Chile. Because of seed dispersal, grazing and disturbance, we predicted that the exotic European rabbit (Oryctolagus cuniculus) facilitates epizoochorous exotic plants (plants with seeds that stick to the skin of an animal) at the expense of native plants. To test our hypothesis, we crossed rabbit exclosure treatments with disturbance treatments, and sampled the plant community in permanent plots over 3 years. We then estimated Markov chain model transition probabilities and found significant differences among treatments. As hypothesized, this modelling revealed that exotic plants survive better in disturbed areas, while natives prefer no rabbits or disturbance. Surprisingly, rabbits negatively affect epizoochorous plants. Markov chain dynamics indicate that an overall replacement of native plants by exotic plants is underway. Using a treatment-based approach to multi-species Markov chain models allowed us to examine the changes in the importance of mechanisms in response to experimental impacts on communities.", "which Investigated species ?", "Mammals", 121.0, 128.0], ["Invasions by exotic plants may be more likely if exotics have low rates of attack by natural enemies, including post-dispersal seed predators (granivores). We investigated this idea with a field experiment conducted near Newmarket, Ontario, in which we experimentally excluded vertebrate and terrestrial insect seed predators from seeds of 43 native and exotic old-field plants. Protection from vertebrates significantly increased recovery of seeds; vertebrate exclusion produced higher recovery than controls for 30 of the experimental species, increasing overall seed recovery from 38.2 to 45.6%. Losses to vertebrates varied among species, significantly increasing with seed mass. In contrast, insect exclusion did not significantly improve seed recovery. There was no evidence that aliens benefitted from a reduced rate of post-dispersal seed predation. The impacts of seed predators did not differ significantly between natives and exotics, which instead showed very similar responses to predator exclusion treatments. These results indicate that while vertebrate granivores had important impacts, especially on large-seeded species, exotics did not generally benefit from reduced rates of seed predation. Instead, differences between natives and exotics were small compared with interspecific variation within these groups. Summary: Invasion by alien weedy plants is more plausible if these plants have few natural enemies, including post-dispersal seed predators (granivores).
The authors examined this idea in a field experiment conducted near Newmarket, Ontario, in which they experimentally excluded vertebrate and terrestrial insect seed predators from the seeds of 43 native and exotic old-field plant species. Protection from vertebrates significantly increased seed survival; exclusion yielded greater seed recovery than controls for 30 of the experimental species, raising overall recovery from 38.2 to 45.6%. Losses to vertebrates varied among species, increasing significantly with seed size. In contrast, insect exclusion did not significantly increase the number of seeds recovered. There was no evidence that aliens benefited from a reduced rate of post-dispersal seed predation. The impacts of seed predators did not differ significantly between native and introduced species, which instead showed very similar responses to predator exclusion treatments. These results indicate that although vertebrate granivores have important impacts, especially on large-seeded species, introduced plants do not generally benefit from reduced rates of seed predation. Instead, the differences between native and introduced plants are small compared with the interspecific variation within each of these groups. Key words: weeds, exotics, granivores, invaders, old fields, seed predators. (Translated by the Editors)", "which Investigated species ?", "Plants", 20.0, 26.0], ["An important question in the study of biological invasions is the degree to which successful invasion can be explained by release from control by natural enemies. Natural enemies dominate explanations of two alternate phenomena: that most introduced plants fail to establish viable populations (biotic resistance hypothesis) and that some introduced plants become noxious invaders (natural enemies hypothesis). We used a suite of 18 phylogenetically related native and nonnative clovers (Trifolium and Medicago) and the foliar pathogens and invertebrate herbivores that attack them to answer two questions. Do native species suffer greater attack by natural enemies relative to introduced species at the same site? Are some introduced species excluded from native plant communities because they are susceptible to local natural enemies? We address these questions using three lines of evidence: (1) the frequency of attack and composition of fungal pathogens and herbivores for each clover species in four years of common garden experiments, as well as susceptibility to inoculation with a common pathogen; (2) the degree of leaf damage suffered by each species in common garden experiments; and (3) fitness effects estimated using correlative approaches and pathogen removal experiments. Introduced species showed no evidence of escape from pathogens, being equivalent to native species as a group in terms of infection levels, susceptibility, disease prevalence, disease severity (with more severe damage on introduced species in one year), the influence of disease on mortality, and the effect of fungicide treatment on mortality and biomass.
In contrast, invertebrate herbivores caused more damage on native species in two years, although the influence of herbivore attack on mortality did not differ between native and introduced species. Within introduced species, the predictions of the biotic resistance hypothesis were not supported: the most invasive species showed greater infection, greater prevalence and severity of disease, greater prevalence of herbivory, and greater effects of fungicide on biomass and were indistinguishable from noninvasive introduced species in all other respects. Therefore, although herbivores preferred native over introduced species, escape from pest pressure cannot be used to explain why some introduced clovers are common invaders in coastal prairie while others are not.", "which Investigated species ?", "Plants", 250.0, 256.0], ["The enemy release hypothesis predicts that native herbivores prefer native, rather than exotic plants, giving invaders a competitive advantage. In contrast, the biotic resistance hypothesis states that many invaders are prevented from establishing because of competitive interactions, including herbivory, with native fauna and flora. Success or failure of spread and establishment might also be influenced by the presence or absence of mutualists, such as pollinators. Senecio madagascariensis (fireweed), an annual weed from South Africa, inhabits a similar range in Australia to the related native S. pinnatifolius. The aim of this study was to determine, within the context of invasion biology theory, whether the two Senecio species share insect fauna, including floral visitors and herbivores. Surveys were carried out in south-east Queensland on allopatric populations of the two Senecio species, with collected insects identified to morphospecies. Floral visitor assemblages were variable between populations. However, the two Senecio species shared the two most abundant floral visitors, honeybees and hoverflies. Herbivore assemblages, comprising mainly hemipterans of the families Cicadellidae and Miridae, were variable between sites and no patterns could be detected between Senecio species at the morphospecies level. However, when insect assemblages were pooled (i.e. community level analysis), S. pinnatifolius was shown to host a greater total abundance and richness of herbivores. Senecio madagascariensis is unlikely to be constrained by lack of pollinators in its new range and may benefit from lower levels of herbivory compared to its native congener S. pinnatifolius.", "which Investigated species ?", "Plants", 95.0, 101.0], ["The current rate of invasive species introductions is unprecedented, and the dramatic impacts of exotic invasive plants on community and ecosystem properties have been well documented. Despite the pressing management implications, the mechanisms that control exotic plant invasion remain poorly understood. Several factors, such as disturbance, propagule pressure, species diversity, and herbivory, are widely believed to play a critical role in exotic plant invasions. However, few studies have examined the relative importance of these factors, and little is known about how propagule pressure interacts with various mechanisms of ecological resistance to determine invasion success. We quantified the relative importance of canopy disturbance, propagule pressure, species diversity, and herbivory in determining exotic plant invasion in 10 eastern hemlock forests in Pennsylvania and New Jersey (USA). 
Use of a maximum-likelihood estimation framework and information theoretics allowed us to quantify the strength of evidence for alternative models of the influence of these factors on changes in exotic plant abundance. In addition, we developed models to determine the importance of interactions between ecosystem properties and propagule pressure. These analyses were conducted for three abundant, aggressive exotic species that represent a range of life histories: Alliaria petiolata, Berberis thunbergii, and Microstegium vimineum. Of the four hypothesized determinants of exotic plant invasion considered in this study, canopy disturbance and propagule pressure appear to be the most important predictors of A. petiolata, B. thunbergii, and M. vimineum invasion. Herbivory was also found to be important in contributing to the invasion of some species. In addition, we found compelling evidence of an important interaction between propagule pressure and canopy disturbance. This is the first study to demonstrate the dominant role of the interaction between canopy disturbance and propagule pressure in determining forest invasibility relative to other potential controlling factors. The importance of the disturbance-propagule supply interaction, and its nonlinear functional form, has profound implications for the management of exotic plant species populations. Improving our ability to predict exotic plant invasions will require enhanced understanding of the interaction between propagule pressure and ecological resistance mechanisms.", "which Investigated species ?", "Plants", 113.0, 119.0], ["In herbaceous ecosystems worldwide, biodiversity has been negatively impacted by changed grazing regimes and nutrient enrichment. Altered disturbance regimes are thought to favour invasive species that have a high phenotypic plasticity, although most studies measure plasticity under controlled conditions in the greenhouse and then assume plasticity is an advantage in the field. Here, we compare trait plasticity between three co-occurring, C4 perennial grass species, an invader Eragrostis curvula, and natives Eragrostis sororia and Aristida personata to grazing and fertilizer in a three-year field trial. We measured abundances and several leaf traits known to correlate with strategies used by plants to fix carbon and acquire resources, i.e. specific leaf area (SLA), leaf dry matter content (LDMC), leaf nutrient concentrations (N, C\u2236N, P), assimilation rates (Amax) and photosynthetic nitrogen use efficiency (PNUE). In the control treatment (grazed only), trait values for SLA, leaf C\u2236N ratios, Amax and PNUE differed significantly between the three grass species. When trait values were compared across treatments, E. curvula showed higher trait plasticity than the native grasses, and this correlated with an increase in abundance across all but the grazed/fertilized treatment. The native grasses showed little trait plasticity in response to the treatments. Aristida personata decreased significantly in the treatments where E. curvula increased, and E. sororia abundance increased possibly due to increased rainfall and not in response to treatments or invader abundance. Overall, we found that plasticity did not favour an increase in abundance of E. curvula under the grazed/fertilized treatment likely because leaf nutrient contents increased and subsequently its palatability to consumers. E. curvula also displayed a higher resource use efficiency than the native grasses.
These findings suggest resource conditions and disturbance regimes can be manipulated to disadvantage the success of even plastic exotic species.", "which Investigated species ?", "Plants", 701.0, 707.0], ["It has been long established that the richness of vascular plant species and many animal taxa decreases with increasing latitude, a pattern that very generally follows declines in actual and potential evapotranspiration, solar radiation, temperature, and thus, total productivity. Using county-level data on vascular plants from the United States (3000 counties in the conterminous 48 states), we used the Akaike Information Criterion (AIC) to evaluate competing models predicting native and nonnative plant species density (number of species per square kilometer in a county) from various combinations of biotic variables (e.g., native bird species density, vegetation carbon, normalized difference vegetation index), environmental/topographic variables (elevation, variation in elevation, the number of land cover classes in the county, radiation, mean precipitation, actual evapotranspiration, and potential evapotranspiration), and human variables (human population density, cropland, and percentage of disturbed lands in a county). We found no evidence of a latitudinal gradient for the density of native plant species and a significant, slightly positive latitudinal gradient for the density of nonnative plant species. We found stronger evidence of a significant, positive productivity gradient (vegetation carbon) for the density of native plant species and nonnative plant species. We found much stronger significant relationships when biotic, environmental/topographic, and human variables were used to predict native plant species density and nonnative plant species density. Biotic variables generally had far greater influence in multivariate models than human or environmental/topographic variables. Later, we found that the best, single, positive predictor of the density of nonnative plant species in a county was the density of native plant species in a county. While further study is needed, it may be that, while humans facilitate the initial establishment invasions of nonnative plant species, the spread and subsequent distributions of nonnative species are controlled largely by biotic and environmental factors.", "which Investigated species ?", "Plants", 317.0, 323.0], ["The widely held belief that riparian communities are highly invasible to exotic plants is based primarily on comparisons of the extent of invasion in riparian and upland communities. However, because differences in the extent of invasion may simply result from variation in propagule supply among recipient environments, true comparisons of invasibility require that both invasion success and propagule pressure are quantified. In this study, we quantified propagule pressure in order to compare the invasibility of riparian and upland forests and assess the accuracy of using a community's level of invasion as a surrogate for its invasibility. We found the extent of invasion to be a poor proxy for invasibility. The higher level of invasion in the studied riparian forests resulted from greater propagule availability rather than higher invasibility. Furthermore, failure to account for propagule pressure may confound our understanding of general invasion theories. Ecological theory suggests that species-rich communities should be less invasible.
However, we found significant relationships between species diversity and invasion extent, but no diversity-invasibility relationship was detected for any species. Our results demonstrate that using a community's level of invasion as a surrogate for its invasibility can confound our understanding of invasibility and its determinants.", "which Investigated species ?", "Plants", 80.0, 86.0], ["Over 75 species of alien plants were recorded during the first five years after fire in southern California shrublands, most of which were European annuals. Both cover and richness of aliens varied between years and plant association. Alien cover was lowest in the first postfire year in all plant associations and remained low during succession in chaparral but increased in sage scrub. Alien cover and richness were significantly correlated with year (time since disturbance) and with precipitation in both coastal and interior sage scrub associations. Hypothesized factors determining alien dominance were tested with structural equation modeling. Models that included nitrogen deposition and distance from the coast were not significant, but with those variables removed we obtained a significant model that gave an R\u00b2 = 0.60 for the response variable of fifth year alien dominance. Factors directly affecting alien dominance were (1) woody canopy closure and (2) alien seed banks. Significant indirect effects were (3) fire intensity, (4) fire history, (5) prefire stand structure, (6) aridity, and (7) community type. According to this model the most critical factor influencing aliens is the rapid return of the shrub and subshrub canopy. Thus, in these communities a single functional type (woody plants) appears to be the most critical element controlling alien invasion and persistence. Fire history is an important indirect factor because it affects both prefire stand structure and postfire alien seed banks. Despite being fire-prone ecosystems, these shrublands are not adapted to fire per se, but rather to a particular fire regime. Alterations in the fire regime produce a very different selective environment, and high fire frequency changes the selective regime to favor aliens. This study does not support the widely held belief that prescription burning is a viable management practice for controlling alien species on semiarid landscapes.", "which Investigated species ?", "Plants", 25.0, 31.0], ["A fundamental question in ecology is whether there are evolutionary characteristics of species that make some better than others at invading new communities. In birds, nesting habits, sexually selected traits, migration, clutch size and body mass have been suggested as important variables, but behavioural flexibility is another obvious trait that has received little attention. Behavioural flexibility allows animals to respond more rapidly to environmental changes and can therefore be advantageous when invading novel habitats. Behavioural flexibility is linked to relative brain size and, for foraging, has been operationalised as the number of innovations per taxon reported in the short note sections of ornithology journals. Here, we use data on avian species introduced to New Zealand and test the link between forebrain size, feeding innovation frequency and invasion success. Relative brain size was, as expected, a significant predictor of introduction success, after removing the effect of introduction effort. Species with relatively larger brains tended to be better invaders than species with smaller ones.
Introduction effort, migratory strategy and mode of juvenile development were also significant in the models. Pair-wise comparisons of closely related species indicate that successful invaders also showed a higher frequency of foraging innovations in their region of origin. This study provides the first evidence in vertebrates of a general set of traits, behavioural flexibility, that can potentially favour invasion success.", "which Investigated species ?", "Birds", 161.0, 166.0], ["Background Biological invasions are fundamentally biogeographic processes that occur over large spatial scales. Interactions with soil microbes can have strong impacts on plant invasions, but how these interactions vary among areas where introduced species are highly invasive vs. naturalized is still unknown. In this study, we examined biogeographic variation in plant-soil microbe interactions of a globally invasive weed, Centaurea solstitialis (yellow starthistle). We addressed the following questions (1) Is Centaurea released from natural enemy pressure from soil microbes in introduced regions? and (2) Is variation in plant-soil feedbacks associated with variation in Centaurea's invasive success? Methodology/Principal Findings We conducted greenhouse experiments using soils and seeds collected from native Eurasian populations and introduced populations spanning North and South America where Centaurea is highly invasive and noninvasive. Soil microbes had pervasive negative effects in all regions, although the magnitude of their effect varied among regions. These patterns were not unequivocally congruent with the enemy release hypothesis. Surprisingly, we also found that Centaurea generated strong negative feedbacks in regions where it is the most invasive, while it generated neutral plant-soil feedbacks where it is noninvasive. Conclusions/Significance Recent studies have found reduced below-ground enemy attack and more positive plant-soil feedbacks in range-expanding plant populations, but we found increased negative effects of soil microbes in range-expanding Centaurea populations. While such negative feedbacks may limit the long-term persistence of invasive plants, such feedbacks may also contribute to the success of invasions, either by having disproportionately negative impacts on competing species, or by yielding relatively better growth in uncolonized areas that would encourage lateral spread. Enemy release from soil-borne pathogens is not sufficient to explain the success of this weed in such different regions. The biogeographic variation in soil-microbe effects indicates that different mechanisms may operate on this species in different regions, thus establishing geographic mosaics of species interactions that contribute to variation in invasion success.", "which Investigated species ?", "Plants", 1690.0, 1696.0], ["Abstract Root plowing is a common management practice to reduce woody vegetation and increase herbaceous forage for livestock on rangelands. Our objective was to test the hypotheses that four decades after sites are root plowed they have 1) lower plant species diversity, less heterogeneity, greater percent canopy cover of exotic grasses; and 2) lower abundance and diversity of amphibians, reptiles, and small mammals, compared to sites that were not disturbed by root plowing. Pairs of 4-ha sites were selected for sampling: in each pair of sites, one was root plowed in 1965 and another was not disturbed by root plowing (untreated). 
We estimated canopy cover of woody and herbaceous vegetation during summer 2003 and canopy cover of herbaceous vegetation during spring 2004. We trapped small mammals and herpetofauna in pitfall traps during late spring and summer 2001\u20132004. Species diversity and richness of woody plants were less on root-plowed than on untreated sites; however, herbaceous plant and animal species did not differ greatly between treatments. Evenness of woody vegetation was less on root-plowed sites, in part because woody legumes were more abundant. Abundance of small mammals and herpetofauna varied with annual rainfall more than it varied with root plowing. Although structural differences existed between vegetation communities, secondary succession of vegetation reestablishing after root plowing appears to be leading to convergence in plant and small animal species composition with untreated sites.", "which Investigated species ?", "Plants", 920.0, 926.0], ["Although many studies have investigated how community characteristics such as diversity and disturbance relate to invasibility, the mechanisms underlying biotic resistance to introduced species are not well understood. I manipulated the functional group composition of native algal communities and invaded them with the introduced, Japanese seaweed Sargassum muticum to understand how individual functional groups contributed to overall invasion resistance. The results suggested that space preemption by crustose and turfy algae inhibited S. muticum recruitment and that light preemption by canopy and understory algae reduced S. muticum survivorship. However, other mechanisms I did not investigate could have contributed to these two results. In this marine community the sequential preemption of key resources by different functional groups in different stages of the invasion generated resistance to invasion by S. muticum. Rather than acting collectively on a single resource the functional groups in this system were important for preempting either space or light, but not both resources. My experiment has important implications for diversity-invasibility studies, which typically look for an effect of diversity on individual resources. Overall invasion resistance will be due to the additive effects of individual functional groups (or species) summed over an invader's life cycle. Therefore, the cumulative effect of multiple functional groups (or species) acting on multiple resources is an alternative mechanism that could generate negative relationships between diversity and invasibility in a variety of biological systems.", "which Investigated species ?", "Algae", 524.0, 529.0], ["Giant hogweed, Heracleum mantegazzianum (Apiaceae), was introduced from the Caucasus into Western Europe more than 150 years ago and later became an invasive weed which created major problems for European authorities. Phytophagous insects were collected in the native range of the giant hogweed (Caucasus) and were compared to those found on plants in the invaded parts of Europe. The list of herbivores was compiled from surveys of 27 localities in nine countries during two seasons. In addition, literature records for herbivores were analysed for a total of 16 Heracleum species. We recorded a total of 265 herbivorous insects on Heracleum species and we analysed them to describe the herbivore assemblages, locate vacant niches, and identify the most host-specific herbivores on H. mantegazzianum.
When combining our investigations with similar studies of herbivores on other invasive weeds, all studies show a higher proportion of specialist herbivores in the native habitats compared to the invaded areas, supporting the "enemy release hypothesis" (ERH). When analysing the relative size of the niches (measured as plant organ biomass), we found less herbivore species per biomass on the stem and roots, and more on the leaves (Fig. 5). Most herbivores were polyphagous generalists, some were found to be oligophagous (feeding within the same family of host plants) and a few had only Heracleum species as host plants (monophagous). None were known to feed exclusively on H. mantegazzianum. The oligophagous herbivores were restricted to a few taxonomic groups, especially within the Hemiptera, and were particularly abundant on this weed.", "which Investigated species ?", "Plants", 342.0, 348.0], ["Predicting community susceptibility to invasion has become a priority for preserving biodiversity. We tested the hypothesis that the occurrence and abundance of the seaweed Caulerpa racemosa in the north-western (NW) Mediterranean would increase with increasing levels of human disturbance. Data from a survey encompassing areas subjected to different human influences (i.e. from urbanized to protected areas) were fitted by means of generalized linear mixed models, including descriptors of habitats and communities. The incidence of occurrence of C. racemosa was greater on urban than extra-urban or protected reefs, along the coast of Tuscany and NW Sardinia, respectively. Within the Marine Protected Area of Capraia Island (Tuscan Archipelago), the probability of detecting C. racemosa did not vary according to the degree of protection (partial versus total). Human influence was, however, a poor predictor of the seaweed cover. At the seascape level, C. racemosa was more widely spread within degraded (i.e. Posidonia oceanica dead matte or algal turfs) than in better preserved habitats (i.e. canopy-forming macroalgae or P. oceanica seagrass meadows). At a smaller spatial scale, the presence of the seaweed was positively correlated to the diversity of macroalgae and negatively to that of sessile invertebrates. These results suggest that C. racemosa can take advantage of habitat degradation. Thus, predicting invasion scenarios requires a thorough knowledge of ecosystem structure, at a hierarchy of levels of biological organization (from the landscape to the assemblage) and detailed information on the nature and intensity of sources of disturbance and spatial scales at which they operate.", "which Investigated species ?", "Algae", NaN, NaN], ["Aim Recent works have found the presence of native congeners to have a small effect on the naturalization rates of introduced plants, some suggesting a negative interaction (as proposed by Charles Darwin in The Origin of Species), and others a positive association. We assessed this question for a new biogeographic region, and discuss some of the problems associated with data base analyses of this type.", "which Investigated species ?", "Plants", 126.0, 132.0], ["Abstract Entomofauna in monospecific stands of the introduced Chinese tallow tree (Sapium sebiferum) and native mixed woodlands was sampled in 1982 along the Texas coast and compared to samples of arthropods from an earlier study of native coastal prairie and from a study of arthropods in S. sebiferum in 2004.
Species diversity, richness, and abundance were highest in prairie, and were higher in mixed woodland than in S. sebiferum. Nonmetric multidimensional scaling distinguished orders and families of arthropods, and families of herbivores in S. sebiferum from mixed woodland and coastal prairie. Taxonomic similarity between S. sebiferum and mixed woodland was 51%. Fauna from S. sebiferum in 2001 was more similar to mixed woodland than to samples from S. sebiferum collected in 1982. These results indicate that the entomofauna in S. sebiferum originated from mixed prairie and that, with time, these faunas became more similar. Species richness and abundance of herbivores was lower in S. sebiferum, but proportion of total species in all trophic groups, except herbivores, was higher in S. sebiferum than mixed woodland. Low concentration of tannin in leaves of S. sebiferum did not explain low loss of leaves to herbivores. Lower abundance of herbivores on introduced species of plants fits the enemy release hypothesis, and low concentration of defense compounds in the face of low number of herbivores fits the evolution of increased competitive ability hypothesis.", "which Investigated species ?", "Plants", 1292.0, 1298.0], ["Changes in disturbance due to fire regime in southwestern Pinus ponderosa forests over the last century have led to dense forests that are threatened by widespread fire. It has been shown in other studies that a pulse of native, early-seral opportunistic species typically follow such disturbance events. With the growing importance of exotic plants in local flora, however, these exotics often fill this opportunistic role in recovery. We report the effects of fire severity on exotic plant species following three widespread fires of 1996 in northern Arizona P. ponderosa forests. Species richness and abundance of all vascular plant species, including exotics, were higher in burned than nearby unburned areas. Exotic species were far more important, in terms of cover, where fire severity was highest. Species present after wildfires include those of the pre-disturbed forest and new species that could not be predicted from above-ground flora of nearby unburned forests.", "which Investigated species ?", "Plants", 345.0, 351.0], ["This study is a comparison of the spontaneous vascular flora of five Italian cities: Milan, Ancona, Rome, Cagliari and Palermo. The aims of the study are to test the hypothesis that urbanization results in uniformity of urban floras, and to evaluate the role of alien species in the flora of settlements located in different phytoclimatic regions. To obtain comparable data, ten plots of 1 ha, each representing typical urban habitats, were analysed in each city. The results indicate a low floristic similarity between the cities, while the strongest similarity appears within each city and between each city and the seminatural vegetation of the surrounding region. In the Mediterranean settlements, even the most urbanized plots reflect the characters of the surrounding landscape and are rich in native species, while aliens are relatively few. These results differ from the reported uniformity and the high proportion of aliens which generally characterize urban floras elsewhere. To explain this trend the importance of apophytes (indigenous plants expanding into man-made habitats) is highlighted; several Mediterranean species adapted to disturbance (i.e. grazing, trampling, and human activities) are pre-adapted to the urban environment.
In addition, consideration is given to the minor role played by the \u2018urban heat island\u2019 in the Mediterranean basin, and to the structure and history of several Italian settlements, where ancient walls, ruins and archaeological sites in the periphery as well as in the historical centres act as conservative habitats and provide connection with seed-sources on the outskirts.", "which Investigated species ?", "Plants", 1048.0, 1054.0], ["Schinus molle (Peruvian pepper tree) was introduced to South Africa more than 150 years ago and was widely planted, mainly along roads. Only in the last two decades has the species become naturalized and invasive in some parts of its new range, notably in semi-arid savannas. Research is being undertaken to predict its potential for further invasion in South Africa. We studied production, dispersal and predation of seeds, seed banks, and seedling establishment in relation to land uses at three sites, namely ungrazed savanna once used as a military training ground; a savanna grazed by native game; and an ungrazed mine dump. We found that seed production and seed rain density of S. molle varied greatly between study sites, but was high at all sites (384 864\u20131 233 690 seeds per tree per year; 3877\u20139477 seeds per square metre per year). We found seeds dispersed to distances of up to 320 m from female trees, and most seeds were deposited within 50 m of putative source trees. Annual seed rain density below canopies of Acacia tortilis, the dominant native tree at all sites, was significantly lower in grazed savanna. The quality of seed rain was much reduced by endophagous predators. Seed survival in the soil was low, with no survival recorded beyond 1 year. Propagule pressure to drive the rate of recruitment: densities of seedlings and sapling densities were higher in ungrazed savanna and the ungrazed mine dump than in grazed savanna, as reflected by large numbers of young individuals, but adult : seedling ratios did not differ between savanna sites. Frequent and abundant seed production, together with effective dispersal of viable S. molle seed by birds to suitable establishment sites below trees of other species to overcome predation effects, facilitates invasion. Disturbance enhances invasion, probably by reducing competition from native plants.", "which Investigated species ?", "Plants", 1866.0, 1872.0], ["Among introduced passeriform and columbiform birds of the six major Hawaiian islands, some species (including most of those introduced early) may have an intrinsically high probability of successful invasion, whereas others (including many of those introduced from 1900 through 1936) may be intrinsically less likely to succeed. This hypothesis accords well with the observation that, of the 41 species introduced on more than one of the Hawaiian islands, all but four either succeeded everywhere they were introduced or failed everywhere they were introduced, no matter what other species or how many other species were present. Other hypotheses, including competitive ones, are possible. However, most other patterns that have been claimed to support the hypothesis that competitive interactions have been key to which species survived are ambiguous. We propose that the following patterns are true: (1) Extinction rate as a function of number of species present (S) is not better fit by addition of an S\u00b2 term.
(2) Bill-length differences between pairs of species that invaded together may tend to be less for pairs in which at least one species became extinct, but the result is easily changed by use of one reasonable set of conventions rather than another. In any event, the relationship of bill-length differences to resource overlap has not been established for these species. (3) Surviving forest passeriforms on Oahu may be overdispersed in morphological space, although the species pool used to construct the space may not have been the correct one. (4) Densities of surviving species on species-poor islands have not been shown to exceed those on species-rich islands.", "which Investigated species ?", "Birds", 45.0, 50.0], ["Few invaded ecosystems are free from habitat loss and disturbance, leading to uncertainty whether dominant invasive species are driving community change or are passengers along for the environmental ride. The "driver" model predicts that invaded communities are highly interactive, with subordinate native species being limited or excluded by competition from the exotic dominants. The "passenger" model predicts that invaded communities are primarily structured by noninteractive factors (environmental change, dispersal limitation) that are less constraining on the exotics, which thus dominate. We tested these alternative hypotheses in an invaded, fragmented, and fire-suppressed oak savanna. We examined the impact of two invasive dominant perennial grasses on community structure using a reduction (mowing of aboveground biomass) and removal (weeding of above- and belowground biomass) experiment conducted at different seasons and soil depths. We examined the relative importance of competition vs. dispersal limitation with experimental seed additions. Competition by the dominants limits the abundance and reproduction of many native and exotic species based on their increased performance with removals and mowing. The treatments resulted in increased light availability and bare soil; soil moisture and N were unaffected. Although competition was limiting for some, 36 of 79 species did not respond to the treatments or declined in the absence of grass cover. Seed additions revealed that some subordinates are dispersal limited; competition alone was insufficient to explain their rarity even though it does exacerbate dispersal inefficiencies by lowering reproduction. While the net effects of the dominants were negative, their presence restricted woody plants, facilitated seedling survival with moderate disturbance (i.e., treatments applied in the fall), or was not the primary limiting factor for the occurrence of some species. Finally, the species most functionally distinct from the dominants (forbs, woody plants) responded most significantly to the treatments. This suggests that relative abundance is determined more by trade-offs relating to environmental conditions (long-term fire suppression) than to traits relating to resource capture (which should most impact functionally similar species). This points toward the passenger model as the underlying cause of exotic dominance, although their combined effects (suppressive and facilitative) on community structure are substantial.", "which Investigated species ?", "Plants", 1776.0, 1782.0], ["The herbivore load (abundance and species richness of herbivores) on alien plants is supposed to be one of the keys to understand the invasiveness of species.
We investigate the phytophagous insect communities on cabbage plants (Brassicaceae) in Europe. We compare the communities of endophagous and ectophagous insects as well as of Coleoptera and Lepidoptera on native and alien cabbage plant species. Contrary to many other reports, we found no differences in the herbivore load between native and alien hosts. The majority of insect species attacked alien as well as native hosts. Across insect species, there was no difference in the patterns of host range on native and on alien hosts. Likewise the similarity of insect communities across pairs of host species was not different between natives and aliens. We conclude that the general similarity in the community patterns between native and alien cabbage plant species are due to the chemical characteristics of this plant family. All cabbage plants share glucosinolates. This may facilitate host switches from natives to aliens. Hence the presence of native congeners may influence invasiveness of alien plants.", "which Investigated species ?", "Plants", 75.0, 81.0], ["The enemy release hypothesis posits that non-native plant species may gain a competitive advantage over their native counterparts because they are liberated from co-evolved natural enemies from their native area. The phylogenetic relationship between a non-native plant and the native community may be important for understanding the success of some non-native plants, because host switching by insect herbivores is more likely to occur between closely related species. We tested the enemy release hypothesis by comparing leaf damage and herbivorous insect assemblages on the invasive species Senecio madagascariensis Poir. to that on nine congeneric species, of which five are native to the study area, and four are non-native but considered non-invasive. Non-native species had less leaf damage than natives overall, but we found no significant differences in the abundance, richness and Shannon diversity of herbivores between native and non-native Senecio L. species. The herbivore assemblage and percentage abundance of herbivore guilds differed among all Senecio species, but patterns were not related to whether the species was native or not. Species-level differences indicate that S. madagascariensis may have a greater proportion of generalist insect damage (represented by phytophagous leaf chewers) than the other Senecio species. Within a plant genus, escape from natural enemies may not be a sufficient explanation for why some non-native species become more invasive than others.", "which Investigated species ?", "Plants", 361.0, 367.0], ["Introduction vectors for marine non-native species, such as oyster culture and boat fouling, often select for organisms dependent on hard substrates during some or all life stages. In soft-sediment estuaries, hard substrate is a limited resource, which can increase with the introduction of hard habitat-creating non-native species. Positive interactions between non-native, habitat-creating species and non-native species utilizing such habitats could be a mechanism for enhanced invasion success. Most previous studies on aquatic invasive habitat-creating species have demonstrated positive responses in associated communities, but few have directly addressed responses of other non-native species.
We explored the association of native and non-native species with invasive habitat-creating species by comparing communities associated with non-native, reef-building tubeworms Ficopomatus enigmaticus and native oysters Ostrea conchaphila in Elkhorn Slough, a central California estuary. Non-native habitat supported greater densities of associated organisms\u2014primarily highly abundant non-native amphipods (e.g. Monocorophium insidiosum, Melita nitida), tanaid (Sinelebus sp.), and tube-dwelling polychaetes (Polydora spp.). Detritivores were the most common trophic group, making up disproportionately more of the community associated with F. enigmaticus than was the case in the O. conchaphila community. Analysis of similarity (ANOSIM) showed that native species' community structure varied significantly among sites, but not between biogenic habitats. In contrast, non-natives varied with biogenic habitat type, but not with site. Thus, reefs of the invasive tubeworm F. enigmaticus interact positively with other non-native species.", "which Investigated species ?", "Tubeworm", 1675.0, 1683.0], ["Plant species introduced into novel ranges may become invasive due to evolutionary change, phenotypic plasticity, or other biotic or abiotic mechanisms. Evolution of introduced populations could be the result of founder effects, drift, hybridization, or adaptation to local conditions, which could enhance the invasiveness of introduced species. However, understanding whether the success of invading populations is due to genetic differences between native and introduced populations may be obscured by origin \u00d7 environment interactions. That is, studies conducted under a limited set of environmental conditions may show inconsistent results if native or introduced populations are differentially adapted to specific conditions. We tested for genetic differences between native and introduced populations, and for origin \u00d7 environment interactions, between native (China) and introduced (U.S.) populations of the invasive annual grass Microstegium vimineum (stiltgrass) across 22 common gardens spanning a wide range of habitats and environmental conditions. On average, introduced populations produced 46% greater biomass and had 7.4% greater survival, and outperformed native range populations in every common garden. However, we found no evidence that introduced Microstegium exhibited greater phenotypic plasticity than native populations. Biomass of Microstegium was positively correlated with light and resident community richness and biomass across the common gardens. However, these relationships were equivalent for native and introduced populations, suggesting that the greater mean performance of introduced populations is not due to unequal responses to specific environmental parameters. Our data on performance of invasive and native populations suggest that post-introduction evolutionary changes may have enhanced the invasive potential of this species. Further, the ability of Microstegium to survive and grow across the wide variety of environmental conditions demonstrates that few habitats are immune to invasion.", "which Phenotypic plasticity form ?", "Biomass", 1117.0, 1124.0], ["I examined the distribution and abundance of bird species across an urban gradient, and concomitant changes in community structure, by censusing summer resident bird populations at six sites in Santa Clara County, California (all former oak woodlands).
These sites represented a gradient of urban land use that ranged from relatively undisturbed to highly developed, and included a biological preserve, recreational area, golf course, residential neighborhood, office park, and business district. The composition of the bird community shifted from predominantly native species in the undisturbed area to invasive and exotic species in the business district. Species richness, Shannon diversity, and bird biomass peaked at moderately disturbed sites. One or more species reached maximal densities in each of the sites, and some species were restricted to a given site. The predevelopment bird species (assumed to be those found at the most undisturbed site) dropped out gradually as the sites became more urban. These patterns were significantly related to shifts in habitat structure that occurred along the gradient, as determined by canonical correspondence analysis (CCA) using the environmental variables of percent land covered by pavement, buildings, lawn, grasslands, and trees or shrubs. I compared each formal site to four additional sites with similar levels of development within a two-county area to verify that the bird communities at the formal study sites were representative of their land use category.", "which Focal entity ?", "Communities", 1433.0, 1444.0], ["I examined the distribution and abundance of bird species across an urban gradient, and concomitant changes in community structure, by censusing summer resident bird populations at six sites in Santa Clara County, California (all former oak woodlands). These sites represented a gradient of urban land use that ranged from relatively undisturbed to highly developed, and included a biological preserve, recreational area, golf course, residential neighborhood, office park, and business district. The composition of the bird community shifted from predominantly native species in the undisturbed area to invasive and exotic species in the business district. Species richness, Shannon diversity, and bird biomass peaked at moderately disturbed sites. One or more species reached maximal densities in each of the sites, and some species were restricted to a given site. The predevelopment bird species (assumed to be those found at the most undisturbed site) dropped out gradually as the sites became more urban. These patterns were significantly related to shifts in habitat structure that occurred along the gradient, as determined by canonical correspondence analysis (CCA) using the environmental variables of percent land covered by pavement, buildings, lawn, grasslands, and trees or shrubs. I compared each formal site to four additional sites with similar levels of development within a two-county area to verify that the bird communities at the formal study sites were representative of their land use category.", "which Focal entity ?", "Populations", 166.0, 177.0], ["Summary Non-native species with growth forms that are different from the native flora may alter the physical structure of the area they invade, thereby changing the resources available to resident species. This in turn can select for species with traits suited for the new growing environment. We used adjacent uninvaded and invaded grassland patches to evaluate whether the shift in dominance from a native perennial bunchgrass, Nassella pulchra, to the early season, non-native annual grass, Bromus diandrus, affects the physical structure, available light, plant community composition and community-weighted trait means.
Our field surveys revealed that the exotic grass B. diandrus alters both the vertical and horizontal structure creating more dense continuous vegetative growth and dead plant biomass than patches dominated by N. pulchra. These differences in physical structure are responsible for a threefold reduction in available light and likely contribute to the lower diversity, especially of native forbs in B. diandrus-dominated patches. Further, flowering time began earlier and seed size and plant height were higher in B. diandrus patches relative to N. pulchra patches. Our results suggest that species that are better suited (earlier phenology, larger seed size and taller) for low light availability are those that coexist with B. diandrus, and this is consistent with our hypothesis that change in physical structure with B. diandrus invasion is an important driver of community and trait composition. The traits of species able to coexist with invaders are rarely considered when assessing community change following invasion; however, this may be a powerful approach for predicting community change in environments with high anthropogenic pressures, such as disturbance and nutrient enrichment. It also provides a means for selecting species to introduce when trying to enhance native diversity in an otherwise invaded community.", "which Ecological Level of evidence ?", "Community", 566.0, 575.0], ["In this study, evidence for interspecific interaction was provided by comparing distribution patterns of nonnative rainbow trout Onchorhynchus mykiss and brown trout Salmo trutta between the past and present in the Chitose River system, Hokkaido, northern Japan. O. mykiss was first introduced in 1920 in the Chitose River system and has since successfully established a population. Subsequently, another nonnative salmonid species, S. trutta have expanded the Chitose River system since the early 1980s. At present, S. trutta have replaced O. mykiss in the majority of the Chitose River, although O. mykiss have persisted in areas above migration barriers that prevent S. trutta expansion. In conclusion, the results of this study highlight the role of interspecific interactions between sympatric nonnative species on the establishment and persistence of populations of nonnative species.", "which Ecological Level of evidence ?", "Population", 371.0, 381.0], ["Summary Biological invasions threaten ecosystem integrity and biodiversity, with numerous adverse implications for native flora and fauna. Established populations of two notorious freshwater invaders, the snail Tarebia granifera and the fish Pterygoplichthys disjunctivus, have been reported on three continents and are frequently predicted to be in direct competition with native species for dietary resources. Using comparisons of species' isotopic niche widths and stable isotope community metrics, we investigated whether the diets of the invasive T. granifera and P. disjunctivus overlapped with those of native species in a highly invaded river. We also attempted to resolve diet composition for both species, providing some insight into the original pathway of invasion in the Nseleni River, South Africa. Stable isotope metrics of the invasive species were similar to or consistently mid-range in comparison with their native counterparts, with the exception of markedly more uneven spread in isotopic space relative to indigenous species. Dietary overlap between the invasive P. 
disjunctivus and native fish was low, with the majority of shared food resources having overlaps of <0.26. The invasive T. granifera showed effectively no overlap with the native planorbid snail. However, there was a high degree of overlap between the two invasive species (\u22480.86). Bayesian mixing models indicated that detrital mangrove Barringtonia racemosa leaves contributed the largest proportion to P. disjunctivus diet (0.12\u20130.58), while the diet of T. granifera was more variable with high proportions of detrital Eichhornia crassipes (0.24\u20130.60) and Azolla filiculoides (0.09\u20130.33) as well as detrital Barringtonia racemosa leaves (0.00\u20130.30). Overall, although the invasive T. granifera and P. disjunctivus were not in direct competition for dietary resources with native species in the Nseleni River system, their spread in isotopic space suggests they are likely to restrict energy available to higher consumers in the food web. Establishment of these invasive populations in the Nseleni River is thus probably driven by access to resources unexploited or unavailable to native residents.", "which Ecological Level of evidence ?", "Population", NaN, NaN], ["During the 20th century, deer (family Cervidae), both native and introduced populations, dramatically increased in abundance in many parts of the world and became seen as major threats to biodiversity in forest ecosystems. Here, we evaluated the consequences that restoring top\u2010down herbivore population control has on plants and birds.", "which Ecological Level of evidence ?", "Population", 293.0, 303.0], ["What are the relative roles of mechanisms underlying plant responses in grassland communities invaded by both plants and mammals? What type of community can we expect in the future given current or novel conditions? We address these questions by comparing Markov chain community models among treatments from a field experiment on invasive species on Robinson Crusoe Island, Chile. Because of seed dispersal, grazing and disturbance, we predicted that the exotic European rabbit (Oryctolagus cuniculus) facilitates epizoochorous exotic plants (plants with seeds that stick to the skin of an animal) at the expense of native plants. To test our hypothesis, we crossed rabbit exclosure treatments with disturbance treatments, and sampled the plant community in permanent plots over 3 years. We then estimated Markov chain model transition probabilities and found significant differences among treatments. As hypothesized, this modelling revealed that exotic plants survive better in disturbed areas, while natives prefer no rabbits or disturbance. Surprisingly, rabbits negatively affect epizoochorous plants. Markov chain dynamics indicate that an overall replacement of native plants by exotic plants is underway. Using a treatment-based approach to multi-species Markov chain models allowed us to examine the changes in the importance of mechanisms in response to experimental impacts on communities.", "which Ecological Level of evidence ?", "Community", 143.0, 152.0], ["Abstract The spread of nonnative species over the last century has profoundly altered freshwater ecosystems, resulting in novel species assemblages. Interactions between nonnative species may alter their impacts on native species, yet few studies have addressed multispecies interactions. The spread of whirling disease, caused by the nonnative parasite Myxobolus cerebralis, has generated declines in wild trout populations across western North America.
Westslope Cutthroat Trout Oncorhynchus clarkii lewisi in the northern Rocky Mountains are threatened by hybridization with introduced Rainbow Trout O. mykiss. Rainbow Trout are more susceptible to whirling disease than Cutthroat Trout and may be more vulnerable due to differences in spawning location. We hypothesized that the presence of whirling disease in a stream would (1) reduce levels of introgressive hybridization at the site scale and (2) limit the size of the hybrid zone at the whole-stream scale. We measured levels of introgression and the spatial ext...", "which Ecological Level of evidence ?", "Population", NaN, NaN], ["How interactions between exotic species affect invasion impact is a fundamental issue on both theoretical and applied grounds. Exotics can facilitate establishment and invasion of other exotics (invasional meltdown) or they can restrict them by re-establishing natural population control (as predicted by the enemy-release hypothesis). We studied forest invasion on an Argentinean island where 43 species of Pinaceae, including 60% of the world's recorded invasive Pinaceae, were introduced c. 1920 but where few species are colonizing pristine areas. In this area two species of Palearctic deer, natural enemies of most Pinaceae, were introduced 80 years ago. Expecting deer to help to control the exotics, we conducted a cafeteria experiment to assess deer preferences among the two dominant native species (a conifer, Austrocedrus chilensis, and a broadleaf, Nothofagus dombeyi) and two widely introduced exotic tree species (Pseudotsuga menziesii and Pinus ponderosa). Deer browsed much more intensively on native species than on exotic conifers, in terms of number of individuals attacked and degree of browsing. Deer preference for natives could potentially facilitate invasion by exotic pines. However, we hypothesize that the low rates of invasion currently observed can result at least partly from high densities of exotic deer, which, despite their preference for natives, can prevent establishment of both native and exotic trees. Other factors, not mutually exclusive, could produce the observed pattern. Our results underscore the difficulty of predicting how one introduced species will effect impact of another one.", "which Ecological Level of evidence ?", "Individual", NaN, NaN], ["Since 1995, Dikerogammarus villosus Sowinski, a Ponto-Caspian amphipod species, has been invading most of Western Europe's hydrosystems. D. villosus geographic extension and quickly increasing population density has enabled it to become a major component of macrobenthic assemblages in recipient ecosystems. The ecological characteristics of D. villosus on a mesohabitat scale were investigated at a station in the Moselle River. This amphipod is able to colonize a wide range of substratum types, thus posing a threat to all freshwater ecosystems. Rivers whose dominant substratum is cobbles and which have tree roots along the banks could harbour particularly high densities of D. villosus. A relationship exists between substratum particle size and the length of the individuals, and spatial segregation according to length was shown. This allows the species to limit intra-specific competition between generations while facilitating reproduction. A strong association exists between D. villosus and other Ponto-Caspian species, such as Dreissena polymorpha and Corophium curvispinum, in keeping with Invasional Meltdown Theory.
Four taxa (Coenagrionidae, Calopteryx splendens, Corophium curvispinum and Gammarus pulex) exhibited spatial niches that overlap significantly with that of D. villosus. According to the predatory behaviour of the newcomer, their populations may be severely impacted.", "which Ecological Level of evidence ?", "Population", 196.0, 206.0], ["The introduced brushtail possum (Trichosurus vulpecula) is a major environmental and agricultural pest in New Zealand but little information is available on the ecology of possums in drylands, which cover c. 19% of the country. Here, we describe a temporal snapshot of the diet and feeding preferences of possums in a dryland habitat in New Zealand's South Island, as well as movement patterns and survival rates. We also briefly explore spatial patterns in capture rates. We trapped 279 possums at an average capture rate of 9 possums per 100 trap nights. Capture rates on individual trap lines varied from 0 to 38%, decreased with altitude, and were highest in the eastern (drier) parts of the study area. Stomach contents were dominated by forbs and sweet briar (Rosa rubiginosa); both items were consumed preferentially relative to availability. Possums also strongly preferred crack willow (Salix fragilis), which was uncommon in the study area and consumed only occasionally, but in large amounts. Estimated activity areas of 29 possums radio-tracked for up to 12 months varied from 0.2 to 19.5 ha (mean 5.1 ha). Nine possums (4 male, 5 female) undertook dispersal movements (\u22651000 m), the longest of which was 4940 m. The most common dens of radio-collared possums were sweet briar shrubs, followed by rock outcrops. Estimated annual survival was 85% for adults and 54% for subadults. Differences between the diets, activity areas and den use of possums in this study and those in forest or farmland most likely reflect differences in availability and distribution of resources. Our results suggest that invasive willow and sweet briar may facilitate the existence of possums by providing abundant food and shelter. In turn, possums may facilitate the spread of weeds by acting as a seed vector. This basic ecological information will be useful in modelling and managing the impacts of possum populations in drylands.", "which Ecological Level of evidence ?", "Individual", 574.0, 584.0], ["The probability of a bird species going extinct on oceanic islands in the period since European colonization is predicted by the number of introduced predatory mammal species, but the exact mechanism driving this relationship is unknown. One possibility is that larger exotic predator communities include a wider array of predator functional types. These predator communities may target native bird species with a wider range of behavioral or life history characteristics. We explored the hypothesis that the functional diversity of the exotic predators drives bird species extinctions. We also tested how different combinations of functionally important traits of the predators explain variation in extinction probability.
Further, the impact of each additional predator may be facilitated by those already present, suggesting the possibility of \u201cinvasional meltdown.\u201d", "which Ecological Level of evidence ?", "Population", NaN, NaN], ["Abstract Studies have suggested that plant-based nutritional resources are important in promoting high densities of omnivorous and invasive ants, but there have been no direct tests of the effects of these resources on colony productivity. We conducted an experiment designed to determine the relative importance of plants and honeydew-producing insects feeding on plants to the growth of colonies of the invasive ant Solenopsis invicta (Buren). We found that colonies of S. invicta grew substantially when they only had access to unlimited insect prey; however, colonies that also had access to plants colonized by honeydew-producing Hemiptera grew significantly and substantially (\u224850%) larger. Our experiment also showed that S. invicta was unable to acquire significant nutritional resources directly from the Hemiptera host plant but acquired them indirectly from honeydew. Honeydew alone is unlikely to be sufficient for colony growth, however, and both carbohydrates abundant in plants and proteins abundant in animals are likely to be necessary for optimal growth. Our experiment provides important insight into the effects of a common tritrophic interaction among an invasive mealybug, Antonina graminis (Maskell), an invasive host grass, Cynodon dactylon L. Pers., and S. invicta in the southeastern United States, suggesting that interactions among these species can be important in promoting extremely high population densities of S. invicta.", "which Ecological Level of evidence ?", "Population", 1419.0, 1429.0], ["1. Freshwaters are subject to particularly high rates of species introductions; hence, invaders increasingly co-occur and may interact to enhance impacts on ecosystem structure and function. As trophic interactions are a key mechanism by which invaders influence communities, we used a combination of approaches to investigate the feeding preferences and community impacts of two globally invasive large benthic decapods that co-occur in freshwaters: the signal crayfish (Pacifastacus leniusculus) and Chinese mitten crab (Eriocheir sinensis). 2. In laboratory preference tests, both consumed similar food items, including chironomids, isopods and the eggs of two coarse fish species. In a comparison of predatory functional responses with a native crayfish (Austropotamobius pallipes), juvenile E. sinensis had a greater predatory intensity than the native A. pallipes on the keystone shredder Gammarus pulex, and also displayed a greater preference than P. leniusculus for this prey item. 3. In outdoor mesocosms (n = 16) used to investigate community impacts, the abundance of amphipods, isopods, chironomids and gastropods declined in the presence of decapods, and a decapod > gastropod > periphyton trophic cascade was detected when both species were present. Eriocheir sinensis affected a wider range of animal taxa than P. leniusculus. 4. Stable-isotope and gut-content analysis of wild-caught adult specimens of both invaders revealed a wide and overlapping range of diet items including macrophytes, algae, terrestrial detritus, macroinvertebrates and fish. Both decapods were similarly enriched in \u00b9\u2075N and occupied the same trophic level as Ephemeroptera, Odonata and Notonecta.
Eriocheir sinensis \u03b4\u00b9\u00b3C values were closely aligned with macrophytes indicating a reliance on energy from this basal resource, supported by evidence of direct consumption from gut contents. Pacifastacus leniusculus \u03b4\u00b9\u00b3C values were intermediate between those of terrestrial leaf litter and macrophytes, suggesting reliance on both allochthonous and autochthonous energy pathways. 5. Our results suggest that E. sinensis is likely to exert a greater per capita impact on the macroinvertebrate communities in invaded systems than P. leniusculus, with potential indirect effects on productivity and energy flow through the community.", "which Ecological Level of evidence ?", "Community", 355.0, 364.0], ["The impacts of alien plants on native richness are usually assessed at small spatial scales and in locations where the alien is at high abundance. But this raises two questions: to what extent do impacts occur where alien species are at low abundance, and do local impacts translate to effects at the landscape scale? In an analysis of 47 widespread alien plant species occurring across a 1,000 km\u00b2 landscape, we examined the relationship between their local abundance and native plant species richness in 594 grassland plots. We first defined the critical abundance at which these focal alien species were associated with a decline in native \u03b1\u2010richness (plot\u2010scale species numbers), and then assessed how this local decline was translated into declines in native species \u03b3\u2010richness (landscape\u2010scale species numbers). After controlling for sampling biases and environmental gradients that might lead to spurious relationships, we found that eight out of 47 focal alien species were associated with a significant decline in native \u03b1\u2010richness as their local abundance increased. Most of these significant declines started at low to intermediate classes of abundance. For these eight species, declines in native \u03b3\u2010richness were, on average, an order of magnitude (32.0 vs. 2.2 species) greater than those found for native \u03b1\u2010richness, mostly due to spatial homogenization of native communities. The magnitude of the decrease at the landscape scale was best explained by the number of plots where an alien species was found above its critical abundance. Synthesis. Even at low abundance, alien plants may impact native plant richness at both local and landscape scales. Local impacts may result in much greater declines in native richness at larger spatial scales. Quantifying impact at the landscape scale requires consideration of not only the prevalence of an alien plant, but also its critical abundance and its effect on native community homogenization. This suggests that management approaches targeting only those locations dominated by alien plants might not mitigate impacts effectively. Our integrated approach will improve the ranking of alien species risks at a spatial scale appropriate for prioritizing management and designing conservation policies.", "which Ecological Level of evidence ?", "Community", 1928.0, 1937.0], ["Plants with poorly attractive flowers or with little floral rewards may have inadequate pollinator service, which in turn reduces seed output. However, pollinator service of less attractive species could be enhanced when they are associated with species with highly attractive flowers (so called \u2018magnet-species\u2019).
Although several studies have reported the magnet species effect, few of them have evaluated whether this positive interaction results in an enhancement of the seed output for the beneficiary species. Here, we compared pollinator visitation rates and seed output of the invasive annual species Carduus pycnocephalus when growing associated with shrubs of the invasive Lupinus arboreus and when growing alone, and hypothesized that L. arboreus acts as a magnet species for C. pycnocephalus. Results showed that C. pycnocephalus individuals associated with L. arboreus had higher pollinator visitation rates and higher seed output than individuals growing alone. The higher visitation rates of C. pycnocephalus associated to L. arboreus were maintained after accounting for flower density, which consistently supports our hypothesis on the magnet species effect of L. arboreus. Given that both species are invasives, the facilitated pollination and reproduction of C. pycnocephalus by L. arboreus could promote its naturalization in the community, suggesting a synergistic invasional process contributing to an \u2018invasional meltdown\u2019. The magnet effect of Lupinus on Carduus found in this study seems to be one of the first examples of indirect facilitative interactions via increased pollination among invasive species.", "which Ecological Level of evidence ?", "Individual", NaN, NaN], ["Introduced stoats (Mustela erminea) are important invasive predators in southern beech (Nothofagus sp.) forests in New Zealand. In these forests, one of their primary prey species \u2013 introduced house mice (Mus musculus) \u2013 fluctuate dramatically between years, driven by the irregular heavy seed-fall (masting) of the beech trees. We examined the effects of mice on stoats in this system by comparing the weights, age structure and population densities of stoats caught on two large islands in Fiordland, New Zealand \u2013 one that has mice (Resolution Island) and one that does not (Secretary Island). On Resolution Island, the stoat population showed a history of recruitment spikes and troughs linked to beech masting, whereas the Secretary Island population had more constant recruitment, indicating that rodents are probably the primary cause for the \u2018boom and bust\u2019 population cycle of stoats in beech forests. Resolution Island stoats were 10% heavier on average than Secretary Island stoats, supporting the hypothesis that the availability of larger prey (mice versus w\u0113t\u0101) leads to larger stoats. Beech masting years on this island were also correlated with a higher weight for stoats born in the year of the masting event. The detailed demographic information on the stoat populations of these two islands supports previously suggested interactions among mice, stoats and beech masting. These interactions may have important consequences for the endemic species that interact with fluctuating populations of mice and stoats.", "which Ecological Level of evidence ?", "Population", 429.0, 439.0], ["Abstract Invasive animals can facilitate the success of invasive plant populations through disturbance. We examined the relationship between the repeated foraging disturbance of an invasive animal and the population maintenance of an invasive plant in a coastal dune ecosystem.
We hypothesized that feral wild hog (Sus scrofa) populations repeatedly utilized tubers of the clonal perennial, yellow nutsedge (Cyperus esculentus) as a food source and evaluated whether hog activity promoted the long\u2010term maintenance of yellow nutsedge populations on St. Catherine's Island, Georgia, United States. Using generalized linear mixed models, we tested the effect of wild hog disturbance on permanent sites for yellow nutsedge culm density, tuber density, and percent cover of native plant species over a 12\u2010year period. We found that disturbance plots had a higher number of culms and tubers and a lower percentage of native live plant cover than undisturbed control plots. Wild hogs redisturbed the disturbed plots approximately every 5 years. Our research provides demographic evidence that repeated foraging disturbances by an invasive animal promote the long\u2010term population maintenance of an invasive clonal plant. Opportunistic facultative interactions such as we demonstrate in this study are likely to become more commonplace as greater numbers of introduced species are integrated into ecological communities around the world.", "which Ecological Level of evidence ?", "Population", 205.0, 215.0], ["Multiple invasive species have now established at most locations around the world, and the rate of new species invasions and records of new invasive species continue to grow. Multiple invasive species interact in complex and unpredictable ways, altering their invasion success and impacts on biodiversity. Incumbent invasive species can be replaced by functionally similar invading species through competitive processes; however the generalized circumstances leading to such competitive displacement have not been well investigated. The likelihood of competitive displacement is a function of the incumbent advantage of the resident invasive species and the propagule pressure of the colonizing invasive species. We modeled interactions between populations of two functionally similar invasive species and indicated the circumstances under which dominance can be through propagule pressure and incumbent advantage. Under certain circumstances, a normally subordinate species can be incumbent and reject a colonizing dominant species, or successfully colonize in competition with a dominant species during simultaneous invasion. Our theoretical results are supported by empirical studies of the invasion of islands by three invasive Rattus species. Competitive displacement is prominent in invasive rats and explains the replacement of R. exulans on islands subsequently invaded by European populations of R. rattus and R. norvegicus. These competition outcomes between invasive species can be found in a broad range of taxa and biomes, and are likely to become more common. Conservation management must consider that removing an incumbent invasive species may facilitate invasion by another invasive species. Under very restricted circumstances of dominant competitive ability but lesser impact, competitive displacement may provide a novel method of biological control.", "which Ecological Level of evidence ?", "Population", NaN, NaN], ["Abstract During the early 1990s, 2 Eurasian macrofouling mollusks, the zebra mussel Dreissena polymorpha and the quagga mussel D. bugensis, colonized the freshwater section of the St. Lawrence River and decimated native mussel populations through competitive interference. 
For several years, zebra mussels dominated molluscan biomass in the river; however, quagga mussels have increased in abundance and are apparently displacing zebra mussels from the Soulanges Canal, west of the Island of Montreal. The ratio of quagga mussel biomass to zebra mussel biomass on the canal wall is correlated with depth, and quagga mussels constitute >99% of dreissenid biomass on bottom sediments. This dominance shift did not substantially affect the total dreissenid biomass, which has remained at 3 to 5 kg fresh mass/m\u00b2 on the canal walls for nearly a decade. The mechanism for this shift is unknown, but may be related to a greater bioenergetic efficiency for quaggas, which attained larger shell sizes than zebra mussels at all depths. Similar events have occurred in the lower Great Lakes where zebra mussels once dominated littoral macroinvertebrate biomass, demonstrating that a well-established and prolific invader can be replaced by another introduced species without prior extinction.", "which Ecological Level of evidence ?", "Population", NaN, NaN], ["Nitrogen availability affects both plant growth and the preferences of herbivores. We hypothesized that an interaction between these two factors could affect the early establishment of native and exotic species differently, promoting invasion in natural systems. Taxonomically paired native and invasive species (Acer platanoides, Acer rubrum, Lonicera maackii, Diervilla lonicera, Celastrus orbiculata, Celastrus scandens, Elaeagnus umbellata, Ceanothus americanus, Ampelopsis brevipedunculata, and Vitis riparia) were grown in relatively high-resource (hardwood forests) and low-resource (pine barrens) communities on Long Island, New York, for a period of 3 months. Plants were grown in ambient and nitrogen-enhanced conditions in both communities. Nitrogen additions produced an average 12% initial increase in leaf number of all plants. By the end of the experiment, invasive species outperformed native species in nitrogen-enhanced plots in hardwood forests, where all plants experienced increased damage relative to control plots. Native species experienced higher overall amounts of damage in hardwood forests, losing, on average, 45% more leaves than exotic species, and only native species experienced a decline in growth rates (32% compared with controls). In contrast, in pine barrens, there were no differences in damage and no differences in performance between native and invasive plants. Our results suggest that unequal damage by natural enemies may play a role in determining community composition by shifting the competitive advantage to exotic species in nitrogen-enhanced environments. FOR. SCI. 53(6):701-709.", "which Indicator for enemy release ?", "Performance", 1356.0, 1367.0], ["The enemy release hypothesis (ERH) is often cited to explain why some plants successfully invade natural communities while others do not. This hypothesis maintains that plant populations are regulated by coevolved enemies in their native range but are relieved of this pressure where their enemies have not been co-introduced. Some studies have shown that invasive plants sustain lower levels of herbivore damage when compared to native species, but how damage affects fitness and population dynamics remains unclear. We used a system of co-occurring native and invasive Eugenia congeners in south Florida (USA) to experimentally test the ERH, addressing deficiencies in our understanding of the role of natural enemies in plant invasion at the population level.
Insecticide was used to experimentally exclude insect herbivores from invasive Eugenia uniflora and its native co-occurring congeners in the field for two years. Herbivore damage, plant growth, survival, and population growth rates for the three species were then compared for control and insecticide-treated plants. Our results contradict the ERH, indicating that E. uniflora sustains more herbivore damage than its native congeners and that this damage negatively impacts stem height, survival, and population growth. In addition, most damage to E. uniflora, a native of Brazil, is carried out by Myllocerus undatus, a recently introduced weevil from Sri Lanka, and M. undatus attacks a significantly greater proportion of E. uniflora leaves than those of its native congeners. This interaction is particularly interesting because M. undatus and E. uniflora share no coevolutionary history, having arisen on two separate continents and come into contact on a third. Our study is the first to document negative population-level effects for an invasive plant as a result of the introduction of a novel herbivore. Such inhibitory interactions are likely to become more prevalent as suites of previously noninteracting species continue to accumulate and new communities assemble worldwide.", "which Indicator for enemy release ?", "Damage", 406.0, 412.0], ["The Enemies Hypothesis predicts that alien plants have a competitive ad- vantage over native plants because they are often introduced with few herbivores or diseases. To investigate this hypothesis, we transplanted seedlings of the invasive alien tree, Sapium sebiferum (Chinese tallow tree) and an ecologically similar native tree, Celtis laevigata (hackberry), into mesic forest, floodplain forest, and coastal prairie sites in east Texas and manipulated foliar fungal diseases and insect herbivores with fungicidal and insecticidal sprays. As predicted by the Enemies Hypothesis, insect herbivores caused significantly greater damage to untreated Celtis seedlings than to untreated Sapium seedlings. However, contrary to predictions, suppression of insect herbivores caused significantly greater in- creases in survivorship and growth of Sapium seedlings compared to Celtis seedlings. Regressions suggested that Sapium seedlings compensate for damage in the first year but that this greatly increases the risk of mortality in subsequent years. Fungal diseases had no effects on seedling survival or growth. The Recruitment Limitation Hypothesis predicts that the local abundance of a species will depend more on local seed input than on com- petitive ability at that location. To investigate this hypothesis, we added seeds of Celtis and Sapium on and off of artificial soil disturbances at all three sites. Adding seeds increased the density of Celtis seedlings and sometimes Sapium seedlings, with soil disturbance only affecting density of Celtis. Together the results of these experiments suggest that the success of Sapium may depend on high rates of seed input into these ecosystems and high growth potential, as well as performance advantages of seedlings caused by low rates of herbivory.", "which Indicator for enemy release ?", "Performance", 1730.0, 1741.0], ["Nitrogen availability affects both plant growth and the preferences of herbivores. We hypothesized that an interaction between these two factors could affect the early establishment of native and exotic species differently, promoting invasion in natural systems. 
Taxonomically paired native and invasive species (Acer platanoides, Acer rubrum, Lonicera maackii, Diervilla lonicera, Celastrus orbiculata, Celastrus scandens, Elaeagnus umbellata, Ceanothus americanus, Ampelopsis brevipedunculata, and Vitis riparia) were grown in relatively high-resource (hardwood forests) and low-resource (pine barrens) communities on Long Island, New York, for a period of 3 months. Plants were grown in ambient and nitrogen-enhanced conditions in both communities. Nitrogen additions produced an average 12% initial increase in leaf number of all plants. By the end of the experiment, invasive species outperformed native species in nitrogen-enhanced plots in hardwood forests, where all plants experienced increased damage relative to control plots. Native species experienced higher overall amounts of damage in hardwood forests, losing, on average, 45% more leaves than exotic species, and only native species experienced a decline in growth rates (32% compared with controls). In contrast, in pine barrens, there were no differences in damage and no differences in performance between native and invasive plants. Our results suggest that unequal damage by natural enemies may play a role in determining community composition by shifting the competitive advantage to exotic species in nitrogen-enhanced environments. FOR. SCI. 53(6):701-709.", "which Indicator for enemy release ?", "Damage", 1004.0, 1010.0], ["Escape from natural enemies is a widely held generalization for the success of exotic plants. We conducted a large-scale experiment in Hawaii (USA) to quantify impacts of ungulate removal on plant growth and performance, and to test whether elimination of an exotic generalist herbivore facilitated exotic success. Assessment of impacted and control sites before and after ungulate exclusion using airborne imaging spectroscopy and LiDAR, time series satellite observations, and ground-based field studies over nine years indicated that removal of generalist herbivores facilitated exotic success, but the abundance of native species was unchanged. Vegetation cover <1 m in height increased in ungulate-free areas from 48.7% +/- 1.5% to 74.3% +/- 1.8% over 8.4 years, corresponding to an annualized growth rate of lambda = 1.05 +/- 0.01 yr(-1) (median +/- SD). Most of the change was attributable to exotic plant species, which increased from 24.4% +/- 1.4% to 49.1% +/- 2.0%, (lambda = 1.08 +/- 0.01 yr(-1)). Native plants experienced no significant change in cover (23.0% +/- 1.3% to 24.2% +/- 1.8%, lambda = 1.01 +/- 0.01 yr(-1)). Time series of satellite phenology were indistinguishable between the treatment and a 3.0-km2 control site for four years prior to ungulate removal, but they diverged immediately following exclusion of ungulates. Comparison of monthly EVI means before and after ungulate exclusion and between the managed and control areas indicates that EVI strongly increased in the managed area after ungulate exclusion. Field studies and airborne analyses show that the dominant invader was Senecio madagascariensis, an invasive annual forb that increased from < 0.01% to 14.7% fractional cover in ungulate-free areas (lambda = 1.89 +/- 0.34 yr(-1)), but which was nearly absent from the control site. A combination of canopy LAI, water, and fractional cover were expressed in satellite EVI time series and indicate that the invaded region maintained greenness during drought conditions. 
These findings demonstrate that enemy release from generalist herbivores can facilitate exotic success and suggest a plausible mechanism by which invasion occurred. They also show how novel remote-sensing technology can be integrated with conservation and management to help address exotic plant invasions.", "which Indicator for enemy release ?", "Performance", 208.0, 219.0], ["The Enemies Hypothesis predicts that alien plants have a competitive ad- vantage over native plants because they are often introduced with few herbivores or diseases. To investigate this hypothesis, we transplanted seedlings of the invasive alien tree, Sapium sebiferum (Chinese tallow tree) and an ecologically similar native tree, Celtis laevigata (hackberry), into mesic forest, floodplain forest, and coastal prairie sites in east Texas and manipulated foliar fungal diseases and insect herbivores with fungicidal and insecticidal sprays. As predicted by the Enemies Hypothesis, insect herbivores caused significantly greater damage to untreated Celtis seedlings than to untreated Sapium seedlings. However, contrary to predictions, suppression of insect herbivores caused significantly greater in- creases in survivorship and growth of Sapium seedlings compared to Celtis seedlings. Regressions suggested that Sapium seedlings compensate for damage in the first year but that this greatly increases the risk of mortality in subsequent years. Fungal diseases had no effects on seedling survival or growth. The Recruitment Limitation Hypothesis predicts that the local abundance of a species will depend more on local seed input than on com- petitive ability at that location. To investigate this hypothesis, we added seeds of Celtis and Sapium on and off of artificial soil disturbances at all three sites. Adding seeds increased the density of Celtis seedlings and sometimes Sapium seedlings, with soil disturbance only affecting density of Celtis. Together the results of these experiments suggest that the success of Sapium may depend on high rates of seed input into these ecosystems and high growth potential, as well as performance advantages of seedlings caused by low rates of herbivory.", "which Indicator for enemy release ?", "Damage", 630.0, 636.0], ["\n\nInvasive plant species may benefit from a reduction in herbivory in their introduced range. The reduced herbivory may cause a reallocation of resources from defence to fitness. Here, we evaluated leaf herbivory of an invasive tree species (Ligustrum lucidum Aiton) in its native and novel ranges, and determined the potential changes in leaf traits that may be associated with the patterns of herbivory. We measured forest structure, damage by herbivores and leaf traits in novel and native ranges, and on the basis of the literature, we identified the common natural herbivores of L. lucidum. We also performed an experiment offering leaves from both ranges to a generalist herbivore (Spodoptera frugiperda). L. lucidum was more abundant and experienced significantly less foliar damage in the novel than in the native range, in spite of the occurrence of several natural herbivores. The reduced lignin content and lower lignin\u2009:\u2009N ratio in novel leaves, together with the higher herbivore preference for leaves of this origin in the laboratory experiment, indicated lower herbivore resistance in novel than in native populations. The reduced damage by herbivores is not the only factor explaining invasion success, but it may be an important cause that enhances the invasiveness of L. 
lucidum.\n", "which Indicator for enemy release ?", "Damage", 444.0, 450.0], ["Numerous hypotheses suggest that natural enemies can influence the dynamics of biological invasions. Here, we use a group of 12 related native, invasive, and naturalized vines to test the relative importance of resistance and tolerance to herbivory in promoting biological invasions. In a field experiment in Long Island, New York, we excluded mammal and insect herbivores and examined plant growth and foliar damage over two growing seasons. This novel approach allowed us to compare the relative damage from mammal and insect herbivores and whether damage rates were related to invasion. In a greenhouse experiment, we simulated herbivory through clipping and measured growth response. After two seasons of excluding herbivores, there was no difference in relative growth rates among invasive, naturalized, and native woody vines, and all vines were susceptible to damage from mammal and insect herbivores. Thus, differential attack by herbivores and plant resistance to herbivory did not explain invasion success of these species. In the field, where damage rates were high, none of the vines were able to fully compensate for damage from mammals. However, in the greenhouse, we found that invasive vines were more tolerant of simulated herbivory than native and naturalized relatives. Our results indicate that invasive vines are not escaping herbivory in the novel range, rather they are persisting despite high rates of herbivore damage in the field. While most studies of invasive plants and natural enemies have focused on resistance, this work suggests that tolerance may also play a large role in facilitating invasions.", "which Indicator for enemy release ?", "Damage", 410.0, 416.0], ["The enemy release hypothesis predicts that native herbivores will either prefer or cause more damage to native than introduced plant species. We tested this using preference and performance experiments in the laboratory and surveys of leaf damage caused by the magpie moth Nyctemera amica on a co-occuring native and introduced species of fireweed (Senecio) in eastern Australia. In the laboratory, ovipositing females and feeding larvae preferred the native S. pinnatifolius over the introduced S. madagascariensis. Larvae performed equally well on foliage of S. pinnatifolius and S. madagascariensis: pupal weights did not differ between insects reared on the two species, but growth rates were significantly faster on S. pinnatifolius. In the field, foliage damage was significantly greater on native S. pinnatifolius than introduced S. madagascariensis. These results support the enemy release hypothesis, and suggest that the failure of native consumers to switch to introduced species contributes to their invasive success. Both plant species experienced reduced, rather than increased, levels of herbivory when growing in mixed populations, as opposed to pure stands in the field; thus, there was no evidence that apparent competition occurred.", "which Indicator for enemy release ?", "Damage", 94.0, 100.0], ["Enemy release is frequently posed as a main driver of invasiveness of alien species. However, an experimental multi-species test examining performance and herbivory of invasive alien, non-invasive alien and native plant species in the presence and absence of natural enemies is lacking. 
In a common garden experiment in Switzerland, we manipulated exposure of seven alien invasive, eight alien non-invasive and fourteen native species from six taxonomic groups to natural enemies (invertebrate herbivores), by applying a pesticide treatment under two different nutrient levels. We assessed biomass production, herbivore damage and the major herbivore taxa on plants. Across all species, plants gained significantly greater biomass under pesticide treatment. However, invasive, non-invasive and native species did not differ in their biomass response to pesticide treatment at either nutrient level. The proportion of leaves damaged on invasive species was significantly lower compared to native species, but not when compared to non-invasive species. However, the difference was lost when plant size was accounted for. There were no differences between invasive, non-invasive and native species in herbivore abundance. Our study offers little support for invertebrate herbivore release as a driver of plant invasiveness, but suggests that future enemy release studies should account for differences in plant size among species.", "which Indicator for enemy release ?", "Damage", 620.0, 626.0], ["We surveyed naturally occurring leaf herbivory in nine invasive and nine non-invasive exotic plant species sampled in natural areas in Ontario, New York and Massachusetts, and found that invasive plants experienced, on average, 96% less leaf damage than non-invasive species. Invasive plants were also more taxonomically isolated than non-invasive plants, belonging to families with 75% fewer native North American genera. However, the relationship between taxonomic isolation at the family level and herbivory was weak. We suggest that invasive plants may possess novel phytochemicals with anti-herbivore properties in addition to allelopathic and anti-microbial characteristics. Herbivory could be employed as an easily measured predictor of the likelihood that recently introduced exotic plants may become invasive.", "which Indicator for enemy release ?", "Damage", 242.0, 248.0], ["The enemy-release hypothesis (ERH) states that species become more successful in their introduced range than in their native range because they leave behind natural enemies in their native range and are thus \"released\" from enemy pressures in their introduced range. The ERH is popularly cited to explain the invasive properties of many species and is the underpinning of biological control. We tested the prediction that plant populations are more strongly regulated by natural enemies (herbivores and pathogens) in their native range than in their introduced range with enemy-removal experiments using pesticides. These experiments were replicated at multiple sites in both the native and invaded ranges of the grass Brachypodium sylvaticum. In support of the ERH, enemies consistently regulated populations in the native range. There were more tillers and more seeds produced in treated vs. untreated plots in the native range, and few seedlings survived in the native range. Contrary to the ERH, total measured leaf damage was similar in both ranges, though the enemies that caused it differed. There was more damage by generalist mollusks and pathogens in the native range, and more damage by generalist insect herbivores in the invaded range. 
Demographic analysis showed that population growth rates were lower in the native range than in the invaded range, and that sexually produced seedlings constituted a smaller fraction of the total in the native range. Our removal experiment showed that enemies regulate plant populations in their native range and suggest that generalist enemies, not just specialists, are important for population regulation.", "which Indicator for enemy release ?", "Damage", 1020.0, 1026.0], ["An important question in the study of biological invasions is the degree to which successful invasion can be explained by release from control by natural enemies. Natural enemies dominate explanations of two alternate phenomena: that most introduced plants fail to establish viable populations (biotic resistance hypothesis) and that some introduced plants become noxious invaders (natural enemies hypothesis). We used a suite of 18 phylogenetically related native and nonnative clovers (Trifolium and Medicago) and the foliar pathogens and invertebrate herbivores that attack them to answer two questions. Do native species suffer greater attack by natural enemies relative to introduced species at the same site? Are some introduced species excluded from native plant communities because they are susceptible to local natural enemies? We address these questions using three lines of evidence: (1) the frequency of attack and composition of fungal pathogens and herbivores for each clover species in four years of common garden experiments, as well as susceptibility to inoculation with a common pathogen; (2) the degree of leaf damage suffered by each species in common garden experiments; and (3) fitness effects estimated using correlative approaches and pathogen removal experiments. Introduced species showed no evidence of escape from pathogens, being equivalent to native species as a group in terms of infection levels, susceptibility, disease prevalence, disease severity (with more severe damage on introduced species in one year), the influence of disease on mortality, and the effect of fungicide treatment on mortality and biomass. In contrast, invertebrate herbivores caused more damage on native species in two years, although the influence of herbivore attack on mortality did not differ between native and introduced species. Within introduced species, the predictions of the biotic resistance hypothesis were not supported: the most invasive species showed greater infection, greater prevalence and severity of disease, greater prevalence of herbivory, and greater effects of fungicide on biomass and were indistinguishable from noninvasive introduced species in all other respects. Therefore, although herbivores preferred native over introduced species, escape from pest pressure cannot be used to explain why some introduced clovers are common invaders in coastal prairie while others are not.", "which Indicator for enemy release ?", "Damage", 1130.0, 1136.0], ["To shed light on the process of how exotic species become invasive, it is necessary to study them both in their native and non\u2010native ranges. Our intent was to measure differences in herbivory, plant growth and the impact on other species in Fallopia japonica in its native and non\u2010native ranges. We performed a cross\u2010range full descriptive, field study in Japan (native range) and France (non\u2010native range). 
We assessed DNA ploidy levels, the presence of phytophagous enemies, the amount of leaf damage, several growth parameters and the co\u2010occurrence of Fallopia japonica with other plant species of herbaceous communities. Invasive Fallopia japonica plants were all octoploid, a ploidy level we did not encounter in the native range, where plants were all tetraploid. Octoploids in France harboured far less phytophagous enemies, suffered much lower levels of herbivory, grew larger and had a much stronger impact on plant communities than tetraploid conspecifics in the native range in Japan. Our data confirm that Fallopia japonica performs better \u2013 plant vigour and dominance in the herbaceous community \u2013 in its non\u2010native than its native range. Because we could not find octoploids in the native range, we cannot separate the effects of differences in ploidy from other biogeographic factors. To go further, common garden experiments would now be needed to disentangle the proper role of each factor, taking into account the ploidy levels of plants in their native and non\u2010native ranges. Synthesis. As the process by which invasive plants successfully invade ecosystems in their non\u2010native range is probably multifactorial in most cases, examining several components \u2013 plant growth, herbivory load, impact on recipient systems \u2013 of plant invasions through biogeographic comparisons is important. Our study contributes towards filling this gap in the research, and it is hoped that this method will spread in invasion ecology, making such an approach more common.", "which Indicator for enemy release ?", "Damage", 497.0, 503.0], ["The Natural Enemies Hypothesis (i.e., introduced species experience release from their natural enemies) is a common explanation for why invasive species are so successful. We tested this hypothesis for Ammophila arenaria (Poaceae: European beachgrass), an aggressive plant invading the coastal dunes of California, USA, by comparing the demographic effects of belowground pathogens on A. arenaria in its introduced range to those reported in its native range. European research on A. arenaria in its native range has established that soil-borne pathogens, primarily nematodes and fungi, reduce A. arenaria's growth. In a greenhouse experiment designed to parallel European studies, seeds and 2-wk-old seedlings were planted in sterilized and nonsterilized soil collected from the A. arenaria root zone in its introduced range of California. We assessed the effects of pathogens via soil sterilization on three early performance traits: seed germination, seedling survival, and plant growth. We found that seed germinatio...", "which Indicator for enemy release ?", "Performance", 916.0, 927.0], ["The enemy release hypothesis posits that non-native plant species may gain a competitive advantage over their native counterparts because they are liberated from co-evolved natural enemies from their native area. The phylogenetic relationship between a non-native plant and the native community may be important for understanding the success of some non-native plants, because host switching by insect herbivores is more likely to occur between closely related species. We tested the enemy release hypothesis by comparing leaf damage and herbivorous insect assemblages on the invasive species Senecio madagascariensis Poir. to that on nine congeneric species, of which five are native to the study area, and four are non-native but considered non-invasive. 
Non-native species had less leaf damage than natives overall, but we found no significant differences in the abundance, richness and Shannon diversity of herbivores between native and non-native Senecio L. species. The herbivore assemblage and percentage abundance of herbivore guilds differed among all Senecio species, but patterns were not related to whether the species was native or not. Species-level differences indicate that S. madagascariensis may have a greater proportion of generalist insect damage (represented by phytophagous leaf chewers) than the other Senecio species. Within a plant genus, escape from natural enemies may not be a sufficient explanation for why some non-native species become more invasive than others.", "which Indicator for enemy release ?", "Damage", 527.0, 533.0], ["A central question in ecology concerns how some exotic plants that occur at low densities in their native range are able to attain much higher densities where they are introduced. This question has remained unresolved in part due to a lack of experiments that assess factors that affect the population growth or abundance of plants in both ranges. We tested two hypotheses for exotic plant success: escape from specialist insect herbivores and a greater response to disturbance in the introduced range. Within three introduced populations in Montana, USA, and three native populations in Germany, we experimentally manipulated insect herbivore pressure and created small-scale disturbances to determine how these factors affect the performance of houndstongue (Cynoglossum officinale), a widespread exotic in western North America. Herbivores reduced plant size and fecundity in the native range but had little effect on plant performance in the introduced range. Small-scale experimental disturbances enhanced seedling recruitment in both ranges, but subsequent seedling survival was more positively affected by disturbance in the introduced range. We combined these experimental results with demographic data from each population to parameterize integral projection population models to assess how enemy escape and disturbance might differentially influence C. officinale in each range. Model results suggest that escape from specialist insects would lead to only slight increases in the growth rate (lambda) of introduced populations. In contrast, the larger response to disturbance in the introduced vs. native range had much greater positive effects on lambda. These results together suggest that, at least in the regions where the experiments were performed, the differences in response to small disturbances by C. officinale contribute more to higher abundance in the introduced range compared to at home. Despite the challenges of conducting experiments on a wide biogeographic scale and the logistical constraints of adequately sampling populations within a range, this approach is a critical step forward to understanding the success of exotic plants.", "which Indicator for enemy release ?", "Performance", 732.0, 743.0], ["Enemy release is frequently posed as a main driver of invasiveness of alien species. However, an experimental multi-species test examining performance and herbivory of invasive alien, non-invasive alien and native plant species in the presence and absence of natural enemies is lacking. 
In a common garden experiment in Switzerland, we manipulated exposure of seven alien invasive, eight alien non-invasive and fourteen native species from six taxonomic groups to natural enemies (invertebrate herbivores), by applying a pesticide treatment under two different nutrient levels. We assessed biomass production, herbivore damage and the major herbivore taxa on plants. Across all species, plants gained significantly greater biomass under pesticide treatment. However, invasive, non-invasive and native species did not differ in their biomass response to pesticide treatment at either nutrient level. The proportion of leaves damaged on invasive species was significantly lower compared to native species, but not when compared to non-invasive species. However, the difference was lost when plant size was accounted for. There were no differences between invasive, non-invasive and native species in herbivore abundance. Our study offers little support for invertebrate herbivore release as a driver of plant invasiveness, but suggests that future enemy release studies should account for differences in plant size among species.", "which Indicator for enemy release ?", "Performance", 139.0, 150.0], ["In their colonized ranges, exotic plants may be released from some of the herbivores or pathogens of their home ranges but these can be replaced by novel enemies. It is of basic and practical interest to understand which characteristics of invaded communities control accumulation of the new pests. Key questions are whether enemy load on exotic species is smaller than on native competitors as suggested by the enemy release hypothesis (ERH) and whether this difference is most pronounced in resource\u2010rich habitats as predicted by the resource\u2013enemy release hypothesis (R\u2010ERH). In 72 populations of 12 exotic invasive species, we scored all visible above\u2010ground damage morphotypes caused by herbivores and fungal pathogens. In addition, we quantified levels of leaf herbivory and fruit damage. We then assessed whether variation in damage diversity and levels was explained by habitat fertility, by relatedness between exotic species and the native community or rather by native species diversity. In a second part of the study, we also tested the ERH and the R\u2010ERH by comparing damage of plants in 28 pairs of co\u2010occurring native and exotic populations, representing nine congeneric pairs of native and exotic species. In the first part of the study, diversity of damage morphotypes and damage levels of exotic populations were greater in resource\u2010rich habitats. Co\u2010occurrence of closely related, native species in the community significantly increased the probability of fruit damage. Herbivory on exotics was less likely in communities with high phylogenetic diversity. In the second part of the study, exotic and native congeneric populations incurred similar damage diversity and levels, irrespective of whether they co\u2010occurred in nutrient\u2010poor or nutrient\u2010rich habitats. Synthesis. We identified habitat productivity as a major community factor affecting accumulation of enemy damage by exotic populations. 
Similar damage levels in exotic and native congeneric populations, even in species pairs from fertile habitats, suggest that the enemy release hypothesis or the R\u2010ERH cannot always explain the invasiveness of introduced species.", "which Indicator for enemy release ?", "Damage", 663.0, 669.0], ["1 Acer platanoides (Norway maple) is an important non\u2010native invasive canopy tree in North American deciduous forests, where native species diversity and abundance are greatly reduced under its canopy. We conducted a field experiment in North American forests to compare planted seedlings of A. platanoides and Acer saccharum (sugar maple), a widespread, common native that, like A. platanoides, is shade tolerant. Over two growing seasons in three forests we compared multiple components of seedling success: damage from natural enemies, ecophysiology, growth and survival. We reasoned that equal or superior performance by A. platanoides relative to A. saccharum indicates seedling characteristics that support invasiveness, while inferior performance indicates potential barriers to invasion. 2 Acer platanoides seedlings produced more leaves and allocated more biomass to roots, A. saccharum had greater water use efficiency, and the two species exhibited similar photosynthesis and first\u2010season mortality rates. Acer platanoides had greater winter survival and earlier spring leaf emergence, but second\u2010season mortality rates were similar. 3 The success of A. platanoides seedlings was not due to escape from natural enemies, contrary to the enemy release hypothesis. Foliar insect herbivory and disease symptoms were similarly high for both native and non\u2010native, and seedling biomass did not differ. Rather, A. platanoides compared well with A. saccharum because of its equivalent ability to photosynthesize in the low light herb layer, its higher leaf production and greater allocation to roots, and its lower winter mortality coupled with earlier spring emergence. Its only potential barrier to seedling establishment, relative to A. saccharum, was lower water use efficiency, which possibly could hinder its invasion into drier forests. 4 The spread of non\u2010native canopy trees poses an especially serious problem for native forest communities, because canopy trees strongly influence species in all forest layers. Success at reaching the canopy depends on a tree's ecology in previous life\u2010history stages, particularly as a vulnerable seedling, but little is known about seedling characteristics that promote non\u2010native tree invasion. Experimental field comparison with ecologically successful native trees provides insight into why non\u2010native trees succeed as seedlings, which is a necessary stage on their journey into the forest canopy.", "which Indicator for enemy release ?", "Performance", 610.0, 621.0], ["Abstract The enemies release hypothesis proposes that exotic species can become invasive by escaping from predators and parasites in their novel environment. Agrawal et al. (Enemy release? An experiment with congeneric plant pairs and diverse above\u2010 and below\u2010ground enemies. Ecology, 86, 2979\u20132989) proposed that areas or times in which damage to introduced species is low provide opportunities for the invasion of native habitat. We tested whether ornamental settings may provide areas with low levels of herbivory for trees and shrubs, potentially facilitating invasion success. 
First, we compared levels of leaf herbivory among native and exotic species in ornamental and natural settings in Cincinnati, Ohio, United States. In the second study, we compared levels of herbivory for invasive and noninvasive exotic species between natural and ornamental settings. We found lower levels of leaf damage for exotic species than for native species; however, we found no differences in the amount of leaf damage suffered in ornamental or natural settings. Our results do not provide any evidence that ornamental settings afford additional release from herbivory for exotic plant species.", "which Indicator for enemy release ?", "Damage", 338.0, 344.0], ["One explanation for the higher abundance of invasive species in their non-native than native ranges is the escape from natural enemies. But there are few experimental studies comparing the parallel impact of enemies (or competitors and mutualists) on a plant species in its native and invaded ranges, and release from soil pathogens has been rarely investigated. Here we present evidence showing that the invasion of black cherry (Prunus serotina) into north-western Europe is facilitated by the soil community. In the native range in the USA, the soil community that develops near black cherry inhibits the establishment of neighbouring conspecifics and reduces seedling performance in the greenhouse. In contrast, in the non-native range, black cherry readily establishes in close proximity to conspecifics, and the soil community enhances the growth of its seedlings. Understanding the effects of soil organisms on plant abundance will improve our ability to predict and counteract plant invasions.", "which Indicator for enemy release ?", "Performance", 672.0, 683.0], ["The enemy release hypothesis predicts that invasive species will receive less damage from enemies, compared to co-occurring native and noninvasive exotic species in their introduced range. However, release operating early in invasion could be lost over time and with increased range size as introduced species acquire new enemies. We used three years of data, from 61 plant species planted into common gardens, to determine whether (1) invasive, noninvasive exotic, and native species experience differential damage from insect herbivores. and mammalian browsers, and (2) enemy release is lost with increased residence time and geographic spread in the introduced range. We find no evidence suggesting enemy release is a general mechanism contributing to invasiveness in this region. Invasive species received the most insect herbivory, and damage increased with longer residence times and larger range sizes at three spatial scales. Our results show that invasive and exotic species fail to escape enemies, particularly over longer temporal and larger spatial scales.", "which Indicator for enemy release ?", "Damage", 78.0, 84.0], ["Biological invasions are often complex phenomena because many factors influence their outcome. One key aspect is how non-natives interact with the local biota. Interaction with local species may be especially important for exotic species that require an obligatory mutualist, such as Pinaceae species that need ectomycorrhizal (EM) fungi. EM fungi and seeds of Pinaceae disperse independently, so they may use different vectors. We studied the role of exotic mammals as dispersal agents of EM fungi on Isla Victoria, Argentina, where many Pinaceae species have been introduced. 
Only a few of these tree species have become invasive, and they are found in high densities only near plantations, partly because these Pinaceae trees lack proper EM fungi when their seeds land far from plantations. Native mammals (a dwarf deer and rodents) are rare around plantations and do not appear to play a role in these invasions. With greenhouse experiments using animal feces as inoculum, plus observational and molecular studies, we found that wild boar and deer, both non-native, are dispersing EM fungi. Approximately 30% of the Pinaceae seedlings growing with feces of wild boar and 15% of the seedlings growing with deer feces were colonized by non-native EM fungi. Seedlings growing in control pots were not colonized by EM fungi. We found a low diversity of fungi colonizing the seedlings, with the hypogeous Rhizopogon as the most abundant genus. Wild boar, a recent introduction to the island, appear to be the main animal dispersing the fungi and may be playing a key role in facilitating the invasion of pine trees and even triggering their spread. These results show that interactions among non-natives help explain pine invasions in our study area.", "which Outcome of interaction ?", "Dispersal", 470.0, 479.0], ["The role of novel ecological interactions between mammals, fungi and plants in invaded ecosystems remains unresolved, but may play a key role in the widespread successful invasion of pines and their ectomycorrhizal fungal associates, even where mammal faunas originate from different continents to trees and fungi as in New Zealand. We examine the role of novel mammal associations in dispersal of ectomycorrhizal fungal inoculum of North American pines (Pinus contorta, Pseudotsuga menziesii), and native beech trees (Lophozonia menziesii) using faecal analyses, video monitoring and a bioassay experiment. Both European red deer (Cervus elaphus) and Australian brushtail possum (Trichosurus vulpecula) pellets contained spores and DNA from a range of native and non\u2010native ectomycorrhizal fungi. Faecal pellets from both animals resulted in ectomycorrhizal infection of pine seedlings with fungal genera Rhizopogon and Suillus, but not with native fungi or the invasive fungus Amanita muscaria, despite video and DNA evidence of consumption of these fungi. Native L. menziesii seedlings never developed any ectomycorrhizal infection from faecal pellet inoculation. Synthesis. Our results show that introduced mammals from Australia and Europe facilitate the co\u2010invasion of invasive North American trees and Northern Hemisphere fungi in New Zealand, while we find no evidence that introduced mammals benefit native trees or fungi. This novel tripartite \u2018invasional meltdown\u2019, comprising taxa from three kingdoms and three continents, highlights unforeseen consequences of global biotic homogenization.", "which Outcome of interaction ?", "Dispersal", 385.0, 394.0], ["Abstract Seed dispersal by exotic mammals exemplifies mutualistic interactions that can modify the habitat by facilitating the establishment of certain species. We examined the potential for endozoochoric dispersal of exotic plants by Callosciurus erythraeus introduced in the Pampas Region of Argentina. We identified and characterized entire and damaged seeds found in squirrel faeces and evaluated the germination capacity and viability of entire seeds in laboratory assays. 
We collected 120 samples of squirrel faeces that contained 883 pellets in seasonal surveys conducted between July 2011 and June 2012 at 3 study sites within the main invasion focus of C. erythraeus in Argentina. We found 226 entire seeds in 21% of the samples belonging to 4 species of exotic trees and shrubs. Germination in laboratory assays was recorded for Morus alba and Casuarina sp.; however, germination percentage and rate was higher for seeds obtained from the fruits than for seeds obtained from the faeces. The largest size of entire seeds found in the faeces was 4.2 \u00d7 4.0 mm, whereas the damaged seeds had at least 1 dimension \u2265 4.7 mm. Our results indicated that C. erythraeus can disperse viable seeds of at least 2 species of exotic trees. C. erythraeus predated seeds of other naturalized species in the region. The morphometric description suggested a restriction on the maximum size for the passage of entire seeds through the digestive tract of squirrels, which provides useful information to predict its role as a potential disperser or predator of other species in other invaded communities.", "which Outcome of interaction ?", "Dispersal", 14.0, 23.0], ["1. Freshwaters are subject to particularly high rates of species introductions; hence, invaders increasingly co-occur and may interact to enhance impacts on ecosystem structure and function. As trophic interactions are a key mechanism by which invaders influence communities, we used a combination of approaches to investigate the feeding preferences and community impacts of two globally invasive large benthic decapods that co-occur in freshwaters: the signal crayfish (Pacifastacus leniusculus) and Chinese mitten crab (Eriocheir sinensis). 2. In laboratory preference tests, both consumed similar food items, including chironomids, isopods and the eggs of two coarse fish species. In a comparison of predatory functional responses with a native crayfish Austropotamobius pallipes), juvenile E. sinensis had a greater predatory intensity than the native A. pallipes on the keystone shredder Gammarus pulex, and also displayed a greater preference than P. leniusculus for this prey item. 3. In outdoor mesocosms (n = 16) used to investigate community impacts, the abundance of amphipods, isopods, chironomids and gastropods declined in the presence of decapods, and a decapod >gastropod >periphyton trophic cascade was detected when both species were present. Eriocheir sinensis affected a wider range of animal taxa than P. leniusculus. 4. Stable-isotope and gut-content analysis of wild-caught adult specimens of both invaders revealed a wide and overlapping range of diet items including macrophytes, algae, terrestrial detritus, macroinvertebrates and fish. Both decapods were similarly enriched in 15N and occupied the same trophic level as Ephemeroptera, Odonata and Notonecta. Eriocheir sinensis d13C values were closely aligned with macrophytes indicating a reliance on energy from this basal resource, supported by evidence of direct consumption from gut contents. Pacifastacus leniusculus d13C values were intermediate between those of terrestrial leaf litter and macrophytes, suggesting reliance on both allochthonous and autochthonous energy pathways. 5. Our results suggest that E. sinensis is likely to exert a greater per capita impact on the macroinvertebrate communities in invaded systems than P. 
leniusculus, with potential indirect effects on productivity and energy flow through the community.", "which Outcome of interaction ?", "Impact", 2146.0, 2152.0], ["Factors such as aggressiveness and adaptation to disturbed environments have been suggested as important characteristics of invasive ant species, but diet has rarely been considered. However, because invasive ants reach extraordinary densities at introduced locations, increased feeding efficiency or increased exploitation of new foods should be important in their success. Earlier studies suggest that honeydew produced by Homoptera (e.g., aphids, mealybugs, scale insects) may be important in the diet of the invasive ant species Solenopsis invicta. To determine if this is the case, we studied associations of S. invicta and Homoptera in east Texas and conducted a regional survey for such associations throughout the species' range in the southeast United States. In east Texas, we found that S. invicta tended Ho- moptera extensively and actively constructed shelters around them. The shelters housed a variety of Homoptera whose frequency differed according to either site location or season, presumably because of differences in host plant availability and temperature. Overall, we estimate that the honeydew produced in Homoptera shelters at study sites in east Texas could supply nearly one-half of the daily energetic requirements of an S. invicta colony. Of that, 70% may come from a single species of invasive Homoptera, the mealybugAntonina graminis. Homoptera shelters were also common at regional survey sites and A. graminis occurred in shelters at nine of 11 survey sites. A comparison of shelter densities at survey sites and in east Texas suggests that our results from east Texas could apply throughout the range of S. invicta in the southeast United States. Antonina graminis may be an ex- ceptionally important nutritional resource for S. invicta in the southeast United States. While it remains largely unstudied, the tending of introduced or invasive Homoptera also appears important to other, and perhaps all, invasive ant species. Exploitative or mutually beneficial associations that occur between these insects may be an important, previously unrecognized factor promoting their success.", "which Outcome of interaction ?", "Resource", 1746.0, 1754.0], ["The introduced brushtail possum (Trichosurus vulpecula) is a major environmental and agricultural pest in New Zealand but little information is available on the ecology of possums in drylands, which cover c. 19% of the country. Here, we describe a temporal snapshot of the diet and feeding preferences of possums in a dryland habitat in New Zealand's South Island, as well as movement patterns and survival rates. We also briefly explore spatial patterns in capture rates. We trapped 279 possums at an average capture rate of 9 possums per 100 trap nights. Capture rates on individual trap lines varied from 0 to 38%, decreased with altitude, and were highest in the eastern (drier) parts of the study area. Stomach contents were dominated by forbs and sweet briar (Rosa rubiginosa); both items were consumed preferentially relative to availability. Possums also strongly preferred crack willow (Salix fragilis), which was uncommon in the study area and consumed only occasionally, but in large amounts. Estimated activity areas of 29 possums radio-tracked for up to 12 months varied from 0.2 to 19.5 ha (mean 5.1 ha). 
Nine possums (4 male, 5 female) undertook dispersal movements (\u22651000 m), the longest of which was 4940 m. The most common dens of radio-collared possums were sweet briar shrubs, followed by rock outcrops. Estimated annual survival was 85% for adults and 54% for subadults. Differences between the diets, activity areas and den use of possums in this study and those in forest or farmland most likely reflect differences in availability and distribution of resources. Our results suggest that invasive willow and sweet briar may facilitate the existence of possums by providing abundant food and shelter. In turn, possums may facilitate the spread of weeds by acting as a seed vector. This basic ecological information will be useful in modelling and managing the impacts of possum populations in drylands.", "which Outcome of interaction ?", "Resource", NaN, NaN], ["Habitat disturbance and the spread of invasive organisms are major threats to biodiversity, but the interactions between these two factors remain poorly understood in many systems. Grazing activities may facilitate the spread of invasive cane toads (Rhinella marina) through tropical Australia by providing year-round access to otherwise-seasonal resources. We quantified the cane toad\u2019s use of cowpats (feces piles) in the field, and conducted experimental trials to assess the potential role of cowpats as sources of prey, water, and warmth for toads. Our field surveys show that cane toads are found on or near cowpats more often than expected by chance. Field-enclosure experiments show that cowpats facilitate toad feeding by providing access to dung beetles. Cowpats also offer moist surfaces that can reduce dehydration rates of toads and are warmer than other nearby substrates. Livestock grazing is the primary form of land use over vast areas of Australia, and pastoral activities may have contributed substantially to the cane toad\u2019s successful invasion of that continent.", "which Outcome of interaction ?", "Resource", NaN, NaN], ["1 The cultivation and dissemination of alien ornamental plants increases their potential to invade. More specifically, species with bird\u2010dispersed seeds can potentially infiltrate natural nucleation processes in savannas. 2 To test (i) whether invasion depends on facilitation by host trees, (ii) whether propagule pressure determines invasion probability, and (iii) whether alien host plants are better facilitators of alien fleshy\u2010fruited species than indigenous species, we mapped the distribution of alien fleshy\u2010fruited species planted inside a military base, and compared this with the distribution of alien and native fleshy\u2010fruited species established in the surrounding natural vegetation. 3 Abundance and diversity of fleshy\u2010fruited plant species was much greater beneath tree canopies than in open grassland and, although some native fleshy\u2010fruited plants were found both beneath host trees and in the open, alien fleshy\u2010fruited plants were found only beneath trees. 4 Abundance of fleshy\u2010fruited alien species in the natural savanna was positively correlated with the number of individuals of those species planted in the grounds of the military base, while the species richness of alien fleshy\u2010fruited taxa decreased with distance from the military base, supporting the notion that propagule pressure is a fundamental driver of invasions. 5 There were more fleshy\u2010fruited species beneath native Acacia tortilis than beneath alien Prosopis sp. 
trees of the equivalent size. Although there were significant differences in native plant assemblages beneath these hosts, the proportion of alien to native fleshy\u2010fruited species did not differ with host. 6 Synthesis. Birds facilitate invasion of a semi\u2010arid African savanna by alien fleshy\u2010fruited plants, and this process does not require disturbance. Instead, propagule pressure and a few simple biological observations define the probability that a plant will invade, with alien species planted in gardens being a major source of propagules. Some invading species have the potential to transform this savanna by overtopping native trees, leading to ecosystem\u2010level impacts. Likewise, the invasion of the open savanna by alien host trees (such as Prosopis sp.) may change the diversity, abundance and species composition of the fleshy\u2010fruited understorey. These results illustrate the complex interplay between propagule pressure, facilitation, and a range of other factors in biological invasions.", "which Outcome of interaction ?", "Richness", 1182.0, 1190.0], ["Many invasive species cause ecological or economic damage, and the fraction of introduced species that become invasive is an important determinant of the overall costs caused by invaders. According to the widely quoted tens rule, about 10% of all introduced species establish themselves and about 10% of these established species become invasive. Global taxonomic differences in the fraction of species becoming invasive have not been described. In a global analysis of mammal and bird introductions, I show that both mammals and birds have a much higher invasion success than predicted by the tens rule, and that mammals have a significantly higher success than birds. Averaged across islands and continents, 79% of mammals and 50% of birds introduced have established themselves and 63% of mammals and 34% of birds established have become invasive. My analysis also does not support the hypothesis that islands are more susceptible to invaders than continents, as I did not find a significant relationship between invasion success and the size of the island or continent to which the species were introduced. The data set used in this study has a number of limitations, e.g. information on propagule pressure was not available at this global scale, so understanding the mechanisms behind the observed patterns has to be postponed to future studies.", "which hypothesis ?", " Tens rule", 218.0, 228.0], [". Changes in disturbance due to fire regime in southwestern Pinus ponderosa forests over the last century have led to dense forests that are threatened by widespread fire. It has been shown in other studies that a pulse of native, early-seral opportunistic species typically follow such disturbance events. With the growing importance of exotic plants in local flora, however, these exotics often fill this opportunistic role in recovery. We report the effects of fire severity on exotic plant species following three widespread fires of 1996 in northern Arizona P. ponderosa forests. Species richness and abundance of all vascular plant species, including exotics, were higher in burned than nearby unburned areas. Exotic species were far more important, in terms of cover, where fire severity was highest. 
Species present after wildfires include those of the pre-disturbed forest and new species that could not be predicted from above-ground flora of nearby unburned forests.", "which hypothesis ?", "Disturbance", 13.0, 24.0], ["The limiting similarity hypothesis predicts that communities should be more resistant to invasion by non\u2010natives when they include natives with a diversity of traits from more than one functional group. In restoration, planting natives with a diversity of traits may result in competition between natives of different functional groups and may influence the efficacy of different seeding and maintenance methods, potentially impacting native establishment. We compare initial establishment and first\u2010year performance of natives and the effectiveness of maintenance techniques in uniform versus mixed functional group plantings. We seeded ruderal herbaceous natives, longer\u2010lived shrubby natives, or a mixture of the two functional groups using drill\u2010 and hand\u2010seeding methods. Non\u2010natives were left undisturbed, removed by hand\u2010weeding and mowing, or treated with herbicide to test maintenance methods in a factorial design. Native functional groups had highest establishment, growth, and reproduction when planted alone, and hand\u2010seeding resulted in more natives as well as more of the most common invasive, Brassica nigra. Wick herbicide removed more non\u2010natives and resulted in greater reproduction of natives, while hand\u2010weeding and mowing increased native density. Our results point to the importance of considering competition among native functional groups as well as between natives and invasives in restoration. Interactions among functional groups, seeding methods, and maintenance techniques indicate restoration will be easier to implement when natives with different traits are planted separately.", "which hypothesis ?", "limiting similarity", 4.0, 23.0], ["The variability of shell morphology and relative growth of the invasive pearl oyster Pinctada radiata was studied within and among ten populations from coastal Tunisia using discriminant tests. Therefore, 12 morphological characters were examined and 34 metric and weight ratios were defined. In addition to the classic morphological characters, populations were compared by the thickness of the nacreous layer. Results of Duncan's multiple comparison test showed that the most discriminative ratios were the width of nacreous layer of right valve to the inflation of shell, the hinge line length to the maximum width of shell and the nacre thickness to the maximum width of shell. The analysis of variance revealed an important inter-population morphological variability. Both multidimensional scaling analysis and the squared Mahalanobis distances (D2) of metric ratios divided Tunisian P. radiata populations into four biogeographical groupings: the north coast (La Marsa); harbours (Hammamet, Monastir and Zarzis); the Gulf of Gab\u00e8s (Sfax, Kerkennah Island, Mahar\u00e8s, Skhira and Djerba) and the intertidal area (Ajim). However, the Kerkennah Island population was discriminated by the squared Mahalanobis distances (D2) of weight ratios in an isolated group suggesting particular trophic conditions in this area. The allometric study revealed high linear correlation between shell morphological characters and differences in allometric growth among P. radiata populations. Unlike the morphological discrimination, allometric differentiation shows no clear geographical distinction. 
This study revealed that the pearl oyster P. radiata exhibited considerable phenotypic plasticity related to differences of environmental and/or ecological conditions along Tunisian coasts and highlighted the discriminative character of the nacreous layer thickness parameter.", "which hypothesis ?", "Phenotypic plasticity", 1927.0, 1948.0], ["1. Biological invasion theory predicts that the introduction and establishment of non-native species is positively correlated with propagule pressure. Releases of pet and aquarium fishes to inland waters has a long history; however, few studies have examined the demographic basis of their importation and incidence in the wild. 2. For the 1500 grid squares (10\u00d710 km) that make up England, data on human demographics (population density, numbers of pet shops, garden centres and fish farms), the numbers of non-native freshwater fishes (from consented licences) imported in those grid squares (i.e. propagule pressure), and the reported incidences (in a national database) of non-native fishes in the wild were used to examine spatial relationships between the occurrence of non-native fishes and the demographic factors associated with propagule pressure, as well as to test whether the demographic factors are statistically reliable predictors of the incidence of non-native fishes, and as such surrogate estimators of propagule pressure. 3. Principal coordinates of neighbour matrices analyses, used to generate spatially explicit models, and confirmatory factor analysis revealed that spatial distributions of non-native species in England were significantly related to human population density, garden centre density and fish farm density. Human population density and the number of fish imports were identified as the best predictors of propagule pressure. 4. Human population density is an effective surrogate estimator of non-native fish propagule pressure and can be used to predict likely areas of non-native fish introductions. In conjunction with fish movements, where available, human population densities can be used to support biological invasion monitoring programmes across Europe (and perhaps globally) and to inform management decisions as regards the prioritization of areas for the control of non-native fish introductions. \u00a9 Crown copyright 2010. Reproduced with the permission of her Majesty's Stationery Office. Published by John Wiley & Sons, Ltd.", "which hypothesis ?", "Propagule pressure", 131.0, 149.0], ["In herbaceous ecosystems worldwide, biodiversity has been negatively impacted by changed grazing regimes and nutrient enrichment. Altered disturbance regimes are thought to favour invasive species that have a high phenotypic plasticity, although most studies measure plasticity under controlled conditions in the greenhouse and then assume plasticity is an advantage in the field. Here, we compare trait plasticity between three co-occurring, C4 perennial grass species, an invader Eragrostis curvula, and natives Eragrostis sororia and Aristida personata to grazing and fertilizer in a three-year field trial. We measured abundances and several leaf traits known to correlate with strategies used by plants to fix carbon and acquire resources, i.e. specific leaf area (SLA), leaf dry matter content (LDMC), leaf nutrient concentrations (N, C\u2236N, P), assimilation rates (Amax) and photosynthetic nitrogen use efficiency (PNUE). 
In the control treatment (grazed only), trait values for SLA, leaf C\u2236N ratios, Amax and PNUE differed significantly between the three grass species. When trait values were compared across treatments, E. curvula showed higher trait plasticity than the native grasses, and this correlated with an increase in abundance across all but the grazed/fertilized treatment. The native grasses showed little trait plasticity in response to the treatments. Aristida personata decreased significantly in the treatments where E. curvula increased, and E. sororia abundance increased possibly due to increased rainfall and not in response to treatments or invader abundance. Overall, we found that plasticity did not favour an increase in abundance of E. curvula under the grazed/fertilized treatment likely because leaf nutrient contents increased and subsequently its palatability to consumers. E. curvula also displayed a higher resource use efficiency than the native grasses. These findings suggest resource conditions and disturbance regimes can be manipulated to disadvantage the success of even plastic exotic species.", "which hypothesis ?", "Phenotypic plasticity", 214.0, 235.0], ["Propagule pressure is intuitively a key factor in biological invasions: increased availability of propagules increases the chances of establishment, persistence, naturalization, and invasion. The role of propagule pressure relative to disturbance and various environmental factors is, however, difficult to quantify. We explored the relative importance of factors driving invasions using detailed data on the distribution and percentage cover of alien tree species on South Africa\u2019s Agulhas Plain (2,160 km2). Classification trees based on geology, climate, land use, and topography adequately explained distribution but not abundance (canopy cover) of three widespread invasive species (Acacia cyclops, Acacia saligna, and Pinus pinaster). A semimechanistic model was then developed to quantify the roles of propagule pressure and environmental heterogeneity in structuring invasion patterns. The intensity of propagule pressure (approximated by the distance from putative invasion foci) was a much better predictor of canopy cover than any environmental factor that was considered. The influence of environmental factors was then assessed on the residuals of the first model to determine how propagule pressure interacts with environmental factors. The mediating effect of environmental factors was species specific. Models combining propagule pressure and environmental factors successfully predicted more than 70% of the variation in canopy cover for each species.", "which hypothesis ?", "Propagule pressure", 0.0, 18.0], ["1. The ecological and economic costs of introduced species can be high. Ecologists try to predict the probability of success and potential risk of the establishment of recently introduced species, given their biological characteristics. 2. In 1990 gudgeon, Gobio gobio, were released in a drainage canal of the Rhone delta of southern France. The Asian topmouth gudgeon, Pseudorasbora parva, was found for the first time in the same canal in 1993. Those introductions offered a unique opportunity to compare in situ the fate of two closely related fish in the same habitat. 3. Our major aims were to assess whether G. 
gobio was able to establish in what seemed an unlikely environment, to compare population trends and life-history traits of both species and to assess whether we could explain or could have predicted our results, by considering their life-history strategies. 4. Data show that both species have established in the canal and have spread. Catches of P. parva have increased strongly and are now higher than those of G. gobio. 5. The two cyprinids have the same breeding season and comparable traits (such as short generation time, small body, high reproductive effort), so both could be classified as opportunists. The observed difference in their success (in terms of population growth and colonization rate) could be explained by the wider ecological and physiological tolerance of P. parva. 6. In conclusion, our field study seems to suggest that invasive vigour also results from the ability to tolerate environmental changes through phenotypic plasticity, rather than from particular life-history features pre-adapted to invasion. It thus remains difficult to define a good invader simply on the basis of its life-history features.", "which hypothesis ?", "Phenotypic plasticity", 1555.0, 1576.0], ["Summary Introduction of an exotic species has the potential to alter interactions between fish and bivalves; yet our knowledge in this field is limited, not least by lack of studies involving fish early life stages (ELS). Here, for the first time, we examine glochidial infection of fish ELS by native and exotic bivalves in a system recently colonised by two exotic gobiid species (round goby Neogobius melanostomus, tubenose goby Proterorhinus semilunaris) and the exotic Chinese pond mussel Anodonta woodiana. The ELS of native fish were only rarely infected by native glochidia. By contrast, exotic fish displayed significantly higher native glochidia prevalence and mean intensity of infection than native fish (17 versus 2% and 3.3 versus 1.4 respectively), inferring potential for a parasite spillback/dilution effect. Exotic fish also displayed a higher parasitic load for exotic glochidia, inferring potential for invasional meltdown. Compared to native fish, presence of gobiids increased the total number of glochidia transported downstream on drifting fish by approximately 900%. We show that gobiid ELS are a novel, numerous and \u2018attractive\u2019 resource for unionid glochidia. As such, unionids could negatively affect gobiid recruitment through infection-related mortality of gobiid ELS and/or reinforce downstream unionid populations through transport on drifting gobiid ELS. These implications go beyond what is suggested in studies of older life stages, thereby stressing the importance of an holistic ontogenetic approach in ecological studies.", "which hypothesis ?", "Invasional meltdown", 923.0, 942.0], ["The literature on alien animal invaders focuses largely on successful invasions over broad geographic scales and rarely examines failed invasions. As a result, it is difficult to make predictions about which species are likely to become successful invaders or which environments are likely to be most susceptible to invasion. To address these issues, we developed a data set on fish invasions in watersheds throughout California (USA) that includes failed introductions. Our data set includes information from three stages of the invasion process (establishment, spread, and integration). 
We define seven categorical predictor variables (trophic status, size of native range, parental care, maximum adult size, physiological tolerance, distance from nearest native source, and propagule pressure) and one continuous predictor variable (prior invasion success) for all introduced species. Using an information-theoretic approach we evaluate 45 separate hypotheses derived from the invasion literature over these three sta...", "which hypothesis ?", "Propagule pressure", 777.0, 795.0], ["ABSTRACT One explanation for the success of exotic plants in their introduced habitats is that, upon arriving to a new continent, plants escaped their native herbivores or pathogens, resulting in less damage and lower abundance of enemies than closely related native species (enemy release hypothesis). We tested whether the three exotic plant species, Rubus phoenicolasius (wineberry), Fallopia japonica (Japanese knotweed), and Persicaria perfoliata (mile-a-minute weed), suffered less herbivory or pathogen attack than native species by comparing leaf damage and invertebrate herbivore abundance and diversity on the invasive species and their native congeners. Fallopia japonica and R. phoenicolasius received less leaf damage than their native congeners, and F. japonica also contained a lower diversity and abundance of invertebrate herbivores. If the observed decrease in damage experienced by these two plant species contributes to increased fitness, then escape from enemies may provide at least a partial explanation for their invasiveness. However, P. perfoliata actually received greater leaf damage than its native congener. Rhinoncomimus latipes, a weevil previously introduced in the United States as a biological control for P. perfoliata, accounted for the greatest abundance of insects collected from P. perfoliata. Therefore, it is likely that the biocontrol R. latipes was responsible for the greater damage on P. perfoliata, suggesting this insect may be effective at controlling P. perfoliata populations if its growth and reproduction is affected by the increased herbivore damage.", "which hypothesis ?", "Enemy release", 276.0, 289.0], ["Summary 1. The global spread of non-native species is a major concern for ecologists, particularly in regards to aquatic systems. Predicting the characteristics of successful invaders has been a goal of invasion biology for decades. Quantitative analysis of species characteristics may allow invasive species profiling and assist the development of risk assessment strategies. 2. In the current analysis we developed a data base on fish invasions in catchments throughout California that distinguishes among the establishment, spread and integration stages of the invasion process, and separates social and biological factors related to invasion success. 3. Using Akaike's information criteria (AIC), logistic and multiple regression models, we show suites of biological variables, which are important in predicting establishment (parental care and physiological tolerance), spread (life span, distance from nearest native source and trophic status) and abundance (maximum size, physiological tolerance and distance from nearest native source). Two variables indicating human interest in a species (propagule pressure and prior invasion success) are predictors of successful establishment and prior invasion success is a predictor of spread and integration. 4. 
Despite the idiosyncratic nature of the invasion process, our results suggest some assistance in the search for characteristics of fish species that successfully transition between invasion stages.", "which hypothesis ?", "Propagule pressure", 1099.0, 1117.0], ["Over 75 species of alien plants were recorded during the first five years after fire in southern California shrublands, most of which were European annuals. Both cover and richness of aliens varied between years and plant association. Alien cover was lowest in the first postfire year in all plant associations and remained low during succession in chaparral but increased in sage scrub. Alien cover and richness were significantly correlated with year (time since disturbance) and with precipitation in both coastal and interior sage scrub associations. Hypothesized factors determining alien dominance were tested with structural equation modeling. Models that included nitrogen deposition and distance from the coast were not significant, but with those variables removed we obtained a significant model that gave an R2 = 0.60 for the response variable of fifth year alien dominance. Factors directly affecting alien dominance were (1) woody canopy closure and (2) alien seed banks. Significant indirect effects were (3) fire intensity, (4) fire history, (5) prefire stand structure, (6) aridity, and (7) community type. According to this model the most critical factor influencing aliens is the rapid return of the shrub and subshrub canopy. Thus, in these communities a single functional type (woody plants) appears to be the most critical element controlling alien invasion and persistence. Fire history is an important indirect factor because it affects both prefire stand structure and postfire alien seed banks. Despite being fire-prone ecosystems, these shrublands are not adapted to fire per se, but rather to a particular fire regime. Alterations in the fire regime produce a very different selective environment, and high fire frequency changes the selective regime to favor aliens. This study does not support the widely held belief that prescription burning is a viable management practice for controlling alien species on semiarid landscapes.", "which hypothesis ?", "Disturbance", 465.0, 476.0], ["Abstract: We developed a method to predict the potential of non\u2010native reptiles and amphibians (herpetofauna) to establish populations. This method may inform efforts to prevent the introduction of invasive non\u2010native species. We used boosted regression trees to determine whether nine variables influence establishment success of introduced herpetofauna in California and Florida. We used an independent data set to assess model performance. Propagule pressure was the variable most strongly associated with establishment success. Species with short juvenile periods and species with phylogenetically more distant relatives in regional biotas were more likely to establish than species that start breeding later and those that have close relatives. Average climate match (the similarity of climate between native and non\u2010native range) and life form were also important. Frogs and lizards were the taxonomic groups most likely to establish, whereas a much lower proportion of snakes and turtles established. We used results from our best model to compile a spreadsheet\u2010based model for easy use and interpretation. 
Probability scores obtained from the spreadsheet model were strongly correlated with establishment success as were probabilities predicted for independent data by the boosted regression tree model. However, the error rate for predictions made with independent data was much higher than with cross validation using training data. This difference in predictive power does not preclude use of the model to assess the probability of establishment of herpetofauna because (1) the independent data had no information for two variables (meaning the full predictive capacity of the model could not be realized) and (2) the model structure is consistent with the recent literature on the primary determinants of establishment success for herpetofauna. It may still be difficult to predict the establishment probability of poorly studied taxa, but it is clear that non\u2010native species (especially lizards and frogs) that mature early and come from environments similar to that of the introduction region have the highest probability of establishment.", "which hypothesis ?", "Propagule pressure", 443.0, 461.0], ["Biological invasions are rapidly producing planet-wide changes in biodiversity and ecosystem function. In coastal waters of the U.S., >500 invaders have become established, and new introductions continue at an increasing rate. Although most species have little impact on native communities, some initially benign introductions may occasionally turn into damaging invasions, although such introductions are rarely documented. Here, I demonstrate that a recently introduced crab has resulted in the rapid spread and increase of an introduced bivalve that had been rare in the system for nearly 50 yr. This increase has occurred through the positive indirect effects of predation by the introduced crab on native bivalves. I used field and laboratory experiments to show that the mechanism is size-specific predation interacting with the different reproductive life histories of the native (protandrous hermaphrodite) and the introduced (dioecious) bivalves. These results suggest that positive interactions among the hundreds of introduced species that are accumulating in coastal systems could result in the rapid transformation of previously benign introductions into aggressively expanding invasions. Even if future management efforts reduce the number of new introductions, given the large number of species already present, there is a high potential for positive interactions to produce many future management problems. Given that invasional meltdown is now being documented in natural systems, I suggest that coastal systems may be closer to this threshold than currently believed.", "which hypothesis ?", "Invasional meltdown", 1434.0, 1453.0], ["How interactions between exotic species affect invasion impact is a fundamental issue on both theoretical and applied grounds. Exotics can facilitate establishment and invasion of other exotics (invasional meltdown) or they can restrict them by re-establishing natural population control (as predicted by the enemy- release hypothesis). We studied forest invasion on an Argentinean island where 43 species of Pinaceae, including 60% of the world's recorded invasive Pinaceae, were introduced c. 1920 but where few species are colonizing pristine areas. In this area two species of Palearctic deer, natural enemies of most Pinaceae, were introduced 80 years ago. 
Expecting deer to help to control the exotics, we conducted a cafeteria experiment to assess deer preferences among the two dominant native species (a conifer, Austrocedrus chilensis, and a broadleaf, Nothofagus dombeyi) and two widely introduced exotic tree species (Pseudotsuga menziesii and Pinus ponderosa). Deer browsed much more intensively on native species than on exotic conifers, in terms of number of individuals attacked and degree of browsing. Deer preference for natives could potentially facilitate invasion by exotic pines. However, we hypothesize that the low rates of invasion currently observed can result at least partly from high densities of exotic deer, which, despite their preference for natives, can prevent establishment of both native and exotic trees. Other factors, not mutually exclusive, could produce the observed pattern. Our results underscore the difficulty of predicting how one introduced species will effect impact of another one.", "which hypothesis ?", "Invasional meltdown", 195.0, 214.0], ["1. Community assembly theories predict that the success of invading species into a new community should be predictable by functional traits. Environmental filters could constrain the number of successful ecological strategies in a habitat, resulting in similar suites of traits between native and successfully invading species (convergence). Conversely, concepts of limiting similarity and competitive exclusion predict native species will prevent invasion by functionally similar exotic species, resulting in trait divergence between the two species pools. Nutrient availability may further alter the strength of convergent or divergent forces in community assembly, by relaxing environmental constraints and/or influencing competitive interactions.", "which hypothesis ?", "limiting similarity", 366.0, 385.0], ["Phenotypic plasticity has long been suspected to allow invasive species to expand their geographic range across large-scale environmental gradients. We tested this possibility in Australia using a continental scale survey of the invasive tree Parkinsonia aculeata (Fabaceae) in twenty-three sites distributed across four climate regions and three habitat types. Using tree-level responses, we detected a trade-off between seed mass and seed number across the moisture gradient. Individual trees plastically and reversibly produced many small seeds at dry sites or years, and few big seeds at wet sites and years. Bigger seeds were positively correlated with higher seed and seedling survival rates. The trade-off, the relation between seed mass, seed and seedling survival, and other fitness components of the plant life-cycle were integrated within a matrix population model. The model confirms that the plastic response resulted in average fitness benefits across the life-cycle. Plasticity resulted in average fitness being positively maintained at the wet and dry range margins where extinction risks would otherwise have been high (\u201cJack-of-all-Trades\u201d strategy JT), and fitness being maximized at the species range centre where extinction risks were already low (\u201cMaster-of-Some\u201d strategy MS). The resulting hybrid \u201cJack-and-Master\u201d strategy (JM) broadened the geographic range and amplified average fitness in the range centre. Our study provides the first empirical evidence for a JM species. It also confirms mechanistically the importance of phenotypic plasticity in determining the size, the shape and the dynamic of a species distribution. 
The JM allows rapid and reversible phenotypic responses to new or changing moisture conditions at different scales, providing the species with definite advantages over genetic adaptation when invading diverse and variable environments. Furthermore, natural selection pressure acting on phenotypic plasticity is predicted to result in maintenance of the JT and strengthening of the MS, further enhancing the species invasiveness in its range centre.", "which hypothesis ?", "Phenotypic plasticity", 0.0, 21.0], ["The Asian grass Miscanthus sinensis (Poaceae) is being considered for use as a bioenergy crop in the U.S. Corn Belt. Originally introduced to the United States for ornamental plantings, it escaped, forming invasive populations. The concern is that naturalized M. sinensis populations have evolved shade tolerance. We tested the hypothesis that seedlings from within the invasive U.S. range of M. sinensis would display traits associated with shade tolerance, namely increased area for light capture and phenotypic plasticity, compared with seedlings from the native Japanese populations. In a common garden experiment, seedlings of 80 half-sib maternal lines were grown from the native range (Japan) and 60 half-sib maternal lines from the invasive range (U.S.) under four light levels. Seedling leaf area, leaf size, growth, and biomass allocation were measured on the resulting seedlings after 12 wk. Seedlings from both regions responded strongly to the light gradient. High light conditions resulted in seedlings with greater leaf area, larger leaves, and a shift to greater belowground biomass investment, compared with shaded seedlings. Japanese seedlings produced more biomass and total leaf area than U.S. seedlings across all light levels. Generally, U.S. and Japanese seedlings allocated a similar amount of biomass to foliage and equal leaf area per leaf mass. Subtle differences in light response by region were observed for total leaf area, mass, growth, and leaf size. U.S. seedlings had slightly higher plasticity for total mass and leaf area but lower plasticity for measures of biomass allocation and leaf traits compared with Japanese seedlings. Our results do not provide general support for the hypothesis of increased M. sinensis shade tolerance within its introduced U.S. range compared with native Japanese populations. Nomenclature: Eulaliagrass; Miscanthus sinensis Anderss. Management Implications: Eulaliagrass (Miscanthus sinensis), an Asian species under consideration for biomass production in the Midwest, has escaped ornamental plantings in the United States to form naturalized populations. Evidence suggests that U.S. populations are able to tolerate relatively shady conditions, but it is unclear whether U.S. populations have greater shade tolerance than the relatively shade-intolerant populations within the species' native range in Asia. Increased shade tolerance could result in a broader range of invaded light environments within the introduced range of M. sinensis. However, results from our common garden experiment do not support the hypothesis of increased shade tolerance in introduced U.S. populations compared with seedlings from native Asian populations. Our results do demonstrate that for both U.S. and Japanese populations under low light conditions, M. sinensis seeds germinate and seedlings gain mass and leaf area; therefore, land managers should carefully monitor or eradicate M. 
sinensis within these habitats.", "which hypothesis ?", "Phenotypic plasticity", 503.0, 524.0], ["Abstract The enemy release hypothesis (ERH) frequently has been invoked to explain the naturalization and spread of introduced species. One ramification of the ERH is that invasive plants sustain less herbivore pressure than do native species. Empirical studies testing the ERH have mostly involved two-way comparisons between invasive introduced plants and their native counterparts in the invaded region. Testing the ERH would be more meaningful if such studies also included introduced non-invasive species because introduced plants, regardless of their abundance or impact, may support a reduced insect herbivore fauna and experience less damage. In this study, we employed a three-way comparison, in which we compared herbivore faunas among native, introduced invasive, and introduced non-invasive plants in the genus Eugenia (Myrtaceae) which all co-occur in South Florida. We observed a total of 25 insect species in 12 families and 6 orders feeding on the six species of Eugenia. Of these insect species, the majority were native (72%), polyphagous (64%), and ectophagous (68%). We found that invasive introduced Eugenia has a similar level of herbivore richness as both the native and the non-invasive introduced Eugenia. However, the numbers and percentages of oligophagous insect species were greatest on the native Eugenia, but they were not different between the invasive and non-invasive introduced Eugenia. One oligophagous endophagous insect has likely shifted from the native to the invasive, but none to the non-invasive Eugenia. In summary, the invasive Eugenia encountered equal, if not greater, herbivore pressure than the non-invasive Eugenia, including from oligophagous and endophagous herbivores. Our data only provided limited support to the ERH. We would not have been able to draw this conclusion without inclusion of the non-invasive Eugenia species in the study.", "which hypothesis ?", "Enemy release", 13.0, 26.0], ["What roles do ruderals and residuals play in early forest succession and how does repeated disturbance affect them? We examined this question by monitoring plant cover and composition on a producti...", "which hypothesis ?", "Disturbance", 91.0, 102.0], ["Since 1995, Dikerogammarus villosus Sowinski, a Ponto-Caspian amphipod species, has been invading most of Western Europe's hydrosystems. D. villosus geographic extension and quickly increasing population density has enabled it to become a major component of macrobenthic assemblages in recipient ecosystems. The ecological characteristics of D. villosus on a mesohabitat scale were investigated at a station in the Moselle River. This amphipod is able to colonize a wide range of substratum types, thus posing a threat to all freshwater ecosystems. Rivers whose dominant substratum is cobbles and which have tree roots along the banks could harbour particularly high densities of D. villosus. A relationship exists between substratum particle size and the length of the individuals, and spatial segregation according to length was shown. This allows the species to limit intra-specific competition between generations while facilitating reproduction. A strong association exists between D. villosus and other Ponto-Caspian species, such as Dreissena polymorpha and Corophium curvispinum, in keeping with Invasional Meltdown Theory. 
Four taxa (Coenagrionidae, Calopteryx splendens, Corophium curvispinum and Gammarus pulex) exhibited spatial niches that overlap significantly that of D. villosus. According to the predatory behaviour of the newcomer, their populations may be severely impacted.", "which hypothesis ?", "Invasional meltdown", 1117.0, 1136.0], ["Plants with poorly attractive flowers or with little floral rewards may have inadequate pollinator service, which in turn reduces seed output. However, pollinator service of less attractive species could be enhanced when they are associated with species with highly attractive flowers (so called \u2018magnet-species\u2019). Although several studies have reported the magnet species effect, few of them have evaluated whether this positive interaction results in an enhancement of the seed output for the beneficiary species. Here, we compared pollinator visitation rates and seed output of the invasive annual species Carduus pycnocephalus when growing associated with shrubs of the invasive Lupinus arboreus and when growing alone, and hypothesized that L. arboreus acts as a magnet species for C. pycnocephalus. Results showed that C. pycnocephalus individuals associated with L. arboreus had higher pollinator visitation rates and higher seed output than individuals growing alone. The higher visitation rates of C. pycnocephalus associated to L. arboreus were maintained after accounting for flower density, which consistently supports our hypothesis on the magnet species effect of L. arboreus. Given that both species are invasives, the facilitated pollination and reproduction of C. pycnocephalus by L. arboreus could promote its naturalization in the community, suggesting a synergistic invasional process contributing to an \u2018invasional meltdown\u2019. The magnet effect of Lupinus on Carduus found in this study seems to be one of the first examples of indirect facilitative interactions via increased pollination among invasive species.", "which hypothesis ?", "Invasional meltdown", 1418.0, 1437.0], ["1. Disturbance and anthropogenic land use changes are usually considered to be key factors facilitating biological invasions. However, specific comparisons of invasion success between sites affected to different degrees by these factors are rare. 2. In this study we related the large-scale distribution of the invading New Zealand mud snail (Potamopyrgus antipodarum) in southern Victorian streams, Australia, to anthropogenic land use, flow variability, water quality and distance from the site to the sea along the stream channel. 3. The presence of P. antipodarum was positively related to an index of flow-driven disturbance, the coefficient of variability of mean daily flows for the year prior to the study. 4. Furthermore, we found that the invader was more likely to occur at sites with multiple land uses in the catchment, in the forms of grazing, forestry and anthropogenic developments (e.g. towns and dams), compared with sites with low-impact activities in the catchment. However, this relationship was confounded by a higher likelihood of finding this snail in lowland sites close to the sea. 5. We conclude that P. antipodarum could potentially be found worldwide at sites with similar ecological characteristics. We hypothesise that its success as an invader may be related to an ability to quickly re-colonise denuded areas and that population abundances may respond to increased food resources. Disturbances could facilitate this invader by creating spaces for colonisation (e.g. 
a possible consequence of floods) or changing resource levels (e.g. increased nutrient levels in streams with intense human land use in their catchments).", "which hypothesis ?", "Disturbance", 3.0, 14.0], ["Alliaria petiolata is a Eurasian biennial herb that is invasive in North America and for which phenotypic plasticity has been noted as a potentially important invasive trait. Using four European and four North American populations, we explored variation among populations in the response of a suite of antioxidant, antiherbivore, and morphological traits to the availability of water and nutrients and to jasmonic acid treatment. Multivariate analyses revealed substantial variation among populations in mean levels of these traits and in the response of this suite of traits to environmental variation, especially water availability. Univariate analyses revealed variation in plasticity among populations in the expression of all of the traits measured to at least one of these environmental factors, with the exception of leaf length. There was no evidence for continentally distinct plasticity patterns, but there was ample evidence for variation in phenotypic plasticity among the populations within continents. This implies that A. petiolata has the potential to evolve distinct phenotypic plasticity patterns within populations but that invasive populations are no more plastic than native populations.", "which hypothesis ?", "Phenotypic plasticity", 95.0, 116.0], ["Interspecific interactions play an important role in the success of introduced species. For example, the \u2018enemy release\u2019 hypothesis posits that introduced species become invasive because they escape top\u2013down regulation by natural enemies while the \u2018invasional meltdown\u2019 hypothesis posits that invasions may be facilitated by synergistic interactions between introduced species. Here, we explore how facilitation and enemy release interact to moderate the potential effect of a large category of positive interactions \u2013 protection mutualisms. We use the interactions between an introduced plant (Japanese knotweed Fallopia japonica), an introduced herbivore (Japanese beetle Popillia japonica), an introduced ant (European red ant Myrmica rubra), and native ants and herbivores in riparian zones of the northeastern United States as a model system. Japanese knotweed produces sugary extrafloral nectar that is attractive to ants, and we show that both sugar reward production and ant attendance increase when plants experience a level of leaf damage that is typical in the plants\u2019 native range. Using manipulative experiments at six sites, we demonstrate low levels of ant patrolling, little effect of ants on herbivory rates, and low herbivore pressure during midsummer. Herbivory rates and the capacity of ants to protect plants (as evidenced by effects of ant exclusion) increased significantly when plants were exposed to introduced Japanese beetles that attack plants in the late summer. Beetles were also associated with greater on-plant foraging by ants, and among-plant differences in ant-foraging were correlated with the magnitude of damage inflicted on plants by the beetles. Last, we found that sites occupied by introduced M. rubra ants almost invariably included Japanese knotweed. Thus, underlying variation in the spatiotemporal distribution of the introduced herbivore influences the provision of benefits to the introduced plant and to the introduced ant. 
More specifically, the presence of the introduced herbivore converts an otherwise weak interaction between two introduced species into a reciprocally beneficial mutualism. Because the prospects for facilitation are linked to the prospects for enemy release in protection mutualisms, species", "which hypothesis ?", "Invasional meltdown", 249.0, 268.0], ["Propagule pressure is fundamental to invasion success, yet our understanding of its role in the marine domain is limited. Few studies have manipulated or controlled for propagule supply in the field, and consequently there is little empirical data to test for non-linearities or interactions with other processes. Supply of non-indigenous propagules is most likely to be elevated in urban estuaries, where vessels congregate and bring exotic species on fouled hulls and in ballast water. These same environments are also typically subject to elevated levels of disturbance from human activities, creating the potential for propagule pressure and disturbance to interact. By applying a controlled dose of free-swimming larvae to replicate assemblages, we were able to quantify a dose-response relationship at much finer spatial and temporal scales than previously achieved in the marine environment. We experimentally crossed controlled levels of propagule pressure and disturbance in the field, and found that both were required for invasion to occur. Only recruits that had settled onto bare space survived beyond three months, precluding invader persistence in undisturbed communities. In disturbed communities initial survival on bare space appeared stochastic, such that a critical density was required before the probability of at least one colony surviving reached a sufficient level. Those that persisted showed 75% survival over the following three months, signifying a threshold past which invaders were resilient to chance mortality. Urban estuaries subject to anthropogenic disturbance are common throughout the world, and similar interactions may be integral to invasion dynamics in these ecosystems.", "which hypothesis ?", "Propagule pressure", 0.0, 18.0], ["Identifying mechanisms governing the establishment and spread of invasive species is a fundamental challenge in invasion biology. Because species invasions are frequently observed only after the species presents an environmental threat, research identifying the contributing agents to dispersal and subsequent spread are confined to retrograde observations. Here, we use a combination of seasonal surveys and experimental approaches to test the relative importance of behavioral and abiotic factors in determining the local co-occurrence of two invasive ant species, the established Argentine ant (Linepithema humile Mayr) and the newly invasive Asian needle ant (Pachycondyla chinensis Emery). We show that the broader climatic envelope of P. chinensis enables it to establish earlier in the year than L. humile. We also demonstrate that increased P. chinensis propagule pressure during periods of L. humile scarcity contributes to successful P. chinensis early season establishment. Furthermore, we show that, although L. humile is the numerically superior and behaviorally dominant species at baits, P. chinensis is currently displacing L. humile across the invaded landscape. 
By identifying the features promoting the displacement of one invasive ant by another we can better understand both early determinants in the invasion process and factors limiting colony expansion and survival.", "which hypothesis ?", "Propagule pressure", 862.0, 880.0], ["During the upsurge of the introduced predatory Nile perch in Lake Victoria in the 1980s, the zooplanktivorous Haplochromis (Yssichromis) pyrrhocephalus nearly vanished. The species recovered coincident with the intense fishing of Nile perch in the 1990s, when water clarity and dissolved oxygen levels had decreased dramatically due to increased eutrophication. In response to the hypoxic conditions, total gill surface in resurgent H. pyrrhocephalus increased by 64%. Remarkably, head length, eye length, and head volume decreased in size, whereas cheek depth increased. Reductions in eye size and depth of the rostral part of the musculus sternohyoideus, and reallocation of space between the opercular and suspensorial compartments of the head may have permitted accommodation of larger gills in a smaller head. By contrast, the musculus levator posterior, located dorsal to the gills, increased in depth. This probably reflects an adaptive response to the larger and tougher prey types in the diet of resurgent H. pyrrhocephalus. These striking morphological changes over a time span of only two decades could be the combined result of phenotypic plasticity and genetic change and may have fostered recovery of this species.", "which hypothesis ?", "Phenotypic plasticity", 1140.0, 1161.0], [". The effect of fire on annual plants was examined in two vegetation types at remnant vegetation edges in the Western Australian wheatbelt. Density and cover of non-native species were consistently greatest at the reserve edges, decreasing rapidly with increasing distance from reserve edge. Numbers of native species showed little effect of distance from reserve edge. Fire had no apparent effect on abundance of non-natives in Allocasuarina shrubland but abundance of native plants increased. Density of both non-native and native plants in Acacia acuminata-Eucalyptus loxophleba woodland decreased after fire. Fewer non-native species were found in the shrubland than in the woodland in both unburnt and burnt areas, this difference being smallest between burnt areas. Levels of soil phosphorus and nitrate were higher in burnt areas of both communities and ammonium also increased in the shrubland. Levels of soil phosphorus and nitrate were higher at the reserve edge in the unburnt shrubland, but not in the woodland. There was a strong correlation between soil phosphorus levels and abundance of non-native species in the unburnt shrubland, but not after fire or in the woodland. Removal of non-native plants in the burnt shrubland had a strong positive effect on total abundance of native plants, apparently due to increases in growth of smaller, suppressed native plants in response to decreased competition. Two native species showed increased seed production in plots where non-native plants had been removed. There was a general indication that, in the short term, fire does not necessarily increase invasion of these communities by non-native species and could, therefore be a useful management tool in remnant vegetation, providing other disturbances are minimised.", "which hypothesis ?", "Disturbance", NaN, NaN], ["Invasive exotic plants reduce the diversity of native communities by displacing native species. 
According to the coexistence theory, native plants are able to coexist with invaders only when their fitness is not significantly smaller than that of the exotics or when they occupy a different niche. It has therefore been hypothesized that the survival of some native species at invaded sites is due to post-invasion evolutionary changes in fitness and/or niche traits. In common garden experiments, we tested whether plants from invaded sites of two native species, Impatiens noli-tangere and Galeopsis speciosa, outperform conspecifics from non-invaded sites when grown in competition with the invader (Impatiens parviflora). We further examined whether the expected superior performance of the plants from the invaded sites is due to changes in the plant size (fitness proxy) and/or changes in the germination phenology and phenotypic plasticity (niche proxies). Invasion history did not influence the performance of any native species when grown with the exotic competitor. In I. noli-tangere, however, we found significant trait divergence with regard to plant size, germination phenology and phenotypic plasticity. In the absence of a competitor, plants of I. noli-tangere from invaded sites were larger than plants from non-invaded sites. The former plants germinated earlier than inexperienced conspecifics or an exotic congener. Invasion experience was also associated with increased phenotypic plasticity and an improved shade-avoidance syndrome. Although these changes indicate fitness and niche differentiation of I. noli-tangere at invaded sites, future research should examine more closely the adaptive value of these changes and their genetic basis.", "which hypothesis ?", "Phenotypic plasticity", 925.0, 946.0], ["While phenotypic plasticity is considered the major means that allows plants to cope with environmental heterogeneity, scant information is available on phenotypic plasticity of the whole-plant architecture in relation to ontogenic processes. We performed an architectural analysis to gain an understanding of the structural and ontogenic properties of common buckthorn (Rhamnus cathartica L., Rhamnaceae) growing in the understory and under an open canopy. We found that ontogenic effects on growth need to be calibrated if a full description of phenotypic plasticity is to be obtained. Our analysis pointed to three levels of organization (or nested structural units) in R. cathartica. Their modulation in relation to light conditions leads to the expression of two architectural strategies that involve sets of traits known to confer competitive advantage in their respective environments. In the understory, the plant develops a tree-like form. Its strategy here is based on restricting investment in exploitation str...", "which hypothesis ?", "Phenotypic plasticity", 6.0, 27.0], ["The enemy release hypothesis predicts that native herbivores will either prefer or cause more damage to native than introduced plant species. We tested this using preference and performance experiments in the laboratory and surveys of leaf damage caused by the magpie moth Nyctemera amica on a co-occurring native and introduced species of fireweed (Senecio) in eastern Australia. In the laboratory, ovipositing females and feeding larvae preferred the native S. pinnatifolius over the introduced S. madagascariensis. Larvae performed equally well on foliage of S. pinnatifolius and S. madagascariensis: pupal weights did not differ between insects reared on the two species, but growth rates were significantly faster on S. 
pinnatifolius. In the field, foliage damage was significantly greater on native S. pinnatifolius than introduced S. madagascariensis. These results support the enemy release hypothesis, and suggest that the failure of native consumers to switch to introduced species contributes to their invasive success. Both plant species experienced reduced, rather than increased, levels of herbivory when growing in mixed populations, as opposed to pure stands in the field; thus, there was no evidence that apparent competition occurred.", "which hypothesis ?", "Enemy release", 4.0, 17.0], ["Deep in the heart of a longstanding invasion, an exotic grass is still invading. Range infilling potentially has the greatest impact on native communities and ecosystem processes, but receives much less attention than range expansion. \u2018Snapshot\u2019 studies of invasive plant dispersal, habitat and propagule limitations cannot determine whether a landscape is saturated or whether a species is actively infilling empty patches. We investigate the mechanisms underlying invasive plant infilling by tracking the localized movement and expansion of Microstegium vimineum populations from 2009 to 2011 at sites along a 100-km regional gradient in eastern U.S. deciduous forests. We find that infilling proceeds most rapidly where the invasive plants occur in warm, moist habitats adjacent to roads: under these conditions they produce copious seed, the dispersal distances of which increase exponentially with proximity to roadway. Invasion then appears limited where conditions are generally dry and cool as propagule pressure tapers off. Invasion also is limited in habitats >1 m from road corridors, where dispersal distances decline precipitously. In contrast to propagule and dispersal limitations, we find little evidence that infilling is habitat limited, meaning that as long as M. vimineum seeds are available and transported, the plant generally invades quite vigorously. Our results suggest an invasive species continues to spread, in a stratified manner, within the invaded landscape long after first arriving. These dynamics conflict with traditional invasion models that emphasize an invasive edge with distinct boundaries. We find that propagule pressure and dispersal regulate infilling, providing the basis for projecting spread and landscape coverage, ecological effects and the efficacy of containment strategies.", "which hypothesis ?", "Propagule pressure", 1002.0, 1020.0], ["A long\u2010term field experiment in limestone grassland near Buxton (North Derbyshire, United Kingdom) was designed to identify plant attributes and vegetation characteristics conducive to successful invasion. Plots containing crossed, continuous gradients of fertilizer addition and disturbance intensity were subjected to a single\u2010seed inoculum comprising a wide range of plant functional types and 54 species not originally present at the site. Several disturbance treatments were applied; these included the creation of gaps of contrasting size and the mowing of the vegetation to different heights and at different times of the year. This paper analyzes the factors controlling the initial phase of the resulting invasions within the plots subject to gap creation. The susceptibility of the indigenous community to invasion was strongly related to the availability of bare ground created, but greatest success occurred where disturbance coincided with eutrophication. 
Disturbance damage to the indigenous dominants (particularly Festuca ovina) was an important determinant of seedling establishment by the sown invaders. Large seed size was identified as an important characteristic allowing certain species to establish relatively evenly across the productivity\u2010disturbance matrix; smaller\u2010seeded species were more dependent on disturbance for establishment. Successful and unsuccessful invaders were also distinguished to some extent by differences in germination requirements and present geographical distribution.", "which hypothesis ?", "Disturbance", 280.0, 291.0], ["We studied the effect of propagule pressure on the establishment and subsequent spread of the invasive little fire ant Wasmannia auropunctata in a Gabonese oilfield in lowland rain forest. Oil well drilling, the major anthropogenic disturbance over the past 21 years in the area, was used as an indirect measure of propagule pressure. An analysis of 82 potential introductions at oil production platforms revealed that the probability of successful establishment significantly increased with the number of drilling events. Specifically, the shape of the dose\u2013response establishment curve could be closely approximated by a Poisson process with a 34% chance of infestation per well drilled. Consistent with our knowledge of largely clonal reproduction by W. auropunctata, the shape of the establishment curve suggested that the ants were not substantially affected by Allee effects, probably greatly contributing to this species\u2019 success as an invader. By contrast, the extent to which W. auropunctata spread beyond the point of initial introduction, and thus the extent of its damage to diversity of other ant species, was independent of propagule pressure. These results suggest that while establishment success depends on propagule pressure, other ecological or genetic factors may limit the extent of further spread. Knowledge of the shape of the dose\u2013response establishment curve should prove useful in modelling the future spread of W. auropunctata and perhaps the spread of other clonal organisms.", "which hypothesis ?", "Propagule pressure", 25.0, 43.0], ["Understanding the factors that drive commonness and rarity of plant species and whether these factors differ for alien and native species are key questions in ecology. If a species is to become common in a community, incoming propagules must first be able to establish. The latter could be determined by competition with resident plants, the impacts of herbivores and soil biota, or a combination of these factors. We aimed to tease apart the roles that these factors play in determining establishment success in grassland communities of 10 alien and 10 native plant species that are either common or rare in Germany, and from four families. In a two\u2010year multisite field experiment, we assessed the establishment success of seeds and seedlings separately, under all factorial combinations of low vs. high disturbance (mowing vs mowing and tilling of the upper soil layer), suppression or not of pathogens (biocide application) and, for seedlings only, reduction or not of herbivores (net\u2010cages). Native species showed greater establishment success than alien species across all treatments, regardless of their commonness. Moreover, establishment success of all species was positively affected by disturbance. Aliens showed lower establishment success in undisturbed sites with biocide application. 
Release of the undisturbed resident community from pathogens by biocide application might explain this lower establishment success of aliens. These findings were consistent for establishment from either seeds or seedlings, although less significantly so for seedlings, suggesting a more important role of pathogens in very early stages of establishment after germination. Herbivore exclusion did play a limited role in seedling establishment success. Synthesis: In conclusion, we found that less disturbed grassland communities exhibited strong biotic resistance to establishment success of species, whether alien or native. However, we also found evidence that alien species may benefit weakly from soilborne enemy release, but that this advantage over native species is lost when the latter are also released by biocide application. Thus, disturbance was the major driver for plant species establishment success and effects of pathogens on alien plant establishment may only play a minor role.", "which hypothesis ?", "Enemy release", 2009.0, 2022.0], ["Models and observational studies have sought patterns of predictability for invasion of natural areas by nonindigenous species, but with limited success. In a field experiment using forest understory plants, we jointly manipulated three hypothesized determinants of biological invasion outcome: resident diversity, physical disturbance and abiotic conditions, and propagule pressure. The foremost constraints on net habitat invasibility were the number of propagules that arrived at a site and naturally varying resident plant density. The physical environment (flooding regime) and the number of established resident species had negligible impact on habitat invasibility as compared to propagule pressure, despite manipulations that forced a significant reduction in resident richness, and a gradient in flooding from no flooding to annual flooding. This is the first experimental study to demonstrate the primacy of propagule pressure as a determinant of habitat invasibility in comparison with other candidate controlling factors.", "which hypothesis ?", "Propagule pressure", 364.0, 382.0], ["ABSTRACT The Asian tiger mosquito, Aedes albopictus (Skuse), is perhaps the most successful invasive mosquito species in contemporary history. In the United States, Ae. albopictus has spread from its introduction point in southern Texas to as far north as New Jersey (i.e., a span of \u224814\u00b0 latitude). This species experiences seasonal constraints in activity because of cold temperatures in winter in the northern United States, but is active year-round in the south. We performed a laboratory experiment to examine how life-history traits of Ae. albopictus from four populations (New Jersey [39.4\u00b0 N], Virginia [38.6\u00b0 N], North Carolina [35.8\u00b0 N], Florida [27.6\u00b0 N]) responded to photoperiod conditions that mimic approaching winter in the north (short static daylength, short diminishing daylength) or relatively benign summer conditions in the south (long daylength), at low and high larval densities. Individuals from northern locations were predicted to exhibit reduced development times and to emerge smaller as adults under short daylength, but be larger and take longer to develop under long daylength. Life-history traits of southern populations were predicted to show less plasticity in response to daylength because of low probability of seasonal mortality in those areas. 
Males and females responded strongly to photoperiod regardless of geographic location, being generally larger but taking longer to develop under the long daylength compared with short day lengths; adults of both sexes were smaller when reared at low larval densities. Adults also differed in mass and development time among locations, although this effect was independent of density and photoperiod in females but interacted with density in males. Differences between male and female mass and development times were greater in the long photoperiod, suggesting differences between the sexes in their reaction to different photoperiods. This work suggests that Ae. albopictus exhibits sex-specific phenotypic plasticity in life-history traits matching variation in important environmental variables.", "which hypothesis ?", "Phenotypic plasticity", 1977.0, 1998.0], ["1 The search for general characteristics of invasive species has not been very successful yet. A reason for this could be that current invasion patterns are mainly reflecting the introduction history (i.e. time since introduction and propagule pressure) of the species. Accurate data on the introduction history are, however, rare, particularly for introduced alien species that have not established. As a consequence, few studies that tested for the effects of species characteristics on invasiveness corrected for introduction history. 2 We tested whether the naturalization success of 582 North American woody species in Europe, measured as the proportion of European geographic regions in which each species is established, can be explained by their introduction history. For 278 of these species we had data on characteristics related to growth form, life cycle, growth, fecundity and environmental tolerance. We tested whether naturalization success can be further explained by these characteristics. In addition, we tested whether the effects of species characteristics differ between growth forms. 3 Both planting frequency in European gardens and time since introduction significantly increased naturalization success, but the effect of the latter was relatively weak. After correction for introduction history and taxonomy, six of the 26 species characteristics had significant effects on naturalization success. Leaf retention and precipitation tolerance increased naturalization success. Tree species were only 56% as likely to naturalize as non\u2010tree species (vines, shrubs and subshrubs), and the effect of planting frequency on naturalization success was much stronger for non\u2010trees than for trees. On the other hand, the naturalization success of trees, but not for non\u2010trees, increased with native range size, maximum plant height and seed spread rate. 4 Synthesis. Our results suggest that introduction history, particularly planting frequency, is an important determinant of current naturalization success of North American woody species (particularly of non\u2010trees) in Europe. Therefore, studies comparing naturalization success among species should correct for introduction history. Species characteristics are also significant determinants of naturalization success, but their effects may differ between growth forms.", "which hypothesis ?", "Propagule pressure", 234.0, 252.0], ["Aim Biological invasions pose a major conservation threat and are occurring at an unprecedented rate. Disproportionate levels of invasion across the landscape indicate that propagule pressure and ecosystem characteristics can mediate invasion success.
However, most invasion predictions relate to species\u2019 characteristics (invasiveness) and habitat requirements. Given myriad invaders and the inability to generalize from single\u2010species studies, more general predictions about invasion are required. We present a simple new method for characterizing and predicting landscape susceptibility to invasion that is not species\u2010specific.", "which hypothesis ?", "Propagule pressure", 173.0, 191.0], ["Hybridization and introgression between introduced and native salmonids threaten the continued persistence of many inland cutthroat trout species. Environmental models have been developed to predict the spread of introgression, but few studies have assessed the role of propagule pressure. We used an extensive set of fish stocking records and geographic information system (GIS) data to produce a spatially explicit index of potential propagule pressure exerted by introduced rainbow trout in the Upper Kootenay River, British Columbia, Canada. We then used logistic regression and the information-theoretic approach to test the ability of a set of environmental and spatial variables to predict the level of introgression between native westslope cutthroat trout and introduced rainbow trout. Introgression was assessed using between four and seven co-dominant, diagnostic nuclear markers at 45 sites in 31 different streams. The best model for predicting introgression included our GIS propagule pressure index and an environmental variable that accounted for the biogeoclimatic zone of the site (r\u00b2 = 0.62). This model was 1.4 times more likely to explain introgression than the next-best model, which consisted of only the propagule pressure index variable. We created a composite model based on the model-averaged results of the seven top models that included environmental, spatial, and propagule pressure variables. The propagule pressure index had the highest importance weight (0.995) of all variables tested and was negatively related to sites with no introgression. This study used an index of propagule pressure and demonstrated that propagule pressure had the greatest influence on the level of introgression between a native and introduced trout in a human-induced hybrid zone.", "which hypothesis ?", "Propagule pressure", 270.0, 288.0], ["The probability of a bird species going extinct on oceanic islands in the period since European colonization is predicted by the number of introduced predatory mammal species, but the exact mechanism driving this relationship is unknown. One possibility is that larger exotic predator communities include a wider array of predator functional types. These predator communities may target native bird species with a wider range of behavioral or life history characteristics. We explored the hypothesis that the functional diversity of the exotic predators drives bird species extinctions. We also tested how different combinations of functionally important traits of the predators explain variation in extinction probability. Our results suggest a unique impact of each introduced mammal species on native bird populations, as opposed to a situation where predators exhibit functional redundancy.
Further, the impact of each additional predator may be facilitated by those already present, suggesting the possibility of \u201cinvasional meltdown.\u201d", "which hypothesis ?", "Invasional meltdown", 1019.0, 1038.0], ["1 Limiting similarity theory predicts that successful invaders should differ functionally from species already present in the community. This theory has been tested by manipulating the functional richness of communities, but not other aspects of functional diversity such as the identity of dominant species. Because dominant species are known to have strong effects on ecosystem functioning, I hypothesized that successful invaders should be functionally dissimilar from community dominants. 2 To test this hypothesis, I added seeds of 17 different species to two different experiments: one in a natural old-field community that had patches dominated by different plant species, and one in grassland mesocosms that varied in the identity of the dominant species but not in species richness or evenness. I used indicator species analyses to test whether invaders had higher establishment success in plots with functionally different dominant species. 3 A large percentage of invader species (47\u201371%) in both experiments showed no difference in affinity across the different dominant treatments, although one\u2010third of species did show some evidence for limiting similarity. Exotic invaders had much higher invasion success than native invaders, and seemed to be inhibited by dominant species that were functionally similar. However, even these invasion patterns were not consistent across the two experiments. 4 The results from this study show that there is some evidence that dominant species suppress invasion by functionally similar species, beyond the effect of simple presence or absence of species in communities, although it is not the sole factor affecting invasion success. Patterns of invasion success were inconsistent across species and experiments, indicating that other studies using only a single species of invader to make conclusions about community invasibility should be interpreted with caution.", "which hypothesis ?", "limiting similarity", 2.0, 21.0], ["Background: Phenotypic plasticity and ecotypic differentiation have been suggested as the main mechanisms by which widely distributed species can colonise broad geographic areas with variable and stressful conditions. Some invasive plant species are among the most widely distributed plants worldwide. Plasticity and local adaptation could be the mechanisms for colonising new areas. Aims: We addressed if Taraxacum officinale from native (Alps) and introduced (Andes) stock responded similarly to drought treatment, in terms of photosynthesis, foliar angle, and flowering time. We also evaluated if ontogeny affected fitness and physiological responses to drought. Methods: We carried out two common garden experiments with both seedlings and adults (F2) of T. officinale from its native and introduced ranges in order to evaluate their plasticity and ecotypic differentiation under a drought treatment. Results: Our data suggest that the functional response of T. officinale individuals from the introduced range to drought is the result of local adaptation rather than plasticity. In addition, the individuals from the native distribution range were more sensitive to drought than those from the introduced distribution ranges at both seedling and adult stages.
Conclusions: These results suggest that local adaptation may be a possible mechanism underlying the successful invasion of T. officinale in high mountain environments of the Andes.", "which hypothesis ?", "Phenotypic plasticity", 12.0, 33.0], ["The quantification of invader impacts remains a major hurdle to understanding and managing invasions. Here, we demonstrate a method for quantifying the community-level impact of multiple plant invaders by applying Parker et al.'s (1999) equation (impact = range \u00d7 local abundance \u00d7 per capita effect or per unit effect) using data from 620 survey plots from 31 grasslands across west-central Montana, USA. In testing for interactive effects of multiple invaders on native plant abundance (percent cover), we found no evidence for invasional meltdown or synergistic interactions for the 25 exotics tested. While much concern exists regarding impact thresholds, we also found little evidence for nonlinear relationships between invader abundance and impacts. These results suggest that management actions that reduce invader abundance should reduce invader impacts monotonically in this system. Eleven of 25 invaders had significant per unit impacts (negative local-scale relationships between invader and native cover). In decomposing the components of impact, we found that local invader abundance had a significant influence on the likelihood of impact, but range (number of plots occupied) did not. This analysis helped to differentiate measures of invasiveness (local abundance and range) from impact to distinguish high-impact invaders from invaders that exhibit negligible impacts, even when widespread. Distinguishing between high- and low-impact invaders should help refine trait-based prediction of problem species. Despite the unique information derived from evaluation of per unit effects of invaders, \u2018invasiveness\u2019 scores based on range and local abundance produced similar rankings to impact scores that incorporated estimates of per unit effects. Hence, information on range and local abundance alone was sufficient to identify problematic plant invaders at the regional scale. In comparing empirical data on invader impacts to the state noxious weed list, we found that the noxious weed list captured 45% of the high-impact invaders but missed 55% and assigned the lowest risk category to the highest-impact invader. While such subjective weed lists help to guide invasive species management, empirical data are needed to develop more comprehensive rankings of ecological impacts. Using weed lists to classify invaders for testing invasion theory is not well supported.", "which hypothesis ?", "Invasional meltdown", 530.0, 549.0], ["ABSTRACT Invasion ecology offers a unique opportunity to examine drivers of ecological processes that regulate communities. Biotic resistance to nonindigenous species establishment is thought to be greater in communities that have not been disturbed by human activities. Alternatively, invasion may occur wherever environmental conditions are appropriate for the colonist, regardless of the composition of the existing community and the level of disturbance. We tested these hypotheses by investigating distribution of the nonindigenous amphipod, Echinogammarus ischnus Stebbing, 1899, in co-occurrence with a widespread amphipod, Gammarus fasciatus Say, 1818, at 97 sites across the Laurentian Great Lakes coastal margins influenced by varying types and levels of anthropogenic stress. E.
ischnus was distributed independently of disturbance gradients related to six anthropogenic disturbance variables that summarized overall nutrient input, nitrogen, and phosphorus load carried from the adjacent coastal watershed, agricultural land area, human population density, overall pollution loading, and the site-specific dominant stressor, consistent with the expectations of regulation by general environmental characteristics. Our results support the view that the biotic facilitation by dreissenid mussels and distribution of suitable habitats better explain E. ischnus' distribution at Laurentian Great Lakes coastal margins than anthropogenic disturbance.", "which hypothesis ?", "Disturbance", 446.0, 457.0], ["Understanding the role of enemy release in biological invasions requires an assessment of the invader's home range, the number of invasion events and enemy prevalence. The common wasp (Vespula vulgaris) is a widespread invader. We sought to determine the Eurasian origin of this wasp and examined world\u2010wide populations for microsporidian pathogen infections to investigate enemy release.", "which hypothesis ?", "Enemy release", 26.0, 39.0], ["Holcus lanatus L. can colonise a wide range of sites within the naturalised grassland of the Humid Dominion of Chile. The objectives were to determine plant growth mechanisms and strategies that have allowed H. lanatus to colonise contrasting pastures and to determine the existence of ecotypes of H. lanatus in southern Chile. Plants of H. lanatus were collected from four geographic zones of southern Chile and established in a randomised complete block design with four replicates. Five newly emerging tillers were marked per plant and evaluated at the vegetative, pre-ear emergence, complete emerged inflorescence, end of flowering period, and mature seed stages. At each evaluation, one marked tiller was harvested per plant. The variables measured included lamina length and width, tiller height, length of the inflorescence, total number of leaves, and leaf, stem, and inflorescence mass. At each phenological stage, groups of accessions were statistically formed using cluster analysis. The grouping of accessions (cluster analysis) into statistically different groups (ANOVA and canonical variate analysis) indicated the existence of different ecotypes. The phenotypic variation within each group of the accessions suggested that each group has its own phenotypic plasticity. It is concluded that the successful colonisation by H. lanatus has resulted from diversity within the species.", "which hypothesis ?", "Phenotypic plasticity", 1272.0, 1293.0], ["ABSTRACT: We examined effects of a natural disturbance (hurricanes) on potential invasion of tree islands by an exotic plant (Old World climbing fern, Lygodium microphyllum) in the Arthur R. Marshall Loxahatchee National Wildlife Refuge, Florida. Three major hurricanes in 2004 and 2005 caused varying degrees of impacts to trees on tree islands within the Refuge. Physical impacts of hurricanes were hypothesized to promote invasion and growth of L. microphyllum. We compared presence and density of L. microphyllum in plots of disturbed soil created by hurricane-caused treefalls to randomly selected non-disturbed plots on 12 tree islands. We also examined relationships between disturbed area size, canopy cover, and presence of standing water on presence and density of L. microphyllum.
Lygodium microphyllum was present in significantly more treefall plots than random non-treefall plots (76% of the treefall plots (N=55) and only 14% of random non-treefall plots (N=55)). Density of L. microphyllum was higher in treefall plots compared to random non-disturbed plots (6.0 stems per m\u00b2 for treefall plots; 0.5 stems per m\u00b2 for random non-disturbed plots), and L. microphyllum density was correlated with disturbed area size (P = 0.005). Lygodium microphyllum presence in treefall sites was significantly related to canopy cover and presence of water: it was present in five times more treefalls with water than those without. These results suggest that disturbances, such as hurricanes, that result in canopy openings and the creation of disturbed areas with standing water contribute to the ability of L. microphyllum to invade natural areas.", "which hypothesis ?", "Disturbance", 43.0, 54.0], ["Several hypotheses proposed to explain the success of introduced species focus on altered interspecific interactions. One of the most prominent, the Enemy Release Hypothesis, posits that invading species benefit compared to their native counterparts if they lose their herbivores and pathogens during the invasion process. We previously reported on a common garden experiment (from 2002) in which we compared levels of herbivory between 30 taxonomically paired native and introduced old-field plants. In this phylogenetically controlled comparison, herbivore damage tended to be higher on introduced than on native plants. This striking pattern, the opposite of current theory, prompted us to further investigate herbivory and several other interspecific interactions in a series of linked experiments with the same set of species. Here we show that, in these new experiments, introduced plants, on average, received less insect herbivory and were subject to half the negative soil microbial feedback compared to natives; attack by fungal and viral pathogens also tended to be reduced on introduced plants compared to natives. Although plant traits (foliar C:N, toughness, and water content) suggested that introduced species should be less resistant to generalist consumers, they were not consistently more heavily attacked. Finally, we used meta-analysis to combine data from this study with results from our previous work to show that escape generally was inconsistent among guilds of enemies: there were few instances in which escape from multiple guilds occurred for a taxonomic pair, and more cases in which the patterns of escape from different enemies canceled out. Our examination of multiple interspecific interactions demonstrates that escape from one guild of enemies does not necessarily imply escape from other guilds. Because the effects of each guild are likely to vary through space and time, the net effect of all enemies is also likely to be variable. The net effect of these interactions may create \u201cinvasion opportunity windows\u201d: times when introduced species make advances in native communities.", "which hypothesis ?", "Enemy release", 149.0, 162.0], ["Few invaded ecosystems are free from habitat loss and disturbance, leading to uncertainty whether dominant invasive species are driving community change or are passengers along for the environmental ride. The \u201cdriver\u201d model predicts that invaded communities are highly interactive, with subordinate native species being limited or excluded by competition from the exotic dominants.
The \u201cpassenger\u201d model predicts that invaded communities are primarily structured by noninteractive factors (environmental change, dispersal limitation) that are less constraining on the exotics, which thus dominate. We tested these alternative hypotheses in an invaded, fragmented, and fire-suppressed oak savanna. We examined the impact of two invasive dominant perennial grasses on community structure using a reduction (mowing of aboveground biomass) and removal (weeding of above- and belowground biomass) experiment conducted at different seasons and soil depths. We examined the relative importance of competition vs. dispersal limitation with experimental seed additions. Competition by the dominants limits the abundance and reproduction of many native and exotic species based on their increased performance with removals and mowing. The treatments resulted in increased light availability and bare soil; soil moisture and N were unaffected. Although competition was limiting for some, 36 of 79 species did not respond to the treatments or declined in the absence of grass cover. Seed additions revealed that some subordinates are dispersal limited; competition alone was insufficient to explain their rarity even though it does exacerbate dispersal inefficiencies by lowering reproduction. While the net effects of the dominants were negative, their presence restricted woody plants, facilitated seedling survival with moderate disturbance (i.e., treatments applied in the fall), or was not the primary limiting factor for the occurrence of some species. Finally, the species most functionally distinct from the dominants (forbs, woody plants) responded most significantly to the treatments. This suggests that relative abundance is determined more by trade-offs relating to environmental conditions (long-term fire suppression) than to traits relating to resource capture (which should most impact functionally similar species). This points toward the passenger model as the underlying cause of exotic dominance, although their combined effects (suppressive and facilitative) on community structure are substantial.", "which hypothesis ?", "Disturbance", 54.0, 65.0], ["Biological invasions are a global phenomenon that can accelerate disturbance regimes and facilitate colonization by other nonnative species. In a coastal grassland in northern California, we conducted a four-year exclosure experiment to assess the effects of soil disturbances by feral pigs (Sus scrofa) on plant community composition and soil nitrogen availability. Our results indicate that pig disturbances had substantial effects on the community, although many responses varied with plant functional group, geographic origin (native vs. exotic), and grassland type. (\u201cShort patches\u201d were dominated by annual grasses and forbs, whereas \u201ctall patches\u201d were dominated by perennial bunchgrasses.) Soil disturbances by pigs increased the richness of exotic plant species by 29% and native taxa by 24%. Although native perennial grasses were unaffected, disturbances reduced the biomass of exotic perennial grasses by 52% in tall patches and had no effect in short patches. Pig disturbances led to a 69% decrease in biomass of exotic annual grasses in tall patches but caused a 62% increase in short patches. Native, nongrass monocots exhibited the opposite biomass pattern as those seen for exotic annual grasses, with disturbance causing an 80% increase in tall patches and a 56% decrease in short patches.
Native forbs were unaffected by disturbance, whereas the biomass of exotic forbs increased by 79% with disturbance in tall patches and showed no response in short patches. In contrast to these vegetation results, we found no evidence that pig disturbances affected nitrogen mineralization rates or soil moisture availability. Thus, we hypothesize that the observed vegetation changes were due to space clearing by pigs that provided greater opportunities for colonization and reduced intensity of competition, rather than changes in soil characteristics. In summary, although responses were variable, disturbances by feral pigs generally promoted the continued invasion of this coastal grassland by exotic plant taxa.", "which hypothesis ?", "Disturbance", 65.0, 76.0], ["The widely held belief that riparian communities are highly invasible to exotic plants is based primarily on comparisons of the extent of invasion in riparian and upland communities. However, because differences in the extent of invasion may simply result from variation in propagule supply among recipient environments, true comparisons of invasibility require that both invasion success and propagule pressure are quantified. In this study, we quantified propagule pressure in order to compare the invasibility of riparian and upland forests and assess the accuracy of using a community's level of invasion as a surrogate for its invasibility. We found the extent of invasion to be a poor proxy for invasibility. The higher level of invasion in the studied riparian forests resulted from greater propagule availability rather than higher invasibility. Furthermore, failure to account for propagule pressure may confound our understanding of general invasion theories. Ecological theory suggests that species-rich communities should be less invasible. However, we found significant relationships between species diversity and invasion extent, but no diversity-invasibility relationship was detected for any species. Our results demonstrate that using a community's level of invasion as a surrogate for its invasibility can confound our understanding of invasibility and its determinants.", "which hypothesis ?", "Propagule pressure", 393.0, 411.0], ["Aim Island faunas, particularly those with high levels of endemism, usually are considered especially susceptible to disruption from habitat disturbance and invasive alien species. We tested this general hypothesis by examining the distribution of small mammals along gradients of anthropogenic habitat disturbance in northern Luzon Island, an area with a very high level of mammalian endemism.", "which hypothesis ?", "Disturbance", 141.0, 152.0], ["A current challenge in ecology is to better understand the magnitude, variation, and interaction in the factors that limit the invasiveness of exotic species. We conducted a factorial experiment involving herbivore manipulation (insecticide-in-water vs. water-only control) and seven densities of introduced nonnative Cirsium vulgare (bull thistle) seed. The experiment was repeated with two seed cohorts at eight grassland sites uninvaded by C. vulgare in the central Great Plains, USA. Herbivory by native insects significantly reduced thistle seedling density, causing the largest reductions in density at the highest propagule inputs. The magnitude of this herbivore effect varied widely among sites and between cohort years. The combination of herbivory and lower propagule pressure increased the rate at which new C.
vulgare populations failed to establish during the initial stages of invasion. This experiment demonstrates that the interaction between biotic resistance by native insects, propagule pressure, and spatiotemporal variation in their effects was crucial to the initial invasion by this Eurasian plant in the western tallgrass prairie.", "which hypothesis ?", "Propagule pressure", 769.0, 787.0], ["Average inoculum size and number of introductions are known to have positive effects on population persistence. However, whether these factors affect persistence independently or interact is unknown. We conducted a two-factor experiment in which 112 populations of parthenogenetic Daphnia magna were maintained for 41 days to study effects of inoculum size and introduction frequency on: (i) population growth, (ii) population persistence and (iii) time-to-extinction. We found that the interaction of inoculum size and introduction frequency\u2014the immigration rate\u2014affected all three dependent variables, while population growth was additionally affected by introduction frequency. We conclude that for this system the most important aspect of propagule pressure is immigration rate, with relatively minor additional effects of introduction frequency and negligible effects of inoculum size.", "which hypothesis ?", "Propagule pressure", 743.0, 761.0], ["Alien plant species have rapidly invaded and successfully displaced native species in many grasslands of western North America. Thus, the status of alien species in the nature reserve grasslands of this region warrants special attention. This study describes alien flora in nine fescue grassland study sites adjacent to three types of transportation corridors\u2014primary roads, secondary roads, and backcountry trails\u2014in Glacier National Park, Montana (U.S.A.). Parallel transects, placed at varying distances from the adjacent road or trail, were used to determine alien species richness and frequency at individual study sites. Fifteen alien species were recorded, two Eurasian grasses, Phleum pratense and Poa pratensis, being particularly common in most of the study sites. In sites adjacent to primary and secondary roads, alien species richness declined out to the most distant transect, suggesting that alien species are successfully invading grasslands from the roadside area. In study sites adjacent to backcountry trails, absence of a comparable decline and unexpectedly high levels of alien species richness 100 m from the trailside suggest that alien species have been introduced in off-trail areas. The results of this study imply that in spite of low levels of livestock grazing and other anthropogenic disturbances, fescue grasslands in nature reserves of this region are vulnerable to invasion by alien flora. Given the prominent role that roadsides play in the establishment and dispersal of alien flora, road construction should be viewed from a biological, rather than an engineering, perspective. Nature reserve managers should establish effective roadside vegetation management programs that include monitoring, quickly treating keystone alien species upon their initial occurrence in nature reserves, and creating buffer zones on roadsides leading to nature reserves. Resumen: Introduced plant species have rapidly invaded and successfully displaced native species in grasslands of western North America. The status of introduced species in the natural grassland reserves of this region therefore demands special attention.
This study describes the introduced flora in nine natural fescue grasslands; the study areas are adjacent to three types of transportation corridors\u2014primary roads, secondary roads, and backcountry trails\u2014in Glacier National Park, Montana (U.S.A.). To determine introduced species richness and frequency, parallel transects were laid out at varying distances from the adjacent road or trail in the study areas. Fifteen introduced species were recorded. Two Eurasian grasses, Phleum pratense and Poa pratensis, were particularly abundant in most of the study areas. In sites adjacent to primary and secondary roads, introduced species richness declined toward the most distant transects, suggesting that introduced species are successfully invading the grasslands from areas bordering the roads. In the study areas adjacent to backcountry trails no comparable decline was found; unexpectedly high levels of introduced species richness 100 m from the trails suggest that alien species have been introduced from areas off the trails. The results of this study imply that despite low levels of grazing and other anthropogenic disturbances, the fescue grasslands in the nature reserves of this region are vulnerable to invasion by introduced flora. Given the prominent role that roads play in the establishment and dispersal of introduced flora, road construction should be viewed from a biological rather than a purely engineering perspective. Nature reserve managers should establish effective roadside vegetation management programs. These programs should include monitoring, rapid treatment of keystone introduced species as soon as they are detected in nature reserves, and the creation of buffer zones on the roads leading to the nature reserves.", "which hypothesis ?", "Disturbance", NaN, NaN], ["Plantations of rapidly growing trees are becoming increasingly common because the high productivity can enhance local economies, support improvements in educational systems, and generally improve the quality of life in rural communities. Landowners frequently choose to plant nonindigenous species; one rationalization has been that silvicultural productivity is enhanced when trees are separated from their native herbivores and pathogens. The expectation of enemy reduction in nonindigenous species has theoretical and empirical support from studies of the enemy release hypothesis (ERH) in the context of invasion ecology, but its relevance to forestry has not been evaluated. We evaluated ERH in the productive forests of Galicia, Spain, where there has been a profusion of pine plantations, some with the indigenous Pinus pinaster, but increasingly with the nonindigenous P. radiata. Here, one of the most important pests of pines is the indigenous bark beetle, Tomicus piniperda. In support of ERH, attacks by T. piniperda were more than twice as great in stands of P. pinaster compared to P. radiata. This differential held across a range of tree ages and beetle abundance. However, this extension of ERH to forestry failed in the broader sense because beetle attacks, although fewer on P. radiata, reduced productivity of P. radiata more than that of P.
pinaster (probably because more photosynthetic tissue is lost per beetle attack in P. radiata). Productivity of the nonindigenous pine was further reduced by the pathogen, Sphaeropsis sapinea, which infected up to 28% of P. radiata but was absent in P. pinaster. This was consistent with the forestry axiom (antithetical to ERH) that trees planted \"off-site\" are more susceptible to pathogens. Fungal infections were positively correlated with beetle attacks; apparently T. piniperda facilitates S. sapinea infections by creating wounds and by carrying fungal propagules. A globally important component in the diminution of indigenous flora has been the deliberate large-scale propagation of nonnative trees for silviculture. At least for Pinus forestry in Spain, reduced losses to pests did not rationalize the planting of nonindigenous trees. There would be value in further exploration of relations between invasion ecology and the forestry of nonindigenous trees.", "which hypothesis ?", "Enemy release", 559.0, 572.0], ["Plant species introduced into novel ranges may become invasive due to evolutionary change, phenotypic plasticity, or other biotic or abiotic mechanisms. Evolution of introduced populations could be the result of founder effects, drift, hybridization, or adaptation to local conditions, which could enhance the invasiveness of introduced species. However, understanding whether the success of invading populations is due to genetic differences between native and introduced populations may be obscured by origin x environment interactions. That is, studies conducted under a limited set of environmental conditions may show inconsistent results if native or introduced populations are differentially adapted to specific conditions. We tested for genetic differences between native and introduced populations, and for origin x environment interactions, between native (China) and introduced (U.S.) populations of the invasive annual grass Microstegium vimineum (stiltgrass) across 22 common gardens spanning a wide range of habitats and environmental conditions. On average, introduced populations produced 46% greater biomass and had 7.4% greater survival, and outperformed native range populations in every common garden. However, we found no evidence that introduced Microstegium exhibited greater phenotypic plasticity than native populations. Biomass of Microstegium was positively correlated with light and resident community richness and biomass across the common gardens. However, these relationships were equivalent for native and introduced populations, suggesting that the greater mean performance of introduced populations is not due to unequal responses to specific environmental parameters. Our data on performance of invasive and native populations suggest that post-introduction evolutionary changes may have enhanced the invasive potential of this species. Further, the ability of Microstegium to survive and grow across the wide variety of environmental conditions demonstrates that few habitats are immune to invasion.", "which hypothesis ?", "Phenotypic plasticity", 91.0, 112.0], ["Predicting community susceptibility to invasion has become a priority for preserving biodiversity. We tested the hypothesis that the occurrence and abundance of the seaweed Caulerpa racemosa in the north-western (NW) Mediterranean would increase with increasing levels of human disturbance. Data from a survey encompassing areas subjected to different human influences (i.e. 
from urbanized to protected areas) were fitted by means of generalized linear mixed models, including descriptors of habitats and communities. The incidence of occurrence of C. racemosa was greater on urban than extra-urban or protected reefs, along the coast of Tuscany and NW Sardinia, respectively. Within the Marine Protected Area of Capraia Island (Tuscan Archipelago), the probability of detecting C. racemosa did not vary according to the degree of protection (partial versus total). Human influence was, however, a poor predictor of the seaweed cover. At the seascape level, C. racemosa was more widely spread within degraded (i.e. Posidonia oceanica dead matte or algal turfs) than in better preserved habitats (i.e. canopy-forming macroalgae or P. oceanica seagrass meadows). At a smaller spatial scale, the presence of the seaweed was positively correlated to the diversity of macroalgae and negatively to that of sessile invertebrates. These results suggest that C. racemosa can take advantage of habitat degradation. Thus, predicting invasion scenarios requires a thorough knowledge of ecosystem structure, at a hierarchy of levels of biological organization (from the landscape to the assemblage) and detailed information on the nature and intensity of sources of disturbance and spatial scales at which they operate.", "which hypothesis ?", "Disturbance", 278.0, 289.0], ["Islands can serve as model systems for understanding how biological invasions affect community structure and ecosystem function. Here we show invasion by the alien crazy ant Anoplolepis gracilipes causes a rapid, catastrophic shift in the rain forest ecosystem of a tropical oceanic island, affecting at least three trophic levels. In invaded areas, crazy ants extirpate the red land crab, the dominant endemic consumer on the forest floor. In doing so, crazy ants indirectly release seedling recruitment, enhance species richness of seedlings, and slow litter breakdown. In the forest canopy, new associations between this invasive ant and honeydew-secreting scale insects accelerate and diversify impacts. Sustained high densities of foraging ants on canopy trees result in high population densities of host-generalist scale insects and growth of sooty moulds, leading to canopy dieback and even deaths of canopy trees. The indirect fallout from the displacement of a native keystone species by an ant invader, itself abetted by introduced/cryptogenic mutualists, produces synergism in impacts to precipitate invasional meltdown in this system.", "which hypothesis ?", "Invasional meltdown", 1110.0, 1129.0], ["Invasive alien species might benefit from phenotypic plasticity by being able to (i) maintain fitness in stressful environments (\u2018robust\u2019), (ii) increase fitness in favourable environments (\u2018opportunistic\u2019), or (iii) combine both abilities (\u2018robust and opportunistic\u2019). Here, we applied this framework, for the first time, to an animal, the invasive slug, Arion lusitanicus, and tested (i) whether it has a more adaptive phenotypic plasticity compared with a congeneric native slug, Arion fuscus, and (ii) whether it is robust, opportunistic or both. During one year, we exposed specimens of both species to a range of temperatures along an altitudinal gradient (700\u20132400 m a.s.l.) and to high and low food levels, and we compared the responsiveness of two fitness traits: survival and egg production.
During summer, the invasive species had a more adaptive phenotypic plasticity, and at high temperatures and low food levels, it survived better and produced more eggs than A. fuscus, representing the robust phenotype. During winter, A. lusitanicus displayed a less adaptive phenotype than A. fuscus. We show that the framework developed for plants is also very useful for a better mechanistic understanding of animal invasions. Warmer summers and milder winters might lead to an expansion of this invasive species to higher altitudes and enhance its spread in the lowlands, supporting the concern that global climate change will increase biological invasions.", "which hypothesis ?", "Phenotypic plasticity", 42.0, 63.0], ["Introduced host populations may benefit from an "enemy release" through impoverishment of parasite communities made of both few imported species and few acquired local ones. Moreover, closely related competing native hosts can be affected by acquiring introduced taxa (spillover) and by increased transmission risk of native parasites (spillback). We determined the macroparasite fauna of invasive grey squirrels (Sciurus carolinensis) in Italy to detect any diversity loss, introduction of novel parasites or acquisition of local ones, and analysed variation in parasite burdens to identify factors that may increase transmission risk for native red squirrels (S. vulgaris). Based on 277 grey squirrels sampled from 7 populations characterised by different time scales in introduction events, we identified 7 gastro-intestinal helminths and 4 parasite arthropods. Parasite richness is lower than in grey squirrel's native range and independent from introduction time lags. The most common parasites are Nearctic nematodes Strongyloides robustus (prevalence: 56.6%) and Trichostrongylus calcaratus (6.5%), red squirrel flea Ceratophyllus sciurorum (26.0%) and Holarctic sucking louse Neohaematopinus sciuri (17.7%). All other parasites are European or cosmopolitan species with prevalence below 5%. S. robustus abundance is positively affected by host density and body mass, C. sciurorum abundance increases with host density and varies with seasons. Overall, we show that grey squirrels in Italy may benefit from an enemy release, and both spillback and spillover processes towards native red squirrels may occur.", "which hypothesis ?", "Enemy release", 48.0, 61.0], ["Both anthropogenic habitat disturbance and the breadth of habitat use by alien species have been found to facilitate invasion into novel environments, and these factors have been hypothesized to be important within coccinellid communities specifically. In this study, we address two questions: (1) Do alien species benefit more than native species from human\u2010disturbed habitats? (2) Are alien species more generalized in their habitat use than natives within the invaded range or can their abundance patterns be explained by specialization on the most common habitats?", "which hypothesis ?", "Disturbance", 27.0, 38.0], ["Genetic diversity is supposed to support the colonization success of expanding species, in particular in situations where microsite availability is constrained. Addressing the role of genetic diversity in plant invasion experimentally requires its manipulation independent of propagule pressure.
To assess the relative importance of these components for the invasion of Senecio vernalis, we created propagule mixtures of four levels of genotype diversity by combining seeds across remote populations, across proximate populations, within single populations and within seed families. In a first container experiment with constant Festuca rupicola density as matrix, genotype diversity was crossed with three levels of seed density. In a second experiment, we tested for effects of establishment limitation and genotype diversity by manipulating Festuca densities. Increasing genetic diversity had no effects on abundance and biomass of S. vernalis but positively affected the proportion of large individuals to small individuals. Mixtures composed from proximate populations had a significantly higher proportion of large individuals than mixtures composed from within seed families only. High propagule pressure increased emergence and establishment of S. vernalis but had no effect on individual growth performance. Establishment was favoured in containers with Festuca, but performance of surviving seedlings was higher in open soil treatments. For S. vernalis invasion, we found a shift in driving factors from density dependence to effects of genetic diversity across life stages. While initial abundance was mostly linked to the amount of seed input, genetic diversity, in contrast, affected later stages of colonization probably via sampling effects and seemed to contribute to filtering the genotypes that finally grew up. In consequence, when disentangling the mechanistic relationships of genetic diversity, seed density and microsite limitation in colonization of invasive plants, a clear differentiation between initial emergence and subsequent survival to juvenile and adult stages is required.", "which hypothesis ?", "Propagule pressure", 276.0, 294.0], ["1 Invading species typically need to overcome multiple limiting factors simultaneously in order to become established, and understanding how such factors interact to regulate the invasion process remains a major challenge in ecology. 2 We used the invasion of marine algal communities by the seaweed Sargassum muticum as a study system to experimentally investigate the independent and interactive effects of disturbance and propagule pressure in the short term. Based on our experimental results, we parameterized an integrodifference equation model, which we used to examine how disturbances created by different benthic herbivores influence the longer term invasion success of S. muticum. 3 Our experimental results demonstrate that in this system neither disturbance nor propagule input alone was sufficient to maximize invasion success. Rather, the interaction between these processes was critical for understanding how the S. muticum invasion is regulated in the short term. 4 The model showed that both the size and spatial arrangement of herbivore disturbances had a major impact on how disturbance facilitated the invasion, by jointly determining how much space\u2010limitation was alleviated and how readily disturbed areas could be reached by dispersing propagules. 5 Synthesis. Both the short\u2010term experiment and the long\u2010term model show that S. muticum invasion success is co\u2010regulated by disturbance and propagule pressure. 
Our results underscore the importance of considering interactive effects when making predictions about invasion success.", "which hypothesis ?", "Propagule pressure", 425.0, 443.0], ["1 The emerald ash borer Agrilus planipennis (Coleoptera: Buprestidae) (EAB), an invasive wood\u2010boring beetle, has recently caused significant losses of native ash (Fraxinus spp.) trees in North America. Movement of wood products has facilitated EAB spread, and heat sanitation of wooden materials according to International Standards for Phytosanitary Measures No. 15 (ISPM 15) is used to prevent this. 2 In the present study, we assessed the thermal conditions experienced during a typical heat\u2010treatment at a facility using protocols for pallet wood treatment under policy PI\u201007, as implemented in Canada. The basal high temperature tolerance of EAB larvae and pupae was determined, and the observed heating rates were used to investigate whether the heat shock response and expression of heat shock proteins occurred in fourth\u2010instar larvae. 3 The temperature regime during heat treatment greatly exceeded the ISPM 15 requirements of 56 \u00b0C for 30 min. Emerald ash borer larvae were highly tolerant of elevated temperatures, with some instars surviving exposure to 53 \u00b0C without any heat pre\u2010treatments. High temperature survival was increased by either slow warming or pre\u2010exposure to elevated temperatures and a recovery regime that was accompanied by up\u2010regulated hsp70 expression under some of these conditions. 4 Because EAB is highly heat tolerant and exhibits a fully functional heat shock response, we conclude that greater survival than measured in vitro is possible under industry treatment conditions (with the larvae still embedded in the wood). We propose that the phenotypic plasticity of EAB may lead to high temperature tolerance very close to conditions experienced in an ISPM 15 standard treatment.", "which hypothesis ?", "Phenotypic plasticity", 1579.0, 1600.0], ["The diversity and composition of a community are determined by a combination of local and regional processes. We conducted a field experiment to examine the impact of resource manipulations and seed addition on the invasibility and diversity of a low-productivity grassland. We manipulated resource levels both by a disturbance treatment that reduced adult plant cover in the spring of the first year and by addition of fertilizer every year. Seeds of 46 native species, both resident and nonresident to the community, were added in spring of the first year to determine the effects of recruitment limitation from local (seed limitation) and regional (dispersal limitation) sources on local species richness. Our results show that the unmanipulated community was not readily invasible. Seed addition increased the species richness of unmanipulated plots, but this was primarily due to increased occurrence of resident species. Nonresident species were only able to invade following a cover-reduction disturbance. Cover reduction resulted in an increase in nitrogen availability in the first year, but had no measurable effect on light availability in any year. In contrast, fertilization created a persistent increase in nitrogen availability that increased plant cover or biomass and reduced light penetration to ground level. Initially, fertilization had an overall positive effect on species richness, but by the third year, the effect was either negative or neutral.
Unlike cover reduction, fertilization had no observable effect on seedling recruitment or occurrence (number of plots) of invading resident or nonresident species. The results of our experiment demonstrate that, although resource fluctuations can increase the invasibility of this grassland, the community response depends on the nature of the resource change.", "which hypothesis ?", "Disturbance", 318.0, 329.0], ["Questions: How did post-wildfire understorey plant community response, including exotic species response, differ between pre-fire treated areas that were less severely burned, and pre-fire untreated areas that were more severely burned? Were these differences consistent through time? Location: East-central Arizona, southwestern US. Methods: We used a multi-year data set from the 2002 Rodeo\u2013Chediski Fire to detect post-fire trends in plant community response in burned ponderosa pine forests. Within the burn perimeter, we examined the effects of pre-fire fuels treatments on post-fire vegetation by comparing paired treated and untreated sites on the Apache-Sitgreaves National Forest. We sampled these paired sites in 2004, 2005 and 2011. Results: There were significant differences in pre-fire treated and untreated plant communities by species composition and abundance in 2004 and 2005, but these communities were beginning to converge in 2011. Total understorey plant cover was significantly higher in untreated areas for all 3 yr. Plant cover generally increased between 2004 and 2005 and markedly decreased in 2011, with the exception of shrub cover, which steadily increased through time. The sharp decrease in forb and graminoid cover in 2011 is likely related to drought conditions since the fire. Annual/biennial forb and graminoid cover decreased relative to perennial cover through time, consistent with the initial floristics hypothesis. Exotic plant response was highly variable and not limited to the immediate post-fire, annual/biennial community. Despite low overall exotic forb and graminoid cover for all years (<2.5%), several exotic species increased in frequency, and the relative proportion of exotic to native cover increased through time. Conclusions: Pre-treatment fuel reduction treatments helped maintain foundation overstorey species and associated native plant communities following this large wildfire. The overall low cover of exotic species on these sites supports other findings that the disturbance associated with high-severity fire does not always result in exotic species invasions. The increase in relative cover and frequency through time indicates that some species are proliferating, and continued monitoring is recommended. Patterns of exotic species invasions after severe burning are not easily predicted, and are likely more dependent on site-specific factors such as propagules, weather patterns and management.", "which hypothesis ?", "Disturbance", 2027.0, 2038.0], ["1. The invasion success of Ceratitis capitata probably stems from physiological, morphological, and behavioural adaptations that enable them to survive in different habitats. However, it is generally poorly understood if variation in acute thermal tolerance and its phenotypic plasticity might be important in facilitating survival of C.
capitata upon introduction to novel environments.", "which hypothesis ?", "Phenotypic plasticity", 266.0, 287.0], ["We characterized patterns of genetic variation in populations of the fire ant Solenopsis invicta in China using mitochondrial DNA sequences and nuclear microsatellite loci to test predictions as to how propagule pressure and subsequent dispersal following establishment jointly shape the invasion success of this ant in this recently invaded area. Fire ants in Wuchuan (Guangdong Province) are genetically differentiated from those found in other large infested areas of China. The immediate source of ants in Wuchuan appears to be somewhere near Texas, which ranks first among the southern USA infested states in the exportation of goods to China. Most colonies from spatially distant, outlying areas in China are genetically similar to one another and appear to share a common source (Wuchuan, Guangdong Province), suggesting that long\u2010distance jump dispersal has been a prevalent means of recent spread of fire ants in China. Furthermore, most colonies at outlier sites are of the polygyne social form (featuring multiple egg\u2010laying queens per nest), reinforcing the important role of this social form in the successful invasion of new areas and subsequent range expansion following invasion. Several analyses consistently revealed characteristic signatures of genetic bottlenecks for S. invicta populations in China. The results of this study highlight the invasive potential of this pest ant, suggest that the magnitude of international trade may serve as a predictor of propagule pressure and indicate that rates and patterns of subsequent range expansion are partly determined by the interplay between species traits and the trade and transportation networks.", "which hypothesis ?", "Propagule pressure", 202.0, 220.0], ["Aim To quantify the vulnerability of habitats to invasion by alien plants having accounted for the effects of propagule pressure, time and sampling effort. Location New Zealand. Methods We used spatial, temporal and habitat information taken from 9297 herbarium records of 301 alien plant species to examine the vulnerability of 11 terrestrial habitats to plant invasions. A null model that randomized species records across habitats was used to account for variation in sampling effort and to derive a relative measure of invasion based either on all records for a species or only its first record. The relative level of invasion was related to the average distance of each habitat from the nearest conurbation, which was used as a proxy for propagule pressure. The habitat in which a species was first recorded was compared to the habitats encountered for all records of that species to determine whether the initial habitat could predict subsequent habitat occupancy. Results Variation in sampling effort in space and time significantly masked the underlying vulnerability of habitats to plant invasions. Distance from the nearest conurbation had little effect on the relative level of invasion in each habitat, but the number of first records of each species significantly declined with increasing distance. While Urban, Streamside and Coastal habitats were over-represented as sites of initial invasion, there was no evidence of major invasion hotspots from which alien plants might subsequently spread. Rather, the data suggest that certain habitats (especially Roadsides) readily accumulate alien plants from other habitats. 
Main conclusions Herbarium records combined with a suitable null model provide a powerful tool for assessing the relative vulnerability of habitats to plant invasion. The first records of alien plants tend to be found near conurbations, but this pattern disappears with subsequent spread. Regardless of the habitat where a species was first recorded, ultimately most alien plants spread to Roadside and Sparse habitats. This information suggests that such habitats may be useful targets for weed surveillance and monitoring.", "which hypothesis ?", "Propagule pressure", 110.0, 128.0], ["Disturbance is one of the most important factors promoting exotic invasion. However, if disturbance per se is sufficient to explain exotic success, then \u201cinvasion\u201d abroad should not differ from \u201ccolonization\u201d at home. Comparisons of the effects of disturbance on organisms in their native and introduced ranges are crucial to elucidate whether this is the case; however, such comparisons have not been conducted. We investigated the effects of disturbance on the success of Eurasian native Centaurea solstitialis in two invaded regions, California and Argentina, and one native region, Turkey, by conducting field experiments consisting of simulating different disturbances and adding locally collected C. solstitialis seeds. We also tested differences among C. solstitialis genotypes in these three regions and the effects of local soil microbes on C. solstitialis performance in greenhouse experiments. Disturbance increased C. solstitialis abundance and performance far more in nonnative ranges than in the native range, but C. solstitialis biomass and fecundity were similar among populations from all regions grown under common conditions. Eurasian soil microbes suppressed growth of C. solstitialis plants, while Californian and Argentinean soil biota did not. We suggest that escape from soil pathogens may contribute to the disproportionately powerful effect of disturbance in introduced regions.", "which hypothesis ?", "Disturbance", 0.0, 11.0], ["Few field experiments have examined the effects of both resource availability and propagule pressure on plant community invasibility. Two non-native forest species, a herb and a shrub (Hesperis matronalis and Rhamnus cathartica, respectively), were sown into 60 1-m(2) sub-plots distributed across three plots. These contained reconstructed native plant communities in a replaced surface soil layer in a North American forest interior. Resource availability and propagule pressure were manipulated as follows: understorey light level (shaded/unshaded), nutrient availability (control/fertilized), and seed pressures of the two non-native species (control/low/high). Hesperis and Rhamnus cover and the above-ground biomass of Hesperis were significantly higher in shaded sub-plots and at greater propagule pressures. Similarly, the above-ground biomass of Rhamnus was significantly increased with propagule pressure, although this was a function of density. In contrast, of species that seeded into plots from the surrounding forest during the growing season, the non-native species had significantly greater cover in unshaded sub-plots. Plants in these unshaded sub-plots were significantly taller than plants in shaded sub-plots, suggesting a greater fitness. Total and non-native species richness varied significantly among plots indicating the importance of fine-scale dispersal patterns. None of the experimental treatments influenced native species. 
Since the forest seed bank in our study was colonized primarily by non-native ruderal species that dominated understorey vegetation, the management of invasions by non-native species in forest understoreys will have to address factors that influence light levels and dispersal pathways.", "which hypothesis ?", "Propagule pressure", 82.0, 100.0], ["Abstract The enemies release hypothesis proposes that exotic species can become invasive by escaping from predators and parasites in their novel environment. Agrawal et al. (Enemy release? An experiment with congeneric plant pairs and diverse above\u2010 and below\u2010ground enemies. Ecology, 86, 2979\u20132989) proposed that areas or times in which damage to introduced species is low provide opportunities for the invasion of native habitat. We tested whether ornamental settings may provide areas with low levels of herbivory for trees and shrubs, potentially facilitating invasion success. First, we compared levels of leaf herbivory among native and exotic species in ornamental and natural settings in Cincinnati, Ohio, United States. In the second study, we compared levels of herbivory for invasive and noninvasive exotic species between natural and ornamental settings. We found lower levels of leaf damage for exotic species than for native species; however, we found no differences in the amount of leaf damage suffered in ornamental or natural settings. Our results do not provide any evidence that ornamental settings afford additional release from herbivory for exotic plant species.", "which hypothesis ?", "Enemy release", 174.0, 187.0], ["The dynamics of invasive species may depend on their abilities to compete for resources and exploit disturbances relative to the abilities of native species. We test this hypothesis and explore its implications for the restoration of native ecosystems in one of the most dramatic ecological invasions worldwide, the replacement of native perennial grasses by exotic annual grasses and forbs in 9.2 million hectares of California grasslands. The long-term persistence of these exotic annuals has been thought to imply that the exotics are superior competitors. However, seed-addition experiments in a southern California grassland revealed that native perennial species, which had lower requirements for deep soil water, soil nitrate, and light, were strong competitors, and they markedly depressed the abundance and fecundity of exotic annuals after overcoming recruitment limitations. Native species reinvaded exotic grasslands across experimentally imposed nitrogen, water, and disturbance gradients. Thus, exotic annuals are not superior competitors but rather may dominate because of prior disturbance and the low dispersal abilities and extreme current rarity of native perennials. If our results prove to be general, it may be feasible to restore native California grassland flora to at least parts of its former range.", "which hypothesis ?", "Disturbance", 980.0, 991.0], ["Questions: On sandy coastal habitats, factors related to substrate and to wind action vary along the sea\u2013inland ecotone, forming a marked directional disturbance and stress gradient. Further, input of propagules of alien plant species associated to touristic exploitation and development is intense. This has contributed to establishment and spread of aliens in coastal systems. Records of alien species in databases of such heterogeneous landscapes remain scarce, posing a challenge for statistical modelling. 
We address this issue and attempt to shed light on the role of environmental stress/disturbance gradients and propagule pressure on invasibility of plant communities in these typical model systems. Location: Sandy coasts of Lazio (Central Italy). Methods: We proposed an innovative methodology to deal with low prevalence of alien occurrence in a data set and high cost of field-based sampling by taking advantage, through predictive modelling, of the strong interrelation between vegetation and abiotic features in coastal dunes. We fitted generalized additive models to analyse (1) overall patterns of alien occurrence and spread and (2) specific patterns of the most common alien species recorded. Conclusion: Even in the presence of strong propagule pressure, variation in local abiotic conditions can explain differences in invasibility within a local environment, and intermediate levels of natural disturbance and stress offer the best conditions for spread of alien species. However, in our model system, propagule pressure is actually the main determinant of alien species occurrence and spread. We demonstrated that extending the information of environmental features measured in a subsample of vegetation plots through predictive modelling allows complex questions in invasion biology to be addressed without requiring disproportionate funding and sampling effort.", "which hypothesis ?", "Propagule pressure", 621.0, 639.0], ["Abstract Entomofauna in monospecific stands of the introduced Chinese tallow tree (Sapium sebiferum) and native mixed woodlands was sampled in 1982 along the Texas coast and compared to samples of arthropods from an earlier study of native coastal prairie and from a study of arthropods in S. sebiferum in 2004. Species diversity, richness, and abundance were highest in prairie, and were higher in mixed woodland than in S. sebiferum. Nonmetric multidimensional scaling distinguished orders and families of arthropods, and families of herbivores in S. sebiferum from mixed woodland and coastal prairie. Taxonomic similarity between S. sebiferum and mixed woodland was 51%. Fauna from S. sebiferum in 2001 was more similar to mixed woodland than to samples from S. sebiferum collected in 1982. These results indicate that the entomofauna in S. sebiferum originated from mixed prairie and that, with time, these faunas became more similar. Species richness and abundance of herbivores was lower in S. sebiferum, but proportion of total species in all trophic groups, except herbivores, was higher in S. sebiferum than mixed woodland. Low concentration of tannin in leaves of S. sebiferum did not explain low loss of leaves to herbivores. Lower abundance of herbivores on introduced species of plants fits the enemy release hypothesis, and low concentration of defense compounds in the face of low number of herbivores fits the evolution of increased competitive ability hypothesis.", "which hypothesis ?", "Enemy release", 1308.0, 1321.0], ["Over the last decade, the porcelain crab Petrolisthes armatus invaded oyster reefs of Georgia, USA, at mean densities of up to 11 000 adults m(-2). Interactions affecting the invasion are undocumented. We tested the effects of native species richness and composition on invasibility by constructing isolated reef communities with 0, 2, or 4 of the most common native species, by seeding adult P. armatus into a subset of the 4 native species communities and by constructing communities with and without native, predatory mud crabs. At 4 wk, recruitment of P. 
armatus juveniles to oyster shells lacking native species was 2.75 times greater than to the 2 native species treatment and 3.75 times greater than to the 4 native species treatment. The biotic resistance produced by 2 species of native filter feeders may have occurred due to competition with, or predation on, the settling juveniles of the filter feeding invasive crab. Adding adult porcelain crabs to communities with 4 native species enhanced recruitment by a significant 3-fold, and countered the effects of native biotic resistance. Differences in recruitment at Week 4 were lost by Weeks 8 and 12, when densities of recent recruits reached ~17 000 to 34 000 crabs m(-2) across all treatments. Thus, native species richness slows initial invasion, but early colonists stimulate settlement by later ones and produce tremendous propagule pressure that overwhelms the effects of biotic resistance.", "which hypothesis ?", "Propagule pressure", 1394.0, 1412.0], ["The bluegill sunfish, Lepomis macrochirus, is a widespread exotic species in Japan that is considered to have originated from 15 fish introduced from Guttenberg, Iowa, in 1960. Here, the genetic and phenotypic traits of Japanese populations were examined, together with 11 native populations of the USA using 10 microsatellite markers and six meristic traits. Phylogenetic analysis reconfirmed a single origin of Japanese populations, among which populations established in the 1960s were genetically close to Guttenberg population, keeping high genetic diversity comparable to the ancestral population. In contrast, genetic diversity of later\u2010established populations significantly declined with genetic divergence from the ancestral population. Among the 1960s established populations, that from Lake Biwa showed a significant isolation\u2010by\u2010distance pattern with surrounding populations in which genetic bottlenecks increased with geographical distance from Lake Biwa. Although phenotypic divergence among populations was recognized in both neutral and adaptive traits, PST\u2013FST comparisons showed that it is independent of neutral genetic divergence. Divergent selection was suggested in some populations from reservoirs with unstable habitats, while stabilizing selection was dominant. Accordingly, many Japanese populations of L. macrochirus appear to have derived from Lake Biwa population, expanding their distribution with population bottlenecks. Despite low propagule pressure, the invasion success of L. macrochirus is probably because of its drastic population growth in Lake Biwa shortly after its introduction, together with artificial transplantations. It not only enabled the avoidance of a loss in genetic diversity but also formed a major gene pool that supported local adaptation with high phenotypic plasticity.", "which hypothesis ?", "Propagule pressure", 1464.0, 1482.0], ["We are now beginning to understand the role of intraspecific diversity on fundamental ecological phenomena. There exists a paucity of knowledge, however, regarding how intraspecific, or genetic diversity, may covary with other important factors such as propagule pressure. A combination of theoretical modelling and experimentation was used to explore the way propagule pressure and genetic richness may interact. We compare colonization rates of the Australian bivalve Saccostrea glomerata (Gould 1885). We cross propagule size and genetic richness in a factorial design in order to examine the generalities of our theoretical model. 
Modelling showed that diversity and propagule pressure should generally interact synergistically when positive feedbacks occur (e.g. aggregation). The strength of genotype effects depended on propagule size, or the numerical abundance of arriving individuals. When propagule size was very small (<4 individuals), however, greater genetic richness unexpectedly reduced colonization. The probability of S. glomerata colonization was 76% in genetically rich, larger propagules, almost 39 percentage points higher than in genetically poor propagules of similar size. This pattern was not observed in less dense, smaller propagules. We predict that density-dependent interactions between larvae in the water column may explain this pattern.", "which hypothesis ?", "Propagule pressure", 253.0, 271.0], ["In the process of species introduction, the traits that enable a species to establish and spread in a new habitat, and the habitat characteristics that determine the susceptibility to introduced species play a major role. Among the habitat characteristics that render a habitat resistant or susceptible to introductions, species diversity and disturbance are believed to be the most important. It is generally assumed that high species richness renders a habitat resistant to introductions, while disturbances enhance their susceptibility. In the present study, these 2 hypotheses were tested on NW Mediterranean shallow subtidal macrophyte assemblages. Data collection was carried out in early summer 2002 on sub-horizontal rocky substrate at 9 sites along the French Mediterranean coast, 4 undisturbed and 5 highly disturbed. Disturbances include cargo, naval and passenger harbours, and industrial and urban pollution. Relationships between species richness (point diversity), disturbances and the number of introduced macrophytes were analysed. The following conclusions were drawn: (1) there is no relationship between species introductions, diversity and disturbance for the macrophyte assemblages; (2) multifactorial analyses only revealed the biogeographical relationships between the native flora of the sites.", "which hypothesis ?", "Disturbance", 345.0, 356.0], ["Hypoxia is increasing in marine and estuarine systems worldwide, primarily due to anthropogenic causes. Periodic hypoxia represents a pulse disturbance, with the potential to restructure estuarine biotic communities. We chose the shallow, epifaunal community in the lower Chesapeake Bay, Virginia, USA, to test the hypothesis that low dissolved oxygen (DO) (<4 mg l(-1)) affects community dynamics by reducing the cover of spatial dominants, creating space both for less dominant native species and for invasive species. Settling panels were deployed at shallow depths in spring 2000 and 2001 at Gloucester Point, Virginia, and were manipulated every 2 wk from late June to mid-August. Manipulation involved exposing epifaunal communities to varying levels of DO for up to 24 h followed by redeployment in the York River. Exposure to low DO affected both species composition (presence or absence) and the abundance of the organisms present. Community dominance shifted away from barnacles as level of hypoxia increased. Barnacles were important spatial dominants which reduced species diversity when locally abundant. The cover of Hydroides dianthus, a native serpulid polychaete, doubled when exposed to periodic hypoxia. Increased H. 
dianthus cover may indicate whether a local region has experienced periodic, local DO depletion and thus provide an indicator of poor water-quality conditions. In 2001, the combined cover of the invasive and cryptogenic species in this community, Botryllus schlosseri (tunicate), Molgula manhattensis (tunicate), Ficopomatus enigmaticus (polychaete) and Diadumene lineata (anemone), was highest on the plates exposed to moderately low DO (2 mg l(-1) < DO < 4 mg l(-1)). All 4 of these species are now found worldwide and exhibit life histories well adapted for establishment in foreign habitats. Low DO events may enhance success of invasive species, which further stress marine and estuarine ecosystems.", "which hypothesis ?", "Disturbance", 140.0, 151.0], ["ABSTRACT Question: Do anthropogenic activities facilitate the distribution of exotic plants along steep altitudinal gradients? Location: Sani Pass road, Grassland biome, South Africa. Methods: On both sides of this road, presence and abundance of exotic plants was recorded in four 25-m long road-verge plots and in parallel 25 m \u00d7 2 m adjacent land plots, nested at five altitudinal levels: 1500, 1800, 2100, 2400 and 2700 m a.s.l. Exotic community structure was analyzed using Canonical Correspondence Analysis while a two-level nested Generalized Linear Model was fitted for richness and cover of exotics. We tested the upper altitudinal limits for all exotics along this road for spatial clustering around four potential propagule sources using a t-test. Results: Community structure, richness and abundance of exotics were negatively correlated with altitude. Greatest invasion by exotics was recorded for adjacent land at the 1500 m level. Of the 45 exotics, 16 were found at higher altitudes than expected and observations were spatially clustered around potential propagule sources. Conclusions: Spatial clustering of upper altitudinal limits around human inhabited areas suggests that exotics originate from these areas, while exceeding expected altitudinal limits suggests that distribution ranges of exotics are presently underestimated. Exotics are generally characterised by a high propagule pressure and/or persistent seedbanks, thus future tarring of the Sani Pass may result in an increase of exotic species richness and abundance. This would initially result from construction-related soil disturbance and subsequently from increased traffic, water run-off, and altered fire frequency. We suggest examples of management actions to prevent this. Nomenclature: Germishuizen & Meyer (2003).", "which hypothesis ?", "Disturbance", 1607.0, 1618.0], ["One of the key environmental factors affecting plant species abundance, including that of invasive exotics, is nutrient resource availability. Plant functional response to nutrient availability, and what this tells us about plant interactions with associated species, may therefore give us clues about underlying processes related to plant abundance and invasion. Patterns of abundance of Hieracium lepidulum, a European herbaceous invader of subalpine New Zealand, appear to be related to soil fertility/nutrient availability, however, abundance may be influenced by other factors including disturbance. In this study we compare H. lepidulum and field co-occurring species for growth performance across artificial nutrient concentration gradients, for relative competitiveness and for response to disturbance, to construct a functional profile of the species. 
Hieracium lepidulum was found to be significantly different in its functional response to nutrient concentration gradients. Hieracium lepidulum had high relative growth rate, high yield and root plasticity in response to nutrient concentration dilution, relatively low absolute yield, low competitive yield and a positive response to clipping disturbance relative to other species. Based on overall functional response to nutrient concentration gradients, compared with other species found at the same field sites, we hypothesize that H. lepidulum invasion is not related to competitive domination. Relatively low tolerance of nutrient dilution leads us to predict that H. lepidulum is likely to be restricted from invading low fertility sites, including sites within alpine vegetation or where intact high biomass plant communities are found. Positive response to clipping disturbance and relatively high nutrient requirement, despite poor competitive performance, leads us to predict that H. lepidulum may respond to selective grazing disturbance of associated vegetation. These results are discussed in relation to published observations of H. lepidulum in New Zealand and possible tests for the hypotheses raised here.", "which hypothesis ?", "Disturbance", 592.0, 603.0], ["Background: In temperate mountains, most non-native plant species reach their distributional limit somewhere along the elevational gradient. However, it is unclear if growth limitations can explain upper range limits and whether phenotypic plasticity or genetic changes allow species to occupy a broad elevational gradient. Aims: We investigated how non-native plant individuals from different elevations responded to growing season temperatures, which represented conditions at the core and margin of the elevational distributions of the species. Methods: We recorded the occurrence of nine non-native species in the Swiss Alps and subsequently conducted a climate chamber experiment to assess growth rates of plants from different elevations under different temperature treatments. Results: The elevational limit observed in the field was not related to the species' temperature response in the climate chamber experiment. Almost all species showed a similar level of reduction in growth rates under lower temperatures independent of the upper elevational limit of the species' distribution. For two species we found indications for genetic differentiation among plants from different elevations. Conclusions: We conclude that factors other than growing season temperatures, such as extreme events or winter mortality, might shape the elevational limit of non-native species, and that ecological filtering might select for genotypes that are phenotypically plastic.", "which hypothesis ?", "Phenotypic plasticity", 229.0, 250.0], ["Summary 1 With biological invasions causing widespread problems in ecosystems, methods to curb the colonization success of invasive species are needed. The effective management of invasive species will require an integrated approach that restores community structure and ecosystem processes while controlling propagule pressure of non-native species. 2 We tested the hypotheses that restoring native vegetation and minimizing propagule pressure of invasive species slows the establishment of an invader. In field and greenhouse experiments, we evaluated (i) the effects of a native submersed aquatic plant species, Vallisneria americana, on the colonization success of a non-native species, Hydrilla verticillata; and (ii) the effects of H. 
verticillata propagule density on its colonization success. 3 Results from the greenhouse experiment showed that V. americana decreased H. verticillata colonization through nutrient draw-down in the water column of closed mesocosms, although data from the field experiment, located in a tidal freshwater region of Chesapeake Bay that is open to nutrient fluxes, suggested that V. americana did not negatively impact H. verticillata colonization. However, H. verticillata colonization was greater in a treatment of plastic V. americana look-alikes, suggesting that the canopy of V. americana can physically capture H. verticillata fragments. Thus pre-emption effects may be less clear in the field experiment because of complex interactions between competitive and facilitative effects in combination with continuous nutrient inputs from tides and rivers that do not allow nutrient draw-down to levels experienced in the greenhouse. 4 Greenhouse and field tests differed in the timing, duration and density of propagule inputs. However, irrespective of these differences, propagule pressure of the invader affected colonization success except in situations when the native species could draw-down nutrients in closed greenhouse mesocosms. In that case, no propagules were able to colonize. 5 Synthesis and applications. We have shown that reducing propagule pressure through targeted management should be considered to slow the spread of invasive species. This, in combination with restoration of native species, may be the best defence against non-native species invasion. Thus a combined strategy of targeted control and promotion of native plant growth is likely to be the most sustainable and cost-effective form of invasive species management.", "which hypothesis ?", "Propagule pressure", 309.0, 327.0], ["1. With continued globalization, species are being transported and introduced into novel habitats at an accelerating rate. Interactions between invasive species may provide important mechanisms that moderate their impacts on native species. 2. The European green crab Carcinus maenas is an aggressive predator that was introduced to the east coast of North America in the mid-1800s and is capable of rapid consumption of bivalve prey. A newer invasive predator, the Asian shore crab Hemigrapsus sanguineus, was first discovered on the Atlantic coast in the 1980s, and now inhabits many of the same regions as C. maenas within the Gulf of Maine. Using a series of field and laboratory investigations, we examined the consequences of interactions between these predators. 3. Density patterns of these two species at different spatial scales are consistent with negative interactions. As a result of these interactions, C. maenas alters its diet to consume fewer mussels, its preferred prey, in the presence of H. sanguineus. Decreased mussel consumption in turn leads to lower growth rates for C. maenas, with potential detrimental effects on C. maenas populations. 4. Rather than an invasional meltdown, this study demonstrates that, within the Gulf of Maine, this new invasive predator can moderate the impacts of the older invasive predator.", "which hypothesis ?", "Invasional meltdown", 1183.0, 1202.0], ["Summary 1. Plastic responses to spatiotemporal environmental variation strongly influence species distribution, with widespread species expected to have high phenotypic plasticity. 
Theoretically, high phenotypic plasticity has been linked to plant invasiveness because it facilitates colonization and rapid spreading over large and environmentally heterogeneous new areas. 2. To determine the importance of phenotypic plasticity for plant invasiveness, we compare well-known exotic invasive species with widespread native congeners. First, we characterized the phenotype of 20 invasive\u2013native ecologically and phylogenetically related pairs from the Mediterranean region by measuring 20 different traits involved in resource acquisition, plant competition ability and stress tolerance. Second, we estimated their plasticity across nutrient and light gradients. 3. On average, invasive species had greater capacity for carbon gain and enhanced performance over a range of limiting to saturating resource availabilities than natives. However, both groups responded to environmental variations with high albeit similar levels of trait plasticity. Therefore, contrary to the theory, the extent of phenotypic plasticity was not significantly higher for invasive plants. 4. We argue that the combination of studying mean values of a trait with its plasticity can render insightful conclusions on functional comparisons of species such as those exploring the performance of species coexisting in heterogeneous and changing environments.", "which hypothesis ?", "Phenotypic plasticity", 158.0, 179.0], ["The Enemy Release Hypothesis (ERH) predicts that when plant species are introduced outside their native range there is a release from natural enemies resulting in the plants becoming problematic invasive alien species (Lake & Leishman 2004; Puliafico et al. 2008). The release from natural enemies may benefit alien plants more than simply reducing herbivory because, according to the Evolution of Increased Competitive Ability (EICA) hypothesis, without pressure from herbivores more resources that were previously allocated to defence can be allocated to reproduction (Blossey & Notzold 1995). Alien invasive plants are therefore expected to have simpler herbivore communities with fewer specialist herbivores (Frenzel & Brandl 2003; Heleno et al. 2008; Heger & Jeschke 2014).", "which hypothesis ?", "Enemy release", 4.0, 17.0], ["The differences in phenotypic plasticity between invasive (North American) and native (German) provenances of the invasive plant Lythrum salicaria (purple loosestrife) were examined using a multivariate reaction norm approach testing two important attributes of reaction norms described by multivariate vectors of phenotypic change: the magnitude and direction of mean trait differences between environments. Data were collected for six life history traits from native and invasive plants using a split-plot design with experimentally manipulated water and nutrient levels. We found significant differences between native and invasive plants in multivariate phenotypic plasticity for comparisons between low and high water treatments within low nutrient levels, between low and high nutrient levels within high water treatments, and for comparisons that included both a water and nutrient level change. The significant genotype \u00d7 environment (G \u00d7 E) effects support the argument that invasiveness of purple loosestrife is closely associated with the interaction of high levels of soil nutrient and flooding water regime. 
Our results indicate that native and invasive plants take different strategies for growth and reproduction; native plants flowered earlier and allocated more to flower production, while invasive plants exhibited an extended period of vegetative growth before flowering to increase height and allocation to clonal reproduction, which may contribute to increased fitness and invasiveness in subsequent years.", "which hypothesis ?", "Phenotypic plasticity", 19.0, 40.0], ["Abstract Understanding how the landscape-scale replacement of indigenous plants with alien plants influences ecosystem structure and functioning is critical in a world characterized by increasing biotic homogenization. An important step in this process is to assess the impact on invertebrate communities. Here we analyse insect species richness and abundance in sweep collections from indigenous and alien (Australasian) woody plant species in South Africa's Western Cape. We use phylogenetically relevant comparisons and compare one indigenous with three Australasian alien trees within each of Fabaceae: Mimosoideae, Myrtaceae, and Proteaceae: Grevilleoideae. Although some of the alien species analysed had remarkably high abundances of herbivores, even when intentionally introduced biological control agents are discounted, overall, herbivorous insect assemblages from alien plants were slightly less abundant and less diverse compared with those from indigenous plants \u2013 in accordance with predictions from the enemy release hypothesis. However, there were no clear differences in other insect feeding guilds. We conclude that insect assemblages from alien plants are generally quite diverse, and significant differences between these and assemblages from indigenous plants are only evident for herbivorous insects.", "which hypothesis ?", "Enemy release", 1018.0, 1031.0], ["Alien species are often reported to perform better than functionally similar species native to the invaded range, resulting in high population densities, and a tendency to become invasive. The enemy release hypothesis (ERH) explains the success of invasive alien species (IAS) as a consequence of reduced mortality from natural enemies (predators, parasites and pathogens) compared with native species. The harlequin ladybird, Harmonia axyridis, a species alien to Britain, provides a model system for testing the ERH. Pupae of H. axyridis and the native ladybird Coccinella septempunctata were monitored for parasitism between 2008 and 2011, from populations across southern England in areas first invaded by H. axyridis between 2004 and 2009. In addition, a semi\u2010field experiment was established to investigate the incidence of parasitism of adult H. axyridis and C. septempunctata by Dinocampus coccinellae. Harmonia axyridis pupae were parasitised at a much lower rate than conspecifics in the native range, and both pupae and adults were parasitised at a considerably lower rate than C. septempunctata populations from the same place and time (H. axyridis: 1.67%; C. septempunctata: 18.02%) or in previous studies on Asian H. axyridis (2\u20137%). We found no evidence that the presence of H. axyridis affected the parasitism rate of C. septempunctata by D. coccinellae. Our results are consistent with the general prediction that the prevalence of natural enemies is lower for introduced species than for native species at early stages of invasion. This may partly explain why H. 
axyridis is such a successful IAS.", "which hypothesis ?", "Enemy release", 193.0, 206.0], ["Enemy release is frequently posed as a main driver of invasiveness of alien species. However, an experimental multi-species test examining performance and herbivory of invasive alien, non-invasive alien and native plant species in the presence and absence of natural enemies is lacking. In a common garden experiment in Switzerland, we manipulated exposure of seven alien invasive, eight alien non-invasive and fourteen native species from six taxonomic groups to natural enemies (invertebrate herbivores), by applying a pesticide treatment under two different nutrient levels. We assessed biomass production, herbivore damage and the major herbivore taxa on plants. Across all species, plants gained significantly greater biomass under pesticide treatment. However, invasive, non-invasive and native species did not differ in their biomass response to pesticide treatment at either nutrient level. The proportion of leaves damaged on invasive species was significantly lower compared to native species, but not when compared to non-invasive species. However, the difference was lost when plant size was accounted for. There were no differences between invasive, non-invasive and native species in herbivore abundance. Our study offers little support for invertebrate herbivore release as a driver of plant invasiveness, but suggests that future enemy release studies should account for differences in plant size among species.", "which hypothesis ?", "Enemy release", 0.0, 13.0], ["This study is a comparison of the spontaneous vascular flora of five Italian cities: Milan, Ancona, Rome, Cagliari and Palermo. The aims of the study are to test the hypothesis that urbanization results in uniformity of urban floras, and to evaluate the role of alien species in the flora of settlements located in different phytoclimatic regions. To obtain comparable data, ten plots of 1 ha, each representing typical urban habitats, were analysed in each city. The results indicate a low floristic similarity between the cities, while the strongest similarity appears within each city and between each city and the seminatural vegetation of the surrounding region. In the Mediterranean settlements, even the most urbanized plots reflect the characters of the surrounding landscape and are rich in native species, while aliens are relatively few. These results differ from the reported uniformity and the high proportion of aliens which generally characterize urban floras elsewhere. To explain this trend the importance of apophytes (indigenous plants expanding into man-made habitats) is highlighted; several Mediterranean species adapted to disturbance (i.e. grazing, trampling, and human activities) are pre-adapted to the urban environment. In addition, consideration is given to the minor role played by the \u2018urban heat island\u2019 in the Mediterranean basin, and to the structure and history of several Italian settlements, where ancient walls, ruins and archaeological sites in the periphery as well as in the historical centres act as conservative habitats and provide connection with seed-sources on the outskirts.", "which hypothesis ?", "Disturbance", 1146.0, 1157.0], ["Biogeographic experiments that test how multiple interacting factors influence exotic plant abundance in their home and recipient communities are remarkably rare. 
We examined the effects of soil fungi, disturbance and propagule pressure on seed germination, seedling recruitment and adult plant establishment of the invasive Centaurea stoebe in its native European and non\u2010native North American ranges. Centaurea stoebe can establish virtual monocultures in parts of its non\u2010native range, but occurs at far lower abundances where it is native. We conducted parallel experiments at four European and four Montana (USA) grassland sites with all factorial combinations of \u00b1 suppression of soil fungi, \u00b1disturbance and low versus high knapweed propagule pressure [100 or 300 knapweed seeds per 0.3 m \u00d7 0.3 m plot (1000 or 3000 per m2)]. We also measured germination in buried bags containing locally collected knapweed seeds that were either treated or not with fungicide. Disturbance and propagule pressure increased knapweed recruitment and establishment, but did so similarly in both ranges. Treating plots with fungicides had no effect on recruitment or establishment in either range. However, we found: (i) greater seedling recruitment and plant establishment in undisturbed plots in Montana compared to undisturbed plots in Europe and (ii) substantially greater germination of seeds in bags buried in Montana compared to Europe. Also, across all treatments, total plant establishment was greater in Montana than in Europe. Synthesis. Our results highlight the importance of simultaneously examining processes that could influence invasion in both ranges. They indicate that under \u2018background\u2019 undisturbed conditions, knapweed recruits and establishes at greater abundance in Montana than in Europe. However, our results do not support the importance of soil fungi or local disturbances as mechanisms for knapweed's differential success in North America versus Europe.", "which hypothesis ?", "Propagule pressure", 218.0, 236.0], ["The loss of natural enemies is a key feature of species introductions and is assumed to facilitate the increased success of species in new locales (enemy release hypothesis; ERH). The ERH is rarely tested experimentally, however, and is often assumed from observations of enemy loss. We provide a rigorous test of the link between enemy loss and enemy release by conducting observational surveys and an in situ parasitoid exclusion experiment in multiple locations in the native and introduced ranges of a gall-forming insect, Neuroterus saltatorius, which was introduced poleward, within North America. Observational surveys revealed that the gall-former experienced increased demographic success and lower parasitoid attack in the introduced range. Also, a different composition of parasitoids attacked the gall-former in the introduced range. These observational results show that enemies were lost and provide support for the ERH. Experimental results, however, revealed that, while some enemy release occurred, it was not the sole driver of demographic success. This was because background mortality in the absence of enemies was higher in the native range than in the introduced range, suggesting that factors other than parasitoids limit the species in its native range and contribute to its success in its introduced range. Our study demonstrates the importance of measuring the effect of enemies in the context of other community interactions in both ranges to understand what factors cause the increased demographic success of introduced species. 
This case also highlights that species can experience very different dynamics when introduced into ecologically similar communities.", "which hypothesis ?", "Enemy release", 148.0, 161.0], ["Summary 1. Urban and agricultural activities are not part of natural disturbance regimes and may bear little resemblance to them. Such disturbances are common in densely populated semi-arid shrub communities of the south-western US, yet successional studies in these regions have been limited primarily to natural successional change and the impact of human-induced changes on natural disturbance regimes. Although these communities are resilient to recurrent and large-scale disturbance by fire, they are not necessarily well-adapted to recover from exotic disturbances. 2. This study investigated the effects of severe exotic disturbance (construction, heavy-vehicle activity, landfill operations, soil excavation and tillage) on shrub communities in southern California. These disturbances led to the conversion of indigenous shrublands to exotic annual communities with low native species richness. 3. Nearly 60% of the cover on disturbed sites consisted of exotic annual species, while undisturbed sites were primarily covered by native shrub species (68%). Annual species dominant on disturbed sites included Erodium botrys, Hypochaeris glabra, Bromus spp., Vulpia myuros and Avena spp. 4. The cover of native species remained low on disturbed sites even 71 years after initial exotic disturbance ceased. Native shrub seedlings were also very infrequent on disturbed sites, despite the presence of nearby seed sources. Only two native shrubs, Eriogonum fasciculatum and Baccharis sarothroides, colonized some disturbed sites in large numbers. 5. Although some disturbed sites had lower total soil nitrogen and percentage organic matter and higher pH than undisturbed sites, soil variables measured in this study were not sufficient to explain variations in species abundances on these sites. 6. Non-native annual communities observed in this study did not recover to a predisturbed state within typical successional time (< 25 years), supporting the hypothesis that altered stable states can occur if a community is pushed beyond its threshold of resilience.", "which hypothesis ?", "Disturbance", 69.0, 80.0], ["ABSTRACT Recent invasion theory has hypothesized that newly established exotic species may initially be free of their native parasites, augmenting their population success. Others have hypothesized that invaders may introduce exotic parasites to native species and/or may become hosts to native parasites in their new habitats. Our study analyzed the parasites of two exotic Eurasian gobies that were detected in the Great Lakes in 1990: the round goby Apollonia melanostoma and the tubenose goby Proterorhinus semilunaris. We compared our results from the central region of their introduced ranges in Lakes Huron, St. Clair, and Erie with other studies in the Great Lakes over the past decade, as well as Eurasian native and nonindigenous habitats. Results showed that goby-specific metazoan parasites were absent in the Great Lakes, and all but one species were represented only as larvae, suggesting that adult parasites presently are poorly-adapted to the new gobies as hosts. Seven parasitic species are known to infest the tubenose goby in the Great Lakes, including our new finding of the acanthocephalan Southwellina hispida, and all are rare. 
We provide the first findings of four parasite species in the round goby and clarified two others, totaling 22 in the Great Lakes\u2014with most being rare. In contrast, 72 round goby parasites occur in the Black Sea region. Trematodes are the most common parasitic group of the round goby in the Great Lakes, as in their native Black Sea range and Baltic Sea introduction. Holarctic trematode Diplostomum spathaceum larvae, which are one of two widely distributed species shared with Eurasia, were found in round goby eyes from all Great Lakes localities except Lake Huron proper. Our study and others reveal no overall increases in parasitism of the invasive gobies over the past decade after their establishment in the Great Lakes. In conclusion, the parasite \u201cload\u201d on the invasive gobies appears relatively low in comparison with their native habitats, lending support to the \u201cenemy release hypothesis.\u201d", "which hypothesis ?", "Enemy release", 2029.0, 2042.0], ["Abstract Huisinga, K. D., D. C. Laughlin, P. Z. Ful\u00e9, J. D. Springer, and C. M. McGlone (Ecological Restoration Institute and School of Forestry, Northern Arizona University, Box 15017, Flagstaff, AZ 86011). Effects of an intense prescribed fire on understory vegetation in a mixed conifer forest. J. Torrey Bot. Soc. 132: 590\u2013601. 2005.\u2014Intense prescribed fire has been suggested as a possible method for forest restoration in mixed conifer forests. In 1993, a prescribed fire in a dense, never-harvested forest on the North Rim of Grand Canyon National Park escaped prescription and burned with greater intensity and severity than expected. We sampled this burned area and an adjacent unburned area to assess fire effects on understory species composition, diversity, and plant cover. The unburned area was sampled in 1998 and the burned area in 1999; 25% of the plots were resampled in 2001 to ensure that differences between sites were consistent and persistent, and not due to inter-annual climatic differences. Species composition differed significantly between unburned and burned sites; eight species were identified as indicators of the unburned site and thirteen as indicators of the burned site. Plant cover was nearly twice as great in the burned site than in the unburned site in the first years of measurement and was 4.6 times greater in the burned site in 2001. Average and total species richness was greater in the burned site, explained mostly by higher numbers of native annual and biennial forbs. Overstory canopy cover and duff depth were significantly lower in the burned site, and there were significant inverse relationships between these variables and plant species richness and plant cover. Greater than 95% of the species in the post-fire community were native and exotic plant cover never exceeded 1%, in contrast with other northern Arizona forests that were dominated by exotic species following high-severity fires. This difference is attributed to the minimal anthropogenic disturbance history (no logging, minimal grazing) of forests in the national park, and suggests that park managers may have more options than non-park managers to use intense fire as a tool for forest conservation and restoration.", "which hypothesis ?", "Disturbance", 2006.0, 2017.0], ["Plant distributions are in part determined by environmental heterogeneity on both large (landscape) and small (several meters) spatial scales. 
Plant populations can respond to environmental heterogeneity via genetic differentiation between large distinct patches, and via phenotypic plasticity in response to heterogeneity occurring at small scales relative to dispersal distance. As a result, the level of environmental heterogeneity experienced across generations, as determined by seed dispersal distance, may itself be under selection. Selection could act to increase or decrease seed dispersal distance, depending on patterns of heterogeneity in environmental quality with distance from a maternal home site. Serpentine soils, which impose harsh and variable abiotic stress on non-adapted plants, have been partially invaded by Erodium cicutarium in northern California, USA. Using nearby grassland sites characterized as either serpentine or non-serpentine, we collected seeds from dense patches of E. cicutarium on both soil types in spring 2004 and subsequently dispersed those seeds to one of four distances from their maternal home site (0, 0.5, 1, or 10 m). We examined distance-dependent patterns of variation in offspring lifetime fitness, conspecific density, soil availability, soil water content, and aboveground grass and forb biomass. ANOVA revealed a distinct fitness peak when seeds were dispersed 0.5 m from their maternal home site on serpentine patches. In non-serpentine patches, fitness was reduced only for seeds placed back into the maternal home site. Conspecific density was uniformly high within 1 m of a maternal home site on both soils, whereas soil water content and grass biomass were significantly heterogeneous among dispersal distances only on serpentine soils. Structural equation modeling and multigroup analysis revealed significantly stronger direct and indirect effects linking abiotic and biotic variation to offspring performance on serpentine soils than on non-serpentine soils, indicating the potential for soil-specific selection on seed dispersal distance in this invasive species.", "which hypothesis ?", "Phenotypic plasticity", 272.0, 293.0], ["ABSTRACT Question: Do specific environmental conditions affect the performance and growth dynamics of one of the most invasive taxa (Carpobrotus aff. acinaciformis) on Mediterranean islands? Location: Four populations located on Mallorca, Spain. Methods: We monitored growth rates of main and lateral shoots of this stoloniferous plant for over two years (2002\u20132003), comparing two habitats (rocky coast vs. coastal dune) and two different light conditions (sun vs. shade). In one population of each habitat type, we estimated electron transport rate and the level of plant stress (maximal photochemical efficiency Fv/Fm) by means of chlorophyll fluorescence. Results: Main shoots of Carpobrotus grew at similar rates at all sites, regardless habitat type. However, growth rate of lateral shoots was greater in shaded plants than in those exposed to sunlight. Its high phenotypic plasticity, expressed in different allocation patterns in sun and shade individuals, and its clonal growth which promotes the continuous sea...", "which hypothesis ?", "Phenotypic plasticity", 869.0, 890.0], ["To investigate factors affecting the ability of introduced species to invade natural communities in the Western Australian wheatbelt, five communities were examined within a nature reserve near Kellerberrin. Transect studies indicated that introduced annuals were more abundant in woodland than in shrub communities, despite an input of introduced seed into all communities. 
The response of native and introduced annuals to soil disturbance and fertilizer addition was examined. Small areas were disturbed and/or provided with fertilizer prior to addition of seed of introduced annuals. In most communities, the introduced species used (Avena fatua and Ursinia anthemoides) established well only where the soil had been disturbed, but their growth was increased greatly when fertilizer was also added. Establishment and growth of other introduced species also increased where nutrient addition and soil disturbance were combined. Growth of several native annuals increased greatly with fertilizer addition, but showed little response to disturbance. Fertilizer addition also significantly increased the number of native species present in most communities. This indicates that growth of both native and introduced species is limited by nutrient availability in these communities, but also that introduced species respond more to a combination of nutrient addition and soil disturbance.", "which hypothesis ?", "Disturbance", 429.0, 440.0], ["Theoretically, disturbance and diversity can influence the success of invasive colonists if (1) resource limitation is a prime determinant of invasion success and (2) disturbance and diversity affect the availability of required resources. However, resource limitation is not of overriding importance in all systems, as exemplified by marine soft sediments, one of Earth's most widespread habitat types. Here, we tested the disturbance-invasion hypothesis in a marine soft-sediment system by altering rates of biogenic disturbance and tracking the natural colonization of plots by invasive species. Levels of sediment disturbance were controlled by manipulating densities of burrowing spatangoid urchins, the dominant biogenic sediment mixers in the system. Colonization success by two invasive species (a gobiid fish and a semelid bivalve) was greatest in plots with sediment disturbance rates < 500 cm(3) x m(-2) x d(-1), at the low end of the experimental disturbance gradient (0 to > 9000 cm(3) x m(-2) x d(-1)). Invasive colonization declined with increasing levels of sediment disturbance, counter to the disturbance-invasion hypothesis. Increased sediment disturbance by the urchins also reduced the richness and diversity of native macrofauna (particularly small, sedentary, surface feeders), though there was no evidence of increased availability of resources with increased disturbance that would have facilitated invasive colonization: sediment food resources (chlorophyll a and organic matter content) did not increase, and space and access to overlying water were not limited (low invertebrate abundance). Thus, our study revealed the importance of biogenic disturbance in promoting invasion resistance in a marine soft-sediment community, providing further evidence of the valuable role of bioturbation in soft-sediment systems (bioturbation also affects carbon processing, nutrient recycling, oxygen dynamics, benthic community structure, and so on.). Bioturbation rates are influenced by the presence and abundance of large burrowing species (like spatangoid urchins). Therefore, mass mortalities of large bioturbators could inflate invasion risk and alter other aspects of ecosystem performance in marine soft-sediment habitats.", "which hypothesis ?", "Disturbance", 15.0, 26.0], ["Summary 1. Wetlands in urban regions are subjected to a wide variety of anthropogenic disturbances, many of which may promote invasions of exotic plant species. 
In order to devise management strategies, the influence of different aspects of the urban and natural environments on invasion and community structure must be understood. 2. The roles of soil variables, anthropogenic effects adjacent to and within the wetlands, and vegetation structure on exotic species occurrence within 21 forested wetlands in north-eastern New Jersey, USA, were compared. The hypotheses were tested that different vegetation strata and different invasive species respond similarly to environmental factors, and that invasion increases with increasing direct human impact, hydrologic disturbance, adjacent residential land use and decreasing wetland area. Canonical correspondence analyses, correlation and logistic regression analyses were used to examine invasion by individual species and overall site invasion, as measured by the absolute and relative number of exotic species in the site flora. 3. Within each stratum, different sets of environmental factors separated exotic and native species. Nutrients, soil clay content and pH, adjacent land use and canopy composition were the most frequently identified factors affecting species, but individual species showed highly individualistic responses to the sets of environmental variables, often responding in opposite ways to the same factor. 4. Overall invasion increased with decreasing area but only when sites > 100 ha were included. Unexpectedly, invasion decreased with increasing proportions of industrial/commercial adjacent land use. 5. The hypotheses were only partially supported; invasion does not increase in a simple way with increasing human presence and disturbance. 6. Synthesis and applications. The results suggest that a suite of environmental conditions can be identified that are associated with invasion into urban wetlands, which can be widely used for assessment and management. However, a comprehensive ecosystem approach is needed that places the remediation of physical alterations from urbanization within a landscape context. Specifically, sediment inputs and hydrologic changes need to be related to adjoining urban land use and to the overlapping requirements of individual native and exotic species.", "which hypothesis ?", "Disturbance", 765.0, 776.0], ["As the number of biological invasions increases, the potential for invader-invader interactions also rises. The effect of multiple invaders can be superadditive (invasional meltdown), additive, or subadditive (invasional interference); which of these situations occurs has critical implications for prioritization of management efforts. Carduus nutans and C. acanthoides, two congeneric invasive weeds, have a striking, segregated distribution in central Pennsylvania, U.S.A. Possible hypotheses for this pattern include invasion history and chance, direct competition, or negative interactions mediated by other species, such as shared pollinators. To explore the role of resource competition in generating this pattern, we conducted three related experiments using a response-surface design throughout the life cycles of two cohorts. Although these species have similar niche requirements, we found no differential response to competition between conspecifics vs. congeners. The response to combined density was relatively weak for both species. While direct competitive interactions do not explain the segregated distributional patterns of these two species, we predict that invasions of either species singly, or both species together, would have similar impacts. 
When prioritizing which areas to target to prevent the spread of one of the species, it is better to focus on areas as yet unaffected by its congener; where the congener is already present, invasional interference makes it unlikely that the net effect will change.", "which hypothesis ?", "Invasional meltdown", 162.0, 181.0], ["Norway maple (Acer platanoides L), which is among the most invasive tree species in forests of eastern North America, is associated with reduced regeneration of the related native species, sugar maple (Acer saccharum Marsh) and other native flora. To identify traits conferring an advantage to Norway maple, we grew both species through an entire growing season under simulated light regimes mimicking a closed forest understorey vs. a canopy disturbance (gap). Dynamic shade-houses providing a succession of high-intensity direct-light events between longer periods of low, diffuse light were used to simulate the light regimes. We assessed seedling height growth three times in the season, as well as stem diameter, maximum photosynthetic capacity, biomass allocation above- and below-ground, seasonal phenology and phenotypic plasticity. Given the north European provenance of Norway maple, we also investigated the possibility that its growth in North America might be increased by delayed fall senescence. We found that Norway maple had significantly greater photosynthetic capacity in both light regimes and grew larger in stem diameter than sugar maple. The differences in below- and above-ground biomass, stem diameter, height and maximum photosynthesis were especially important in the simulated gap where Norway maple continued extension growth during the late fall. In the gap regime sugar maple had a significantly higher root : shoot ratio that could confer an advantage in the deepest shade of closed understorey and under water stress or browsing pressure. Norway maple is especially invasive following canopy disturbance where the opposite (low root : shoot ratio) could confer a competitive advantage. Considering the effects of global change in extending the potential growing season, we anticipate that the invasiveness of Norway maple will increase in the future.", "which hypothesis ?", "Phenotypic plasticity", 818.0, 839.0], ["Species that are frequently introduced to an exotic range have a high potential of becoming invasive. Besides propagule pressure, however, no other generally strong determinant of invasion success is known. Although evidence has accumulated that human affiliates (domesticates, pets, human commensals) also have high invasion success, existing studies do not distinguish whether this success can be completely explained by or is partly independent of propagule pressure. Here, we analyze both factors independently, propagule pressure and human affiliation. We also consider a third factor directly related to humans, hunting, and 17 traits on each species' population size and extent, diet, body size, and life history. Our dataset includes all 2362 freshwater fish, mammals, and birds native to Europe or North America. In contrast to most previous studies, we look at the complete invasion process consisting of (1) introduction, (2) establishment, and (3) spread. In this way, we not only consider which of the introduced species became invasive but also which species were introduced. Of the 20 factors tested, propagule pressure and human affiliation were the two strongest determinants of invasion success across all taxa and steps. 
This was true for multivariate analyses that account for intercorrelations among variables as well as univariate analyses, suggesting that human affiliation influenced invasion success independently of propagule pressure. Some factors affected the different steps of the invasion process antagonistically. For example, game species were much more likely to be introduced to an exotic continent than nonhunted species but tended to be less likely to establish themselves and spread. Such antagonistic effects show the importance of considering the complete invasion process.", "which hypothesis ?", "Propagule pressure", 110.0, 128.0], ["Abstract The selection and introduction of drought tolerant species is a common method of restoring degraded grasslands in arid environments. This study investigated the effects of water stress on growth, water relations, Na+ and K+ accumulation, and stomatal development in the native plant species Zygophyllum xanthoxylum (Bunge) Maxim., and an introduced species, Caragana korshinskii Kom., under three watering regimes. Moderate drought significantly reduced pre‐dawn water potential, leaf relative water content, total biomass, total leaf area, above‐ground biomass, total number of leaves and specific leaf area, but it increased the root/total weight ratio (0.23 versus 0.33) in C. korshinskii. Only severe drought significantly affected water status and growth in Z. xanthoxylum. In any given watering regime, a significantly higher total biomass was observed in Z. xanthoxylum (1.14 g) compared to C. korshinskii (0.19 g). Moderate drought significantly increased Na+ accumulation in all parts of Z. xanthoxylum, e.g., moderate drought increased leaf Na+ concentration from 1.14 to 2.03 g/100 g DW; however, there was no change in Na+ (0.11 versus 0.12) in the leaf of C. korshinskii when subjected to moderate drought. Stomatal density increased as water availability was reduced in both C. korshinskii and Z. xanthoxylum, but there was no difference in stomatal index of either species. Stomatal length and width, and pore width were significantly reduced by moderate water stress in Z. xanthoxylum, but severe drought was required to produce a significant effect in C. korshinskii. These results indicated that C. korshinskii is more responsive to water stress and exhibits strong phenotypic plasticity especially in above‐ground/below‐ground biomass allocation. In contrast, Z. xanthoxylum was more tolerant to water deficit, with a lower specific leaf area and a strong ability to maintain water status through osmotic adjustment and stomatal closure, thereby providing an effective strategy to cope with local extreme arid environments.", "which hypothesis ?", "Phenotypic plasticity", 1693.0, 1714.0], ["1 Phenotypic plasticity is often cited as an important mechanism of plant invasion. However, few studies have evaluated the plasticity of a diverse set of traits among invasive and native species, particularly in low resource habitats, and none have examined the functional significance of these traits. 2 I explored trait plasticity in response to variation in light and nutrient availability in five phylogenetically related pairs of native and invasive species occurring in a nutrient‐poor habitat. In addition to the magnitude of trait plasticity, I assessed the correlation between 16 leaf‐ and plant‐level traits and plant performance, as measured by total plant biomass. 
Because plasticity for morphological and physiological traits is thought to be limited in low resource environments (where native species usually display traits associated with resource conservation), I predicted that native and invasive species would display similar, low levels of trait plasticity. 3 Across treatments, invasive and native species within pairs differed with respect to many of the traits measured; however, invasive species as a group did not show consistent patterns in the direction of trait values. Relative to native species, invasive species displayed high plasticity in traits pertaining to biomass partitioning and leaf\u2010level nitrogen and light use, but only in response to nutrient availability. Invasive and native species showed similar levels of resource\u2010use efficiency and there was no relationship between species plasticity and resource\u2010use efficiency across species. 4 Traits associated with carbon fixation were strongly correlated with performance in invasive species while only a single resource conservation trait was strongly correlated with performance in multiple native species. Several highly plastic traits were not strongly correlated with performance which underscores the difficulty in assessing the functional significance of resource conservation traits over short timescales and calls into question the relevance of simple, quantitative assessments of trait plasticity. 5 Synthesis. My data support the idea that invasive species display high trait plasticity. The degree of plasticity observed here for species occurring in low resource systems corresponds with values observed in high resource systems, which contradicts the general paradigm that trait plasticity is constrained in low resource systems. Several traits were positively correlated with plant performance suggesting that trait plasticity will influence plant fitness.", "which hypothesis ?", "Phenotypic plasticity", 2.0, 23.0], ["The patterns in and the processes underlying the distribution of invertebrates among Southern Ocean islands and across vegetation types on these islands are reasonably well understood. However, few studies have examined the extent to which populations are genetically structured. Given that many sub\u2010Antarctic islands experienced major glaciation and volcanic activity, it might be predicted that substantial population substructure and little genetic isolation\u2010by\u2010distance should characterize indigenous species. By contrast, substantially less population structure might be expected for introduced species. Here, we examine these predictions and their consequences for the conservation of diversity in the region. We do so by examining haplotype diversity based on mitochondrial cytochrome c oxidase subunit I sequence data, from two indigenous (Cryptopygus antarcticus travei, Tullbergia bisetosa) and two introduced (Isotomurus cf. palustris, Ceratophysella denticulata) springtail species from Marion Island. We find considerable genetic substructure in the indigenous species that is compatible with the geological and glacialogical history of the island. Moreover, by employing ecological techniques, we show that haplotype diversity is likely much higher than our sequenced samples suggest. No structure is found in the introduced species, with each being represented by a single haplotype only. 
This indicates that propagule pressure is not significant for these small animals, unlike the situation for other, larger invasive species: a few individuals introduced once are likely to have initiated the invasion. These outcomes demonstrate that sampling must be more comprehensive if the population history of indigenous arthropods on these islands is to be comprehended, and that the risks of within‐ and among‐island introductions are substantial. The latter means that, if biogeographical signal is to be retained in the region, great care must be taken to avoid inadvertent movement of indigenous species among and within islands. Thus, quarantine procedures should also focus on among‐island movements.", "which hypothesis ?", "Propagule pressure", 1424.0, 1442.0], ["Background Coastal landscapes are being transformed as a consequence of the increasing demand for infrastructures to sustain residential, commercial and tourist activities. Thus, intertidal and shallow marine habitats are largely being replaced by a variety of artificial substrata (e.g. breakwaters, seawalls, jetties). Understanding the ecological functioning of these artificial habitats is key to planning their design and management, in order to minimise their impacts and to improve their potential to contribute to marine biodiversity and ecosystem functioning. Nonetheless, little effort has been made to assess the role of human disturbances in shaping the structure of assemblages on marine artificial infrastructures. We tested the hypothesis that some negative impacts associated with the expansion of opportunistic and invasive species on urban infrastructures can be related to the severe human disturbances that are typical of these environments, such as those from maintenance and renovation works. Methodology/Principal Findings Maintenance caused a marked decrease in the cover of dominant space occupiers, such as mussels and oysters, and a significant enhancement of opportunistic and invasive forms, such as biofilm and macroalgae. These effects were particularly pronounced on sheltered substrata compared to exposed substrata. Experimental application of the disturbance in winter reduced the magnitude of the impacts compared to application in spring or summer. We use these results to identify possible management strategies to inform the improvement of the ecological value of artificial marine infrastructures. Conclusions/Significance We demonstrate that some of the impacts of globally expanding marine urban infrastructures, such as those related to the spread of opportunistic and invasive species, could be mitigated through ecologically-driven planning and management of long-term maintenance of these structures. Impact mitigation is a possible outcome of policies that consider the ecological features of built infrastructures and the fundamental value of controlling biodiversity in marine urban systems.", "which hypothesis ?", "Disturbance", 1382.0, 1393.0], ["Aim of study: Acacia dealbata is a naturalized tree of invasive behaviour that has expanded from small plots associated with vineyards into forest ecosystems. Our main objective is to find evidence to support the notion that disturbances, particularly forest fires, are important driving factors in the current expansion of A. dealbata. Area of study: We mapped its current distribution using three study areas and assessed the temporal changes registered in forest cover in these areas of the valley of the river Mino. 
Material and Methods: The analyses were based on visual interpretation of aerial photographs taken in 1985 and 2003 of three 1x1 km study areas and field work. Main result: 62.4%, 48.6% and 22.2% of the surface area was covered by A. dealbata in 2003 in pure or mixed stands. Furthermore, areas composed exclusively of A. dealbata make up 33.8%, 15.2% and 5.7% of the stands. The transition matrix analyses between the two dates support our hypothesis that the areas currently covered by A. dealbata make up a greater proportion of the forest area previously classified as unwooded or open forest than those without A. dealbata cover. Both of these surface types are the result of an important impact of fire in the region. Within each area, A. dealbata is mainly located on steeper terrain, which is more affected by fires. Research highlights: A. dealbata is becoming the dominant tree species over large areas and the invasion of this species gives rise to monospecific stands, which may have important implications for future fire regimes. Keywords: Fire regime; Mimosa; plant invasion; silver wattle.", "which hypothesis ?", "Disturbance", NaN, NaN], ["Species richness of native, rare native, and exotic understorey plants was recorded at 120 sites in temperate grassy vegetation in New South Wales. Linear models were used to predict the effects of environment and disturbance on the richness of each of these groups. Total native species and rare native species showed similar responses, with richness declining on sites of increasing natural fertility of parent material as well as declining under conditions of water", "which hypothesis ?", "Disturbance", 214.0, 225.0], ["In a field experiment with 30 locally occurring old-field plant species grown in a common garden, we found that non-native plants suffer levels of attack (leaf herbivory) equal to or greater than levels suffered by congeneric native plants. This phylogenetically controlled analysis is in striking contrast to the recent findings from surveys of exotic organisms, and suggests that even if enemy release does accompany the invasion process, this may not be an important mechanism of invasion, particularly for plants with close relatives in the recipient flora.", "which hypothesis ?", "Enemy release", 390.0, 403.0], ["We sampled the understory community in an old-growth, temperate forest to test alternative hypotheses explaining the establishment of exotic plants. We quantified the individual and net importance of distance from areas of human disturbance, native plant diversity, and environmental gradients in determining exotic plant establishment. Distance from disturbed areas, both within and around the reserve, was not correlated to exotic species richness. Numbers of native and exotic species were positively correlated at large (50 m2) and small (10 m2) plot sizes, a trend that persisted when relationships to environmental gradients were controlled statistically. Both native and exotic species richness increased with soil pH and decreased along a gradient of increasing nitrate availability. Exotic species were restricted to the upper portion of the pH gradient and had individualistic responses to the availability of soil resources. These results are inconsistent with both the diversity-resistance and resource-enrichment hypotheses for invasibility. 
Environmental conditions favoring native species richness also favor exotic species richness, and competitive interactions with the native flora do not appear to limit the entry of additional species into the understory community at this site. It appears that exotic species with niche requirements poorly represented in the regional flora of native species may establish with relatively little resistance or consequence for native species richness.", "which hypothesis ?", "Disturbance", 229.0, 240.0], ["Ecosystems that are heavily invaded by an exotic species often contain abundant populations of other invasive species. This may reflect shared responses to a common factor, but may also reflect positive interactions among these exotic species. Armand Bayou (Pasadena, TX) is one such ecosystem where multiple species of invasive aquatic plants are common. We used this system to investigate whether presence of one exotic species made subsequent invasions by other exotic species more likely, less likely, or if it had no effect. We performed an experiment in which we selectively removed exotic rooted and/or floating aquatic plant species and tracked subsequent colonization and growth of native and invasive species. This allowed us to quantify how presence or absence of one plant functional group influenced the likelihood of successful invasion by members of the other functional group. We found that presence of alligatorweed (rooted plant) decreased establishment of new water hyacinth (free-floating plant) patches but increased growth of hyacinth in established patches, with an overall net positive effect on success of water hyacinth. Water hyacinth presence had no effect on establishment of alligatorweed but decreased growth of existing alligatorweed patches, with an overall net negative effect on success of alligatorweed. Moreover, observational data showed positive correlations between hyacinth and alligatorweed with hyacinth, on average, more abundant. The negative effect of hyacinth on alligatorweed growth implies competition, not strong mutual facilitation (invasional meltdown), is occurring in this system. Removal of hyacinth may increase alligatorweed invasion through release from competition. However, removal of alligatorweed may have more complex effects on hyacinth patch dynamics because there were strong opposing effects on establishment versus growth. The mix of positive and negative interactions between floating and rooted aquatic plants may influence local population dynamics of each group and thus overall invasion pressure in this watershed.", "which hypothesis ?", "Invasional meltdown", 1584.0, 1603.0], ["1 The cultivation and dissemination of alien ornamental plants increases their potential to invade. More specifically, species with bird‐dispersed seeds can potentially infiltrate natural nucleation processes in savannas. 2 To test (i) whether invasion depends on facilitation by host trees, (ii) whether propagule pressure determines invasion probability, and (iii) whether alien host plants are better facilitators of alien fleshy‐fruited species than indigenous species, we mapped the distribution of alien fleshy‐fruited species planted inside a military base, and compared this with the distribution of alien and native fleshy‐fruited species established in the surrounding natural vegetation. 
3 Abundance and diversity of fleshy‐fruited plant species was much greater beneath tree canopies than in open grassland and, although some native fleshy‐fruited plants were found both beneath host trees and in the open, alien fleshy‐fruited plants were found only beneath trees. 4 Abundance of fleshy‐fruited alien species in the natural savanna was positively correlated with the number of individuals of those species planted in the grounds of the military base, while the species richness of alien fleshy‐fruited taxa decreased with distance from the military base, supporting the notion that propagule pressure is a fundamental driver of invasions. 5 There were more fleshy‐fruited species beneath native Acacia tortilis than beneath alien Prosopis sp. trees of the equivalent size. Although there were significant differences in native plant assemblages beneath these hosts, the proportion of alien to native fleshy‐fruited species did not differ with host. 6 Synthesis. Birds facilitate invasion of a semi‐arid African savanna by alien fleshy‐fruited plants, and this process does not require disturbance. Instead, propagule pressure and a few simple biological observations define the probability that a plant will invade, with alien species planted in gardens being a major source of propagules. Some invading species have the potential to transform this savanna by overtopping native trees, leading to ecosystem‐level impacts. Likewise, the invasion of the open savanna by alien host trees (such as Prosopis sp.) may change the diversity, abundance and species composition of the fleshy‐fruited understorey. These results illustrate the complex interplay between propagule pressure, facilitation, and a range of other factors in biological invasions.", "which hypothesis ?", "Propagule pressure", 305.0, 323.0], [" Before European settlement, grassy white box woodlands were the dominant vegetation in the east of the wheat-sheep belt of south-eastern Australia. Tree clearing, cultivation and pasture improvement have led to fragmentation of this once relatively continuous ecosystem, leaving a series of remnants which themselves have been modified by livestock grazing. Little-modified remnants are extremely rare. We examined and compared the effects of fragmentation and disturbance on the understorey flora of woodland remnants, through a survey of remnants of varying size, grazing history and tree clearing. In accordance with fragmentation theory, species richness generally increased with remnant size, and, for little-grazed remnants, smaller remnants were more vulnerable to weed invasion. Similarly, tree clearing and grazing encouraged weed invasion and reduced native species richness. Evidence for increased total species richness at intermediate grazing levels, as predicted by the intermediate disturbance hypothesis, was equivocal. Remnant quality was more severely affected by grazing than by remnant size. All little-grazed remnants had lower exotic species abundance and similar or higher native species richness than grazed remnants, despite their extremely small sizes (< 6 ha). Further, small, little-grazed remnants maintained the general character of the pre-European woodland understorey, while grazing caused changes to the dominant species. Although generally small, the little-grazed remnants are the best representatives of the pre-European woodland understorey, and should be central to any conservation plan for the woodlands. 
Selected larger remnants are needed to complement these, however, to increase the total area of woodland conserved, and, because most little-grazed remnants are cleared, to represent the ecosystem in its original structural form. For the maintenance of native plant diversity and composition in little-grazed remnants, it is critical that livestock grazing continues to be excluded. For grazed remnants, maintenance of a site in its current state would allow continuation of past management, while restoration to a pre-European condition would require management directed towards weed removal, and could take advantage of the difference noted in the predominant life-cycle of native (perennial) versus exotic (annual or biennial) species. ", "which hypothesis ?", "Disturbance", 470.0, 481.0], ["Phenotypic plasticity and rapid evolution are two important strategies by which invasive species adapt to a wide range of environments and consequently are closely associated with plant invasion. To test their importance in invasion success of Crofton weed, we examined the phenotypic response and genetic variation of the weed by conducting a field investigation, common garden experiments, and intersimple sequence repeat (ISSR) marker analysis on 16 populations in China. Molecular markers revealed low genetic variation among and within the sampled populations. There were significant differences in leaf area (LA), specific leaf area (SLA), and seed number (SN) among field populations, and plasticity index (PIv) for LA, SLA, and SN were 0.62, 0.46 and 0.85, respectively. Regression analyses revealed a significant quadratic effect of latitude of population origin on LA, SLA, and SN based on field data but not on traits in the common garden experiments (greenhouse and open air). Plants from different populations showed similar reaction norms across the two common gardens for functional traits. LA, SLA, aboveground biomass, plant height at harvest, first flowering day, and life span were higher in the greenhouse than in the open-air garden, whereas SN was lower. Growth conditions (greenhouse vs. open air) and the interactions between growth condition and population origin significantly affected plant traits. The combined evidence suggests high phenotypic plasticity but low genetically based variation for functional traits of Crofton weed in the invaded range. Therefore, we suggest that phenotypic plasticity is the primary strategy for Crofton weed as an aggressive invader that can adapt to diverse environments in China.", "which hypothesis ?", "Phenotypic plasticity", 8.0, 29.0], ["Invasive nonnative grasses have altered the composition of seasonally dry shrublands and woodlands throughout the world. In many areas they coexist with native woody species until fire occurs, after which they become dominant. Yet it is not clear how long their impacts persist in the absence of further fire. We evaluated the long-term impacts of grass invasions and subsequent fire in seasonally dry submontane habitats on Hawai'i, USA. We recensused transects in invaded unburned woodland and woodland that had burned in exotic grass-fueled fires in 1970 and 1987 and had last been censused in 1991. In the unburned woodlands, we found that the dominant understory grass invader, Schizachyrium condensatum, had declined by 40%, while native understory species were abundant and largely unchanged from measurements 17 years ago. 
In burned woodland, exotic grass cover also declined, but overall values remained high and recruitment of native species was poor. Sites that had converted to exotic grassland after a 1970 fire remained dominated by exotic grasses with no increase in native cover despite 37 years without fire. Grass-dominated sites that had burned twice also showed limited recovery despite 20 years of fire suppression. We found limited evidence for "invasional meltdown": Exotic richness remained low across burned sites, and the dominant species in 1991, Melinis minutiflora, is still dominant today. Twice-burned sites are, however, being invaded by the nitrogen-fixing tree Morella faya, an introduced species with the potential to greatly alter the successional trajectory on young volcanic soils. In summary, despite decades of fire suppression, native species show little recovery in burned Hawaiian woodlands. Thus, burned sites appear to be beyond a threshold for "natural recovery" (e.g., passive restoration).", "which hypothesis ?", "Invasional meltdown", 1268.0, 1287.0], ["Abstract How introduced plants, which may be locally adapted to specific climatic conditions in their native range, cope with the new abiotic conditions that they encounter as exotics is not well understood. In particular, it is unclear what role plasticity versus adaptive evolution plays in enabling exotics to persist under new environmental circumstances in the introduced range. We determined the extent to which native and introduced populations of St. John's Wort (Hypericum perforatum) are genetically differentiated with respect to leaf-level morphological and physiological traits that allow plants to tolerate different climatic conditions. In common gardens in Washington and Spain, and in a greenhouse, we examined clinal variation in percent leaf nitrogen and carbon, leaf δ13C values (as an integrative measure of water use efficiency), specific leaf area (SLA), root and shoot biomass, root/shoot ratio, total leaf area, and leaf area ratio (LAR). As well, we determined whether native European H. perforatum experienced directional selection on leaf-level traits in the introduced range and we compared, across gardens, levels of plasticity in these traits. In field gardens in both Washington and Spain, native populations formed latitudinal clines in percent leaf N. In the greenhouse, native populations formed latitudinal clines in root and shoot biomass and total leaf area, and in the Washington garden only, native populations also exhibited latitudinal clines in percent leaf C and leaf δ13C. Traits that failed to show consistent latitudinal clines instead exhibited significant phenotypic plasticity. Introduced St. John's Wort populations also formed significant or marginally significant latitudinal clines in percent leaf N in Washington and Spain, percent leaf C in Washington, and in root biomass and total leaf area in the greenhouse. In the Washington common garden, there was strong directional selection among European populations for higher percent leaf N and leaf δ13C, but no selection on any other measured trait. The presence of convergent, genetically based latitudinal clines between native and introduced H. perforatum, together with previously published molecular data, suggests that native and exotic genotypes have independently adapted to broad-scale variation in climate that varies with latitude.", "which hypothesis ?", "Phenotypic plasticity", 1605.0, 1626.0], ["Invasive species can displace natives, and thus identifying the traits that make aliens successful is crucial for predicting and preventing biodiversity loss. Pathogens may play an important role in the invasive process, facilitating colonization of their hosts in new continents and islands. According to the Novel Weapon Hypothesis, colonizers may out-compete local native species by bringing with them novel pathogens to which native species are not adapted. In contrast, the Enemy Release Hypothesis suggests that flourishing colonizers are successful because they have left their pathogens behind. To assess the role of avian malaria and related haemosporidian parasites in the global spread of a common invasive bird, we examined the prevalence and genetic diversity of haemosporidian parasites (order Haemosporida, genera Plasmodium and Haemoproteus) infecting house sparrows (Passer domesticus). We sampled house sparrows (N = 1820) from 58 locations on 6 continents. All the samples were tested using PCR-based methods; blood films from the PCR-positive birds were examined microscopically to identify parasite species. The results show that haemosporidian parasites in the house sparrows' native range are replaced by species from local host-generalist parasite fauna in the alien environments of North and South America. Furthermore, sparrows in colonized regions displayed a lower diversity and prevalence of parasite infections. Because the house sparrow lost its native parasites when colonizing the American continents, the release from these natural enemies may have facilitated its invasion in the last two centuries. Our findings therefore reject the Novel Weapon Hypothesis and are concordant with the Enemy Release Hypothesis.", "which hypothesis ?", "Enemy release", 479.0, 492.0], ["The monk parakeet (Myiopsitta monachus) is a successful invasive species that does not exhibit life history traits typically associated with colonizing species (e.g., high reproductive rate or long‐distance dispersal capacity). To investigate this apparent paradox, we examined individual and population genetic patterns of microsatellite loci at one native and two invasive sites. More specifically, we aimed at evaluating the role of propagule pressure, sexual monogamy and long‐distance dispersal in monk parakeet invasion success. Our results indicate little loss of genetic variation at invasive sites relative to the native site. We also found strong evidence for sexual monogamy from patterns of relatedness within sites, and no definite cases of extra‐pair paternity in either the native site sample or the examined invasive site. Taken together, these patterns directly and indirectly suggest that high propagule pressure has contributed to monk parakeet invasion success. In addition, we found evidence for frequent long‐distance dispersal at an invasive site (∼100 km) that sharply contrasted with previous estimates of smaller dispersal distance made in the native range (∼2 km), suggesting long‐range dispersal also contributes to the species’ spread within the United States. 
Overall, these results add to a growing body of literature pointing to the important role of propagule pressure in determining, and thus predicting, invasion success, especially for species whose life history traits are not typically associated with invasiveness.", "which hypothesis ?", "Propagule pressure", 436.0, 454.0], ["Despite its appeal to explain plant invasions, the enemy release hypothesis (ERH) remains largely unexplored for tropical forest trees. Even scarcer are ERH studies conducted on the same host species at both the community and biogeographical scale, irrespective of the system or plant life form. In Cabrits National Park, Dominica, we observed patterns consistent with enemy release of two introduced, congeneric mahogany species, Swietenia macrophylla and S. mahagoni, planted almost 50 years ago. Swietenia populations at Cabrits have reproduced, with S. macrophylla juveniles established in and out of plantation areas at densities much higher than observed in its native range. Swietenia macrophylla juveniles also experienced significantly lower leaf-level herbivory (∼3.0%) than nine co-occurring species native to Dominica (8.4–21.8%), and far lower than conspecific herbivory observed in its native range (11%–43%, on average). These complementary findings at multiple scales support ERH, and confirm that Swietenia has naturalized at Cabrits. However, Swietenia abundance was positively correlated with native plant diversity at the seedling stage, and only marginally negatively correlated with native plant abundance for stems ≥1-cm dbh. Taken together, these descriptive patterns point to relaxed enemy pressure from specialized enemies, specifically the defoliator Steniscadia poliophaea and the shoot-borer Hypsipyla grandella, as a leading explanation for the enhanced recruitment of Swietenia trees documented at Cabrits.", "which hypothesis ?", "Enemy release", 51.0, 64.0], ["\n\nThe phenotypic plasticity and the competitive ability of the invasive Acacia longifolia v. the indigenous Mediterranean dune species Halimium halimifolium and Pinus pinea were evaluated. In particular, we explored the hypothesis that phenotypic plasticity in response to biotic and abiotic factors explains the observed differences in competitiveness between invasive and native species. The seedlings’ ability to exploit different resource availabilities was examined in a two factorial experimental design of light and nutrient treatments by analysing 20 physiological and morphological traits. Competitiveness was tested using an additive experimental design in combination with 15N-labelling experiments. Light and nutrient availability had only minor effects on most physiological traits and differences between species were not significant. Plasticity in response to changes in resource availability occurred in morphological and allocation traits, revealing A. longifolia to be a species of intermediate responsiveness. The major competitive advantage of A. longifolia was its constitutively high shoot elongation rate at most resource treatments and its effective nutrient acquisition. Further, A. longifolia was found to be highly tolerant against competition from native species. In contrast to common expectations, the competition experiment indicated that A. longifolia expressed a constant allocation pattern and a phenotypic plasticity similar to that of the native species.\n", "which hypothesis ?", "Phenotypic plasticity", 14.0, 35.0], ["Patterns of native and alien plant diversity in response to disturbance were examined along an elevational gradient in blue oak savanna, chaparral, and coniferous forests. Total species richness, alien species richness, and alien cover declined with elevation, at scales from 1 to 1000 m2. We found no support for the hypothesis that community diversity inhibits alien invasion. At the 1-m2 point scale, where we would expect competitive interactions between the largely herbaceous flora to be most intense, alien species richness as well as alien cover increased with increasing native species richness in all communities. This suggests that aliens are limited not by the number of native competitors, but by resources that affect establishment of both natives and aliens. Blue oak savannas were heavily dominated by alien species and consistently had more alien than native species at the 1-m2 scale. All of these aliens are annuals, and it is widely thought that they have displaced native bunchgrasses. If true, this...", "which hypothesis ?", "Disturbance", 60.0, 71.0], ["Background Biological invasions are fundamentally biogeographic processes that occur over large spatial scales. Interactions with soil microbes can have strong impacts on plant invasions, but how these interactions vary among areas where introduced species are highly invasive vs. naturalized is still unknown. In this study, we examined biogeographic variation in plant-soil microbe interactions of a globally invasive weed, Centaurea solstitialis (yellow starthistle). We addressed the following questions (1) Is Centaurea released from natural enemy pressure from soil microbes in introduced regions? and (2) Is variation in plant-soil feedbacks associated with variation in Centaurea's invasive success? Methodology/Principal Findings We conducted greenhouse experiments using soils and seeds collected from native Eurasian populations and introduced populations spanning North and South America where Centaurea is highly invasive and noninvasive. Soil microbes had pervasive negative effects in all regions, although the magnitude of their effect varied among regions. These patterns were not unequivocally congruent with the enemy release hypothesis. Surprisingly, we also found that Centaurea generated strong negative feedbacks in regions where it is the most invasive, while it generated neutral plant-soil feedbacks where it is noninvasive. Conclusions/Significance Recent studies have found reduced below-ground enemy attack and more positive plant-soil feedbacks in range-expanding plant populations, but we found increased negative effects of soil microbes in range-expanding Centaurea populations. While such negative feedbacks may limit the long-term persistence of invasive plants, such feedbacks may also contribute to the success of invasions, either by having disproportionately negative impacts on competing species, or by yielding relatively better growth in uncolonized areas that would encourage lateral spread. Enemy release from soil-borne pathogens is not sufficient to explain the success of this weed in such different regions. 
The biogeographic variation in soil-microbe effects indicates that different mechanisms may operate on this species in different regions, thus establishing geographic mosaics of species interactions that contribute to variation in invasion success.", "which hypothesis ?", "Enemy release", 1131.0, 1144.0], ["The enemy release hypothesis posits that non-native plant species may gain a competitive advantage over their native counterparts because they are liberated from co-evolved natural enemies from their native area. The phylogenetic relationship between a non-native plant and the native community may be important for understanding the success of some non-native plants, because host switching by insect herbivores is more likely to occur between closely related species. We tested the enemy release hypothesis by comparing leaf damage and herbivorous insect assemblages on the invasive species Senecio madagascariensis Poir. to that on nine congeneric species, of which five are native to the study area, and four are non-native but considered non-invasive. Non-native species had less leaf damage than natives overall, but we found no significant differences in the abundance, richness and Shannon diversity of herbivores between native and non-native Senecio L. species. The herbivore assemblage and percentage abundance of herbivore guilds differed among all Senecio species, but patterns were not related to whether the species was native or not. Species-level differences indicate that S. madagascariensis may have a greater proportion of generalist insect damage (represented by phytophagous leaf chewers) than the other Senecio species. Within a plant genus, escape from natural enemies may not be a sufficient explanation for why some non-native species become more invasive than others.", "which hypothesis ?", "Enemy release", 4.0, 17.0], ["Summary In order to evaluate the phenotypic plasticity of introduced pikeperch populations in Tunisia, the intra- and interpopulation differentiation was analysed using a biometric approach. Thus, nine meristic counts and 23 morphological measurements were taken from 574 specimens collected from three dams and a hill lake. The univariate (ANOVA) and multivariate analyses (PCA and DFA) showed a low meristic variability between the pikeperch samples and a segregated pikeperch group from the Sidi Salem dam which displayed a high distance between mouth and pectoral fin and a high antedorsal distance. In addition, the Korba hill lake population seemed to have more important values of total length, eye diameter, maximum body height and a higher distance between mouth and operculum than the other populations. However, the most accentuated segregation was found in the Lebna sample where the individuals were characterized by high snout length, body thickness, pectoral fin length, maximum body height and distance between mouth and operculum. This study shows the existence of morphological differentiations between populations derived from a single gene pool that have been isolated in separated sites for several decades although in relatively similar environments.", "which hypothesis ?", "Phenotypic plasticity", 33.0, 54.0], ["Most hypotheses addressing the effect of diversity on ecosystem function indicate the occurrence of higher process rates with increasing diversity, and only diverge in the shape of the function depending on their assumptions about the role of individual species and functional groups. 
Contrary to these predictions, we show that grazing of the Flooding Pampas grasslands increased species richness, but drastically reduced above-ground net primary production, even when communities with similar initial biomass were compared. Grazing increased species richness through the addition of a number of exotic forbs, without reducing the richness and cover of the native flora. Since these forbs were essentially cool-season species, and also because their introduction has led to the displacement of warm-season grasses from dominant to subordinate positions in the community, grazing not only decreased productivity, but also shifted its seasonality towards the cool season. These results suggest that species diversity and/or richness alone are poor predictors of above-ground primary production. Therefore, models that relate productivity to diversity should take into account the relative abundance and identity of species that are added or deleted by the specific disturbances that modify diversity.", "which hypothesis ?", "Disturbance", NaN, NaN], ["Schinus molle (Peruvian pepper tree) was introduced to South Africa more than 150 years ago and was widely planted, mainly along roads. Only in the last two decades has the species become naturalized and invasive in some parts of its new range, notably in semi-arid savannas. Research is being undertaken to predict its potential for further invasion in South Africa. We studied production, dispersal and predation of seeds, seed banks, and seedling establishment in relation to land uses at three sites, namely ungrazed savanna once used as a military training ground; a savanna grazed by native game; and an ungrazed mine dump. We found that seed production and seed rain density of S. molle varied greatly between study sites, but was high at all sites (384 864–1 233 690 seeds per tree per year; 3877–9477 seeds per square metre per year). We found seeds dispersed to distances of up to 320 m from female trees, and most seeds were deposited within 50 m of putative source trees. Annual seed rain density below canopies of Acacia tortilis, the dominant native tree at all sites, was significantly lower in grazed savanna. The quality of seed rain was much reduced by endophagous predators. Seed survival in the soil was low, with no survival recorded beyond 1 year. Propagule pressure appeared to drive the rate of recruitment: densities of seedlings and saplings were higher in ungrazed savanna and the ungrazed mine dump than in grazed savanna, as reflected by large numbers of young individuals, but adult : seedling ratios did not differ between savanna sites. Frequent and abundant seed production, together with effective dispersal of viable S. molle seed by birds to suitable establishment sites below trees of other species to overcome predation effects, facilitates invasion. Disturbance enhances invasion, probably by reducing competition from native plants.", "which hypothesis ?", "Propagule pressure", 1271.0, 1289.0], ["Ecological theory on biological invasions attempts to characterize the predictors of invasion success and the relative importance of the different drivers of population establishment. An outstanding question is how propagule pressure determines the probability of population establishment, where propagule pressure is the number of individuals of a species introduced into a specific location (propagule size) and their frequency of introduction (propagule number). 
Here, we used large-scale replicated mesocosm ponds over three reproductive seasons to identify how propagule size and number predict the probability of establishment of one of the world's most invasive fish, Pseudorasbora parva, as well as its effect on the somatic growth of individuals during establishment. We demonstrated that, although a threshold of 11 introduced pairs of fish (a pair is 1 male, 1 female) was required for establishment probability to exceed 95%, establishment also occurred at low propagule size (1-5 pairs). Although single introduction events were as effective as multiple events at enabling establishment, the propagule sizes used in the multiple introductions were above the detected threshold for establishment. After three reproductive seasons, population abundance was also a function of propagule size, with rapid increases in abundance only apparent when propagule size exceeded 25 pairs. This was initially assisted by adapted biological traits, including rapid individual somatic growth that helped to overcome demographic bottlenecks.", "which hypothesis ?", "Propagule pressure", 215.0, 233.0], ["Grasslands have been lost and degraded in the United States since Euro-American settlement due to agriculture, development, introduced invasive species, and changes in fire regimes. Fire is frequently used in prairie restoration to control invasion by trees and shrubs, but may have additional consequences. For example, fire might reduce damage by herbivore and pathogen enemies by eliminating litter, which harbors eggs and spores. Less obviously, fire might influence enemy loads differently for native and introduced plant hosts. We used a controlled burn in a Willamette Valley (Oregon) prairie to examine these questions. We expected that, without fire, introduced host plants should have less damage than native host plants because the introduced species are likely to have left many of their enemies behind when they were transported to their new range (the enemy release hypothesis, or ERH). If the ERH holds, then fire, which should temporarily reduce enemies on all species, should give an advantage to the natives because they should see greater total reduction in damage by enemies. Prior to the burn, we censused herbivore and pathogen attack on eight plant species (five of nonnative origin: Bromus hordeaceus, Cynosurus echinatus, Galium divaricatum, Schedonorus arundinaceus (= Festuca arundinacea), and Sherardia arvensis; and three natives: Danthonia californica, Epilobium minutum, and Lomatium nudicaule). The same plots were monitored for two years post-fire. Prior to the burn, native plants had more kinds of damage and more pathogen damage than introduced plants, consistent with the ERH. Fire reduced pathogen damage relative to the controls more for the native than the introduced species, but the effects on herbivory were negligible. Pathogen attack was correlated with plant reproductive fitness, whereas herbivory was not. These results suggest that fire may be useful for promoting some native plants in prairies due to its negative effects on their pathogens.", "which hypothesis ?", "Enemy release", 866.0, 879.0], ["Biological invasions are a growing aspect of global biodiversity change. In many regions, introduced species richness increases supralinearly over time. This does not, however, necessarily indicate increasing introduction rates or invasion success. We develop a simple null model to identify the expected trend in invasion records over time. 
For constant introduction rates and success, the expected trend is exponentially increasing. Model extensions with varying introduction rate and success can also generate exponential distributions. We then analyse temporal trends in aquatic, marine and terrestrial invasion records. Most data sets support an exponential distribution (15/16) and the null invasion model (12/16). Thus, our model shows that no change in introduction rate or success need be invoked to explain the majority of observed trends. Further, an exponential trend does not necessarily indicate increasing invasion success or 'invasional meltdown', and a saturating trend does not necessarily indicate decreasing success or biotic resistance.", "which hypothesis ?", "Invasional meltdown", 942.0, 961.0], ["1 Acer platanoides (Norway maple) is an important non\u2010native invasive canopy tree in North American deciduous forests, where native species diversity and abundance are greatly reduced under its canopy. We conducted a field experiment in North American forests to compare planted seedlings of A. platanoides and Acer saccharum (sugar maple), a widespread, common native that, like A. platanoides, is shade tolerant. Over two growing seasons in three forests we compared multiple components of seedling success: damage from natural enemies, ecophysiology, growth and survival. We reasoned that equal or superior performance by A. platanoides relative to A. saccharum indicates seedling characteristics that support invasiveness, while inferior performance indicates potential barriers to invasion. 2 Acer platanoides seedlings produced more leaves and allocated more biomass to roots, A. saccharum had greater water use efficiency, and the two species exhibited similar photosynthesis and first\u2010season mortality rates. Acer platanoides had greater winter survival and earlier spring leaf emergence, but second\u2010season mortality rates were similar. 3 The success of A. platanoides seedlings was not due to escape from natural enemies, contrary to the enemy release hypothesis. Foliar insect herbivory and disease symptoms were similarly high for both native and non\u2010native, and seedling biomass did not differ. Rather, A. platanoides compared well with A. saccharum because of its equivalent ability to photosynthesize in the low light herb layer, its higher leaf production and greater allocation to roots, and its lower winter mortality coupled with earlier spring emergence. Its only potential barrier to seedling establishment, relative to A. saccharum, was lower water use efficiency, which possibly could hinder its invasion into drier forests. 4 The spread of non\u2010native canopy trees poses an especially serious problem for native forest communities, because canopy trees strongly influence species in all forest layers. Success at reaching the canopy depends on a tree's ecology in previous life\u2010history stages, particularly as a vulnerable seedling, but little is known about seedling characteristics that promote non\u2010native tree invasion. Experimental field comparison with ecologically successful native trees provides insight into why non\u2010native trees succeed as seedlings, which is a necessary stage on their journey into the forest canopy.", "which hypothesis ?", "Enemy release", 1247.0, 1260.0], ["Roads can facilitate the establishment and spread of both native and exotic species. Nevertheless, the precise mechanisms facilitating this expansion are rarely known. 
We tested the hypothesis that dirt roads are favorable landing and nest initiation sites for founding\u2010queens of the leaf\u2010cutter ant Atta laevigata. For 2 yr, we compared the number of attempts to found new nests (colonization attempts) in dirt roads and the adjacent vegetation in a reserve of cerrado (tree\u2010dominated savanna) in southeastern Brazil. The number of colonization attempts in roads was 5 to 10 times greater than in the adjacent vegetation. Experimental transplants indicate that founding\u2010queens are more likely to establish a nest on bare soil than on soil covered with leaf\u2010litter, but the amount of litter covering the ground did not fully explain the preference of queens for dirt roads. Queens that landed on roads were at higher risk of predation by beetles and ants than those that landed in the adjacent vegetation. Nevertheless, greater predation in roads was not sufficient to offset the greater number of colonization attempts in this habitat. As a consequence, significantly more new colonies were established in roads than in the adjacent vegetation. Our results suggest that disturbance caused by the opening of roads could result in an increased Atta abundance in protected areas of the Brazilian Cerrado.", "which hypothesis ?", "Disturbance", 1271.0, 1282.0], ["Many ecosystems are created by the presence of ecosystem engineers that play an important role in determining species' abundance and species composition. Additionally, a mosaic environment of engineered and non-engineered habitats has been shown to increase biodiversity. Non-native ecosystem engineers can be introduced into environments that do not contain or have lost species that form biogenic habitat, resulting in dramatic impacts upon native communities. Yet, little is known about how non-native ecosystem engineers interact with natives and other non-natives already present in the environment, specifically whether non-native ecosystem engineers facilitate other non-natives, and whether they increase habitat heterogeneity and alter the diversity, abundance, and distribution of benthic species. Through sampling and experimental removal of reefs, we examine the effects of a non-native reef-building tubeworm, Ficopomatus enigmaticus, on community composition in the central Californian estuary, Elkhorn Slough. Tubeworm reefs host significantly greater abundances of many non-native polychaetes and amphipods, particularly the amphipods Monocorophium insidiosum and Melita nitida, compared to nearby mudflats. Infaunal assemblages under F. enigmaticus reefs and around reef's edges show very low abundance and taxonomic diversity. Once reefs are removed, the newly exposed mudflat is colonized by opportunistic non-native species, such as M. insidiosum and the polychaete Streblospio benedicti, making removal of reefs a questionable strategy for control. These results show that provision of habitat by a non-native ecosystem engineer may be a mechanism for invasional meltdown in Elkhorn Slough, and that reefs increase spatial heterogeneity in the abundance and composition of benthic communities.", "which hypothesis ?", "Invasional meltdown", 1673.0, 1692.0], ["High propagule pressure is arguably the only consistent predictor of colonization success. More individuals enhance colonization success because they aid in overcoming demographic consequences of small population size (e.g. stochasticity and Allee effects). 
The number of founders can also have direct genetic effects: with fewer individuals, more inbreeding and thus inbreeding depression will occur, whereas more individuals typically harbour greater genetic variation. Thus, the demographic and genetic components of propagule pressure are interrelated, making it difficult to understand which mechanisms are most important in determining colonization success. We experimentally disentangled the demographic and genetic components of propagule pressure by manipulating the number of founders (fewer or more), and genetic background (inbred or outbred) of individuals released in a series of three complementary experiments. We used Bemisia whiteflies and released them onto either their natal host (benign) or a novel host (challenging). Our experiments revealed that having more founding individuals and those individuals being outbred both increased the number of adults produced, but that only genetic background consistently shaped net reproductive rate of experimental populations. Environment was also important and interacted with propagule size to determine the number of adults produced. Quality of the environment interacted also with genetic background to determine establishment success, with a more pronounced effect of inbreeding depression in harsh environments. This interaction did not hold for the net reproductive rate. These data show that the positive effect of propagule pressure on founding success can be driven as much by underlying genetic processes as by demographics. Genetic effects can be immediate and have sizable effects on fitness.", "which hypothesis ?", "Propagule pressure", 5.0, 23.0], ["The potential of introduced species to become invasive is often linked to their ability to colonise disturbed habitats rapidly. We studied the effects of major disturbance by severe storms on the indigenous mussel Perna perna and the invasive mussel Mytilus galloprovincialis in sympatric intertidal populations on the south coast of South Africa. At the study sites, these species dominate different shore levels and co-exist in the mid mussel zone. We tested the hypotheses that in the mid- zone P. perna would suffer less dislodgment than M. galloprovincialis, because of its greater tenacity, while M. galloprovincialis would respond with a higher re-colonisation rate. We estimated the per- cent cover of the 2 mussels in the mid-zone from photographs, once before severe storms and 3 times afterwards. M. galloprovincialis showed faster re-colonisation and 3 times more cover than P. perna 1 and 1.5 yr after the storms (when populations had recovered). Storm-driven dislodgment in the mid- zone was highest for the species that initially dominated at each site, conforming to the concept of compensatory mortality. This resulted in similar cover of the 2 species immediately after the storms. Thus, the storm wave forces exceeded the tenacity even of P. perna, while the higher recruitment rate of M. galloprovincialis can explain its greater colonisation ability. We predict that, because of its weaker attachment strength, M. galloprovincialis will be largely excluded from open coast sites where wave action is generally stronger, but that its greater capacity for exploitation competition through re-colonisation will allow it to outcompete P. 
perna in more sheltered areas (especially in bays) that are periodically disturbed by storms.", "which hypothesis ?", "Disturbance", 160.0, 171.0], ["Introduced species must adapt their ecology, behaviour, and morphological traits to new conditions. The successful introduction and invasive potential of a species are related to its levels of phenotypic plasticity and genetic polymorphism. We analysed changes in the body mass and length of American mink (Neovison vison) since its introduction into the Warta Mouth National Park, western Poland, in relation to diet composition and colonization progress from 1996 to 2004. Mink body mass decreased significantly during the period of population establishment within the study area, with an average decrease of 13% from 1.36 to 1.18 kg in males and of 16% from 0.83 to 0.70 kg in females. Diet composition varied seasonally and between consecutive years. The main prey items were mammals and fish in the cold season and birds and fish in the warm season. During the study period the proportion of mammals preyed upon increased in the cold season and decreased in the warm season. The proportion of birds preyed upon decreased over the study period, whereas the proportion of fish increased. Following introduction, the strictly aquatic portion of mink diet (fish and frogs) increased over time, whereas the proportion of large prey (large birds, muskrats, and water voles) decreased. The average yearly proportion of large prey and average-sized prey in the mink diet was significantly correlated with the mean body masses of males and females. Biogeographical variation in the body mass and length of mink was best explained by the percentage of large prey in the mink diet in both sexes, and by latitude for females. Together these results demonstrate that American mink rapidly changed their body mass in relation to local conditions. This phenotypic variability may be underpinned by phenotypic plasticity and/or by adaptation of quantitative genetic variation. The potential to rapidly change phenotypic variation in this manner is an important factor determining the negative ecological impacts of invasive species. \u00a9 2012 The Linnean Society of London, Biological Journal of the Linnean Society, 2012, 105, 681\u2013693.", "which hypothesis ?", "Phenotypic plasticity", 193.0, 214.0], ["Abstract Enemy release is a commonly accepted mechanism to explain plant invasions. Both the diploid Leucanthemum vulgare and the morphologically very similar tetraploid Leucanthemum ircutianum have been introduced into North America. To verify which species is more prevalent in North America we sampled 98 Leucanthemum populations and determined their ploidy level. Although polyploidy has repeatedly been proposed to be associated with increased invasiveness in plants, only two of the populations surveyed in North America were the tetraploid L. ircutianum . We tested the enemy release hypothesis by first comparing 20 populations of L. vulgare and 27 populations of L. ircutianum in their native range in Europe, and then comparing the European L. vulgare populations with 31 L. vulgare populations sampled in North America. Characteristics of the site and associated vegetation, plant performance and invertebrate herbivory were recorded. In Europe, plant height and density of the two species were similar but L. vulgare produced more flower heads than L. ircutianum . Leucanthemum vulgare in North America was 17 % taller, produced twice as many flower heads and grew much denser compared to L. 
vulgare in Europe. Attack rates by root- and leaf-feeding herbivores on L. vulgare in Europe (34 and 75 %) was comparable to that on L. ircutianum (26 and 71 %) but higher than that on L. vulgare in North America (10 and 3 %). However, herbivore load and leaf damage were low in Europe. Cover and height of the co-occurring vegetation was higher in L. vulgare populations in the native than in the introduced range, suggesting that a shift in plant competition may more easily explain the invasion success of L. vulgare than escape from herbivory.", "which hypothesis ?", "Enemy release", 9.0, 22.0], ["Giant hogweed, Heracleum mantegazzianum (Apiaceae), was introduced from the Caucasus into Western Europe more than 150 years ago and later became an invasive weed which created major problems for European authorities. Phytophagous insects were collected in the native range of the giant hogweed (Caucasus) and were compared to those found on plants in the invaded parts of Europe. The list of herbivores was compiled from surveys of 27 localities in nine countries during two seasons. In addition, litera- ture records for herbivores were analysed for a total of 16 Heracleum species. We recorded a total of 265 herbivorous insects on Heracleum species and we analysed them to describe the herbivore assemblages, locate vacant niches, and identify the most host- specific herbivores on H. mantegazzianum. When combining our investigations with similar studies of herbivores on other invasive weeds, all studies show a higher proportion of specialist herbivores in the native habitats compared to the invaded areas, supporting the \"enemy release hypothesis\" (ERH). When analysing the relative size of the niches (measured as plant organ biomass), we found less herbivore species per biomass on the stem and roots, and more on the leaves (Fig. 5). Most herbivores were polyphagous gener- alists, some were found to be oligophagous (feeding within the same family of host plants) and a few had only Heracleum species as host plants (monophagous). None were known to feed exclusively on H. mantegazzianum. The oligophagous herbivores were restricted to a few taxonomic groups, especially within the Hemiptera, and were particularly abundant on this weed.", "which hypothesis ?", "Enemy release", 1031.0, 1044.0], ["1 We tested the enemy release hypothesis for invasiveness using field surveys of herbivory on 39 exotic and 30 native plant species growing in natural areas near Ottawa, Canada, and found that exotics suffered less herbivory than natives. 2 For the 39 introduced species, we also tested relationships between herbivory, invasiveness and time since introduction to North America. Highly invasive plants had significantly less herbivory than plants ranked as less invasive. Recently arrived plants also tended to be more invasive; however, there was no relationship between time since introduction and herbivory. 3 Release from herbivory may be key to the success of highly aggressive invaders. Low herbivory may also indicate that a plant possesses potent defensive chemicals that are novel to North America, which may confer resistance to pathogens or enable allelopathy in addition to deterring herbivorous insects.", "which hypothesis ?", "Enemy release", 16.0, 29.0], ["A primary impediment to understanding how species diversity and anthropogenic disturbance are related is that both diversity and disturbance can depend on the scales at which they are sampled. 
While the scale dependence of diversity estimation has received substantial attention, the scale dependence of disturbance estimation has been essentially overlooked. Here, we break from conventional examination of the diversity-disturbance relationship by holding the area over which species richness is estimated constant and instead manipulating the area over which human disturbance is measured. In the boreal forest ecoregion of Alberta, Canada, we test the dependence of species richness on disturbance scale, the scale-dependence of the intermediate disturbance hypothesis, and the consistency of these patterns in native versus exotic species and among human disturbance types. We related field observed species richness in 1 ha surveys of 372 boreal vascular plant communities to remotely sensed measures of human disturbance extent at two survey scales: local (1 ha) and landscape (18 km2). Supporting the intermediate disturbance hypothesis, species richness-disturbance relationships were quadratic at both local and landscape scales of disturbance measurement. This suggests the shape of richness-disturbance relationships is independent of the scale at which disturbance is assessed, despite that local diversity is influenced by disturbance at different scales by different mechanisms, such as direct removal of individuals (local) or indirect alteration of propagule supply (landscape). By contrast, predictions of species richness did depend on scale of disturbance measurement: with high local disturbance richness was double that under high landscape disturbance.", "which hypothesis ?", "Disturbance", 78.0, 89.0], ["Summary Ecosystems with multiple exotic species may be affected by facilitative invader interactions, which could lead to additional invasions (invasional meltdown hypothesis). Experiments show that one-way facilitation favours exotic species and observational studies suggest that reciprocal facilitation among exotic species may lead to an invasional meltdown. We conducted a mesocosm experiment to determine whether reciprocal facilitation occurs in wetland communities. We established communities with native wetland plants and aquatic snails. Communities were assigned to treatments: control (only natives), exotic snail (Pomacea maculata) invasion, exotic plant (Alternanthera philoxeroides) invasion, sequential invasion (snails then plants or plants then snails) or simultaneous invasion (snails and plants). Pomacea maculata preferentially consumed native plants, so A. philoxeroides comprised a larger percentage of plant mass and native plant mass was lowest in sequential (snail then plant) invasion treatments. Even though P. maculata may indirectly facilitate A. philoxeroides, A. philoxeroides did not reciprocally facilitate P. maculata. Rather, ecosystems invaded by multiple exotic species may be affected by one-way facilitation or reflect exotic species\u2019 common responses to abiotic factors or common paths of introduction.", "which hypothesis ?", "Invasional meltdown", 144.0, 163.0], ["Background: Brown trout (Salmo trutta) were introduced into, and subsequently colonized, a number of disparate watersheds on the island of Newfoundland, Canada (110,638 km 2 ), starting in 1883. Questions: Do environmental features of recently invaded habitats shape population-level phenotypic variability? Are patterns of phenotypic variability suggestive of parallel adaptive divergence? And does the extent of phenotypic divergence increase as a function of distance between populations? 
Hypotheses: Populations that display similar phenotypes will inhabit similar environments. Patterns in morphology, coloration, and growth in an invasive stream-dwelling fish should be consistent with adaptation, and populations closer to each other should be more similar than should populations that are farther apart. Organism and study system: Sixteen brown trout populations of probable common descent, inhabiting a gradient of environments. These populations include the most ancestral (\u223c130 years old) and most recently established (\u223c20 years old). Analytical methods: We used multivariate statistical techniques to quantify morphological (e.g. body shape via geometric morphometrics and linear measurements of traits), meristic (e.g. counts of pigmentation spots), and growth traits from 1677 individuals. To account for ontogenetic and allometric effects on morphology, we conducted separate analyses on three distinct size/age classes. We used the BIO-ENV routine and Mantel tests to measure the correlation between phenotypic and habitat features. Results: Phenotypic similarity was significantly correlated with environmental similarity, especially in the larger size classes of fish. The extent to which these associations between phenotype and habitat result from parallel evolution, adaptive phenotypic plasticity, or historical founder effects is not known. Observed patterns of body shape and fin sizes were generally consistent with predictions of adaptive trait patterns, but other traits showed less consistent patterns with habitat features. Phenotypic differences increased as a function of straight-line distance (km) between watersheds and to a lesser extent fish dispersal distances, which suggests habitat has played a more significant role in shaping population phenotypes compared with founder effects.", "which hypothesis ?", "Phenotypic plasticity", 1798.0, 1819.0], ["Although much of the theory on the success of invasive species has been geared at escape from specialist enemies, the impact of introduced generalist invertebrate herbivores on both native and introduced plant species has been underappreciated. The role of nocturnal invertebrate herbivores in structuring plant communities has been examined extensively in Europe, but less so in North America. Many nocturnal generalists (slugs, snails, and earwigs) have been introduced to North America, and 96% of herbivores found during a night census at our California Central Valley site were introduced generalists. We explored the role of these herbivores in the distribution, survivorship, and growth of 12 native and introduced plant species from six families. We predicted that introduced species sharing an evolutionary history with these generalists might be less vulnerable than native plant species. We quantified plant and herbivore abundances within our heterogeneous site and also established herbivore removal experiments in 160 plots spanning the gamut of microhabitats. As 18 collaborators, we checked 2000 seedling sites every day for three weeks to assess nocturnal seedling predation. Laboratory feeding trials allowed us to quantify the palatability of plant species to the two dominant nocturnal herbivores at the site (slugs and earwigs) and allowed us to account for herbivore microhabitat preferences when analyzing attack rates on seedlings. 
The relationship between local slug abundance and percent cover of five common plant taxa at the field site was significantly negatively associated with the mean palatability of these taxa to slugs in laboratory trials. Moreover, seedling mortality of 12 species in open-field plots was positively correlated with mean palatability of these taxa to both slugs and earwigs in laboratory trials. Counter to expectations, seedlings of native species were neither more vulnerable nor more palatable to nocturnal generalists than those of introduced species. Growth comparison of plants within and outside herbivore exclosures also revealed no differences between native and introduced plant species, despite large impacts of herbivores on growth. Cryptic nocturnal predation on seedlings was common and had large effects on plant establishment at our site. Without intensive monitoring, such predation could easily be misconstrued as poor seedling emergence.", "which Release of which kind of enemies? ?", "Generalists", 410.0, 421.0], ["Aim Charles Darwin posited that introduced species with close relatives were less likely to succeed because of fiercer competition resulting from their similarity to residents. There is much debate about the generality of this rule, and recent studies on plant and fish introductions have been inconclusive. Information on phylogenetic relatedness is potentially valuable for explaining invasion outcomes and could form part of screening protocols for minimizing future invasions. We provide the first test of this hypothesis for terrestrial vertebrates using two new molecular phylogenies for native and introduced reptiles for two regions with the best data on introduction histories.", "which Non-plant species ?", "Reptiles", 616.0, 624.0], ["1. The invasion success of Ceratitis capitata probably stems from physiological, morphological, and behavioural adaptations that enable them to survive in different habitats. However, it is generally poorly understood if variation in acute thermal tolerance and its phenotypic plasticity might be important in facilitating survival of C. capitata upon introduction to novel environments.", "which Species name ?", "Ceratitis capitata", 27.0, 45.0], ["\n\nHolcus lanatus L. can colonise a wide range of sites within the naturalised grassland of the Humid Dominion of Chile. The objectives were to determine plant growth mechanisms and strategies that have allowed H. lanatus to colonise contrasting pastures and to determine the existence of ecotypes of H. lanatus in southern Chile. Plants of H. lanatus were collected from four geographic zones of southern Chile and established in a randomised complete block design with four replicates. Five newly emerging tillers were marked per plant and evaluated at the vegetative, pre-ear emergence, complete emerged inflorescence, end of flowering period, and mature seed stages. At each evaluation, one marked tiller was harvested per plant. The variables measured included lamina length and width, tiller height, length of the inflorescence, total number of leaves, and leaf, stem, and inflorescence mass. At each phenological stage, groups of accessions were statistically formed using cluster analysis. The grouping of accessions (cluster analysis) into statistically different groups (ANOVA and canonical variate analysis) indicated the existence of different ecotypes. The phenotypic variation within each group of the accessions suggested that each group has its own phenotypic plasticity. 
It is concluded that the successful colonisation by H. lanatus has resulted from diversity within the species.\n", "which Species name ?", "Holcus lanatus", 10.0, 24.0], ["The ability to succeed in diverse conditions is a key factor allowing introduced species to successfully invade and spread across new areas. Two non-exclusive factors have been suggested to promote this ability: adaptive phenotypic plasticity of individuals, and the evolution of locally adapted populations in the new range. We investigated these individual and population-level factors in Polygonum cespitosum, an Asian annual that has recently become invasive in northeastern North America. We characterized individual fitness, life-history, and functional plasticity in response to two contrasting glasshouse habitat treatments (full sun/dry soil and understory shade/moist soil) in 165 genotypes sampled from nine geographically separate populations representing the range of light and soil moisture conditions the species inhabits in this region. Polygonum cespitosum genotypes from these introduced-range populations expressed broadly similar plasticity patterns. In response to full sun, dry conditions, genotypes from all populations increased photosynthetic rate, water use efficiency, and allocation to root tissues, dramatically increasing reproductive fitness compared to phenotypes expressed in simulated understory shade. Although there were subtle among-population differences in mean trait values as well as in the slope of plastic responses, these population differences did not reflect local adaptation to environmental conditions measured at the population sites of origin. Instead, certain populations expressed higher fitness in both glasshouse habitat treatments. We also compared the introduced-range populations to a single population from the native Asian range, and found that the native population had delayed phenology, limited functional plasticity, and lower fitness in both experimental environments compared with the introduced-range populations. Our results indicate that the future spread of P. cespitosum in its introduced range will likely be fueled by populations consisting of individuals able to express high fitness across diverse light and moisture conditions, rather than by the evolution of locally specialized populations.", "which Species name ?", "Polygonum cespitosum", 391.0, 411.0], ["Invasive alien species might benefit from phenotypic plasticity by being able to (i) maintain fitness in stressful environments (\u2018robust\u2019), (ii) increase fitness in favourable environments (\u2018opportunistic\u2019), or (iii) combine both abilities (\u2018robust and opportunistic\u2019). Here, we applied this framework, for the first time, to an animal, the invasive slug, Arion lusitanicus, and tested (i) whether it has a more adaptive phenotypic plasticity compared with a congeneric native slug, Arion fuscus, and (ii) whether it is robust, opportunistic or both. During one year, we exposed specimens of both species to a range of temperatures along an altitudinal gradient (700\u20132400 m a.s.l.) and to high and low food levels, and we compared the responsiveness of two fitness traits: survival and egg production. During summer, the invasive species had a more adaptive phenotypic plasticity, and at high temperatures and low food levels, it survived better and produced more eggs than A. fuscus, representing the robust phenotype. During winter, A. lusitanicus displayed a less adaptive phenotype than A. fuscus. 
We show that the framework developed for plants is also very useful for a better mechanistic understanding of animal invasions. Warmer summers and milder winters might lead to an expansion of this invasive species to higher altitudes and enhance its spread in the lowlands, supporting the concern that global climate change will increase biological invasions.", "which Species name ?", "Arion lusitanicus", 356.0, 373.0], ["While phenotypic plasticity is considered the major means that allows plant to cope with environmental heterogeneity, scant information is available on phenotypic plasticity of the whole-plant architecture in relation to ontogenic processes. We performed an architectural analysis to gain an understanding of the structural and ontogenic properties of common buckthorn (Rhamnus cathartica L., Rhamnaceae) growing in the understory and under an open canopy. We found that ontogenic effects on growth need to be calibrated if a full description of phenotypic plasticity is to be obtained. Our analysis pointed to three levels of organization (or nested structural units) in R. cathartica. Their modulation in relation to light conditions leads to the expression of two architectural strategies that involve sets of traits known to confer competitive advantage in their respective environments. In the understory, the plant develops a tree-like form. Its strategy here is based on restricting investment in exploitation str...", "which Species name ?", "Rhamnus cathartica L.", NaN, NaN], ["During the upsurge of the introduced predatory Nile perch in Lake Victoria in the 1980s, the zooplanktivorous Haplochromis (Yssichromis) pyrrhocephalus nearly vanished. The species recovered coincident with the intense fishing of Nile perch in the 1990s, when water clarity and dissolved oxygen levels had decreased dramatically due to increased eutrophication. In response to the hypoxic conditions, total gill surface in resurgent H. pyrrhocephalus increased by 64%. Remarkably, head length, eye length, and head volume decreased in size, whereas cheek depth increased. Reductions in eye size and depth of the rostral part of the musculus sternohyoideus, and reallocation of space between the opercular and suspensorial compartments of the head may have permitted accommodation of larger gills in a smaller head. By contrast, the musculus levator posterior, located dorsal to the gills, increased in depth. This probably reflects an adaptive response to the larger and tougher prey types in the diet of resurgent H. pyrrhocephalus. These striking morphological changes over a time span of only two decades could be the combined result of phenotypic plasticity and genetic change and may have fostered recovery of this species.", "which Species name ?", "Haplochromis (Yssichromis) pyrrhocephalus", NaN, NaN], ["Fish can undergo changes in their life-history traits that correspond with local demographic conditions. Under range expansion, a population of non-native fish might then be expected to exhibit a suite of life-history traits that differ between the edge and the centre of the population\u2019s geographic range. To test this hypothesis, life-history traits of an expanding population of round goby, Neogobius melanostomus (Pallas), in early and newly established sites in the Trent River (Ontario, Canada) were compared in 2007 and 2008. Round goby in the area of first introduction exhibited a significant decrease in age at maturity, increased length at age 1 and they increased in GSI from 2007 to 2008. 
While individuals at the edges of the range exhibited traits that promote population growth under low intraspecific density, yearly variability in life-history traits suggests that additional processes such as declining density and fluctuating food availability are influencing the reproductive strategy and growth of round goby during an invasion.", "which Species name ?", "Neogobius melanostomus", 394.0, 416.0], ["Plant distributions are in part determined by environmental heterogeneity on both large (landscape) and small (several meters) spatial scales. Plant populations can respond to environmental heterogeneity via genetic differentiation between large distinct patches, and via phenotypic plasticity in response to heterogeneity occurring at small scales relative to dispersal distance. As a result, the level of environmental heterogeneity experienced across generations, as determined by seed dispersal distance, may itself be under selection. Selection could act to increase or decrease seed dispersal distance, depending on patterns of heterogeneity in environmental quality with distance from a maternal home site. Serpentine soils, which impose harsh and variable abiotic stress on non-adapted plants, have been partially invaded by Erodium cicutarium in northern California, USA. Using nearby grassland sites characterized as either serpentine or non-serpentine, we collected seeds from dense patches of E. cicutarium on both soil types in spring 2004 and subsequently dispersed those seeds to one of four distances from their maternal home site (0, 0.5, 1, or 10 m). We examined distance-dependent patterns of variation in offspring lifetime fitness, conspecific density, soil availability, soil water content, and aboveground grass and forb biomass. ANOVA revealed a distinct fitness peak when seeds were dispersed 0.5 m from their maternal home site on serpentine patches. In non-serpentine patches, fitness was reduced only for seeds placed back into the maternal home site. Conspecific density was uniformly high within 1 m of a maternal home site on both soils, whereas soil water content and grass biomass were significantly heterogeneous among dispersal distances only on serpentine soils. Structural equation modeling and multigroup analysis revealed significantly stronger direct and indirect effects linking abiotic and biotic variation to offspring performance on serpentine soils than on non-serpentine soils, indicating the potential for soil-specific selection on seed dispersal distance in this invasive species.", "which Species name ?", "Erodium cicutarium", 833.0, 851.0], ["Phenotypic plasticity has been suggested as the main mechanism for species persistence under a global change scenario, and also as one of the main mechanisms that alien species use to tolerate and invade broad geographic areas. However, contrasting with this central role of phenotypic plasticity, standard models aimed to predict the effect of climatic change on species distributions do not allow for the inclusion of differences in plastic responses among populations. In this context, the climatic variability hypothesis (CVH), which states that higher thermal variability at higher latitudes should determine an increase in phenotypic plasticity with latitude, could be considered a timely and promising hypothesis. Accordingly, in this study we evaluated, for the first time in a plant species (Taraxacum officinale), the prediction of the CVH. 
Specifically, we measured plastic responses at different environmental temperatures (5 and 20\u00b0C), in several ecophysiological and fitness-related traits for five populations distributed along a broad latitudinal gradient. Overall, phenotypic plasticity increased with latitude for all six traits analyzed, and mean trait values increased with latitude at both experimental temperatures, the change was noticeably greater at 20\u00b0 than at 5\u00b0C. Our results suggest that the positive relationship found between phenotypic plasticity and geographic latitude could have very deep implications on future species persistence and invasion processes under a scenario of climate change.", "which Species name ?", "Taraxacum officinale", 801.0, 821.0], ["Abstract Interactions between environmental variables in anthropogenically disturbed environments and physiological traits of invasive species may help explain reasons for invasive species' establishment in new areas. Here we analyze how soil contamination along roadsides may influence the establishment of Conium maculatum (poison hemlock) in Cook County, IL, USA. We combine analyses that: (1) characterize the soil and measure concentrations of heavy metals and polycyclic aromatic hydrocarbons (PAHs) where Conium is growing; (2) assess the genetic diversity and structure of individuals among nine known populations; and (3) test for tolerance to heavy metals and evidence for local soil growth advantage with greenhouse establishment experiments. We found elevated levels of metals and PAHs in the soil where Conium was growing. Specifically, arsenic (As), cadmium (Cd), and lead (Pb) were found at elevated levels relative to U.S. EPA ecological contamination thresholds. In a greenhouse study we found that Conium is more tolerant of soils containing heavy metals (As, Cd, Pb) than two native species. For the genetic analysis a total of 217 individuals (approximately 20\u201330 per population) were scored with 5 ISSR primers, yielding 114 variable loci. We found high levels of genetic diversity in all populations but little genetic structure or differentiation among populations. Although Conium shows a general tolerance to contamination, we found few significant associations between genetic diversity metrics and a suite of measured environmental and spatial parameters. Soil contamination is not driving the peculiar spatial distribution of Conium in Cook County, but these findings indicate that Conium is likely establishing in the Chicago region partially due to its ability to tolerate high levels of metal contamination.", "which Species name ?", "Conium maculatum", 308.0, 324.0], ["1 The emerald ash borer Agrilus planipennis (Coleoptera: Buprestidae) (EAB), an invasive wood\u2010boring beetle, has recently caused significant losses of native ash (Fraxinus spp.) trees in North America. Movement of wood products has facilitated EAB spread, and heat sanitation of wooden materials according to International Standards for Phytosanitary Measures No. 15 (ISPM 15) is used to prevent this. 2 In the present study, we assessed the thermal conditions experienced during a typical heat\u2010treatment at a facility using protocols for pallet wood treatment under policy PI\u201007, as implemented in Canada. The basal high temperature tolerance of EAB larvae and pupae was determined, and the observed heating rates were used to investigate whether the heat shock response and expression of heat shock proteins occurred in fourth\u2010instar larvae. 
3 The temperature regime during heat treatment greatly exceeded the ISPM 15 requirements of 56 \u00b0C for 30 min. Emerald ash borer larvae were highly tolerant of elevated temperatures, with some instars surviving exposure to 53 \u00b0C without any heat pre\u2010treatments. High temperature survival was increased by either slow warming or pre\u2010exposure to elevated temperatures and a recovery regime that was accompanied by up\u2010regulated hsp70 expression under some of these conditions. 4 Because EAB is highly heat tolerant and exhibits a fully functional heat shock response, we conclude that greater survival than measured in vitro is possible under industry treatment conditions (with the larvae still embedded in the wood). We propose that the phenotypic plasticity of EAB may lead to high temperature tolerance very close to conditions experienced in an ISPM 15 standard treatment.", "which Species name ?", "Agrilus planipennis", 24.0, 43.0], ["Abstract Background Introduced species can have profound effects on native species, communities, and ecosystems, and have caused extinctions or declines in native species globally. We examined the evolutionary response of native zooplankton populations to the introduction of non-native salmonids in alpine lakes in the Sierra Nevada of California, USA. We compared morphological and life-history traits in populations of Daphnia with a known history of introduced salmonids and populations that have no history of salmonid introductions. Results Our results show that Daphnia populations co-existing with fish have undergone rapid adaptive reductions in body size and in the timing of reproduction. Size-related traits decreased by up to 13 percent in response to introduced fish. Rates of evolutionary change are as high as 4,238 darwins (0.036 haldanes). Conclusion Species introductions into aquatic habitats can dramatically alter the selective environment of native species leading to a rapid evolutionary response. Knowledge of the rates and limits of adaptation is an important component of understanding the long-term effects of alterations in the species composition of communities. We discuss the evolutionary consequences of species introductions and compare the rate of evolution observed in the Sierra Nevada Daphnia to published estimates of evolutionary change in ecological timescales.", "which Species name ?", "Salmonids", 287.0, 296.0], ["Summary 1. The exotic cladoceran Daphnia lumholtzi has recently invaded freshwater systems throughout the United States. Daphnia lumholtzi possesses extravagant head spines that are longer than those found on any other North American Daphnia. These spines are effective at reducing predation from many of the predators that are native to newly invaded habitats; however, they are plastic both in nature and in laboratory cultures. The purpose of this experiment was to better understand what environmental cues induce and maintain these effective predator-deterrent spines. We conducted life-table experiments on individual D. lumholtzi grown in water conditioned with an invertebrate insect predator, Chaoborus punctipennis, and water conditioned with a vertebrate fish predator, Lepomis macrochirus. 2. Daphnia lumholtzi exhibited morphological plasticity in response to kairomones released by both predators. However, direct exposure to predator kairomones during postembryonic development did not induce long spines in D. lumholtzi. 
In contrast, neonates produced from individuals exposed to Lepomis kairomones had significantly longer head and tail spines than neonates produced from control and Chaoborus individuals. These results suggest that there may be a maternal, or pre-embryonic, effect of kairomone exposure on spine development in D. lumholtzi. 3. Independent of these morphological shifts, D. lumholtzi also exhibited plasticity in life history characteristics in response to predator kairomones. For example, D. lumholtzi exhibited delayed reproduction in response to Chaoborus kairomones, and significantly more individuals produced resting eggs, or ephippia, in the presence of Lepomis kairomones.", "which Species name ?", "Daphnia lumholtzi", 33.0, 50.0], ["ABSTRACT The Asian tiger mosquito, Aedes albopictus (Skuse), is perhaps the most successful invasive mosquito species in contemporary history. In the United States, Ae. albopictus has spread from its introduction point in southern Texas to as far north as New Jersey (i.e., a span of \u224814\u00b0 latitude). This species experiences seasonal constraints in activity because of cold temperatures in winter in the northern United States, but is active year-round in the south. We performed a laboratory experiment to examine how life-history traits of Ae. albopictus from four populations (New Jersey [39.4\u00b0 N], Virginia [38.6\u00b0 N], North Carolina [35.8\u00b0 N], Florida [27.6\u00b0 N]) responded to photoperiod conditions that mimic approaching winter in the north (short static daylength, short diminishing daylength) or relatively benign summer conditions in the south (long daylength), at low and high larval densities. Individuals from northern locations were predicted to exhibit reduced development times and to emerge smaller as adults under short daylength, but be larger and take longer to develop under long daylength. Life-history traits of southern populations were predicted to show less plasticity in response to daylength because of low probability of seasonal mortality in those areas. Males and females responded strongly to photoperiod regardless of geographic location, being generally larger but taking longer to develop under the long daylength compared with short day lengths; adults of both sexes were smaller when reared at low larval densities. Adults also differed in mass and development time among locations, although this effect was independent of density and photoperiod in females but interacted with density in males. Differences between male and female mass and development times was greater in the long photoperiod suggesting differences between the sexes in their reaction to different photoperiods. This work suggests that Ae. albopictus exhibits sex-specific phenotypic plasticity in life-history traits matching variation in important environmental variables.", "which Species name ?", "Aedes albopictus", 35.0, 51.0], ["Phenotypic plasticity has long been suspected to allow invasive species to expand their geographic range across large-scale environmental gradients. We tested this possibility in Australia using a continental scale survey of the invasive tree Parkinsonia aculeata (Fabaceae) in twenty-three sites distributed across four climate regions and three habitat types. Using tree-level responses, we detected a trade-off between seed mass and seed number across the moisture gradient. Individual trees plastically and reversibly produced many small seeds at dry sites or years, and few big seeds at wet sites and years. 
Bigger seeds were positively correlated with higher seed and seedling survival rates. The trade-off, the relation between seed mass, seed and seedling survival, and other fitness components of the plant life-cycle were integrated within a matrix population model. The model confirms that the plastic response resulted in average fitness benefits across the life-cycle. Plasticity resulted in average fitness being positively maintained at the wet and dry range margins where extinction risks would otherwise have been high (\u201cJack-of-all-Trades\u201d strategy JT), and fitness being maximized at the species range centre where extinction risks were already low (\u201cMaster-of-Some\u201d strategy MS). The resulting hybrid \u201cJack-and-Master\u201d strategy (JM) broadened the geographic range and amplified average fitness in the range centre. Our study provides the first empirical evidence for a JM species. It also confirms mechanistically the importance of phenotypic plasticity in determining the size, the shape and the dynamic of a species distribution. The JM allows rapid and reversible phenotypic responses to new or changing moisture conditions at different scales, providing the species with definite advantages over genetic adaptation when invading diverse and variable environments. Furthermore, natural selection pressure acting on phenotypic plasticity is predicted to result in maintenance of the JT and strengthening of the MS, further enhancing the species invasiveness in its range centre.", "which Species name ?", "Parkinsonia aculeata", 243.0, 263.0], ["Abstract We documented microhabitat occurrence and growth of Lonicera japonica to identify factors related to its invasion into a southern Illinois shale barren. The barren was surveyed for L. japonica in June 2003, and the microhabitats of established L. japonica plants were compared to random points that sampled the range of available microhabitats in the barren. Vine and leaf characters were used as measurements of plant growth. Lonicera japonica occurred preferentially in areas of high litter cover and species richness, comparatively small trees, low PAR, low soil moisture and temperature, steep slopes, and shallow soils. Plant growth varied among these microhabitats. Among plots where L. japonica occurred, growth was related to soil and light conditions, and aspects of surrounding cover. Overhead canopy cover was a common variable associated with nearly all measured growth traits. Plasticity of traits to improve invader success can only affect the likelihood of invasion once constraints to establishment and persistence have been surmounted. Therefore, understanding where L. japonica invasion occurs, and microhabitat interactions with plant growth are important for estimating invasion success.", "which Species name ?", " Lonicera japonica", 60.0, 78.0], ["Abstract How introduced plants, which may be locally adapted to specific climatic conditions in their native range, cope with the new abiotic conditions that they encounter as exotics is not well understood. In particular, it is unclear what role plasticity versus adaptive evolution plays in enabling exotics to persist under new environmental circumstances in the introduced range. We determined the extent to which native and introduced populations of St. John's Wort (Hypericum perforatum) are genetically differentiated with respect to leaf-level morphological and physiological traits that allow plants to tolerate different climatic conditions. 
In common gardens in Washington and Spain, and in a greenhouse, we examined clinal variation in percent leaf nitrogen and carbon, leaf \u03b413C values (as an integrative measure of water use efficiency), specific leaf area (SLA), root and shoot biomass, root/shoot ratio, total leaf area, and leaf area ratio (LAR). As well, we determined whether native European H. perforatum experienced directional selection on leaf-level traits in the introduced range and we compared, across gardens, levels of plasticity in these traits. In field gardens in both Washington and Spain, native populations formed latitudinal clines in percent leaf N. In the greenhouse, native populations formed latitudinal clines in root and shoot biomass and total leaf area, and in the Washington garden only, native populations also exhibited latitudinal clines in percent leaf C and leaf \u03b413C. Traits that failed to show consistent latitudinal clines instead exhibited significant phenotypic plasticity. Introduced St. John's Wort populations also formed significant or marginally significant latitudinal clines in percent leaf N in Washington and Spain, percent leaf C in Washington, and in root biomass and total leaf area in the greenhouse. In the Washington common garden, there was strong directional selection among European populations for higher percent leaf N and leaf \u03b413C, but no selection on any other measured trait. The presence of convergent, genetically based latitudinal clines between native and introduced H. perforatum, together with previously published molecular data, suggest that native and exotic genotypes have independently adapted to a broad-scale variation in climate that varies with latitude.", "which Species name ?", "Hypericum perforatum", 472.0, 492.0], ["Aims Adaptive evolution along geographic gradients of climatic conditions is suggested to facilitate the spread of invasive plant species, leading to clinal variation among populations in the introduced range. We investigated whether adaptation to climate is also involved in the invasive spread of an ornamental shrub, Buddleja davidii, across western and central Europe. Methods We combined a common garden experiment, replicated in three climatically different central European regions, with reciprocal transplantation to quantify genetic differentiation in growth and reproductive traits of 20 invasive B. davidii populations. Additionally, we compared compensatory regrowth among populations after clipping of stems to simulate mechanical damage.", "which Species name ?", "Buddleja davidii", 320.0, 336.0], ["Invasive exotic plants reduce the diversity of native communities by displacing native species. According to the coexistence theory, native plants are able to coexist with invaders only when their fitness is not significantly smaller than that of the exotics or when they occupy a different niche. It has therefore been hypothesized that the survival of some native species at invaded sites is due to post-invasion evolutionary changes in fitness and/or niche traits. In common garden experiments, we tested whether plants from invaded sites of two native species, Impatiens noli-tangere and Galeopsis speciosa, outperform conspecifics from non-invaded sites when grown in competition with the invader (Impatiens parviflora). 
We further examined whether the expected superior performance of the plants from the invaded sites is due to changes in the plant size (fitness proxy) and/or changes in the germination phenology and phenotypic plasticity (niche proxies). Invasion history did not influence the performance of any native species when grown with the exotic competitor. In I. noli-tangere, however, we found significant trait divergence with regard to plant size, germination phenology and phenotypic plasticity. In the absence of a competitor, plants of I. noli-tangere from invaded sites were larger than plants from non-invaded sites. The former plants germinated earlier than inexperienced conspecifics or an exotic congener. Invasion experience was also associated with increased phenotypic plasticity and an improved shade-avoidance syndrome. Although these changes indicate fitness and niche differentiation of I. noli-tangere at invaded sites, future research should examine more closely the adaptive value of these changes and their genetic basis.", "which Species name ?", "Impatiens parviflora", 703.0, 723.0], ["The expression of defensive morphologies in prey often is correlated with predator abundance or diversity over a range of temporal and spatial scales. These patterns are assumed to reflect natural selection via differential predation on genetically determined, fixed phenotypes. Phenotypic variation, however, also can reflect within-generation developmental responses to environmental cues (phenotypic plasticity). For example, water-borne effluents from predators can induce the production of defensive morphologies in many prey taxa. This phenomenon, however, has been examined only on narrow scales. Here, we demonstrate adaptive phenotypic plasticity in prey from geographically separated populations that were reared in the presence of an introduced predator. Marine snails exposed to predatory crab effluent in the field increased shell thickness rapidly compared with controls. Induced changes were comparable to (i) historical transitions in thickness previously attributed to selection by the invading predator and (ii) present-day clinal variation predicted from water temperature differences. Thus, predator-induced phenotypic plasticity may explain broad-scale geographic and temporal phenotypic variation. If inducible defenses are heritable, then selection on the reaction norm may influence coevolution between predator and prey. Trade-offs may explain why inducible rather than constitutive defenses have evolved in several gastropod species.", "which Species name ?", "Marine snails", 766.0, 779.0], ["Plant species introduced into novel ranges may become invasive due to evolutionary change, phenotypic plasticity, or other biotic or abiotic mechanisms. Evolution of introduced populations could be the result of founder effects, drift, hybridization, or adaptation to local conditions, which could enhance the invasiveness of introduced species. However, understanding whether the success of invading populations is due to genetic differences between native and introduced populations may be obscured by origin x environment interactions. That is, studies conducted under a limited set of environmental conditions may show inconsistent results if native or introduced populations are differentially adapted to specific conditions. We tested for genetic differences between native and introduced populations, and for origin x environment interactions, between native (China) and introduced (U.S.) 
populations of the invasive annual grass Microstegium vimineum (stiltgrass) across 22 common gardens spanning a wide range of habitats and environmental conditions. On average, introduced populations produced 46% greater biomass and had 7.4% greater survival, and outperformed native range populations in every common garden. However, we found no evidence that introduced Microstegium exhibited greater phenotypic plasticity than native populations. Biomass of Microstegium was positively correlated with light and resident community richness and biomass across the common gardens. However, these relationships were equivalent for native and introduced populations, suggesting that the greater mean performance of introduced populations is not due to unequal responses to specific environmental parameters. Our data on performance of invasive and native populations suggest that post-introduction evolutionary changes may have enhanced the invasive potential of this species. Further, the ability of Microstegium to survive and grow across the wide variety of environmental conditions demonstrates that few habitats are immune to invasion.", "which Species name ?", "Microstegium vimineum", 937.0, 958.0], ["The Asian grass Miscanthus sinensis (Poaceae) is being considered for use as a bioenergy crop in the U.S. Corn Belt. Originally introduced to the United States for ornamental plantings, it escaped, forming invasive populations. The concern is that naturalized M. sinensis populations have evolved shade tolerance. We tested the hypothesis that seedlings from within the invasive U.S. range of M. sinensis would display traits associated with shade tolerance, namely increased area for light capture and phenotypic plasticity, compared with seedlings from the native Japanese populations. In a common garden experiment, seedlings of 80 half-sib maternal lines were grown from the native range (Japan) and 60 half-sib maternal lines from the invasive range (U.S.) under four light levels. Seedling leaf area, leaf size, growth, and biomass allocation were measured on the resulting seedlings after 12 wk. Seedlings from both regions responded strongly to the light gradient. High light conditions resulted in seedlings with greater leaf area, larger leaves, and a shift to greater belowground biomass investment, compared with shaded seedlings. Japanese seedlings produced more biomass and total leaf area than U.S. seedlings across all light levels. Generally, U.S. and Japanese seedlings allocated a similar amount of biomass to foliage and equal leaf area per leaf mass. Subtle differences in light response by region were observed for total leaf area, mass, growth, and leaf size. U.S. seedlings had slightly higher plasticity for total mass and leaf area but lower plasticity for measures of biomass allocation and leaf traits compared with Japanese seedlings. Our results do not provide general support for the hypothesis of increased M. sinensis shade tolerance within its introduced U.S. range compared with native Japanese populations. Nomenclature: Eulaliagrass; Miscanthus sinensis Anderss. Management Implications: Eulaliagrass (Miscanthus sinensis), an Asian species under consideration for biomass production in the Midwest, has escaped ornamental plantings in the United States to form naturalized populations. Evidence suggests that U.S. populations are able to tolerate relatively shady conditions, but it is unclear whether U.S. 
populations have greater shade tolerance than the relatively shade-intolerant populations within the species' native range in Asia. Increased shade tolerance could result in a broader range of invaded light environments within the introduced range of M. sinensis. However, results from our common garden experiment do not support the hypothesis of increased shade tolerance in introduced U.S. populations compared with seedlings from native Asian populations. Our results do demonstrate that for both U.S. and Japanese populations under low light conditions, M. sinensis seeds germinate and seedlings gain mass and leaf area; therefore, land managers should carefully monitor or eradicate M. sinensis within these habitats.", "which Species name ?", "Miscanthus sinensis", 16.0, 35.0], ["In herbaceous ecosystems worldwide, biodiversity has been negatively impacted by changed grazing regimes and nutrient enrichment. Altered disturbance regimes are thought to favour invasive species that have a high phenotypic plasticity, although most studies measure plasticity under controlled conditions in the greenhouse and then assume plasticity is an advantage in the field. Here, we compare trait plasticity between three co-occurring, C4 perennial grass species, an invader Eragrostis curvula, and natives Eragrostis sororia and Aristida personata to grazing and fertilizer in a three-year field trial. We measured abundances and several leaf traits known to correlate with strategies used by plants to fix carbon and acquire resources, i.e. specific leaf area (SLA), leaf dry matter content (LDMC), leaf nutrient concentrations (N, C\u2236N, P), assimilation rates (Amax) and photosynthetic nitrogen use efficiency (PNUE). In the control treatment (grazed only), trait values for SLA, leaf C\u2236N ratios, Amax and PNUE differed significantly between the three grass species. When trait values were compared across treatments, E. curvula showed higher trait plasticity than the native grasses, and this correlated with an increase in abundance across all but the grazed/fertilized treatment. The native grasses showed little trait plasticity in response to the treatments. Aristida personata decreased significantly in the treatments where E. curvula increased, and E. sororia abundance increased possibly due to increased rainfall and not in response to treatments or invader abundance. Overall, we found that plasticity did not favour an increase in abundance of E. curvula under the grazed/fertilized treatment likely because leaf nutrient contents increased and subsequently its' palatability to consumers. E. curvula also displayed a higher resource use efficiency than the native grasses. These findings suggest resource conditions and disturbance regimes can be manipulated to disadvantage the success of even plastic exotic species.", "which Species name ?", "Eragrostis curvula", 482.0, 500.0], ["ABSTRACT Question: Do specific environmental conditions affect the performance and growth dynamics of one of the most invasive taxa (Carpobrotus aff. acinaciformis) on Mediterranean islands? Location: Four populations located on Mallorca, Spain. Methods: We monitored growth rates of main and lateral shoots of this stoloniferous plant for over two years (2002\u20132003), comparing two habitats (rocky coast vs. coastal dune) and two different light conditions (sun vs. shade). 
In one population of each habitat type, we estimated electron transport rate and the level of plant stress (maximal photochemical efficiency Fv/Fm) by means of chlorophyll fluorescence. Results: Main shoots of Carpobrotus grew at similar rates at all sites, regardless habitat type. However, growth rate of lateral shoots was greater in shaded plants than in those exposed to sunlight. Its high phenotypic plasticity, expressed in different allocation patterns in sun and shade individuals, and its clonal growth which promotes the continuous sea...", "which Species name ?", "Carpobrotus aff. acinaciformis", 133.0, 163.0], ["The variability of shell morphology and relative growth of the invasive pearl oyster Pinctada radiata was studied within and among ten populations from coastal Tunisia using discriminant tests. Therefore, 12 morphological characters were examined and 34 metric and weight ratios were defined. In addition to the classic morphological characters, populations were compared by the thickness of the nacreous layer. Results of Duncan's multiple comparison test showed that the most discriminative ratios were the width of nacreous layer of right valve to the inflation of shell, the hinge line length to the maximum width of shell and the nacre thickness to the maximum width of shell. The analysis of variance revealed an important inter-population morphological variability. Both multidimensional scaling analysis and the squared Mahalanobis distances (D2) of metric ratios divided Tunisian P. radiata populations into four biogeographical groupings: the north coast (La Marsa); harbours (Hammamet, Monastir and Zarzis); the Gulf of Gab\u00e8s (Sfax, Kerkennah Island, Mahar\u00e8s, Skhira and Djerba) and the intertidal area (Ajim). However, the Kerkennah Island population was discriminated by the squared Mahalanobis distances (D2) of weight ratios in an isolated group suggesting particular trophic conditions in this area. The allometric study revealed high linear correlation between shell morphological characters and differences in allometric growth among P. radiata populations. Unlike the morphological discrimination, allometric differentiation shows no clear geographical distinction. This study revealed that the pearl oyster P. radiata exhibited considerable phenotypic plasticity related to differences of environmental and/or ecological conditions along Tunisian coasts and highlighted the discriminative character of the nacreous layer thickness parameter.", "which Species name ?", "Pinctada radiata", 106.0, 122.0], ["\n\nThe phenotypic plasticity and the competitive ability of the invasive Acacia longifolia v. the indigenous Mediterranean dune species Halimium halimifolium and Pinus pinea were evaluated. In particular, we explored the hypothesis that phenotypic plasticity in response to biotic and abiotic factors explains the observed differences in competitiveness between invasive and native species. The seedlings\u2019 ability to exploit different resource availabilities was examined in a two factorial experimental design of light and nutrient treatments by analysing 20 physiological and morphological traits. Competitiveness was tested using an additive experimental design in combination with 15N-labelling experiments. Light and nutrient availability had only minor effects on most physiological traits and differences between species were not significant. Plasticity in response to changes in resource availability occurred in morphological and allocation traits, revealing A. 
longifolia to be a species of intermediate responsiveness. The major competitive advantage of A. longifolia was its constitutively high shoot elongation rate at most resource treatments and its effective nutrient acquisition. Further, A. longifolia was found to be highly tolerant against competition from native species. In contrast to common expectations, the competition experiment indicated that A. longifolia expressed a constant allocation pattern and a phenotypic plasticity similar to that of the native species.\n", "which Species name ?", "Acacia longifolia", 80.0, 97.0], ["The objective of this study was to test if morphological differences in pumpkinseed Lepomis gibbosus found in their native range (eastern North America) that are linked to feeding regime, competition with other species, hydrodynamic forces and habitat were also found among stream- and lake- or reservoir-dwelling fish in Iberian systems. The species has been introduced into these systems, expanding its range, and is presumably well adapted to freshwater Iberian Peninsula ecosystems. The results show a consistent pattern for size of lateral fins, with L. gibbosus that inhabit streams in the Iberian Peninsula having longer lateral fins than those inhabiting reservoirs or lakes. Differences in fin placement, body depth and caudal peduncle dimensions do not differentiate populations of L. gibbosus from lentic and lotic water bodies and, therefore, are not consistent with functional expectations. Lepomis gibbosus from lotic and lentic habitats also do not show a consistent pattern of internal morphological differentiation, probably due to the lack of lotic-lentic differences in prey type. Overall, the univariate and multivariate analyses show that most of the external and internal morphological characters that vary among populations do not differentiate lotic from lentic Iberian populations. The lack of expected differences may be a consequence of the high seasonal flow variation in Mediterranean streams, and the resultant low- or no-flow conditions during periods of summer drought.", "which Species name ?", "Lepomis gibbosus", 84.0, 100.0], ["1 When a plant species is introduced into a new range, it may differentiate genetically from the original populations in the home range. This genetic differentiation may influence the extent to which the invasion of the new range is successful. We tested this hypothesis by examining Senecio pterophorus, a South African shrub that was introduced into NE Spain about 40 years ago. We predicted that in the introduced range invasive populations would perform better and show greater plasticity than native populations. 2 Individuals of S. pterophorus from four Spanish (invasive) and four South African (native) populations were grown in Catalonia, Spain, in a common garden in which disturbance and water availability were manipulated. Fitness traits and several ecophysiological parameters were measured. 3 The invasive populations of S. pterophorus survived better throughout the summer drought in a disturbed (unvegetated) environment than native South African populations. This success may be attributable to the lower specific leaf area (SLA) and better water content regulation of the invasive populations in this treatment. 4 Invasive populations displayed up to three times higher relative growth rate than native populations under conditions of disturbance and non\u2010limiting water availability. 
5 The reproductive performance of the invasive populations was higher in all treatments except under the most stressful conditions (i.e. in non\u2010watered undisturbed plots), where no plant from either population flowered. 6 The results for leaf parameters and chlorophyll fluorescence measurements suggested that the greater fitness of the invasive populations could be attributed to more favourable ecophysiological responses. 7 Synthesis. Spanish invasive populations of S. pterophorus performed better in the presence of high levels of disturbance, and displayed higher plasticity of fitness traits in response to resource availability than native South African populations. Our results suggest that genetic differentiation from source populations associated with founding may play a role in invasion success.", "which Species name ?", "Senecio pterophorus", 284.0, 303.0], ["\n\nTwo geographically distinct populations of the submerged aquatic macrophyte Ceratophyllum demersum L. were compared after acclimation to five different nitrogen concentrations (0.005, 0.02, 0.05, 0.1 and 0.2\u2009mM N) in a common garden setup. The two populations were an apparent invasive population from New Zealand (NZ) and a noninvasive population from Denmark (DK). The populations were compared with a focus on both morphological and physiological traits. The NZ population had higher relative growth rates (RGRs) and photosynthesis rates (Pmax) (range: RGR, 0.06\u20130.08 per day; Pmax, 200\u2013395\u2009\u00b5mol\u2009O2\u2009g\u20131 dry mass (DM) h\u20131) compared with the Danish population (range: RGR, 0.02\u20130.05 per day; Pmax, 88\u2013169\u2009\u00b5mol O2 g\u20131 DM h\u20131). The larger, faster-growing NZ population also showed higher plasticity than the DK population in response to nitrogen in traits important for growth. Hence, the observed differences in growth behaviour between the two populations are a result of genetic differences and differences in their level of plasticity. Here, we show that two populations of the same species from similar climates but different geographical areas can differ in several ecophysiological traits after growth in a common garden setup.\n", "which Species name ?", "Ceratophyllum demersum L.", null, null], ["To understand the role of leaf-level plasticity and variability in species invasiveness, foliar characteristics were studied in relation to seasonal average integrated quantum flux density (Qint) in the understorey evergreen species Rhododendron ponticum and Ilex aquifolium at two sites. A native relict population of R. ponticum was sampled in southern Spain (Mediterranean climate), while an invasive alien population was investigated in Belgium (temperate maritime climate). Ilex aquifolium was native at both sites. Both species exhibited a significant plastic response to Qint in leaf dry mass per unit area, thickness, photosynthetic potentials, and chlorophyll contents at the two sites. However, R. ponticum exhibited a higher photosynthetic nitrogen use efficiency and larger investment of nitrogen in chlorophyll than I. aquifolium. Since leaf nitrogen (N) contents per unit dry mass were lower in R. ponticum, this species formed a larger foliar area with equal photosynthetic potential and light-harvesting efficiency compared with I. aquifolium. The foliage of R. ponticum was mechanically more resistant with larger density in the Belgian site than in the Spanish site. Mean leaf-level phenotypic plasticity was larger in the Belgian population of R. 
ponticum than in the Spanish population of this species and the two populations of I. aquifolium. We suggest that large fractional investments of foliar N in photosynthetic function coupled with a relatively large mean, leaf-level phenotypic plasticity may provide the primary explanation for the invasive nature and superior performance of R. ponticum at the Belgian site. With alleviation of water limitations from Mediterranean to temperate maritime climates, the invasiveness of R. ponticum may also be enhanced by the increased foliage mechanical resistance observed in the alien populations.", "which Species name ?", "Rhododendron ponticum", 233.0, 254.0], ["Hanley ME (2012). Seedling defoliation, plant growth and flowering potential in native- and invasive-range Plantago lanceolata populations. Weed Research52, 252\u2013259. Summary The plastic response of weeds to new environmental conditions, in particular the likely relaxation of herbivore pressure, is considered vital for successful colonisation and spread. However, while variation in plant anti-herbivore resistance between native- and introduced-range populations is well studied, few authors have considered herbivore tolerance, especially at the seedling stage. This study examines variation in seedling tolerance in native (European) and introduced (North American) Plantago lanceolata populations following cotyledon removal at 14 days old. Subsequent effects on plant growth were quantified at 35 days, along with effects on flowering potential at maturity. Cotyledon removal reduced early growth for all populations, with no variation between introduced- or native-range plants. Although more variable, the effects of cotyledon loss on flowering potential were also unrelated to range. The likelihood that generalist seedling herbivores are common throughout North America may explain why no difference in seedling tolerance was apparent. However, increased flowering potential in plants from North American P. lanceolata populations was observed. As increased flowering potential was not lost, even after severe cotyledon damage, the manifestation of phenotypic plasticity in weeds at maturity may nonetheless still be shaped by plasticity in the ability to tolerate herbivory during seedling establishment.", "which Species name ?", "Plantago lanceolata", 107.0, 126.0], ["SUMMARY The invasive zebra mussel (Dreissena polymorpha) has quickly colonized shallow-water habitats in the North American Great Lakes since the 1980s but the quagga mussel (Dreissena bugensis) is becoming dominant in both shallow and deep-water habitats. While quagga mussel shell morphology differs between shallow and deep habitats, functional causes and consequences of such difference are unknown. We examined whether quagga mussel shell morphology could be induced by three environmental variables through developmental plasticity. We predicted that shallow-water conditions (high temperature, food quantity, water motion) would yield a morphotype typical of wild quagga mussels from shallow habitats, while deep-water conditions (low temperature, food quantity, water motion) would yield a morphotype present in deep habitats. We tested this prediction by examining shell morphology and growth rate of quagga mussels collected from shallow and deep habitats and reared under common-garden treatments that manipulated the three variables. Shell morphology was quantified using the polar moment of inertia. Of the variables tested, temperature had the greatest effect on shell morphology. 
Higher temperature (\u223c18\u201320\u00b0C) yielded a morphotype typical of wild shallow mussels regardless of the levels of food quantity or water motion. In contrast, lower temperature (\u223c6\u20138\u00b0C) yielded a morphotype approaching that of wild deep mussels. If shell morphology has functional consequences in particular habitats, a plastic response might confer quagga mussels with a greater ability than zebra mussels to colonize a wider range of habitats within the Great Lakes.", "which Species name ?", "Dreissena polymorpha", 35.0, 55.0], ["Abstract Background The invasive Chondrostoma nasus nasus has colonized part of the distribution area of the protected endemic species Chondrostoma toxostoma toxostoma . This hybrid zone is a complex system where multiple effects such as inter-species competition, bi-directional introgression, strong environmental pressure and so on are combined. Why do sympatric Chondrostoma fish present a unidirectional change in body shape? Is this the result of inter-species interactions and/or a response to environmental effects or the result of trade-offs? Studies focusing on the understanding of a trade-off between multiple parameters are still rare. Although this has previously been done for Cichlid species flock and for Darwin finches, where mouth or beak morphology were coupled to diet and genetic identification, no similar studies have been done for a fish hybrid zone in a river. We tested the correlation between morphology (body and mouth morphology), diet (stable carbon and nitrogen isotopes) and genomic combinations in different allopatric and sympatric populations for a global data set of 1330 specimens. To separate the species interaction effect from the environmental effect in sympatry, we distinguished two data sets: the first one was obtained from a highly regulated part of the river and the second was obtained from specimens coming from the less regulated part. Results The distribution of the hybrid combinations was different in the two part of the sympatric zone, whereas all the specimens presented similar overall changes in body shape and in mouth morphology. Sympatric specimens were also characterized by a larger diet behavior variance than reference populations, characteristic of an opportunistic diet. No correlation was established between the body shape (or mouth deformation) and the stable isotope signature. Conclusion The Durance River is an untamed Mediterranean river despite the presence of numerous dams that split the river from upstream to downstream. The sympatric effect on morphology and the large diet behavior range can be explained by a tendency toward an opportunistic behavior of the sympatric specimens. Indeed, the similar response of the two species and their hybrids implied an adaptation that could be defined as an alternative trade-off that underline the importance of epigenetics mechanisms for potential success in a novel environment.", "which Species name ?", "Chondrostoma nasus nasus", 33.0, 57.0], ["Climatic means with different degrees of variability (\u03b4) may change in the future and could significantly impact ectotherm species fitness. Thus, there is an increased interest in understanding the effects of changes in means and variances of temperature on traits of climatic stress resistance. 
Here, we examined short\u2010term (within\u2010generation) variation in mean temperature (23, 25, and 27 \u00b0C) at three levels of diel thermal fluctuations (\u03b4 = 1, 3, or 5 \u00b0C) on an invasive pest insect, the Mediterranean fruit fly, Ceratitis capitata (Wiedemann) (Diptera: Tephritidae). Using the adult flies, we address the hypothesis that temperature variability may affect the climatic stress resistance over and above changes in mean temperature at constant variability levels. We scored the traits of high\u2010 and low\u2010thermal tolerance, high\u2010 and low\u2010temperature acute hardening ability, water balance, and egg production under benign conditions after exposure to each of the nine experimental scenarios. Most importantly, results showed that temperature variance may have significant effects in addition to the changes in mean temperature for most traits scored. Although typical acclimation responses were detected for most of the traits under low variance conditions, high variance scenarios dramatically altered the outcomes, with poorer climatic stress resistance detected in some, but not all, traits. These results suggest that large temperature fluctuations might limit plastic responses which in turn could reduce the insect fitness. Increased mean temperatures in conjunction with increased temperature variability may therefore have stronger negative effects on this agricultural pest than elevated temperatures alone. The results of this study therefore have significant implications for understanding insect responses to climate change and suggest that analyses or simulations of only mean temperature variation may be inappropriate for predicting population\u2010level responses under future climate change scenarios despite their widespread use.", "which Species name ?", "Ceratitis capitata", 517.0, 535.0], ["Background: Phenotypic plasticity and ecotypic differentiation have been suggested as the main mechanisms by which widely distributed species can colonise broad geographic areas with variable and stressful conditions. Some invasive plant species are among the most widely distributed plants worldwide. Plasticity and local adaptation could be the mechanisms for colonising new areas. Aims: We addressed if Taraxacum officinale from native (Alps) and introduced (Andes) stock responded similarly to drought treatment, in terms of photosynthesis, foliar angle, and flowering time. We also evaluated if ontogeny affected fitness and physiological responses to drought. Methods: We carried out two common garden experiments with both seedlings and adults (F2) of T. officinale from its native and introduced ranges in order to evaluate their plasticity and ecotypic differentiation under a drought treatment. Results: Our data suggest that the functional response of T. officinale individuals from the introduced range to drought is the result of local adaptation rather than plasticity. In addition, the individuals from the native distribution range were more sensitive to drought than those from the introduced distribution ranges at both seedling and adult stages. Conclusions: These results suggest that local adaptation may be a possible mechanism underlying the successful invasion of T. 
officinale in high mountain environments of the Andes.", "which Species name ?", "Taraxacum officinale", 406.0, 426.0], ["In the Bonin Islands of the western Pacific where the light environment is characterized by high fluctuations due to frequent typhoon disturbance, we hypothesized that the invasive success of Bischofia javanica Blume (invasive tree, mid-successional) may be attributable to a high acclimation capacity under fluctuating light availability. The physiological and morphological responses of B. javanica to both simulated canopy opening and closure were compared against three native species of different successional status: Trema orientalis Blume (pioneer), Schima mertensiana (Sieb. et Zucc.) Koidz (mid-successional) and Elaeocarpus photiniaefolius Hook.et Arn (late-successional). The results revealed significant species-specific differences in the timing of physiological maturity and phenotypic plasticity in leaves developed under constant high and low light levels. For example, the photosynthetic capacity of T. orientalis reached a maximum in leaves that had just fully expanded when grown under constant high light (50% of full sun) whereas that of E. photiniaefolius leaves continued to increase until 50 d after full expansion. For leaves that had just reached full expansion, T. orientalis, having high photosynthetic plasticity between high and low light, exhibited low acclimation capacity under the changing light (from high to low or low to high light). In comparison with native species, B. javanica showed a higher degree of physiological and morphological acclimation following transfer to a new light condition in leaves of all age classes (i.e. before and after reaching full expansion). The high acclimation ability of B. javanica in response to changes in light availability may be a part of its pre-adaptations for invasiveness in the fluctuating environment of the Bonin Islands.", "which Species name ?", "Bischofia javanica", 192.0, 210.0], ["Understanding the factors that drive commonness and rarity of plant species and whether these factors differ for alien and native species are key questions in ecology. If a species is to become common in a community, incoming propagules must first be able to establish. The latter could be determined by competition with resident plants, the impacts of herbivores and soil biota, or a combination of these factors. We aimed to tease apart the roles that these factors play in determining establishment success in grassland communities of 10 alien and 10 native plant species that are either common or rare in Germany, and from four families. In a two\u2010year multisite field experiment, we assessed the establishment success of seeds and seedlings separately, under all factorial combinations of low vs. high disturbance (mowing vs mowing and tilling of the upper soil layer), suppression or not of pathogens (biocide application) and, for seedlings only, reduction or not of herbivores (net\u2010cages). Native species showed greater establishment success than alien species across all treatments, regardless of their commonness. Moreover, establishment success of all species was positively affected by disturbance. Aliens showed lower establishment success in undisturbed sites with biocide application. Release of the undisturbed resident community from pathogens by biocide application might explain this lower establishment success of aliens. 
These findings were consistent for establishment from either seeds or seedlings, although less significantly so for seedlings, suggesting a more important role of pathogens in very early stages of establishment after germination. Herbivore exclusion did play a limited role in seedling establishment success. Synthesis: In conclusion, we found that less disturbed grassland communities exhibited strong biotic resistance to establishment success of species, whether alien or native. However, we also found evidence that alien species may benefit weakly from soilborne enemy release, but that this advantage over native species is lost when the latter are also released by biocide application. Thus, disturbance was the major driver for plant species establishment success and effects of pathogens on alien plant establishment may only play a minor role.", "which Research Method ?", "Experiment", 672.0, 682.0], ["The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents\u2019 circumstances, such as a spouse\u2019s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.", "which gender ?", "male", null, null], ["The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents\u2019 circumstances, such as a spouse\u2019s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.", "which gender ?", "female", 662.0, 668.0], ["The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. 
We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents\u2019 circumstances, such as a spouse\u2019s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.", "which target population ?", "academics", 475.0, 484.0], ["Academia serves as a valuable case for studying the effects of social forces on workplace productivity, using a concrete measure of output: scholarly papers. Many academics, especially women, have ...", "which target population ?", "academics", 163.0, 172.0], ["The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents\u2019 circumstances, such as a spouse\u2019s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.", "which influencing factor ?", "child", null, null], ["The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents\u2019 circumstances, such as a spouse\u2019s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.", "which influencing factor ?", "COVID-19 pandemic", 24.0, 41.0], ["Hybrid energy systems (HESs) generate electricity from multiple energy sources that complement each other. Recently, due to the reduction in costs of photovoltaic (PV) modules and wind turbines, these types of systems have become economically competitive. 
In this study, a mathematical programming model is applied to evaluate the techno-economic feasibility of autonomous units located in two isolated areas of Ecuador: first, the province of Galapagos (subtropical island) and second, the province of Morona Santiago (Amazonian tropical forest). The two case studies suggest that HESs are potential solutions to reduce the dependence of rural villages on fossil fuels and viable mechanisms to bring electrical power to isolated communities in Ecuador. Our results reveal that not only from the economic but also from the environmental point of view, for the case of the Galapagos province, a hybrid energy system with a PV\u2013wind\u2013battery configuration and a levelized cost of energy (LCOE) equal to 0.36 $/kWh is the optimal energy supply system. For the case of Morona Santiago, a hybrid energy system with a PV\u2013diesel\u2013battery configuration and an LCOE equal to 0.37 $/kWh is the most suitable configuration to meet the load of a typical isolated community in Ecuador. The proposed optimization model can be used as a decision-support tool for evaluating the viability of autonomous HES projects at any other location.", "which System location ?", "Galapagos", 444.0, 453.0], ["Hybrid energy systems (HESs) generate electricity from multiple energy sources that complement each other. Recently, due to the reduction in costs of photovoltaic (PV) modules and wind turbines, these types of systems have become economically competitive. In this study, a mathematical programming model is applied to evaluate the techno-economic feasibility of autonomous units located in two isolated areas of Ecuador: first, the province of Galapagos (subtropical island) and second, the province of Morona Santiago (Amazonian tropical forest). The two case studies suggest that HESs are potential solutions to reduce the dependence of rural villages on fossil fuels and viable mechanisms to bring electrical power to isolated communities in Ecuador. Our results reveal that not only from the economic but also from the environmental point of view, for the case of the Galapagos province, a hybrid energy system with a PV\u2013wind\u2013battery configuration and a levelized cost of energy (LCOE) equal to 0.36 $/kWh is the optimal energy supply system. For the case of Morona Santiago, a hybrid energy system with a PV\u2013diesel\u2013battery configuration and an LCOE equal to 0.37 $/kWh is the most suitable configuration to meet the load of a typical isolated community in Ecuador. The proposed optimization model can be used as a decision-support tool for evaluating the viability of autonomous HES projects at any other location.", "which System location ?", "Morona Santiago", 503.0, 518.0], ["This paper describes a revolutionary micromachined accelerometer which is simple, reliable, and inexpensive to make. The operating principle of this accelerometer is based on free-convection heat transfer of a tiny hot air bubble in an enclosed chamber. An experimental device has demonstrated a 0.6 milli-g sensitivity which can theoretically be extended to sub-micro-g level.", "which Working fluid ?", "Air", 219.0, 222.0], ["In this paper, a liquid-based micro thermal convective accelerometer (MTCA) is optimized by the Rayleigh number (Ra) based compact model and fabricated using the $0.35\\mu $ m CMOS MEMS technology. 
To achieve water-proof performance, the conformal Parylene C coating was adopted as the isolation layer with the accelerated life-testing results of a 9-year-lifetime for liquid-based MTCA. Then, the device performance was characterized considering sensitivity, response time, and noise. Both the theoretical and experimental results demonstrated that fluid with a larger Ra number can provide better performance for the MTCA. More significantly, Ra based model showed its advantage to make a more accurate prediction than the simple linear model to select suitable fluid to enhance the sensitivity and balance the linear range of the device. Accordingly, an alcohol-based MTCA was achieved with a two-order-of magnitude increase in sensitivity (43.8 mV/g) and one-order-of-magnitude decrease in the limit of detection (LOD) ( $61.9~\\mu \\text{g}$ ) compared with the air-based MTCA. [2021-0092]", "which Working fluid ?", "Water", 298.0, 303.0], ["In this study, we synthesized hierarchical CuO nanoleaves in large-quantity via the hydrothermal method. We employed different techniques to characterize the morphological, structural, optical properties of the as-prepared hierarchical CuO nanoleaves sample. An electrochemical based nonenzymatic glucose biosensor was fabricated using engineered hierarchical CuO nanoleaves. The electrochemical behavior of fabricated biosensor towards glucose was analyzed with cyclic voltammetry (CV) and amperometry (i\u2013t) techniques. Owing to the high electroactive surface area, hierarchical CuO nanoleaves based nonenzymatic biosensor electrode shows enhanced electrochemical catalytic behavior for glucose electro-oxidation in 100 mM sodium hydroxide (NaOH) electrolyte. The nonenzymatic biosensor displays a high sensitivity (1467.32 \u03bc A/(mM cm 2 )), linear range (0.005\u20135.89 mM), and detection limit of 12 nM (S/N = 3). Moreover, biosensor displayed good selectivity, reproducibility, repeatability, and stability at room temperature over three-week storage period. Further, as-fabricated nonenzymatic glucose biosensors were employed for practical applications in human serum sample measurements. The obtained data were compared to the commercial biosensor, which demonstrates the practical usability of nonenzymatic glucose biosensors in real sample analysis.", "which Has study area ?", "Biosensors", 1102.0, 1112.0], ["This paper studies the effect of surface roughness on up-state and down-state capacitances of microelectromechanical systems (MEMS) capacitive switches. When the root-mean-square (RMS) roughness is 10 nm, the up-state capacitance is approximately 9% higher than the theoretical value. When the metal bridge is driven down, the normalized contact area between the metal bridge and the surface of the dielectric layer is less than 1% if the RMS roughness is larger than 2 nm. Therefore, the down-state capacitance is actually determined by the non-contact part of the metal bridge. The normalized isolation is only 62% for RMS roughness of 10 nm when the hold-down voltage is 30 V. The analysis also shows that the down-state capacitance and the isolation increase with the hold-down voltage. 
The normalized isolation increases from 58% to 65% when the hold-down voltage increases from 10 V to 60 V for RMS roughness of 10 nm.", "which MEMS switch type ?", "Capacitive", 132.0, 142.0], ["In the industrial sector there are many processes where the visual inspection is essential, the automation of that processes becomes a necessity to guarantee the quality of several objects. In this paper we propose a methodology for textile quality inspection based on the texture cue of an image. To solve this, we use a Neuro-Symbolic Hybrid System (NSHS) that allow us to combine an artificial neural network and the symbolic representation of the expert knowledge. The artificial neural network uses the CasCor learning algorithm and we use production rules to represent the symbolic knowledge. The features used for inspection has the advantage of being tolerant to rotation and scale changes. We compare the results with those obtained from an automatic computer vision task, and we conclude that results obtained using the proposed methodology are better.", "which Has participants ?", "expert", 451.0, 457.0], ["Hackathons have become an increasingly popular approach for organizations to both test their new products and services as well as to generate new ideas. Most events either focus on attracting external developers or requesting employees of the organization to focus on a specific problem. In this paper we describe extensions to this paradigm that open up the event to internal employees and preserve the open-ended nature of the hackathon itself. In this paper we describe our initial motivation and objectives for conducting an internal hackathon, our experience in pioneering an internal hackathon at AT&T including specific things we did to make the internal hackathon successful. We conclude with the benefits (both expected and unexpected) we achieved from the internal hackathon approach, and recommendations for continuing the use of this valuable tool within AT&T.", "which Has participants ?", "employees of the organization", 226.0, 255.0], ["Hackathons have become an increasingly popular approach for organizations to both test their new products and services as well as to generate new ideas. Most events either focus on attracting external developers or requesting employees of the organization to focus on a specific problem. In this paper we describe extensions to this paradigm that open up the event to internal employees and preserve the open-ended nature of the hackathon itself. In this paper we describe our initial motivation and objectives for conducting an internal hackathon, our experience in pioneering an internal hackathon at AT&T including specific things we did to make the internal hackathon successful. We conclude with the benefits (both expected and unexpected) we achieved from the internal hackathon approach, and recommendations for continuing the use of this valuable tool within AT&T.", "which Has participants ?", "external developers", 192.0, 211.0], ["Granular activated carbon (GAC) materials were prepared via simple gas activation of silkworm cocoons and were coated on ZnO nanorods (ZNRs) by the facile hydrothermal method. The present combination of GAC and ZNRs shows a core-shell structure (where the GAC is coated on the surface of ZNRs) and is exposed by systematic material analysis. 
The as-prepared samples were then fabricated as dual-functional sensors and, most fascinatingly, the as-fabricated core-shell structure exhibits better UV and H2 sensing properties than those of as-fabricated ZNRs and GAC. Thus, the present core-shell structure-based H2 sensor exhibits fast responses of 11% (10 ppm) and 23.2% (200 ppm) with ultrafast response and recovery. However, the UV sensor offers an ultrahigh photoresponsivity of 57.9 A W-1, which is superior to that of as-grown ZNRs (0.6 A W-1). Besides this, switching photoresponse of GAC/ZNR core-shell structures exhibits a higher switching ratio (between dark and photocurrent) of 1585, with ultrafast response and recovery, than that of as-grown ZNRs (40). Because of the fast adsorption ability of GAC, it was observed that the finest distribution of GAC on ZNRs results in rapid electron transportation between the conduction bands of GAC and ZNRs while sensing H2 and UV. Furthermore, the present core-shell structure-based UV and H2 sensors also well-retained excellent sensitivity, repeatability, and long-term stability. Thus, the salient feature of this combination is that it provides a dual-functional sensor with biowaste cocoon and ZnO, which is ecological and inexpensive.", "which ZnO form ?", "Nanorods", 125.0, 133.0], ["Here, we report the facile synthesis of a highly ordered luminescent ZnO nanowire array using a low temperature anodic aluminium oxide (AAO) template route which can be economically produced in large scale quantity. The as-synthesized nanowires have diameters ranging from 60 to 70 nm and length \u223c11 \u03bcm. The photoluminescence spectrum reveals that the AAO/ZnO assembly has a strong green emission peak at 490 nm upon excitation at a wavelength of 406 nm. Furthermore, the ZnO nanowire array-based gas sensor has been fabricated by a simple micromechanical technique and its NH3 gas sensing properties have been explored thoroughly. The fabricated gas sensor exhibits excellent sensitivity and fast response to NH3 gas at room temperature. Moreover, for 50 ppm NH3 concentration, the observed value of sensitivity is around 68%, while the response and recovery times are 28 and 29 seconds, respectively. The present synthesis technique to produce a highly ordered ZnO nanowire array and a fabricated gas sensor has great potential to push the low cost gas sensing nanotechnology.", "which ZnO form ?", "Nanowires", 235.0, 244.0], ["Gas sensing properties of ZnO nanowires prepared via thermal chemical vapor deposition method were investigated by analyzing change in their photoluminescence (PL) spectra. The as-synthesized nanowires show two different PL peaks positioned at 380 nm and 520 nm. The 380 nm emission is ascribed to near band edge emission, and the green peak (520 nm) appears due to the oxygen vacancy defects. The intensity of the green PL signal enhances upon hydrogen gas exposure, whereas it gets quenched upon oxygen gas loading. The ZnO nanowires' sensing response values were observed as about 54% for H2 gas and 9% for O2 gas at room temperature for 50 sccm H2/O2 gas flow rate. The sensor response was also analyzed as a function of sample temperature ranging from 300 K to 400 K. A conclusion was derived from the observations that the H2/O2 gases affect the adsorbed oxygen species on the surface of ZnO nanowires. 
The adsorbed species result in the band bending and hence changes the depletion region which causes variation i...", "which ZnO form ?", "Nanowires", 30.0, 39.0], ["We report herein a glucose biosensor based on glucose oxidase (GOx) immobilized on ZnO nanorod array grown by hydrothermal decomposition. In a phosphate buffer solution with a pH value of 7.4, negatively charged GOx was immobilized on positively charged ZnO nanorods through electrostatic interaction. At an applied potential of +0.8V versus Ag\u2215AgCl reference electrode, ZnO nanorods based biosensor presented a high and reproducible sensitivity of 23.1\u03bcAcm\u22122mM\u22121 with a response time of less than 5s. The biosensor shows a linear range from 0.01to3.45mM and an experiment limit of detection of 0.01mM. An apparent Michaelis-Menten constant of 2.9mM shows a high affinity between glucose and GOx immobilized on ZnO nanorods.", "which ZnO form ?", "ZnO nanorod array", 83.0, 100.0], ["River-lake systems comprise chains of lakes connected by rivers and streams that flow into and out of them. The contact zone between a lake and a river can act as a barrier, where inflowing matter is accumulated and transformed. Magnesium and calcium are natural components of surface water, and their concentrations can be shaped by various factors, mostly the geological structure of a catchment area, soil class and type, plant cover, weather conditions (precipitation-evaporation, seasonal variations), land relief, type and intensity of water supply (surface runoffs and groundwater inflows), etc. The aim of this study was to analyze the influence of a river-lake system on magnesium and calcium concentrations in surface water (inflows, lake, outflow) and their accumulation in bottom deposits. The study was performed between March 2011 and May 2014 in a river-lake system comprising Lake Symsar with inflows, lying in the Olsztyn Lakeland region. The study revealed that calcium and magnesium were retained in the water column and the bottom deposits of the lake at 12.75 t Mg year-1 and 1.97 t Ca year-1. On average, 12.7\u00b11.2 g of calcium and 1.77\u00b10.9 g of magnesium accumulated in 1 kg of bottom deposits in Lake Symsar. The river-lake system, which received pollutants from an agricultural catchment, influenced the Ca2+ and Mg2+ concentrations in the water and the bottom deposits of Lake Symsar. The Tolknicka Struga drainage canal, to which incompletely treated municipal wastewater was discharged, also affected Ca2+ and Mg2+ levels, thus indicating the significant influence of anthropogenic factors.", "which Major cations ?", "Calcium", 243.0, 250.0], ["Focusing on the archival records of the production and performance of Dance in Trees and Church by the Swedish independent dance group Rubicon, this article conceptualizes a records-oriented costume ethics. Theorizations of costume as a co-creative agent of performance are brought into the dance archive to highlight the productivity of paying attention to costume in the making of performance history. Addressing recent developments within archival studies, a feminist ethics of care and radical empathy is employed, which is the capability to empathically engage with others, even if it can be difficult, as a means of exploring how a records-centred costume ethics can be conceptualized for the dance archive. 
The exploration resulted in two ethical stances useful for better attending to costume-bodies in the dance archive: (1) caring for costume-body relations in the dance archive means that a conventional, so-called static understanding of records as neutral carriers of facts is replaced by a more inclusive, expanding and infinite process. By moving across time and space, and with a caring attitude finding and exploring fragments from various, sometimes contradictory production processes, one can help scattered and poorly represented dance and costume histories to emerge and contribute to the formation of identity and memory. (2) The use of bodily empathy with records can respectfully bring together the understanding of costume in performance as inseparable from the performer\u2019s body with dance as an art form that explicitly uses the dancing costume-body as an expressive tool. It is argued that bodily empathy with records in the dance archive helps one access bodily holisms that create possibilities for exploring the potential of art to critically expose and render strange ideological systems and normativities.", "which dance group ?", "Rubicon", 170.0, 177.0], ["Beetroot is a root vegetable rich in different bioactive components, such as vitamins, minerals, phenolics, carotenoids, nitrate, ascorbic acids, and betalains, that can have a positive effect on human health. The aim of this work was to study the influence of the pulsed electric field (PEF) at different electric field strengths (4.38 and 6.25 kV/cm), pulse number 10\u201330, and energy input 0\u201312.5 kJ/kg as a pretreatment method on the extraction of betalains from beetroot. The obtained results showed that the application of PEF pre-treatment significantly (p < 0.05) influenced the efficiency of extraction of bioactive compounds from beetroot. The highest increase in the content of betalain compounds in the red beet\u2019s extract (betanin by 329%, vulgaxanthin by 244%, compared to the control sample), was noted for 20 pulses of electric field at 4.38 kV/cm of strength. Treatment of the plant material with a PEF also resulted in an increase in the electrical conductivity compared to the non-treated sample due to the increase in cell membrane permeability, which was associated with leakage of substances able to conduct electricity, including mineral salts, into the intercellular space.", "which Vegetable source ?", "Beetroot", 8.0, 16.0], ["Beetroot is a root vegetable rich in different bioactive components, such as vitamins, minerals, phenolics, carotenoids, nitrate, ascorbic acids, and betalains, that can have a positive effect on human health. The aim of this work was to study the influence of the pulsed electric field (PEF) at different electric field strengths (4.38 and 6.25 kV/cm), pulse number 10\u201330, and energy input 0\u201312.5 kJ/kg as a pretreatment method on the extraction of betalains from beetroot. The obtained results showed that the application of PEF pre-treatment significantly (p < 0.05) influenced the efficiency of extraction of bioactive compounds from beetroot. The highest increase in the content of betalain compounds in the red beet\u2019s extract (betanin by 329%, vulgaxanthin by 244%, compared to the control sample), was noted for 20 pulses of electric field at 4.38 kV/cm of strength. 
Treatment of the plant material with a PEF also resulted in an increase in the electrical conductivity compared to the non-treated sample due to the increase in cell membrane permeability, which was associated with leakage of substances able to conduct electricity, including mineral salts, into the intercellular space.", "which Compound of interest ?", "Betanin", 744.0, 751.0], ["Abstract For decades, uncertainty visualisation has attracted attention in disciplines such as cartography and geographic visualisation, scientific visualisation and information visualisation. Most of this research deals with the development of new approaches to depict uncertainty visually; only a small part is concerned with empirical evaluation of such techniques. This systematic review aims to summarize past user studies and describe their characteristics and findings, focusing on the field of geographic visualisation and cartography and thus on displays containing geospatial uncertainty. From a discussion of the main findings, we derive lessons learned and recommendations for future evaluation in the field of uncertainty visualisation. We highlight the importance of user tasks for successful solutions and recommend moving towards task-centered typologies to support systematic evaluation in the field of uncertainty visualisation.", "which Reviews ?", "uncertainty", 22.0, 33.0], ["ABSTRACT For many years, uncertainty visualization has been a topic of research in several disparate fields, particularly in geographical visualization (geovisualization), information visualization, and scientific visualization. Multiple techniques have been proposed and implemented to visually depict uncertainty, but their evaluation has received less attention by the research community. In order to understand how uncertainty visualization influences reasoning and decision-making using spatial information in visual displays, this paper presents a comprehensive review of uncertainty visualization assessments from geovisualization and related fields. We systematically analyze characteristics of the studies under review, i.e., number of participants, tasks, evaluation metrics, etc. An extensive summary of findings with respect to the effects measured or the impact of different visualization techniques helps to identify commonalities and differences in the outcome. Based on this summary, we derive \u201clessons learned\u201d and provide recommendations for carrying out evaluation of uncertainty visualizations. As a basis for systematic evaluation, we present a categorization of research foci related to evaluating the effects of uncertainty visualization on decision-making. By assigning the studies to categories, we identify gaps in the literature and suggest key research questions for the future. This paper is the second of two reviews on uncertainty visualization. It follows the first that covers the communication of uncertainty, to investigate the effects of uncertainty visualization on reasoning and decision-making.", "which Reviews ?", "uncertainty", 25.0, 36.0], ["ABSTRACT For many years, uncertainty visualization has been a topic of research in several disparate fields, particularly in geographical visualization (geovisualization), information visualization, and scientific visualization. Multiple techniques have been proposed and implemented to visually depict uncertainty, but their evaluation has received less attention by the research community. 
In order to understand how uncertainty visualization influences reasoning and decision-making using spatial information in visual displays, this paper presents a comprehensive review of uncertainty visualization assessments from geovisualization and related fields. We systematically analyze characteristics of the studies under review, i.e., number of participants, tasks, evaluation metrics, etc. An extensive summary of findings with respect to the effects measured or the impact of different visualization techniques helps to identify commonalities and differences in the outcome. Based on this summary, we derive \u201clessons learned\u201d and provide recommendations for carrying out evaluation of uncertainty visualizations. As a basis for systematic evaluation, we present a categorization of research foci related to evaluating the effects of uncertainty visualization on decision-making. By assigning the studies to categories, we identify gaps in the literature and suggest key research questions for the future. This paper is the second of two reviews on uncertainty visualization. It follows the first that covers the communication of uncertainty, to investigate the effects of uncertainty visualization on reasoning and decision-making.", "which Reviews ?", "Visualization Techniques", 888.0, 912.0], ["Abstract\u2014 This study serves as a proof\u2010of\u2010concept for the technique of using visible\u2010near infrared (VNIR), short\u2010wavelength infrared (SWIR), and thermal infrared (TIR) spectroscopic observations to map impact\u2010exposed subsurface lithologies and stratigraphy on Earth or Mars. The topmost layer, three subsurface layers and undisturbed outcrops of the target sequence exposed just 10 km to the northeast of the 23 km diameter Haughton impact structure (Devon Island, Nunavut, Canada) were mapped as distinct spectral units using Landsat 7 ETM+ (VNIR/SWIR) and ASTER (VNIR/SWIR/TIR) multispectral images. Spectral mapping was accomplished by using standard image contrast\u2010stretching algorithms. Both spectral matching and deconvolution algorithms were applied to image\u2010derived ASTER TIR emissivity spectra using spectra from a library of laboratory\u2010measured spectra of minerals (Arizona State University) and whole\u2010rocks (Ward's). These identifications were made without the use of a priori knowledge from the field (i.e., a \u201cblind\u201d analysis). The results from this analysis suggest a sequence of dolomitic rock (in the crater rim), limestone (wall), gypsum\u2010rich carbonate (floor), and limestone again (central uplift). These matched compositions agree with the lithologic units and the pre\u2010impact stratigraphic sequence as mapped during recent field studies of the Haughton impact structure by Osinski et al. (2005a). Further conformation of the identity of image\u2010derived spectra was confirmed by matching these spectra with laboratory\u2010measured spectra of samples collected from Haughton. The results from the \u201cblind\u201d remote sensing methods used here suggest that these techniques can also be used to understand subsurface lithologies on Mars, where ground truth knowledge may not be generally available.", "which Minerals Identified (Terrestrial samples) ?", "gypsum", 1148.0, 1154.0], ["Here, we present the detailed chemical and spectral characteristics of gypsum\u2010phyllosilicate association of Karai Shale Formation in Tiruchirapalli region of the Cauvery Basin in South India. 
The Karai Shale Formation comprises Odiyam sandy clay and gypsiferous clay, well exposed in Karai village of Tiruchirapalli area, Tamil Nadu in South India. Gypsum is fibrous to crystalline and translucent/transparent type with fluid inclusions preserved in it. Along some cleavage planes, alteration features have been observed. Visible and near infrared (VNIR), Raman, and Fourier transform infrared techniques were used to obtain the excitation/vibration bands of mineral phases. VNIR spectroscopic analysis of the gypsum samples has shown absorption features at 560, 650, 900, 1,000, 1,200, 1,445, 1,750, 1,900, 2,200, and 2,280 nm in the electrical and vibrational range of electromagnetic radiation. VNIR results of phyllosilicate samples have shown absorption features at 1,400, 1,900, and 2,200 nm. Further, we have identified the prominent Raman bands at 417.11, 496.06, 619.85, 673.46, 1,006.75, 1,009.75, \u223c1,137.44, \u223c3,403, and 3,494.38 cm\u22121 for gypsum due to sulphate and hydroxyl ion vibrations. We propose that gypsum veins in Karai may have precipitated in the fractures formed due to pressure/forces generated by crystal growth. The combined results of chemical and spectral studies have shown that these techniques have significant potential to identify the pure/mineral associates/similar chemical compositions elsewhere. Our results definitely provide the database from a range of spectroscopic techniques to better identify similar minerals and/or mineral\u2010associations in an extraterrestrial scenario. This study has significant implications in understanding various geological processes such as fluid\u2010rock interactions and alteration processes involving water on the planets such as Mars.", "which Minerals Identified (Terrestrial samples) ?", "gypsum", 71.0, 77.0], ["Abstract. Spectroscopy plays a vital role in the identification and characterization of minerals on terrestrial and planetary surfaces. We review the three different spectroscopic techniques for characterizing minerals on the Earth and lunar surfaces separately. Seven sedimentary and metamorphic terrestrial rock samples were analyzed with three field-based spectrometers, i.e., Raman, Fourier transform infrared (FTIR), and visible to near infrared and shortwave infrared (Vis\u2013NIR\u2013SWIR) spectrometers. Similarly, a review of work done by previous researchers on lunar rock samples was also carried out for their Raman, Vis\u2013NIR\u2013SWIR, and thermal (mid-infrared) spectral responses. It has been found in both the cases that the spectral information such as Si-O-Si stretching (polymorphs) in Raman spectra, identification of impurities, Christiansen and Restrahlen band center variation in mid-infrared spectra, location of elemental substitution, the content of iron, and shifting of the band center of diagnostic absorption features at 1 and 2 \u03bcm in reflectance spectra are contributing to the characterization and identification of terrestrial and lunar minerals. We show that quartz can be better characterized by considering silica polymorphs from Raman spectra, emission features in the range of 8 to 14 \u03bcm in FTIR spectra, and reflectance absorption features from Vis\u2013NIR\u2013SWIR spectra. KREEP materials from Apollo 12 and 14 samples are also better characterized using integrated spectroscopic studies. Integrated spectral responses facilitate comprehensive characterization and better identification of minerals. 
We suggest that Raman spectroscopy and visible and NIR-thermal spectroscopy are the best techniques to explore the Earth\u2019s and lunar mineralogy.", "which Minerals Identified (Terrestrial samples) ?", "Iron", 962.0, 966.0], ["Merging hyperspectral data from optical and thermal ranges allows a wider variety of minerals to be mapped and thus allows lithology to be mapped in a more complex way. In contrast, in most of the studies that have taken advantage of the data from the visible (VIS), near-infrared (NIR), shortwave infrared (SWIR) and longwave infrared (LWIR) spectral ranges, these different spectral ranges were analysed and interpreted separately. This limits the complexity of the final interpretation. In this study a presentation is made of how multiple absorption features, which are directly linked to the mineral composition and are present throughout the VIS, NIR, SWIR and LWIR ranges, can be automatically derived and, moreover, how these new datasets can be successfully used for mineral/lithology mapping. The biggest advantage of this approach is that it overcomes the issue of prior definition of endmembers, which is a requested routine employed in all widely used spectral mapping techniques. In this study, two different airborne image datasets were analysed, HyMap (VIS/NIR/SWIR image data) and Airborne Hyperspectral Scanner (AHS, LWIR image data). Both datasets were acquired over the Sokolov lignite open-cast mines in the Czech Republic. It is further demonstrated that even in this case, when the absorption feature information derived from multispectral LWIR data is integrated with the absorption feature information derived from hyperspectral VIS/NIR/SWIR data, an important improvement in terms of more complex mineral mapping is achieved.", "which Minerals Identified (Terrestrial samples) ?", "lignite", 1198.0, 1205.0], ["Abstract\u2014 This study serves as a proof\u2010of\u2010concept for the technique of using visible\u2010near infrared (VNIR), short\u2010wavelength infrared (SWIR), and thermal infrared (TIR) spectroscopic observations to map impact\u2010exposed subsurface lithologies and stratigraphy on Earth or Mars. The topmost layer, three subsurface layers and undisturbed outcrops of the target sequence exposed just 10 km to the northeast of the 23 km diameter Haughton impact structure (Devon Island, Nunavut, Canada) were mapped as distinct spectral units using Landsat 7 ETM+ (VNIR/SWIR) and ASTER (VNIR/SWIR/TIR) multispectral images. Spectral mapping was accomplished by using standard image contrast\u2010stretching algorithms. Both spectral matching and deconvolution algorithms were applied to image\u2010derived ASTER TIR emissivity spectra using spectra from a library of laboratory\u2010measured spectra of minerals (Arizona State University) and whole\u2010rocks (Ward's). These identifications were made without the use of a priori knowledge from the field (i.e., a \u201cblind\u201d analysis). The results from this analysis suggest a sequence of dolomitic rock (in the crater rim), limestone (wall), gypsum\u2010rich carbonate (floor), and limestone again (central uplift). These matched compositions agree with the lithologic units and the pre\u2010impact stratigraphic sequence as mapped during recent field studies of the Haughton impact structure by Osinski et al. (2005a). Further confirmation of the identity of image\u2010derived spectra was confirmed by matching these spectra with laboratory\u2010measured spectra of samples collected from Haughton. 
The results from the \u201cblind\u201d remote sensing methods used here suggest that these techniques can also be used to understand subsurface lithologies on Mars, where ground truth knowledge may not be generally available.", "which Preprocessing required ?", "Deconvolution", 719.0, 732.0], ["Reducing the number of image bands input for principal component analysis (PCA) ensures that certain materials will not be mapped and increases the likelihood that others will be unequivocally mapped into only one of the principal component images. In arid terrain, PCA of four TM bands will avoid iron-oxide and thus more reliably detect hydroxyl-bearing minerals if only one input band is from the visible spectrum. PCA for iron-oxide mapping will avoid hydroxyls if only one of the SWIR bands is used. A simple principal component color composite image can then be created in which anomalous concentrations of hydroxyl, hydroxyl plus iron-oxide, and iron-oxide are displayed brightly in red-green-blue (RGB) color space. This composite allows qualitative inferences on alteration type and intensity to be made which can be widely applied.", "which Minerals/ Feature Mapped ?", "Hydroxyl", 339.0, 347.0], ["The success of Open Data initiatives has increased the amount of data available on the Web. Unfortunately, most of this data is only available in raw tabular form, what makes analysis and reuse quite difficult for non-experts. Linked Data principles allow for a more sophisticated approach by making explicit both the structure and semantics of the data. However, from the end-user viewpoint, they continue to be monolithic files completely opaque or difficult to explore by making tedious semantic queries. Our objective is to facilitate the user to grasp what kind of entities are in the dataset, how they are interrelated, which are their main properties and values, etc. Rhizomer is a tool for data publishing whose interface provides a set of components borrowed from Information Architecture (IA) that facilitate awareness of the dataset at hand. It automatically generates navigation menus and facets based on the kinds of things in the dataset and how they are described through metadata properties and values. Moreover, motivated by recent tests with end-users, it also provides the possibility to pivot among the faceted views created for each class of resources in the dataset.", "which App. Type ?", "Web", 87.0, 90.0], ["Querying the Semantic Web and analyzing the query results are often complex tasks that can be greatly facilitated by visual interfaces. A major challenge in the design of these interfaces is to provide intuitive and efficient interaction support without limiting too much the analytical degrees of freedom. This paper introduces SemLens, a visual tool that combines scatter plots and semantic lenses to overcome this challenge and to allow for a simple yet powerful analysis of RDF data. The scatter plots provide a global overview on an object collection and support the visual discovery of correlations and patterns in the data. The semantic lenses add dimensions for local analysis of subsets of the objects. A demo accessing DBpedia data is used for illustration.", "which App. Type ?", "Web", 22.0, 25.0], ["Recently, the amount of semantic data available in the Web has increased dramatically. The potential of this vast amount of data is enormous but in most cases it is difficult for users to explore and use this data, especially for those without experience with Semantic Web technologies. 
Applying information visualization techniques to the Semantic Web helps users to easily explore large amounts of data and interact with them. In this article we devise a formal Linked Data Visualization Model (LDVM), which allows to dynamically connect data with visualizations. We report about our implementation of the LDVM comprising a library of generic visualizations that enable both users and data analysts to get an overview on, visualize and explore the Data Web and perform detailed analyzes on Linked Data.", "which App. Type ?", "Web", 55.0, 58.0], ["Providing easy to use methods for visual analysis of Linked Data is often hindered by the complexity of semantic technologies. On the other hand, semantic information inherent to Linked Data provides opportunities to support the user in interactively analysing the data. This paper provides a demonstration of an interactive, Web-based visualisation tool, the \"Vis Wizard\", which makes use of semantics to simplify the process of setting up visualisations, transforming the data and, most importantly, interactively analysing multiple datasets using brushing and linking methods.", "which App. Type ?", "Web", 326.0, 329.0], ["It is widely accepted that by controlling metadata, it is easier to publish high quality data on the web. Metadata, in the context of Linked Data, refers to vocabularies and ontologies used for describing data. With more and more data published on the web, the need for reusing controlled taxonomies and vocabularies is becoming more and more a necessity. Catalogues of vocabularies are generally a starting point to search for vocabularies based on search terms. Some recent studies recommend that it is better to reuse terms from \"popular\" vocabularies [4]. However, there is not yet an agreement on what makes a popular vocabulary since it depends on diverse criteria such as the number of properties, the number of datasets using part or the whole vocabulary, etc. In this paper, we propose a method for ranking vocabularies based on an information content metric which combines three features: (i) the datasets using the vocabulary, (ii) the outlinks from the vocabulary and (iii) the inlinks to the vocabulary. We applied this method to 366 vocabularies described in the LOV catalogue. The results are then compared with other catalogues which provide alternative rankings.", "which App. Type ?", "Web", 101.0, 104.0], ["The Visual Notation for OWL Ontologies (VOWL) is a well-specified visual language for the user-oriented representation of ontologies. It defines graphical depictions for most elements of the Web Ontology Language (OWL) that are combined to a force-directed graph layout visualizing the ontology. In contrast to related work, VOWL aims for an intuitive and comprehensive representation that is also understandable to users less familiar with ontologies. This article presents VOWL in detail and describes its implementation in two different tools: ProtegeVOWL and WebVOWL. The first is a plugin for the ontology editor Protege, the second a standalone web application. Both tools demonstrate the applicability of VOWL by means of various ontologies. In addition, the results of three user studies that evaluate the comprehensibility and usability of VOWL are summarized. They are complemented by findings from an interview with experienced ontology users and from testing the visual scope and completeness of VOWL with a benchmark ontology. 
The evaluations helped to improve VOWL and confirm that it produces comparatively intuitive and comprehensible ontology visualizations.", "which App. Type ?", "Web", 191.0, 194.0], ["We present a novel platform for the interactive visualization of very large graphs. The platform enables the user to interact with the visualized graph in a way that is very similar to the exploration of maps at multiple levels. Our approach involves an offline preprocessing phase that builds the layout of the graph by assigning coordinates to its nodes with respect to a Euclidean plane. The respective points are indexed with a spatial data structure, i.e., an R-tree, and stored in a database. Multiple abstraction layers of the graph based on various criteria are also created offline, and they are indexed similarly so that the user can explore the dataset at different levels of granularity, depending on her particular needs. Then, our system translates user operations into simple and very efficient spatial operations (i.e., window queries) in the backend. This technique allows for a fine-grained access to very large graphs with extremely low latency and memory requirements and without compromising the functionality of the tool. Our web-based prototype supports three main operations: (1) interactive navigation, (2) multi-level exploration, and (3) keyword search on the graph metadata.", "which App. Type ?", "Web", 1048.0, 1051.0], ["Querying the Semantic Web and analyzing the query results are often complex tasks that can be greatly facilitated by visual interfaces. A major challenge in the design of these interfaces is to provide intuitive and efficient interaction support without limiting too much the analytical degrees of freedom. This paper introduces SemLens, a visual tool that combines scatter plots and semantic lenses to overcome this challenge and to allow for a simple yet powerful analysis of RDF data. The scatter plots provide a global overview on an object collection and support the visual discovery of correlations and patterns in the data. The semantic lenses add dimensions for local analysis of subsets of the objects. A demo accessing DBpedia data is used for illustration.", "which Vis. Types ?", "Scatter", 366.0, 373.0], ["Abstract Background Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections and the resulting disease, coronavirus disease 2019 (Covid-19), have spread to millions of persons worldwide. Multiple vaccine candidates are under development, but no vaccine is currently available. Interim safety and immunogenicity data about the vaccine candidate BNT162b1 in younger adults have been reported previously from trials in Germany and the United States. Methods In an ongoing, placebo-controlled, observer-blinded, dose-escalation, phase 1 trial conducted in the United States, we randomly assigned healthy adults 18 to 55 years of age and those 65 to 85 years of age to receive either placebo or one of two lipid nanoparticle\u2013formulated, nucleoside-modified RNA vaccine candidates: BNT162b1, which encodes a secreted trimerized SARS-CoV-2 receptor\u2013binding domain; or BNT162b2, which encodes a membrane-anchored SARS-CoV-2 full-length spike, stabilized in the prefusion conformation. The primary outcome was safety (e.g., local and systemic reactions and adverse events); immunogenicity was a secondary outcome. Trial groups were defined according to vaccine candidate, age of the participants, and vaccine dose level (10 \u03bcg, 20 \u03bcg, 30 \u03bcg, and 100 \u03bcg). 
In all groups but one, participants received two doses, with a 21-day interval between doses; in one group (100 \u03bcg of BNT162b1), participants received one dose. Results A total of 195 participants underwent randomization. In each of 13 groups of 15 participants, 12 participants received vaccine and 3 received placebo. BNT162b2 was associated with a lower incidence and severity of systemic reactions than BNT162b1, particularly in older adults. In both younger and older adults, the two vaccine candidates elicited similar dose-dependent SARS-CoV-2\u2013neutralizing geometric mean titers, which were similar to or higher than the geometric mean titer of a panel of SARS-CoV-2 convalescent serum samples. Conclusions The safety and immunogenicity data from this U.S. phase 1 trial of two vaccine candidates in younger and older adults, added to earlier interim safety and immunogenicity data regarding BNT162b1 in younger adults from trials in Germany and the United States, support the selection of BNT162b2 for advancement to a pivotal phase 2\u20133 safety and efficacy evaluation. (Funded by BioNTech and Pfizer; ClinicalTrials.gov number, NCT04368728.)", "which Organisations ?", "BioNTech", 2349.0, 2357.0], ["Abstract Background Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections and the resulting disease, coronavirus disease 2019 (Covid-19), have spread to millions of persons worldwide. Multiple vaccine candidates are under development, but no vaccine is currently available. Interim safety and immunogenicity data about the vaccine candidate BNT162b1 in younger adults have been reported previously from trials in Germany and the United States. Methods In an ongoing, placebo-controlled, observer-blinded, dose-escalation, phase 1 trial conducted in the United States, we randomly assigned healthy adults 18 to 55 years of age and those 65 to 85 years of age to receive either placebo or one of two lipid nanoparticle\u2013formulated, nucleoside-modified RNA vaccine candidates: BNT162b1, which encodes a secreted trimerized SARS-CoV-2 receptor\u2013binding domain; or BNT162b2, which encodes a membrane-anchored SARS-CoV-2 full-length spike, stabilized in the prefusion conformation. The primary outcome was safety (e.g., local and systemic reactions and adverse events); immunogenicity was a secondary outcome. Trial groups were defined according to vaccine candidate, age of the participants, and vaccine dose level (10 \u03bcg, 20 \u03bcg, 30 \u03bcg, and 100 \u03bcg). In all groups but one, participants received two doses, with a 21-day interval between doses; in one group (100 \u03bcg of BNT162b1), participants received one dose. Results A total of 195 participants underwent randomization. In each of 13 groups of 15 participants, 12 participants received vaccine and 3 received placebo. BNT162b2 was associated with a lower incidence and severity of systemic reactions than BNT162b1, particularly in older adults. In both younger and older adults, the two vaccine candidates elicited similar dose-dependent SARS-CoV-2\u2013neutralizing geometric mean titers, which were similar to or higher than the geometric mean titer of a panel of SARS-CoV-2 convalescent serum samples. Conclusions The safety and immunogenicity data from this U.S. 
phase 1 trial of two vaccine candidates in younger and older adults, added to earlier interim safety and immunogenicity data regarding BNT162b1 in younger adults from trials in Germany and the United States, support the selection of BNT162b2 for advancement to a pivotal phase 2\u20133 safety and efficacy evaluation. (Funded by BioNTech and Pfizer; ClinicalTrials.gov number, NCT04368728.)", "which Organisations ?", "Pfizer", 2362.0, 2368.0], ["ABSTRACT This study analyses the consequences of the Covid-19 crisis on stress and well-being in Switzerland. In particular, we assess whether vulnerable groups in terms of social isolation, increased workload and limited socioeconomic resources are affected more than others. Using longitudinal data from the Swiss Household Panel, including a specific Covid-19 study, we estimate change score models to predict changes in perceived stress and life satisfaction at the end of the semi-lockdown in comparison to before the crisis. We find no general change in life satisfaction and a small decrease in stress. Yet, in line with our expectations, more vulnerable groups in terms of social isolation (young adults, Covid-19 risk group members, individuals without a partner), workload (women) and socioeconomic resources (unemployed and those who experienced a deteriorating financial situation) reported a decrease in life satisfaction. Stress levels decreased most strongly among high earners, workers on short-time work and the highly educated.", "which Examined (sub-)group ?", "men", NaN, NaN], ["ABSTRACT This study analyses the consequences of the Covid-19 crisis on stress and well-being in Switzerland. In particular, we assess whether vulnerable groups in terms of social isolation, increased workload and limited socioeconomic resources are affected more than others. Using longitudinal data from the Swiss Household Panel, including a specific Covid-19 study, we estimate change score models to predict changes in perceived stress and life satisfaction at the end of the semi-lockdown in comparison to before the crisis. We find no general change in life satisfaction and a small decrease in stress. Yet, in line with our expectations, more vulnerable groups in terms of social isolation (young adults, Covid-19 risk group members, individuals without a partner), workload (women) and socioeconomic resources (unemployed and those who experienced a deteriorating financial situation) reported a decrease in life satisfaction. Stress levels decreased most strongly among high earners, workers on short-time work and the highly educated.", "which Examined (sub-)group ?", "women", 784.0, 789.0], ["The coronavirus outbreak has caused significant disruptions to people\u2019s lives. We document the impact of state-wide stay-at-home orders on mental health using real time survey data in the US. The lockdown measures lowered mental health by 0.085 standard deviations. This large negative effect is entirely driven by women. As a result of the lockdown measures, the existing gender gap in mental health has increased by 66%. The negative effect on women\u2019s mental health cannot be explained by an increase in financial worries or childcare responsibilities.", "which Examined (sub-)group ?", "women", 315.0, 320.0], ["We document a decline in mental well-being after the onset of the Covid-19 pandemic in the UK. This decline is twice as large for women as for men. 
We seek to explain this gender gap by exploring gender differences in: family and caring responsibilities; financial and work situation; social engagement; health situation, and health behaviours, including exercise. Differences in family and caring responsibilities play some role, but the bulk of the gap is explained by social factors. Women reported more close friends before the pandemic than men, and increased loneliness after the pandemic's onset. Other factors are similarly distributed across genders and so play little role. Finally, we document larger declines in well-being for the young, of both genders, than the old.", "which Examined (sub-)group ?", "women", 130.0, 135.0], ["Abstract Objectives To investigate early effects of the COVID-19 pandemic related to (a) levels of worry, risk perception, and social distancing; (b) longitudinal effects on well-being; and (c) effects of worry, risk perception, and social distancing on well-being. Methods We analyzed annual changes in four aspects of well-being over 5 years (2015\u20132020): life satisfaction, financial satisfaction, self-rated health, and loneliness in a subsample (n = 1,071, aged 65\u201371) from a larger survey of Swedish older adults. The 2020 wave, collected March 26\u2013April 2, included measures of worry, risk perception, and social distancing in response to COVID-19. Results (a) In relation to COVID-19: 44.9% worried about health, 69.5% about societal consequences, 25.1% about financial consequences; 86.4% perceived a high societal risk, 42.3% a high risk of infection, and 71.2% reported high levels of social distancing. (b) Well-being remained stable (life satisfaction and loneliness) or even increased (self-rated health and financial satisfaction) in 2020 compared to previous years. (c) More worry about health and financial consequences was related to lower scores in all four well-being measures. Higher societal worry and more social distancing were related to higher well-being. Discussion In the early stage of the pandemic, Swedish older adults on average rated their well-being as high as, or even higher than, previous years. However, those who worried more reported lower well-being. Our findings speak to the resilience, but also heterogeneity, among older adults during the pandemic. Further research, on a broad range of health factors and long-term psychological consequences, is needed.", "which Examined (sub-)group ?", "older adults", 505.0, 517.0], ["We present LatinISE, a Latin corpus for the Sketch Engine. LatinISE consists of Latin works comprising a total of 13 million words, covering the time span from the 2 nd century B. C. to the 21 st century A. D. LatinISE is provided with rich metadata mark-up, including author, title, genre, era, date and century, as well as book, section, paragraph and line of verses. We have automatically annotated LatinISE with lemma and part-of-speech information. The annotation enables the users to search the corpus with a number of criteria, ranging from lemma, part-of-speech, context, to subcorpora defined chronologically or by genre. We also illustrate word sketches, one-page summaries of a word\u2019s corpus-based collocational behaviour. Our future plan is to produce word sketches for Latin words by adding richer morphological and syntactic annotation to the corpus.", "which Metadata ?", "date", 296.0, 300.0], ["We present LatinISE, a Latin corpus for the Sketch Engine. 
LatinISE consists of Latin works comprising a total of 13 million words, covering the time span from the 2 nd century B. C. to the 21 st century A. D. LatinISE is provided with rich metadata mark-up, including author, title, genre, era, date and century, as well as book, section, paragraph and line of verses. We have automatically annotated LatinISE with lemma and part-of-speech information. The annotation enables the users to search the corpus with a number of criteria, ranging from lemma, part-of-speech, context, to subcorpora defined chronologically or by genre. We also illustrate word sketches, one-page summaries of a word\u2019s corpus-based collocational behaviour. Our future plan is to produce word sketches for Latin words by adding richer morphological and syntactic annotation to the corpus.", "which Metadata ?", "author", 269.0, 275.0], ["We present LatinISE, a Latin corpus for the Sketch Engine. LatinISE consists of Latin works comprising a total of 13 million words, covering the time span from the 2 nd century B. C. to the 21 st century A. D. LatinISE is provided with rich metadata mark-up, including author, title, genre, era, date and century, as well as book, section, paragraph and line of verses. We have automatically annotated LatinISE with lemma and part-of-speech information. The annotation enables the users to search the corpus with a number of criteria, ranging from lemma, part-of-speech, context, to subcorpora defined chronologically or by genre. We also illustrate word sketches, one-page summaries of a word\u2019s corpus-based collocational behaviour. Our future plan is to produce word sketches for Latin words by adding richer morphological and syntactic annotation to the corpus.", "which Metadata ?", "century", 169.0, 176.0], ["We present LatinISE, a Latin corpus for the Sketch Engine. LatinISE consists of Latin works comprising a total of 13 million words, covering the time span from the 2 nd century B. C. to the 21 st century A. D. LatinISE is provided with rich metadata mark-up, including author, title, genre, era, date and century, as well as book, section, paragraph and line of verses. We have automatically annotated LatinISE with lemma and part-of-speech information. The annotation enables the users to search the corpus with a number of criteria, ranging from lemma, part-of-speech, context, to subcorpora defined chronologically or by genre. We also illustrate word sketches, one-page summaries of a word\u2019s corpus-based collocational behaviour. Our future plan is to produce word sketches for Latin words by adding richer morphological and syntactic annotation to the corpus.", "which Metadata ?", "era", 291.0, 294.0], ["We present LatinISE, a Latin corpus for the Sketch Engine. LatinISE consists of Latin works comprising a total of 13 million words, covering the time span from the 2 nd century B. C. to the 21 st century A. D. LatinISE is provided with rich metadata mark-up, including author, title, genre, era, date and century, as well as book, section, paragraph and line of verses. We have automatically annotated LatinISE with lemma and part-of-speech information. The annotation enables the users to search the corpus with a number of criteria, ranging from lemma, part-of-speech, context, to subcorpora defined chronologically or by genre. We also illustrate word sketches, one-page summaries of a word\u2019s corpus-based collocational behaviour. 
Our future plan is to produce word sketches for Latin words by adding richer morphological and syntactic annotation to the corpus.", "which Metadata ?", "genre", 284.0, 289.0], ["We present LatinISE, a Latin corpus for the Sketch Engine. LatinISE consists of Latin works comprising a total of 13 million words, covering the time span from the 2 nd century B. C. to the 21 st century A. D. LatinISE is provided with rich metadata mark-up, including author, title, genre, era, date and century, as well as book, section, paragraph and line of verses. We have automatically annotated LatinISE with lemma and part-of-speech information. The annotation enables the users to search the corpus with a number of criteria, ranging from lemma, part-of-speech, context, to subcorpora defined chronologically or by genre. We also illustrate word sketches, one-page summaries of a word\u2019s corpus-based collocational behaviour. Our future plan is to produce word sketches for Latin words by adding richer morphological and syntactic annotation to the corpus.", "which Metadata ?", "title", 277.0, 282.0], ["Multi-agent systems (MASs) have received tremendous attention from scholars in different disciplines, including computer science and civil engineering, as a means to solve complex problems by subdividing them into smaller tasks. The individual tasks are allocated to autonomous entities, known as agents. Each agent decides on a proper action to solve the task using multiple inputs, e.g., history of actions, interactions with its neighboring agents, and its goal. The MAS has found multiple applications, including modeling complex systems, smart grids, and computer networks. Despite their wide applicability, there are still a number of challenges faced by MAS, including coordination between agents, security, and task allocation. This survey provides a comprehensive discussion of all aspects of MAS, starting from definitions, features, applications, challenges, and communications to evaluation. A classification on MAS applications and challenges is provided along with references for further studies. We expect this paper to serve as an insightful and comprehensive resource on the MAS for researchers and practitioners in the area.", "which consists of ?", "Agent", 6.0, 11.0], ["Does free access to journal articles result in greater diffusion of scientific knowledge? Using a randomized controlled trial of open access publishing, involving 36 participating journals in the sciences, social sciences, and humanities, we report on the effects of free access on article downloads and citations. Articles placed in the open access condition (n=712) received significantly more downloads and reached a broader audience within the first year, yet were cited no more frequently, nor earlier, than subscription\u2010access control articles (n=2533) within 3 yr. These results may be explained by social stratification, a process that concentrates scientific authors at a small number of elite research universities with excellent access to the scientific literature. The real beneficiaries of open access publishing may not be the research community but communities of practice that consume, but rarely contribute to, the corpus of literature.\u2014Davis, P. M. Open access, readership, citations: a randomized controlled trial of scientific journal publishing. FASEB J. 25, 2129\u20102134 (2011). 
www.fasebj.org", "which open_access_medium ?", "articles", 28.0, 36.0], ["Abstract This paper studies a selection of 11 Norwegian journals in the humanities and social sciences and their conversion from subscription to open access, a move heavily incentivized by governmental mandates and open access policies. By investigating the journals\u2019 visiting logs in the period 2014\u20132019, the study finds that a conversion to open access induces higher visiting numbers; all journals in the study had a significant increase, which can be attributed to the conversion. Converting a journal had no spillover in terms of increased visits to previously published articles still behind the paywall in the same journals. Visits from previously subscribing Norwegian higher education institutions did not account for the increase in visits, indicating that the increase must be accounted for by visitors from other sectors. The results could be relevant for policymakers concerning the effects of strict policies targeting economically vulnerable national journals, and could further inform journal owners and editors on the effects of converting to open access.", "which open_access_medium ?", "articles", 577.0, 585.0], ["This study is a comparison of AUPress with three other traditional (non-open access) Canadian university presses. The analysis is based on the rankings that are correlated with book sales on Amazon.com and Amazon.ca. Statistical methods include the sampling of the sales ranking of randomly selected books from each press. The results of one-way ANOVA analyses show that there is no significant difference in the ranking of printed books sold by AUPress in comparison with traditional university presses. However, AUPress, can demonstrate a significantly larger readership for its books as evidenced by the number of downloads of the open electronic versions.", "which open_access_medium ?", "books", 300.0, 305.0], ["This article describes an experiment to measure the impact of open access (OA) publishing of academic books. During a period of nine months, three sets of 100 books were disseminated through an institutional repository, the Google Book Search program, or both channels. A fourth set of 100 books was used as control group. OA publishing enhances discovery and online consultation. Within the context of the experiment, no relation could be found between OA publishing and citation rates. Contrary to expectations, OA publishing does not stimulate or diminish sales figures. The Google Book Search program is superior to the repository.", "which open_access_medium ?", "books", 102.0, 107.0], ["We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. Our approach extends BERT by (1) masking contiguous random spans, rather than random tokens, and (2) training the span boundary representations to predict the entire content of the masked span, without relying on the individual token representations within it. SpanBERT consistently outperforms BERT and our better-tuned baselines, with substantial gains on span selection tasks such as question answering and coreference resolution. In particular, with the same training data and model size as BERT large , our single model obtains 94.6% and 88.7% F1 on SQuAD 1.1 and 2.0 respectively. We also achieve a new state of the art on the OntoNotes coreference resolution task (79.6% F1), strong performance on the TACRED relation extraction benchmark, and even gains on GLUE. 
1", "which Has ?", "Baselines", 428.0, 437.0], ["Classifying semantic relations between entity pairs in sentences is an important task in natural language processing (NLP). Most previous models applied to relation classification rely on high-level lexical and syntactic features obtained by NLP tools such as WordNet, the dependency parser, part-of-speech (POS) tagger, and named entity recognizers (NER). In addition, state-of-the-art neural models based on attention mechanisms do not fully utilize information related to the entity, which may be the most crucial feature for relation classification. To address these issues, we propose a novel end-to-end recurrent neural model that incorporates an entity-aware attention mechanism with a latent entity typing (LET) method. Our model not only effectively utilizes entities and their latent types as features, but also builds word representations by applying self-attention based on symmetrical similarity of a sentence itself. Moreover, the model is interpretable by visualizing applied attention mechanisms. Experimental results obtained with the SemEval-2010 Task 8 dataset, which is one of the most popular relation classification tasks, demonstrate that our model outperforms existing state-of-the-art models without any high-level features.", "which Has ?", "Model", 626.0, 631.0], ["Dependency trees help relation extraction models capture long-range relations between words. However, existing dependency-based models either neglect crucial information (e.g., negation) by pruning the dependency trees too aggressively, or are computationally inefficient because it is difficult to parallelize over different tree structures. We propose an extension of graph convolutional networks that is tailored for relation extraction, which pools information over arbitrary dependency structures efficiently in parallel. To incorporate relevant information while maximally removing irrelevant content, we further apply a novel pruning strategy to the input trees by keeping words immediately around the shortest path between the two entities among which a relation might hold. The resulting model achieves state-of-the-art performance on the large-scale TACRED dataset, outperforming existing sequence and dependency-based neural models. We also show through detailed analysis that this model has complementary strengths to sequence models, and combining them further improves the state of the art.", "which Has ?", "Model", 797.0, 802.0], ["Dependency trees convey rich structural information that is proven useful for extracting relations among entities in text. However, how to effectively make use of relevant information while ignoring irrelevant information from the dependency trees remains a challenging research question. Existing approaches employing rule based hard-pruning strategies for selecting relevant partial dependency structures may not always yield optimal results. In this work, we propose Attention Guided Graph Convolutional Networks (AGGCNs), a novel model which directly takes full dependency trees as inputs. Our model can be understood as a soft-pruning approach that automatically learns how to selectively attend to the relevant sub-structures useful for the relation extraction task. 
Extensive results on various tasks including cross-sentence n-ary relation extraction and large-scale sentence-level relation extraction show that our model is able to better leverage the structural information of the full dependency trees, giving significantly better results than previous approaches.", "which Has ?", "results", 436.0, 443.0], ["Relation classification is an important NLP task to extract relations between entities. The state-of-the-art methods for relation classification are primarily based on Convolutional or Recurrent Neural Networks. Recently, the pre-trained BERT model achieves very successful results in many NLP classification / sequence labeling tasks. Relation classification differs from those tasks in that it relies on information of both the sentence and the two target entities. In this paper, we propose a model that both leverages the pre-trained BERT language model and incorporates information from the target entities to tackle the relation classification task. We locate the target entities and transfer the information through the pre-trained architecture and incorporate the corresponding encoding of the two entities. We achieve significant improvement over the state-of-the-art method on the SemEval-2010 task 8 relational dataset.", "which Has ?", "Results", 274.0, 281.0], ["Most conventional sentence similarity methods only focus on similar parts of two input sentences, and simply ignore the dissimilar parts, which usually give us some clues and semantic meanings about the sentences. In this work, we propose a model to take into account both the similarities and dissimilarities by decomposing and composing lexical semantics over sentences. The model represents each word as a vector, and calculates a semantic matching vector for each word based on all words in the other sentence. Then, each word vector is decomposed into a similar component and a dissimilar component based on the semantic matching vector. After this, a two-channel CNN model is employed to capture features by composing the similar and dissimilar components. Finally, a similarity score is estimated over the composed feature vectors. Experimental results show that our model gets the state-of-the-art performance on the answer sentence selection task, and achieves a comparable result on the paraphrase identification task.", "which Has ?", "Results", 852.0, 859.0], ["Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. 
Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.", "which Has ?", "Results", 935.0, 942.0], ["Background The Electronic Surveillance System for the Early Notification of Community-Based Epidemics (ESSENCE) is a secure web-based tool that enables health care practitioners to monitor health indicators of public health importance for the detection and tracking of disease outbreaks, consequences of severe weather, and other events of concern. The ESSENCE concept began in an internally funded project at the Johns Hopkins University Applied Physics Laboratory, advanced with funding from the State of Maryland, and broadened in 1999 as a collaboration with the Walter Reed Army Institute for Research. Versions of the system have been further developed by Johns Hopkins University Applied Physics Laboratory in multiple military and civilian programs for the timely detection and tracking of health threats. Objective This study aims to describe the components and development of a biosurveillance system increasingly coordinating all-hazards health surveillance and infectious disease monitoring among large and small health departments, to list the key features and lessons learned in the growth of this system, and to describe the range of initiatives and accomplishments of local epidemiologists using it. Methods The features of ESSENCE include spatial and temporal statistical alerting, custom querying, user-defined alert notifications, geographical mapping, remote data capture, and event communications. To expedite visualization, configurable and interactive modes of data stratification and filtering, graphical and tabular customization, user preference management, and sharing features allow users to query data and view geographic representations, time series and data details pages, and reports. These features allow ESSENCE users to gather and organize the resulting wealth of information into a coherent view of population health status and communicate findings among users. Results The resulting broad utility, applicability, and adaptability of this system led to the adoption of ESSENCE by the Centers for Disease Control and Prevention, numerous state and local health departments, and the Department of Defense, both nationally and globally. The open-source version of Suite for Automated Global Electronic bioSurveillance is available for global, resource-limited settings. Resourceful users of the US National Syndromic Surveillance Program ESSENCE have applied it to the surveillance of infectious diseases, severe weather and natural disaster events, mass gatherings, chronic diseases and mental health, and injury and substance abuse. Conclusions With emerging high-consequence communicable diseases and other health conditions, the continued user requirement\u2013driven enhancements of ESSENCE demonstrate an adaptable disease surveillance capability focused on the everyday needs of public health. The challenge of a live system for widely distributed users with multiple different data sources and high throughput requirements has driven a novel, evolving architecture design.", "which Epidemiological surveillance users ?", "Epidemiologists", 1190.0, 1205.0], ["developed countries in many ways. Most people are poorer, less educated, more likely to die at a young age, and less knowledgeable about factors that cause, prevent, or cure disease. 
Biological and physical hazards are more common, which results in greater incidence, disability, and death. Although disease is common, both the people and government have much fewer resources for prevention or medical care. Many efficacious drugs are too expensive and not readily available for those in greatest need. Salaries are so low that government physicians or nurses must work after-hours in private clinics to feed, clothe, and educate their families. The establishment and maintenance of an epidemiological surveillance system in such an environment requires a different orientation from that found in wealthier nations. The scarcity of resources is a dominant concern. Salaried time spent gathering data is lost to service activities, such as treating gastrointestinal problems or preventing childhood diseases. As a result, components in a surveillance system must be justified, as are purchases of examination tables or radiographic equipment. A costly, extensive surveillance system may cause more harm than good. In this article I will define epidemiologic surveillance. I also will describe the various components of a surveillance program, show how microcomputers and existing software can be used to increase effectiveness, and illustrate how", "which Epidemiological surveillance users ?", "Physician", NaN, NaN], ["ABSTRACT Background: Tuberculosis (TB) surveillance data are crucial to the effectiveness of National TB Control Programs. In South Africa, few surveillance system evaluations have been undertaken to provide a rigorous assessment of the platform from which the national and district health systems draws data to inform programs and policies. Objective: Evaluate the attributes of Eden District\u2019s TB surveillance system, Western Cape Province, South Africa. Methods: Data quality, sensitivity and positive predictive value were assessed using secondary data from 40,033 TB cases entered in Eden District\u2019s ETR.Net from 2007 to 2013, and 79 purposively selected TB Blue Cards (TBCs), a medical patient file and source document for data entered into ETR.Net. Simplicity, flexibility, acceptability, stability and usefulness of the ETR.Net were assessed qualitatively through interviews with TB nurses, information health officers, sub-district and district coordinators involved in the TB surveillance. Results: TB surveillance system stakeholders report that Eden District\u2019s ETR.Net system was simple, acceptable, flexible and stable, and achieves its objective of informing TB control program, policies and activities. Data were less complete in the ETR.Net (66\u2013100%) than in the TBCs (76\u2013100%), and concordant for most variables except pre-treatment smear results, antiretroviral therapy (ART) and treatment outcome. The sensitivity of recorded variables in ETR.Net was 98% for gender, 97% for patient category, 93% for ART, 92% for treatment outcome and 90% for pre-treatment smear grading. Conclusions: Our results reveal that the system provides useful information to guide TB control program activities in Eden District. However, urgent attention is needed to address gaps in clinical recording on the TBC and data capturing into the ETR.Net system. 
We recommend continuous training and support of TB personnel involved with TB care, management and surveillance on TB data recording into the TBCs and ETR.Net as well as the implementation of a well-structured quality control and assurance system.", "which Epidemiological surveillance users ?", "TB nurses", 888.0, 897.0], ["The Corpus of Historical American English (COHA) contains 400 million words in more than 100,000 texts which date from the 1810s to the 2000s. The corpus contains texts from fiction, popular magazines, newspapers and non-fiction books, and is balanced by genre from decade to decade. It has been carefully lemmatised and tagged for part-of-speech, and uses the same architecture as the Corpus of Contemporary American English (COCA), BYU-BNC, the TIME Corpus and other corpora. COHA allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English.", "which Corpus genres ?", "fiction", 174.0, 181.0], ["The Corpus of Historical American English (COHA) contains 400 million words in more than 100,000 texts which date from the 1810s to the 2000s. The corpus contains texts from fiction, popular magazines, newspapers and non-fiction books, and is balanced by genre from decade to decade. It has been carefully lemmatised and tagged for part-of-speech, and uses the same architecture as the Corpus of Contemporary American English (COCA), BYU-BNC, the TIME Corpus and other corpora. COHA allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English.", "which Corpus genres ?", "newspapers", 202.0, 212.0], ["The Corpus of Historical American English (COHA) contains 400 million words in more than 100,000 texts which date from the 1810s to the 2000s. The corpus contains texts from fiction, popular magazines, newspapers and non-fiction books, and is balanced by genre from decade to decade. It has been carefully lemmatised and tagged for part-of-speech, and uses the same architecture as the Corpus of Contemporary American English (COCA), BYU-BNC, the TIME Corpus and other corpora. COHA allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English.", "which Corpus genres ?", "non-fiction books", 217.0, 234.0], ["The Corpus of Historical American English (COHA) contains 400 million words in more than 100,000 texts which date from the 1810s to the 2000s. The corpus contains texts from fiction, popular magazines, newspapers and non-fiction books, and is balanced by genre from decade to decade. It has been carefully lemmatised and tagged for part-of-speech, and uses the same architecture as the Corpus of Contemporary American English (COCA), BYU-BNC, the TIME Corpus and other corpora. 
COHA allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English.", "which Corpus genres ?", "popular magazines", 183.0, 200.0], ["Multi-agent systems (MASs) have received tremendous attention from scholars in different disciplines, including computer science and civil engineering, as a means to solve complex problems by subdividing them into smaller tasks. The individual tasks are allocated to autonomous entities, known as agents. Each agent decides on a proper action to solve the task using multiple inputs, e.g., history of actions, interactions with its neighboring agents, and its goal. The MAS has found multiple applications, including modeling complex systems, smart grids, and computer networks. Despite their wide applicability, there are still a number of challenges faced by MAS, including coordination between agents, security, and task allocation. This survey provides a comprehensive discussion of all aspects of MAS, starting from definitions, features, applications, challenges, and communications to evaluation. A classification on MAS applications and challenges is provided along with references for further studies. We expect this paper to serve as an insightful and comprehensive resource on the MAS for researchers and practitioners in the area.", "which Uses metric ?", "Inputs", 376.0, 382.0], ["Instruments play an essential role in creating research data. Given the importance of instruments and associated metadata to the assessment of data quality and data reuse, globally unique, persistent and resolvable identification of instruments is crucial. The Research Data Alliance Working Group Persistent Identification of Instruments (PIDINST) developed a community-driven solution for persistent identification of instruments which we present and discuss in this paper. Based on an analysis of 10 use cases, PIDINST developed a metadata schema and prototyped schema implementation with DataCite and ePIC as representative persistent identifier infrastructures and with HZB (Helmholtz-Zentrum Berlin fur Materialien und Energie) and BODC (British Oceanographic Data Centre) as representative institutional instrument providers. These implementations demonstrate the viability of the proposed solution in practice. Moving forward, PIDINST will further catalyse adoption and consolidate the schema by addressing new stakeholder requirements.", "which Entity type ?", "Instruments", 0.0, 11.0], ["The Open Researcher & Contributor ID (ORCID) registry presents a unique opportunity to solve the problem of author name ambiguity. At its core the value of the ORCID registry is that it crosses disciplines, organizations, and countries, linking ORCID with both existing identifier schemes as well as publications and other research activities. By supporting linkages across multiple datasets \u2013 clinical trials, publications, patents, datasets \u2013 such a registry becomes a switchboard for researchers and publishers alike in managing the dissemination of research findings. We describe use cases for embedding ORCID identifiers in manuscript submission workflows, prior work searches, manuscript citations, and repository deposition. 
We make recommendations for storing and displaying ORCID identifiers in publication metadata to include ORCID identifiers, with CrossRef integration as a specific example. Finally, we provide an overview of ORCID membership and integration tools and resources.", "which Entity type ?", "Researchers", 487.0, 498.0], ["OpenAIRE is the European Union initiative for an Open Access Infrastructure for Research in support of open scholarly communication and access to the research output of European funded projects and open access content from a network of institutional and disciplinary repositories. This article outlines the curation activities conducted in the OpenAIRE infrastructure, which employs a multi-level, multi-targeted approach: the publication and implementation of interoperability guidelines to assist in the local data curation processes, the data curation due to the integration of heterogeneous sources supporting different types of data, the inference of links to accomplish the publication research contextualization and data enrichment, and the end-user metadata curation that allows users to edit the attributes and provide links among the entities.", "which Content ?", "Metadata", 757.0, 765.0], ["This Work in Progress Research paper departs from the recent, turbulent changes in global societies, forcing many citizens to re-skill themselves to (re)gain employment. Learners therefore need to be equipped with skills to be autonomous and strategic about their own skill development. Subsequently, high-quality, on-line, personalized educational content and services are also essential to serve this high demand for learning content. Open Educational Resources (OERs) have high potential to contribute to the mitigation of these problems, as they are available in a wide range of learning and occupational contexts globally. However, their applicability has been limited, due to low metadata quality and complex quality control. These issues resulted in a lack of personalised OER functions, like recommendation and search. Therefore, we suggest a novel, personalised OER recommendation method to match skill development targets with open learning content. This is done by: 1) using an OER quality prediction model based on metadata, OER properties, and content; 2) supporting learners to set individual skill targets based on actual labour market information, and 3) building a personalized OER recommender to help learners to master their skill targets. Accordingly, we built a prototype focusing on Data Science related jobs, and evaluated this prototype with 23 data scientists in different expertise levels. Pilot participants used our prototype for at least 30 minutes and commented on each of the recommended OERs. As a result, more than 400 recommendations were generated and 80.9% of the recommendations were reported as useful.", "which contains ?", "Data", 1305.0, 1309.0], ["Speech emotion recognition is a challenging task, and extensive reliance has been placed on models that use audio features in building well-performing classifiers. In this paper, we propose a novel deep dual recurrent encoder model that utilizes text data and audio signals simultaneously to obtain a better understanding of speech data. As emotional dialogue is composed of sound and spoken content, our model encodes the information from audio and text sequences using dual recurrent neural networks (RNNs) and then combines the information from these sources to predict the emotion class. 
This architecture analyzes speech data from the signal level to the language level, and it thus utilizes the information within the data more comprehensively than models that focus on audio features. Extensive experiments are conducted to investigate the efficacy and properties of the proposed model. Our proposed model outperforms previous state-of-the-art methods in assigning data to one of four emotion categories (i.e., angry, happy, sad and neutral) when the model is applied to the IEMOCAP dataset, as reflected by accuracies ranging from 68.8% to 71.8%.", "which contains ?", "Model", 226.0, 231.0], ["This Work in Progress Research paper departs from the recent, turbulent changes in global societies, forcing many citizens to re-skill themselves to (re)gain employment. Learners therefore need to be equipped with skills to be autonomous and strategic about their own skill development. Subsequently, high-quality, on-line, personalized educational content and services are also essential to serve this high demand for learning content. Open Educational Resources (OERs) have high potential to contribute to the mitigation of these problems, as they are available in a wide range of learning and occupational contexts globally. However, their applicability has been limited, due to low metadata quality and complex quality control. These issues resulted in a lack of personalised OER functions, like recommendation and search. Therefore, we suggest a novel, personalised OER recommendation method to match skill development targets with open learning content. This is done by: 1) using an OER quality prediction model based on metadata, OER properties, and content; 2) supporting learners to set individual skill targets based on actual labour market information, and 3) building a personalized OER recommender to help learners to master their skill targets. Accordingly, we built a prototype focusing on Data Science related jobs, and evaluated this prototype with 23 data scientists in different expertise levels. Pilot participants used our prototype for at least 30 minutes and commented on each of the recommended OERs. As a result, more than 400 recommendations were generated and 80.9% of the recommendations were reported as useful.", "which contains ?", "Model", 1012.0, 1017.0], ["Abstract Motivation Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora. Results We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. 
While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts. Availability and implementation We make the pre-trained weights of BioBERT freely available at https://github.com/naver/biobert-pretrained, and the source code for fine-tuning BioBERT available at https://github.com/dmis-lab/biobert.", "which contains ?", "Model", 655.0, 660.0], ["In this paper, we investigate medical students medical search behavior on a medical domain. We use two behavioral signals: detailed query analysis (qualitative and quantitative) and task completion time to understand how medical students perform medical searches based on varying task complexity. We also investigate how task complexity and topic familiarity affect search behavior. We gathered 80 interactive search sessions from an exploratory survey with 20 medical students. We observe information searching behavior using 3 simulated work task scenarios and 1 personal scenario. We present quantitative results from two perspectives: overall and user perceived task complexity. We also analyze query properties from a qualitative aspect. Our results show task complexity and topic familiarity affect search behavior of medical students. In some cases, medical students demonstrate different search traits on a personal task in comparison to the simulated work task scenarios. These findings help us better understand medical search behavior. Medical search engines can use these findings to detect and adapt to medical students' search behavior to enhance a student's search experience.", "which contains ?", "Results", 608.0, 615.0], ["Photos, drawings, figures, etc. supplement textual information in various kinds of media, for example, in web news or scientific publications. In this respect, the intended effect of an image can be quite different, e.g., providing additional information, focusing on certain details of surrounding text, or simply being a general illustration of a topic. As a consequence, the semantic correlation between information of different modalities can vary noticeably, too. Moreover, cross-modal interrelations are often hard to describe in a precise way. The variety of possible interrelations of textual and graphical information and the question, how they can be described and automatically estimated have not been addressed yet by previous work. In this paper, we present several contributions to close this gap. First, we introduce two measures to describe cross-modal interrelations: cross-modal mutual information (CMI) and semantic correlation (SC). Second, a novel approach relying on deep learning is suggested to estimate CMI and SC of textual and visual information. Third, three diverse datasets are leveraged to learn an appropriate deep neural network model for the demanding task. The system has been evaluated on a challenging test set and the experimental results demonstrate the feasibility of the approach.", "which contains ?", "Contribution", NaN, NaN], ["Diabetes Mellitus (DM) is a chronic, progressive and life-threatening disease. 
The ocular manifestations of DM, Diabetic Retinopathy (DR) and Diabetic Macular Edema (DME), are the leading causes of blindness in the adult population throughout the world. Early diagnosis of DR and DM through screening tests and successive treatments can reduce the threat to visual acuity. In this context, we propose an encoder decoder based semantic segmentation network SOP-Net (Segmentation of Ocular Pathologies Using Deep Convolutional Neural Network) for simultaneous delineation of retinal pathologies (hard exudates, soft exudates, hemorrhages, microaneurysms). The proposed semantic segmentation framework is capable of providing segmentation results at pixel-level with good localization of object boundaries. SOP-Net has been trained and tested on IDRiD dataset which is publicly available with pixel level annotations of retinal pathologies. The network achieved average accuracies of 98.98%, 90.46%, 96.79%, and 96.70% for segmentation of hard exudates, soft exudates, hemorrhages, and microaneurysms. The proposed methodology has the capability to be used in developing a diagnostic system for organizing large scale ophthalmic screening programs.", "which contains ?", "Results", 736.0, 743.0], ["How predictable are life trajectories? We investigated this question with a scientific mass collaboration using the common task method; 160 teams built predictive models for six life outcomes using data from the Fragile Families and Child Wellbeing Study, a high-quality birth cohort study. Despite using a rich dataset and applying machine-learning methods optimized for prediction, the best predictions were not very accurate and were only slightly better than those from a simple benchmark model. Within each outcome, prediction error was strongly associated with the family being predicted and weakly associated with the technique used to generate the prediction. Overall, these results suggest practical limits to the predictability of life outcomes in some settings and illustrate the value of mass collaborations in the social sciences.", "which contains ?", "Methods", 358.0, 365.0], ["In our position paper on a technology-enhanced smart learning environment, we propose the innovative combination of a knowledge graph representing what one has to learn and a learning path defining in which order things are going to be learned. In this way, we aim to identify students\u2019 weak spots or knowledge gaps in order to individually assist them in reaching their goals. Based on the performance of different learning paths, one might further identify the characteristics of a learning system that leads to successful students. In addition, by studying assessments and the different ways a particular problem can be solved, new methods for a multi-dimensional classification of assessments can be developed. The theoretical findings on learning paths in combination with the classification of assessments will inform the design and development of a smart learning environment. By combining a knowledge graph with different learning paths and the corresponding practical assessments we enable the creation of a smart learning tool. While the proposed approach can be applied to different educational domains and should lead to more effective learning environments fostering deep learning in schools as well as in professional settings, in this paper we focus on the domain of mathematics in primary and high schools as the main use case.", "which contains ?", "Methods", 635.0, 642.0], ["How predictable are life trajectories? 
We investigated this question with a scientific mass collaboration using the common task method; 160 teams built predictive models for six life outcomes using data from the Fragile Families and Child Wellbeing Study, a high-quality birth cohort study. Despite using a rich dataset and applying machine-learning methods optimized for prediction, the best predictions were not very accurate and were only slightly better than those from a simple benchmark model. Within each outcome, prediction error was strongly associated with the family being predicted and weakly associated with the technique used to generate the prediction. Overall, these results suggest practical limits to the predictability of life outcomes in some settings and illustrate the value of mass collaborations in the social sciences.", "which contains ?", "Results", 691.0, 698.0], ["The Natural Language Processing (NLP) community has significantly contributed to the solutions for entity and relation recognition from a natural language text, and possibly linking them to proper matches in Knowledge Graphs (KGs). Considering Wikidata as the background KG, there are still limited tools to link knowledge within the text to Wikidata. In this paper, we present Falcon 2.0, the first joint entity and relation linking tool over Wikidata. It receives a short natural language text in the English language and outputs a ranked list of entities and relations annotated with the proper candidates in Wikidata. The candidates are represented by their Internationalized Resource Identifier (IRI) in Wikidata. Falcon 2.0 resorts to the English language model for the recognition task (e.g., N-Gram tiling and N-Gram splitting), and then an optimization approach for the linking task. We have empirically studied the performance of Falcon 2.0 on Wikidata and concluded that it outperforms all the existing baselines. Falcon 2.0 is open source and can be reused by the community; all the required instructions of Falcon 2.0 are well-documented at our GitHub repository (https://github.com/SDM-TIB/falcon2.0). We also demonstrate an online API, which can be run without any technical expertise. Falcon 2.0 and its background knowledge bases are available as resources at https://labs.tib.eu/falcon/falcon2/.", "which contains ?", "Background", 260.0, 270.0], ["The Natural Language Processing (NLP) community has significantly contributed to the solutions for entity and relation recognition from a natural language text, and possibly linking them to proper matches in Knowledge Graphs (KGs). Considering Wikidata as the background KG, there are still limited tools to link knowledge within the text to Wikidata. In this paper, we present Falcon 2.0, the first joint entity and relation linking tool over Wikidata. It receives a short natural language text in the English language and outputs a ranked list of entities and relations annotated with the proper candidates in Wikidata. The candidates are represented by their Internationalized Resource Identifier (IRI) in Wikidata. Falcon 2.0 resorts to the English language model for the recognition task (e.g., N-Gram tiling and N-Gram splitting), and then an optimization approach for the linking task. We have empirically studied the performance of Falcon 2.0 on Wikidata and concluded that it outperforms all the existing baselines. Falcon 2.0 is open source and can be reused by the community; all the required instructions of Falcon 2.0 are well-documented at our GitHub repository (https://github.com/SDM-TIB/falcon2.0). 
We also demonstrate an online API, which can be run without any technical expertise. Falcon 2.0 and its background knowledge bases are available as resources at https://labs.tib.eu/falcon/falcon2/.", "which contains ?", "Data", NaN, NaN], ["As Linked Data available on the Web continue to grow, understanding their structure and assessing their quality remains a challenging task making such the bottleneck for their reuse. ABSTAT is an online semantic profiling tool which helps data consumers in better understanding of the data by extracting data-driven ontology patterns and statistics about the data. The SHACL Shapes Constraint Language helps users capturing quality issues in the data by means of co straints. In this paper we propose a methodology to improve the quality of different versions of the data by means of SHACL constraints learned from the semantic profiles produced by ABSTAT.", "which input ?", "ontology", 316.0, 324.0], ["Folksonomies emerge as the result of the free tagging activity of a large number of users over a variety of resources. They can be considered as valuable sources from which it is possible to obtain emerging vocabularies that can be leveraged in knowledge extraction tasks. However, when it comes to understanding the meaning of tags in folksonomies, several problems mainly related to the appearance of synonymous and ambiguous tags arise, specifically in the context of multilinguality. The authors aim to turn folksonomies into knowledge structures where tag meanings are identified, and relations between them are asserted. For such purpose, they use DBpedia as a general knowledge base from which they leverage its multilingual capabilities.", "which input ?", "Context", 460.0, 467.0], ["Folksonomies emerge as the result of the free tagging activity of a large number of users over a variety of resources. They can be considered as valuable sources from which it is possible to obtain emerging vocabularies that can be leveraged in knowledge extraction tasks. However, when it comes to understanding the meaning of tags in folksonomies, several problems mainly related to the appearance of synonymous and ambiguous tags arise, specifically in the context of multilinguality. The authors aim to turn folksonomies into knowledge structures where tag meanings are identified, and relations between them are asserted. For such purpose, they use DBpedia as a general knowledge base from which they leverage its multilingual capabilities.", "which input ?", "Tags", 328.0, 332.0], ["In January 2016, the three journals of the Association for Research in Vision and Ophthalmology (ARVO) transitioned to gold open access. Increased author charges were introduced to partially offset the loss of subscription revenue. Submissions to the two established journals initially dropped by almost 15% but have now stabilized. The transition has not impacted acceptance rates and impact factors, and article pageviews and downloads may have increased as a result of open access.", "which research_field_investigated ?", "Ophthalmology", 82.0, 95.0], ["Abstract This paper studies a selection of 11 Norwegian journals in the humanities and social sciences and their conversion from subscription to open access, a move heavily incentivized by governmental mandates and open access policies. 
By investigating the journals\u2019 visiting logs in the period 2014\u20132019, the study finds that a conversion to open access induces higher visiting numbers; all journals in the study had a significant increase, which can be attributed to the conversion. Converting a journal had no spillover in terms of increased visits to previously published articles still behind the paywall in the same journals. Visits from previously subscribing Norwegian higher education institutions did not account for the increase in visits, indicating that the increase must be accounted for by visitors from other sectors. The results could be relevant for policymakers concerning the effects of strict policies targeting economically vulnerable national journals, and could further inform journal owners and editors on the effects of converting to open access.", "which research_field_investigated ?", "Social Sciences", 87.0, 102.0], ["As Linked Data available on the Web continue to grow, understanding their structure and assessing their quality remains a challenging task making such the bottleneck for their reuse. ABSTAT is an online semantic profiling tool which helps data consumers in better understanding of the data by extracting data-driven ontology patterns and statistics about the data. The SHACL Shapes Constraint Language helps users capturing quality issues in the data by means of co straints. In this paper we propose a methodology to improve the quality of different versions of the data by means of SHACL constraints learned from the semantic profiles produced by ABSTAT.", "which output ?", "statistics", 338.0, 348.0], ["In order to deal with heterogeneous knowledge in the medical field, this paper proposes a method which can learn a heavy-weighted medical ontology based on medical glossaries and Web resources. Firstly, terms and taxonomic relations are extracted based on disease and drug glossaries and a light-weighted ontology is constructed, Secondly, non-taxonomic relations are automatically learned from Web resources with linguistic patterns, and the two ontologies (disease and drug) are expanded from light-weighted level towards heavy-weighted level, At last, the disease ontology and drug ontology are integrated to create a practical medical ontology. Experiment shows that this method can integrate and expand medical terms with taxonomic and different kinds of non-taxonomic relations. Our experiments show that the performance is promising.", "which output ?", "Disease ontology", 559.0, 575.0], ["Ontologies are used in the integration of information resources by describing the semantics of the information sources with machine understandable terms and definitions. But, creating an ontology is a difficult and time-consuming process, especially in the early stage of extracting key concepts and relations. This paper proposes a method for domain ontology building by extracting ontological knowledge from UML models of existing systems. We compare the UML model elements with the OWL ones and derive transformation rules between the corresponding model elements. Based on these rules, we define an XSLT document which implements the transformation processes. 
We expect that the proposed method reduce the cost and time for building domain ontologies with the reuse of existing UML models", "which output ?", "Domain ontology", 344.0, 359.0], ["In order to deal with heterogeneous knowledge in the medical field, this paper proposes a method which can learn a heavy-weighted medical ontology based on medical glossaries and Web resources. Firstly, terms and taxonomic relations are extracted based on disease and drug glossaries and a light-weighted ontology is constructed, Secondly, non-taxonomic relations are automatically learned from Web resources with linguistic patterns, and the two ontologies (disease and drug) are expanded from light-weighted level towards heavy-weighted level, At last, the disease ontology and drug ontology are integrated to create a practical medical ontology. Experiment shows that this method can integrate and expand medical terms with taxonomic and different kinds of non-taxonomic relations. Our experiments show that the performance is promising.", "which output ?", "Drug ontology", 580.0, 593.0], ["Many digital libraries recommend literature to their users considering the similarity between a query document and their repository. However, they often fail to distinguish what is the relationship that makes two documents alike. In this paper, we model the problem of finding the relationship between two documents as a pairwise document classification task. To find the semantic relation between documents, we apply a series of techniques, such as GloVe, Paragraph Vectors, BERT, and XLNet under different configurations (e.g., sequence length, vector concatenation scheme), including a Siamese architecture for the Transformer-based systems. We perform our experiments on a newly proposed dataset of 32,168 Wikipedia article pairs and Wikidata properties that define the semantic document relations. Our results show vanilla BERT as the best performing system with an F1-score of 0.93, which we manually examine to better understand its applicability to other domains. Our findings suggest that classifying semantic relations between documents is a solvable task and motivates the development of a recommender system based on the evaluated techniques. The discussions in this paper serve as first steps in the exploration of documents through SPARQL-like queries such that one could find documents that are similar in one aspect but dissimilar in another.", "which Dataset used ?", "Wikipedia", 710.0, 719.0], ["The Open Researcher & Contributor ID (ORCID) registry presents a unique opportunity to solve the problem of author name ambiguity. At its core the value of the ORCID registry is that it crosses disciplines, organizations, and countries, linking ORCID with both existing identifier schemes as well as publications and other research activities. By supporting linkages across multiple datasets \u2013 clinical trials, publications, patents, datasets \u2013 such a registry becomes a switchboard for researchers and publishers alike in managing the dissemination of research findings. We describe use cases for embedding ORCID identifiers in manuscript submission workflows, prior work searches, manuscript citations, and repository deposition. We make recommendations for storing and displaying ORCID identifiers in publication metadata to include ORCID identifiers, with CrossRef integration as a specific example. 
Finally, we provide an overview of ORCID membership and integration tools and resources.", "which provided services ?", "Search", NaN, NaN], ["Instruments play an essential role in creating research data. Given the importance of instruments and associated metadata to the assessment of data quality and data reuse, globally unique, persistent and resolvable identification of instruments is crucial. The Research Data Alliance Working Group Persistent Identification of Instruments (PIDINST) developed a community-driven solution for persistent identification of instruments which we present and discuss in this paper. Based on an analysis of 10 use cases, PIDINST developed a metadata schema and prototyped schema implementation with DataCite and ePIC as representative persistent identifier infrastructures and with HZB (Helmholtz-Zentrum Berlin fur Materialien und Energie) and BODC (British Oceanographic Data Centre) as representative institutional instrument providers. These implementations demonstrate the viability of the proposed solution in practice. Moving forward, PIDINST will further catalyse adoption and consolidate the schema by addressing new stakeholder requirements.", "which provided services ?", "Search", NaN, NaN], ["Although the ultimate objective of Linked Data is linking and integration, it is not currently evident how connected the current Linked Open Data (LOD) cloud is. In this article, we focus on methods, supported by special indexes and algorithms, for performing measurements related to the connectivity of more than two datasets that are useful in various tasks including (a) Dataset Discovery and Selection; (b) Object Coreference, i.e., for obtaining complete information about a set of entities, including provenance information; (c) Data Quality Assessment and Improvement, i.e., for assessing the connectivity between any set of datasets and monitoring their evolution over time, as well as for estimating data veracity; (d) Dataset Visualizations; and various other tasks. Since it would be prohibitively expensive to perform all these measurements in a na\u00efve way, in this article, we introduce indexes (and their construction algorithms) that can speed up such tasks. In brief, we introduce (i) a namespace-based prefix index, (ii) a sameAs catalog for computing the symmetric and transitive closure of the owl:sameAs relationships encountered in the datasets, (iii) a semantics-aware element index (that exploits the aforementioned indexes), and, finally, (iv) two lattice-based incremental algorithms for speeding up the computation of the intersection of URIs of any set of datasets. For enhancing scalability, we propose parallel index construction algorithms and parallel lattice-based incremental algorithms, we evaluate the achieved speedup using either a single machine or a cluster of machines, and we provide insights regarding the factors that affect efficiency. Finally, we report measurements about the connectivity of the (billion triples-sized) LOD cloud that have never been carried out so far.", "which provided services ?", "Dataset Discovery", 374.0, 391.0], ["One of the disciplines behind the science of science is the study of scientific networks. This work focuses on scientific networks as a social network having different nodes and connections. Nodes can be represented by authors, articles or journals while connections by citation, co-citation or co-authorship. One of the challenges in creating scientific networks is the lack of publicly available comprehensive data set. 
It limits the variety of analyses on the same set of nodes of different scientific networks. To supplement such analyses we have worked on publicly available citation metadata from Crossref and OpenCitations. Using this data a workflow is developed to create scientific networks. Analysis of these networks gives insights into academic research and scholarship. Different techniques of social network analysis have been applied in the literature to study these networks. It includes centrality analysis, community detection, and clustering coefficient. We have used metadata of Scientometrics journal, as a case study, to present our workflow. We did a sample run of the proposed workflow to identify prominent authors using centrality analysis. This work is not a bibliometric study of any field rather it presents replicable Python scripts to perform network analysis. With an increase in the popularity of open access and open metadata, we hypothesise that this workflow shall provide an avenue for understanding scientific scholarship in multiple dimensions.", "which Bibliographic data source ?", "Crossref", 603.0, 611.0], ["This article aims to identify whether different weighted PageRank algorithms can be applied to author citation networks to measure the popularity and prestige of a scholar from a citation perspective. Information retrieval (IR) was selected as a test field and data from 1956\u20132008 were collected from Web of Science. Weighted PageRank with citation and publication as weighted vectors were calculated on author citation networks. The results indicate that both popularity rank and prestige rank were highly correlated with the weighted PageRank. Principal component analysis was conducted to detect relationships among these different measures. For capturing prize winners within the IR field, prestige rank outperformed all the other measures. \u00a9 2011 Wiley Periodicals, Inc.", "which Bibliographic data source ?", "Web of Science", 301.0, 315.0], ["Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking.
Finally, we sketch potential use cases and provide baseline results.", "which Software entity types ?", "Application", 894.0, 905.0], ["Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.", "which Software entity types ?", "Plugin", 907.0, 913.0], ["\nPurpose\nIn terms of entrepreneurship, open data benefits include economic growth, innovation, empowerment and new or improved products and services. Hackathons encourage the development of new applications using open data and the creation of startups based on these applications. Researchers focus on factors that affect nascent entrepreneurs\u2019 decision to create a startup but researches in the field of open data hackathons have not been fully investigated yet. This paper aims to suggest a model that incorporates factors that affect the decision of establishing a startup by developers who have participated in open data hackathons.\n\n\nDesign/methodology/approach\nIn total, 70 papers were examined and analyzed using a three-phased literature review methodology, which was suggested by Webster and Watson (2002). These surveys investigated several factors that affect a nascent entrepreneur to create a startup.\n\n\nFindings\nEventually, by identifying the motivations for developers to participate in a hackathon, and understanding the benefits of the use of open data, researchers will be able to elaborate the proposed model and evaluate if the contest has contributed to the decision of establish a startup and what factors affect the decision to establish a startup apply to open data developers, and if the participants of the contest agree with these factors.\n\n\nOriginality/value\nThe paper expands the scope of open data research on entrepreneurship field, stating the need for more research to be conducted regarding the open data in entrepreneurship through hackathons.\n", "which has subject domain ?", "Hackathons", 228.0, 238.0], ["The theme of smart grids will connote in the immediate future the production and distribution of electricity, integrating effectively and in a sustainable way energy deriving from large power stations with that distributed and supplied by renewable sources. 
In programmes of urban redevelopment, however, the historical city has not yet been subject to significant experimentation, also due to the specific safeguard on this kind of Heritage. This reflection opens up interesting new perspectives of research and operations, which could significantly contribute to the pursuit of the aims of the Smart City. This is the main goal of the research here presented and focused on the binomial renovation of a historical complex/enhancement and upgrading of its energy efficiency.", "which has subject domain ?", "Smart City", 596.0, 606.0], ["During recent years, the \u2018smart city\u2019 concept has emerged in literature (e.g., Kunttu, 2019; Markkula & Kune, 2018; \u00d6berg, Graham, & Hennelly, 2017; Visvizi & Lytras, 2018). Inherently, the smart city concept includes urban innovation; therefore, simply developing and applying technology is not enough for success. For cities to be 'smart,' they also have to be innovative, apply new ways of thinking among businesses, citizens, and academia, as well as integrate diverse actors, especially universities, in their innovation practices (Kunttu, 2019; Markkula & Kune, 2018).", "which has subject domain ?", "innovation practices", 515.0, 535.0], ["In this article, we share our experiences of using digital technologies and various media to present historical narratives of a museum object collection aiming to provide an engaging experience on multiple platforms. Based on P. Joseph\u2019s article, Dawson presented multiple interpretations and historical views of the Markham car collection across various platforms using multimedia resources. Through her creative production, she explored how to use cylindrical panoramas and rich media to offer new ways of telling the controversial story of the contested heritage of a museum\u2019s veteran and vintage car collection. The production\u2019s usability was investigated involving five experts before it was published online and the general users\u2019 experience was investigated. In this article, we present an important component of findings which indicates that virtual panorama tours featuring multimedia elements could be successful in attracting new audiences and that using this type of storytelling technique can be effective in the museum sector. The storyteller panorama tour presented here may stimulate GLAM (galleries, libraries, archives, and museums) professionals to think of new approaches, implement new strategies or services to engage their audiences more effectively. The research may ameliorate the education of future professionals as well.", "which has subject domain ?", "contested heritage", 547.0, 565.0], ["A diverse and changing array of digital media have been used to present heritage online. While websites have been created for online heritage outreach for nearly two decades, social media is employed increasingly to complement and in some cases replace the use of websites. These same social media are used by stakeholders as a form of participatory culture, to create communities and to discuss heritage independently of narratives offered by official institutions such as museums, memorials, and universities. With difficult or \u201cdark\u201d heritage\u2014places of memory centering on deaths, disasters, and atrocities\u2014these online representations and conversations can be deeply contested. 
Examining the websites and social media of difficult heritage, with an emphasis on the trans-Atlantic slave trade provides insights into the efficacy of online resources provided by official institutions, as well as the unofficial, participatory communities of stakeholders who use social media for collective memories.", "which has subject domain ?", "participatory culture", 336.0, 357.0], ["This paper discusses the potential of current advancements in Information Communication Technologies (ICT) for cultural heritage preservation, valorization and management within contemporary cities. The paper highlights the potential of virtual environments to assess the impacts of heritage policies on urban development. It does so by discussing the implications of virtual globes and crowdsourcing to support the participatory valuation and management of cultural heritage assets. To this purpose, a review of available valuation techniques is here presented together with a discussion on how these techniques might be coupled with ICT tools to promote inclusive governance.\u00a0", "which has subject domain ?", "valuation techniques", 531.0, 551.0], ["Abstract Hackathons, time-bounded events where participants write computer code and build apps, have become a popular means of socializing tech students and workers to produce \u201cinnovation\u201d despite little promise of material reward. Although they offer participants opportunities for learning new skills and face-to-face networking and set up interaction rituals that create an emotional \u201chigh,\u201d potential advantage is even greater for the events\u2019 corporate sponsors, who use them to outsource work, crowdsource innovation, and enhance their reputation. Ethnographic observations and informal interviews at seven hackathons held in New York during the course of a single school year show how the format of the event and sponsors\u2019 discursive tropes, within a dominant cultural frame reflecting the appeal of Silicon Valley, reshape unpaid and precarious work as an extraordinary opportunity, a ritual of ecstatic labor, and a collective imaginary for fictional expectations of innovation that benefits all, a powerful strategy for manufacturing workers\u2019 consent in the \u201cnew\u201d economy.", "which has subject domain ?", "\u201cnew\u201d economy", NaN, NaN], ["A diverse and changing array of digital media have been used to present heritage online. While websites have been created for online heritage outreach for nearly two decades, social media is employed increasingly to complement and in some cases replace the use of websites. These same social media are used by stakeholders as a form of participatory culture, to create communities and to discuss heritage independently of narratives offered by official institutions such as museums, memorials, and universities. With difficult or \u201cdark\u201d heritage\u2014places of memory centering on deaths, disasters, and atrocities\u2014these online representations and conversations can be deeply contested. Examining the websites and social media of difficult heritage, with an emphasis on the trans-Atlantic slave trade provides insights into the efficacy of online resources provided by official institutions, as well as the unofficial, participatory communities of stakeholders who use social media for collective memories.", "which has subject domain ?", "\u201cdark\u201d heritage", NaN, NaN], ["A diverse and changing array of digital media have been used to present heritage online. 
While websites have been created for online heritage outreach for nearly two decades, social media is employed increasingly to complement and in some cases replace the use of websites. These same social media are used by stakeholders as a form of participatory culture, to create communities and to discuss heritage independently of narratives offered by official institutions such as museums, memorials, and universities. With difficult or \u201cdark\u201d heritage\u2014places of memory centering on deaths, disasters, and atrocities\u2014these online representations and conversations can be deeply contested. Examining the websites and social media of difficult heritage, with an emphasis on the trans-Atlantic slave trade provides insights into the efficacy of online resources provided by official institutions, as well as the unofficial, participatory communities of stakeholders who use social media for collective memories.", "which has subject domain ?", "collective memories", 981.0, 1000.0], ["ABSTRACT \u2018Heritage Interpretation\u2019 has always been considered as an effective learning, communication and management tool that increases visitors\u2019 awareness of and empathy to heritage sites or artefacts. Yet the definition of \u2018digital heritage interpretation\u2019 is still wide and so far, no significant method and objective are evident within the domain of \u2018digital heritage\u2019 theory and discourse. Considering \u2018digital heritage interpretation\u2019 as a process rather than as a tool to present or communicate with end-users, this paper presents a critical application of a theoretical construct ascertained from multiple disciplines and explicates four objectives for a comprehensive interpretive process. A conceptual model is proposed and further developed into a conceptual framework with fifteen considerations. This framework is then implemented and tested on an online platform to assess its impact on end-users\u2019 interpretation level. We believe the presented interpretive framework (PrEDiC) will help heritage professionals and media designers to develop interpretive heritage project.", "which has subject domain ?", "\u2018digital heritage interpretation\u2019", NaN, NaN], ["A diverse and changing array of digital media have been used to present heritage online. While websites have been created for online heritage outreach for nearly two decades, social media is employed increasingly to complement and in some cases replace the use of websites. These same social media are used by stakeholders as a form of participatory culture, to create communities and to discuss heritage independently of narratives offered by official institutions such as museums, memorials, and universities. With difficult or \u201cdark\u201d heritage\u2014places of memory centering on deaths, disasters, and atrocities\u2014these online representations and conversations can be deeply contested. Examining the websites and social media of difficult heritage, with an emphasis on the trans-Atlantic slave trade provides insights into the efficacy of online resources provided by official institutions, as well as the unofficial, participatory communities of stakeholders who use social media for collective memories.", "which has subject domain ?", "trans-Atlantic slave trade", 769.0, 795.0], ["Abstract. 
In 2017 we published a seminal research study in the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences about how smart city tools, solutions and applications underpinned historical and cultural heritage of cities at that time (Angelidou et al. 2017). We now return to investigate the progress that has been made during the past three years, and specifically whether the weak substantiation of cultural heritage in smart city strategies that we observed in 2017 has been improved. The newest literature suggests that smart cities should capitalize on local strengths and give prominence to local culture and traditions and provides a handful of solutions to this end. However, a more thorough examination of what has been actually implemented reveals a (still) rather immature approach. The smart city cases that were selected for the purposes of this research include Tarragona (Spain), Budapest (Hungary) and Karlsruhe (Germany). For each one we collected information regarding the overarching structure of the initiative, the positioning of cultural heritage and the inclusion of heritage-related smart city applications. We then performed a comparative analysis based on a simplified version of the Digital Strategy Canvas. Our findings suggest that a rich cultural heritage and a broader strategic focus on touristic branding and promotion are key ingredients of smart city development in this domain; this is a commonality of all the investigated cities. Moreover, three different strategy architectures emerge, representing the different interplays among the smart city, cultural heritage and sustainable urban development. We conclude that a new generation of smart city initiatives is emerging, in which cultural heritage is of increasing importance. This generation tends to associate cultural heritage with social and cultural values, liveability and sustainable urban development.", "which has subject domain ?", "touristic branding and promotion", 1363.0, 1395.0], ["Abstract. In 2017 we published a seminal research study in the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences about how smart city tools, solutions and applications underpinned historical and cultural heritage of cities at that time (Angelidou et al. 2017). We now return to investigate the progress that has been made during the past three years, and specifically whether the weak substantiation of cultural heritage in smart city strategies that we observed in 2017 has been improved. The newest literature suggests that smart cities should capitalize on local strengths and give prominence to local culture and traditions and provides a handful of solutions to this end. However, a more thorough examination of what has been actually implemented reveals a (still) rather immature approach. The smart city cases that were selected for the purposes of this research include Tarragona (Spain), Budapest (Hungary) and Karlsruhe (Germany). For each one we collected information regarding the overarching structure of the initiative, the positioning of cultural heritage and the inclusion of heritage-related smart city applications. We then performed a comparative analysis based on a simplified version of the Digital Strategy Canvas. Our findings suggest that a rich cultural heritage and a broader strategic focus on touristic branding and promotion are key ingredients of smart city development in this domain; this is a commonality of all the investigated cities. 
Moreover, three different strategy architectures emerge, representing the different interplays among the smart city, cultural heritage and sustainable urban development. We conclude that a new generation of smart city initiatives is emerging, in which cultural heritage is of increasing importance. This generation tends to associate cultural heritage with social and cultural values, liveability and sustainable urban development.", "which has subject domain ?", "sustainable urban development", 1651.0, 1680.0], ["The theme of smart grids will connote in the immediate future the production and distribution of electricity, integrating effectively and in a sustainable way energy deriving from large power stations with that distributed and supplied by renewable sources. In programmes of urban redevelopment, however, the historical city has not yet been subject to significant experimentation, also due to the specific safeguard on this kind of Heritage. This reflection opens up interesting new perspectives of research and operations, which could significantly contribute to the pursuit of the aims of the Smart City. This is the main goal of the research here presented and focused on the binomial renovation of a historical complex/enhancement and upgrading of its energy efficiency.", "which has subject domain ?", "energy efficiency", 757.0, 774.0], ["This paper introduces the recently begun REINVENT research project focused on the management of heritage in the cross-border cultural landscape of Derry/Londonderry. The importance of facilitating dialogue over cultural heritage to the maintenance of \u2018thin\u2019 borders in contested cross-border contexts is underlined in the paper, as is the relatively favourable strategic policy context for progressing \u2018heritage diplomacy\u2019 on the island of Ireland. However, it is argued that more inclusive and participatory approaches to the management of heritage are required to assist in the mediation of contestation, particularly accommodating a greater diversity of \u2018non-expert\u2019 opinion, in addition to helping identify value conflicts and dissonance. The application of digital technologies in the form of Public Participation Geographic Information Systems (PPGIS) is proposed, and this is briefly discussed in relation to some of the expected benefits and methodological challenges that must be addressed in the REINVENT project. The paper concludes by emphasising the importance of dialogue and knowledge exchange between academia and heritage policymakers/practitioners.", "which has subject domain ?", "heritage diplomacy", 403.0, 421.0], ["A key challenge for manufacturers today is efficiently producing and delivering products on time. Issues include demand for customized products, changes in orders, and equipment status change, complicating the decision-making process. A real-time digital representation of the manufacturing operation would help address these challenges. Recent technology advancements of smart sensors, IoT, and cloud computing make it possible to realize a \"digital twin\" of a manufacturing system or process. Digital twins or surrogates are data-driven virtual representations that replicate, connect, and synchronize the operation of a manufacturing system or process. They utilize dynamically collected data to track system behaviors, analyze performance, and help make decisions without interrupting production. 
In this paper, we define digital surrogate, explore their relationships to simulation, digital thread, artificial intelligence, and IoT. We identify the technology and standard requirements and challenges for implementing digital surrogates. A production planning case is used to exemplify the digital surrogate concept.", "which has viewpoint ?", "Process", 226.0, 233.0], ["A key challenge for manufacturers today is efficiently producing and delivering products on time. Issues include demand for customized products, changes in orders, and equipment status change, complicating the decision-making process. A real-time digital representation of the manufacturing operation would help address these challenges. Recent technology advancements of smart sensors, IoT, and cloud computing make it possible to realize a \"digital twin\" of a manufacturing system or process. Digital twins or surrogates are data-driven virtual representations that replicate, connect, and synchronize the operation of a manufacturing system or process. They utilize dynamically collected data to track system behaviors, analyze performance, and help make decisions without interrupting production. In this paper, we define digital surrogate, explore their relationships to simulation, digital thread, artificial intelligence, and IoT. We identify the technology and standard requirements and challenges for implementing digital surrogates. A production planning case is used to exemplify the digital surrogate concept.", "which has viewpoint ?", "System", 476.0, 482.0], ["Anilido-oxazoline-ligated rare-earth metal complexes show strong fluorescence emissions and good catalytic performance on isoprene polymerization with high cis-1,4-selectivity.", "which Ligand ?", "Anilido-oxazoline", 0.0, 17.0], ["The coronavirus outbreak has caused significant disruptions to people\u2019s lives. We document the impact of state-wide stay-at-home orders on mental health using real time survey data in the US. The lockdown measures lowered mental health by 0.085 standard deviations. This large negative effect is entirely driven by women. As a result of the lockdown measures, the existing gender gap in mental health has increased by 66%. The negative effect on women\u2019s mental health cannot be explained by an increase in financial worries or childcare responsibilities.", "which Control variables ?", "Gender", 373.0, 379.0], ["We document a decline in mental well-being after the onset of the Covid-19 pandemic in the UK. This decline is twice as large for women as for men. We seek to explain this gender gap by exploring gender differences in: family and caring responsibilities; financial and work situation; social engagement; health situation, and health behaviours, including exercise. Differences in family and caring responsibilities play some role, but the bulk of the gap is explained by social factors. Women reported more close friends before the pandemic than men, and increased loneliness after the pandemic's onset. Other factors are similarly distributed across genders and so play little role. Finally, we document larger declines in well-being for the young, of both genders, than the old.", "which Control variables ?", "Gender", 172.0, 178.0], ["We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences.
We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.", "which Uses dataset ?", "Flickr", 675.0, 681.0], ["We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.", "which Uses dataset ?", "Youtube", 687.0, 694.0], ["Motivation Biological knowledge is widely represented in the form of ontology\u2010based annotations: ontologies describe the phenomena assumed to exist within a domain, and the annotations associate a (kind of) biological entity with a set of phenomena within the domain. The structure and information contained in ontologies and their annotations make them valuable for developing machine learning, data analysis and knowledge extraction algorithms; notably, semantic similarity is widely used to identify relations between biological entities, and ontology\u2010based annotations are frequently used as features in machine learning applications. Results We propose the Onto2Vec method, an approach to learn feature vectors for biological entities based on their annotations to biomedical ontologies. Our method can be applied to a wide range of bioinformatics research problems such as similarity\u2010based prediction of interactions between proteins, classification of interaction types using supervised learning, or clustering. 
To evaluate Onto2Vec, we use the gene ontology (GO) and jointly produce dense vector representations of proteins, the GO classes to which they are annotated, and the axioms in GO that constrain these classes. First, we demonstrate that Onto2Vec\u2010generated feature vectors can significantly improve prediction of protein\u2010protein interactions in human and yeast. We then illustrate how Onto2Vec representations provide the means for constructing data\u2010driven, trainable semantic similarity measures that can be used to identify particular relations between proteins. Finally, we use an unsupervised clustering approach to identify protein families based on their Enzyme Commission numbers. Our results demonstrate that Onto2Vec can generate high quality feature vectors from biological entities and ontologies. Onto2Vec has the potential to significantly outperform the state\u2010of\u2010the\u2010art in several predictive applications in which ontologies are involved. Availability and implementation https://github.com/bio\u2010ontology\u2010research\u2010group/onto2vec", "which Uses dataset ?", "Gene Ontology (GO)", NaN, NaN], ["Word embeddings have emerged as a popular approach to unsupervised learning of word relationships in machine learning and natural language processing. In this article, we benchmark two of the most popular algorithms, GloVe and word2vec, to assess their suitability for capturing medical relationships in large sources of biomedical data. Leaning on recent theoretical insights, we provide a unified view of these algorithms and demonstrate how different sources of data can be combined to construct the largest ever set of embeddings for 108,477 medical concepts using an insurance claims database of 60 million members, 20 million clinical notes, and 1.7 million full text biomedical journal articles. We evaluate our approach, called cui2vec, on a set of clinically relevant benchmarks and in many instances demonstrate state of the art performance relative to previous results. Finally, we provide a downloadable set of pre-trained embeddings for other researchers to use, as well as an online tool for interactive exploration of the cui2vec embeddings.", "which Has method ?", "GloVe", 217.0, 222.0], ["Word embeddings have emerged as a popular approach to unsupervised learning of word relationships in machine learning and natural language processing. In this article, we benchmark two of the most popular algorithms, GloVe and word2vec, to assess their suitability for capturing medical relationships in large sources of biomedical data. Leaning on recent theoretical insights, we provide a unified view of these algorithms and demonstrate how different sources of data can be combined to construct the largest ever set of embeddings for 108,477 medical concepts using an insurance claims database of 60 million members, 20 million clinical notes, and 1.7 million full text biomedical journal articles. We evaluate our approach, called cui2vec, on a set of clinically relevant benchmarks and in many instances demonstrate state of the art performance relative to previous results. Finally, we provide a downloadable set of pre-trained embeddings for other researchers to use, as well as an online tool for interactive exploration of the cui2vec embeddings.", "which Has method ?", "word2vec", 227.0, 235.0], ["MOTIVATION Ontologies are widely used in biology for data annotation, integration and analysis. 
In addition to formally structured axioms, ontologies contain meta-data in the form of annotation axioms which provide valuable pieces of information that characterize ontology classes. Annotation axioms commonly used in ontologies include class labels, descriptions or synonyms. Despite being a rich source of semantic information, the ontology meta-data are generally unexploited by ontology-based analysis methods such as semantic similarity measures. RESULTS We propose a novel method, OPA2Vec, to generate vector representations of biological entities in ontologies by combining formal ontology axioms and annotation axioms from the ontology meta-data. We apply a Word2Vec model that has been pre-trained on either a corpus of abstracts or full-text articles to produce feature vectors from our collected data. We validate our method in two different ways: first, we use the obtained vector representations of proteins in a similarity measure to predict protein-protein interaction on two different datasets. Second, we evaluate our method on predicting gene-disease associations based on phenotype similarity by generating vector representations of genes and diseases using a phenotype ontology, and applying the obtained vectors to predict gene-disease associations using mouse model phenotypes. We demonstrate that OPA2Vec significantly outperforms existing methods for predicting gene-disease associations. Using evidence from mouse models, we apply OPA2Vec to identify candidate genes for several thousand rare and orphan diseases. OPA2Vec can be used to produce vector representations of any biomedical entity given any type of biomedical ontology. AVAILABILITY AND IMPLEMENTATION https://github.com/bio-ontology-research-group/opa2vec. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.", "which Has method ?", "word2vec", 765.0, 773.0], ["Populating ontology graphs represents a long-standing problem for the Semantic Web community. Recent advances in translation-based graph embedding methods for populating instance-level knowledge graphs lead to promising new approaches for the ontology population problem. However, unlike instance-level graphs, the majority of relation facts in ontology graphs come with comprehensive semantic relations, which often include the properties of transitivity and symmetry, as well as hierarchical relations. These comprehensive relations are often too complex for existing graph embedding methods, and direct application of such methods is not feasible. Hence, we propose On2Vec, a novel translation-based graph embedding method for ontology population. On2Vec integrates two model components that effectively characterize comprehensive relation facts in ontology graphs. The first is the Component-specific Model that encodes concepts and relations into low-dimensional embedding spaces without a loss of relational properties; the second is the Hierarchy Model that performs focused learning of hierarchical relation facts. Experiments on several well-known ontology graphs demonstrate the promising capabilities of On2Vec in predicting and verifying new relation facts. These promising results also make possible significant improvements in related methods.", "which Has method ?", "Hierarchy Model", 1045.0, 1060.0], ["We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. 
DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.", "which Has method ?", "Random walk", NaN, NaN], ["NLTK, the Natural Language Toolkit, is a suite of open source program modules, tutorials and problem sets, providing ready-to-use computational linguistics courseware. NLTK covers symbolic and statistical natural language processing, and is interfaced to annotated corpora. Students augment and replace existing components, learn structured programming by example, and manipulate sophisticated models from the outset.", "which is a ?", "Toolkit", 27.0, 34.0], ["NLTK, the Natural Language Toolkit, is a suite of open source program modules, tutorials and problem sets, providing ready-to-use computational linguistics courseware. NLTK covers symbolic and statistical natural language processing, and is interfaced to annotated corpora. Students augment and replace existing components, learn structured programming by example, and manipulate sophisticated models from the outset.", "which Documentation ?", "Tutorials", 79.0, 88.0], ["The electrochemical reduction of solid silica has been investigated in molten CaCl2 at 900 \u00b0C for the one-step, up-scalable, controllable and affordable production of nanostructured silicon with promising photo-responsive properties. Cyclic voltammetry of the metallic cavity electrode loaded with fine silica powder was performed to elaborate the electrochemical reduction mechanism. Potentiostatic electrolysis of porous and dense silica pellets was carried out at different potentials, focusing on the influences of the electrolysis potential and the microstructure of the precursory silica on the product purity and microstructure. The findings suggest a potential range between \u22120.60 and \u22120.95 V (vs. Ag/AgCl) for the production of nanostructured silicon with high purity (>99 wt%). According to the elucidated mechanism on the electro-growth of the silicon nanostructures, optimal process parameters for the controllable preparation of high-purity silicon nanoparticles and nanowires were identified. Scaling-up the optimal electrolysis was successful at the gram-scale for the preparation of high-purity silicon nanowires which exhibited promising photo-responsive properties.", "which (pseudo)reference electrode ?", "Ag/AgCl", 706.0, 713.0], ["

In this work, we present a non-fullerene electron acceptor bearing a fused five-heterocyclic ring containing selenium atoms, denoted as IDSe-T-IC, for fullerene-free polymer solar cells (PSCs).

", "which Mobility ?", "Electron", 44.0, 52.0], ["Low bandgap n-type organic semiconductor (n-OS) ITIC has attracted great attention for the application as an acceptor with medium bandgap p-type conjugated polymer as donor in nonfullerene polymer solar cells (PSCs) because of its attractive photovoltaic performance. Here we report a modification on the molecular structure of ITIC by side-chain isomerization with meta-alkyl-phenyl substitution, m-ITIC, to further improve its photovoltaic performance. In a comparison with its isomeric counterpart ITIC with para-alkyl-phenyl substitution, m-ITIC shows a higher film absorption coefficient, a larger crystalline coherence, and higher electron mobility. These inherent advantages of m-ITIC resulted in a higher power conversion efficiency (PCE) of 11.77% for the nonfullerene PSCs with m-ITIC as acceptor and a medium bandgap polymer J61 as donor, which is significantly improved over that (10.57%) of the corresponding devices with ITIC as acceptor. To the best of our knowledge, the PCE of 11.77% is one of the highest values reported in the literature to date for nonfullerene PSCs. More importantly, the m-ITIC-based device shows less thickness-dependent photovoltaic behavior than ITIC-based devices in the active-layer thickness range of 80-360 nm, which is beneficial for large area device fabrication. These results indicate that m-ITIC is a promising low bandgap n-OS for the application as an acceptor in PSCs, and the side-chain isomerization could be an easy and convenient way to further improve the photovoltaic performance of the donor and acceptor materials for high efficiency PSCs.", "which Mobility ?", "Electron", 637.0, 645.0], ["We have developed a kind of novel fused-ring small molecular acceptor, whose planar conformation can be locked by intramolecular noncovalent interaction. The formation of planar supramolecular fused-ring structure by conformation locking can effectively broaden its absorption spectrum, enhance the electron mobility, and reduce the nonradiative energy loss. Polymer solar cells (PSCs) based on this acceptor afforded a power conversion efficiency (PCE) of 9.6%. In contrast, PSCs based on similar acceptor, which cannot form a flat conformation, only gave a PCE of 2.3%. Such design strategy, which can make the synthesis of small molecular acceptor much easier, will be promising in developing a new acceptor for high efficiency polymer solar cells.", "which Mobility ?", "Electron", 299.0, 307.0], ["A side\u2010chain conjugation strategy in the design of nonfullerene electron acceptors is proposed, with the design and synthesis of a side\u2010chain\u2010conjugated acceptor (ITIC2) based on a 4,8\u2010bis(5\u2010(2\u2010ethylhexyl)thiophen\u20102\u2010yl)benzo[1,2\u2010b:4,5\u2010b\u2032]di(cyclopenta\u2010dithiophene) electron\u2010donating core and 1,1\u2010dicyanomethylene\u20103\u2010indanone electron\u2010withdrawing end groups. ITIC2 with the conjugated side chains exhibits an absorption peak at 714 nm, which redshifts 12 nm relative to ITIC1. The absorption extinction coefficient of ITIC2 is 2.7 \u00d7 105m\u22121 cm\u22121, higher than that of ITIC1 (1.5 \u00d7 105m\u22121 cm\u22121). 
ITIC2 exhibits slightly higher highest occupied molecular orbital (HOMO) (\u22125.43 eV) and lowest unoccupied molecular orbital (LUMO) (\u22123.80 eV) energy levels relative to ITIC1 (HOMO: \u22125.48 eV; LUMO: \u22123.84 eV), and higher electron mobility (1.3 \u00d7 10\u22123 cm2 V\u22121 s\u22121) than that of ITIC1 (9.6 \u00d7 10\u22124 cm2 V\u22121 s\u22121). The power conversion efficiency of ITIC2\u2010based organic solar cells is 11.0%, much higher than that of ITIC1\u2010based control devices (8.54%). Our results demonstrate that side\u2010chain conjugation can tune energy levels, enhance absorption, and electron mobility, and finally enhance photovoltaic performance of nonfullerene acceptors.", "which Mobility ?", "Electron", 64.0, 72.0], ["A novel non-fullerene acceptor, possessing a very low bandgap of 1.34 eV and a high-lying lowest unoccupied molecular orbital level of -3.95 eV, is designed and synthesized by introducing electron-donating alkoxy groups to the backbone of a conjugated small molecule. Impressive power conversion efficiencies of 8.4% and 10.7% are obtained for fabricated single and tandem polymer solar cells.", "which Mobility ?", "Electron", 188.0, 196.0], ["With an indenoindene core, a new thieno[3,4\u2010b]thiophene\u2010based small\u2010molecule electron acceptor, 2,2\u2032\u2010((2Z,2\u2032Z)\u2010((6,6\u2032\u2010(5,5,10,10\u2010tetrakis(2\u2010ethylhexyl)\u20105,10\u2010dihydroindeno[2,1\u2010a]indene\u20102,7\u2010diyl)bis(2\u2010octylthieno[3,4\u2010b]thiophene\u20106,4\u2010diyl))bis(methanylylidene))bis(5,6\u2010difluoro\u20103\u2010oxo\u20102,3\u2010dihydro\u20101H\u2010indene\u20102,1\u2010diylidene))dimalononitrile (NITI), is successfully designed and synthesized. Compared with 12\u2010\u03c0\u2010electron fluorene, a carbon\u2010bridged biphenylene with an axial symmetry, indenoindene, a carbon\u2010bridged E\u2010stilbene with a centrosymmetry, shows elongated \u03c0\u2010conjugation with 14 \u03c0\u2010electrons and one more sp3 carbon bridge, which may increase the tunability of electronic structure and film morphology. Despite its twisted molecular framework, NITI shows a low optical bandgap of 1.49 eV in thin film and a high molar extinction coefficient of 1.90 \u00d7 105m\u22121 cm\u22121 in solution. By matching NITI with a large\u2010bandgap polymer donor, an extraordinary power conversion efficiency of 12.74% is achieved, which is among the best performance so far reported for fullerene\u2010free organic photovoltaics and is inspiring for the design of new electron acceptors.", "which Mobility ?", "Electron", 77.0, 85.0], ["Ladder-type dithienocyclopentacarbazole (DTCC) cores, which possess highly extended \u03c0-conjugated backbones and versatile modular structures for derivatization, were widely used to develop high-performance p-type polymeric semiconductors. However, an n-type DTCC-based organic semiconductor has not been reported to date. In this study, the first DTCC-based n-type organic semiconductor (DTCC\u2013IC) with a well-defined A\u2013D\u2013A backbone was designed, synthesized, and characterized, in which a DTCC derivative substituted by four p-octyloxyphenyl groups was used as the electron-donating core and two strongly electron-withdrawing 3-(dicyanomethylene)indan-1-one moieties were used as the terminal acceptors. 
It was found that DTCC\u2013IC has strong light-capturing ability in the range of 500\u2013720 nm and exhibits an impressively high molar absorption coefficient of 2.24 \u00d7 105 M\u22121 cm\u22121 at 669 nm owing to effective intramolecular charge transfer and a strong D\u2013A effect. Cyclic voltammetry measurements indicated that the HOMO and LUMO energy levels of DTCC\u2013IC are \u22125.50 and \u22123.87 eV, respectively. More importantly, a high electron mobility of 2.17 \u00d7 10\u22123 cm2 V\u22121 s\u22121 was determined by the space-charge-limited current method; this electron mobility can be comparable to that of fullerene derivative acceptors (\u03bce \u223c 10\u22123 cm2 V\u22121 s\u22121). To investigate its application potential in non-fullerene solar cells, we fabricated organic solar cells (OSCs) by blending a DTCC\u2013IC acceptor with a PTB7-Th donor under various conditions. The results suggest that the optimized device exhibits a maximum power conversion efficiency (PCE) of up to 6% and a rational high VOC of 0.95 V. These findings demonstrate that the ladder-type DTCC core is a promising building block for the development of high-mobility n-type organic semiconductors for OSCs.", "which Mobility ?", "Electron", 564.0, 572.0], ["We develop an efficient fused-ring electron acceptor (ITIC-Th) based on indacenodithieno[3,2-b]thiophene core and thienyl side-chains for organic solar cells (OSCs). Relative to its counterpart with phenyl side-chains (ITIC), ITIC-Th shows lower energy levels (ITIC-Th: HOMO = -5.66 eV, LUMO = -3.93 eV; ITIC: HOMO = -5.48 eV, LUMO = -3.83 eV) due to the \u03c3-inductive effect of thienyl side-chains, which can match with high-performance narrow-band-gap polymer donors and wide-band-gap polymer donors. ITIC-Th has higher electron mobility (6.1 \u00d7 10(-4) cm(2) V(-1) s(-1)) than ITIC (2.6 \u00d7 10(-4) cm(2) V(-1) s(-1)) due to enhanced intermolecular interaction induced by sulfur-sulfur interaction. We fabricate OSCs by blending ITIC-Th acceptor with two different low-band-gap and wide-band-gap polymer donors. In one case, a power conversion efficiency of 9.6% was observed, which rivals some of the highest efficiencies for single junction OSCs based on fullerene acceptors.", "which Mobility ?", "Electron", 35.0, 43.0], ["A series of halogenated conjugated molecules, containing F, Cl, Br and I, were easily prepared via Knoevenagel condensation and applied in field-effect transistors and organic solar cells. Halogenated conjugated materials were found to possess deep frontier energy levels and high crystallinity compared to their non-halogenated analogues, which is due to the strong electronegativity and heavy atom effect of halogens. As a result, halogenated semiconductors provide high electron mobilities up to 1.3 cm2 V\u22121 s\u22121 in transistors and high efficiencies over 9% in non-fullerene solar cells.", "which Mobility ?", "Electron", 473.0, 481.0], ["Naphtho[1,2\u2010b:5,6\u2010b\u2032]dithiophene is extended to a fused octacyclic building block, which is end capped by strong electron\u2010withdrawing 2\u2010(5,6\u2010difluoro\u20103\u2010oxo\u20102,3\u2010dihydro\u20101H\u2010inden\u20101\u2010ylidene)malononitrile to yield a fused\u2010ring electron acceptor (IOIC2) for organic solar cells (OSCs). 
Relative to naphthalene\u2010based IHIC2, naphthodithiophene\u2010based IOIC2 with a larger \u03c0\u2010conjugation and a stronger electron\u2010donating core shows a higher lowest unoccupied molecular orbital energy level (IOIC2: \u22123.78 eV vs IHIC2: \u22123.86 eV), broader absorption with a smaller optical bandgap (IOIC2: 1.55 eV vs IHIC2: 1.66 eV), and a higher electron mobility (IOIC2: 1.0 \u00d7 10\u22123 cm2 V\u22121 s\u22121 vs IHIC2: 5.0 \u00d7 10\u22124 cm2 V\u22121 s\u22121). Thus, IOIC2\u2010based OSCs show higher values in open\u2010circuit voltage, short\u2010circuit current density, fill factor, and thereby much higher power conversion efficiency (PCE) values than those of the IHIC2\u2010based counterpart. In particular, as\u2010cast OSCs based on FTAZ: IOIC2 yield PCEs of up to 11.2%, higher than that of the control devices based on FTAZ: IHIC2 (7.45%). Furthermore, by using 0.2% 1,8\u2010diiodooctane as the processing additive, a PCE of 12.3% is achieved from the FTAZ:IOIC2\u2010based devices, higher than that of the FTAZ:IHIC2\u2010based devices (7.31%). These results indicate that incorporating extended conjugation into the electron\u2010donating fused\u2010ring units in nonfullerene acceptors is a promising strategy for designing high\u2010performance electron acceptors.", "which Mobility ?", "Electron", 113.0, 121.0], ["A fused hexacyclic electron acceptor, IHIC, based on strong electron\u2010donating group dithienocyclopentathieno[3,2\u2010b]thiophene flanked by strong electron\u2010withdrawing group 1,1\u2010dicyanomethylene\u20103\u2010indanone, is designed, synthesized, and applied in semitransparent organic solar cells (ST\u2010OSCs). IHIC exhibits strong near\u2010infrared absorption with extinction coefficients of up to 1.6 \u00d7 105m\u22121 cm\u22121, a narrow optical bandgap of 1.38 eV, and a high electron mobility of 2.4 \u00d7 10\u22123 cm2 V\u22121 s\u22121. The ST\u2010OSCs based on blends of a narrow\u2010bandgap polymer donor PTB7\u2010Th and narrow\u2010bandgap IHIC acceptor exhibit a champion power conversion efficiency of 9.77% with an average visible transmittance of 36% and excellent device stability; this efficiency is much higher than any single\u2010junction and tandem ST\u2010OSCs reported in the literature.", "which Mobility ?", "Electron", 19.0, 27.0], ["Organic solar cells (OSCs) are a promising cost-effective alternative for utility of solar energy, and possess low-cost, light-weight, and fl exibility advantages. [ 1\u20137 ] Much attention has been focused on the development of OSCs which have seen a dramatic rise in effi ciency over the last decade, and the encouraging power conversion effi ciency (PCE) over 9% has been achieved from bulk heterojunction (BHJ) OSCs. [ 8 ] With regard to photoactive materials, fullerenes and their derivatives, such as [6,6]-phenyl C 61 butyric acid methyl ester (PC 61 BM), have been the dominant electron-acceptor materials in BHJ OSCs, owing to their high electron mobility, large electron affi nity and isotropy of charge transport. [ 9 ] However, fullerenes have a few disadvantages, such as restricted electronic tuning and weak absorption in the visible region. Furthermore, in typical BHJ system of poly(3-hexylthiophene) (P3HT):PC 61 BM, mismatching energy levels between donor and acceptor leads to energy loss and low open-circuit voltages ( V OC ). To solve these problems, novel electron acceptor materials with strong and broad absorption spectra and appropriate energy levels are necessary for OSCs. 
Recently, non-fullerene small molecule acceptors have been developed. [ 10 , 11 ] However, rare reports on the devices based on solution-processed non-fullerene small molecule acceptors have shown PCEs approaching or exceeding 1.5%, [ 12\u201319 ] and only one paper reported PCEs over 2%. [ 16 ]", "which Mobility type ?", "Electron", 583.0, 591.0], ["A novel small molecule, FBR, bearing 3-ethylrhodanine flanking groups was synthesized as a nonfullerene electron acceptor for solution-processed bulk heterojunction organic photovoltaics (OPV). A straightforward synthesis route was employed, offering the potential for large scale preparation of this material. Inverted OPV devices employing poly(3-hexylthiophene) (P3HT) as the donor polymer and FBR as the acceptor gave power conversion efficiencies (PCE) up to 4.1%. Transient and steady state optical spectroscopies indicated efficient, ultrafast charge generation and efficient photocurrent generation from both donor and acceptor. Ultrafast transient absorption spectroscopy was used to investigate polaron generation efficiency as well as recombination dynamics. It was determined that the P3HT:FBR blend is highly intermixed, leading to increased charge generation relative to comparative devices with P3HT:PC60BM, but also faster recombination due to a nonideal morphology in which, in contrast to P3HT:PC60BM devices, the acceptor does not aggregate enough to create appropriate percolation pathways that prevent fast nongeminate recombination. Despite this nonoptimal morphology the P3HT:FBR devices exhibit better performance than P3HT:PC60BM devices, used as control, demonstrating that this acceptor shows great promise for further optimization.", "which Mobility type ?", "Electron", 104.0, 112.0], ["Molecular acceptors are promising alternatives to fullerenes (e.g., PC61/71BM) in the fabrication of high-efficiency bulk-heterojunction (BHJ) solar cells. While solution-processed polymer\u2013fullerene BHJ devices have recently met the 10% efficiency threshold, molecular acceptors have yet to prove comparably efficient with polymer donors. At this point in time, it is important to forge a better understanding of the design parameters that directly impact small-molecule (SM) acceptor performance in BHJ solar cells. In this report, we show that 2-(benzo[c][1,2,5]thiadiazol-4-ylmethylene)malononitrile (BM)-terminated SM acceptors can achieve efficiencies as high as 5.3% in BHJ solar cells with the polymer donor PCE10. Through systematic device optimization and characterization studies, we find that the nonfullerene analogues (FBM, CBM, and CDTBM) all perform comparably well, independent of the molecular structure and electronics of the \u03c0-bridge that links the two electron-deficient BM end groups. With estimated...", "which Mobility type ?", "Electron", 972.0, 980.0], ["There has been a growing interest in the design and synthesis of non-fullerene acceptors for organic solar cells that may overcome the drawbacks of the traditional fullerene-based acceptors. Herein, two novel push-pull (acceptor-donor-acceptor) type small-molecule acceptors, that is, ITDI and CDTDI, with indenothiophene and cyclopentadithiophene as the core units and 2-(3-oxo-2,3-dihydroinden-1-ylidene)malononitrile (INCN) as the end-capping units, are designed and synthesized for non-fullerene polymer solar cells (PSCs). 
After device optimization, PSCs based on ITDI exhibit good device performance with a power conversion efficiency (PCE) as high as 8.00%, outperforming the CDTDI-based counterparts fabricated under identical condition (2.75% PCE). We further discuss the performance of these non-fullerene PSCs by correlating the energy level and carrier mobility with the core of non-fullerene acceptors. These results demonstrate that indenothiophene is a promising electron-donating core for high-performance non-fullerene small-molecule acceptors.", "which Mobility type ?", "Electron", 978.0, 986.0], ["Hybrid cylindrical roller thrust bearing washers of type 81212 were manufactured by tailored forming. An AISI 1022M base material, featuring a sufficient strength for structural loads, was cladded with the bearing steel AISI 52100 by plasma transferred arc welding (PTA). Though AISI 52100 is generally regarded as non-weldable, it could be applied as a cladding material by adjusting PTA parameters. The cladded parts were investigated after each individual process step and subsequently tested under rolling contact load. Welding defects that could not be completely eliminated by the subsequent hot forming were characterized by means of scanning acoustic microscopy and micrographs. Below the surface, pores with a typical size of ten \u00b5m were found to a depth of about 0.45 mm. In the material transition zone and between individual weld seams, larger voids were observed. Grinding of the surface after heat treatment caused compressive residual stresses near the surface with a relatively small depth. Fatigue tests were carried out on an FE8 test rig. Eighty-two percent of the calculated rating life for conventional bearings was achieved. A high failure slope of the Weibull regression was determined. A relationship between the weld defects and the fatigue behavior is likely.", "which has material ?", "material", 121.0, 129.0], ["Components subject to rolling contact fatigue, such as gears and rolling bearings, are among the fundamental machine elements in mechanical and vehicle engineering. Rolling bearings are generally not designed to be fatigue-resistant, as the necessary oversizing is not technically and economically marketable. In order to improve the load-bearing capacity, resource efficiency and application possibilities of rolling bearings and other possible multi-material solid components, a new process chain was developed at Leibniz University Hannover as a part of the Collaborative Research Centre 1153 \u201cTailored Forming\u201d. Semi-finished products, already joined before the forming process, are used here to allow a further optimisation of joint quality by forming and finishing. In this paper, a plasma-powder-deposition welding process is presented, which enables precise material deposition and control of the welding depth. For this study, bearing washers (serving as rolling bearing raceways) of a cylindrical roller thrust bearing, similar to type 81212 with a multi-layer structure, were manufactured. A previously non-weldable high-performance material, steel AISI 5140, was used as the cladding layer. Depending on the degree of forming, grain-refinement within the welded material was achieved by thermo-mechanical treatment of the joining zone during the forming process. This grain-refinements lead to an improvement of the mechanical properties and thus, to a higher lifetime for washers of an axial cylindrical roller bearing, which were examined as an exemplary component on a fatigue test bench. 
To evaluate the bearing washers, the results of the bearing tests were compared with industrial bearings and deposition welded axial-bearing washers without subsequent forming. In addition, the bearing washers were analysed micro-tribologically and by scanning acoustic microscopy both after welding and after the forming process. Nano-scratch tests were carried out on the bearing washers to analyse the layer properties. Together with the results of additional microscopic images of the surface and cross-sections, the causes of failure due to fatigue and wear were identified.", "which has material ?", "material", 460.0, 468.0], ["To enhance tribological contacts under cyclic load, high performance materials are required. Utilizing the same high-strength material for the whole machine element is not resource-efficient. In order to manufacture machine elements with extended functionality and specific properties, a combination of different materials can be used in a single component for a more efficient material utilization. By combining different joining techniques with subsequent forming, multi-material or tailored components can be manufactured. To reduce material costs and energy consumption during the component service life, a less expensive lightweight material should be used for regions remote from the highly stressed zones. The scope is not only to obtain the desired shape and dimensions for the finishing process, but also to improve properties like the bond strength between different materials and the microscopic structure of the material. The multi-material approach can be applied to all components requiring different properties in separate component regions such as shafts, bearings or bushes. The current study exemplarily presents the process route for the production of an axial bearing washer by means of tailored forming technology. The bearing washers were chosen to fit axial roller bearings (type 81212). The manufacturing process starts with the laser wire cladding of a hard facing made of martensitic chromium silicon steel (1.4718) on a base substrate of S235 (1.0038) steel. Subsequently, the bearing washers are forged. After finishing, the surfaces of the bearing washers were tested in thrust bearings on an FE-8 test rig. The operational test of the bearings consists of a run-in phase at 250 rpm. A bearing failure is determined by a condition monitoring system. Before and after this, the bearings were inspected by optical and ultrasonic microscopy in order to examine whether the bond of the coat is resistant against rolling contact fatigue. The feasibility of the approach could be proven by an endurance test. The joining zone was able to withstand the rolling contact stresses and the bearing failed due to material-induced fatigue with high cycle stability.", "which has material ?", "material", 126.0, 134.0], ["Within the Collaborative Research Centre 1153 \u201cTailored Forming\u201d a process chain for the manufacturing of hybrid high performance components is developed. Exemplary process steps consist of deposit welding of high performance steel on low-cost steel, pre-shaping by cross-wedge rolling and finishing by milling. Hard material coatings such as Stellite 6 or Delcrome 253 are used as wear or corrosion protection coatings in industrial applications. Scientists of the Institute of Material Science welded these hard material alloys onto a base material, in this case C22.8, to create a hybrid workpiece. 
Scientists of the Institut für Integrierte Produktion Hannover have shown that these hybrid workpieces can be formed without defects (e.g. detachment of the coating) by cross-wedge rolling. After forming, the properties of the coatings are retained or in some cases even improved (e.g. the transition zone between base material and coating). By adjustments in the welding process, it was possible to apply the 100Cr6 rolling bearing steel, as of now declared as non-weldable, on the low-cost steel C22.8. 100Cr6 was formed afterwards in its hybrid bonding state with C22.8 by cross-wedge rolling, thus a component-integrated bearing seat was produced. Even after welding and forming, the rolling bearing steel coating could still be quench-hardened to a hardness of over 60 HRC. This paper shows the potential of forming hybrid billets to tailored parts. Since industrially available standard materials can be used for hard material coatings by this approach, even though they are not weldable by conventional methods, it is not necessary to use expensive materials designed for welding to implement a hybrid component concept.", "which has material ?", "material", 316.0, 324.0], ["Abstract The Tailored Forming process chain is used to manufacture hybrid components and consists of a joining process or Additive Manufacturing for various materials (e.g. deposition welding), subsequent hot forming, machining and heat treatment. In this way, components can be produced with materials adapted to the load case. For this paper, hybrid shafts are produced by deposition welding of a cladding made of X45CrSi9-3 onto a workpiece made from 20MnCr5. The hybrid shafts are then formed by means of cross-wedge rolling. It is investigated how the thickness of the cladding and the type of cooling after hot forming (in air or in water) affect the properties of the cladding. The hybrid shafts are formed without layer separation. However, slight core loosening occurs in the area of the bearing seat due to the Mannesmann effect. The microhardness of the cladding is only slightly affected by the cooling strategy, while the microhardness of the base material is significantly higher in water cooled shafts. The microstructure of the cladding after both cooling strategies consists mainly of martensite. In the base material, air cooling results in a mainly ferritic microstructure with grains of ferrite-pearlite. 
Quenching in water results in a microstructure containing mainly martensite.", "which has material ?", "material", 964.0, 972.0], ["Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179\u00b122 nm were produced. Res was loaded with high EE of 73\u00b10.9% and DL content of 6.2\u00b10.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.", "which Polymer ?", "Pluronic F127", 698.0, 711.0], ["Poor delivery of insoluble anticancer drugs has so far precluded their clinical application. In this study, we developed a tumor-targeting delivery system for insoluble drug (paclitaxel, PTX) by PEGylated O-carboxymethyl-chitosan (CMC) nanoparticles grafted with cyclic Arg-Gly-Asp (RGD) peptide. To improve the loading efficiency (LE), we combined O/W/O double emulsion method with temperature-programmed solidification technique and controlled PTX within the matrix network as in situ nanocrystallite form. Furthermore, these CMC nanoparticles were PEGylated, which could reduce recognition by the reticuloendothelial system (RES) and prolong the circulation time in blood. In addition, further graft of cyclic RGD peptide at the terminal of PEG chain endowed these nanoparticles with higher affinity to in vitro Lewis lung carcinoma (LLC) cells and in vivo tumor tissue. These outstanding properties enabled as-designed nanodevice to exhibit a greater tumor growth inhibition effect and much lower side effects over the commercial formulation Taxol.", "which Polymer ?", "Chitosan", 221.0, 229.0], ["Drug delivery has become an important strategy for improving the chemotherapy efficiency. Here we developed a multifunctionalized nanosized albumin-based drug-delivery system with tumor-targeting, cell-penetrating, and endolysosomal pH-responsive properties. 
cRGD-BSA/KALA/DOX nanoparticles were fabricated by self-assembly through electrostatic interaction between cell-penetrating peptide KALA and cRGD-BSA, with cRGD as a tumor-targeting ligand. Under endosomal/lysosomal acidic conditions, the changes in the electric charges of cRGD-BSA and KALA led to the disassembly of the nanoparticles to accelerate intracellular drug release. cRGD-BSA/KALA/DOX nanoparticles showed an enhanced inhibitory effect in the growth of \u03b1v\u03b23-integrin-overexpressed tumor cells, indicating promising application in cancer treatments.", "which Polymer ?", "Albumin", 140.0, 147.0], ["Poor delivery of insoluble anticancer drugs has so far precluded their clinical application. In this study, we developed a tumor-targeting delivery system for insoluble drug (paclitaxel, PTX) by PEGylated O-carboxymethyl-chitosan (CMC) nanoparticles grafted with cyclic Arg-Gly-Asp (RGD) peptide. To improve the loading efficiency (LE), we combined O/W/O double emulsion method with temperature-programmed solidification technique and controlled PTX within the matrix network as in situ nanocrystallite form. Furthermore, these CMC nanoparticles were PEGylated, which could reduce recognition by the reticuloendothelial system (RES) and prolong the circulation time in blood. In addition, further graft of cyclic RGD peptide at the terminal of PEG chain endowed these nanoparticles with higher affinity to in vitro Lewis lung carcinoma (LLC) cells and in vivo tumor tissue. These outstanding properties enabled as-designed nanodevice to exhibit a greater tumor growth inhibition effect and much lower side effects over the commercial formulation Taxol.", "which Polymer ?", "Arg-Gly-Asp (RGD)", NaN, NaN], [" Acetylcholinesterase was covalently attached to the inner surface of polyethylene tubing. Initial oxidation generated surface carboxylic groups which, on reaction with thionyl chloride, produced acid chloride groups; these were caused to react with excess ethylenediamine. The amine groups on the surface were linked to glutaraldehyde, and acetylcholinesterase was then attached to the surface. Various kinetic tests showed the catalysis of the hydrolysis of acetylthiocholine iodide to be diffusion controlled. The apparent Michaelis constants were strongly dependent on flow rate and were much larger than the value for the free enzyme. Rate measurements over the temperature range 6\u201342 \u00b0C showed changes in activation energies consistent with diffusion control. ", "which Polymer ?", "Polyethylene", 78.0, 90.0], ["Polyethylene terephthalate (PET) is the most important mass\u2010produced thermoplastic polyester used as a packaging material. Recently, thermophilic polyester hydrolases such as TfCut2 from Thermobifida fusca have emerged as promising biocatalysts for an eco\u2010friendly PET recycling process. In this study, postconsumer PET food packaging containers are treated with TfCut2 and show weight losses of more than 50% after 96 h of incubation at 70 \u00b0C. Differential scanning calorimetry analysis indicates that the high linear degradation rates observed in the first 72 h of incubation is due to the high hydrolysis susceptibility of the mobile amorphous fraction (MAF) of PET. The physical aging process of PET occurring at 70 \u00b0C is shown to gradually convert MAF to polymer microstructures with limited accessibility to enzymatic hydrolysis. 
Analysis of the chain\u2010length distribution of degraded PET by nuclear magnetic resonance spectroscopy reveals that MAF is rapidly hydrolyzed via a combinatorial exo\u2010 and endo\u2010type degradation mechanism whereas the remaining PET microstructures are slowly degraded only by endo\u2010type chain scission causing no detectable weight loss. Hence, efficient thermostable biocatalysts are required to overcome the competitive physical aging process for the complete degradation of postconsumer PET materials close to the glass transition temperature of PET.", "which Polymer ?", "Polyethylene terephthalate", 0.0, 26.0], ["Bladder cancer (BC) is a very common cancer. Nonmuscle-invasive bladder cancer (NMIBC) is the most common type of bladder cancer. After postoperative tumor resection, chemotherapy intravesical instillation is recommended as a standard treatment to significantly reduce recurrences. Nanomedicine-mediated delivery of a chemotherapeutic agent targeting cancer could provide a solution to obtain longer residence time and high bioavailability of an anticancer drug. The approach described here provides a nanomedicine with sustained and prolonged delivery of paclitaxel and enhanced therapy of intravesical bladder cancer, which is paclitaxel/chitosan (PTX/CS) nanosuspensions (NSs). The positively charged PTX/CS NSs exhibited a rod-shaped morphology with a mean diameter of about 200 nm. They have good dispersivity in water without any protective agents, and the positively charged properties make them easy to be adsorbed on the inner mucosa of the bladder through electrostatic adsorption. PTX/CS NSs also had a high drug loading capacity and can maintain sustained release of paclitaxel which could be prolonged over 10 days. Cell experiments in vitro demonstrated that PTX/CS NSs had good biocompatibility and effective bladder cancer cell proliferation inhibition. The significant anticancer efficacy against intravesical bladder cancer was verified by an in situ bladder cancer model. The paclitaxel/chitosan nanosuspensions could provide sustained delivery of chemotherapeutic agents with significant anticancer efficacy against intravesical bladder cancer.", "which Polymer ?", "Chitosan", 640.0, 648.0], ["Purpose: To develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel. Methods: Chitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231). Results: These studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. 
This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX. Conclusion: These advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.", "which Polymer ?", "Glyceryl monooleate (GMO)", NaN, NaN], ["Purpose: To determine whether deposition characteristics of ferumoxytol (FMX) iron nanoparticles in tumors, identified by quantitative MRI, may predict tumor lesion response to nanoliposomal irinotecan (nal-IRI). Experimental Design: Eligible patients with previously treated solid tumors had FMX-MRI scans before and following (1, 24, and 72 hours) FMX injection. After MRI acquisition, R2* signal was used to calculate FMX levels in plasma, reference tissue, and tumor lesions by comparison with a phantom-based standard curve. 
Although further experience with this product is needed in patients at higher risk of drug reactions, ferumoxytol is likely to be highly useful in the hospital and outpatient settings for treatment of IDA.", "which Indication ?", "Chronic kidney disease", 78.0, 100.0], ["PURPOSE To compare findings on superparamagnetic iron oxide (SPIO)-enhanced magnetic resonance (MR) images of the head and neck with those from resected lymph node specimens and to determine the effect of such imaging on surgical planning in patients with histopathologically proved squamous cell carcinoma of the head and neck. MATERIALS AND METHODS Thirty patients underwent MR imaging with nonenhanced and SPIO-enhanced (2.6 mg Fe/kg intravenously) T1-weighted (500/15 [repetition time msec/echo time msec]) and T2-weighted (1,900/80) spin-echo and T2-weighted gradient-echo (GRE) (500/15, 15 degrees flip angle) sequences. Signal intensity decrease was measured, and visual analysis was performed. Surgical plans were modified, if necessary, according to MR findings. Histopathologic and MR findings were compared. RESULTS Histopathologic evaluation of 1,029 lymph nodes revealed 69 were metastatic. MR imaging enabled detection of 59 metastases. Regarding lymph node levels, MR diagnosis was correct in 26 of 27 patients who underwent surgery: Only one metastasis was localized in level II with MR imaging, whereas histopathologic evaluation placed it at level III. Extent of surgery was changed in seven patients. SPIO-enhanced T2-weighted GRE was the best sequence for differentiating between benign and malignant lymph nodes. CONCLUSION SPIO-enhanced MR imaging has an important effect on planning the extent of surgery. On a patient basis, SPIO-enhanced MR images compared well with resected specimens.", "which Field of application ?", "Diagnosis", 983.0, 992.0], ["Background: Iron deficiency anemia (IDA) is a common problem in patients with chronic kidney disease (CKD). Use of intravenous (i.v.) iron effectively treats the resultant anemia, but available iron products have side effects or dosing regimens that limit safety and convenience. Objective: Ferumoxytol (Feraheme\u2122) is a new i.v. iron product recently approved for use in treatment of IDA in CKD patients. This article reviews the structure, pharmacokinetics, and clinical trial results on ferumoxytol. The author also offers his opinions on the role of this product in clinical practice. Methods: This review encompasses important information contained in clinical and preclinical studies of ferumoxytol and is supplemented with information from the US Food and Drug Administration. Results/conclusion: Ferumoxytol offers substantial safety and superior efficacy compared with oral iron therapy. As ferumoxytol can be administered as 510 mg in < 1 min, it is substantially more convenient than other iron products in nondialysis patients. Although further experience with this product is needed in patients at higher risk of drug reactions, ferumoxytol is likely to be highly useful in the hospital and outpatient settings for treatment of IDA.", "which Field of application ?", "Therapy", 887.0, 894.0], ["This study aimed to improve skin permeation and deposition of psoralen by using ethosomes and to investigate real-time drug release in the deep skin in rats. We used a uniform design method to evaluate the effects of different ethosome formulations on entrapment efficiency and drug skin deposition. 
Using in vitro and in vivo methods, we investigated skin penetration and release from psoralen-loaded ethosomes in comparison with an ethanol tincture. In in vitro studies, the use of ethosomes was associated with a 6.56-fold greater skin deposition of psoralen than that achieved with the use of the tincture. In vivo skin microdialysis showed that the peak concentration and area under the curve of psoralen from ethosomes were approximately 3.37 and 2.34 times higher, respectively, than those of psoralen from the tincture. Moreover, it revealed that the percutaneous permeability of ethosomes was greater when applied to the abdomen than when applied to the chest or scapulas. Enhanced permeation and skin deposition of psoralen delivered by ethosomes may help reduce toxicity and improve the efficacy of long-term psoralen treatment.", "which Type of Lipid-based nanoparticle ?", "Ethosomes", 80.0, 89.0], ["Beech lignin was oxidatively cleaved in ionic liquids to give phenols, unsaturated propylaromatics, and aromatic aldehydes. A multiparallel batch reactor system was used to screen different ionic liquids and metal catalysts. Mn(NO(3))(2) in 1-ethyl-3-methylimidazolium trifluoromethanesulfonate [EMIM][CF(3)SO(3)] proved to be the most effective reaction system. A larger scale batch reaction with this system in a 300 mL autoclave (11 g lignin starting material) resulted in a maximum conversion of 66.3 % (24 h at 100 degrees C, 84x10(5) Pa air). By adjusting the reaction conditions and catalyst loading, the selectivity of the process could be shifted from syringaldehyde as the predominant product to 2,6-dimethoxy-1,4-benzoquinone (DMBQ). Surprisingly, the latter could be isolated as a pure substance in 11.5 wt % overall yield by a simple extraction/crystallization process.", "which Product ?", "Unsaturated propylaromatics", 71.0, 98.0], ["Abstract Cleavage of C\u2013O bonds in lignin can afford the renewable aryl sources for fine chemicals. However, the high bond energies of these C\u2013O bonds, especially the 4-O-5-type diaryl ether C\u2013O bonds (~314 kJ/mol) make the cleavage very challenging. Here, we report visible-light photoredox-catalyzed C\u2013O bond cleavage of diaryl ethers by an acidolysis with an aryl carboxylic acid and a following one-pot hydrolysis. Two molecules of phenols are obtained from one molecule of diaryl ether at room temperature. The aryl carboxylic acid used for the acidolysis can be recovered. The key to success of the acidolysis is merging visible-light photoredox catalysis using an acridinium photocatalyst and Lewis acid catalysis using Cu(TMHD)2. Preliminary mechanistic studies indicate that the catalytic cycle occurs via a rare selective electrophilic attack of the generated aryl carboxylic radical on the electron-rich aryl ring of the diphenyl ether. This transformation is applied to a gram-scale reaction and the model of 4-O-5 lignin linkages.", "which Product ?", "Phenol", NaN, NaN], ["Purpose: To determine whether deposition characteristics of ferumoxytol (FMX) iron nanoparticles in tumors, identified by quantitative MRI, may predict tumor lesion response to nanoliposomal irinotecan (nal-IRI). Experimental Design: Eligible patients with previously treated solid tumors had FMX-MRI scans before and following (1, 24, and 72 hours) FMX injection. After MRI acquisition, R2* signal was used to calculate FMX levels in plasma, reference tissue, and tumor lesions by comparison with a phantom-based standard curve. 
Patients then received nal-IRI (70 mg/m2 free base strength) biweekly until progression. Two percutaneous core biopsies were collected from selected tumor lesions 72 hours after FMX or nal-IRI. Results: Iron particle levels were quantified by FMX-MRI in plasma, reference tissues, and tumor lesions in 13 of 15 eligible patients. On the basis of a mechanistic pharmacokinetic model, tissue permeability to FMX correlated with early FMX-MRI signals at 1 and 24 hours, while FMX tissue binding contributed at 72 hours. Higher FMX levels (ranked relative to median value of multiple evaluable lesions from 9 patients) were significantly associated with reduction in lesion size by RECIST v1.1 at early time points (P < 0.001 at 1 hour and P < 0.003 at 24 hours FMX-MRI, one-way ANOVA). No association was observed with post-FMX levels at 72 hours. Irinotecan drug levels in lesions correlated with patient's time on treatment (Spearman \u03c1 = 0.7824; P = 0.0016). Conclusions: Correlation between FMX levels in tumor lesions and nal-IRI activity suggests that lesion permeability to FMX and subsequent tumor uptake may be a useful noninvasive and predictive biomarker for nal-IRI response in patients with solid tumors. Clin Cancer Res; 23(14); 3638\u201348. \u00a92017 AACR.", "which Product ?", "Ferumoxytol", 60.0, 71.0], ["Background: Iron deficiency anemia (IDA) is a common problem in patients with chronic kidney disease (CKD). Use of intravenous (i.v.) iron effectively treats the resultant anemia, but available iron products have side effects or dosing regimens that limit safety and convenience. Objective: Ferumoxytol (Feraheme\u2122) is a new i.v. iron product recently approved for use in treatment of IDA in CKD patients. This article reviews the structure, pharmacokinetics, and clinical trial results on ferumoxytol. The author also offers his opinions on the role of this product in clinical practice. Methods: This review encompasses important information contained in clinical and preclinical studies of ferumoxytol and is supplemented with information from the US Food and Drug Administration. Results/conclusion: Ferumoxytol offers substantial safety and superior efficacy compared with oral iron therapy. As ferumoxytol can be administered as 510 mg in < 1 min, it is substantially more convenient than other iron products in nondialysis patients. Although further experience with this product is needed in patients at higher risk of drug reactions, ferumoxytol is likely to be highly useful in the hospital and outpatient settings for treatment of IDA.", "which Product ?", "Ferumoxytol", 291.0, 302.0], ["Beech lignin was oxidatively cleaved in ionic liquids to give phenols, unsaturated propylaromatics, and aromatic aldehydes. A multiparallel batch reactor system was used to screen different ionic liquids and metal catalysts. Mn(NO(3))(2) in 1-ethyl-3-methylimidazolium trifluoromethanesulfonate [EMIM][CF(3)SO(3)] proved to be the most effective reaction system. A larger scale batch reaction with this system in a 300 mL autoclave (11 g lignin starting material) resulted in a maximum conversion of 66.3 % (24 h at 100 degrees C, 84x10(5) Pa air). By adjusting the reaction conditions and catalyst loading, the selectivity of the process could be shifted from syringaldehyde as the predominant product to 2,6-dimethoxy-1,4-benzoquinone (DMBQ). 
Surprisingly, the latter could be isolated as a pure substance in 11.5 wt % overall yield by a simple extraction/crystallization process.", "which Product ?", "Phenol", NaN, NaN], ["A catalyst that cleaves aryl-oxygen bonds but not carbon-carbon bonds may help improve lignin processing. Selective hydrogenolysis of the aromatic carbon-oxygen (C-O) bonds in aryl ethers is an unsolved synthetic problem important for the generation of fuels and chemical feedstocks from biomass and for the liquefaction of coal. Currently, the hydrogenolysis of aromatic C-O bonds requires heterogeneous catalysts that operate at high temperature and pressure and lead to a mixture of products from competing hydrogenolysis of aliphatic C-O bonds and hydrogenation of the arene. Here, we report hydrogenolyses of aromatic C-O bonds in alkyl aryl and diaryl ethers that form exclusively arenes and alcohols. This process is catalyzed by a soluble nickel carbene complex under just 1 bar of hydrogen at temperatures of 80 to 120\u00b0C; the relative reactivity of ether substrates scale as Ar-OAr>>Ar-OMe>ArCH2-OMe (Ar, Aryl; Me, Methyl). Hydrogenolysis of lignin model compounds highlights the potential of this approach for the conversion of refractory aryl ether biopolymers to hydrocarbons.", "which Product ?", "Arenes", 687.0, 693.0], ["Catalytic bio\u2010oil upgrading to produce renewable fuels has attracted increasing attention in response to the decreasing oil reserves and the increased fuel demand worldwide. Herein, the catalytic hydrodeoxygenation (HDO) of guaiacol with carbon\u2010supported non\u2010sulfided metal catalysts was investigated. Catalytic tests were performed at 4.0 MPa and temperatures ranging from 623 to 673 K. Both Ru/C and Mo/C catalysts showed promising catalytic performance in HDO. The selectivity to benzene was 69.5 and 83.5 % at 653 K over Ru/C and 10Mo/C catalysts, respectively. Phenol, with a selectivity as high as 76.5 %, was observed mainly on 1Mo/C. However, the reaction pathway over both catalysts is different. Over the Ru/C catalyst, the O\u2013CH3 bond was cleaved to form the primary intermediate catechol, whereas only traces of catechol were detected over Mo/C catalysts. In addition, two types of active sites were detected over Mo samples after reduction in H2 at 973 K. Catalytic studies showed that the demethoxylation of guaiacol is performed over residual MoOx sites with high selectivity to phenol whereas the consecutive HDO of phenol is performed over molybdenum carbide species, which is widely available only on the 10Mo/C sample. Different deactivation patterns were also observed over Ru/C and Mo/C catalysts.", "which Product ?", "benzene", 483.0, 490.0], ["A simple and efficient hydrodeoxygenation strategy is described to selectively generate and separate high-value alkylphenols from pyrolysis bio-oil, produced directly from lignocellulosic biomass. The overall process is efficient and only requires low pressures of hydrogen gas (5 bar). Initially, an investigation using model compounds indicates that MoCx/C is a promising catalyst for targeted hydrodeoxygenation, enabling selective retention of the desired Ar-OH substituents. By applying this procedure to pyrolysis bio-oil, the primary products (phenol/4-alkylphenols and hydrocarbons) are easily separable from each other by short-path column chromatography, serving as potential valuable feedstocks for industry. 
The strategy requires no prior fractionation of the lignocellulosic biomass, no further synthetic steps, and no input of additional (e.g., petrochemical) platform molecules.", "which Product ?", "phenol", 552.0, 558.0], ["ABSTRACT Extracellular vesicles (EVs) hold great potential as novel systems for nucleic acid delivery due to their natural composition. Our goal was to load EVs with microRNA that are synthesized by the cells that produce the EVs. HEK293T cells were engineered to produce EVs expressing a lysosomal associated membrane, Lamp2a fusion protein. The gene encoding pre-miR-199a was inserted into an artificial intron of the Lamp2a fusion protein. The TAT peptide/HIV-1 transactivation response (TAR) RNA interacting peptide was exploited to enhance the EV loading of the pre-miR-199a containing a modified TAR RNA loop. Computational modeling demonstrated a stable interaction between the modified pre-miR-199a loop and TAT peptide. EMSA gel shift, recombinant Dicer processing and luciferase binding assays confirmed the binding, processing and functionality of the modified pre-miR-199a. The TAT-TAR interaction enhanced the loading of the miR-199a into EVs by 65-fold. Endogenously loaded EVs were ineffective at delivering active miR-199a-3p therapeutic to recipient SK-Hep1 cells. While the low degree of miRNA loading into EVs through this approach resulted in inefficient distribution of RNA cargo into recipient cells, the TAT TAR strategy to load miRNA into EVs may be valuable in other drug delivery approaches involving miRNA mimics or other hairpin containing RNAs.", "which Membrane protein ?", "Lamp2a", 320.0, 326.0], ["The purpose of this study was to evaluate two novel liposomal formulations of cisplatin as potential therapeutic agents for treatment of the F98 rat glioma. The first was a commercially produced agent, Lipoplatin\u2122, which currently is in a Phase III clinical trial for treatment of non-small cell lung cancer (NSCLC). The second, produced in our laboratory, was based on the ability of cisplatin to form coordination complexes with lipid cholesteryl hemisuccinate (CHEMS). The in vitro tumoricidal activity of the former previously has been described in detail by other investigators. The CHEMS liposomal formulation had a Pt loading efficiency of 25% and showed more potent in vitro cytotoxicity against F98 glioma cells than free cisplatin at 24 h. In vivo CHEMS liposomes showed high retention at 24 h after intracerebral (i.c.) convection enhanced delivery (CED) to F98 glioma bearing rats. Neurotoxicologic studies were carried out in non-tumor bearing Fischer rats following i.c. CED of Lipoplatin\u2122 or CHEMS liposomes or their \u201chollow\u201d counterparts. Unexpectedly, Lipoplatin\u2122 was highly neurotoxic when given i.c. by CED and resulted in death immediately following or within a few days after administration. Similarly \u201chollow\u201d Lipoplatin\u2122 liposomes showed similar neurotoxicity indicating that this was due to the liposomes themselves rather than the cisplatin. This was particularly surprising since Lipoplatin\u2122 has been well tolerated when administered intravenously. In contrast, CHEMS liposomes and their \u201chollow\u201d counterparts were clinically well tolerated. However, a variety of dose dependent neuropathologic changes from none to severe were seen at either 10 or 14 d following their administration. 
These findings suggest that further refinements in the design and formulation of cisplatin containing liposomes will be required before they can be administered i.c. by CED for the treatment of brain tumors and that a formulation that may be safe when given systemically may be highly neurotoxic when administered directly into the brain.", "which Type of nanocarrier ?", "Liposomes", 764.0, 773.0], ["Effectiveness of CNS-acting drugs depends on the localization, targeting, and capacity to be transported through the blood\u2013brain barrier (BBB) which can be achieved by designing brain-targeting delivery vectors. Hence, the objective of this study was to screen the formulation and process variables affecting the performance of sertraline (Ser-HCl)-loaded pegylated and glycosylated liposomes. The prepared vectors were characterized for Ser-HCl entrapment, size, surface charge, release behavior, and in vitro transport through the BBB. Furthermore, the compatibility among liposomal components was assessed using SEM, FTIR, and DSC analysis. Through a thorough screening study, enhancement of Ser-HCl entrapment, nanosized liposomes with low skewness, maximized stability, and controlled drug leakage were attained. The solid-state characterization revealed remarkable interaction between Ser-HCl and the charging agent to determine drug entrapment and leakage. Moreover, results of liposomal transport through mouse brain endothelial polyoma cells demonstrated greater capacity of the proposed glycosylated liposomes to target the cerebellar due to its higher density of GLUT1 and higher glucose utilization. This transport capacity was confirmed by the inhibiting action of both cytochalasin B and phenobarbital. Using C6 glioma cells model, flow cytometry, time-lapse live cell imaging, and in vivo NIR fluorescence imaging demonstrated that optimized glycosylated liposomes can be transported through the BBB by classical endocytosis, as well as by specific transcytosis. In conclusion, the current study proposed a thorough screening of important formulation and process variabilities affecting brain-targeting liposomes for further scale-up processes.", "which Type of nanocarrier ?", "Liposomes", 383.0, 392.0], ["Nanoemulsions are kinetically stable liquid-in-liquid dispersions with droplet sizes on the order of 100 nm. Their small size leads to useful properties such as high surface area per unit volume, robust stability, optically transparent appearance, and tunable rheology. Nanoemulsions are finding application in diverse areas such as drug delivery, food, cosmetics, pharmaceuticals, and material synthesis. Additionally, they serve as model systems to understand nanoscale colloidal dispersions. High and low energy methods are used to prepare nanoemulsions, including high pressure homogenization, ultrasonication, phase inversion temperature and emulsion inversion point, as well as recently developed approaches such as bubble bursting method. In this review article, we summarize the major methods to prepare nanoemulsions, theories to predict droplet size, physical conditions and chemical additives which affect droplet stability, and recent applications.", "which Type of nanocarrier ?", "Nanoemulsions", 0.0, 13.0], ["Recent decades have witnessed the fast and impressive development of nanocarriers as a drug delivery system. 
Considering the safety, delivery efficiency and stability of nanocarriers, there are many obstacles in accomplishing successful clinical translation of these nanocarrier-based drug delivery systems. The gap has urged drug delivery scientists to develop innovative nanocarriers with high compatibility, stability and longer circulation time. Exosomes are nanometer-sized, lipid-bilayer-enclosed extracellular vesicles secreted by many types of cells. Exosomes serving as versatile drug vehicles have attracted increasing attention due to their inherent ability of shuttling proteins, lipids and genes among cells and their natural affinity to target cells. Attractive features of exosomes, such as nanoscopic size, low immunogenicity, high biocompatibility, encapsulation of various cargoes and the ability to overcome biological barriers, distinguish them from other nanocarriers. To date, exosome-based nanocarriers delivering small molecule drugs as well as bioactive macromolecules have been developed for the treatment of many prevalent and obstinate diseases including cancer, CNS disorders and some other degenerative diseases. Exosome-based nanocarriers have a huge prospect in overcoming many hindrances encountered in drug and gene delivery. This review highlights the advances as well as challenges of exosome-based nanocarriers as drug vehicles. Special focus has been placed on the advantages of exosomes in delivering various cargoes and in treating obstinate diseases, aiming to offer new insights for exploring exosomes in the field of drug delivery.", "which Type of nanocarrier ?", "Extracellular vesicles", 503.0, 525.0], ["Abstract PEG\u2013lipid micelles, primarily conjugates of polyethylene glycol (PEG) and distearyl phosphatidylethanolamine (DSPE) or PEG\u2013DSPE, have emerged as promising drug-delivery carriers to address the shortcomings associated with new molecular entities with suboptimal biopharmaceutical attributes. The flexibility in PEG\u2013DSPE design coupled with the simplicity of physical drug entrapment have distinguished PEG\u2013lipid micelles as versatile and effective drug carriers for cancer therapy. They were shown to overcome several limitations of poorly soluble drugs such as non-specific biodistribution and targeting, lack of water solubility and poor oral bioavailability. Therefore, considerable efforts have been made to exploit the full potential of these delivery systems; to entrap poorly soluble drugs and target pathological sites both passively through the enhanced permeability and retention (EPR) effect and actively by linking the terminal PEG groups with targeting ligands, which were shown to increase delivery efficiency and tissue specificity. This article reviews the current state of PEG\u2013lipid micelles as delivery carriers for poorly soluble drugs, their biological implications and recent developments in exploring their active targeting potential. In addition, this review sheds light on the physical properties of PEG\u2013lipid micelles and their relevance to the inherent advantages and applications of PEG\u2013lipid micelles for drug delivery.", "which Type of nanocarrier ?", "Lipid micelles", 13.0, 27.0], ["Solid lipid nanoparticles (SLNs) are nanocarriers developed as substitute colloidal drug delivery systems parallel to liposomes, lipid emulsions, polymeric nanoparticles, and so forth. 
Owing to their unique size dependent properties and ability to incorporate drugs, SLNs present an opportunity to build up new therapeutic prototypes for drug delivery and targeting. SLNs hold great potential for attaining the goal of targeted and controlled drug delivery, which currently draws the interest of researchers worldwide. The present review sheds light on different aspects of SLNs including fabrication and characterization techniques, formulation variables, routes of administration, surface modifications, toxicity, and biomedical applications.", "which Type of nanocarrier ?", "Solid lipid nanoparticles", 0.0, 25.0], ["Abstract Background: Delivery of drugs to brain is a subtle task in the therapy of many severe neurological disorders. Solid lipid nanoparticles (SLN) easily diffuse the blood\u2013brain barrier (BBB) due to their lipophilic nature. Furthermore, ligand conjugation on SLN surface enhances the targeting efficiency. Lactoferrin (Lf) conjugated SLN system is first time attempted for effective brain targeting in this study. Purpose: Preparation of Lf-modified docetaxel (DTX)-loaded SLN for proficient delivery of DTX to brain. Methods: DTX-loaded SLN were prepared using emulsification and solvent evaporation method and conjugation of Lf on SLN surface (C-SLN) was attained through carbodiimide chemistry. These lipidic nanoparticles were evaluated by DLS, AFM, FTIR, XRD techniques and in vitro release studies. Colloidal stability study was performed in biologically simulated environment (normal saline and serum). These lipidic nanoparticles were further evaluated for its targeting mechanism for uptake in brain tumour cells and brain via receptor saturation studies and distribution studies in brain, respectively. Results: Particle size of lipidic nanoparticles was found to be optimum. Surface morphology (zeta potential, AFM) and surface chemistry (FTIR) confirmed conjugation of Lf on SLN surface. Cytotoxicity studies revealed augmented apoptotic activity of C-SLN than SLN and DTX. Enhanced cytotoxicity was demonstrated by receptor saturation and uptake studies. Brain concentration of DTX was elevated significantly with C-SLN than marketed formulation. Conclusions: It is evident from the cytotoxicity, uptake that SLN has potential to deliver drug to brain than marketed formulation but conjugating Lf on SLN surface (C-SLN) further increased the targeting potential for brain tumour. Moreover, brain distribution studies corroborated the use of C-SLN as a viable vehicle to target drug to brain. Hence, C-SLN was demonstrated to be a promising DTX delivery system to brain as it possessed remarkable biocompatibility, stability and efficacy than other reported delivery systems.", "which Type of nanocarrier ?", "Solid lipid nanoparticles (SLN)", NaN, NaN], ["The purpose of this study was to evaluate two novel liposomal formulations of cisplatin as potential therapeutic agents for treatment of the F98 rat glioma. The first was a commercially produced agent, Lipoplatin\u2122, which currently is in a Phase III clinical trial for treatment of non-small cell lung cancer (NSCLC). The second, produced in our laboratory, was based on the ability of cisplatin to form coordination complexes with lipid cholesteryl hemisuccinate (CHEMS). The in vitro tumoricidal activity of the former previously has been described in detail by other investigators. 
The CHEMS liposomal formulation had a Pt loading efficiency of 25% and showed more potent in vitro cytotoxicity against F98 glioma cells than free cisplatin at 24 h. In vivo CHEMS liposomes showed high retention at 24 h after intracerebral (i.c.) convection enhanced delivery (CED) to F98 glioma bearing rats. Neurotoxicologic studies were carried out in non-tumor bearing Fischer rats following i.c. CED of Lipoplatin\u2122 or CHEMS liposomes or their \u201chollow\u201d counterparts. Unexpectedly, Lipoplatin\u2122 was highly neurotoxic when given i.c. by CED and resulted in death immediately following or within a few days after administration. Similarly \u201chollow\u201d Lipoplatin\u2122 liposomes showed similar neurotoxicity indicating that this was due to the liposomes themselves rather than the cisplatin. This was particularly surprising since Lipoplatin\u2122 has been well tolerated when administered intravenously. In contrast, CHEMS liposomes and their \u201chollow\u201d counterparts were clinically well tolerated. However, a variety of dose dependent neuropathologic changes from none to severe were seen at either 10 or 14 d following their administration. These findings suggest that further refinements in the design and formulation of cisplatin containing liposomes will be required before they can be administered i.c. by CED for the treatment of brain tumors and that a formulation that may be safe when given systemically may be highly neurotoxic when administered directly into the brain.", "which Disadvantages ?", "Neurotoxicity", 1269.0, 1282.0], ["ABSTRACT Extracellular vesicles (EVs) hold great potential as novel systems for nucleic acid delivery due to their natural composition. Our goal was to load EVs with microRNA that are synthesized by the cells that produce the EVs. HEK293T cells were engineered to produce EVs expressing a lysosomal associated membrane, Lamp2a fusion protein. The gene encoding pre-miR-199a was inserted into an artificial intron of the Lamp2a fusion protein. The TAT peptide/HIV-1 transactivation response (TAR) RNA interacting peptide was exploited to enhance the EV loading of the pre-miR-199a containing a modified TAR RNA loop. Computational modeling demonstrated a stable interaction between the modified pre-miR-199a loop and TAT peptide. EMSA gel shift, recombinant Dicer processing and luciferase binding assays confirmed the binding, processing and functionality of the modified pre-miR-199a. The TAT-TAR interaction enhanced the loading of the miR-199a into EVs by 65-fold. Endogenously loaded EVs were ineffective at delivering active miR-199a-3p therapeutic to recipient SK-Hep1 cells. While the low degree of miRNA loading into EVs through this approach resulted in inefficient distribution of RNA cargo into recipient cells, the TAT TAR strategy to load miRNA into EVs may be valuable in other drug delivery approaches involving miRNA mimics or other hairpin containing RNAs.", "which Cargo ?", "Pre-miR-199a", 361.0, 373.0], ["Recent decades have witnessed the fast and impressive development of nanocarriers as a drug delivery system. Considering the safety, delivery efficiency and stability of nanocarriers, there are many obstacles in accomplishing successful clinical translation of these nanocarrier-based drug delivery systems. The gap has urged drug delivery scientists to develop innovative nanocarriers with high compatibility, stability and longer circulation time. 
Exosomes are nanometer-sized, lipid-bilayer-enclosed extracellular vesicles secreted by many types of cells. Exosomes serving as versatile drug vehicles have attracted increasing attention due to their inherent ability of shuttling proteins, lipids and genes among cells and their natural affinity to target cells. Attractive features of exosomes, such as nanoscopic size, low immunogenicity, high biocompatibility, encapsulation of various cargoes and the ability to overcome biological barriers, distinguish them from other nanocarriers. To date, exosome-based nanocarriers delivering small molecule drugs as well as bioactive macromolecules have been developed for the treatment of many prevalent and obstinate diseases including cancer, CNS disorders and some other degenerative diseases. Exosome-based nanocarriers have a huge prospect in overcoming many hindrances encountered in drug and gene delivery. This review highlights the advances as well as challenges of exosome-based nanocarriers as drug vehicles. Special focus has been placed on the advantages of exosomes in delivering various cargoes and in treating obstinate diseases, aiming to offer new insights for exploring exosomes in the field of drug delivery.", "which Cargo ?", "Protein", NaN, NaN], ["Abstract PEG\u2013lipid micelles, primarily conjugates of polyethylene glycol (PEG) and distearyl phosphatidylethanolamine (DSPE) or PEG\u2013DSPE, have emerged as promising drug-delivery carriers to address the shortcomings associated with new molecular entities with suboptimal biopharmaceutical attributes. The flexibility in PEG\u2013DSPE design coupled with the simplicity of physical drug entrapment have distinguished PEG\u2013lipid micelles as versatile and effective drug carriers for cancer therapy. They were shown to overcome several limitations of poorly soluble drugs such as non-specific biodistribution and targeting, lack of water solubility and poor oral bioavailability. Therefore, considerable efforts have been made to exploit the full potential of these delivery systems; to entrap poorly soluble drugs and target pathological sites both passively through the enhanced permeability and retention (EPR) effect and actively by linking the terminal PEG groups with targeting ligands, which were shown to increase delivery efficiency and tissue specificity. This article reviews the current state of PEG\u2013lipid micelles as delivery carriers for poorly soluble drugs, their biological implications and recent developments in exploring their active targeting potential. In addition, this review sheds light on the physical properties of PEG\u2013lipid micelles and their relevance to the inherent advantages and applications of PEG\u2013lipid micelles for drug delivery.", "which Composition ?", "Water", 622.0, 627.0], ["Unique structured nanomaterials can facilitate the direct electron transfer between redox proteins and the electrodes. Here, in situ directed growth on an electrode of a ZnO/Cu nanocomposite was prepared by a simple corrosion approach, which enables robust mechanical adhesion and electrical contact between the nanostructured ZnO and the electrodes. This is of great help in realizing the direct electron transfer between the electrode surface and the redox protein. SEM images demonstrate that the morphology of the ZnO/Cu nanocomposite has a large specific surface area, which is favorable for immobilizing the biomolecules and constructing biosensors. 
Using glucose oxidase (GOx) as a model, this ZnO/Cu nanocomposite is employed for immobilization of GOx and the construction of the glucose biosensor. Direct electron transfer of GOx is achieved at ZnO/Cu nanocomposite with a high heterogeneous electron transfer rate constant of 0.67 \u00b1 0.06 s(-1). Such ZnO/Cu nanocomposite provides a good matrix for direct electrochemistry of enzymes and mediator-free enzymatic biosensors.", "which Type of Biosensor ?", "Enzymes", 1024.0, 1031.0], ["We report herein a glucose biosensor based on glucose oxidase (GOx) immobilized on ZnO nanorod array grown by hydrothermal decomposition. In a phosphate buffer solution with a pH value of 7.4, negatively charged GOx was immobilized on positively charged ZnO nanorods through electrostatic interaction. At an applied potential of +0.8V versus Ag\u2215AgCl reference electrode, ZnO nanorods based biosensor presented a high and reproducible sensitivity of 23.1\u03bcAcm\u22122mM\u22121 with a response time of less than 5s. The biosensor shows a linear range from 0.01 to 3.45 mM and an experiment limit of detection of 0.01mM. An apparent Michaelis-Menten constant of 2.9mM shows a high affinity between glucose and GOx immobilized on ZnO nanorods.", "which Type of Biosensor ?", "Glucose", 19.0, 26.0], ["Single crystal zinc oxide nanocombs were synthesized in bulk quantity by vapor phase transport. A glucose biosensor was constructed using these nanocombs as supporting materials for glucose oxidase (GOx) loading. The zinc oxide nanocomb glucose biosensor showed a high sensitivity (15.33\u03bcA\u2215cm2mM) for glucose detection and high affinity of GOx to glucose (the apparent Michaelis-Menten constant KMapp=2.19mM). The detection limit measured was 0.02mM. These results demonstrate that zinc oxide nanostructures have potential applications in biosensors.", "which Type of Biosensor ?", "Glucose", 98.0, 105.0], ["Highly oriented single-crystal ZnO nanotube (ZNT) arrays were prepared by a two-step electrochemical/chemical process on indium-doped tin oxide (ITO) coated glass in an aqueous solution. The prepared ZNT arrays were further used as a working electrode to fabricate an enzyme-based glucose biosensor through immobilizing glucose oxidase in conjunction with a Nafion coating. The present ZNT arrays-based biosensor exhibits high sensitivity of 30.85 \u03bcA cm\u22122 mM\u22121 at an applied potential of +0.8 V vs. SCE, wide linear calibration ranges from 10 \u03bcM to 4.2 mM, and a low limit of detection (LOD) at 10 \u03bcM (measured) for sensing of glucose. The apparent Michaelis\u2212Menten constant KMapp was calculated to be 2.59 mM, indicating a higher bioactivity for the biosensor.", "which Type of Biosensor ?", "Glucose", 281.0, 288.0], ["Abstract Zinc oxide nanoparticles (ZnONPs)-modified carbon paste enzyme electrodes (ZnONPsMCPE) were developed for determination of glucose. The determination of glucose was carried out by oxidation of H2O2 at +0.4 V. ZnONPsMCPE provided biocompatible microenvironment for GOx and necessary pathway of electron transfer between GOx and electrode. The response of GOx/ZnONPsMCPE was proportional to glucose concentration and detection limit was 9.1 \u00d7 10\u20133 mM. Km and Imax were calculated as 0.124 mM and 2.033 \u03bcA. The developed biosensor exhibits high analytical performance with wide linear range (9.1 \u00d7 10\u20133\u201314.5 mM), selectivity and reproducibility. 
Serum glucose results allow us to ascertain practical utility of GOx/ZnONPsMCPE biosensor.", "which Type of Biosensor ?", "Glucose", 131.0, 138.0], ["UNLABELLED Interactivity between humans and smart systems, including wearable, body-attachable, or implantable platforms, can be enhanced by realization of multifunctional human-machine interfaces, where a variety of sensors collect information about the surrounding environment, intentions, or physiological conditions of the human to which they are attached. Here, we describe a stretchable, transparent, ultrasensitive, and patchable strain sensor that is made of a novel sandwich-like stacked piezoresistive nanohybrid film of single-wall carbon nanotubes (SWCNTs) and a conductive elastomeric composite of polyurethane (PU)-poly(3,4-ethylenedioxythiophene) polystyrenesulfonate (PEDOT:PSS). This sensor, which can detect small strains on human skin, was created using environmentally benign water-based solution processing. We attributed the tunability of strain sensitivity (i.e., gauge factor), stability, and optical transparency to enhanced formation of percolating networks between conductive SWCNTs and PEDOT phases at interfaces in the stacked PU-PEDOT:PSS/SWCNT/PU-PEDOT:PSS structure. The mechanical stability, high stretchability of up to 100%, optical transparency of 62%, and gauge factor of 62 suggested that when attached to the skin of the face, this sensor would be able to detect small strains induced by emotional expressions such as laughing and crying, as well as eye movement, and we confirmed this experimentally.", "which Device Location ?", "Face", 1262.0, 1266.0], ["Abstract Background The biological research literature is a major repository of knowledge. As the amount of literature increases, it will get harder to find the information of interest on a particular topic. There has been an increasing amount of work on text mining this literature, but comparing this work is hard because of a lack of standards for making comparisons. To address this, we worked with colleagues at the Protein Design Group, CNB-CSIC, Madrid to develop BioCreAtIvE (Critical Assessment for Information Extraction in Biology), an open common evaluation of systems on a number of biological text mining tasks. We report here on task 1A, which deals with finding mentions of genes and related entities in text. \"Finding mentions\" is a basic task, which can be used as a building block for other text mining tasks. The task makes use of data and evaluation software provided by the (US) National Center for Biotechnology Information (NCBI). Results 15 teams took part in task 1A. A number of teams achieved scores over 80% F-measure (balanced precision and recall). The teams that tried to use their task 1A systems to help on other BioCreAtIvE tasks reported mixed results. Conclusion The 80% plus F-measure results are good, but still somewhat lag the best scores achieved in some other domains such as newswire, due in part to the complexity and length of gene names, compared to person or organization names in newswire.", "which Evaluation metrics ?", "Recall", 1071.0, 1077.0], ["Abstract The Precision Medicine Initiative is a multicenter effort aiming at formulating personalized treatments leveraging on individual patient data (clinical, genome sequence and functional genomic data) together with the information in large knowledge bases (KBs) that integrate genome annotation, disease association studies, electronic health records and other data types. 
The biomedical literature provides a rich foundation for populating these KBs, reporting genetic and molecular interactions that provide the scaffold for the cellular regulatory systems and detailing the influence of genetic variants in these interactions. The goal of BioCreative VI Precision Medicine Track was to extract this particular type of information and was organized in two tasks: (i) document triage task, focused on identifying scientific literature containing experimentally verified protein\u2013protein interactions (PPIs) affected by genetic mutations and (ii) relation extraction task, focused on extracting the affected interactions (protein pairs). To assist system developers and task participants, a large-scale corpus of PubMed documents was manually annotated for this task. Ten teams worldwide contributed 22 distinct text-mining models for the document triage task, and six teams worldwide contributed 14 different text-mining systems for the relation extraction task. When comparing the text-mining system predictions with human annotations, for the triage task, the best F-score was 69.06%, the best precision was 62.89%, the best recall was 98.0% and the best average precision was 72.5%. For the relation extraction task, when taking homologous genes into account, the best F-score was 37.73%, the best precision was 46.5% and the best recall was 54.1%. Submitted systems explored a wide range of methods, from traditional rule-based, statistical and machine learning systems to state-of-the-art deep learning methods. Given the level of participation and the individual team results we find the precision medicine track to be successful in engaging the text-mining research community. In the meantime, the track produced a manually annotated corpus of 5509 PubMed documents developed by BioGRID curators and relevant for precision medicine. The data set is freely available to the community, and the specific interactions have been integrated into the BioGRID data set. In addition, this challenge provided the first results of automatically identifying PubMed articles that describe PPI affected by mutations, as well as extracting the affected relations from those articles. Still, much progress is needed for computer-assisted precision medicine text mining to become mainstream. Future work should focus on addressing the remaining technical challenges and incorporating the practical benefits of text-mining tools into real-world precision medicine information-related curation.", "which Evaluation metrics ?", "Precision", 13.0, 22.0], ["The BioCreative LitCovid track calls for a community effort to tackle automated topic annotation for COVID-19 literature. The number of COVID-19-related articles in the literature is growing by about 10,000 articles per month, significantly challenging curation efforts and downstream interpretation. LitCovid is a literature database of COVID-19-related articles in PubMed, which has accumulated more than 180,000 articles with millions of accesses each month by users worldwide. The rapid literature growth significantly increases the burden of LitCovid curation, especially for topic annotations. Topic annotation in LitCovid assigns one or more (up to eight) labels to articles. The annotated topics have been widely used both directly in LitCovid (e.g., accounting for ~20% of total uses) and downstream studies such as knowledge network generation and citation analysis. It is, therefore, important to develop innovative text mining methods to tackle the challenge. 
We organized the BioCreative LitCovid track to call for a community effort to tackle automated topic annotation for COVID-19 literature. This article summarizes the BioCreative LitCovid track in terms of data collection and team participation. The dataset is publicly available via https://ftp.ncbi.nlm.nih.gov/pub/lu/LitCovid/biocreative/. It consists of over 30K PubMed articles, one of the largest multilabel classification datasets on biomedical literature. There were 80 submissions in total from 19 teams worldwide. The highest-performing submissions achieved 0.8875, 0.9181, and 0.9394 for macro F1-score, micro F1-score, and instance-based F1-score, respectively. We look forward to further participation in developing biomedical text mining methods in response to the rapid growth of the COVID-19 literature. Keywords\u2014biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; multi-label classification; COVID-19; LitCovid;", "which Evaluation metrics ?", "Micro F1", 1583.0, 1591.0], ["Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive/negative classification from 80% up to 85.4%. The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7%, an improvement of 9.7% over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.", "which Evaluation metrics ?", "Accuracy", 794.0, 802.0], ["The research on the automatic ontology construction has become very popular. It is very useful for the ontology construction to reengineer the existing knowledge resource, such as the thesauri. But many relationships in the thesauri are incorrect or are defined too broadly. Accordingly, extracting ontological relations from the thesauri becomes very important. This paper proposes the method to reengineer the thesauri to ontology, and especially how to obtain the correct semantic relations. The test result shows the accuracy of the semantic relations is 86.23%, and one is the hierarchical relations with 89.02%, and the other is non-hierarchical relations with 83.44%.", "which Evaluation metrics ?", "Accuracy", 535.0, 543.0], ["Abstract Background The biological research literature is a major repository of knowledge. As the amount of literature increases, it will get harder to find the information of interest on a particular topic. There has been an increasing amount of work on text mining this literature, but comparing this work is hard because of a lack of standards for making comparisons. 
To address this, we worked with colleagues at the Protein Design Group, CNB-CSIC, Madrid to develop BioCreAtIvE (Critical Assessment for Information Extraction in Biology), an open common evaluation of systems on a number of biological text mining tasks. We report here on task 1A, which deals with finding mentions of genes and related entities in text. \"Finding mentions\" is a basic task, which can be used as a building block for other text mining tasks. The task makes use of data and evaluation software provided by the (US) National Center for Biotechnology Information (NCBI). Results 15 teams took part in task 1A. A number of teams achieved scores over 80% F-measure (balanced precision and recall). The teams that tried to use their task 1A systems to help on other BioCreAtIvE tasks reported mixed results. Conclusion The 80% plus F-measure results are good, but still somewhat lag the best scores achieved in some other domains such as newswire, due in part to the complexity and length of gene names, compared to person or organization names in newswire.", "which Evaluation metrics ?", "Precision", 1057.0, 1066.0], ["Abstract The Precision Medicine Initiative is a multicenter effort aiming at formulating personalized treatments leveraging on individual patient data (clinical, genome sequence and functional genomic data) together with the information in large knowledge bases (KBs) that integrate genome annotation, disease association studies, electronic health records and other data types. The biomedical literature provides a rich foundation for populating these KBs, reporting genetic and molecular interactions that provide the scaffold for the cellular regulatory systems and detailing the influence of genetic variants in these interactions. The goal of BioCreative VI Precision Medicine Track was to extract this particular type of information and was organized in two tasks: (i) document triage task, focused on identifying scientific literature containing experimentally verified protein\u2013protein interactions (PPIs) affected by genetic mutations and (ii) relation extraction task, focused on extracting the affected interactions (protein pairs). To assist system developers and task participants, a large-scale corpus of PubMed documents was manually annotated for this task. Ten teams worldwide contributed 22 distinct text-mining models for the document triage task, and six teams worldwide contributed 14 different text-mining systems for the relation extraction task. When comparing the text-mining system predictions with human annotations, for the triage task, the best F-score was 69.06%, the best precision was 62.89%, the best recall was 98.0% and the best average precision was 72.5%. For the relation extraction task, when taking homologous genes into account, the best F-score was 37.73%, the best precision was 46.5% and the best recall was 54.1%. Submitted systems explored a wide range of methods, from traditional rule-based, statistical and machine learning systems to state-of-the-art deep learning methods. Given the level of participation and the individual team results we find the precision medicine track to be successful in engaging the text-mining research community. In the meantime, the track produced a manually annotated corpus of 5509 PubMed documents developed by BioGRID curators and relevant for precision medicine. The data set is freely available to the community, and the specific interactions have been integrated into the BioGRID data set. 
In addition, this challenge provided the first results of automatically identifying PubMed articles that describe PPI affected by mutations, as well as extracting the affected relations from those articles. Still, much progress is needed for computer-assisted precision medicine text mining to become mainstream. Future work should focus on addressing the remaining technical challenges and incorporating the practical benefits of text-mining tools into real-world precision medicine information-related curation.", "which Evaluation metrics ?", "Recall", 1533.0, 1539.0], ["This paper presents our recent work on the design and development of a new, large scale dataset, which we name MS MARCO, for MAchine Reading COmprehension. This new dataset is aimed to overcome a number of well-known weaknesses of previous publicly available datasets for the same task of reading comprehension and question answering. In MS MARCO, all questions are sampled from real anonymized user queries. The context passages, from which answers in the dataset are derived, are extracted from real web documents using the most advanced version of the Bing search engine. The answers to the queries are human generated. Finally, a subset of these queries has multiple answers. We aim to release one million queries and the corresponding answers in the dataset, which, to the best of our knowledge, is the most comprehensive real-world dataset of its kind in both quantity and quality. We are currently releasing 100,000 queries with their corresponding answers to inspire work in reading comprehension and question answering along with gathering feedback from the research community.", "which Type of knowledge source ?", "Documents", 506.0, 515.0], ["The growth of the Web in recent years has resulted in the development of various online platforms that provide healthcare information services. These platforms contain an enormous amount of information, which could be beneficial for a large number of people. However, navigating through such knowledgebases to answer specific queries of healthcare consumers is a challenging task. A majority of such queries might be non-factoid in nature, and hence, traditional keyword-based retrieval models do not work well for such cases. Furthermore, in many scenarios, it might be desirable to get a short answer that sufficiently answers the query, instead of a long document with only a small amount of useful information. In this paper, we propose a neural network model for ranking documents for question answering in the healthcare domain. The proposed model uses a deep attention mechanism at word, sentence, and document levels, for efficient retrieval for both factoid and non-factoid queries, on documents of varied lengths. Specifically, the word-level cross-attention allows the model to identify words that might be most relevant for a query, and the hierarchical attention at sentence and document levels allows it to do effective retrieval on both long and short documents. We also construct a new large-scale healthcare question-answering dataset, which we use to evaluate our model. 
Experimental evaluation results against several state-of-the-art baselines show that our model outperforms the existing retrieval techniques.", "which Type of knowledge source ?", "Documents", 776.0, 785.0], ["Supervised training procedures for semantic parsers produce high-quality semantic parsers, but they have difficulty scaling to large databases because of the sheer number of logical constants for which they must see labeled training data. We present a technique for developing semantic parsers for large databases based on a reduction to standard supervised training algorithms, schema matching, and pattern learning. Leveraging techniques from each of these areas, we develop a semantic parser for Freebase that is capable of parsing questions with an F1 that improves by 0.42 over a purely-supervised learning algorithm.", "which Type of knowledge source ?", "Freebase", 499.0, 507.0], ["Wikidata is becoming an increasingly important knowledge base whose usage is spreading in the research community. However, most question answering systems evaluation datasets rely on Freebase or DBpedia. We present two new datasets in order to train and benchmark QA systems over Wikidata. The first is a translation of the popular SimpleQuestions dataset to Wikidata, the second is a dataset created by collecting user feedbacks.", "which Knowledge Base ?", "Freebase", 183.0, 191.0], ["Wikidata is becoming an increasingly important knowledge base whose usage is spreading in the research community. However, most question answering systems evaluation datasets rely on Freebase or DBpedia. We present two new datasets in order to train and benchmark QA systems over Wikidata. The first is a translation of the popular SimpleQuestions dataset to Wikidata, the second is a dataset created by collecting user feedbacks.", "which Knowledge Base ?", "Wikidata", 0.0, 8.0], ["We present a system for named entity recognition (ner) in astronomy journal articles. We have developed this system on a ne corpus comprising approximately 200,000 words of text from astronomy articles. These have been manually annotated with \u223c40 entity types of interest to astronomers. We report on the challenges involved in extracting the corpus, defining entity classes and annotating scientific text. We investigate which features of an existing state-of-the-art Maximum Entropy approach perform well on astronomy text. Our system achieves an F-score of 87.8%.", "which Fine-grained Entity types ?", "ion", NaN, NaN], ["Statistical named entity recognisers require costly hand-labelled training data and, as a result, most existing corpora are small. We exploit Wikipedia to create a massive corpus of named entity annotated text. We transform Wikipedia\u2019s links into named entity annotations by classifying the target articles into common entity types (e.g. person, organisation and location). Comparing to MUC, CONLL and BBN corpora, Wikipedia generally performs better than other cross-corpus train/test pairs.", "which Fine-grained Entity types ?", "Person", 338.0, 344.0], ["Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the \"Normalized Gene List\" task, which was inspired by the gene list supplied for each curated paper in a model organism database. 
The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.", "which Number of development documents ?", "Mouse", 583.0, 588.0], ["Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the \"Normalized Gene List\" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.", "which Number of development documents ?", "Yeast", 567.0, 572.0], ["Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the \"Normalized Gene List\" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). 
For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.", "which Number of development documents ?", "Fly", 574.0, 577.0], ["Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the \"Normalized Gene List\" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.", "which Number of identifiers ?", "Mouse", 583.0, 588.0], ["Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the \"Normalized Gene List\" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. 
Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.", "which Number of identifiers ?", "Yeast", 567.0, 572.0], ["Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the \"Normalized Gene List\" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.", "which Number of identifiers ?", "Fly", 574.0, 577.0], ["Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the \"Normalized Gene List\" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. 
These results hold out promise that the technology can provide partial automation of the curation process in the near future.", "which Number of test documents ?", "Mouse", 583.0, 588.0], ["Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the \"Normalized Gene List\" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.", "which Number of test documents ?", "Yeast", 567.0, 572.0], ["Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the \"Normalized Gene List\" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.", "which Number of test documents ?", "Fly", 574.0, 577.0], ["Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. 
This paper summarizes the BioCreAtIvE task 1B, the \"Normalized Gene List\" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.", "which Number of training documents ?", "Mouse", 583.0, 588.0], ["Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the \"Normalized Gene List\" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.", "which Number of training documents ?", "Yeast", 567.0, 572.0], ["Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the \"Normalized Gene List\" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). 
For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.", "which Number of training documents ?", "Fly", 574.0, 577.0], ["We present the BioCreative VII Task 3 which focuses on drug names extraction from tweets. Recognized to provide unique insights into population health, detecting health related tweets is notoriously challenging for natural language processing tools. Tweets are written about any and all topics, most of them not related to health. Additionally, they are written with little regard for proper grammar, are inherently colloquial, and are almost never proof-read. Given a tweet, task 3 consists of detecting if the tweet has a mention of a drug name and, if so, extracting the span of the drug mention. We made available 182,049 tweets publicly posted by 212 Twitter users with all drugs mentions manually annotated. This corpus exhibits the natural and strongly imbalanced distribution of positive tweets, with only 442 tweets (0.2%) mentioning a drug. This task was an opportunity for participants to evaluate methods robust to classimbalance beyond the simple lexical match. A total of 65 teams registered, and 16 teams submitted a system run. We summarize the corpus and the tools created for the challenge, which is freely available at https://biocreative.bioinformatics.udel.edu/tasks/biocreativevii/track-3/. We analyze the methods and the results of the competing systems with a focus on learning from classimbalanced data. Keywords\u2014social media; pharmacovigilance; named entity recognition; drug name extraction; class-imbalance.", "which Data coverage ?", "Tweets", 82.0, 88.0], ["This paper presents the Bacteria Biotope task as part of the BioNLP Shared Tasks 2011. The Bacteria Biotope task aims at extracting the location of bacteria from scientific Web pages. Bacteria location is a crucial knowledge in biology for phenotype studies. The paper details the corpus specification, the evaluation metrics, summarizes and discusses the participant results.", "which Data coverage ?", "Web pages", 173.0, 182.0], ["The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records presented three tasks: a concept extraction task focused on the extraction of medical concepts from patient reports; an assertion classification task focused on assigning assertion types for medical problem concepts; and a relation classification task focused on assigning relation types that hold between medical problems, tests, and treatments. i2b2 and the VA provided an annotated reference standard corpus for the three tasks. Using this reference standard, 22 systems were developed for concept extraction, 21 for assertion classification, and 16 for relation classification. 
These systems showed that machine learning approaches could be augmented with rule-based systems to determine concepts, assertions, and relations. Depending on the task, the rule-based systems can either provide input for machine learning or post-process the output of machine learning. Ensembles of classifiers, information from unlabeled data, and external knowledge sources can help when the training data are inadequate.", "which Data coverage ?", "Clinical records", 72.0, 88.0], ["A considerable effort has been made to extract biological and chemical entities, as well as their relationships, from the scientific literature, either manually through traditional literature curation or by using information extraction and text mining technologies. Medicinal chemistry patents contain a wealth of information, for instance to uncover potential biomarkers that might play a role in cancer treatment and prognosis. However, current biomedical annotation databases do not cover such information, partly due to limitations of publicly available biomedical patent mining software. As part of the BioCreative V CHEMDNER patents track, we present the results of the first named entity recognition (NER) assignment carried out to detect mentions of chemical compounds and genes/proteins in running patent text. More specifically, this task aimed to evaluate the performance of automatic name recognition strategies capable of isolating chemical names and gene and gene product mentions from surrounding text within patent titles and abstracts. A total of 22 unique teams submitted results for at least one of the three CHEMDNER subtasks. The first subtask, called the CEMP (chemical entity mention in patents) task, focused on the detection of chemical named entity mentions in patents, requesting teams to return the start and end indices corresponding to all the chemical entities found in a given record. A total of 21 teams submitted 93 runs, for this subtask. The top performing team reached an f-measure of 0.89 with a precision of 0.87 and a recall of 0.91. The CPD (chemical passage detection) task required the classification of patent titles and abstracts whether they do or do not contain chemical compound mentions. Nine teams returned predictions for this task (40 runs). The top run in terms of Matthew\u2019s correlation coefficient (MCC) had a score of 0.88, the highest sensitivity ? Corresponding author", "which Data coverage ?", "Patent titles", 1024.0, 1037.0], ["This paper presents the Bacteria Biotope task of the BioNLP Shared Task 2013, which follows BioNLP-ST-11. The Bacteria Biotope task aims to extract the location of bacteria from scientific web pages and to characterize these locations with respect to the OntoBiotope ontology. Bacteria locations are crucil knowledge in biology for phenotype studies. The paper details the corpus specifications, the evaluation metrics, and it summarizes and discusses the participant results.", "which Data coverage ?", "Web pages", 189.0, 198.0], ["Internet of things (IoT) is an emerging technique that offers advanced connectivity of devices, systems, services, and human beings. With the rapid development of hardware and network technologies, the IoT can refer to a wide variety and large number of devices, resulting in complex relationships among IoT devices. The dependencies among IoT devices, which reflect their relationships, are with reference value for the design, development and management of IoT devices. 
This paper proposes a stochastic model based approach for evaluating the dependencies of IoT devices. A random walk model is proposed to describe the relationships of IoT devices, and its corresponding Markov chain is obtained for dependency analysis. A framework as well as schemes and algorithms for dependency evaluation in real-world IoT are designed based on traffic measurement. Simulation experiments based on real-life data extracted from smart home environments are conducted to illustrate the efficacy of the approach.", "which Type of considered dependencies ?", "Service", NaN, NaN], ["Abstract Large-scale networks consisting of thousands of connected devices are like a living organism, constantly changing and evolving. It is very difficult for a human administrator to orient in such environment and to react to emerging security threats. With such motivation, this PhD proposal aims to find new methods for automatic identification of devices, the services they provide, their dependencies and importance. The main focus of the proposal is to find novel approaches to building cyber situational awareness in an unknown network for the purpose of computer security incident response. Our research is at the initial phase and will contribute to a PhD thesis in four years.", "which Type of considered dependencies ?", "Service", NaN, NaN], ["AbstractThis study systematised and synthesised the results of observational studies that were aimed at supporting the association between dietary patterns and cardiometabolic risk (CMR) factors among adolescents. Relevant scientific articles were searched in PUBMED, EMBASE, SCIENCE DIRECT, LILACS, WEB OF SCIENCE and SCOPUS. Observational studies that included the measurement of any CMR factor in healthy adolescents and dietary patterns were included. The search strategy retained nineteen articles for qualitative analysis. Among retained articles, the effects of dietary pattern on the means of BMI (n 18), waist circumference (WC) (n 9), systolic blood pressure (n 7), diastolic blood pressure (n 6), blood glucose (n 5) and lipid profile (n 5) were examined. Systematised evidence showed that an unhealthy dietary pattern appears to be associated with poor mean values of CMR factors among adolescents. However, evidence of a protective effect of healthier dietary patterns in this group remains unclear. Considering the number of studies with available information, a meta-analysis of anthropometric measures showed that dietary patterns characterised by the highest intake of unhealthy foods resulted in a higher mean BMI (0\u00b757 kg/m\u00b2; 95 % CI 0\u00b751, 0\u00b763) and WC (0\u00b757 cm; 95 % CI 0\u00b747, 0\u00b767) compared with low intake of unhealthy foods. Controversially, patterns characterised by a low intake of healthy foods were associated with a lower mean BMI (\u22120\u00b741 kg/m\u00b2; 95 % CI \u22120\u00b746,\u22120\u00b736) and WC (\u22120\u00b743 cm; 95 % CI \u22120\u00b752,\u22120\u00b733). An unhealthy dietary pattern may influence markers of CMR among adolescents, but considering the small number and limitations of the studies included, further studies are warranted to strengthen the evidence of this relation.", "which Age group ?", "Adolescents", 234.0, 245.0], ["Abstract Objective: To investigate the association between dietary patterns (DP) and overweight risk in the Malaysian Adult Nutrition Surveys (MANS) of 2003 and 2014. Design: DP were derived from the MANS FFQ using principal component analysis. 
The cross-sectional association of the derived DP with prevalence of overweight was analysed. Setting: Malaysia. Participants: Nationally representative sample of Malaysian adults from MANS (2003, n 6928; 2014, n 3000). Results: Three major DP were identified for both years. These were \u2018Traditional\u2019 (fish, eggs, local cakes), \u2018Western\u2019 (fast foods, meat, carbonated beverages) and \u2018Mixed\u2019 (ready-to-eat cereals, bread, vegetables). A fourth DP was generated in 2003, \u2018Flatbread & Beverages\u2019 (flatbread, creamer, malted beverages), and 2014, \u2018Noodles & Meat\u2019 (noodles, meat, eggs). These DP accounted for 25\u00b76 and 26\u00b76 % of DP variations in 2003 and 2014, respectively. For both years, Traditional DP was significantly associated with rural households, lower income, men and Malay ethnicity, while Western DP was associated with younger age and higher income. Mixed DP was positively associated with women and higher income. None of the DP showed positive association with overweight risk, except for reduced adjusted odds of overweight with adherence to Traditional DP in 2003. Conclusions: Overweight could not be attributed to adherence to a single dietary pattern among Malaysian adults. This may be due to the constantly morphing dietary landscape in Malaysia, especially in urban areas, given the ease of availability and relative affordability of multi-ethnic and international foods. Timely surveys are recommended to monitor implications of these changes.", "which Age group ?", "Adults", 418.0, 424.0], ["Productivity measurements were carried out during spring 2007 in the northeastern (NE) Indian Ocean, where light availability is controlled by clouds and surface productivity by nutrient and light availability. New productivity is found to be higher than regenerated productivity at most locations, consistent with the earlier findings from the region. A comparison of the present results with the earlier findings reveals that the region contributes significantly in the sequestration of CO2 from the atmosphere, particularly during spring. Diatomdominated plankton community is more efficient than those dominated by other organisms in the uptake of CO2 and its export to the deep. Earlier studies on plankton composition suggest that higher new productivity at most locations could also be due to the dominance of diatoms in the region.", "which Sampling depth covered ?", "Surface", 154.0, 161.0], ["Abstract. The Bay of Bengal (BoB) has long stood as a biogeochemical enigma with subsurface waters containing extremely low, but persistent, concentrations of oxygen in the nanomolar range which \u2013 for some, yet unconstrained reason \u2013 are prevented from becoming anoxic. One reason for this may be the low productivity of the BoB waters due to nutrient limitation, and the resulting lack of respiration of organic material at intermediate waters. Thus, the parameters determining primary production are key to understanding what prevents the BoB from developing anoxia. Primary productivity in the sunlit surface layers of tropical oceans is mostly limited by the supply of reactive nitrogen through upwelling, riverine flux, atmospheric deposition, and biological dinitrogen (N2) fixation. In the BoB, a stable stratification limits nutrient supply via upwelling in the open waters, and riverine or atmospheric fluxes have been shown to support only less than one quarter of the nitrogen for primary production. 
This leaves a large uncertainty for most of the BoB's nitrogen input, suggesting a potential role of N2 fixation in those waters. Here, we present a survey of N2 fixation and carbon fixation in the BoB during the winter monsoon season. We detected a community of N2 fixers comparable to other OMZ regions, with only a few cyanobacterial clades and a broad diversity of non-phototrophic N2 fixers present throughout the water column (samples collected between 10\u2009m and 560\u2009m water depth). While similar communities of N2 fixers were shown to actively fix N2 in other OMZs, N2 fixation rates were below the detection limit in our samples covering the water column between the deep chlorophyll maximum and the OMZ. Consistent with this, no N2 fixation signal was visible in \u03b415N signatures. We suggest that the absence of N2 fixation may be a consequence of a micronutrient limitation or of an O2 sensitivity of the OMZ diazotrophs in the BoB. To explore how the onset of N2 fixation by cyanobacteria compared to non-phototrophic N2 fixers would impact on OMZ O2 concentrations, a simple model exercise was carried out. We observed that both, photic zone-based and OMZ-based N2 fixation are very sensitive to even minimal changes in water column stratification, with stronger mixing increasing organic matter production and export, which would exhaust remaining O2 traces in the BoB.\n ", "which Coastal/open ocean ?", "Open", 878.0, 882.0], ["Abstract Developing detailed production schedules for dyeing and finishing operations is a very difficult task that has received relatively little attention in the literature. In this paper, a scheduling procedure is presented for a knitted fabric dyeing and finishing plant that is essentially a flexible job shop with sequence-dependent setups. An existing job shop scheduling algorithm is modified to take into account the complexities of the case plant. The resulting approach based on family scheduling is tested on problems generated with case plant characteristics.", "which Positioning in the logistics chain ?", "Production", 29.0, 39.0], ["This paper studies a problem in the knitting process of the textile industry. In such a production system, each job has a number of attributes and each attribute has one or more levels. Because there is at least one different attribute level between two adjacent jobs, it is necessary to make a set-up adjustment whenever there is a switch to a different job. The problem can be formulated as a scheduling problem with multi-attribute set-up times on unrelated parallel machines. The objective of the problem is to assign jobs to different machines to minimise the makespan. A constructive heuristic is developed to obtain a qualified solution. To improve the solution further, a meta-heuristic that uses a genetic algorithm with a new crossover operator and three local searches are proposed. The computational experiments show that the proposed constructive heuristic outperforms two existed heuristics and the current scheduling method used by the case textile plant.", "which Positioning in the logistics chain ?", "Production", 88.0, 98.0], ["Resource saving has become an integral aspect of manufacturing in industry 4.0. This paper proposes a multisystem optimization (MSO) algorithm, inspired by implicit parallelism of heuristic methods, to solve an integrated production scheduling with resource saving problem in textile printing and dyeing. 
First, a real-world integrated production scheduling with resource saving is formulated as a multisystem optimization problem. Then, the MSO algorithm is proposed to solve multisystem optimization problems that consist of several coupled subsystems, and each of the subsystems may contain multiple objectives and multiple constraints. The proposed MSO algorithm is composed of within-subsystem evolution and cross-subsystem migration operators, and the former is to optimize each subsystem by excellent evolution operators and the later is to complete information sharing between multiple subsystems, to accelerate the global optimization of the whole system. Performance is tested on a set of multisystem benchmark functions and compared with improved NSGA-II and multiobjective multifactorial evolutionary algorithm (MO-MFEA). Simulation results show that the MSO algorithm is better than compared algorithms for the benchmark functions studied in this paper. Finally, the MSO algorithm is successfully applied to the proposed integrated production scheduling with resource saving problem, and the results show that MSO is a promising algorithm for the studied problem.", "which Positioning in the logistics chain ?", "Production", 222.0, 232.0], ["Abstract Cleavage of C\u2013O bonds in lignin can afford the renewable aryl sources for fine chemicals. However, the high bond energies of these C\u2013O bonds, especially the 4-O-5-type diaryl ether C\u2013O bonds (~314 kJ/mol) make the cleavage very challenging. Here, we report visible-light photoredox-catalyzed C\u2013O bond cleavage of diaryl ethers by an acidolysis with an aryl carboxylic acid and a following one-pot hydrolysis. Two molecules of phenols are obtained from one molecule of diaryl ether at room temperature. The aryl carboxylic acid used for the acidolysis can be recovered. The key to success of the acidolysis is merging visible-light photoredox catalysis using an acridinium photocatalyst and Lewis acid catalysis using Cu(TMHD) 2 . Preliminary mechanistic studies indicate that the catalytic cycle occurs via a rare selective electrophilic attack of the generated aryl carboxylic radical on the electron-rich aryl ring of the diphenyl ether. This transformation is applied to a gram-scale reaction and the model of 4-O-5 lignin linkages.", "which Type of transformation ?", "Cleavage", 9.0, 17.0], ["Lignin, which is a highly cross-linked and irregular biopolymer, is nature\u2019s most abundant source of aromatic compounds and constitutes an attractive renewable resource for the production of aromatic commodity chemicals. Herein, we demonstrate a practical and operationally simple two-step degradation approach involving Pd-catalyzed aerobic oxidation and visible-light photoredox-catalyzed reductive fragmentation for the chemoselective cleavage of the \u03b2-O-4 linkage\u2014the predominant linkage in lignin\u2014for the generation of lower-molecular-weight aromatic building blocks. The developed strategy affords the \u03b2-O-4 bond cleaved products with high chemoselectivity and in high yields, is amenable to continuous flow processing, operates at ambient temperature and pressure, and is moisture- and oxygen-tolerant.", "which Type of transformation ?", "Cleavage", 438.0, 446.0], ["The Ni(0)-catalyzed cross-coupling of alkenyl methyl ethers with boronic esters is described. 
Several types of alkenyl methyl ethers can be coupled with a wide range of boronic esters to give the stilbene derivatives.", "which Type of transformation ?", "Coupling", 26.0, 34.0], ["The first cross-coupling of acylated phenol derivatives has been achieved. In the presence of an air-stable Ni(II) complex, readily accessible aryl pivalates participate in the Suzuki-Miyaura coupling with arylboronic acids. The process is tolerant of considerable variation in each of the cross-coupling components. In addition, a one-pot acylation/cross-coupling sequence has been developed. The potential to utilize an aryl pivalate as a directing group has also been demonstrated, along with the ability to sequentially cross-couple an aryl bromide followed by an aryl pivalate, using palladium and nickel catalysis, respectively.", "which Type of transformation ?", "Coupling", 16.0, 24.0], ["Biaryl scaffolds were constructed via Ni-catalyzed aryl C-O activation by avoiding cleavage of the more reactive acyl C-O bond of aryl carboxylates. Now aryl esters, in general, can be successfully employed in cross-coupling reactions for the first time. The substrate scope and synthetic utility of the chemistry were demonstrated by the syntheses of more than 40 biaryls and by constructing complex organic molecules. Water was observed to play an important role in facilitating this transformation.", "which Type of transformation ?", "Coupling", 216.0, 224.0], ["Treatment of vitamin B 12a 1 (hydroxycobalamin hydrochloride, aquocobalamin) with NaBH 4 and ZnCl 2 leads to the selective cleavage of the nucleotide loop and gives dicyanocobinamide 2a in good yield. Methylcobinamide 4 was prepared from 2 via aquocyanocobinamide 3. The glutathione-mediated methylation of 3 in a pH 3.5 buffer solution proceeded with Mel, but not with MeOTs.", "which Precursor of cobinamide ?", "Hydroxycobalamin", 30.0, 46.0], ["We present a new method for the preparation of cobinamide (CN)2Cbi, a vitamin B12 precursor, that should allow its broader utility. Treatment of vitamin B12 with only NaCN and heating in a microwave reactor affords (CN)2Cbi as the sole product. The purification procedure was greatly simplified, allowing for easy isolation of the product in 94% yield. The use of microwave heating proved beneficial also for (CN)2Cbi(c-lactone) synthesis. Treatment of (CN)2Cbi with triethanolamine led to (CN)2Cbi(c-lactam).", "which Major reactant ?", "NaCN", 167.0, 171.0], ["Antimicrobial peptides (AMPs) have been proposed as a promising class of new antimicrobials partly because they are less susceptible to bacterial resistance evolution. This is possibly caused by their mode of action but also by their pharmacodynamic characteristics, which differ significantly from conventional antibiotics. Although pharmacodynamics of antibiotic resistant strains have been studied, such data are lacking for AMP resistant strains. Here, we investigated if the pharmacodynamics of the Gram-positive human pathogen Staphylococcous aureus evolve under antimicrobial peptide selection. Interestingly, the Hill coefficient (kappa \u03ba) evolves together with the minimum inhibition concentration (MIC). Except for one genotype, strains harboring mutations in menF and atl, all mutants had higher kappa than the non-selected sensitive controls. Higher \u03ba results in steeper pharmacodynamic curve and, importantly, in a narrower mutant selection window. S. 
aureus selected for resistance to melittin displayed cross resistant against pexiganan and had as steep pharmacodynamic curves (high \u03ba) as pexiganan-selected lines. By contrast, the pexiganan-sensitive tenecin-selected lines displayed lower \u03ba. Taken together, our data demonstrate that pharmacodynamic parameters are not fixed traits of particular drug/strain interactions but actually evolve under drug treatment. The contribution of factors such as \u03ba and the maximum and minimum growth rates on the dynamics and probability of resistance evolution are open questions that require urgent attention.", "which antimicobials used in study ?", "Pexiganan", 1042.0, 1051.0], ["ABSTRACT Staphylococcus aureus small-colony variants (SCVs) often persist despite antibiotic therapy. Against a 108-CFU/ml methicillin-resistant S. aureus (MRSA) (strain COL) population of which 0%, 1%, 10%, 50%, or 100% was an isogenic hemB knockout (Ia48) subpopulation displaying the SCV phenotype, vancomycin achieved maximal reductions of 4.99, 5.39, 4.50, 3.28, and 1.66 log10 CFU/ml over 48 h. Vancomycin at \u226516 mg/liter shifted a population from 50% SCV cells at 0 h to 100% SCV cells at 48 h, which was well characterized by a Hill-type model (R2 > 0.90).", "which antimicobials used in study ?", "Vancomycin", 302.0, 312.0], ["Abstract Granitic lunar samples largely consist of granophyric intergrowths of silica and K-feldspar. The identification of the silica polymorph present in the granophyre can clarify the petrogenesis of the lunar granites. The presence of tridymite or cristobalite would indicate rapid crystallization at high temperature. Quartz would indicate crystallization at low temperature or perhaps intrusive, slow crystallization, allowing for the orderly transformation from high-temperature silica polymorphs (tridymite or cristobalite). We identify the silica polymorphs present in four granitic lunar samples from the Apollo 12 regolith using laser Raman spectroscopy. Typically, lunar silica occurs with a hackle fracture pattern. We did an initial density calculation on the hackle fracture pattern of quartz and determined that the volume of quartz and fracture space is consistent with a molar volume contraction from tridymite or cristobalite, both of which are less dense than quartz. Moreover, we analyzed the silica in the granitic fragments from Apollo 12 by electron-probe microanalysis and found it contains up to 0.7 wt% TiO2, consistent with initial formation as the high-temperature silica polymorphs, which have more open crystal structures that can more readily accommodate cations other than Si. The silica in Apollo 12 granitic samples crystallized rapidly as tridymite or cristobalite, consistent with extrusive volcanism. The silica then inverted to quartz at a later time, causing it to contract and fracture. A hackle fracture pattern is common in silica occurring in extrusive lunar lithologies (e.g., mare basalt). The extrusive nature of these granitic samples makes them excellent candidates to be similar to the rocks that compose positive relief silicic features such as the Gruithuisen Domes.", "which Relavance ?", "Cristobalite", 252.0, 264.0], ["Abstract Granitic lunar samples largely consist of granophyric intergrowths of silica and K-feldspar. The identification of the silica polymorph present in the granophyre can clarify the petrogenesis of the lunar granites. The presence of tridymite or cristobalite would indicate rapid crystallization at high temperature. 
Quartz would indicate crystallization at low temperature or perhaps intrusive, slow crystallization, allowing for the orderly transformation from high-temperature silica polymorphs (tridymite or cristobalite). We identify the silica polymorphs present in four granitic lunar samples from the Apollo 12 regolith using laser Raman spectroscopy. Typically, lunar silica occurs with a hackle fracture pattern. We did an initial density calculation on the hackle fracture pattern of quartz and determined that the volume of quartz and fracture space is consistent with a molar volume contraction from tridymite or cristobalite, both of which are less dense than quartz. Moreover, we analyzed the silica in the granitic fragments from Apollo 12 by electron-probe microanalysis and found it contains up to 0.7 wt% TiO2, consistent with initial formation as the high-temperature silica polymorphs, which have more open crystal structures that can more readily accommodate cations other than Si. The silica in Apollo 12 granitic samples crystallized rapidly as tridymite or cristobalite, consistent with extrusive volcanism. The silica then inverted to quartz at a later time, causing it to contract and fracture. A hackle fracture pattern is common in silica occurring in extrusive lunar lithologies (e.g., mare basalt). The extrusive nature of these granitic samples makes them excellent candidates to be similar to the rocks that compose positive relief silicic features such as the Gruithuisen Domes.", "which Relavance ?", "Quartz", 323.0, 329.0], ["Abstract Granitic lunar samples largely consist of granophyric intergrowths of silica and K-feldspar. The identification of the silica polymorph present in the granophyre can clarify the petrogenesis of the lunar granites. The presence of tridymite or cristobalite would indicate rapid crystallization at high temperature. Quartz would indicate crystallization at low temperature or perhaps intrusive, slow crystallization, allowing for the orderly transformation from high-temperature silica polymorphs (tridymite or cristobalite). We identify the silica polymorphs present in four granitic lunar samples from the Apollo 12 regolith using laser Raman spectroscopy. Typically, lunar silica occurs with a hackle fracture pattern. We did an initial density calculation on the hackle fracture pattern of quartz and determined that the volume of quartz and fracture space is consistent with a molar volume contraction from tridymite or cristobalite, both of which are less dense than quartz. Moreover, we analyzed the silica in the granitic fragments from Apollo 12 by electron-probe microanalysis and found it contains up to 0.7 wt% TiO2, consistent with initial formation as the high-temperature silica polymorphs, which have more open crystal structures that can more readily accommodate cations other than Si. The silica in Apollo 12 granitic samples crystallized rapidly as tridymite or cristobalite, consistent with extrusive volcanism. The silica then inverted to quartz at a later time, causing it to contract and fracture. A hackle fracture pattern is common in silica occurring in extrusive lunar lithologies (e.g., mare basalt). 
The extrusive nature of these granitic samples makes them excellent candidates to be similar to the rocks that compose positive relief silicic features such as the Gruithuisen Domes.", "which Relavance ?", "Tridymite", 239.0, 248.0], ["Two-dimensional spatially resolved absolute atomic oxygen densities are measured within an atmospheric pressure micro plasma jet and in its effluent. The plasma is operated in helium with an admixture of 0.5% of oxygen at 13.56 MHz and with a power of 1 W. Absolute atomic oxygen densities are obtained using two photon absorption laser induced fluorescence spectroscopy. The results are interpreted based on measurements of the electron dynamics by phase resolved optical emission spectroscopy in combination with a simple model that balances the production of atomic oxygen with its losses due to chemical reactions and diffusion. Within the discharge, the atomic oxygen density builds up with a rise time of 600 \u00b5s along the gas flow and reaches a plateau of 8 \u00d7 1015 cm\u22123. In the effluent, the density decays exponentially with a decay time of 180 \u00b5s (corresponding to a decay length of 3 mm at a gas flow of 1.0 slm). It is found that both, the species formation behavior and the maximum distance between the jet nozzle and substrates for possible oxygen treatments of surfaces can be controlled by adjusting the gas flow.", "which Unit_gas_flow_rate ?", "slm", 917.0, 920.0], ["The frequency distributions of hydrogen lines broadened by the local fields of both ions and electrons in a plasma are calculated in the classical path approximation. The electron collisions are treated by an impact theory which takes into account the Stark splitting caused by the quasistatic ion fields. The ion field-strength distribution function used includes the effect of electron shielding and ion-ion correlations. The various approximations that were employed are examined for self-consistency and an accuracy of about 10% in the resulting line profiles is expected. Good agreement with experimental H/sub beta / profiles is obtained while there are deviations of factors of two with the usual Holtsmark theory. Asymptotic distributions for the line wings are given for astrophysical applications. Also here the electron effects are generally as important as the ion effects for all values of the electron density and in some cases the electron broadening is larger than the ion broadening. (auth)", "which Comparison to ?", "Experiment", NaN, NaN], ["The frequency distributions of spectral lines of nonhydrogenic atoms broadened by local fields of both electrons and ions in a plasma are calculated in the classical path approximation. The electron collisions are treated by an impact theory which takes into account deviations from adiabaticity. For the ion effects, the adiabatic approximation can be used to describe the time-dependent wave functions. The various approximations employed were examined for self-consistency, and an accuracy of about 20% in the resulting line profiles is expected. Good agreement with Wulff's experimental helium line profiles was obtained while there are large deviations from the adiabatic theory, especially for the line shifts. Asymptotic distributions for the line wings are given for astrophysical applications. Here the ion effects can be as important as the electron effects and lead to large asymmetries, but near the line core electrons usually dominate. 
Numerical results are tabulated for 24 neutral helium lines with principal quantum numbers up to five.", "which Comparison to ?", "Experiment", NaN, NaN], ["Polyethylene terephthalate (PET) is the most important mass\u2010produced thermoplastic polyester used as a packaging material. Recently, thermophilic polyester hydrolases such as TfCut2 from Thermobifida fusca have emerged as promising biocatalysts for an eco\u2010friendly PET recycling process. In this study, postconsumer PET food packaging containers are treated with TfCut2 and show weight losses of more than 50% after 96 h of incubation at 70 \u00b0C. Differential scanning calorimetry analysis indicates that the high linear degradation rates observed in the first 72 h of incubation is due to the high hydrolysis susceptibility of the mobile amorphous fraction (MAF) of PET. The physical aging process of PET occurring at 70 \u00b0C is shown to gradually convert MAF to polymer microstructures with limited accessibility to enzymatic hydrolysis. Analysis of the chain\u2010length distribution of degraded PET by nuclear magnetic resonance spectroscopy reveals that MAF is rapidly hydrolyzed via a combinatorial exo\u2010 and endo\u2010type degradation mechanism whereas the remaining PET microstructures are slowly degraded only by endo\u2010type chain scission causing no detectable weight loss. Hence, efficient thermostable biocatalysts are required to overcome the competitive physical aging process for the complete degradation of postconsumer PET materials close to the glass transition temperature of PET.", "which Performed at temperature ?", "temperature", 1363.0, 1374.0], ["Abstract Background Undifferentiated embryonal sarcoma (UES) of liver is a rare malignant neoplasm, which affects mostly the pediatric population accounting for 13% of pediatric hepatic malignancies, a few cases has been reported in adults. Case presentation We report a case of undifferentiated embryonal sarcoma of the liver in a 20-year-old Caucasian male. The patient was referred to us for further investigation after a laparotomy in a district hospital for spontaneous abdominal hemorrhage, which was due to a liver mass. After a through evaluation with computed tomography scan and magnetic resonance imaging of the liver and taking into consideration the previous history of the patient, it was decided to surgically explore the patient. Resection of I\u2013IV and VIII hepatic lobe. Patient developed disseminated intravascular coagulation one day after the surgery and died the next day. Conclusion It is a rare, highly malignant hepatic neoplasm, affecting almost exclusively the pediatric population. The prognosis is poor but recent evidence has shown that long-term survival is possible after complete surgical resection with or without postoperative chemotherapy.", "which Sex ?", "Male", 354.0, 358.0], ["SUMMARYA case report of a large primary malignant mesenchymoma of the liver is presented. This tumor was successfully removed with normal liver tissue surrounding the tumor by right hepatolobectomy. The pathologic characteristics and clinical behavior of tumors falling into this general category are", "which Laboratory fi ndings ?", "Normal", 131.0, 137.0], ["Twenty-five patients with an apparently primary sarcoma of the liver are reviewed. Presenting complaints were non-specific, but hepatomegaly and abnormal liver function tests were usual. 
Use of the contraceptive pill (four of 11 women) was identified as a possible risk factor; one patient had previously been exposed to vinyl chloride monomer. Detailed investigation showed that the primary tumour was extrahepatic in nine of the 25 patients. Distinguishing features of the 15 patients with confirmed primary hepatic sarcoma included a lower incidence of multiple hepatic lesions and a shorter time from first symptoms to diagnosis, but the most valuable discriminator was histology. Angiosarcomas and undifferentiated tumours were all of hepatic origin, epithelioid haemangioendotheliomas (EHAE) occurred as primary and secondary lesions and all other differentiated tumours arose outside the liver. The retroperitoneum was the most common site of an occult primary tumour and its careful examination therefore crucial: computed tomography scanning was found least fallible in this respect in the present series. Where resection (or transplantation), the best treatment, was not possible, results of therapy were disappointing, prognosis being considerably worse for patients with primary hepatic tumours. Patients with EHAE had a better overall prognosis regardless of primary site.", "which Laboratory fi ndings ?", "Normal", NaN, NaN], ["A successful surgical case of malignant undifferentiated (embryonal) sarcoma of the liver (USL), a rare tumor normally found in children, is reported. The patient was a 21-year-old woman, complaining of epigastric pain and abdominal fullness. Chemical analyses of the blood and urine and complete blood counts revealed no significant changes, and serum alpha-fetoprotein levels were within normal limits. A physical examination demonstrated a film, slightly tender lesion at the liver's edge palpable 10 cm below the xiphoid process. CT scan and ultrasonography showed an oval mass, confined to the left lobe of the liver, which proved to be hypovascular on angiography. At laparotomy, a large, 18 x 15 x 13 cm tumor, found in the left hepatic lobe was resected. The lesion was dark red in color, encapsulated, smooth surfaced and of an elastic firm consistency. No metastasis was apparent. Histological examination resulted in a diagnosis of undifferentiated sarcoma of the liver. Three courses of adjuvant chemotherapy, including adriamycin, cis-diaminodichloroplatinum, vincristine and dacarbazine were administered following the surgery with no serious adverse effects. The patient remains well with no evidence of recurrence 12 months after her operation.", "which Laboratory fi ndings ?", "Normal", 390.0, 396.0], ["A 10\u2010year\u2010old girl with undifferentiated (embryonal) sarcoma of the liver reported here had abdominal pain, nausea, vomiting and weakness when she was 8 years old. Chemical analyses of the blood and urine were normal. Serum alpha\u2010fetoprotein was within normal limits. She died of cachexia 1 year and 8 months after the onset of symptoms. Autopsy showed a huge tumor mass in the liver and a few metastatic nodules in the lungs, which were consistent histologically with undifferenitated sarcoma of the liver. To our knowledge, this is the second case report of hepatic undifferentiated sarcoma of children in Japan, the feature being compatible with the description of Stocker and Ishaka.", "which Laboratory fi ndings ?", "Normal", 210.0, 216.0], ["Twenty-five patients with an apparently primary sarcoma of the liver are reviewed. Presenting complaints were non-specific, but hepatomegaly and abnormal liver function tests were usual. 
Use of the contraceptive pill (four of 11 women) was identified as a possible risk factor; one patient had previously been exposed to vinyl chloride monomer. Detailed investigation showed that the primary tumour was extrahepatic in nine of the 25 patients. Distinguishing features of the 15 patients with confirmed primary hepatic sarcoma included a lower incidence of multiple hepatic lesions and a shorter time from first symptoms to diagnosis, but the most valuable discriminator was histology. Angiosarcomas and undifferentiated tumours were all of hepatic origin, epithelioid haemangioendotheliomas (EHAE) occurred as primary and secondary lesions and all other differentiated tumours arose outside the liver. The retroperitoneum was the most common site of an occult primary tumour and its careful examination therefore crucial: computed tomography scanning was found least fallible in this respect in the present series. Where resection (or transplantation), the best treatment, was not possible, results of therapy were disappointing, prognosis being considerably worse for patients with primary hepatic tumours. Patients with EHAE had a better overall prognosis regardless of primary site.", "which Symptoms and signs ?", "hepatomegaly", 128.0, 140.0], ["A 10\u2010year\u2010old girl with undifferentiated (embryonal) sarcoma of the liver reported here had abdominal pain, nausea, vomiting and weakness when she was 8 years old. Chemical analyses of the blood and urine were normal. Serum alpha\u2010fetoprotein was within normal limits. She died of cachexia 1 year and 8 months after the onset of symptoms. Autopsy showed a huge tumor mass in the liver and a few metastatic nodules in the lungs, which were consistent histologically with undifferenitated sarcoma of the liver. To our knowledge, this is the second case report of hepatic undifferentiated sarcoma of children in Japan, the feature being compatible with the description of Stocker and Ishaka.", "which Symptoms and signs ?", "vomiting", 116.0, 124.0], ["The National Library of Medicine (NLM) is developing an automated system to produce bibliographic records for its MEDLINER database. This system, named Medical Article Record System (MARS), employs document image analysis and understanding techniques and optical character recognition (OCR). This paper describes a key module in MARS called the Automated Labeling (AL) module, which labels all zones of interest (title, author, affiliation, and abstract) automatically. The AL algorithm is based on 120 rules that are derived from an analysis of journal page layouts and features extracted from OCR output. Experiments carried out on more than 11,000 articles in over 1,000 biomedical journals show the accuracy of this rule-based algorithm to exceed 96%.", "which Logical Labels ?", "abstract", 445.0, 453.0], ["A system for document image segmentation and ordering text areas is described and applied to both Japanese and English complex printed page layouts. There is no need to make any assumption about the shape of blocks, hence the segmentation technique can handle not only skewed images without skew-correction but also documents where column are not rectangular. In this technique, on the bottom-up strategy, the connected components are extracted from the reduced image, and classified according to their local information. The connected components are merged into lines, and lines are merged into areas. Extracted text areas are classified as body, caption, header, and footer. 
A tree graph of the layout of body texts is made, and we get the order of texts by preorder traversal on the graph. The authors introduce the influence range of each node, a procedure for the title part, and extraction of the white horizontal separator. Making it possible to get good results on various documents. The total system is fast and compact.", "which Logical Labels ?", "body", 642.0, 646.0], ["A system for document image segmentation and ordering text areas is described and applied to both Japanese and English complex printed page layouts. There is no need to make any assumption about the shape of blocks, hence the segmentation technique can handle not only skewed images without skew-correction but also documents where column are not rectangular. In this technique, on the bottom-up strategy, the connected components are extracted from the reduced image, and classified according to their local information. The connected components are merged into lines, and lines are merged into areas. Extracted text areas are classified as body, caption, header, and footer. A tree graph of the layout of body texts is made, and we get the order of texts by preorder traversal on the graph. The authors introduce the influence range of each node, a procedure for the title part, and extraction of the white horizontal separator. Making it possible to get good results on various documents. The total system is fast and compact.", "which Logical Labels ?", "caption", 648.0, 655.0], ["Numerous studies have so far been carried out extensively for the analysis of document image structure, with particular emphasis placed on media conversion and layout analysis. For the conversion of a collection of books in a library into the form of hypertext documents, a logical structure extraction technology is indispensable, in addition to document layout analysis. The table of contents of a book generally involves very concise and faithful information to represent the logical structure of the entire book. That is to say, we can efficiently analyze the logical structure of a book by making full use of its contents pages. This paper proposes a new approach for document logical structure analysis to convert document images and contents information into an electronic document. First, the contents pages of a book are analyzed to acquire the overall document logical structure. Thereafter, we are able to use this information to acquire the logical structure of all the pages of the book by analyzing consecutive pages of a portion of the book. Test results demonstrate very high discrimination rates: up to 97.6% for the headline structure, 99.4% for the text structure, 97.8% for the page-number structure and almost 100% for the head-foot structure.", "which Logical Labels ?", "content", NaN, NaN], ["The analysis of a document image to derive a symbolic description of its structure and contents involves using spatial domain knowledge to classify the different printed blocks (e.g., text paragraphs), group them into logical units (e.g., newspaper stories), and determine the reading order of the text blocks within each unit. These steps describe the conversion of the physical structure of a document into its logical structure. We have developed a computational model for document logical structure derivation, in which a rule-based control strategy utilizes the data obtained from analyzing a digitized document image, and makes inferences using a multi-level knowledge base of document layout rules. 
The knowledge-based document logical structure derivation system (DeLoS) based on this model consists of a hierarchical rule-based control system to guide the block classification, grouping and read-ordering operations; a global data structure to store the document image data and incremental inferences; and a domain knowledge base to encode the rules governing document layout.", "which Logical Labels ?", "graph", NaN, NaN], ["Numerous studies have so far been carried out extensively for the analysis of document image structure, with particular emphasis placed on media conversion and layout analysis. For the conversion of a collection of books in a library into the form of hypertext documents, a logical structure extraction technology is indispensable, in addition to document layout analysis. The table of contents of a book generally involves very concise and faithful information to represent the logical structure of the entire book. That is to say, we can efficiently analyze the logical structure of a book by making full use of its contents pages. This paper proposes a new approach for document logical structure analysis to convert document images and contents information into an electronic document. First, the contents pages of a book are analyzed to acquire the overall document logical structure. Thereafter, we are able to use this information to acquire the logical structure of all the pages of the book by analyzing consecutive pages of a portion of the book. Test results demonstrate very high discrimination rates: up to 97.6% for the headline structure, 99.4% for the text structure, 97.8% for the page-number structure and almost 100% for the head-foot structure.", "which Logical Labels ?", "head-foot", 1244.0, 1253.0], ["Numerous studies have so far been carried out extensively for the analysis of document image structure, with particular emphasis placed on media conversion and layout analysis. For the conversion of a collection of books in a library into the form of hypertext documents, a logical structure extraction technology is indispensable, in addition to document layout analysis. The table of contents of a book generally involves very concise and faithful information to represent the logical structure of the entire book. That is to say, we can efficiently analyze the logical structure of a book by making full use of its contents pages. This paper proposes a new approach for document logical structure analysis to convert document images and contents information into an electronic document. First, the contents pages of a book are analyzed to acquire the overall document logical structure. Thereafter, we are able to use this information to acquire the logical structure of all the pages of the book by analyzing consecutive pages of a portion of the book. Test results demonstrate very high discrimination rates: up to 97.6% for the headline structure, 99.4% for the text structure, 97.8% for the page-number structure and almost 100% for the head-foot structure.", "which Logical Labels ?", "headline", 1134.0, 1142.0], ["Numerous studies have so far been carried out extensively for the analysis of document image structure, with particular emphasis placed on media conversion and layout analysis. For the conversion of a collection of books in a library into the form of hypertext documents, a logical structure extraction technology is indispensable, in addition to document layout analysis. 
The table of contents of a book generally involves very concise and faithful information to represent the logical structure of the entire book. That is to say, we can efficiently analyze the logical structure of a book by making full use of its contents pages. This paper proposes a new approach for document logical structure analysis to convert document images and contents information into an electronic document. First, the contents pages of a book are analyzed to acquire the overall document logical structure. Thereafter, we are able to use this information to acquire the logical structure of all the pages of the book by analyzing consecutive pages of a portion of the book. Test results demonstrate very high discrimination rates: up to 97.6% for the headline structure, 99.4% for the text structure, 97.8% for the page-number structure and almost 100% for the head-foot structure.", "which Logical Labels ?", "table", 377.0, 382.0], ["The National Library of Medicine (NLM) is developing an automated system to produce bibliographic records for its MEDLINE\u00ae database. This system, named Medical Article Record System (MARS), employs document image analysis and understanding techniques and optical character recognition (OCR). This paper describes a key module in MARS called the Automated Labeling (AL) module, which labels all zones of interest (title, author, affiliation, and abstract) automatically. The AL algorithm is based on 120 rules that are derived from an analysis of journal page layouts and features extracted from OCR output. Experiments carried out on more than 11,000 articles in over 1,000 biomedical journals show the accuracy of this rule-based algorithm to exceed 96%.", "which Logical Labels ?", "title", 413.0, 418.0], ["Abstract Background Genomic deletions and duplications are important in the pathogenesis of diseases, such as cancer and mental retardation, and have recently been shown to occur frequently in unaffected individuals as polymorphisms. Affymetrix GeneChip whole genome sampling analysis (WGSA) combined with 100 K single nucleotide polymorphism (SNP) genotyping arrays is one of several microarray-based approaches that are now being used to detect such structural genomic changes. The popularity of this technology and its associated open source data format have resulted in the development of an increasing number of software packages for the analysis of copy number changes using these SNP arrays. Results We evaluated four publicly available software packages for high throughput copy number analysis using synthetic and empirical 100 K SNP array data sets, the latter obtained from 107 mental retardation (MR) patients and their unaffected parents and siblings. We evaluated the software with regards to overall suitability for high-throughput 100 K SNP array data analysis, as well as effectiveness of normalization, scaling with various reference sets and feature extraction, as well as true and false positive rates of genomic copy number variant (CNV) detection. Conclusion We observed considerable variation among the numbers and types of candidate CNVs detected by different analysis approaches, and found that multiple programs were needed to find all real aberrations in our test set. 
The frequency of false positive deletions was substantial, but could be greatly reduced by using the SNP genotype information to confirm loss of heterozygosity.", "which Vendor ?", "Affymetrix", 234.0, 244.0], ["Several computer programs are available for detecting copy number variants (CNVs) using genome-wide SNP arrays. We evaluated the performance of four CNV detection software suites\u2014Birdsuite, Partek, HelixTree, and PennCNV-Affy\u2014in the identification of both rare and common CNVs. Each program's performance was assessed in two ways. The first was its recovery rate, i.e., its ability to call 893 CNVs previously identified in eight HapMap samples by paired-end sequencing of whole-genome fosmid clones, and 51,440 CNVs identified by array Comparative Genome Hybridization (aCGH) followed by validation procedures, in 90 HapMap CEU samples. The second evaluation was program performance calling rare and common CNVs in the Bipolar Genome Study (BiGS) data set (1001 bipolar cases and 1033 controls, all of European ancestry) as measured by the Affymetrix SNP 6.0 array. Accuracy in calling rare CNVs was assessed by positive predictive value, based on the proportion of rare CNVs validated by quantitative real-time PCR (qPCR), while accuracy in calling common CNVs was assessed by false positive/false negative rates based on qPCR validation results from a subset of common CNVs. Birdsuite recovered the highest percentages of known HapMap CNVs containing >20 markers in two reference CNV datasets. The recovery rate increased with decreased CNV frequency. In the tested rare CNV data, Birdsuite and Partek had higher positive predictive values than the other software suites. In a test of three common CNVs in the BiGS dataset, Birdsuite's call was 98.8% consistent with qPCR quantification in one CNV region, but the other two regions showed an unacceptable degree of accuracy. We found relatively poor consistency between the two \u201cgold standards,\u201d the sequence data of Kidd et al., and aCGH data of Conrad et al. Algorithms for calling CNVs especially common ones need substantial improvement, and a \u201cgold standard\u201d for detection of CNVs remains to be established.", "which Vendor ?", "Affymetrix", 841.0, 851.0], ["Determination of copy number variants (CNVs) inferred in genome wide single nucleotide polymorphism arrays has shown increasing utility in genetic variant disease associations. Several CNV detection methods are available, but differences in CNV call thresholds and characteristics exist. We evaluated the relative performance of seven methods: circular binary segmentation, CNVFinder, cnvPartition, gain and loss of DNA, Nexus algorithms, PennCNV and QuantiSNP. Tested data included real and simulated Illumina HumHap 550 data from the Singapore cohort study of the risk factors for Myopia (SCORM) and simulated data from Affymetrix 6.0 and platform-independent distributions. The normalized singleton ratio (NSR) is proposed as a metric for parameter optimization before enacting full analysis. We used 10 SCORM samples for optimizing parameter settings for each method and then evaluated method performance at optimal parameters using 100 SCORM samples. The statistical power, false positive rates, and receiver operating characteristic (ROC) curve residuals were evaluated by simulation studies. Optimal parameters, as determined by NSR and ROC curve residuals, were consistent across datasets. QuantiSNP outperformed other methods based on ROC curve residuals over most datasets. 
Nexus Rank and SNPRank have low specificity and high power. Nexus Rank calls oversized CNVs. PennCNV detects one of the fewest numbers of CNVs.", "which Vendor ?", "Illumina", 502.0, 510.0], ["Background The detection of copy number variants (CNVs) and the results of CNV-disease association studies rely on how CNVs are defined, and because array-based technologies can only infer CNVs, CNV-calling algorithms can produce vastly different findings. Several authors have noted the large-scale variability between CNV-detection methods, as well as the substantial false positive and false negative rates associated with those methods. In this study, we use variations of four common algorithms for CNV detection (PennCNV, QuantiSNP, HMMSeg, and cnvPartition) and two definitions of overlap (any overlap and an overlap of at least 40% of the smaller CNV) to illustrate the effects of varying algorithms and definitions of overlap on CNV discovery. Methodology and Principal Findings We used a 56 K Illumina genotyping array enriched for CNV regions to generate hybridization intensities and allele frequencies for 48 Caucasian schizophrenia cases and 48 age-, ethnicity-, and gender-matched control subjects. No algorithm found a difference in CNV burden between the two groups. However, the total number of CNVs called ranged from 102 to 3,765 across algorithms. The mean CNV size ranged from 46 kb to 787 kb, and the average number of CNVs per subject ranged from 1 to 39. The number of novel CNVs not previously reported in normal subjects ranged from 0 to 212. Conclusions and Significance Motivated by the availability of multiple publicly available genome-wide SNP arrays, investigators are conducting numerous analyses to identify putative additional CNVs in complex genetic disorders. However, the number of CNVs identified in array-based studies, and whether these CNVs are novel or valid, will depend on the algorithm(s) used. Thus, given the variety of methods used, there will be many false positives and false negatives. Both guidelines for the identification of CNVs inferred from high-density arrays and the establishment of a gold standard for validation of CNVs are needed.", "which Vendor ?", "Illumina", 803.0, 811.0], ["High\u2010throughput single nucleotide polymorphism (SNP)\u2010array technologies allow to investigate copy number variants (CNVs) in genome\u2010wide scans and specific calling algorithms have been developed to determine CNV location and copy number. We report the results of a reliability analysis comparing data from 96 pairs of samples processed with CNVpartition, PennCNV, and QuantiSNP for Infinium Illumina Human 1Million probe chip data. We also performed a validity assessment with multiplex ligation\u2010dependent probe amplification (MLPA) as a reference standard. The number of CNVs per individual varied according to the calling algorithm. Higher numbers of CNVs were detected in saliva than in blood DNA samples regardless of the algorithm used. All algorithms presented low agreement with mean Kappa Index (KI) <66. PennCNV was the most reliable algorithm (KIw=98.96) when assessing the number of copies. The agreement observed in detecting CNV was higher in blood than in saliva samples. When comparing to MLPA, all algorithms identified poorly known copy aberrations (sensitivity = 0.19\u20130.28). In contrast, specificity was very high (0.97\u20130.99). Once a CNV was detected, the number of copies was truly assessed (sensitivity >0.62). 
Our results indicate that the current calling algorithms should be improved for high performance CNV analysis in genome\u2010wide scans. Further refinement is required to assess CNVs as risk factors in complex diseases. Hum Mutat 32:1\u201310, 2011. \u00a9 2011 Wiley\u2010Liss, Inc.", "which Vendor ?", "Illumina", 390.0, 398.0], ["The ability to extract topological regularity out of large randomly deployed sensor networks holds the promise to maximally leverage correlation for data aggregation and also to assist with sensor localization and hierarchy creation. This paper focuses on extracting such regular structures from physical topology through the development of a distributed clustering scheme. The topology adaptive spatial clustering (TASC) algorithm presented here is a distributed algorithm that partitions the network into a set of locally isotropic, non-overlapping clusters without prior knowledge of the number of clusters, cluster size and node coordinates. This is achieved by deriving a set of weights that encode distance measurements, connectivity and density information within the locality of each node. The derived weights form the terrain for holding a coordinated leader election in which each node selects the node closer to the center of mass of its neighborhood to become its leader. The clustering algorithm also employs a dynamic density reachability criterion that groups nodes according to their neighborhood's density properties. Our simulation results show that the proposed algorithm can trace locally isotropic structures in non-isotropic network and cluster the network with respect to local density attributes. We also found out that TASC exhibits consistent behavior in the presence of moderate measurement noise levels", "which Role ?", "Aggregation", 154.0, 165.0], ["In wireless sensor networks, it is already noted that nearby sensor nodes monitoring an environmental feature typically register similar values. This kind of data redundancy due to the spatial correlation between sensor observations inspires the research of in-network data aggregation. In this paper, an \u03b1-local spatial clustering algorithm for sensor networks is proposed. By measuring the spatial correlation between data sampled by different sensors, the algorithm constructs a dominating set as the sensor network backbone used to realize the data aggregation based on the information description/summarization performance of the dominators. In order to evaluate the performance of the algorithm a pattern recognition scenario over environmental data is presented. The evaluation shows that the resulting network achieved by our algorithm can provide environmental information at higher accuracy compared to other algorithms.", "which Role ?", "Aggregation", 274.0, 285.0], ["Wireless sensor networks (WSNs) are composed of a large number of inexpensive power-constrained wireless sensor nodes, which detect and monitor physical parameters around them through self-organization. Utilizing clustering algorithms to form a hierarchical network topology is a common method of implementing network management and data aggregation in WSNs. Assuming that the residual energy of nodes follows the random distribution, we propose a load-balanced clustering algorithm for WSNs on the basis of their distance and density distribution, making it essentially different from the previous clustering algorithms. 
Simulated tests indicate that the new algorithm can build more balanceable clustering structure and enhance the network life cycle.", "which Role ?", "Aggregation", 338.0, 349.0], [" Acetylcholinesterase was covalently attached to the inner surface of polyethylene tubing. Initial oxidation generated surface carboxylic groups which, on reaction with thionyl chloride, produced acid chloride groups; these were caused to react with excess ethylenediamine. The amine groups on the surface were linked to glutaraldehyde, and acetylcholinesterase was then attached to the surface. Various kinetic tests showed the catalysis of the hydrolysis of acetylthiocholine iodide to be diffusion controlled. The apparent Michaelis constants were strongly dependent on flow rate and were much larger than the value for the free enzyme. Rate measurements over the temperature range 6\u201342 \u00b0C showed changes in activation energies consistent with diffusion control. ", "which Group that reacts (with activated matrix) ?", "amine", 286.0, 291.0], ["In contrast to vehicle routing problems, little work has been done in ship routing and scheduling, although large benefits may be expected from improving this scheduling process. We will present a real ship planning problem, which is a combined inventory management problem and a routing problem with time windows. A fleet of ships transports a single product (ammonia) between production and consumption harbors. The quantities loaded and discharged are determined by the production rates of the harbors, possible stock levels, and the actual ship visiting the harbor. We describe the real problem and the underlying mathematical model. To decompose this model, we discuss some model adjustments. Then, the problem can be solved by a Dantzig-Wolfe decomposition approach including both ship routing subproblems and inventory management subproblems. The overall problem is solved by branch-and-bound. Our computational results indicate that the proposed method works for the real planning problem.", "which Products ?", "Ammonia", 361.0, 368.0], ["The Norwegian company Omya Hustadmarmor supplies calcium carbonate slurry to European paper manufacturers from a single processing plant, using chemical tank ships of various sizes to transport its products. Transportation costs are lower for large ships than for small ships, but their use increases planning complexity and creates problems in production. In 2001, the company faced overwhelming operational challenges and sought operations-research-based planning support. The CEO, Sturla Steinsvik, contacted More Research Molde, which conducted a project that led to the development of a decision-support system (DSS) for maritime inventory routing. The core of the DSS is an optimization model that is solved through a metaheuristic-based algorithm. The system helps planners to make stronger, faster decisions and has increased predictability and flexibility throughout the supply chain. It has saved production and transportation costs close to US$7 million a year. We project additional direct savings of nearly US$4 million a year as the company adds even larger ships to the fleet as a result of the project. In addition, the company has avoided investments of US$35 million by increasing capacity utilization. 
Finally, the project has had a positive environmental effect by reducing overall oil consumption by more than 10 percent.", "which Products ?", "Calcium carbonate slurry", 49.0, 73.0], ["This paper describes a real-world application concerning the distribution in Portugal of frozen products of a world-wide food and beverage company. Its focus is the development of a model to support negotiations between a logistics operator and retailers, establishing a common basis for a co-operative scheme in supply chain management. A periodic review policy is adopted and an optimisation procedure based on the heuristic proposed by Viswanathan and Mathur (Mgmnt Sci., 1997, 43, 294\u2013312) is used to devise guidelines for inventory replenishment frequencies and for the design of routes to be used in the distribution process. This provides an integrated approach of the two logistics functions\u2014inventory management and routing\u2014with the objective of minimising long-term average costs, considering an infinite time horizon. A framework to estimate inventory levels, namely safety stocks, is also presented. The model provides full information concerning the expected performance of the proposed solution, which can be compared against the present situation, allowing each party to assess its benefits and drawbacks.", "which Products ?", "Frozen products", 89.0, 104.0], ["For Air Products and Chemicals, Inc., inventory management of industrial gases at customer locations is integrated with vehicle scheduling and dispatching. Their advanced decision support system includes on-line data entry functions, customer usage forecasting, a time/distance network with a shortest path algorithm to compute intercustomer travel times and distances, a mathematical optimization module to produce daily delivery schedules, and an interactive schedule change interface. The optimization module uses a sophisticated Lagrangian relaxation algorithm to solve mixed integer programs with up to 800,000 variables and 200,000 constraints to near optimality. The system, first implemented in October, 1981, has been saving between 6% to 10% of operating costs.", "which Products ?", "Industrial gases", 62.0, 78.0], ["The dialogue between end-user and developer presents several challenges in requirements development. One issue is the gap between the conceptual models of end-users and formal specification/analysis models of developers. This paper presents a novel technique for the video analysis of scenarios, relating the use of video-based requirements to process models of software development. It uses a knowledge model\u2014an RDF graph\u2014based on a semiotic interpretation of film language, which allows mapping conceptual into formal models. It can be queried with RDQL, a query language for RDF. The technique has been implemented with a tool which lets the analyst annotate objects as well as spatial or temporal relationships in the video, to represent the conceptual model. The video can be arranged in a scenario graph effectively representing a multi-path video. It can be viewed in linear time order to facilitate the review of individual scenarios by end-users. Each multi-path scene from the conceptual model is mapped to a UML use case in the formal model. A UML sequence diagram can also be generated from the annotations, which shows the direct mapping of film language to UML. This sequence diagram can be edited by the analyst, refining the conceptual model to reflect deeper understanding of the application domain. 
The use of the software cinema technique is demonstrated with several prototypical applications. One example is a loan application scenario for a financial services consulting firm which acted as an end-user", "which Application ?", "application", 1297.0, 1308.0], ["A series of benzoate-decorated lanthanide (Ln)-containing tetrameric Dawson-type phosphotungstates [N(CH3)4]6H20[{(P2W17O61)Ln(H2O)3Ln(C6H5COO)(H2O)6]}{[(P2W17O61)Ln(H2O)3}]2Cl2\u00b798H2O [Ln = Sm (1), Eu (2), and Gd (3)] were made using a facile one-step assembly strategy and characterized by several techniques. Notably, the Ln-containing tetrameric Dawson-type polyoxoanions [{(P2W17O61)Ln(H2O)3Ln(C6H5COO)(H2O)6]}{[(P2W17O61)Ln(H2O)3}]224- are all established by four monolacunary Dawson-type [P2W17O61]10- segments, encapsulating a Ln3+ ion with two benzoates coordinating to the Ln3+ ions. 1-3 exhibit reversible photochromism, which can change from intrinsic white to blue for 6 min upon UV irradiation, and their colors gradually recover for 30 h in the dark. The solid-state photoluminescence spectra of 1 and 2 display characteristic emissions of Ln components based on 4f-4f transitions. Time-resolved emission spectra of 1 and 2 were also measured to authenticate the energy transfer from the phosphotungstate and organic chromophores to Eu3+. In particular, 1 shows an effectively switchable luminescence behavior induced by its fast photochromism.", "which Application ?", "Luminescence behavior", 1102.0, 1123.0], ["OBJECTIVES The aims of this study were to (1) investigate prevalence and severity of erosive tooth wear among kindergarten children and (2) determine the relationship between dental erosion and dietary intake, oral hygiene behaviour, systemic diseases and salivary concentration of calcium and phosphate. MATERIALS AND METHODS A sample of 463 children (2-7 years old) from 21 kindergartens were examined under standardized conditions by a calibrated examiner. Dental erosion of primary and permanent teeth was recorded using a scoring system based on O'Sullivan Index [Eur J Paediatr Dent 2 (2000) 69]. Data on the rate and frequency of dietary intake, systemic diseases and oral hygiene behaviour were obtained from a questionnaire completed by the parents. Unstimulated saliva samples of 355 children were analysed for calcium and phosphate concentration by colorimetric assessment. Descriptive statistics and multiple regression analysis were applied to the data. RESULTS Prevalence of erosion amounted to 32% and increased with increasing age of the children. Dentine erosion affecting at least one tooth could be observed in 13.2% of the children. The most affected teeth were the primary maxillary first and second incisors (15.5-25%) followed by the canines (10.5-12%) and molars (1-5%). Erosions on primary mandibular teeth were as follows: incisors: 1.5-3%, canines: 5.5-6% and molars: 3.5-5%. Erosions of the primary first and second molars were mostly seen on the occlusal surfaces (75.9%) involving enamel or enamel-dentine but not the pulp. In primary first and second incisors and canines, erosive lesions were often located incisally (51.2%) or affected multiple surfaces (28.9%). None of the permanent incisors (n = 93) or first molars (n=139) showed signs of erosion. Dietary factors, oral hygiene behaviour, systemic diseases and salivary calcium and phosphate concentration were not associated with the presence of erosion. CONCLUSIONS Erosive tooth wear of primary teeth was frequently seen in primary dentition. 
As several children showed progressive erosion into dentine or exhibited severe erosion affecting many teeth, preventive and therapeutic measures are recommended.", "which Study population ?", "Kindergarten", 110.0, 122.0], ["BACKGROUND Methamphetamine (MAP) abuse is a significant worldwide problem. This prospective study was conducted to determine if MAP users had distinct patterns of tooth wear. METHODS Methamphetamine users were identified and interviewed about their duration and preferred route of MAP use. Study participants were interviewed in the emergency department of a large urban university hospital serving a geographic area with a high rate of illicit MAP production and consumption. Tooth wear was documented for each study participant and scored using a previously validated index and demographic information was obtained using a questionnaire. RESULTS Forty-three MAP patients were interviewed. Preferred route of administration was injection (37%) followed by snorting (33%). Patients who preferentially snorted MAP had significantly higher tooth wear in the anterior maxillary teeth than patients who injected, smoked, or ingested MAP (P = 0.005). CONCLUSION Patients who use MAP have distinct patterns of wear based on route of administration. This difference may be explained anatomically.", "which Study population ?", "Methamphetamine users", 183.0, 204.0], ["OBJECTIVES The objective of this study was to determine the prevalence of dental erosion in preschool children in Jeddah, Saudi Arabia, and to relate this to caries and rampant caries in the same children. METHODS A sample of 987 children (2-5 years) was drawn from 17 kindergartens. Clinical examinations were carried out under standardised conditions by a trained and calibrated examiner (M.Al-M.). Measurement of erosion was confined to primary maxillary incisors and used a scoring system and criteria based on those used in the UK National Survey of Child Dental Health. Caries was diagnosed using BASCD criteria. Rampant caries was defined as caries affecting the smooth surfaces of two or more maxillary incisors. RESULTS Of the 987 children, 309 (31%) had evidence of erosion. For 186 children this was confined to enamel but for 123 it involved dentine and/or pulp. Caries were diagnosed in 720 (73%) of the children and rampant caries in 336 (34%). The mean dmft for the 987 children was 4.80 (+/-4.87). Of the 384 children who had caries but not rampant caries, 141 (37%) had erosion, a significantly higher proportion than the 72 (27%) out of 267 who were clinically caries free (SND=2.61, P<0.01). Of the 336 with rampant caries, 96 (29%) also had evidence of erosion. CONCLUSIONS The level of erosion was similar to that seen in children of an equivalent age in the UK. Caries was a risk factor for erosion in this group of children.", "which Study population ?", "Preschool children", 92.0, 110.0], ["OBJECTIVES To investigate the prevalence and nature of oral health problems among workers exposed to acid fumes in two industries in Jordan. SETTING Jordan's Phosphate Mining Company and a main private battery factory. DESIGN Comparison of general and oral health conditions between workers exposed to acid fumes and control group from the same workplace. SUBJECTS AND METHODS The sample consisted of 68 subjects from the phosphate industry (37 acid workers and 31 controls) drawn as a sample of convenience and 39 subjects from a battery factory (24 acid workers and 15 controls). 
Structured questionnaires on medical and dental histories were completed by interview. Clinical examinations were carried out to assess dental erosion, oral hygiene, and gingival health using the appropriate indices. Data were statistically analysed using Wilcoxon rank-sum test to assess the significance of differences between results attained by acid workers and control groups for the investigated parameters. RESULTS Differences in the erosion scores between acid workers in both industries and their controls were highly significant (P<0.05). In both industries, acid workers showed significantly higher oral hygiene scores, obtained by adding the debris and calculus scores, and gingival index scores than their controls (P<0.05). The single most common complaint was tooth hypersensitivity (80%) followed by dry mouth (77%) on average. CONCLUSION Exposure to acid fumes in the work place was significantly associated with dental erosion and deteriorated oral health status. Such exposure was also detrimental to general health. Findings pointed to the need of establishing appropriate educational, preventive and treatment measures coupled with efficient surveillance and environmental monitoring for detection of acid fumes in the workplace atmosphere.", "which Study population ?", "Workers exposed to acid fumes", 82.0, 111.0], ["A non-carious cervical lesion (NCCL) is the loss of hard dental tissue on the neck of the tooth, most frequently located on the vestibular plane. Causal agents are diverse and mutually interrelated. In the present study all vestibular NCCL were observed and recorded by the tooth wear index (TWI). The aim of the study was to determine the prevalence and severity of NCCL. For this purpose, 18555 teeth from the permanent dentition were examined in a population from the city of Rijeka, Croatia. Subjects were divided into six age groups. The teeth with most NCCL were the lower premolars, which also had the largest percentage of higher index levels, indicating the greater severity of the lesions. The most frequent index level was 1, and the prevalence and severity of the lesions increased with age.", "which Study population ?", "Permanent dentition", 412.0, 431.0], ["PURPOSE The purpose of this study was to evaluate the prevalence, distribution, and associated factors of tooth wear among psychiatric patients. MATERIALS AND METHODS Tooth wear was evaluated using the tooth wear index with scores ranging from 0 to 4. The presence of predisposing factors was recorded in 143 psychiatric patients attending the outpatient clinic at the Prince Rashed Hospital in northern Jordan. RESULTS The prevalence of a tooth wear score of 3 in at least one tooth was 90.9%. Patients in the age group 16 to 25 had the lowest prevalence (78.6%) of tooth wear. Increasing age was found to be a significant risk factor for the prevalence of tooth wear (P < .005). The occlusal/incisal surfaces were the most affected by wear, with mandibular teeth being more affected than maxillary teeth, followed by the palatal surface of the maxillary anterior teeth and then the buccal/labial surface of the mandibular teeth. The factors found to be associated with tooth wear were age, retirement and unemployment, masseter muscle pain, depression, and anxiety. 
CONCLUSION Patients' psychiatric condition and prescribed medication may be considered factors that influence tooth wear.", "which Study population ?", "Psychiatric patients", 123.0, 143.0], ["There are a few documented case studies on the adverse effect of wine on both dental hard and soft tissues. Professional wine tasting could present some degree of increased risk to dental erosion. Alcoholic beverages with a low pH may cause erosion, particularly if the attack is of long duration, and repeated over time. The purpose of this study was to compare the prevalence and severity of tooth surface loss between winemakers (exposed) and their spouses (non-exposed). Utilising a cross-sectional, comparative study design, a clinical examination was conducted to assess caries status; the presence and severity of tooth surface loss; staining (presence or absence); fluorosis and prosthetic status. The salivary flow rate, buffering capacity and pH were also measured. Thirty-six persons, twenty-one winemakers and fifteen of their spouses participated in the study. It was possible to show that there was a difference in terms of the prevalence and severity of tooth surface loss between the teeth of winemakers and those who are not winemakers. The occurrence of tooth surface loss amongst winemakers was highly likely due to frequent exposure of their teeth to wine. Frequent exposure of the teeth to wine, as occurs among wine tasters, is deleterious to enamel, and constitutes an occupational hazard. Erosion is an occupational risk for wine tasters.", "which Study population ?", "Winemakers", 421.0, 431.0], ["BACKGROUND: Production of microbial enzymes in bioreactors is a complex process including such phenomena as metabolic networks and mass transport resistances. The use of neural networks (NNs) to infer the state of bioreactors may be an interesting option that may handle the nonlinear dynamics of biomass growth and protein production. RESULTS: Feedforward multilayer perceptron (MLP) NNs were used for identification of the cultivation phase of Bacillus megaterium to produce the enzyme penicillin G acylase (EC. 3.5.1.11). The following variables were used as input to the net: run time and carbon dioxide concentration in the exhausted gas. The NN output associates a numerical value to the metabolic state of the cultivation, close to 0 during the lag phase, close to 1 during the exponential phase and approximately 2 for the stationary phase. This is a non-conventional approach for pattern recognition. During the exponential phase, another MLP was used to infer cellular concentration. Time, carbon dioxide concentration and stirrer speed form an integrated net input vector. Cellular concentrations provided by the NN were used in a hybrid approach to estimate product concentrations of the enzyme. The model employed a first-order approximation. CONCLUSION: Results showed that the algorithm was able to infer accurate values of cellular and product concentrations up to the end of the exponential growth phase, where an industrial run should stop. Copyright \u00a9 2008 Society of Chemical Industry", "which Systems applied ?", "Bioreactor", NaN, NaN], ["A virtual sensor that estimates product compositions in a middle-vessel batch distillation column has been developed. The sensor is based on a recurrent artificial neural network, and uses information available from secondary measurements (such as temperatures and flow rates). 
The criteria adopted for selecting the most suitable training data set and the benefits deriving from pre-processing these data by means of principal component analysis are demonstrated by simulation. The effects of sensor location, model initialization, and noisy temperature measurements on the performance of the soft sensor are also investigated. It is shown that the estimated compositions are in good agreement with the actual values.", "which Systems applied ?", "Batch distillation", 72.0, 90.0], ["A novel Structure Approaching Hybrid Neural Network (SAHNN) approach to model batch reactors is presented. The Virtual Supervisor\u2212Artificial Immune Algorithm method is utilized for the training of SAHNN, especially for the batch processes with partial unmeasurable state variables. SAHNN involves the use of approximate mechanistic equations to characterize unmeasured state variables. Since the main interest in batch process operation is on the end-of-batch product quality, an extended integral square error control index based on the SAHNN model is applied to track the desired temperature profile of a batch process. This approach introduces model mismatches and unmeasured disturbances into the optimal control strategy and provides a feedback channel for control. The performance of robustness and antidisturbances of the control system are then enhanced. The simulation result indicates that the SAHNN model and model-based optimal control strategy of the batch process are effective.", "which Systems applied ?", "Batch reactor", NaN, NaN], ["A liner shipping company seeks to provide liner services with shorter transit time compared with the benchmark of market-level transit time because of the ever-increasing competition. When the itineraries of its liner service routes are determined, the liner shipping company designs the schedules of the liner routes such that the wait time at transshipment ports is minimized. As a result of transshipment, multiple paths are available for delivering containers from the origin port to the destination port. Therefore, the medium-term (3 to 6 months) schedule design problem and the operational-level container-routing problem must be investigated simultaneously. The schedule design and container-routing problems were formulated by minimization of the sum of the total transshipment cost and penalty cost associated with longer transit time than the market-level transit time, minus the bonus for shorter transit time. The formulation is nonlinear, noncontinuous, and nonconvex. A genetic local search approach was developed to find good solutions to the problem. The proposed solution method was applied to optimize the Asia\u2013Europe\u2013Oceania liner shipping services of a global liner company.", "which Remarkable factor ?", "bonus", 891.0, 896.0], ["Empty container allocation problems arise due to imbalance on trades. Imbalanced trade is a common fact in the liner shipping, creating the necessity of repositioning empty containers from import-dominant ports to export-dominant ports in an economic and efficient way. The present work configures a liner shipping network, by performing the routes assignment and their integration to maximize the profit for a liner shipping company. The empty container repositioning problem is expressly taken into account in whole process. 
By considering the empty container repositioning problem in the network design, the choice of routes will be also influenced by the empty container flow, resulting in an optimum network, both for loaded and empty cargo. The Liner Shipping Network Design Program (LS-NET program) will define the best set of routes among a set of candidate routes, the best composition of the fleet for the network and configure the empty container repositioning network. Further, a network of Asian ports was studied and the results obtained show that considering the empty container allocation problem in the designing process can influence the final configuration of the network.", "which Remarkable factor ?", "Empty container repositioning", 439.0, 468.0], ["A liner shipping company seeks to provide liner services with shorter transit time compared with the benchmark of market-level transit time because of the ever-increasing competition. When the itineraries of its liner service routes are determined, the liner shipping company designs the schedules of the liner routes such that the wait time at transshipment ports is minimized. As a result of transshipment, multiple paths are available for delivering containers from the origin port to the destination port. Therefore, the medium-term (3 to 6 months) schedule design problem and the operational-level container-routing problem must be investigated simultaneously. The schedule design and container-routing problems were formulated by minimization of the sum of the total transshipment cost and penalty cost associated with longer transit time than the market-level transit time, minus the bonus for shorter transit time. The formulation is nonlinear, noncontinuous, and nonconvex. A genetic local search approach was developed to find good solutions to the problem. The proposed solution method was applied to optimize the Asia\u2013Europe\u2013Oceania liner shipping services of a global liner company.", "which Remarkable factor ?", "Penalty cost", 796.0, 808.0], ["Production of synthetic fuels from lignocellulose like wood or straw involves complex technology. Therefore, a large BTL (biomass to liquid) plant for biosynfuel production is more economic than many small facilities. A reasonable BTL\u2010plant capacity is \u22651 Mt/a biosynfuel similar to the already existing commercial CTL and GTL (coal to liquid, gas to liquid) plants of SASOL and SHELL, corresponding to at least 10% of the capacity of a modern oil refinery. BTL\u2010plant cost estimates are therefore based on reported experience with CTL and GTL plants. Direct supply of large BTL plants with low bulk density biomass by trucks is limited by high transport costs and intolerable local traffic density. Biomass densification by liquefaction in a fast pyrolysis process generates a compact bioslurry or biopaste, also denoted as biosyncrude as produced by the bioliq\u00ae process. The densified biosyncrude intermediate can now be cheaply transported from many local facilities in silo wagons by electric rail over long distances to a large and more economic central biosynfuel plant. In addition to the capital expenditure (capex) for the large and complex central biosynfuel plant, a comparable investment effort is required for the construction of several dozen regional pyrolysis plants with simpler technology. Investment costs estimated for fast pyrolysis plants reported in the literature have been complemented by own studies for plants with ca. 100 MWth biomass input. 
The breakdown of BTL synfuel manufacturing costs of ca. 1 \u20ac/kg in central EU shows that about half of the costs are caused by the biofeedstock, including transport. This helps to generate new income for farmers. The other half is caused by technical costs, which are about proportional to the total capital investment (TCI) for the pyrolysis and biosynfuel production plants. Labor is a minor contribution in the relatively large facilities. \u00a9 2009 Society of Chemical Industry and John Wiley & Sons, Ltd", "which Plant type ?", "BtL", 117.0, 120.0], ["This paper presents two-stage bi-objective stochastic programming models for disaster relief operations. We consider a problem that occurs in the aftermath of a natural disaster: a transportation system for supplying disaster victims with relief goods must be established. We propose bi-objective optimization models with a monetary objective and humanitarian objective. Uncertainty in the accessibility of the road network is modeled by a discrete set of scenarios. The key features of our model are the determination of locations for intermediate depots and acquisition of vehicles. Several model variants are considered. First, the operating budget can be fixed at the first stage for all possible scenarios or determined for each scenario at the second stage. Second, the assignment of vehicles to a depot can be either fixed or free. Third, we compare a heterogeneous vehicle fleet to a homogeneous fleet. We study the impact of the variants on the solutions. The set of Pareto-optimal solutions is computed by applying the adaptive Epsilon-constraint method. We solve the deterministic equivalents of the two-stage stochastic programs using the MIP-solver CPLEX.", "which Decisions First-stage ?", "budget", 645.0, 651.0], ["In this study, we consider facility location decisions for a humanitarian relief chain responding to quick-onset disasters. In particular, we develop a model that determines the number and locations of distribution centres in a relief network and the amount of relief supplies to be stocked at each distribution centre to meet the needs of people affected by the disasters. Our model, which is a variant of the maximal covering location model, integrates facility location and inventory decisions, considers multiple item types, and captures budgetary constraints and capacity restrictions. We conduct computational experiments to illustrate how the proposed model works on a realistic problem. Results show the effects of pre- and post-disaster relief funding on relief system's performance, specifically on response time and the proportion of demand satisfied. Finally, we discuss the managerial implications of the proposed model.", "which Decisions First-stage ?", "Locations", 189.0, 198.0], ["This article introduces a risk-averse stochastic modeling approach for a pre-disaster relief network design problem under uncertain demand and transportation capacities. The sizes and locations of the response facilities and the inventory levels of relief supplies at each facility are determined while guaranteeing a certain level of network reliability. A probabilistic constraint on the existence of a feasible flow is introduced to ensure that the demand for relief supplies across the network is satisfied with a specified high probability. Responsiveness is also accounted for by defining multiple regions in the network and introducing local probabilistic constraints on satisfying demand within each region. 
These local constraints ensure that each region is self-sufficient in terms of providing for its own needs with a large probability. In particular, the Gale\u2013Hoffman inequalities are used to represent the conditions on the existence of a feasible network flow. The solution method rests on two pillars. A preprocessing algorithm is used to eliminate redundant Gale\u2013Hoffman inequalities and then proposed models are formulated as computationally efficient mixed-integer linear programs by utilizing a method based on combinatorial patterns. Computational results for a case study and randomly generated problem instances demonstrate the effectiveness of the models and the solution method.", "which Decisions First-stage ?", "Locations", 184.0, 193.0], ["This paper presents two-stage bi-objective stochastic programming models for disaster relief operations. We consider a problem that occurs in the aftermath of a natural disaster: a transportation system for supplying disaster victims with relief goods must be established. We propose bi-objective optimization models with a monetary objective and humanitarian objective. Uncertainty in the accessibility of the road network is modeled by a discrete set of scenarios. The key features of our model are the determination of locations for intermediate depots and acquisition of vehicles. Several model variants are considered. First, the operating budget can be fixed at the first stage for all possible scenarios or determined for each scenario at the second stage. Second, the assignment of vehicles to a depot can be either fixed or free. Third, we compare a heterogeneous vehicle fleet to a homogeneous fleet. We study the impact of the variants on the solutions. The set of Pareto-optimal solutions is computed by applying the adaptive Epsilon-constraint method. We solve the deterministic equivalents of the two-stage stochastic programs using the MIP-solver CPLEX.", "which Decisions First-stage ?", "Locations", 522.0, 531.0], ["Purpose \u2013 The purpose of this paper is to discuss and to help address the need for quantitative models to support and improve procurement in the context of humanitarian relief efforts. Design/methodology/approach \u2013 This research presents a two\u2010stage stochastic decision model with recourse for procurement in humanitarian relief supply chains, and compares its effectiveness on an illustrative example with respect to a standard solution approach. Findings \u2013 Results show the ability of the new model to capture and model both the procurement process and the uncertainty inherent in a disaster relief situation, in support of more efficient and effective procurement plans. Research limitations/implications \u2013 The research focus is on sudden onset disasters and it does not differentiate between local and international suppliers. A number of extensions of the base model could be implemented, however, so as to address the specific needs of a given organization and their procurement process. Practical implications \u2013 Despi...", "which Decisions First-stage ?", "Procurement", 126.0, 137.0], ["Abstract The goal of this paper is twofold. First, we present a stochastic programming-based model that provides optimal design solutions for transportation networks in light of possible emergency evacuations. 
Second, as traffic congestion is a growing problem in metropolitan areas around the world, decision makers might not be willing to design transportation networks solely for evacuation purposes since daily traffic patterns differ tremendously from traffic observed during evacuations. This is especially true when potential disaster locations are limited in number and confined to specific regions (e.g. coastal regions might be more prone to flooding). However, as extreme events such as excessive rainfall become more prevalent everywhere, it is less obvious that the design of transportation networks for evacuation planning and congestion reduction is mutually exclusive. That is, capacity expansion decisions to reduce congestion might also be reasonable from an evacuation planning point of view. Conversely, expansion decisions for evacuation planning might turn out to be effective for congestion relief. To date, no numerical evidence has been presented in the literature to support or disprove these conjectures. Preliminary numerical evidence is provided in this paper.", "which Decisions First-stage ?", "Capacity expansion", 894.0, 912.0], ["In this article, we examine the design of an evacuation tree, in which evacuation is subject to capacity restrictions on arcs. The cost of evacuating people in the network is determined by the sum of penalties incurred on arcs on which they travel, where penalties are determined according to a nondecreasing function of time. Given a discrete set of disaster scenarios affecting network population, arc capacities, transit times, and penalty functions, we seek to establish an optimal a priori evacuation tree that minimizes the expected evacuation penalty. The solution strategy is based on Benders decomposition, in which the master problem is a mixed\u2010integer program and each subproblem is a time\u2010expanded network flow problem. We provide efficient methods for obtaining primal and dual subproblem solutions, and analyze techniques for improving the strength of the master problem formulation, thus reducing the number of master problem solutions required for the algorithm's convergence. We provide computational results to compare the efficiency of our methods on a set of randomly generated test instances. \u00a9 2008 Wiley Periodicals, Inc. NETWORKS, 2009", "which Decisions First-stage ?", "Evacuation tree", 45.0, 60.0], ["The increasing number of campus-related emergency incidents, in combination with the requirements imposed by the Clery Act, have prompted college campuses to develop emergency notification systems to inform community members of extreme events that may affect them. Merely deploying emergency notification systems on college campuses, however, does not guarantee that these systems will be effective; student compliance plays a very important role in establishing such effectiveness. Immediate compliance with alerts, as opposed to delayed compliance or noncompliance, is a key factor in improving student safety on campuses. This paper investigates the critical antecedents that motivate students to comply immediately with messages from campus emergency notification systems. Drawing on Etzioni's compliance theory, a model is developed. Using a scenario-based survey method, the model is tested in five types of events\u2014snowstorm, active shooter, building fire, health-related, and robbery\u2014and with more than 800 college students from the Northern region of the United States. 
The results from this study suggest that subjective norm and information quality trust are, in general, the most important factors that promote immediate compliance. This research contributes to the literature on compliance, emergency notification systems, and emergency response policies.", "which Focus Group ?", "Campus", 25.0, 31.0], ["Research problem: This study investigates the factors influencing students' intentions to use emergency notification services to receive news about campus emergencies through short-message systems (SMS) and social network sites (SNS). Research questions: (1) What are the critical factors that influence students' intention to use SMS to receive emergency notifications? (2) What are the critical factors that influence students' intention to use SNS to receive emergency notifications? Literature review: By adapting Media Richness theory and prior research on emergency notifications, we propose that perceived media richness, perceived trust in information, perceived risk, perceived benefit, and perceived social influence impact the intention to use SMS and SNS to receive emergency notifications. Methodology: We conducted a quantitative, survey-based study that tested our model in five different scenarios, using logistic regression to test the research hypotheses with 574 students of a large research university in the northeastern US. Results and discussion: Results suggest that students' intention to use SNS is impacted by media richness, perceived benefit, and social influence, while students' intention to use SMS is influenced by trust and perceived benefit. Implications to emergency managers suggest how to more effectively manage and market the service through both channels. The results also suggest using SNS as an additional means of providing emergency notifications at academic institutions.", "which Focus Group ?", "Campus", 148.0, 154.0], ["This study employs the perspective of organizational resilience to examine how information and communication technologies (ICTs) were used by organizations to aid in their recovery after Hurricane Katrina. In-depth interviews enabled longitudinal analysis of ICT use. Results showed that organizations enacted a variety of resilient behaviors through adaptive ICT use, including information sharing, (re)connection, and resource acquisition. Findings emphasize the transition of ICT use across different stages of recovery, including an anticipated stage. Key findings advance organizational resilience theory with an additional source of resilience, external availability. Implications and contributions to the literature of ICTs in disaster contexts and organizational resilience are discussed.", "which Focus Group ?", "organizations", 142.0, 155.0], ["Recent extreme events show that Twitter, a micro-blogging service, is emerging as the dominant social reporting tool to spread information on social crises. It is elevating the online public community to the status of first responders who can collectively cope with social crises. However, at the same time, many warnings have been raised about the reliability of community intelligence obtained through social reporting by the amateur online community. Using rumor theory, this paper studies citizen-driven information processing through Twitter services using data from three social crises: the Mumbai terrorist attacks in 2008, the Toyota recall in 2010, and the Seattle cafe shooting incident in 2012. 
We approach social crises as communal efforts for community intelligence gathering and collective information processing to cope with and adapt to uncertain external situations. We explore two issues: (1) collective social reporting as an information processing mechanism to address crisis problems and gather community intelligence, and (2) the degeneration of social reporting into collective rumor mills. Our analysis reveals that information with no clear source provided was the most important, personal involvement next in importance, and anxiety the least yet still important rumor-causing factor on Twitter under social crisis situations.", "which Focus Group ?", "Public", 184.0, 190.0], ["The antifungal activity of Artemisia herba alba was found to be associated with two major volatile compounds isolated from the fresh leaves of the plant. Carvone and piperitone were isolated and identified by GC/MS, GC/IR, and NMR spectroscopy. Antifungal activity was measured against Penicillium citrinum (ATCC 10499) and Mucora rouxii (ATCC 24905). The antifungal activity (IC50) of the purified compounds was estimated to be 5 \u03bcg/ml, 2 \u03bcg/ml against Penicillium citrinum and 7 \u03bcg/ml, 1.5 \u03bcg/ml against Mucora rouxii for carvone and piperitone, respectively.", "which Main components (P5%) ?", "Carvone", 154.0, 161.0], ["A 71-year-old man who had undergone an ileorectal anastomosis some years earlier developed fulminant fatal Clostridium difficile pseudomembranous enteritis and proctitis after a prostatectomy. This case and three reports of C. difficile involvement of the small bowel in adults emphasize that the small intestine can be affected. No case like ours, of enteritis after colectomy from C. difficile, has hitherto been reported.", "which Intestinal operation ?", "Colectomy", 369.0, 378.0], ["Clostridium difficile infection is usually associated with antibiotic therapy and is almost always limited to the colonic mucosa. Small bowel enteritis is rare: only 9 cases have been previously cited in the literature. This report describes a case of C. difficile small bowel enteritis that occurred in a patient after total colectomy and reviews the 9 previously reported cases of C. difficile enteritis.", "which Intestinal operation ?", "Colectomy", 326.0, 335.0], ["To the Editor: A 54-year-old male was admitted to a community hospital with a 3-month history of diarrhea up to 8 times a day associated with bloody bowel motions and weight loss of 6 kg. He had no past medical history or family history of note. A clinical diagnosis of colitis was made and the patient underwent a limited colonoscopy which demonstrated continuous mucosal inflammation and ulceration that was most marked in the rectum. The clinical and endoscopic findings were suggestive of acute ulcerative colitis (UC), which was subsequently supported by histopathology. The patient was managed with bowel rest and intravenous steroids. However, he developed toxic megacolon on day 4 of his admission and underwent a total colectomy with end ileostomy. On the third postoperative day the patient developed a pyrexia of 39\u00b0C, a septic screen was performed, and the central venous line (CVP) was changed with the tip culturing methicillin-resistant Staphylococcus aureus (MRSA). Intravenous gentamycin was commenced and discontinued after 5 days, with the patient remaining afebrile and stable.
On the tenth postoperative day the patient became tachycardic (pulse 110/min), diaphoretic (temperature of 39.4\u00b0C), hypotensive (diastolic of 60 mm Hg), and high-volume nasogastric aspirates were noted (2000 mL). A diagnosis of septic shock was considered although the etiology was unclear. The patient was resuscitated with intravenous fluids and transferred to the regional surgical unit for Intensive Care Unit monitoring and management. A computed tomography (CT) of the abdomen showed a marked inflammatory process with bowel wall thickening along the entire small bowel with possible intramural air, raising the suggestion of ischemic bowel (Fig. 1). However, on clinical assessment the patient elicited no signs of peritonism, his vitals were stable, he was not acidotic (pH 7.40), urine output was adequate, and his blood pressure was being maintained without inotropic support. Furthermore, his ileostomy appeared healthy and well perfused, although a high volume (2500 mL in the previous 18 hours), malodorous output was noted. A sample of the stoma output was sent for microbiological analysis. Given that the patient was not exhibiting evidence of peritonitis with normal vital signs, a conservative policy of fluid resuscitation was pursued with plans for exploratory laparotomy if he disimproved. Ileostomy output sent for microbiology assessment was positive for Clostridium difficile toxin A and B utilizing culture and enzyme immunoassays (EIA). Intravenous vancomycin, metronidazole, and rifampicin via a nasogastric tube were commenced in conjunction with bowel rest and total parenteral nutrition. The ileostomy output reduced markedly within 2 days and the patient\u2019s clinical condition improved. Follow-up culture of the ileostomy output was negative for C. difficile toxins. The patient was discharged in good health on full oral diet 12 days following transfer. Review of histopathology relating to the resected colon and subsequent endoscopic assessment of the retained rectum confirmed the initial diagnosis of UC, rather than a primary diagnosis of pseudomembranous colitis. Clostridium difficile is the leading cause of nosocomial diarrhea associated with antibiotic therapy and is almost always limited to the colonic mucosa.1 Small bowel enteritis secondary to C. difficile is exceedingly rare, with only 21 previous cases cited in the literature.2,3 Of this cohort, 18 patients had a surgical procedure at some timepoint prior to the development of C. difficile enteritis, while the remaining 3 patients had no surgical procedure prior to the infection. The time span between surgery and the development of enteritis ranged from 4 days to 31 years. Antibiotic therapy predisposed to the development of C. difficile enteritis in 20 of the cases. A majority of the patients (n = 11) had a history of inflammatory bowel disease (IBD), with 8 having UC similar to our patient and the remaining 3 patients having a history of Crohn\u2019s disease. The etiology of small bowel enteritis remains unclear. C. difficile has been successfully isolated from the small bowel in both autopsy specimens and from jejunal aspirate of patients with chronic diarrhea, suggesting that the small bowel may act as a reservoir for C. difficile.4 This would suggest that C. difficile could become pathogenic in the small bowel following a disruption in the small bowel flora in the setting of antibiotic therapy.
This would be supported by the observation that the majority of cases reported occurred within 90 days of surgery with attendant disruption of bowel function. The prevalence of C. difficile-associated disease (CDAD) in patients with IBD is increasing. Issa et al.5 examined the impact of CDAD in a cohort of patients with IBD. They found that more than half of the patients with a positive culture for C. difficile were admitted and 20% required a colectomy. They reported that maintenance immunomodulator use and colonic involvement were independent risk factors for C. difficile infection in patients with IBD. The rising incidence of C. difficile in patients with IBD coupled with the use of increasingly potent immunomodulatory therapies means that clinicians must have a high index of suspicion.", "which Intestinal operation ?", "Colectomy", 728.0, 737.0], ["Mobile phone text messages can be used to disseminate information and advice to the public in disasters. We sought to identify factors influencing how adolescents would respond to receiving emergency text messages. Qualitative interviews were conducted with participants aged 12\u201318 years. Participants discussed scenarios relating to flooding and the discovery of an unexploded World War Two bomb and were shown example alerts that might be sent out in these circumstances. Intended compliance with the alerts was high. Participants noted that compliance would be more likely if: they were familiar with the system; the messages were sent by a trusted source; messages were reserved for serious incidents; multiple messages were sent; messages were kept short and formal.", "which paper: Theory / Construct / Model ?", "Compliance", 483.0, 493.0], ["Abstract Since November 2012, Dutch civil defense organizations employ NL-Alert, a cellular broadcast-based warning system to inform the public. Individuals receive a message on their mobile phone about the actual threat, as well as some advice how to deal with the situation at hand. This study reports on the behavioral effects of NL-Alert (n = 643). The current risk communication literature suggested underlying mechanisms such as perceived threat, efficacy beliefs, social norms, information sufficiency, and perceived message quality. Results indicate that adaptive behavior and behavioral avoidance can be predicted by subsets of these determinants. Affective and social predictors appear to be more important in this context than socio-cognitive predictors. Implications for the use of cellular broadcast systems like NL-Alert as a warning tool in emergency situations are discussed.", "which paper: Theory / Construct / Model ?", "Behavioral Avoidance", 580.0, 600.0],
["The increasing number of campus-related emergency incidents, in combination with the requirements imposed by the Clery Act, has prompted college campuses to develop emergency notification systems to inform community members of extreme events that may affect them. Merely deploying emergency notification systems on college campuses, however, does not guarantee that these systems will be effective; student compliance plays a very important role in establishing such effectiveness. Immediate compliance with alerts, as opposed to delayed compliance or noncompliance, is a key factor in improving student safety on campuses. This paper investigates the critical antecedents that motivate students to comply immediately with messages from campus emergency notification systems. Drawing on Etzioni's compliance theory, a model is developed. Using a scenario-based survey method, the model is tested in five types of events--snowstorm, active shooter, building fire, health-related, and robbery--and with more than 800 college students from the Northern region of the United States. The results from this study suggest that subjective norm and information quality trust are, in general, the most important factors that promote immediate compliance. This research contributes to the literature on compliance, emergency notification systems, and emergency response policies.", "which paper: Theory / Construct / Model ?", "Compliance theory", 798.0, 815.0], ["This research questions how social media use affords new forms of organizing and collective engagement. The concept of connective action has been introduced to characterize such new forms of collective engagement in which actors coproduce and circulate content based upon an issue of mutual interest. Yet, how the use of social media actually affords connective action still needed to be investigated. Mixed methods analyses of microblogging use during the Gulf of Mexico oil spill bring insights onto this question and reveal in particular how multiple actors enacted emerging and interdependent roles with their distinct patterns of feature use. The findings allow us to elaborate upon the concept of connective affordances as collective level affordances actualized by actors in team interdependent roles. Connective affordances extend research on affordances as a relational concept by considering not only the relationships between technology and users but also the interdependence type among users and the effects of this interdependence onto what users can do with the technology. This study contributes to research on social media use by paying close attention to how distinct patterns of feature use enact emerging roles. Adding to IS scholarship on the collective use of technology, it considers how the patterns of feature use for emerging groups of actors are intricately and mutually related to each other.", "which paper: Theory / Construct / Model ?", "Connective affordances", 703.0, 725.0], ["Research problem: A construct mediated in digital environments, information communication technology (ICT) literacy is operationally defined as the ability of individuals to participate effectively in transactions that invoke illocutionary action.
This study investigates ICT literacy through a simulation designed to capture that construct, to deploy the construct model to measure participant improvement of ICT literacy under experimental conditions, and to estimate the potential for expanded model development. Research questions: How might a multidisciplinary literature review inform a model for ICT literacy? How might a simulation be designed that enables sufficient construct representation for modeling? How might prepost testing simulation be designed to investigate the potential for improved command of ICT literacy? How might a regression model account for variance within the model by the addition of affective elements to a cognitive model? Literature review: Existing conceptualizations of the ICT communication environment demonstrate the need for a new communication model that is sensitive to short text messaging demands in crisis communication settings. As a result of this perfect storm of limits requiring the communicator to rely on critical thinking, awareness of context, and information integration, we designed a cognitive-affective model informed by genre theory to capture the ICT construct: A sociocognitive ability that, at its most effective, facilitates illocutionary action\u2013to confirm and warn, to advise and ask, and to thank and request\u2013for specific audiences of emergency responders. Methodology: A prepost design with practitioner subjects (N=50) allowed investigation of performance improvement on tasks demanding illocutionary action after training on tasks of high, moderate, and low demand. Through a model based on the independent variables character count, wordcount, and decreased time on task (X) as related to the dependent variable of an overall episode score (Y), we were able to examine the internal construct strength with and without the addition of affective independent variables. Results and discussion: Of the three prepost models used to study the impact of training, participants demonstrated statistically significant improvement on episodes of high demand on all cognitive model variables. The addition of affective variables, such as attitudes toward text messaging, allowed increased model strength on tasks of high and moderate complexity. These findings suggest that an empirical basis for the construct of ICT literacy is possible and that, under simulation conditions, practitioner improvement may be demonstrated. Practically, it appears that it is possible to train emergency responders to improve their command of ICT literacy so that those most in need of humanitarian response during a crisis may receive it. Future research focusing on communication in digital environments will undoubtedly extend these findings in terms of construct validation and deployment in crisis settings.", "which paper: Theory / Construct / Model ?", "ICT literacy", 272.0, 284.0], ["Research problem: This study investigates the factors influencing students' intentions to use emergency notification services to receive news about campus emergencies through short-message systems (SMS) and social network sites (SNS). Research questions: (1) What are the critical factors that influence students' intention to use SMS to receive emergency notifications? (2) What are the critical factors that influence students' intention to use SNS to receive emergency notifications?
Literature review: By adapting Media Richness theory and prior research on emergency notifications, we propose that perceived media richness, perceived trust in information, perceived risk, perceived benefit, and perceived social influence impact the intention to use SMS and SNS to receive emergency notifications. Methodology: We conducted a quantitative, survey-based study that tested our model in five different scenarios, using logistic regression to test the research hypotheses with 574 students of a large research university in the northeastern US. Results and discussion: Results suggest that students' intention to use SNS is impacted by media richness, perceived benefit, and social influence, while students' intention to use SMS is influenced by trust and perceived benefit. Implications to emergency managers suggest how to more effectively manage and market the service through both channels. The results also suggest using SNS as an additional means of providing emergency notifications at academic institutions.", "which paper: Theory / Construct / Model ?", "media richness", 518.0, 532.0], ["Recent extreme events show that Twitter, a micro-blogging service, is emerging as the dominant social reporting tool to spread information on social crises. It is elevating the online public community to the status of first responders who can collectively cope with social crises. However, at the same time, many warnings have been raised about the reliability of community intelligence obtained through social reporting by the amateur online community. Using rumor theory, this paper studies citizen-driven information processing through Twitter services using data from three social crises: the Mumbai terrorist attacks in 2008, the Toyota recall in 2010, and the Seattle cafe shooting incident in 2012. We approach social crises as communal efforts for community intelligence gathering and collective information processing to cope with and adapt to uncertain external situations. We explore two issues: (1) collective social reporting as an information processing mechanism to address crisis problems and gather community intelligence, and (2) the degeneration of social reporting into collective rumor mills. Our analysis reveals that information with no clear source provided was the most important, personal involvement next in importance, and anxiety the least yet still important rumor-causing factor on Twitter under social crisis situations.", "which paper: Theory / Construct / Model ?", "Rumor theory", 460.0, 472.0], ["With the successful preparation of \u03b3-alumina with high-energy external surfaces such as {111} facets, the crystal-facet effect of \u03b3-Al2O3 on surface-loaded CrOx has been explored for semihydrogenation of acetylene. Our results indeed demonstrate that the harmonious interaction of CrOx with traditional \u03b3-Al2O3, the external surfaces of which are typically low-energy {110} facets, has caused a highly efficient performance for semihydrogenation of acetylene over CrOx/(110)\u03b3-Al2O3 catalyst, whereas the activity of the CrOx/(111)\u03b3-Al2O3 catalyst for acetylene hydrogenation is suppressed dramatically due to the limited formation of active Cr species, restrained by the high-energy {111} facets of \u03b3-Al2O3. Furthermore, the use of inexpensive CrOx as the active component for semihydrogenation of acetylene is an economically friendly alternative relative to commercial precious Pd catalysts.
This work sheds light on a strategy for exploiting the crystal-facet effect of the supports to purposefully tailor the catalyti...", "which catalysts ?", "CrOx/(110)\u03b3-Al2O3", NaN, NaN], ["Developing highly selective and stable catalysts for acetylene hydrogenation is an imperative task in the chemical industry. Herein, core-shell Pd@carbon nanoparticles supported on carbon nanotubes (Pd@C/CNTs) were synthesized. During the hydrogenation of acetylene, the selectivity of Pd@C/CNTs to ethylene was distinctly improved. Moreover, Pd@C/CNTs showed excellent stability during the hydrogenation reaction.", "which catalysts ?", "Pd@C/CNT", NaN, NaN], ["We reported that atomically dispersed Pd on graphene can be fabricated using the atomic layer deposition technique. Aberration-corrected high-angle annular dark-field scanning transmission electron microscopy and X-ray absorption fine structure spectroscopy both confirmed that isolated Pd single atoms dominantly existed on the graphene support. In selective hydrogenation of 1,3-butadiene, the single-atom Pd1/graphene catalyst showed about 100% butenes selectivity at 95% conversion at a mild reaction condition of about 50 \u00b0C, which is likely due to the changes of 1,3-butadiene adsorption mode and enhanced steric effect on the isolated Pd atoms. More importantly, excellent durability against deactivation via either aggregation of metal atoms or carbonaceous deposits during a total 100 h of reaction time on stream was achieved. Therefore, the single-atom catalysts may open up more opportunities to optimize the activity, selectivity, and durability in selective hydrogenation reactions.", "which catalysts ?", "Pd1/graphene", 408.0, 420.0], ["The environmental Kuznets curve hypothesis is a theory by which the relationship between per capita GDP and per capita pollutant emissions has an inverted U shape. This implies that, past a certain point, economic growth may actually be profitable for environmental quality. Most studies on this subject are based on estimating fully parametric quadratic or cubic regression models. While this is not technically wrong, such an approach somewhat lacks flexibility since it may fail to detect the true shape of the relationship if it happens not to be of the specified form. We use semiparametric and flexible nonlinear parametric modelling methods in an attempt to provide more robust inferences. We find little evidence in favour of the environmental Kuznets curve hypothesis. Our main results could be interpreted as indicating that the oil shock of the 1970s has had an important impact on progress towards less polluting technology and production.", "which Power of income ?", "Cubic", 358.0, 363.0], ["This article investigates the Environmental Kuznets Curves (EKC) for CO2 emissions in a panel of 109 countries during the period 1959 to 2001. The length of the series makes the application of a heterogeneous estimator suitable from an econometric point of view. The results, based on the hierarchical Bayes estimator, show that different EKC dynamics are associated with the different sub-samples of countries considered. On average, more industrialized countries show evidence of EKC in quadratic specifications, which nevertheless are probably evolving into an N-shape based on their cubic specification. Nevertheless, it is worth noting that the EU, and not the Umbrella Group led by US, has been driving currently observed EKC-like shapes. The latter is associated with monotonic income\u2013CO2 dynamics.
The EU shows a clear EKC shape. Evidence for less-developed countries consistently shows that CO2 emissions rise positively with income, though there are some signs of an EKC. Analyses of future performance, nevertheless, favour quadratic specifications, thus supporting EKC evidence for wealthier countries and non-EKC shapes for industrializing regions.", "which Power of income ?", "Cubic", 587.0, 592.0], ["Abstract The paper considers the relationship between greenhouse gas emissions (GHG) as the main variable of climate change and gross domestic product (GDP), using the environmental Kuznets curve (EKC) technique. At early stages of economic growth, EKC indicates the increase of pollution related to the growing use of resources. However, when a certain level of income per capita is reached, the trend reverses and at a higher stage of development, further economic growth leads to improvement of the environment. According to the researchers, this implies that the environmental impact indicator is an inverted U-shaped function of income per capita. In this paper, the cubic equation is used to empirically check the validity of the EKC relationship for European countries. The analysis is based on the survey of EU-27, Norway and Switzerland in the period of 1995\u20132010. The data is taken from the Eurostat database. To gain some insights into the environmental trends in each country, the article highlights the speci...", "which Power of income ?", "Cubic", 671.0, 676.0], ["This article investigates the Environmental Kuznets Curves (EKC) for CO2 emissions in a panel of 109 countries during the period 1959 to 2001. The length of the series makes the application of a heterogeneous estimator suitable from an econometric point of view. The results, based on the hierarchical Bayes estimator, show that different EKC dynamics are associated with the different sub-samples of countries considered. On average, more industrialized countries show evidence of EKC in quadratic specifications, which nevertheless are probably evolving into an N-shape based on their cubic specification. Nevertheless, it is worth noting that the EU, and not the Umbrella Group led by US, has been driving currently observed EKC-like shapes. The latter is associated with monotonic income\u2013CO2 dynamics. The EU shows a clear EKC shape. Evidence for less-developed countries consistently shows that CO2 emissions rise positively with income, though there are some signs of an EKC. Analyses of future performance, nevertheless, favour quadratic specifications, thus supporting EKC evidence for wealthier countries and non-EKC shapes for industrializing regions.", "which Power of income ?", "Quadratic", 489.0, 498.0], ["In the last few years, several studies have found an inverted-U relationship between per capita income and environmental degradation. This relationship, known as the environmental Kuznets curve (EKC), suggests that environmental degradation increases in the early stages of growth, but it eventually decreases as income exceeds a threshold level. However, this paper investigates the relationship between per capita CO2 emission, economic growth and trade liberalization based on econometric techniques of unit root test, co-integration and a panel data set during the period 1960-1996 for BRICS countries. Data properties were analyzed to determine their stationarity using the LLC, IPS, ADF and PP unit root tests which indicated that the series are I(1).
We find a cointegration relationship between per capita CO2 emission, economic growth and trade liberalization by applying the Kao panel cointegration test. The evidence indicates that in the long-run trade liberalization has a positive significant impact on CO2 emissions and the impact of trade liberalization on emissions growth depends on the level of income. Our findings suggest that there is a quadratic relationship between real GDP and CO2 emissions for the region as a whole. The estimated long-run coefficients of real GDP and its square satisfy the EKC hypothesis in all of the studied countries. Our estimation shows that the inflection point, or optimal point, of real GDP per capita is about 5269.4 dollars. The results show that on average, sample countries are on the positive side of the inverted U curve. The turning points are very low in some cases and very high in other cases, hence providing poor evidence in support of the EKC hypothesis. Thus, our findings suggest that all BRICS countries need to sacrifice economic growth to decrease their emission levels.", "which Power of income ?", "Quadratic", 1152.0, 1161.0], ["In this article, we attempt to use panel unit root and panel cointegration tests as well as the fully-modified ordinary least squares (OLS) approach to examine the relationships among carbon dioxide emissions, energy use and gross domestic product for 22 Organization for Economic Cooperation and Development (OECD) countries (Annex II Parties) over the 1971\u20132000 period. Furthermore, in order to investigate these results for other direct greenhouse gases (GHGs), we have estimated the Environmental Kuznets Curve (EKC) hypothesis by using the total GHG, methane, and nitrous oxide. The empirical results support that energy use still plays an important role in explaining the GHG emissions for OECD countries. In terms of the EKC hypothesis, the results showed that a quadratic relationship was found to exist in the long run. Thus, other countries could learn from developed countries in this regard and try to smooth the EKC curve at relatively less cost.", "which Power of income ?", "Quadratic", 770.0, 779.0], ["Purpose \u2013 The purpose of this paper is to analyse the implication of trade on carbon emissions in a panel of eight highly trading Southeast and East Asian countries, namely, China, Indonesia, South Korea, Malaysia, Hong Kong, The Philippines, Singapore and Thailand. Design/methodology/approach \u2013 The analysis relies on the standard quadratic environmental Kuznets curve (EKC) extended to include energy consumption and international trade. A battery of panel unit root and co-integration tests is applied to establish the variables\u2019 stochastic properties and their long-run relations. Then, the specified EKC is estimated using the panel dynamic ordinary least square (OLS) estimation technique. Findings \u2013 The panel co-integration statistics verifies the validity of the extended EKC for the countries under study. Estimation of the long-run EKC via the dynamic OLS estimation method reveals the environmentally degrading effects of trade in these countries, especially in ASEAN plus South Korea and Hong Kong. Pra...", "which Power of income ?", "Quadratic", 333.0, 342.0], ["This study investigates the relationship between CO2 emissions, economic growth, energy use and electricity production by hydroelectric sources in Brazil. To verify the environmental Kuznets curve (EKC) hypothesis we use time-series data for the period 1971-2011.
The autoregressive distributed lag methodology was used to test for cointegration in the long run. Additionally, the vector error correction model Granger causality test was applied to verify the predictive value of independent variables. Empirical results find that there is a quadratic long run relationship between CO2 emissions and economic growth, confirming the existence of an EKC for Brazil. Furthermore, energy use shows increasing effects on emissions, while electricity production by hydropower sources has an inverse relationship with environmental degradation. The short run model does not provide evidence for the EKC theory. The differences between the results in the long and short run models can be considered for establishing environmental policies. This suggests that special attention to both variables \u2013 energy use and the electricity production by hydroelectric sources \u2013 could be an effective way to mitigate CO2 emissions in Brazil.", "which Power of income ?", "Quadratic", 542.0, 551.0], ["Advocates of software risk management claim that by identifying and analyzing threats to success (i.e., risks) action can be taken to reduce the chance of failure of a project. The first step in the risk management process is to identify the risk itself, so that appropriate countermeasures can be taken. One problem in this task, however, is that no validated lists are available to help the project manager understand the nature and types of risks typically faced in a software project. This paper represents a first step toward alleviating this problem by developing an authoritative list of common risk factors. We deploy a rigorous data collection method called a \"ranking-type\" Delphi survey to produce a rank-order list of risk factors. This data collection method is designed to elicit and organize opinions of a panel of experts through iterative, controlled feedback. Three simultaneous surveys were conducted in three different settings: Hong Kong, Finland, and the United States. This was done to broaden our view of the types of risks, rather than relying on the view of a single culture, an aspect that has been ignored in past risk management research. In forming the three panels, we recruited experienced project managers in each country. The paper presents the obtained risk factor list, compares it with other published risk factor lists for completeness and variation, and analyzes common features and differences in risk factor rankings in the three countries. We conclude by discussing implications of our findings for both research and improving risk management practice.", "which research method & question set ?", "Delphi", 684.0, 690.0], ["Identifying the risks associated with the implementation of clinical information systems (CIS) in health care organizations can be a major challenge for managers, clinicians, and IT specialists, as there are numerous ways in which they can be described and categorized. Risks vary in nature, severity, and consequence, so it is important that those considered to be high-level risks be identified, understood, and managed. This study addresses this issue by first reviewing the extant literature on IT/CIS project risks, and second conducting a Delphi survey among 21 experts highly involved in CIS projects in Canada. In addition to providing a comprehensive list of risk factors and their relative importance, this study is helpful in unifying the literature on IT implementation and health informatics.
Our risk factor-oriented research actually confirmed many of the factors found to be important in both these streams.", "which research method & question set ?", "Delphi", 545.0, 551.0], ["In this study, we consider facility location decisions for a humanitarian relief chain responding to quick-onset disasters. In particular, we develop a model that determines the number and locations of distribution centres in a relief network and the amount of relief supplies to be stocked at each distribution centre to meet the needs of people affected by the disasters. Our model, which is a variant of the maximal covering location model, integrates facility location and inventory decisions, considers multiple item types, and captures budgetary constraints and capacity restrictions. We conduct computational experiments to illustrate how the proposed model works on a realistic problem. Results show the effects of pre- and post-disaster relief funding on the relief system's performance, specifically on response time and the proportion of demand satisfied. Finally, we discuss the managerial implications of the proposed model.", "which Uncertainty on the first-stage ?", "Demand", 845.0, 851.0], ["This article introduces a risk-averse stochastic modeling approach for a pre-disaster relief network design problem under uncertain demand and transportation capacities. The sizes and locations of the response facilities and the inventory levels of relief supplies at each facility are determined while guaranteeing a certain level of network reliability. A probabilistic constraint on the existence of a feasible flow is introduced to ensure that the demand for relief supplies across the network is satisfied with a specified high probability. Responsiveness is also accounted for by defining multiple regions in the network and introducing local probabilistic constraints on satisfying demand within each region. These local constraints ensure that each region is self-sufficient in terms of providing for its own needs with a large probability. In particular, the Gale\u2013Hoffman inequalities are used to represent the conditions on the existence of a feasible network flow. The solution method rests on two pillars. A preprocessing algorithm is used to eliminate redundant Gale\u2013Hoffman inequalities and then proposed models are formulated as computationally efficient mixed-integer linear programs by utilizing a method based on combinatorial patterns. Computational results for a case study and randomly generated problem instances demonstrate the effectiveness of the models and the solution method.", "which Uncertainty on the first-stage ?", "Demand", 132.0, 138.0], ["This paper proposes a new two-stage optimization method for the emergency supplies allocation problem with multisupplier, multiaffected area, multirelief, and multivehicle. The triplet of supply, demand, and the availability of path is unknown prior to the extraordinary event and is described by fuzzy random variables. Considering the fairness, timeliness, and economical efficiency, a multiobjective expected value model is built for facility location, vehicle routing, and supply allocation decisions. The goals of the proposed model aim to minimize the proportion of unsatisfied demand and response time of emergency reliefs and the total cost of the whole process. When the demand and the availability of path are discrete, the expected values in the objective functions are converted into their equivalent forms.
When the supply amount is continuous, the equilibrium chance in the constraint is transformed to its equivalent one. To overcome the computational difficulty caused by multiple objectives, a goal programming model is formulated to obtain a compromise solution. Finally, an example is presented to illustrate the validity of the proposed model and the effectiveness of the solution method.", "which Uncertainty on the first-stage ?", "Demand", 200.0, 206.0], ["A comprehensive review of online, official, and scientific literature was carried out in 2012-13 to develop a framework of disaster social media. This framework can be used to facilitate the creation of disaster social media tools, the formulation of disaster social media implementation processes, and the scientific study of disaster social media effects. Disaster social media users in the framework include communities, government, individuals, organisations, and media outlets. Fifteen distinct disaster social media uses were identified, ranging from preparing and receiving disaster preparedness information and warnings and signalling and detecting disasters prior to an event to (re)connecting community members following a disaster. The framework illustrates that a variety of entities may utilise and produce disaster social media content. Consequently, disaster social media use can be conceptualised as occurring at a number of levels, even within the same disaster. Suggestions are provided on how the proposed framework can inform future disaster social media development and research.", "which paper:publised_in ?", "Disasters", 657.0, 666.0], ["This research questions how social media use affords new forms of organizing and collective engagement. The concept of connective action has been introduced to characterize such new forms of collective engagement in which actors coproduce and circulate content based upon an issue of mutual interest. Yet, how the use of social media actually affords connective action still needed to be investigated. Mixed methods analyses of microblogging use during the Gulf of Mexico oil spill bring insights onto this question and reveal in particular how multiple actors enacted emerging and interdependent roles with their distinct patterns of feature use. The findings allow us to elaborate upon the concept of connective affordances as collective level affordances actualized by actors in team interdependent roles. Connective affordances extend research on affordances as a relational concept by considering not only the relationships between technology and users but also the interdependence type among users and the effects of this interdependence onto what users can do with the technology. This study contributes to research on social media use by paying close attention to how distinct patterns of feature use enact emerging roles. Adding to IS scholarship on the collective use of technology, it considers how the patterns of feature use for emerging groups of actors are intricately and mutually related to each other.", "which Emergency Type ?", "oil spill", 472.0, 481.0], ["This study employs the perspective of organizational resilience to examine how information and communication technologies (ICTs) were used by organizations to aid in their recovery after Hurricane Katrina. In-depth interviews enabled longitudinal analysis of ICT use. Results showed that organizations enacted a variety of resilient behaviors through adaptive ICT use, including information sharing, (re)connection, and resource acquisition. 
Findings emphasize the transition of ICT use across different stages of recovery, including an anticipated stage. Key findings advance organizational resilience theory with an additional source of resilience, external availability. Implications and contributions to the literature of ICTs in disaster contexts and organizational resilience are discussed.", "which Emergency Type ?", "hurricane Katrina", 187.0, 204.0], ["In recent times, social media has been increasingly playing a critical role in response actions following natural catastrophes. From facilitating the recruitment of volunteers during an earthquake to supporting emotional recovery after a hurricane, social media has demonstrated its power in serving as an effective disaster response platform. Based on a case study of Thailand flooding in 2011 \u2013 one of the worst flooding disasters in more than 50 years that left the country severely impaired \u2013 this paper provides an in\u2010depth understanding on the emergent roles of social media in disaster response. Employing the perspective of boundary object, we shed light on how different boundary spanning competences of social media emerged in practice to facilitate cross\u2010boundary response actions during a disaster, with an aim to promote further research in this area. We conclude this paper with guidelines for response agencies and impacted communities to deploy social media for future disaster response.", "which Emergency Type ?", "flooding", 378.0, 386.0], ["Research problem: This study investigates the factors influencing students' intentions to use emergency notification services to receive news about campus emergencies through short-message systems (SMS) and social network sites (SNS). Research questions: (1) What are the critical factors that influence students' intention to use SMS to receive emergency notifications? (2) What are the critical factors that influence students' intention to use SNS to receive emergency notifications? Literature review: By adapting Media Richness theory and prior research on emergency notifications, we propose that perceived media richness, perceived trust in information, perceived risk, perceived benefit, and perceived social influence impact the intention to use SMS and SNS to receive emergency notifications. Methodology: We conducted a quantitative, survey-based study that tested our model in five different scenarios, using logistic regression to test the research hypotheses with 574 students of a large research university in the northeastern US. Results and discussion: Results suggest that students' intention to use SNS is impacted by media richness, perceived benefit, and social influence, while students' intention to use SMS is influenced by trust and perceived benefit. Implications to emergency managers suggest how to more effectively manage and market the service through both channels. The results also suggest using SNS as an additional means of providing emergency notifications at academic institutions.", "which Emergency Type ?", "Campus emergencies", 148.0, 166.0], ["The notion of communities getting together during a disaster to help each other is common. However, how does this communal activity happen within the online world? Here we examine this issue using the Communities of Practice (CoP) approach. We extend CoP to multiple CoP (MCoPs) and examine the role of social media applications in disaster management, extending work done by Ahmed (2011). 
Secondary data in the form of newspaper reports during 2010 to 2011 were analysed to understand how social media, particularly Facebook and Twitter, facilitated the process of communication among various communities during the Queensland floods in 2010. The results of media-content analysis along with the findings of relevant literature were used to extend our existing understanding of various communities of practice involved in disaster management, their communication tasks and the role of Twitter and Facebook as common conducive platforms of communication during disaster management alongside traditional communication channels.", "which Emergency Type ?", "Queensland flood", NaN, NaN], ["In this paper, we examine the emerging use of ICT in social phenomena such as natural disasters. Researchers have acknowledged that a community possesses the capacity to manage the challenges in crisis response on its own. However, extant IS studies focus predominantly on IS use from the crisis response agency\u2019s perspective, which undermines communities\u2019 role. By adopting an empowerment perspective, we focus on understanding how social media empowers communities during crisis response. As such, we present a qualitative case study of the 2011 Thailand flooding. Using an interpretive approach, we show how social media can empower the community from three dimensions of empowerment process (structural, psychological, and resource empowerment) to achieve collective participation, shared identification, and collaborative control in the community. We make two contributions: 1) we explore an emerging social consequence of ICT by illustrating the roles of social media in empowering communities when responding to crises, and 2) we address the literature gap in empowerment by elucidating the actualization process of empowerment that social media as a mediating structure enables.", "which Emergency Type ?", "Flood", NaN, NaN], ["Two weeks after the Great Tohoku earthquake followed by the devastating tsunami, we sent open-ended questionnaires to a randomly selected sample of Twitter users and also analysed the tweets sent from the disaster-hit areas. We found that people in directly affected areas tend to tweet about their unsafe and uncertain situation while people in remote areas post messages to let their followers know that they are safe. Our analysis of the open-ended answers has revealed that unreliable retweets (RTs) on Twitter were the biggest problem the users faced during the disaster. Some of the solutions offered by the respondents included introducing official hash tags, limiting the number of RTs for each hash tag and adding features that allow users to trace information by maintaining anonymity.
A three-stage model based on a chronological sequence was employed in structuring the proposed design principles.", "which Emergency Type ?", "earthquake", 38.0, 48.0], ["Introduction As a way to improve student academic performance, educators have begun paying special attention to computer games (Gee, 2005; Oblinger, 2006). Reflecting the interests of the educators, studies have been conducted to explore the effects of computer games on student achievement. However, there has been no consensus on the effects of computer games: Some studies support computer games as educational resources to promote students' learning (Annetta, Mangrum, Holmes, Collazo, & Cheng, 2009; Vogel et al., 2006). Other studies have found no significant effects on the students' performance in school, especially in math achievement of elementary school students (Ke, 2008). Researchers have also been interested in the differential effects of computer games between gender groups. While several studies have reported various gender differences in the preferences of computer games (Agosto, 2004; Kinzie & Joseph, 2008), a few studies have indicated no significant differential effect of computer games between genders and asserted generic benefits for both genders (Vogel et al., 2006). To date, the studies examining computer games and gender interaction are far from conclusive. Moreover, there is a lack of empirical studies examining the differential effects of computer games on the academic performance of diverse learners. These learners included linguistic minority students who speak languages other than English. Recent trends in the K-12 population feature the increasing enrollment of linguistic minority students, whose population reached almost four million (NCES, 2004). These students have been a grave concern for American educators because of their reported low performance. In response, this study empirically examined the effects of math computer games on the math performance of 4th-graders with focused attention on differential effects for gender and linguistic groups. To achieve greater generalizability of the study findings, the study utilized a US nationally representative database--the 2005 National Assessment of Educational Progress (NAEP). The following research questions guided the current study: 1. Are computer games in math classes associated with the 4th-grade students' math performance? 2. How does the relationship differ by linguistic group? 3. How does the association vary by gender? 4. Is there an interaction effect of computer games on linguistic and gender groups? In other words, how does the effect of computer games on linguistic groups vary by gender group? Literature review Academic performance and computer games According to DeBell and Chapman (2004), of 58,273,000 students of nursery and K-12 school age in the USA, 56% of students played computer games. Along with the popularity among students, computer games have received a lot of attention from educators as a potential way to provide learners with effective and fun learning environments (Oblinger, 2006). Gee (2005) agreed that a game would turn out to be good for learning when the game is built to incorporate learning principles. Some researchers have also supported the potential of games for affective domains of learning and fostering a positive attitude towards learning (Ke, 2008; Ke & Grabowski, 2007; Vogel et al., 2006). For example, based on the study conducted on 1,274 1st- and 2nd-graders, Rosas et al.
(2003) found a positive effect of educational games on the motivation of students. Although there is overall support for the idea that games have a positive effect on affective aspects of learning, there have been mixed research results regarding the role of games in promoting cognitive gains and academic achievement. In the meta-analysis, Vogel et al. (2006) examined 32 empirical studies and concluded that the inclusion of games for students' learning resulted in significantly higher cognitive gains compared with traditional teaching methods without games. \u2026", "which Educational context ?", "Elementary", 648.0, 658.0], ["Despite their successful use in many conscientious studies involving outdoor learning applications, mobile learning systems still have certain limitations. For instance, because students cannot obtain real-time, context-aware content in outdoor locations such as historical sites, endangered animal habitats, and geological landscapes, they are unable to search, collect, share, and edit information by using information technology. To address such concerns, this work proposes an environment of ubiquitous learning with educational resources (EULER) based on radio frequency identification (RFID), augmented reality (AR), the Internet, ubiquitous computing, embedded systems, and database technologies. EULER helps teachers deliver lessons on site and cultivate student competency in adopting information technology to improve learning. To evaluate its effectiveness, we used the proposed EULER for natural science learning at the Guandu Nature Park in Taiwan. The participants were elementary school teachers and students. The analytical results revealed that the proposed EULER improves student learning. Moreover, the largely positive feedback from a post-study survey confirms the effectiveness of EULER in supporting outdoor learning and its ability to attract the interest of students.", "which Educational context ?", "Elementary", 983.0, 993.0], ["Objective \u2013 Evidence from systematic reviews a decade ago suggested that face-to-face and online methods to provide information literacy training in universities were equally effective in terms of skills learnt, but there was a lack of robust comparative research. The objectives of this review were (1) to update these findings with the inclusion of more recent primary research; (2) to further enhance the summary of existing evidence by including studies of blended formats (with components of both online and face-to-face teaching) compared to single format education; and (3) to explore student views on the various formats employed. Methods \u2013 Authors searched seven databases along with a range of supplementary search methods to identify comparative research studies, dated January 1995 to October 2016, exploring skill outcomes for students enrolled in higher education programs. There were 33 studies included, of which 19 also contained comparative data on student views. Where feasible, meta-analyses were carried out to provide summary estimates of skills development and a thematic analysis was completed to identify student views across the different formats. Results \u2013 A large majority of studies (27 of 33; 82%) found no statistically significant difference between formats in skills outcomes for students. Of 13 studies that could be included in a meta-analysis, the standardized mean difference (SMD) between skill test results for face-to-face versus online formats was -0.01 (95% confidence interval -0.28 to 0.26).
Of ten studies comparing blended to single delivery format, seven (70%) found no statistically significant difference between formats, and the remaining studies had mixed outcomes. From the limited evidence available across all studies, there is a potential dichotomy between outcomes measured via skill test and assignment (course work) which is worthy of further investigation. The thematic analysis of student views found no preference in relation to format on a range of measures in 14 of 19 studies (74%). The remainder identified that students perceived advantages and disadvantages for each format but had no overall preference. Conclusions \u2013 There is compelling evidence that information literacy training is effective and well received across a range of delivery formats. Further research looking at blended versus single format methods, and the time implications for each, as well as comparing assignment to skill test outcomes would be valuable. Future studies should adopt a methodologically robust design (such as the randomized controlled trial) with a large student population and validated outcome measures.", "which Educational context ?", "Higher Education", 861.0, 877.0], ["We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone. Test results show that the method performs almost without error, except on the most extreme lighting directions.", "which Variations ?", "pose", 193.0, 197.0], ["We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new \u201cin the wild\u201d annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).", "which Variations ?", "pose", 47.0, 51.0], ["Human faces captured in real-world conditions present large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food). 
Current face landmark estimation approaches struggle under such conditions since they fail to provide a principled way of handling outliers. We propose a novel method, called Robust Cascaded Pose Regression (RCPR) which reduces exposure to outliers by detecting occlusions explicitly and using robust shape-indexed features. We show that RCPR improves on previous landmark estimation methods on three popular face datasets (LFPW, LFW and HELEN). We further explore RCPR's performance by introducing a novel face dataset focused on occlusion, composed of 1,007 faces presenting a wide range of occlusion patterns. RCPR reduces failure cases by half on all four datasets, at the same time as it detects face occlusions with a 80/40% precision/recall.", "which Variations ?", "occlusion", 756.0, 765.0], ["Developing powerful deformable face models requires massive, annotated face databases on which techniques can be trained, validated and tested. Manual annotation of each facial image in terms of landmarks requires a trained expert and the workload is usually enormous. Fatigue is one of the reasons that in some cases annotations are inaccurate. This is why, the majority of existing facial databases provide annotations for a relatively small subset of the training images. Furthermore, there is hardly any correspondence between the annotated land-marks across different databases. These problems make cross-database experiments almost infeasible. To overcome these difficulties, we propose a semi-automatic annotation methodology for annotating massive face datasets. This is the first attempt to create a tool suitable for annotating massive facial databases. We employed our tool for creating annotations for MultiPIE, XM2VTS, AR, and FRGC Ver. 2 databases. The annotations will be made publicly available from http://ibug.doc.ic.ac.uk/ resources/facial-point-annotations/. Finally, we present experiments which verify the accuracy of produced annotations.", "which Variations ?", "pose", NaN, NaN], ["Face information processing relies on the quality of data resource. From the data modality point of view, a face database can be 2D or 3D, and static or dynamic. From the task point of view, the data can be used for research of computer based automatic face recognition, face expression recognition, face detection, or cognitive and psychological investigation. With the advancement of 3D imaging technologies, 3D dynamic facial sequences (called 4D data) have been used for face information analysis. In this paper, we focus on the modality of 3D dynamic data for the task of facial expression recognition. We present a newly created high-resolution 3D dynamic facial expression database, which is made available to the scientific research community. The database contains 606 3D facial expression sequences captured from 101 subjects of various ethnic backgrounds. The database has been validated through our facial expression recognition experiment using an HMM based 3D spatio-temporal facial descriptor. It is expected that such a database shall be used to facilitate the facial expression analysis from a static 3D space to a dynamic 3D space, with a goal of scrutinizing facial behavior at a higher level of detail in a real 3D spatio-temporal domain.", "which Variations ?", "expression", 276.0, 286.0], ["Within the past decade, significant effort has occurred in developing methods of facial expression analysis. 
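RCPR, described in the entry above, is a cascaded shape regression: each stage regresses a shape increment from features indexed by the current shape estimate. The toy sketch below shows only that generic update rule on synthetic data; RCPR's actual shape-indexed features, occlusion detection, and boosted stage regressors are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: shapes are flattened 2D landmark vectors; "features" stand in
# for appearance descriptors extracted from the image.
n_samples, n_landmarks = 200, 5
true_shapes = rng.normal(size=(n_samples, 2 * n_landmarks))
features = true_shapes @ rng.normal(size=(2 * n_landmarks, 30)) \
           + 0.1 * rng.normal(size=(n_samples, 30))

shapes = np.zeros_like(true_shapes)   # initial estimate (mean shape = 0 here)
stages = []
for _ in range(10):                   # cascade of T regression stages
    residual = true_shapes - shapes   # target: remaining shape increment
    # Least-squares linear stage regressor. In real RCPR the features are
    # re-extracted relative to the current shape at every stage, so each
    # stage sees different inputs; this toy keeps them fixed for brevity.
    W, *_ = np.linalg.lstsq(features, residual, rcond=None)
    shapes = shapes + features @ W    # apply the stage update
    stages.append(W)

print("mean landmark error:", np.abs(true_shapes - shapes).mean())
```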
Because most investigators have used relatively limited data sets, the generalizability of these various methods remains unknown. We describe the problem space for facial expression analysis, which includes level of description, transitions among expressions, eliciting conditions, reliability and validity of training and test data, individual differences in subjects, head orientation and scene complexity image characteristics, and relation to non-verbal behavior. We then present the CMU-Pittsburgh AU-Coded Face Expression Image Database, which currently includes 2105 digitized image sequences from 182 adult subjects of varying ethnicity, performing multiple tokens of most primary FACS action units. This database is the most comprehensive testbed to date for comparative studies of facial expression analysis.", "which Variations ?", "expression", 88.0, 98.0], ["We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone. Test results show that the method performs almost without error, except on the most extreme lighting directions.", "which Variations ?", "illumination", 222.0, 234.0], ["We present a novel approach to localizing parts in images of human faces. The approach combines the output of local detectors with a nonparametric set of global models for the part locations based on over 1,000 hand-labeled exemplar images. By assuming that the global models generate the part locations as hidden variables, we derive a Bayesian objective function. This function is optimized using a consensus of models for these hidden variables. The resulting localizer handles a much wider range of expression, pose, lighting, and occlusion than prior ones. We show excellent performance on real-world face datasets such as Labeled Faces in the Wild (LFW) and a new Labeled Face Parts in the Wild (LFPW) and show that our localizer achieves state-of-the-art performance on the less challenging BioID dataset.", "which Variations ?", "expression", 503.0, 513.0], ["Human faces captured in real-world conditions present large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food). Current face landmark estimation approaches struggle under such conditions since they fail to provide a principled way of handling outliers. We propose a novel method, called Robust Cascaded Pose Regression (RCPR) which reduces exposure to outliers by detecting occlusions explicitly and using robust shape-indexed features. We show that RCPR improves on previous landmark estimation methods on three popular face datasets (LFPW, LFW and HELEN). 
We further explore RCPR's performance by introducing a novel face dataset focused on occlusion, composed of 1,007 faces presenting a wide range of occlusion patterns. RCPR reduces failure cases by half on all four datasets, at the same time as it detects face occlusions with a 80/40% precision/recall.", "which Variations ?", "pose", 117.0, 121.0], ["Face alignment is a crucial step in face recognition tasks. Especially, using landmark localization for geometric face normalization has shown to be very effective, clearly improving the recognition results. However, no adequate databases exist that provide a sufficient number of annotated facial landmarks. The databases are either limited to frontal views, provide only a small number of annotated images or have been acquired under controlled conditions. Hence, we introduce a novel database overcoming these limitations: Annotated Facial Landmarks in the Wild (AFLW). AFLW provides a large-scale collection of images gathered from Flickr, exhibiting a large variety in face appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total 25,993 faces in 21,997 real-world images are annotated with up to 21 landmarks per image. Due to the comprehensive set of annotations AFLW is well suited to train and test algorithms for multi-view face detection, facial landmark localization and face pose estimation. Further, we offer a rich set of tools that ease the integration of other face databases and associated annotations into our joint framework.", "which Variations ?", "pose", 697.0, 701.0], ["Detection and tracking of faces in image sequences is among the most well studied problems in the intersection of statistical machine learning and computer vision. Often, tracking and detection methodologies use a rigid representation to describe the facial region 1, hence they can neither capture nor exploit the non-rigid facial deformations, which are crucial for countless of applications (e.g., facial expression analysis, facial motion capture, high-performance face recognition etc.). Usually, the non-rigid deformations are captured by locating and tracking the position of a set of fiducial facial landmarks (e.g., eyes, nose, mouth etc.). Recently, we witnessed a burst of research in automatic facial landmark localisation in static imagery. This is partly attributed to the availability of large amount of annotated data, many of which have been provided by the first facial landmark localisation challenge (also known as 300-W challenge). Even though now well established benchmarks exist for facial landmark localisation in static imagery, to the best of our knowledge, there is no established benchmark for assessing the performance of facial landmark tracking methodologies, containing an adequate number of annotated face videos. In conjunction with ICCV'2015 we run the first competition/challenge on facial landmark tracking in long-term videos. In this paper, we present the first benchmark for long-term facial landmark tracking, containing currently over 110 annotated videos, and we summarise the results of the competition.", "which Variations ?", "expression", 408.0, 418.0], ["In this paper, we document a study of design patterns in commercial, proprietary software and determine whether design pattern participants (i.e. the constituent classes of a pattern) had a greater propensity for faults than non-participants. 
We studied a commercial software system for a 24 month period and identified design pattern participants by inspecting the design documentation and source code; we also extracted fault data for the same period to determine whether those participant classes were more fault-prone than non-participant classes. Results showed that design pattern participant classes were marginally more fault-prone than non-participant classes, The Adaptor, Method and Singleton patterns were found to be the most fault-prone of thirteen patterns explored. However, the primary reason for this fault-proneness was the propensity of design classes to be changed more often than non-design pattern classes.", "which Quality attribute ?", "Faults", 213.0, 219.0], ["Design patterns are widely used within the software engineer community. Researchers claim that design patterns improve software quality. In this paper, we describe two experiments, using graduate student participants, to study whether design patterns improve the software quality, specifically maintainability and understandability. We replicated a controlled experiment to compare the maintainability of two implementations of an application, one using a design pattern and the other using a simpler alternative. The maintenance tasks in this replication experiment required the participants to answer questions about a Java program and then modify that program. Prior to the replication, we performed a preliminary exercise to investigate whether design patterns improve the understandability of software designs. We gave the participants the graphical design of the systems that would be used in the replication study. The participant received either the version of the design containing the design pattern or the version containing the simpler alternative. We asked the participants a series of questions to see how well they understood the given design. The results of two experiments revealed that the design patterns did not improve either the maintainability or the understandability of the software. We found that there was no significant correlation between the maintainability and the understandability of the software even though the participants had received the design of the systems before they performed the maintenance tasks.", "which Quality attribute ?", "Maintainability", 294.0, 309.0], ["In the software engineering literature, many works claim that the use of design patterns improves the comprehensibility of programs and, more generally, their maintainability. Yet, little work attempted to study the impact of design patterns on the developers' tasks of program comprehension and modification. We design and perform an experiment to collect data on the impact of the Visitor pattern on comprehension and modification tasks with class diagrams. We use an eye-tracker to register saccades and fixations, the latter representing the focus of the developers' attention. Collected data show that the Visitor pattern plays a role in maintenance tasks: class diagrams with its canonical representation requires less efforts from developers.", "which Quality attribute ?", "Maintainability", 159.0, 174.0], ["This paper presents an empirical study of the impact of State Design Pattern Implementation on the memory and execution time of popular fixed-point DSP processor from Texas Instruments; TMS320VC5416. Actually, the object-oriented approach is known to introduce a significant performance penalty compared to classical procedural programming [1]. 
One can find the studies of the object-oriented penalty on the system performance, in terms of execution time and memory overheads, in the literature. However, to the author's best knowledge, the study of the overheads of Design Patterns (DP) in embedded system programming has not been widely published. The main contribution of the paper is to bring further evidence that embedded system software developers have to consider the memory and the execution time overheads of DPs in their implementations. The results of the experiment show that implementation in C++ with DP increases the memory usage and the execution time, but meanwhile these overheads would not prevent embedded system software developers from using DPs.", "which Quality attribute ?", "Performance", 275.0, 286.0], ["A report on the implementation and evaluation of an intelligent learning system; the multimedia geography tutor and game software titled Lainos World SM was localized into English, French, Spanish, German, Portuguese, Russian and Simplified Chinese. Thereafter, multilingual online surveys were set up to which high school students were globally invited via mails to schools, targeted adverts and recruitment on Facebook, Google, etc. 1125 respondents from selected nations completed both the initial and final surveys. The effect of the software on students\u2019 geographical knowledge was analyzed through pre and post achievement test scores. In general, the mean scores were higher after exposure to the educational software for fifteen days and it was established that the score differences were statistically significant. This positive effect and other qualitative data show that the localized software from students\u2019 perspective is a widely acceptable and effective educational tool for learning geography in an interactive and gaming environment.", "which Topic ?", "Geography", 96.0, 105.0], ["Using mobile games in education combines situated and active learning with fun in a potentially excellent manner. The effects of a mobile city game called Frequency 1550, which was developed by The Waag Society to help pupils in their first year of secondary education playfully acquire historical knowledge of medieval Amsterdam, were investigated in terms of pupil engagement in the game, historical knowledge, and motivation for History in general and the topic of the Middle Ages in particular. A quasi-experimental design was used with 458 pupils from 20 classes from five schools. The pupils in 10 of the classes played the mobile history game whereas the pupils in the other 10 classes received a regular, project-based lesson series. The results showed those pupils who played the game to be engaged and to gain significantly more knowledge about medieval Amsterdam than those pupils who received regular project-based instruction. No significant differences were found between the two groups with respect to motivation for History or the Middle Ages. The impact of location-based technology and game-based learning on pupil knowledge and motivation are discussed along with suggestions for future research.", "which Topic ?", "History", 432.0, 439.0], ["This paper reports on a pilot study that compared the use of commercial off-the-shelf (COTS) handheld game consoles (HGCs) with traditional teaching methods to develop the automaticity of mathematical calculations and self-concept towards mathematics for year 4 students in two metropolitan schools. 
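For readers unfamiliar with the pattern whose cost the DSP study above measures, here is a minimal Python rendering of the State Design Pattern (the study itself used C++ on a TMS320VC5416; this sketch only illustrates the structure whose dispatch and object overhead is being compared against procedural code):

```python
from abc import ABC, abstractmethod

class State(ABC):
    @abstractmethod
    def handle(self, ctx: "Context") -> str: ...

class Idle(State):
    def handle(self, ctx):
        ctx.state = Running()   # transition encapsulated in the state object
        return "starting"

class Running(State):
    def handle(self, ctx):
        ctx.state = Idle()
        return "stopping"

class Context:
    def __init__(self):
        self.state: State = Idle()

    def request(self) -> str:
        # Behaviour is delegated to the current state object; this extra
        # indirection (virtual dispatch plus object churn) is the memory
        # and execution-time overhead the study quantifies.
        return self.state.handle(self)

ctx = Context()
print(ctx.request(), ctx.request())  # starting stopping
```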
One class conducted daily sessions using the HGCs and the Dr Kawashima\u2019s Brain Training software to enhance their mental maths skills while the comparison class engaged in mental maths lessons using more traditional classroom approaches. Students were assessed using standardised tests at the beginning and completion of the term and findings indicated that students who undertook the Brain Training pilot study using the HGCs showed significant improvement in both the speed and accuracy of their mathematical calculations and self-concept compared to students in the control school. An exploration of the intervention, discussion of methodology and the implications of the use of HGCs in the primary classroom are presented.", "which Topic ?", "Mathematics", 239.0, 250.0], ["Digital game-based learning is a research field within the context of technology-enhanced learning that has attracted significant research interest. Commercial off-the-shelf digital games have the potential to provide concrete learning experiences and allow for drawing links between abstract concepts and real-world situations. The aim of this paper is to provide evidence for the effect of a general-purpose commercial digital game (namely, the \u201cSims 2-Open for Business\u201d) on the achievement of standard curriculum Mathematics educational objectives as well as general educational objectives as defined by standard taxonomies. Furthermore, students\u2019 opinions about their participation in the proposed game-supported educational scenario and potential changes in their attitudes toward math teaching and learning in junior high school are investigated. The results of the conducted research showed that: (i) students engaged in the game-supported educational activities achieved the same results as those who did not, with regard to the subject matter educational objectives, (ii) digital game-supported educational activities resulted in better achievement of the general educational objectives, and (iii) no significant differences were observed with regard to students\u2019 attitudes towards math teaching and learning.", "which Topic ?", "Mathematics", 517.0, 528.0], ["The aim of this study was to evaluate an occupational therapy nutrition education programme for children who are obese with the use of two interactive games. A quasi-experimental study was carried out at a municipal school in Fortaleza, Brazil. A convenience sample of 200 children ages 8-10 years old participated in the study. Data collection comprised a semi-structured interview, direct and structured observation, and focus group, comparing two interactive games based on the food pyramid (video game and board game) used individually and then combined. Both play activities were efficient in the mediation of nutritional concepts, with a preference for the board game. In the learning strategies, intrinsic motivation and metacognition were analysed. The attention strategy was most applied at the video game. We concluded that both games promoted the learning of nutritional concepts. We confirmed the effectiveness of the simultaneous application of interactive games in an interdisciplinary health environment. It is recommended that a larger sample should be used in evaluating the effectiveness of play and video games in teaching healthy nutrition to children in a school setting.", "which Topic ?", "Nutrition", 62.0, 71.0], ["Orthopedic surgery treats the musculoskeletal system, in which bleeding is common and can be fatal. 
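Several of the game studies above compare standardized test scores taken before and after an intervention within the same pupils. A sketch of that comparison as a paired t-test; the scores below are invented for illustration:

```python
from scipy import stats

# Hypothetical pre/post mental-maths scores for one class (same pupils twice).
pre  = [41, 38, 45, 50, 36, 42, 39, 47, 44, 40]
post = [46, 41, 49, 55, 40, 47, 41, 52, 49, 45]

t, p = stats.ttest_rel(post, pre)  # paired test: each pupil is their own control
print(f"t = {t:.2f}, p = {p:.4f}")
```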
To help train future surgeons in this complex practice, researchers designed and implemented a serious game for learning orthopedic surgery. The game focuses on teaching trainees blood management skills, which are critical for safe operations. Using state-of-the-art graphics technologies, the game provides an interactive and realistic virtual environment. It also integrates game elements, including task-oriented and time-attack scenarios, bonuses, game levels, and performance evaluation tools. To study the system's effect, the researchers conducted experiments on player completion time and off-target contacts to test their learning of psychomotor skills in blood management.", "which Topic ?", "Surgery", 11.0, 18.0], ["OBJECTIVE. Suboptimal adherence to self-administered medications is a common problem. The purpose of this study was to determine the effectiveness of a video-game intervention for improving adherence and other behavioral outcomes for adolescents and young adults with malignancies including acute leukemia, lymphoma, and soft-tissue sarcoma. METHODS. A randomized trial with baseline and 1- and 3-month assessments was conducted from 2004 to 2005 at 34 medical centers in the United States, Canada, and Australia. A total of 375 male and female patients who were 13 to 29 years old, had an initial or relapse diagnosis of a malignancy, and currently undergoing treatment and expected to continue treatment for at least 4 months from baseline assessment were randomly assigned to the intervention or control group. The intervention was a video game that addressed issues of cancer treatment and care for teenagers and young adults. Outcome measures included adherence, self-efficacy, knowledge, control, stress, and quality of life. For patients who were prescribed prophylactic antibiotics, adherence to trimethoprim-sulfamethoxazole was tracked by electronic pill-monitoring devices (n = 200). Adherence to 6-mercaptopurine was assessed through serum metabolite assays (n = 54). RESULTS. Adherence to trimethoprim-sulfamethoxazole and 6-mercaptopurine was greater in the intervention group. Self-efficacy and knowledge also increased in the intervention group compared with the control group. The intervention did not affect self-report measures of adherence, stress, control, or quality of life. CONCLUSIONS. The video-game intervention significantly improved treatment adherence and indicators of cancer-related self-efficacy and knowledge in adolescents and young adults who were undergoing cancer therapy. The findings support current efforts to develop effective video-game interventions for education and training in health care.", "which Topic ?", "Cancer treatment", 873.0, 889.0], ["Today, the high cost of supercomputers in the one hand and the need for large-scale computational resources on the other hand, has led to use network of computational resources known as Grid. Numerous research groups in universities, research labs, and industries around the world are now working on a type of Grid called Computational Grids that enable aggregation of distributed resources for solving large-scale data intensive problems in science, engineering, and commerce. Several institutions and universities have started research and teaching programs on Grid computing as part of their parallel and distributed computing curriculum. To better use tremendous capabilities of this distributed system, effective and efficient scheduling algorithms are needed. 
In this paper, we introduce a new scheduling algorithm based on two conventional scheduling algorithms, Min-Min and Max-Min, to use their pros and at the same time, cover their cons. It selects between the two algorithms based on standard deviation of the expected completion time of tasks on resources. We evaluate our scheduling heuristic, the Selective algorithm, within a grid simulator called GridSim. We also compared our approach to its two basic heuristics. The experimental results show that the new heuristic can lead to significant performance gain for a variety of scenarios.", "which Tools used for simulation ?", "Gridsim", 1164.0, 1171.0], ["Cloud computing is emerging as a new paradigm of large-scale distributed computing. In order to utilize the power of cloud computing completely, we need an efficient task scheduling algorithm. The traditional Min-Min algorithm is a simple, efficient algorithm that produces a better schedule that minimizes the total completion time of tasks than other algorithms in the literature [7]. However the biggest drawback of it is load imbalanced, which is one of the central issues for cloud providers. In this paper, an improved load balanced algorithm is introduced on the ground of Min-Min algorithm in order to reduce the makespan and increase the resource utilization (LBIMM). At the same time, Cloud providers offer computer resources to users on a pay-per-use base. In order to accommodate the demands of different users, they may offer different levels of quality for services. Then the cost per resource unit depends on the services selected by the user. In return, the user receives guarantees regarding the provided resources. To observe the promised guarantees, user-priority was considered in our proposed PA-LBIMM so that user's demand could be satisfied more completely. Finally, the introduced algorithm is simulated using Matlab toolbox. The simulation results show that the improved algorithm can lead to significant performance gain and achieve over 20% improvement on both VIP user satisfaction and resource utilization ratio.", "which Tools used for simulation ?", "Matlab", 1234.0, 1240.0], ["Abstract The essential oil obtained by hydrodistillation from the aerial parts of Artemisia herba-alba Asso growing wild in M'sila-Algeria, was investigated using both capillary GC and GC/MS techniques. The oil yield was 1.02% based on dry weight. Sixty-eight components amounting to 94.7% of the oil were identified, 33 of them being reported for the first time in Algerian A. herba-alba oil and 21 of these components have not been previously reported in A. herba-alba oils. The oil contained camphor (19.4%), trans-pinocarveol (16.9%), chrysanthenone (15.8%) and \u03b2-thujone (15%) as major components. Monoterpenoids are the main components (86.1%), and the irregular monoterpenes fraction represented a 3.1% yield.", "which Isolation Procedure ?", "hydrodistillation", 39.0, 56.0], ["Abstract Isolation of the essential oil from Artemisia herba-alba collected in the North Sahara desert has been conducted by hydrodistillation (HD) and a microwave distillation process (MD). The chemical composition of the two oils was investigated by GC and GC/MS. In total, 94 constituents were identified. The main components were camphor (49.3 and 48.1% in HD and MD oils, respectively), 1,8-cineole (13.4\u201312.4%), borneol (7.3\u20137.1%), pinocarvone (5.6\u20135.5%), camphene (4.9\u20134.5%) and chrysanthenone (3.2\u20133.3%). 
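The Selective algorithm entry above chooses between Min-Min and Max-Min in each scheduling round using the standard deviation of the tasks' best expected completion times. A simplified sketch of that decision rule over an expected-time-to-compute matrix; the threshold, data, and tie-breaking details are illustrative assumptions, and the paper's exact rule and GridSim evaluation are not reproduced:

```python
import numpy as np

def schedule_selective(etc, threshold=1.0):
    """etc[i, j]: expected time to complete task i on resource j."""
    etc = np.asarray(etc, dtype=float)
    ready = np.zeros(etc.shape[1])           # per-resource ready times
    unscheduled = set(range(etc.shape[0]))
    plan = []
    while unscheduled:
        tasks = sorted(unscheduled)
        ct = etc[tasks] + ready              # completion time of each task on each resource
        best = ct.min(axis=1)                # each task's best achievable completion time
        # Heuristic switch: a large spread favours Max-Min (place the big
        # task first), a small spread favours Min-Min (clear the small tasks).
        k = best.argmax() if best.std() > threshold else best.argmin()
        task, res = tasks[k], ct[k].argmin()
        ready[res] = ct[k, res]
        plan.append((task, res))
        unscheduled.remove(task)
    return plan, ready.max()                 # assignment plan and makespan

plan, makespan = schedule_selective([[4, 6], [3, 5], [20, 9]])
print(plan, makespan)                        # task 2 is placed first (Max-Min round)
```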
In comparison with HD, MD allows one to obtain an oil in a very short time, with similar yields, comparable qualities and a substantial savings of energy.", "which Isolation Procedure ?", "hydrodistillation", 125.0, 142.0], ["Essential oil from Artemisia herba alba (Art) was hydrodistilled and tested as corrosion inhibitor of steel in 0.5 M H2SO4 using weight loss measurements and electrochemical polarization methods. Results gathered show that this natural oil reduced the corrosion rate by the cathodic action. Its inhibition efficiency attains the maximum (74%) at 1 g/L. The inhibition efficiency of Arm oil increases with the rise of temperature. The adsorption isotherm of natural product on the steel has been determined. A. herba alba essential oil was obtained by hydrodistillation and its chemical composition oil was investigated by capillary GC and GC/MS. The major components were chrysanthenone (30.6%) and camphor (24.4%).", "which Isolation Procedure ?", "hydrodistillation", 551.0, 568.0], ["Abstract Seedlings of Artemisia herba-alba Asso collected from Kirchaou area were transplanted in an experimental garden near the Institut des R\u00e9gions Arides of M\u00e9denine (Tunisia). During three years, the aerials parts were harvested (three levels of cutting, 25%, 50% and 75% of the plant), at full blossom and during the vegetative stage. The essential oil was isolated by hydrodistillation and its chemical composition was determined by GC(RI) and 13C-NMR. With respect to the quantity of vegetable material and the yield of hydrodistillation, it appears that the best results were obtained for plants cut at 50% of their height and during the full blossom. The chemical composition of the essential oil was dominated by \u03b2-thujone, \u03b1-thujone, 1,8-cineole, camphor and trans-sabinyl acetate, irrespective of the level of cutting and the period of harvest. It remains similar to that of plants growing wild in the same area.", "which Isolation Procedure ?", "hydrodistillation", 375.0, 392.0], ["Abstract Twenty-six oil samples were isolated by hydrodistillation from aerial parts of Artemisia herba-alba Asso growing wild in Tunisia (semi-arid land) and their chemical composition was determined by GC(RI), GC/MS and 13C-NMR. Various compositions were observed, dominated either by a single component (\u03b1-thujone, camphor, chrysanthenone or trans-sabinyl acetate) or characterized by the occurrence, at appreciable contents, of two or more of these compounds. These results confrmed the tremendous chemical variability of A. herba-alba.", "which Isolation Procedure ?", "hydrodistillation", 49.0, 66.0], ["The aim of the present study was to investigate the chemical composition, antioxidant, angiotensin Iconverting enzyme (ACE) inhibitory, antibacterial and antifungal activities of the essential oil of Artemisia herba alba Asso (Aha), a traditional medicinal plant widely growing in Tunisia. The essential oil from the air dried leaves and flowers of Aha were extracted by hydrodistillation and analyzed by GC and GC/MS. More than fifty compounds, out of which 48 were identified. The main chemical class of the oil was represented by oxygenated monoterpenes (50.53%). These were represented by 21 derivatives, among which the cis -chrysantenyl acetate (10.60%), the sabinyl acetate (9.13%) and the \u03b1-thujone (8.73%) were the principal compounds. Oxygenated sesquiterpenes, particularly arbusculones were identified in the essential oil at relatively high rates. 
The Aha essential oil was found to have an interesting antioxidant activity as evaluated by the 2,2-diphenyl-1-picrylhydrazyl and the \u03b2-carotene bleaching methods. The Aha essential oil also exhibited an inhibitory activity towards the ACE. The antimicrobial activities of Aha essential oil was evaluated against six bacterial strains and three fungal strains by the agar diffusion method and by determining the inhibition zone. The inhibition zones were in the range of 8-51 mm. The essential oil exhibited a strong growth inhibitory activity on all the studied fungi. Our findings demonstrated that Aha growing wild in South-Western of Tunisia seems to be a new chemotype and its essential oil might be a natural potential source for food preservation and for further investigation by developing new bioactive substances.", "which Isolation Procedure ?", "hydrodistillation", 371.0, 388.0], ["Essential oils and their components are becoming increasingly popular as naturally occurring antioxidant agents. In this work, the composition of essential oil in Artemisia herba-alba from southwest Tunisia, obtained by hydrodistillation was determined by GC/MS. Eighteen compounds were identified with the main constituents namely, \u03b1-thujone (24.88%), germacrene D (14.48%), camphor (10.81%), 1,8-cineole (8.91%) and \u03b2-thujone (8.32%). The oil was screened for its antioxidant activity with 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging, \u03b2-carotene bleaching and reducing power assays. The essential oil of A. herba-alba exhibited a good antioxidant activity with all assays with dose dependent manner and can be attributed to its presence in the oil. Key words: Artemisia herba alba, essential oil, chemical composition, antioxidant activity.", "which Isolation Procedure ?", "hydrodistillation", 220.0, 237.0], ["We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone. Test results show that the method performs almost without error, except on the most extreme lighting directions.", "which Video (v)/image (i) ?", "Image", 851.0, 856.0], ["Over the last couple of years, face recognition researchers have been developing new techniques. These developments are being fueled by advances in computer vision techniques, computer design, sensor design, and interest in fielding face recognition systems. Such advances hold the promise of reducing the error rate in face recognition systems by an order of magnitude over Face Recognition Vendor Test (FRVT) 2002 results. The face recognition grand challenge (FRGC) is designed to achieve this performance goal by presenting to researchers a six-experiment challenge problem along with data corpus of 50,000 images. 
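The generative appearance entry above approximates each person's illumination cone by a low-dimensional linear subspace and assigns a test image to the closest subspace. A bare-bones numpy version of that nearest-subspace rule; the low-rank synthetic image sets below merely stand in for images rendered under varying lighting:

```python
import numpy as np

rng = np.random.default_rng(1)

def person_images(n=20, rank=5):
    # Low-rank synthetic stand-in for one person's images under many lightings.
    basis = rng.normal(size=(rank, 16 * 16))
    return rng.normal(size=(n, rank)) @ basis          # (n, pixels)

def subspace_basis(images, dim=9):
    # Orthonormal basis for the dominant illumination variation (via SVD).
    U, _, _ = np.linalg.svd(images.T, full_matrices=False)
    return U[:, :dim]

def dist(x, B):
    return np.linalg.norm(x - B @ (B.T @ x))           # distance to subspace

gallery = {p: person_images() for p in ["A", "B"]}
bases = {p: subspace_basis(imgs) for p, imgs in gallery.items()}

test = gallery["A"][0] + 0.01 * rng.normal(size=16 * 16)
print(min(bases, key=lambda p: dist(test, bases[p])))  # prints A (nearest subspace)
```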
The data consists of 3D scans and high resolution still imagery taken under controlled and uncontrolled conditions. This paper describes the challenge problem, data corpus, and presents baseline performance and preliminary results on natural statistics of facial imagery.", "which Video (v)/image (i) ?", "Image", NaN, NaN], ["Face alignment is a crucial step in face recognition tasks. Especially, using landmark localization for geometric face normalization has shown to be very effective, clearly improving the recognition results. However, no adequate databases exist that provide a sufficient number of annotated facial landmarks. The databases are either limited to frontal views, provide only a small number of annotated images or have been acquired under controlled conditions. Hence, we introduce a novel database overcoming these limitations: Annotated Facial Landmarks in the Wild (AFLW). AFLW provides a large-scale collection of images gathered from Flickr, exhibiting a large variety in face appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total 25,993 faces in 21,997 real-world images are annotated with up to 21 landmarks per image. Due to the comprehensive set of annotations AFLW is well suited to train and test algorithms for multi-view face detection, facial landmark localization and face pose estimation. Further, we offer a rich set of tools that ease the integration of other face databases and associated annotations into our joint framework.", "which Video (v)/image (i) ?", "Image", 888.0, 893.0], ["We present a novel approach to localizing parts in images of human faces. The approach combines the output of local detectors with a nonparametric set of global models for the part locations based on over 1,000 hand-labeled exemplar images. By assuming that the global models generate the part locations as hidden variables, we derive a Bayesian objective function. This function is optimized using a consensus of models for these hidden variables. The resulting localizer handles a much wider range of expression, pose, lighting, and occlusion than prior ones. We show excellent performance on real-world face datasets such as Labeled Faces in the Wild (LFW) and a new Labeled Face Parts in the Wild (LFPW) and show that our localizer achieves state-of-the-art performance on the less challenging BioID dataset.", "which Video (v)/image (i) ?", "Image", NaN, NaN], ["We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new \u201cin the wild\u201d annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).", "which Video (v)/image (i) ?", "Image", NaN, NaN], ["Developing powerful deformable face models requires massive, annotated face databases on which techniques can be trained, validated and tested. 
Manual annotation of each facial image in terms of landmarks requires a trained expert and the workload is usually enormous. Fatigue is one of the reasons that in some cases annotations are inaccurate. This is why, the majority of existing facial databases provide annotations for a relatively small subset of the training images. Furthermore, there is hardly any correspondence between the annotated land-marks across different databases. These problems make cross-database experiments almost infeasible. To overcome these difficulties, we propose a semi-automatic annotation methodology for annotating massive face datasets. This is the first attempt to create a tool suitable for annotating massive facial databases. We employed our tool for creating annotations for MultiPIE, XM2VTS, AR, and FRGC Ver. 2 databases. The annotations will be made publicly available from http://ibug.doc.ic.ac.uk/ resources/facial-point-annotations/. Finally, we present experiments which verify the accuracy of produced annotations.", "which Video (v)/image (i) ?", "Image", 177.0, 182.0], ["Detection and tracking of faces in image sequences is among the most well studied problems in the intersection of statistical machine learning and computer vision. Often, tracking and detection methodologies use a rigid representation to describe the facial region 1, hence they can neither capture nor exploit the non-rigid facial deformations, which are crucial for countless of applications (e.g., facial expression analysis, facial motion capture, high-performance face recognition etc.). Usually, the non-rigid deformations are captured by locating and tracking the position of a set of fiducial facial landmarks (e.g., eyes, nose, mouth etc.). Recently, we witnessed a burst of research in automatic facial landmark localisation in static imagery. This is partly attributed to the availability of large amount of annotated data, many of which have been provided by the first facial landmark localisation challenge (also known as 300-W challenge). Even though now well established benchmarks exist for facial landmark localisation in static imagery, to the best of our knowledge, there is no established benchmark for assessing the performance of facial landmark tracking methodologies, containing an adequate number of annotated face videos. In conjunction with ICCV'2015 we run the first competition/challenge on facial landmark tracking in long-term videos. In this paper, we present the first benchmark for long-term facial landmark tracking, containing currently over 110 annotated videos, and we summarise the results of the competition.", "which Video (v)/image (i) ?", "Video", NaN, NaN], ["Purpose \u2013 To explore the current literature base of critical success factors (CSFs) of ERP implementations, prepare a compilation, and identify any gaps that might exist.Design/methodology/approach \u2013 Hundreds of journals were searched using key terms identified in a preliminary literature review. Successive rounds of article abstract reviews resulted in 45 articles being selected for the compilation. CSF constructs were then identified using content analysis methodology and an inductive coding technique. A subsequent critical analysis identified gaps in the literature base.Findings \u2013 The most significant finding is the lack of research that has focused on the identification of CSFs from the perspectives of key stakeholders. 
Additionally, there appears to be much variance with respect to what exactly is encompassed by change management, one of the most widely cited CSFs, and little detail of specific implementation tactics.Research limitations/implications \u2013 There is a need to focus future research efforts...", "which Types of literature reviews ?", "Inductive", 482.0, 491.0], ["OBJECTIVE An estimated 293,300 healthcare-associated cases of Clostridium difficile infection (CDI) occur annually in the United States. To date, research has focused on developing risk prediction models for CDI that work well across institutions. However, this one-size-fits-all approach ignores important hospital-specific factors. We focus on a generalizable method for building facility-specific models. We demonstrate the applicability of the approach using electronic health records (EHR) from the University of Michigan Hospitals (UM) and the Massachusetts General Hospital (MGH). METHODS We utilized EHR data from 191,014 adult admissions to UM and 65,718 adult admissions to MGH. We extracted patient demographics, admission details, patient history, and daily hospitalization details, resulting in 4,836 features from patients at UM and 1,837 from patients at MGH. We used L2 regularized logistic regression to learn the models, and we measured the discriminative performance of the models on held-out data from each hospital. RESULTS Using the UM and MGH test data, the models achieved area under the receiver operating characteristic curve (AUROC) values of 0.82 (95% confidence interval [CI], 0.80\u20130.84) and 0.75 ( 95% CI, 0.73\u20130.78), respectively. Some predictive factors were shared between the 2 models, but many of the top predictive factors differed between facilities. CONCLUSION A data-driven approach to building models for estimating daily patient risk for CDI was used to build institution-specific models at 2 large hospitals with different patient populations and EHR systems. In contrast to traditional approaches that focus on developing models that apply across hospitals, our generalizable approach yields risk-stratification models tailored to an institution. These hospital-specific models allow for earlier and more accurate identification of high-risk patients and better targeting of infection prevention strategies. Infect Control Hosp Epidemiol 2018;39:425\u2013433", "which Infection ?", "Clostridium difficile infection", 62.0, 93.0], ["Background: Seasonal influenza virus outbreaks cause annual epidemics, mostly during winter in temperate zone countries, especially resulting in increased morbidity and higher mortality in children. In order to conduct rapid screening for influenza in pediatric outpatient units, we developed a pediatric infection screening system with a radar respiration monitor. Methods: The system conducts influenza screening within 10 seconds based on vital signs (i.e., respiration rate monitored using a 24 GHz microwave radar; facial temperature, using a thermopile array; and heart rate, using a pulse photosensor). A support vector machine (SVM) classification method was used to discriminate influenza children from healthy children based on vital signs. To assess the classification performance of the screening system that uses the SVM, we conducted influenza screening for 70 children (i.e., 27 seasonal influenza patients (11 \u00b1 2 years) at a pediatric clinic and 43 healthy control subjects (9 \u00b1 4 years) at a pediatric dental clinic) in the winter of 2013-2014. 
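The facility-specific CDI entry above trains L2-regularized logistic regression on EHR-derived features and reports AUROC on held-out admissions. The scikit-learn skeleton of that kind of pipeline, with synthetic features standing in for the EHR variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 50))                 # stand-in for EHR features
y = (X[:, :3].sum(axis=1) + rng.normal(size=5000) > 2).astype(int)  # rare-ish outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```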
Results: The screening system using the SVM identified 26 subjects with influenza (22 of the 27 influenza patients and 4 of the 43 healthy subjects). The system discriminated 44 subjects as healthy (5 of the 27 influenza patients and 39 of the 43 healthy subjects), with sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 81.5%, 90.7%, 84.6%, and 88.6%, respectively. Conclusion: The SVM-based screening system achieved classification results for the outpatient children based on vital signs with comparatively high NPV within 10 seconds. At pediatric clinics and hospitals, our system seems potentially useful in the first screening step for infections in the future.", "which Infection ?", "Influenza", 21.0, 30.0], ["Analysis of data from Electronic Health Records (EHR) presents unique challenges, in particular regarding nonuniform temporal resolution of longitudinal variables. A considerable amount of patient information is available in the EHR - including blood tests that are performed routinely during inpatient follow-up. These data are useful for the design of advanced machine learning-based methods and prediction models. Using a matched cohort of patients undergoing gastrointestinal surgery (101 cases and 904 controls), we built a prediction model for post-operative surgical site infections (SSIs) using Gaussian process (GP) regression, time warping and imputation methods to manage the sparsity of the data source, and support vector machines for classification. For most blood tests, wider confidence intervals after imputation were obtained in patients with SSI. Predictive performance with individual blood tests was maintained or improved by joint model prediction, and non-linear classifiers performed consistently better than linear models.", "which Infection ?", "Surgical Site Infection", NaN, NaN], ["Background Sepsis is one of the leading causes of mortality in hospitalized patients. Despite this fact, a reliable means of predicting sepsis onset remains elusive. Early and accurate sepsis onset predictions could allow more aggressive and targeted therapy while maintaining antimicrobial stewardship. Existing detection methods suffer from low performance and often require time-consuming laboratory test results. Objective To study and validate a sepsis prediction method, InSight, for the new Sepsis-3 definitions in retrospective data, make predictions using a minimal set of variables from within the electronic health record data, compare the performance of this approach with existing scoring systems, and investigate the effects of data sparsity on InSight performance. Methods We apply InSight, a machine learning classification system that uses multivariable combinations of easily obtained patient data (vitals, peripheral capillary oxygen saturation, Glasgow Coma Score, and age), to predict sepsis using the retrospective Multiparameter Intelligent Monitoring in Intensive Care (MIMIC)-III dataset, restricted to intensive care unit (ICU) patients aged 15 years or more. Following the Sepsis-3 definitions of the sepsis syndrome, we compare the classification performance of InSight versus quick sequential organ failure assessment (qSOFA), modified early warning score (MEWS), systemic inflammatory response syndrome (SIRS), simplified acute physiology score (SAPS) II, and sequential organ failure assessment (SOFA) to determine whether or not patients will become septic at a fixed period of time before onset. 
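The screening entries above feed a few vital signs into an SVM and report sensitivity, specificity, PPV, and NPV. A compact sketch of both steps; the vital-sign distributions are invented, not the studies' patient data, and evaluation is in-sample for brevity:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
# Columns: respiration rate (/min), temperature (deg C), heart rate (/min).
healthy = rng.normal([16, 36.6, 80], [2, 0.3, 8], size=(120, 3))
febrile = rng.normal([22, 38.2, 105], [3, 0.5, 12], size=(60, 3))
X = np.vstack([healthy, febrile])
y = np.array([0] * 120 + [1] * 60)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)   # SVM on the vital-sign vector
tn, fp, fn, tp = confusion_matrix(y, clf.predict(X)).ravel()
print(f"sens={tp/(tp+fn):.2f} spec={tn/(tn+fp):.2f} "
      f"PPV={tp/(tp+fp):.2f} NPV={tn/(tn+fn):.2f}")
```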
We also test the robustness of the InSight system to random deletion of individual input observations. Results In a test dataset with 11.3% sepsis prevalence, InSight produced superior classification performance compared with the alternative scores as measured by area under the receiver operating characteristic curves (AUROC) and area under precision-recall curves (APR). In detection of sepsis onset, InSight attains AUROC = 0.880 (SD 0.006) at onset time and APR = 0.595 (SD 0.016), both of which are superior to the performance attained by SIRS (AUROC: 0.609; APR: 0.160), qSOFA (AUROC: 0.772; APR: 0.277), and MEWS (AUROC: 0.803; APR: 0.327) computed concurrently, as well as SAPS II (AUROC: 0.700; APR: 0.225) and SOFA (AUROC: 0.725; APR: 0.284) computed at admission (P<.001 for all comparisons). Similar results are observed for 1-4 hours preceding sepsis onset. In experiments where approximately 60% of input data are deleted at random, InSight attains an AUROC of 0.781 (SD 0.013) and APR of 0.401 (SD 0.015) at sepsis onset time. Even with 60% of data missing, InSight remains superior to the corresponding SIRS scores (AUROC and APR, P<.001), qSOFA scores (P=.0095; P<.001) and superior to SOFA and SAPS II computed at admission (AUROC and APR, P<.001), where all of these comparison scores (except InSight) are computed without data deletion. Conclusions Despite using little more than vitals, InSight is an effective tool for predicting sepsis onset and performs well even with randomly missing data.", "which Infection ?", "Sepsis", 11.0, 17.0], ["The National Surgical Quality Improvement Project (NSQIP) is widely recognized as \u201cthe best in the nation\u201d surgical quality improvement resource in the United States. In particular, it rigorously defines postoperative morbidity outcomes, including surgical adverse events occurring within 30 days of surgery. Due to its manual yet expensive construction process, the NSQIP registry is of exceptionally high quality, but its high cost remains a significant bottleneck to NSQIP\u2019s wider dissemination. In this work, we propose an automated surgical adverse events detection tool, aimed at accelerating the process of extracting postoperative outcomes from medical charts. As a prototype system, we combined local EHR data with the NSQIP gold standard outcomes and developed machine learned models to retrospectively detect Surgical Site Infections (SSI), a particular family of adverse events that NSQIP extracts. The built models have high specificity (from 0.788 to 0.988) as well as very high negative predictive values (>0.98), reliably eliminating the vast majority of patients without SSI, thereby significantly reducing the NSQIP extractors\u2019 burden.", "which Infection ?", "Surgical Site Infection", NaN, NaN], ["Clinical measurements that can be represented as time series constitute an important fraction of the electronic health records and are often both uncertain and incomplete. Recurrent neural networks are a special class of neural networks that are particularly suitable to process time series data but, in their original formulation, cannot explicitly deal with missing data. In this paper, we explore imputation strategies for handling missing values in classifiers based on recurrent neural network (RNN) and apply a recently proposed recurrent architecture, the Gated Recurrent Unit with Decay, specifically designed to handle missing data. 
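GRU-D-style models, as cited above, consume for each time step the (possibly imputed) value together with a missingness mask and the time since the last observation. Building those channels from an irregular laboratory series is mechanical; the pandas sketch below shows only that preprocessing step, with the recurrent model itself omitted:

```python
import numpy as np
import pandas as pd

# Irregular blood-test series: NaN marks days with no measurement.
crp = pd.Series([12.0, np.nan, np.nan, 45.0, np.nan, 30.0],
                index=pd.RangeIndex(6, name="day"))

mask = crp.notna().astype(float)             # 1 = observed, 0 = missing
filled = crp.ffill().fillna(crp.mean())      # last observation carried forward
# Time since the last actual observation (the delta in GRU-D's notation).
obs_day = crp.index.to_series().where(mask.astype(bool)).ffill()
delta = (crp.index.to_series() - obs_day).fillna(0).astype(float)

features = np.stack([filled, mask, delta], axis=1)   # (timesteps, 3) RNN input
print(features)
```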
We focus on the problem of detecting surgical site infection in patients by analyzing time series of their blood sample measurements and we compare the results obtained with different RNN-based classifiers.", "which Infection ?", "Surgical Site Infection", 679.0, 702.0], ["BACKGROUND Predicting recurrent Clostridium difficile infection (rCDI) remains difficult. METHODS. We employed a retrospective cohort design. Granular electronic medical record (EMR) data had been collected from patients hospitalized at 21 Kaiser Permanente Northern California hospitals. The derivation dataset (2007\u20132013) included data from 9,386 patients who experienced incident CDI (iCDI) and 1,311 who experienced their first CDI recurrences (rCDI). The validation dataset (2014) included data from 1,865 patients who experienced incident CDI and 144 who experienced rCDI. Using multiple techniques, including machine learning, we evaluated more than 150 potential predictors. Our final analyses evaluated 3 models with varying degrees of complexity and 1 previously published model. RESULTS Despite having a large multicenter cohort and access to granular EMR data (eg, vital signs, and laboratory test results), none of the models discriminated well (c statistics, 0.591\u20130.605), had good calibration, or had good explanatory power. CONCLUSIONS Our ability to predict rCDI remains limited. Given currently available EMR technology, improvements in prediction will require incorporating new variables because currently available data elements lack adequate explanatory power. Infect Control Hosp Epidemiol 2017;38:1196\u20131203", "which Infection ?", "Clostridium difficile infection", 32.0, 63.0], ["This study describes a novel approach to solve the surgical site infection (SSI) classification problem. Feature engineering has traditionally been one of the most important steps in solving complex classification problems, especially in cases with temporal data. The described novel approach is based on abstraction of temporal data recorded in three temporal windows. Maximum likelihood L1-norm (lasso) regularization was used in penalized logistic regression to predict the onset of surgical site infection occurrence based on available patient blood testing results up to the day of surgery. Prior knowledge of predictors (blood tests) was integrated in the modelling by introduction of penalty factors depending on blood test prices and an early stopping parameter limiting the maximum number of selected features used in predictive modelling. Finally, solutions resulting in higher interpretability and cost-effectiveness were demonstrated. Using repeated holdout cross-validation, the baseline C-reactive protein (CRP) classifier achieved a mean AUC of 0.801, whereas our best full lasso model achieved a mean AUC of 0.956. Best model testing results were achieved for full lasso model with maximum number of features limited at 20 features with an AUC of 0.967. Presented models showed the potential to not only support domain experts in their decision making but could also prove invaluable for improvement in prediction of SSI occurrence, which may even help setting new guidelines in the field of preoperative SSI prevention and surveillance.", "which Infection ?", "Surgical Site Infection", 51.0, 74.0], ["OBJECTIVE To develop a decision support system to identify patients at high risk for hyperlactatemia based upon routinely measured vital signs and laboratory studies. 
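The blood-test entry above fits L1-penalized logistic regression with per-feature penalty factors tied to test prices. scikit-learn exposes no penalty-factor argument, but dividing each feature by its factor makes a global L1 penalty act more strongly on that feature (the glmnet penalty.factor trick); a sketch with invented prices and synthetic data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 6))                     # 6 hypothetical blood tests
y = (X[:, 0] - X[:, 1] + rng.normal(size=800) > 0).astype(int)

price = np.array([1.0, 1.0, 2.0, 4.0, 4.0, 8.0])  # penalty factor per test
X_scaled = X / price                              # larger factor => stronger shrinkage

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_scaled, y)
coef = lasso.coef_.ravel() / price                # map back to the original scale
print("selected tests:", np.nonzero(coef)[0])     # cheap informative tests survive
```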
MATERIALS AND METHODS Electronic health records of 741 adult patients at the University of California Davis Health System who met at least two systemic inflammatory response syndrome criteria were used to associate patients' vital signs, white blood cell count (WBC), with sepsis occurrence and mortality. Generative and discriminative classification (na\u00efve Bayes, support vector machines, Gaussian mixture models, hidden Markov models) were used to integrate heterogeneous patient data and form a predictive tool for the inference of lactate level and mortality risk. RESULTS An accuracy of 0.99 and discriminability of 1.00 area under the receiver operating characteristic curve (AUC) for lactate level prediction was obtained when the vital signs and WBC measurements were analysed in a 24 h time bin. An accuracy of 0.73 and discriminability of 0.73 AUC for mortality prediction in patients with sepsis was achieved with only three features: median of lactate levels, mean arterial pressure, and median absolute deviation of the respiratory rate. DISCUSSION This study introduces a new scheme for the prediction of lactate levels and mortality risk from patient vital signs and WBC. Accurate prediction of both these variables can drive the appropriate response by clinical staff and thus may have important implications for patient health and treatment outcome. CONCLUSIONS Effective predictions of lactate levels and mortality risk can be provided with a few clinical variables when the temporal aspect and variability of patient data are considered.", "which Infection ?", "Sepsis", 440.0, 446.0], ["After the outbreak of severe acute respiratory syndrome (SARS) in 2003, many international airport quarantine stations conducted fever-based screening to identify infected passengers using infrared thermography for preventing global pandemics. Due to environmental factors affecting measurement of facial skin temperature with thermography, some previous studies revealed the limits of authenticity in detecting infectious symptoms. In order to implement more strict entry screening in the epidemic seasons of emerging infectious diseases, we developed an infection screening system for airport quarantines using multi-parameter vital signs. This system can automatically detect infected individuals within several tens of seconds by a neural-network-based discriminant function using measured vital signs, i.e., heart rate obtained by a reflective photo sensor, respiration rate determined by a 10-GHz non-contact respiration radar, and the ear temperature monitored by a thermography. In this paper, to reduce the environmental effects on thermography measurement, we adopted the ear temperature as a new screening indicator instead of facial skin. We tested the system on 13 influenza patients and 33 normal subjects. The sensitivity of the infection screening system in detecting influenza was 92.3%, which was higher than the sensitivity reported in our previous paper (88.0%) with average facial skin temperature.", "which Infection ?", "Influenza", 1178.0, 1187.0], ["Background Surgical site infection (SSI) surveillance is a key factor in the elaboration of strategies to reduce SSI occurrence and in providing surgeons with appropriate data feedback (risk indicators, clinical prediction rule). Aim To improve the predictive performance of an individual-based SSI risk model by considering a multilevel hierarchical structure.
Patients and Methods Data were collected anonymously by the French SSI active surveillance system in 2011. An SSI diagnosis was made by the surgical teams and infection control practitioners following standardized criteria. A random 20% sample comprising 151 hospitals, 502 wards and 62280 patients was used. Three-level (patient, ward, hospital) hierarchical logistic regression models were initially performed. Parameters were estimated using the simulation-based Markov Chain Monte Carlo procedure. Results A total of 623 SSI were diagnosed (1%). The hospital level was discarded from the analysis as it did not contribute to variability of SSI occurrence (p = 0.32). Established individual risk factors (patient history, surgical procedure and hospitalization characteristics) were identified. A significant heterogeneity in SSI occurrence between wards was found (median odds ratio [MOR] 3.59, 95% credibility interval [CI] 3.03 to 4.33) after adjusting for patient-level variables. The effects of the follow-up duration varied between wards (p<10\u22129), with an increased heterogeneity when follow-up was <15 days (MOR 6.92, 95% CI 5.31 to 9.07). The final two-level model significantly improved the discriminative accuracy compared to the single level reference model (p<10\u22129), with an area under the ROC curve of 0.84. Conclusion This study sheds new light on the respective contribution of patient-, ward- and hospital-levels to SSI occurrence and demonstrates the significant impact of the ward level over and above risk factors present at patient level (i.e., independently from patient case-mix).", "which Infection ?", "Surgical Site Infection", 11.0, 34.0], ["Objective. Achieving accurate prediction of sepsis detection moment based on bedside monitor data in the intensive care unit (ICU). A good clinical outcome is more probable when onset is suspected and treated on time, thus early insight of sepsis onset may save lives and reduce costs. Methodology. We present a novel approach for feature extraction, which focuses on the hypothesis that unstable patients are more prone to develop sepsis during ICU stay. These features are used in machine learning algorithms to provide a prediction of a patient\u2019s likelihood to develop sepsis during ICU stay, hours before it is diagnosed. Results. Five machine learning algorithms were implemented using R software packages. The algorithms were trained and tested with a set of 4 features which represent the variability in vital signs. These algorithms aimed to calculate a patient\u2019s probability to become septic within the next 4 hours, based on recordings from the last 8 hours. The best area under the curve (AUC) was achieved with Support Vector Machine (SVM) with radial basis function, which was 88.38%. Conclusions. The high level of predictive accuracy along with the simplicity and availability of input variables present great potential if applied in ICUs. Variability of a patient\u2019s vital signs proves to be a good indicator of one\u2019s chance to become septic during ICU stay.", "which Infection ?", "Sepsis", 79.0, 85.0], ["OBJECTIVE To assess the predictive value for the early detection of sepsis of the physiological monitoring parameters currently recommended by the Surviving Sepsis Campaign.
METHODS The Project IMPACT data set was used to assess whether the physiological parameters of heart rate, mean arterial pressure, body temperature, and respiratory rate can be used to distinguish between critically ill adult patients with and without sepsis in the first 24 hours of admission to an intensive care unit. RESULTS All predictor variables used in the analyses differed significantly between patients with sepsis and patients without sepsis. However, only 2 of the predictor variables, mean arterial pressure and high temperature, were independently associated with sepsis. In addition, the temperature mean for hypothermia was significantly lower in patients without sepsis. The odds ratio for having sepsis was 2.126 for patients with a temperature of 38 degrees C or higher, 3.874 for patients with a mean arterial blood pressure of less than 70 mm Hg, and 4.63 times greater for patients who had both of these conditions. CONCLUSIONS The results support the use of some of the guidelines of the Surviving Sepsis Campaign. However, the lowest mean temperature was significantly less for patients without sepsis than for patients with sepsis, a finding that calls into question the clinical usefulness of using hypothermia as an early predictor of sepsis. Alone the group of variables used is not sufficient for discriminating between critically ill patients with and without sepsis.", "which Infection ?", "Sepsis", 68.0, 74.0], ["A large fraction of the electronic health records consists of clinical measurements collected over time, such as blood tests, which provide important information about the health status of a patient. These sequences of clinical measurements are naturally represented as time series, characterized by multiple variables and the presence of missing data, which complicate analysis. In this work, we propose a surgical site infection detection framework for patients undergoing colorectal cancer surgery that is completely unsupervised, hence alleviating the problem of getting access to labelled training data. The framework is based on powerful kernels for multivariate time series that account for missing data when computing similarities. Our approach show superior performance compared to baselines that have to resort to imputation techniques and performs comparable to a supervised classification baseline.", "which Infection ?", "Surgical Site Infection", 407.0, 430.0], ["Objective. To develop and validate a risk prediction model that could identify patients at high risk for Clostridium difficile infection (CDI) before they develop disease. Design and Setting. Retrospective cohort study in a tertiary care medical center. Patients. Patients admitted to the hospital for at least 48 hours during the calendar year 2003. Methods. Data were collected electronically from the hospital's Medical Informatics database and analyzed with logistic regression to determine variables that best predicted patients' risk for development of CDI. Model discrimination and calibration were calculated. The model was bootstrapped 500 times to validate the predictive accuracy. A receiver operating characteristic curve was calculated to evaluate potential risk cutoffs. Results. A total of 35,350 admitted patients, including 329 with CDI, were studied. 
Variables in the risk prediction model were age, CDI pressure, times admitted to hospital in the previous 60 days, modified Acute Physiology Score, days of treatment with high-risk antibiotics, whether albumin level was low, admission to an intensive care unit, and receipt of laxatives, gastric acid suppressors, or antimotility drugs. The calibration and discrimination of the model were very good to excellent (C index, 0.88; Brier score, 0.009). Conclusions. The CDI risk prediction model performed well. Further study is needed to determine whether it could be used in a clinical setting to prevent CDI-associated outcomes and reduce costs.", "which Infection ?", "Clostridium difficile infection", 105.0, 136.0], ["Clostridium Difficile Infection (CDI) is a contagious healthcare-associated infection that imposes a significant burden on the healthcare system. In 2011 alone, half a million patients suffered from CDI in the United States, 29,000 dying within 30 days of diagnosis. Determining which hospital patients are at risk for developing CDI is critical to helping healthcare workers take timely measures to prevent or detect and treat this infection. We improve the state of the art of CDI risk prediction by designing an ensemble logistic regression classifier that given partial patient visit histories, outputs the risk of patients acquiring CDI during their current hospital visit. The novelty of our approach lies in the representation of each patient visit as a collection of co-occurring and chronologically ordered pairs of events. This choice is motivated by our hypothesis that CDI risk is influenced not just by individual events (e.g., Being prescribed a first generation cephalosporin antibiotic), but by the temporal ordering of individual events (e.g., Antibiotic prescription followed by transfer to a certain hospital unit). While this choice explodes the number of features, we use a randomized greedy feature selection algorithm followed by BIC minimization to reduce the dimensionality of the feature space, while retaining the most relevant features. We apply our approach to a rich dataset from the University of Iowa Hospitals and Clinics (UIHC), curated from diverse sources, consisting of 200,000 visits (30,000 per year, 2006-2011) involving 125,000 unique patients, 2 million diagnoses, 8 million prescriptions, 400,000 room transfers spanning a hospital with 700 patient rooms and 200 units. Our approach to classification produces better risk predictions (AUC) than existing risk estimators for CDI, even when trained just on data available at patient admission. It also identifies novel risk factors for CDI that are combinations of co-occurring and chronologically ordered events.", "which Infection ?", "Clostridium difficile infection", 0.0, 31.0], ["Sepsis is a severe medical condition caused by an inordinate immune response to an infection. Early detection of sepsis symptoms is important to prevent the progression into the more severe stages of the disease, which kills one in four it affects. Electronic medical records of 1492 patients containing 233 cases of sepsis were used in a clustering analysis to identify features that are indicative of sepsis and can be further used for training a Bayesian inference network. The Bayesian network was constructed using the systemic inflammatory response syndrome criteria, mean arterial pressure, and lactate levels for sepsis patients. The resulting network reveals a clear correlation between lactate levels and sepsis.
Furthermore, it was shown that lactate levels may be predictive of the SIRS criteria. In this light, Bayesian networks of sepsis patients hold the promise of providing a clinical decision support system in the future.", "which Infection ?", "Sepsis", 0.0, 6.0], ["The analysis of a document image to derive a symbolic description of its structure and contents involves using spatial domain knowledge to classify the different printed blocks (e.g., text paragraphs), group them into logical units (e.g., newspaper stories), and determine the reading order of the text blocks within each unit. These steps describe the conversion of the physical structure of a document into its logical structure. We have developed a computational model for document logical structure derivation, in which a rule-based control strategy utilizes the data obtained from analyzing a digitized document image, and makes inferences using a multi-level knowledge base of document layout rules. The knowledge-based document logical structure derivation system (DeLoS) based on this model consists of a hierarchical rule-based control system to guide the block classification, grouping and read-ordering operations; a global data structure to store the document image data and incremental inferences; and a domain knowledge base to encode the rules governing document layout.", "which Key Idea ?", "knowledge-based", 710.0, 725.0], ["The analysis of a document image to derive a symbolic description of its structure and contents involves using spatial domain knowledge to classify the different printed blocks (e.g., text paragraphs), group them into logical units (e.g., newspaper stories), and determine the reading order of the text blocks within each unit. These steps describe the conversion of the physical structure of a document into its logical structure. We have developed a computational model for document logical structure derivation, in which a rule-based control strategy utilizes the data obtained from analyzing a digitized document image, and makes inferences using a multi-level knowledge base of document layout rules. The knowledge-based document logical structure derivation system (DeLoS) based on this model consists of a hierarchical rule-based control system to guide the block classification, grouping and read-ordering operations; a global data structure to store the document image data and incremental inferences; and a domain knowledge base to encode the rules governing document layout.", "which Key Idea ?", "rule-based", 526.0, 536.0], ["A method for extracting alternating horizontal and vertical projection profiles from nested sub-blocks of scanned page images of technical documents is discussed. The thresholded profile strings are parsed using the compiler utilities Lex and Yacc. The significant document components are demarcated and identified by the recursive application of block grammars. Backtracking for error recovery and branch and bound for maximum-area labeling are implemented with Unix Shell programs. Results of the segmentation and labeling process are stored in a labeled x-y tree. It is shown that families of technical documents that share the same layout conventions can be readily analyzed. Results from experiments in which more than 20 types of document entities were identified in sample pages from two journals are presented.
>", "which Key Idea ?", "block grammar", NaN, NaN], ["A new method for logical structure analysis of document images is proposed in this paper as the basis for a document reader which can extract logical information from various printed documents. The proposed system consists of five basic modules: typography analysis, object recognition, object segmentation, object grouping and object modification. Emergent computation, which is a key concept of artificial life, is adopted for the cooperative interaction among the modules in the system in order to achieve an effective and flexible behavior of the whole system. It has two principal advantages over other methods: adaptive system configuration for various and complex logical structures, and robust document analysis that is tolerant of erroneous feature detection.", "which Key Idea ?", "emergent computation", 349.0, 369.0], ["We studied a dipstick assay for the detection of Leptospira-specific immunoglobulin M (IgM) antibodies in human serum samples. A high degree of concordance was observed between the results of the dipstick assay and an IgM enzyme-linked immunosorbent assay (ELISA). Application of the dipstick assay for the detection of acute leptospirosis enabled the accurate identification, early in the disease, of a high proportion of the cases of leptospirosis. Analysis of a second serum sample is recommended, in order to determine seroconversion or increased staining intensity. All serum samples from the patients who were confirmed to be positive for leptospirosis by either a positive microscopic agglutination test or a positive culture but were found to be negative by the dipstick assay were also judged to be negative by the IgM ELISA or revealed borderline titers by the IgM ELISA. Some cross-reactivity was observed for sera from patients with diseases other than leptospirosis, and this should be taken into account in the interpretation of test results. The dipstick assay is easy to perform, can be performed quickly, and requires no electricity or special equipment, and the assay components, a dipstick and a staining reagent, can be stored for a prolonged period without a loss of reactivity, even at elevated temperatures.", "which Application ?", "Leptospirosis", 326.0, 339.0], ["ABSTRACT A newly developed reagent strip assay for the diagnosis of schistosomiasis based on parasite antigen detection in urine of infected individuals was evaluated. The test uses the principle of lateral flow through a nitrocellulose strip of the sample mixed with a colloidal carbon conjugate of a monoclonal antibody specific for Schistosoma circulating cathodic antigen (CCA). The strip assay to diagnose a group of highly infected schoolchildren in Mwanza, Tanzania, demonstrated a high sensitivity and association with the intensity of infection as measured both by egg counts, and by circulating anodic antigen and CCA levels determined by enzyme-linked immunosorbent assay. A specificity of ca. 90% was shown in a group of schistosome-negative schoolchildren from Tarime, Tanzania, an area where schistosomiasis is not endemic. The test is easy to perform and requires no technical equipment or special training. The stability of the strips and the conjugate in the dry format lasts for at least 3 months at ambient temperature in sealed packages, making it suitable for transport and use in areas where schistosomiasis is endemic. 
This assay can easily be developed to an end-user format.", "which Application ?", "Schistosomiasis", 68.0, 83.0], ["Hyperspectral sensors are devices that acquire images over hundreds of spectral bands, thereby enabling the extraction of spectral signatures for objects or materials observed. Hyperspectral remote sensing has been used over a wide range of applications, such as agriculture, forestry, geology, ecological monitoring and disaster monitoring. In this paper, the specific application of hyperspectral remote sensing to agriculture is examined. The technological development of agricultural methods is of critical importance as the world's population is anticipated to continuously rise much beyond the current number of 7 billion. One area upon which hyperspectral sensing can yield considerable impact is that of precision agriculture - the use of observations to optimize the use of resources and management of farming practices. For example, hyperspectral image processing is used in the monitoring of plant diseases, insect pests and invasive plant species; the estimation of crop yield; and the fine classification of crop distributions. This paper also presents a detailed overview of hyperspectral data processing techniques and suggestions for advancing the agricultural applications of hyperspectral technologies in Turkey.", "which Application ?", "Estimation of Crop Yield", 964.0, 988.0], ["Abstract A brief review of research in remote sensing of water resources indicates that there are many positive results, and some techniques have been applied operationally. Currently, remote sensing data are being used operationally in precipitation estimates, soil moisture measurements for irrigation scheduling, snow water equivalent and snow cover extent assessments, seasonal and short term snowmelt runoff forecasts, and surface water inventories. In the next decade other operational applications are likely using remote measurements of land cover, sediment loads, erosion, groundwater, and areal inputs to hydrological models. Many research challenges remain, and significant progress is expected in areas like albedo measurements, energy budgets, and evapotranspiration estimation. The research in remote sensing and water resources also has much relevance for related studies of climate change and global habitability.", "which Application ?", "Land cover", 545.0, 555.0], ["Hyperspectral sensors are devices that acquire images over hundreds of spectral bands, thereby enabling the extraction of spectral signatures for objects or materials observed. Hyperspectral remote sensing has been used over a wide range of applications, such as agriculture, forestry, geology, ecological monitoring and disaster monitoring. In this paper, the specific application of hyperspectral remote sensing to agriculture is examined. The technological development of agricultural methods is of critical importance as the world's population is anticipated to continuously rise much beyond the current number of 7 billion. One area upon which hyperspectral sensing can yield considerable impact is that of precision agriculture - the use of observations to optimize the use of resources and management of farming practices. For example, hyperspectral image processing is used in the monitoring of plant diseases, insect pests and invasive plant species; the estimation of crop yield; and the fine classification of crop distributions. 
This paper also presents a detailed overview of hyperspectral data processing techniques and suggestions for advancing the agricultural applications of hyperspectral technologies in Turkey.", "which Application ?", "Precision agriculture", 712.0, 733.0], ["Abstract A brief review of research in remote sensing of water resources indicates that there are many positive results, and some techniques have been applied operationally. Currently, remote sensing data are being used operationally in precipitation estimates, soil moisture measurements for irrigation scheduling, snow water equivalent and snow cover extent assessments, seasonal and short term snowmelt runoff forecasts, and surface water inventories. In the next decade other operational applications are likely using remote measurements of land cover, sediment loads, erosion, groundwater, and areal inputs to hydrological models. Many research challenges remain, and significant progress is expected in areas like albedo measurements, energy budgets, and evapotranspiration estimation. The research in remote sensing and water resources also has much relevance for related studies of climate change and global habitability.", "which Application ?", "Soil moisture", 262.0, 275.0], ["Hyperspectral technology is useful for urban studies due to its capability in examining detailed spectral characteristics of urban materials. This study aims to develop a spectral library of urban materials and demonstrate its application in remote sensing analysis of an urban environment. Field measurements were conducted by using ASD FieldSpec 3 Spectroradiometer with wavelength range from 350 to 2500 nm. The spectral reflectance curves of urban materials were interpreted and analyzed. A collection of 22 spectral data was compiled into a spectral library. The spectral library was put to practical use by utilizing the reference spectra for WorldView-2 satellite image classification which demonstrates the usability of such infrastructure to facilitate further progress of remote sensing applications in Malaysia.", "which Application ?", "Urban Materials", 125.0, 140.0], ["We present LatinISE, a Latin corpus for the Sketch Engine. LatinISE consists of Latin works comprising a total of 13 million words, covering the time span from the 2nd century B. C. to the 21st century A. D. LatinISE is provided with rich metadata mark-up, including author, title, genre, era, date and century, as well as book, section, paragraph and line of verses. We have automatically annotated LatinISE with lemma and part-of-speech information. The annotation enables the users to search the corpus with a number of criteria, ranging from lemma, part-of-speech, context, to subcorpora defined chronologically or by genre. We also illustrate word sketches, one-page summaries of a word\u2019s corpus-based collocational behaviour. Our future plan is to produce word sketches for Latin words by adding richer morphological and syntactic annotation to the corpus.", "which Application ?", "Sketch Engine", 44.0, 57.0], ["The Norwegian company Omya Hustadmarmor supplies calcium carbonate slurry to European paper manufacturers from a single processing plant, using chemical tank ships of various sizes to transport its products. Transportation costs are lower for large ships than for small ships, but their use increases planning complexity and creates problems in production. In 2001, the company faced overwhelming operational challenges and sought operations-research-based planning support.
The CEO, Sturla Steinsvik, contacted More Research Molde, which conducted a project that led to the development of a decision-support system (DSS) for maritime inventory routing. The core of the DSS is an optimization model that is solved through a metaheuristic-based algorithm. The system helps planners to make stronger, faster decisions and has increased predictability and flexibility throughout the supply chain. It has saved production and transportation costs close to US$7 million a year. We project additional direct savings of nearly US$4 million a year as the company adds even larger ships to the fleet as a result of the project. In addition, the company has avoided investments of US$35 million by increasing capacity utilization. Finally, the project has had a positive environmental effect by reducing overall oil consumption by more than 10 percent.", "which mode ?", "Maritime", 626.0, 634.0], ["BACKGROUND Clostridium difficile infection of the colon is a common and well-described clinical entity. Clostridium difficile enteritis of the small bowel is believed to be less common and has been described sparsely in the literature. METHODS Case report and literature review. RESULTS We describe a patient who had undergone total proctocolectomy with ileal pouch-anal anastomosis who was treated with broad-spectrum antibiotics and contracted C. difficile refractory to metronidazole. The enteritis resolved quickly after initiation of combined oral vancomycin and metronidazole. A literature review found that eight of the fifteen previously reported cases of C. difficile-associated small-bowel enteritis resulted in death. CONCLUSIONS It is important for physicians who treat acolonic patients to be aware of C. difficile enteritis of the small bowel so that it can be suspected, diagnosed, and treated.", "which Treatment ?", "metronidazole", 473.0, 486.0], ["Pseudomembranous colitis is a well recognized complication of antibiotic use1 and is due to disturbances of the normal colonic bacterial flora, resulting in overgrowth of Clostridium difficile. For recurrent or severe cases, oral vancomycin or metronidazole is the treatment of choice. Progression to acute fulminant colitis with systemic toxic effects occasionally occurs, especially in the elderly and in the immunosuppressed. Some of these patients may need surgical intervention for complications such as perforation.2 Clostridium difficile is commonly regarded as a colonic pathogen and there are few reports of C. difficile enteritis with involvement of the small bowel (Table 1). Pseudomembrane formation caused by C. difficile is generally restricted to the colon, with abrupt termination at the ileocaecal valve.1,3,5,8,9 We report a case of fulminant and fatal C. difficile infection with pseudomembranes throughout the entire small bowel and colon in a patient following complex colorectal surgery. The relevant literature is reviewed.", "which Treatment ?", "Vancomycin", 230.0, 240.0], ["BACKGROUND Clostridium difficile infection of the colon is a common and well-described clinical entity. Clostridium difficile enteritis of the small bowel is believed to be less common and has been described sparsely in the literature. METHODS Case report and literature review. RESULTS We describe a patient who had undergone total proctocolectomy with ileal pouch-anal anastomosis who was treated with broad-spectrum antibiotics and contracted C. difficile refractory to metronidazole. 
The enteritis resolved quickly after initiation of combined oral vancomycin and metronidazole. A literature review found that eight of the fifteen previously reported cases of C. difficile-associated small-bowel enteritis resulted in death. CONCLUSIONS It is important for physicians who treat acolonic patients to be aware of C. difficile enteritis of the small bowel so that it can be suspected, diagnosed, and treated.", "which Treatment ?", "Vancomycin", 553.0, 563.0], ["To the Editor: A 54-year-old male was admitted to a community hospital with a 3-month history of diarrhea up to 8 times a day associated with bloody bowel motions and weight loss of 6 kg. He had no past medical history or family history of note. A clinical diagnosis of colitis was made and the patient underwent a limited colonoscopy which demonstrated continuous mucosal inflammation and ulceration that was most marked in the rectum. The clinical and endoscopic findings were suggestive of acute ulcerative colitis (UC), which was subsequently supported by histopathology. The patient was managed with bowel rest and intravenous steroids. However, he developed toxic megacolon on day 4 of his admission and underwent a total colectomy with end ileostomy. On the third postoperative day the patient developed a pyrexia of 39\u00b0C, a septic screen was performed, and the central venous line (CVP) was changed with the tip culturing methicillin-resistant Staphylococcus aureus (MRSA). Intravenous gentamycin was commenced and discontinued after 5 days, with the patient remaining afebrile and stable. On the tenth postoperative day the patient became tachycardic (pulse 110/min), diaphoretic (temperature of 39.4\u00b0C), hypotensive (diastolic of 60 mm Hg), and with a high volume nasogastric aspirates noted (2000 mL). A diagnosis of septic shock was considered although the etiology was unclear. The patient was resuscitated with intravenous fluids and transferred to the regional surgical unit for Intensive Care Unit monitoring and management. A computed tomography (CT) of the abdomen showed a marked inflammatory process with bowel wall thickening along the entire small bowel with possible intramural air, raising the suggestion of ischemic bowel (Fig. 1). However, on clinical assessment the patient elicited no signs of peritonism, his vitals were stable, he was not acidotic (pH 7.40), urine output was adequate, and his blood pressure was being maintained without inotropic support. Furthermore, his ileostomy appeared healthy and well perfused, although a high volume (2500 mL in the previous 18 hours), malodorous output was noted. A sample of the stoma output was sent for microbiological analysis. Given that the patient was not exhibiting evidence of peritonitis with normal vital signs, a conservative policy of fluid resuscitation was pursued with plans for exploratory laparotomy if he disimproved. Ileostomy output sent for microbiology assessment was positive for Clostridium difficile toxin A and B utilizing culture and enzyme immunoassays (EIA). Intravenous vancomycin, metronidazole, and rifampicin via a nasogastric tube were commenced in conjunction with bowel rest and total parenteral nutrition. The ileostomy output reduced markedly within 2 days and the patient\u2019s clinical condition improved. Follow-up culture of the ileostomy output was negative for C. difficile toxins. The patient was discharged in good health on full oral diet 12 days following transfer. 
Review of histopathology relating to the resected colon and subsequent endoscopic assessment of the retained rectum confirmed the initial diagnosis of UC, rather than a primary diagnosis of pseudomembranous colitis. Clostridium difficile is the leading cause of nosocomial diarrhea associated with antibiotic therapy and is almost always limited to the colonic mucosa.1 Small bowel enteritis secondary to C. difficile is exceedingly rare, with only 21 previous cases cited in the literature.2,3 Of this cohort, 18 patients had a surgical procedure at some timepoint prior to the development of C. difficile enteritis, while the remaining 3 patients had no surgical procedure prior to the infection. The time span between surgery and the development of enteritis ranged from 4 days to 31 years. Antibiotic therapy predisposed to the development of C. difficile enteritis in 20 of the cases. A majority of the patients (n = 11) had a history of inflammatory bowel disease (IBD), with 8 having UC similar to our patient and the remaining 3 patients having a history of Crohn\u2019s disease. The etiology of small bowel enteritis remains unclear. C. difficile has been successfully isolated from the small bowel in both autopsy specimens and from jejunal aspirate of patients with chronic diarrhea, suggesting that the small bowel may act as a reservoir for C. difficile.4 This would suggest that C. difficile could become pathogenic in the small bowel following a disruption in the small bowel flora in the setting of antibiotic therapy. This would be supported by the observation that the majority of cases reported occurred within 90 days of surgery with attendant disruption of bowel function. The prevalence of C. difficile-associated disease (CDAD) in patients with IBD is increasing. Issa et al5 examined the impact of CDAD in a cohort of patients with IBD. They found that more than half of the patients with a positive culture for C. difficile were admitted and 20% required a colectomy. They reported that maintenance immunomodulator use and colonic involvement were independent risk factors for C. difficile infection in patients with IBD. The rising incidence of C. difficile in patients with IBD coupled with the use of increasingly potent immunomodulatory therapies means that clinicians must have a high index of suspicion. Copyright \u00a9 2008 Crohn\u2019s & Colitis Foundation of America, Inc. DOI 10.1002/ibd.20758. Published online 22 October 2008 in Wiley InterScience (www.interscience.wiley.com).", "which Treatment ?", "Vancomycin", 2575.0, 2585.0], ["In this paper, we investigate agile team perceptions of factors impacting their productivity. Within this overall goal, we also investigate which productivity concept was adopted by the agile teams studied. We here conducted two case studies in the industry and analyzed data from two projects that we followed for six months. From the perspective of agile team members, the three most perceived factors impacting on their productivity were appropriate team composition and allocation, external dependencies, and staff turnover. Teams also mentioned pair programming and collocation as agile practices that impact productivity. As a secondary finding, most team members did not share the same understanding of the concept of productivity.
While some known factors still impact agile team productivity, new factors emerged from the interviews as potential productivity factors impacting agile teams.", "which Focus ?", "Productivity", 80.0, 92.0], ["[fre] Nous evaluons de facon conjointe les differences de productivite et de remuneration existant en France entre diverses categories de travailleurs au moyen d'une nouvelle base de donnees qui reunit des informations tant sur les employes que sur leurs employeurs. Completant une methodologie nouvelle proposee au depart par Hellerstein, Neumark et Troske [1999], nous adoptons des hypotheses moins contraignantes et fournissons une methode utilisant le cout du travail pour les employeurs. De facon surprenante, les resultats trouves pour la France sont tres differents de ceux obtenus pour les Etats-Unis, et plus proches des resultats en Norvege : dans le secteur manufacturier, nous constatons que les travailleurs \u00e2ges sont plus payes par rapport aux travailleurs jeunes que leur difference de productivite ne le laisserait supposer. La robustesse de ces resultats semble confirmee a travers le temps, les secteurs d'activite et les hypotheses retenues. [eng] In this study we analyse the differences between productivity levels and earnings across a range of categories of workers in France, drawing on a new database which brings together data from employers and employees. We take as our starting point the methodology first introduced by Hellerstein, Neumark and Troske [1999], and develop it further by applying less restrictive assumptions and by using a new method which takes into account labour costs incurred by the employers. The results obtained for France are surprisingly different to those for the United States and in fact are closest to the results obtained for Norway. For example, we find that in the manufacturing sector, relatively to younger workers, older workers are paid more than the difference in productivity between the two age groups would suggest. These results appear to be robust over time regardless of the sector studied or the assumptions used.", "which Performance indicator (in per-capita terms if not otherwise indicated) ?", "Productivity", 1018.0, 1030.0], ["In this paper, we examine the emerging use of ICT in social phenomena such as natural disasters. Researchers have acknowledged that a community possesses the capacity to manage the challenges in crisis response on its own. However, extant IS studies focus predominantly on IS use from the crisis response agency\u2019s perspective, which undermines communities\u2019 role. By adopting an empowerment perspective, we focus on understanding how social media empowers communities during crisis response. As such, we present a qualitative case study of the 2011 Thailand flooding. Using an interpretive approach, we show how social media can empower the community from three dimensions of empowerment process (structural, psychological, and resource empowerment) to achieve collective participation, shared identification, and collaborative control in the community.
We make two contributions: 1) we explore an emerging social consequence of ICT by illustrating the roles of social media in empowering communities when responding to crises, and 2) we address the literature gap in empowerment by elucidating the actualization process of empowerment that social media as a mediating structure enables.", "which Emergency Management Phase ?", "Response", 202.0, 210.0], ["This paper investigates the challenges faced in designing an integrated information platform for emergency response management and uses the Beijing Olympic Games as a case study. The research methods are grounded in action research, participatory design, and situation-awareness oriented design. The completion of a more than two-year industrial secondment and six-month field studies ensured that a full understanding of user requirements had been obtained. A service-centered architecture was proposed to satisfy these user requirements. The proposed architecture consists mainly of information gathering, database management, and decision support services. The decision support services include situational overview, instant risk assessment, emergency response preplan, and disaster development prediction. Abstracting from the experience obtained while building this system, we outline a set of design principles in the general domain of information systems (IS) development for emergency management. These design principles form a contribution to the information systems literature because they provide guidance to developers who are aiming to support emergency response and the development of such systems that have not yet been adequately met by any existing types of IS. We are proud that the information platform developed was deployed in the real world and used in the 2008 Beijing Olympic Games.", "which Emergency Management Phase ?", "response", 107.0, 115.0], ["Emergency response requires an efficient information supply chain for the smooth operations of intra- and inter-organizational emergency management processes. However, the breakdown of this information supply chain due to the lack of consistent data standards presents a significant problem. In this paper, we adopt a theory driven novel approach to develop a XML-based data model that prescribes a comprehensive set of data standards (semantics and internal structures) for emergency management to better address the challenges of information interoperability. Actual documents currently being used in mitigating chemical emergencies from a large number of incidents are used in the analysis stage. The data model development is guided by Activity Theory and is validated through an RFC-like process used in standards development. This paper applies the standards to the real case of a chemical incident scenario. Further, it complies with the national leading initiatives in emergency standards (National Information Exchange Model).", "which Emergency Management Phase ?", "response", 10.0, 18.0], ["The devastating 2011 Great East Japan Earthquake made people aware of the importance of Information and Communication Technology (ICT) for sustaining life during and soon after a disaster. The difficulty in recovering information systems, because of the failure of ICT, hindered all recovery processes. The paper explores ways to make information systems resilient in disaster situations. Resilience is defined as quickly regaining essential capabilities to perform critical post disaster missions and to smoothly return to fully stable operations thereafter.
From case studies and the literature, we propose that a frugal IS design that allows creative responses will make information systems resilient in disaster situations. A three-stage model based on a chronological sequence was employed in structuring the proposed design principles.", "which Emergency Management Phase ?", "response", NaN, NaN], ["In this paper, we examine the emerging use of ICT in social phenomena such as natural disasters. Researchers have acknowledged that a community possesses the capacity to manage the challenges in crisis response on its own. However, extant IS studies focus predominantly on IS use from the crisis response agency\u2019s perspective, which undermines communities\u2019 role. By adopting an empowerment perspective, we focus on understanding how social media empowers communities during crisis response. As such, we present a qualitative case study of the 2011 Thailand flooding. Using an interpretive approach, we show how social media can empower the community from three dimensions of empowerment process (structural, psychological, and resource empowerment) to achieve collective participation, shared identification, and collaborative control in the community. We make two contributions: 1) we explore an emerging social consequence of ICT by illustrating the roles of social media in empowering communities when responding to crises, and 2) we address the literature gap in empowerment by elucidating the actualization process of empowerment that social media as a mediating structure enables.", "which Emergency Management Phase ?", "Response", 202.0, 210.0], ["Emergency response systems are a relatively new and important area of research in the information systems community. While there is a growing body of literature in this research stream, human-computer interaction (HCI) issues concerning the design of emergency response system interfaces have received limited attention. Emergency responders often work in time pressured situations and depend on fast access to key information. One of the problems studied in HCI research is the design of interfaces to improve user information selection and processing performance. Based on cue-summation theory and research findings on parallel processing, associative processing, and hemispheric differences in information processing, this study proposes that information selection of target information in an emergency response dispatch application can be improved by using supplementary cues. Color-coding and sorting are proposed as relevant cues that can improve processing performance by providing prioritization heuristics. An experimental emergency response dispatch application is developed, and user performance is tested under conditions of varying complexity and time pressure. The results suggest that supplementary cues significantly improve performance, with better results often obtained when both cues are used. Additionally, the use of these cues becomes more beneficial as time pressure and task complexity increase.", "which Emergency Management Phase ?", "response", 10.0, 18.0], ["In the past two decades, organizational scholars have focused significant attention on how organizations manage crises. While most of these studies concentrate on crisis prevention, there is a growing emphasis on crisis response. 
Because information that is critical to crisis response may become outdated as crisis conditions change, crisis response research recognizes that the management of information flows and networks is critical to crisis response. Yet despite its importance, little is known about the various types of crisis information networks and the role of IT in enabling these information networks. Employing concepts from information flow and social network theories, this paper contributes to crisis management research by developing four crisis response information network prototypes. These networks are based on two main dimensions: (1) information flow intensity and (2) network density. We describe how considerations of these two dimensions with supporting case evidence yield four prototypical crisis information response networks: Information Star, Information Pyramid, Information Forest, and Information Black-out. In addition, we examine the role of IT within each information network structure. We conclude with guidelines for managers to deploy appropriate information networks during crisis response and with suggestions for future research related to IT and crisis management.", "which Emergency Management Phase ?", "Response", 220.0, 228.0], ["The analysis of a document image to derive a symbolic description of its structure and contents involves using spatial domain knowledge to classify the different printed blocks (e.g., text paragraphs), group them into logical units (e.g., newspaper stories), and determine the reading order of the text blocks within each unit. These steps describe the conversion of the physical structure of a document into its logical structure. We have developed a computational model for document logical structure derivation, in which a rule-based control strategy utilizes the data obtained from analyzing a digitized document image, and makes inferences using a multi-level knowledge base of document layout rules. The knowledge-based document logical structure derivation system (DeLoS) based on this model consists of a hierarchical rule-based control system to guide the block classification, grouping and read-ordering operations; a global data structure to store the document image data and incremental inferences; and a domain knowledge base to encode the rules governing document layout.", "which Physical Layout Representation ?", "rules", 699.0, 704.0], ["The National Library of Medicine (NLM) is developing an automated system to produce bibliographic records for its MEDLINE\u00ae database. This system, named Medical Article Record System (MARS), employs document image analysis and understanding techniques and optical character recognition (OCR). This paper describes a key module in MARS called the Automated Labeling (AL) module, which labels all zones of interest (title, author, affiliation, and abstract) automatically. The AL algorithm is based on 120 rules that are derived from an analysis of journal page layouts and features extracted from OCR output. Experiments carried out on more than 11,000 articles in over 1,000 biomedical journals show the accuracy of this rule-based algorithm to exceed 96%.", "which Physical Layout Representation ?", "zones", 394.0, 399.0], ["Sepsis is a severe medical condition caused by an inordinate immune response to an infection. Early detection of sepsis symptoms is important to prevent the progression into the more severe stages of the disease, which kills one in four it affects.
Electronic medical records of 1492 patients containing 233 cases of sepsis were used in a clustering analysis to identify features that are indicative of sepsis and can be further used for training a Bayesian inference network. The Bayesian network was constructed using the systemic inflammatory response syndrome criteria, mean arterial pressure, and lactate levels for sepsis patients. The resulting network reveals a clear correlation between lactate levels and sepsis. Furthermore, it was shown that lactate levels may be predictive of the SIRS criteria. In this light, Bayesian networks of sepsis patients hold the promise of providing a clinical decision support system in the future.", "which Objective ?", "Sepsis", 0.0, 6.0], ["OBJECTIVE To develop a decision support system to identify patients at high risk for hyperlactatemia based upon routinely measured vital signs and laboratory studies. MATERIALS AND METHODS Electronic health records of 741 adult patients at the University of California Davis Health System who met at least two systemic inflammatory response syndrome criteria were used to associate patients' vital signs, white blood cell count (WBC), with sepsis occurrence and mortality. Generative and discriminative classification (na\u00efve Bayes, support vector machines, Gaussian mixture models, hidden Markov models) were used to integrate heterogeneous patient data and form a predictive tool for the inference of lactate level and mortality risk. RESULTS An accuracy of 0.99 and discriminability of 1.00 area under the receiver operating characteristic curve (AUC) for lactate level prediction was obtained when the vital signs and WBC measurements were analysed in a 24 h time bin. An accuracy of 0.73 and discriminability of 0.73 AUC for mortality prediction in patients with sepsis was achieved with only three features: median of lactate levels, mean arterial pressure, and median absolute deviation of the respiratory rate. DISCUSSION This study introduces a new scheme for the prediction of lactate levels and mortality risk from patient vital signs and WBC. Accurate prediction of both these variables can drive the appropriate response by clinical staff and thus may have important implications for patient health and treatment outcome. CONCLUSIONS Effective predictions of lactate levels and mortality risk can be provided with a few clinical variables when the temporal aspect and variability of patient data are considered.", "which Objective ?", "Sepsis", 440.0, 446.0], ["Background Surgical site infection (SSI) surveillance is a key factor in the elaboration of strategies to reduce SSI occurrence and in providing surgeons with appropriate data feedback (risk indicators, clinical prediction rule). Aim To improve the predictive performance of an individual-based SSI risk model by considering a multilevel hierarchical structure. Patients and Methods Data were collected anonymously by the French SSI active surveillance system in 2011. An SSI diagnosis was made by the surgical teams and infection control practitioners following standardized criteria. A random 20% sample comprising 151 hospitals, 502 wards and 62280 patients was used. Three-level (patient, ward, hospital) hierarchical logistic regression models were initially performed. Parameters were estimated using the simulation-based Markov Chain Monte Carlo procedure. Results A total of 623 SSI were diagnosed (1%). The hospital level was discarded from the analysis as it did not contribute to variability of SSI occurrence (p = 0.32).
Established individual risk factors (patient history, surgical procedure and hospitalization characteristics) were identified. A significant heterogeneity in SSI occurrence between wards was found (median odds ratio [MOR] 3.59, 95% credibility interval [CI] 3.03 to 4.33) after adjusting for patient-level variables. The effects of the follow-up duration varied between wards (p<10\u22129), with an increased heterogeneity when follow-up was <15 days (MOR 6.92, 95% CI 5.31 to 9.07]). The final two-level model significantly improved the discriminative accuracy compared to the single level reference model (p<10\u22129), with an area under the ROC curve of 0.84. Conclusion This study sheds new light on the respective contribution of patient-, ward- and hospital-levels to SSI occurrence and demonstrates the significant impact of the ward level over and above risk factors present at patient level (i.e., independently from patient case-mix).", "which Objective ?", "Surveillance", 41.0, 53.0], ["Background Sepsis is one of the leading causes of mortality in hospitalized patients. Despite this fact, a reliable means of predicting sepsis onset remains elusive. Early and accurate sepsis onset predictions could allow more aggressive and targeted therapy while maintaining antimicrobial stewardship. Existing detection methods suffer from low performance and often require time-consuming laboratory test results. Objective To study and validate a sepsis prediction method, InSight, for the new Sepsis-3 definitions in retrospective data, make predictions using a minimal set of variables from within the electronic health record data, compare the performance of this approach with existing scoring systems, and investigate the effects of data sparsity on InSight performance. Methods We apply InSight, a machine learning classification system that uses multivariable combinations of easily obtained patient data (vitals, peripheral capillary oxygen saturation, Glasgow Coma Score, and age), to predict sepsis using the retrospective Multiparameter Intelligent Monitoring in Intensive Care (MIMIC)-III dataset, restricted to intensive care unit (ICU) patients aged 15 years or more. Following the Sepsis-3 definitions of the sepsis syndrome, we compare the classification performance of InSight versus quick sequential organ failure assessment (qSOFA), modified early warning score (MEWS), systemic inflammatory response syndrome (SIRS), simplified acute physiology score (SAPS) II, and sequential organ failure assessment (SOFA) to determine whether or not patients will become septic at a fixed period of time before onset. We also test the robustness of the InSight system to random deletion of individual input observations. Results In a test dataset with 11.3% sepsis prevalence, InSight produced superior classification performance compared with the alternative scores as measured by area under the receiver operating characteristic curves (AUROC) and area under precision-recall curves (APR). In detection of sepsis onset, InSight attains AUROC = 0.880 (SD 0.006) at onset time and APR = 0.595 (SD 0.016), both of which are superior to the performance attained by SIRS (AUROC: 0.609; APR: 0.160), qSOFA (AUROC: 0.772; APR: 0.277), and MEWS (AUROC: 0.803; APR: 0.327) computed concurrently, as well as SAPS II (AUROC: 0.700; APR: 0.225) and SOFA (AUROC: 0.725; APR: 0.284) computed at admission (P<.001 for all comparisons). Similar results are observed for 1-4 hours preceding sepsis onset. 
In experiments where approximately 60% of input data are deleted at random, InSight attains an AUROC of 0.781 (SD 0.013) and APR of 0.401 (SD 0.015) at sepsis onset time. Even with 60% of data missing, InSight remains superior to the corresponding SIRS scores (AUROC and APR, P<.001), qSOFA scores (P=.0095; P<.001) and superior to SOFA and SAPS II computed at admission (AUROC and APR, P<.001), where all of these comparison scores (except InSight) are computed without data deletion. Conclusions Despite using little more than vitals, InSight is an effective tool for predicting sepsis onset and performs well even with randomly missing data.", "which Objective ?", "Early warning", 1365.0, 1378.0], ["After the outbreak of severe acute respiratory syndrome (SARS) in 2003, many international airport quarantine stations conducted fever-based screening to identify infected passengers using infrared thermography for preventing global pandemics. Due to environmental factors affecting measurement of facial skin temperature with thermography, some previous studies revealed the limits of authenticity in detecting infectious symptoms. In order to implement stricter entry screening in the epidemic seasons of emerging infectious diseases, we developed an infection screening system for airport quarantines using multi-parameter vital signs. This system can automatically detect infected individuals within several tens of seconds by a neural-network-based discriminant function using measured vital signs, i.e., heart rate obtained by a reflective photo sensor, respiration rate determined by a 10-GHz non-contact respiration radar, and the ear temperature monitored by thermography. In this paper, to reduce the environmental effects on thermography measurement, we adopted the ear temperature as a new screening indicator instead of facial skin temperature. We tested the system on 13 influenza patients and 33 normal subjects. The sensitivity of the infection screening system in detecting influenza was 92.3%, which was higher than the sensitivity reported in our previous paper (88.0%) with average facial skin temperature.", "which Objective ?", "Screening", 141.0, 150.0], ["OBJECTIVE To assess the predictive value, for the early detection of sepsis, of the physiological monitoring parameters currently recommended by the Surviving Sepsis Campaign. METHODS The Project IMPACT data set was used to assess whether the physiological parameters of heart rate, mean arterial pressure, body temperature, and respiratory rate can be used to distinguish between critically ill adult patients with and without sepsis in the first 24 hours of admission to an intensive care unit. RESULTS All predictor variables used in the analyses differed significantly between patients with sepsis and patients without sepsis. However, only 2 of the predictor variables, mean arterial pressure and high temperature, were independently associated with sepsis. In addition, the temperature mean for hypothermia was significantly lower in patients without sepsis. The odds ratio for having sepsis was 2.126 for patients with a temperature of 38 degrees C or higher, 3.874 for patients with a mean arterial blood pressure of less than 70 mm Hg, and 4.63 times greater for patients who had both of these conditions. CONCLUSIONS The results support the use of some of the guidelines of the Surviving Sepsis Campaign. 
However, the lowest mean temperature was significantly lower for patients without sepsis than for patients with sepsis, a finding that calls into question the clinical usefulness of using hypothermia as an early predictor of sepsis. On its own, the group of variables used is not sufficient for discriminating between critically ill patients with and without sepsis.", "which Objective ?", "Sepsis", 68.0, 74.0], ["Background Sepsis is one of the leading causes of mortality in hospitalized patients. Despite this fact, a reliable means of predicting sepsis onset remains elusive. Early and accurate sepsis onset predictions could allow more aggressive and targeted therapy while maintaining antimicrobial stewardship. Existing detection methods suffer from low performance and often require time-consuming laboratory test results. Objective To study and validate a sepsis prediction method, InSight, for the new Sepsis-3 definitions in retrospective data, make predictions using a minimal set of variables from within the electronic health record data, compare the performance of this approach with existing scoring systems, and investigate the effects of data sparsity on InSight performance. Methods We apply InSight, a machine learning classification system that uses multivariable combinations of easily obtained patient data (vitals, peripheral capillary oxygen saturation, Glasgow Coma Score, and age), to predict sepsis using the retrospective Multiparameter Intelligent Monitoring in Intensive Care (MIMIC)-III dataset, restricted to intensive care unit (ICU) patients aged 15 years or more. Following the Sepsis-3 definitions of the sepsis syndrome, we compare the classification performance of InSight versus quick sequential organ failure assessment (qSOFA), modified early warning score (MEWS), systemic inflammatory response syndrome (SIRS), simplified acute physiology score (SAPS) II, and sequential organ failure assessment (SOFA) to determine whether or not patients will become septic at a fixed period of time before onset. We also test the robustness of the InSight system to random deletion of individual input observations. Results In a test dataset with 11.3% sepsis prevalence, InSight produced superior classification performance compared with the alternative scores as measured by area under the receiver operating characteristic curves (AUROC) and area under precision-recall curves (APR). In detection of sepsis onset, InSight attains AUROC = 0.880 (SD 0.006) at onset time and APR = 0.595 (SD 0.016), both of which are superior to the performance attained by SIRS (AUROC: 0.609; APR: 0.160), qSOFA (AUROC: 0.772; APR: 0.277), and MEWS (AUROC: 0.803; APR: 0.327) computed concurrently, as well as SAPS II (AUROC: 0.700; APR: 0.225) and SOFA (AUROC: 0.725; APR: 0.284) computed at admission (P<.001 for all comparisons). Similar results are observed for 1-4 hours preceding sepsis onset. In experiments where approximately 60% of input data are deleted at random, InSight attains an AUROC of 0.781 (SD 0.013) and APR of 0.401 (SD 0.015) at sepsis onset time. Even with 60% of data missing, InSight remains superior to the corresponding SIRS scores (AUROC and APR, P<.001), qSOFA scores (P=.0095; P<.001) and superior to SOFA and SAPS II computed at admission (AUROC and APR, P<.001), where all of these comparison scores (except InSight) are computed without data deletion. 
Conclusions Despite using little more than vitals, InSight is an effective tool for predicting sepsis onset and performs well even with randomly missing data.", "which Objective ?", "Sepsis", 11.0, 17.0], ["Background: Seasonal influenza virus outbreaks cause annual epidemics, mostly during winter in temperate zone countries, resulting in increased morbidity and higher mortality, especially in children. In order to conduct rapid screening for influenza in pediatric outpatient units, we developed a pediatric infection screening system with a radar respiration monitor. Methods: The system conducts influenza screening within 10 seconds based on vital signs (i.e., respiration rate monitored using a 24 GHz microwave radar; facial temperature, using a thermopile array; and heart rate, using a pulse photosensor). A support vector machine (SVM) classification method was used to discriminate children with influenza from healthy children based on vital signs. To assess the classification performance of the screening system that uses the SVM, we conducted influenza screening for 70 children (i.e., 27 seasonal influenza patients (11 ± 2 years) at a pediatric clinic and 43 healthy control subjects (9 ± 4 years) at a pediatric dental clinic) in the winter of 2013-2014. Results: The screening system using the SVM identified 26 subjects with influenza (22 of the 27 influenza patients and 4 of the 43 healthy subjects). The system discriminated 44 subjects as healthy (5 of the 27 influenza patients and 39 of the 43 healthy subjects), with sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 81.5%, 90.7%, 84.6%, and 88.6%, respectively. Conclusion: The SVM-based screening system achieved classification results for the outpatient children based on vital signs with comparatively high NPV within 10 seconds. At pediatric clinics and hospitals, our system seems potentially useful in the first screening step for infections in the future.", "which Objective ?", "Screening", 225.0, 234.0], ["Intensive Care Unit (ICU) patients have significant morbidity and mortality, often from complications that arise during the hospital stay. Severe sepsis is one of the leading causes of death among these patients. Predictive models have the potential to allow for earlier detection of severe sepsis and ultimately earlier intervention. However, current methods for identifying and predicting severe sepsis are biased and inadequate. The goal of this work is to identify a new framework for the prediction of severe sepsis and identify early predictors utilizing clinical laboratory values and vital signs collected in adult ICU patients. We explore models with logistic regression (LR), support vector machines (SVM), and logistic model trees (LMT) utilizing vital signs, laboratory values, or a combination of vital and laboratory values. When applied to a retrospective cohort of ICU patients, the SVM model using laboratory and vital signs as predictors correctly identified 339 (65%) of the 3,446 patients as developing severe sepsis. Based on this new framework and developed models, we provide a recommendation for its use in clinical decision support in ICU and non-ICU environments.", "which Objective ?", "Sepsis", 146.0, 152.0], ["Sepsis, a dysregulated host response to infection, is a major health burden in terms of both mortality and cost. The difficulties clinicians face in diagnosing sepsis, alongside the insufficiencies of diagnostic biomarkers, motivate the present study. 
This work develops a machine-learning-based sepsis diagnostic for a high-risk patient group, using a geographically and institutionally diverse collection of nearly 500,000 patient health records. Using only a minimal set of clinical variables, our diagnostics outperform common severity scoring systems and sepsis biomarkers and benefit from being available immediately upon ordering.", "which Objective ?", "Sepsis", 0.0, 6.0], ["In CrowdRE, heterogeneous crowds of stakeholders are involved in requirements elicitation. One major challenge is to inform several people about a complex and sophisticated piece of software so that they can effectively contextualize and contribute their opinions and insights. Overly technical or boring textual representations might lead to misunderstandings or even repel some people. Videos may be better suited for this purpose. There are several variants of video available: Linear videos have been used for tutorials on YouTube and similar platforms. Interactive media have been proposed for activating commitment and valuable feedback. Vision videos were explicitly introduced to solicit feedback about product visions and software requirements. In this paper, we describe essential steps of creating a useful video, making it interactive, and presenting it to stakeholders. We consider four potentially useful types of videos for CrowdRE and how to produce them. To evaluate the feasibility of this approach for creating video variants, all presented steps were performed in a case study.", "which has study ?", "study", 1086.0, 1091.0], ["The work described in this paper aims at exploring the use of an artificial intelligence technique, i.e. genetic algorithm (GA), for designing an optimal model-based controller to regulate the temperature of a reactor. GA is utilized to identify the best control action for the system by creating possible solutions and thereby propose the correct control action for the reactor system. This value is then used as the set point for the closed loop control system of the heat exchanger. A continuous stirred tank reactor is chosen as a case study, where the controller is then tested with multiple set-point tracking and changes in its parameters. The GA model-based control (GAMBC) is then implemented experimentally to control the reactor temperature of a pilot plant, where an irreversible exothermic chemical reaction is simulated by using the calculated steam flow rate. The dynamic behavior of the pilot plant reactor during the online control studies is highlighted, and comparison with the conventionally tuned proportional integral derivative (PID) controller is presented. It is found that both controllers are able to control the process with comparable performance. Copyright © 2007 Curtin University of Technology and John Wiley & Sons, Ltd.", "which Objective/estimate(s) process systems ?", "Temperature", 193.0, 204.0], ["BACKGROUND: Production of microbial enzymes in bioreactors is a complex process including such phenomena as metabolic networks and mass transport resistances. The use of neural networks (NNs) to infer the state of bioreactors may be an interesting option that may handle the nonlinear dynamics of biomass growth and protein production. RESULTS: Feedforward multilayer perceptron (MLP) NNs were used for identification of the cultivation phase of Bacillus megaterium to produce the enzyme penicillin G acylase (EC. 3.5.1.11). The following variables were used as input to the net: run time and carbon dioxide concentration in the exhausted gas. 
The NN output assigns a numerical value to the metabolic state of the cultivation, close to 0 during the lag phase, close to 1 during the exponential phase and approximately 2 for the stationary phase. This is a non-conventional approach for pattern recognition. During the exponential phase, another MLP was used to infer cellular concentration. Time, carbon dioxide concentration and stirrer speed form an integrated net input vector. Cellular concentrations provided by the NN were used in a hybrid approach to estimate product concentrations of the enzyme. The model employed a first-order approximation. CONCLUSION: Results showed that the algorithm was able to infer accurate values of cellular and product concentrations up to the end of the exponential growth phase, where an industrial run should stop. Copyright © 2008 Society of Chemical Industry", "which Objective/estimate(s) process systems ?", "Cellular concentration", 970.0, 992.0], ["Data fusion is an emerging technology that fuses data or information about the environment from multiple sources of measurement and detection to make a more accurate and reliable estimation or decision. In this article, energy consumption data are collected from ethylene plants with the high-temperature steam cracking process technology. An integrated framework for energy efficiency estimation is proposed on the basis of a data fusion strategy. A Hierarchical Variable Variance Fusion (HVVF) algorithm and a Fuzzy Analytic Hierarchy Process (FAHP) method are proposed to estimate energy efficiencies of ethylene equipment. For different equipment scales with the same process technology, the HVVF algorithm is used to estimate energy efficiency ranks among different equipment. For different technologies based on HVVF results, the FAHP method based on the approximate fuzzy eigenvector is used to get energy efficiency indices (EEI) of total ethylene industries. The comparisons are used to assess energy utilization...", "which Objective/estimate(s) process systems ?", "Energy efficiencies of ethylene", 581.0, 612.0], ["An artificial neural network (ANN) was applied successfully to predict flow boiling curves. The databases used in the analysis are from the 1960s, including 1,305 data points which cover these parameter ranges: pressure P = 100–1,000 kPa, mass flow rate G = 40–500 kg/m²·s, inlet subcooling ΔTsub = 0–35°C, wall superheat ΔTw = 10–300°C and heat flux Q = 20–8,000 kW/m². The proposed methodology allows us to achieve accurate results, and it is thus suitable for the processing of the boiling curve data. The effects of the main parameters on flow boiling curves were analyzed using the ANN. The heat flux increases with increasing inlet subcooling for all heat transfer modes. Mass flow rate has no significant effects on nucleate boiling curves. The transition boiling and film boiling heat fluxes will increase with an increase in the mass flow rate. Pressure plays a predominant role and improves heat transfer in all boiling regions except the film boiling region. There are slight differences between the steady and the transient boiling curves in all boiling regions except the nucleate region. The transient boiling curve lies below the corresponding steady boiling curve.", "which Objective/estimate(s) process systems ?", "Heat flux", 337.0, 346.0], ["Abstract Since online measurement of the melt index (MI) of polyethylene is difficult, a virtual sensor model is desirable. 
However, a polyethylene process usually produces products with multiple grades. The relation between process and quality variables is highly nonlinear. Besides, a virtual sensor model in a real plant process with many inputs has to deal with collinearity and time-varying issues. A new recursive algorithm, which models a multivariable, time-varying and nonlinear system, is presented. Principal component analysis (PCA) is used to eliminate the collinearity. Fuzzy c-means (FCM) and fuzzy Takagi–Sugeno (FTS) modeling are used to decompose the nonlinear system into several linear subsystems. Effectiveness of the model is demonstrated using real plant data from a polyethylene process.", "which Objective/estimate(s) process systems ?", "Melt index", 41.0, 51.0], ["A black-box modeling scheme to predict melt index (MI) in the industrial propylene polymerization process is presented. MI is one of the most important quality variables determining product specification, and is influenced by a large number of process variables. Considering it is costly and time-consuming to measure MI in the laboratory, a much cheaper and faster statistical modeling method is presented here for predicting MI online, which involves fuzzy neural network technology, a particle swarm optimization (PSO) algorithm, and an online correction strategy (OCS). The learning efficiency and prediction precision of the proposed model are checked based on real plant history data, and the comparison between different learning algorithms is carried out in detail to reveal the advantage of the proposed best-neighbor PSO (BNPSO) algorithm with OCS. © 2011 American Institute of Chemical Engineers AIChE J, 2012", "which Objective/estimate(s) process systems ?", "Melt index", 39.0, 49.0], ["In this paper, internally recurrent neural networks (IRNN) are used to predict a key polymer product quality variable from an industrial polymerization reactor. IRNN are selected as the modeling tools for two reasons: 1) over the wide range of operating regions required to make multiple polymer grades, the process is highly nonlinear; and 2) the finishing of the polymer product after it leaves the reactor imparts significant dynamics to the process by "mixing" effects. IRNN are shown to be very effective tools for predicting key polymer quality variables from secondary measurements taken around the reactor.", "which Objective/estimate(s) process systems ?", "Polymer product quality", 85.0, 108.0], ["Although rotating beds are good equipment for intensified separations and multiphase reactions, the fundamentals of their hydrodynamics are still unknown. Over a wide range of operating conditions, the pressure drop across an irrigated bed is significantly lower than that across a dry bed. In this regard, an approach based on artificial intelligence, that is, an artificial neural network (ANN), has been proposed for prediction of the pressure drop across rotating packed beds (RPB). The experimental data sets used as input data (280 data points) were divided into training and testing subsets. The training data set has been used to develop the ANN model while the testing data set was used to validate the performance of the trained ANN model. The predicted pressure drop values show good agreement with the experimental values with regard to several statistical parameters, for example AARD% = 4.70, MSE = 2.0 × 10⁻⁵ and R² = 0.9994. 
The designed ANN model can estimate the pressure drop in the countercurrent-flow rotating packed bed, including the unexpected phenomenon of a higher pressure drop in the dry bed than in the wet bed. Also, the designed ANN model has been able to predict the pressure drop in a wet bed with good accuracy compared with the experimental data.", "which Objective/estimate(s) process systems ?", "Pressure drop", 205.0, 218.0], ["A virtual sensor that estimates product compositions in a middle-vessel batch distillation column has been developed. The sensor is based on a recurrent artificial neural network, and uses information available from secondary measurements (such as temperatures and flow rates). The criteria adopted for selecting the most suitable training data set and the benefits deriving from pre-processing these data by means of principal component analysis are demonstrated by simulation. The effects of sensor location, model initialization, and noisy temperature measurements on the performance of the soft sensor are also investigated. It is shown that the estimated compositions are in good agreement with the actual values.", "which Objective/estimate(s) process systems ?", "Product compositions", 32.0, 52.0], ["In this paper, we propose a timestamp series approach to defend against Sybil attacks in a vehicular ad hoc network (VANET) based on roadside unit support. The proposed approach targets the initial deployment stage of VANET when basic roadside unit (RSU) support infrastructure is available and a small fraction of vehicles have network communication capability. Unlike previously proposed schemes that require a dedicated vehicular public key infrastructure to certify individual vehicles, in our approach RSUs are the only components issuing the certificates. Due to differences in moving dynamics among vehicles, it is rare to have two vehicles passing by multiple RSUs at exactly the same time. By exploiting this spatial and temporal correlation between vehicles and RSUs, two messages will be treated as a Sybil attack issued by one vehicle if they have similar timestamp series issued by RSUs. The timestamp series approach needs neither vehicular-based public-key infrastructure nor Internet-accessible RSUs, which makes it an economical solution suitable for the initial stage of VANET.", "which Confidentiality (Privacy) Technique ?", "Timestamp", 28.0, 37.0], ["This study describes the effects of shot peening parameters, such as shot hardness, shot size and shot projection pressure, on the residual stress distribution and fatigue life in reversed torsion of a 60SC7 spring steel. There appears to be a correlation between the fatigue strength and the area under the residual stress distribution curve. The biggest shot shows the best fatigue life improvement. However, for a shorter time of shot peening, small hard shot showed the best performance. Moreover, the superficial residual stresses and the amount of work hardening (characterised by the width of the X-ray diffraction line) do not remain stable during fatigue cycling. Indeed they decrease and their reduction rate is a function of the cyclic stress level and an inverse function of the depth of the plastically deformed surface layer.", "which Special Notes ?", "Torsion", 170.0, 177.0], ["Cylindrical rods of 450°C quenched and tempered AISI 4140 were conventionally shot peened, stress peened and warm peened while rotating in the peening device. 
Warm peening at Tpeen = 310°C was conducted using a modified air blast shot peening machine with an electric air flow heater system. To perform stress peening using a torsional pre-stress, a device was conceived which allowed rotating pre-stressed samples without having material of the pre-loading gadget between the shot and the samples. Thus, the same peening conditions for all peening procedures were ensured. The residual stress distributions present after the different peening procedures were evaluated and compared with results obtained after peening of flat material of the same steel. The differently peened samples were subjected to torsional pulsating stresses (R = 0) at different loadings to investigate their residual stress relaxation behavior. Additionally, the pulsating torsional strengths for the differently peened samples were determined.", "which Special Notes ?", "Torsion", NaN, NaN], ["Shot peening of steels at elevated temperatures (warm peening) can improve the fatigue behaviour of workpieces. For the steel AISI 4140 (German grade 42CrMo4) in a quenched and tempered condition, it is shown that this is caused not only by the higher compressive residual stresses induced but also by the enlarged stability of these residual stresses during cyclic bending. This can be explained by strain aging effects during shot peening, which cause different and more stable dislocation structures.", "which Special Notes ?", "cyclic bending", 363.0, 377.0], ["One of the most important components in an aircraft is its landing gear, due to the high loads it is subjected to, principally during take-off and landing. For this reason, the AISI 4340 steel is widely used in the aircraft industry for fabrication of structural components, in which strength and toughness are fundamental design requirements [1]. Fatigue is an important parameter to be considered in the behavior of mechanical components subjected to constant and variable amplitude loading. One of the known ways to improve fatigue resistance is by using the shot peening process to induce a compressive residual stress in the surface layers of the material, making the nucleation and propagation of fatigue cracks more difficult [2,3]. The shot peening results depend on various parameters. These parameters can be grouped in three different classes according to Fathallah et al. [4]: parameters describing the treated part, parameters of stream energy produced by the process and parameters describing the contact conditions. Furthermore, relaxation of the CRSF induced by shot peening has been observed during the fatigue process [5-7]. In the present research the gain in fatigue life of AISI 4340 steel, obtained by shot peening treatment, is evaluated under the two different hardnesses used in landing gear. Rotating bending fatigue tests were conducted and the CRSF was measured by X-ray tensometry prior to and during fatigue tests. The evaluation of fatigue life due to the shot peening in relation to the relaxation of CRSF, of crack source positions and of roughness variation is done.", "which Special Notes ?", "Rotating bend", NaN, NaN], ["Shot peening of steels at elevated temperatures (warm peening) can improve the fatigue behaviour of workpieces. For the steel AISI 4140 (German grade 42CrMo4) in a quenched and tempered condition, it is shown that this is caused not only by the higher compressive residual stresses induced but also by the enlarged stability of these residual stresses during cyclic bending. 
This can be explained by strain aging effects during shot peening, which cause different and more stable dislocation structures.", "which Special Notes ?", "Warm peening", 49.0, 61.0], ["Cylindrical rods of 450°C quenched and tempered AISI 4140 were conventionally shot peened, stress peened and warm peened while rotating in the peening device. Warm peening at Tpeen = 310°C was conducted using a modified air blast shot peening machine with an electric air flow heater system. To perform stress peening using a torsional pre-stress, a device was conceived which allowed rotating pre-stressed samples without having material of the pre-loading gadget between the shot and the samples. Thus, the same peening conditions for all peening procedures were ensured. The residual stress distributions present after the different peening procedures were evaluated and compared with results obtained after peening of flat material of the same steel. The differently peened samples were subjected to torsional pulsating stresses (R = 0) at different loadings to investigate their residual stress relaxation behavior. Additionally, the pulsating torsional strengths for the differently peened samples were determined.", "which Special Notes ?", "Warm peening", 160.0, 172.0], ["Using a modified air blasting machine, warm peening at 20 °C < T ≤ 410 °C was feasible. An optimized peening temperature of about 310 °C was identified for a 450 °C quenched and tempered steel AISI 4140. Warm peening was also investigated for a normalized, a 650 °C quenched and tempered, and a martensitically hardened material state. The quasi-static surface compressive yield strengths as well as the cyclic surface yield strengths were determined from residual stress relaxation tests conducted at different stress amplitudes and numbers of loading cycles. Dynamic and static strain aging effects acting during and after warm peening clearly increased the residual stress stability and the alternating bending strength for all material states.", "which Special Notes ?", "Warm peening", 38.0, 50.0], ["Abstract This paper presents an experimental investigation of the surface residual stress relaxation behaviour of a shot peened 0.4% carbon low alloy steel under fatigue loading. A round specimen with a circumferential notch and a notch factor Kt = 1.75 was fatigue loaded in both shot peened and ground conditions. Loading conditions included axial fatigue with stress ratio R = −1 and R = 0 and also R = −1 with an additional peak overload applied at 10⁶ cycles. Plain unnotched shot peened specimens were also fatigue loaded with stress ratio R = −1. The results show how the relaxation is dependent on load level, how the peak load changes the surface residual stress state, and that relaxation of the smooth and notched conditions is similar. Two different shot peening conditions were used, one with Almen intensity of 30–35 A (mm/100) and another of 50–55 A (mm/100).", "which Special Notes ?", "Smooth and notched", 705.0, 723.0], 
["This paper presents two-stage bi-objective stochastic programming models for disaster relief operations. We consider a problem that occurs in the aftermath of a natural disaster: a transportation system for supplying disaster victims with relief goods must be established. We propose bi-objective optimization models with a monetary objective and a humanitarian objective. Uncertainty in the accessibility of the road network is modeled by a discrete set of scenarios. The key features of our model are the determination of locations for intermediate depots and the acquisition of vehicles. Several model variants are considered. First, the operating budget can be fixed at the first stage for all possible scenarios or determined for each scenario at the second stage. Second, the assignment of vehicles to a depot can be either fixed or free. Third, we compare a heterogeneous vehicle fleet to a homogeneous fleet. We study the impact of the variants on the solutions. The set of Pareto-optimal solutions is computed by applying the adaptive Epsilon-constraint method. We solve the deterministic equivalents of the two-stage stochastic programs using the MIP-solver CPLEX.", "which Second-stage2 ?", "Transport", NaN, NaN], ["This article presents a new methodology to implement the concept of equity in regional earthquake risk mitigation programs using an optimization framework. It presents a framework that could be used by decision-makers (government and authorities) to structure a budget allocation strategy toward different seismic risk mitigation measures, i.e., structural retrofitting for different building structural types in different locations and planning horizons. A two‐stage stochastic model is developed here to seek optimal mitigation measures based on minimizing mitigation expenditures, reconstruction expenditures, and especially large losses in highly seismically active countries. To consider fairness in the distribution of financial resources among different groups of people, the equity concept is incorporated using constraints in the model formulation. These constraints limit inequity to the user‐defined level to achieve the equity‐efficiency tradeoff in the decision‐making process. To present a practical application of the proposed model, it is applied to a pilot area in Tehran, the capital city of Iran. Building stocks, structural vulnerability functions, and regional seismic hazard characteristics are incorporated to compile a probabilistic seismic risk model for the pilot area. Results illustrate the variation of mitigation expenditures by location and structural type for buildings. These expenditures are sensitive to the amount of available budget and equity consideration for the constant risk aversion. Most significantly, equity is more easily achieved if the budget is unlimited. Conversely, increasing equity where the budget is limited decreases the efficiency. 
The risk‐return tradeoff, equity‐reconstruction expenditures tradeoff, and variation of per‐capita expected earthquake loss in different income classes are also presented.", "which Second-stage2 ?", "Reconstruction expenditures", 581.0, 608.0], ["A document understanding method based on the tree representation of document structures is proposed. It is shown that documents have an obvious hierarchical structure in their geometry, which is represented by a tree. A small number of rules are introduced to transform the geometric structure into the logical structure which represents the semantics. The virtual field separator technique is employed to utilize the information carried by special constituents of documents such as field separators and frames, keeping the number of transformation rules small. Experimental results on a variety of document formats have shown that the proposed method is applicable to most of the documents commonly encountered in daily use, although there is still room for further refinement of the transformation rules.", "which Logical Structure Representation ?", "tree", 45.0, 49.0], ["A system for document image segmentation and ordering of text areas is described and applied to both Japanese and English complex printed page layouts. There is no need to make any assumption about the shape of blocks, hence the segmentation technique can handle not only skewed images without skew-correction but also documents where columns are not rectangular. In this technique, based on a bottom-up strategy, the connected components are extracted from the reduced image and classified according to their local information. The connected components are merged into lines, and lines are merged into areas. Extracted text areas are classified as body, caption, header, and footer. A tree graph of the layout of body texts is made, and we get the order of texts by preorder traversal on the graph. The authors introduce the influence range of each node, a procedure for the title part, and extraction of the white horizontal separator, making it possible to get good results on various documents. The total system is fast and compact.", "which Logical Structure Representation ?", "tree", 679.0, 683.0], ["This paper describes a syntactic approach to deducing the logical structure of printed documents from their physical layout. Page layout is described by a two-dimensional grammar, similar to a context-free string grammar, and a chart parser is used to parse segmented page images according to the grammar. This process is part of a system which reads scanned document images and produces computer-readable text in a logical mark-up format such as SGML. The system is briefly outlined, the grammar formalism and the parsing algorithm are described in detail, and some experimental results are reported.", "which Logical Structure Representation ?", "context-free string grammar", 182.0, 209.0], ["We present a study of user gains from their participation in a participatory design (PD) project at Danish primary schools. We explore user experiences and reported gains from the project in relation to the multiple aims of PD, based on a series of interviews with pupils, teachers, administrators, and consultants, conducted approximately three years after the end of the project. In particular, we reflect on how the PD initiatives were sustained after the project had ended. 
We propose that ideas and initiatives are disseminated not only directly within the organization, but also through networked relationships among people, stretching across organizations and project groups. Moreover, we demonstrate how users' gains related to their actions within these networks. These results suggest a heightened focus on the indirect and distributed channels through which the long-term impact of PD emerges.", "which participants ?", "Users", 712.0, 717.0], ["Abstract The essential oil obtained by hydrodistillation from the aerial parts of Artemisia herba-alba Asso growing wild in M'sila, Algeria, was investigated using both capillary GC and GC/MS techniques. The oil yield was 1.02% based on dry weight. Sixty-eight components amounting to 94.7% of the oil were identified, 33 of them being reported for the first time in Algerian A. herba-alba oil and 21 of these components have not been previously reported in A. herba-alba oils. The oil contained camphor (19.4%), trans-pinocarveol (16.9%), chrysanthenone (15.8%) and β-thujone (15%) as major components. Monoterpenoids are the main components (86.1%), and the irregular monoterpenes fraction represented a 3.1% yield.", "which Collection site ?", "Wild", 116.0, 120.0], ["The intraspecific chemical variability of essential oils (50 samples) isolated from the aerial parts of Artemisia herba‐alba Asso growing wild in the arid zone of Southeastern Tunisia was investigated. Analysis by GC (RI) and GC/MS allowed the identification of 54 essential oil components. The main compounds were β‐thujone and α‐thujone, followed by 1,8‐cineole, camphor, chrysanthenone, trans‐sabinyl acetate, trans‐pinocarveol, and borneol. Chemometric analysis (k‐means clustering and PCA) led to the partitioning into three groups. The composition of two-thirds of the samples was dominated by α‐thujone or β‐thujone. Therefore, it could be expected that wild plants of A. herba‐alba randomly harvested in the area of Kirchaou and transplanted by local farmers for the cultivation in arid zones of Southern Tunisia produce an essential oil belonging to the α‐thujone/β‐thujone chemotype and also containing 1,8‐cineole, camphor, and trans‐sabinyl acetate in appreciable amounts.", "which Collection site ?", "Wild", 138.0, 142.0], ["Abstract Twenty-six oil samples were isolated by hydrodistillation from aerial parts of Artemisia herba-alba Asso growing wild in Tunisia (semi-arid land) and their chemical composition was determined by GC(RI), GC/MS and 13C-NMR. Various compositions were observed, dominated either by a single component (α-thujone, camphor, chrysanthenone or trans-sabinyl acetate) or characterized by the occurrence, at appreciable contents, of two or more of these compounds. These results confirmed the tremendous chemical variability of A. herba-alba.", "which Collection site ?", "Wild", 122.0, 126.0], ["The aim of the present study was to investigate the chemical composition, antioxidant, angiotensin I-converting enzyme (ACE) inhibitory, antibacterial and antifungal activities of the essential oil of Artemisia herba alba Asso (Aha), a traditional medicinal plant widely growing in Tunisia. The essential oil from the air-dried leaves and flowers of Aha was extracted by hydrodistillation and analyzed by GC and GC/MS. More than fifty compounds were found, out of which 48 were identified. 
The main chemical class of the oil was represented by oxygenated monoterpenes (50.53%). These were represented by 21 derivatives, among which cis-chrysantenyl acetate (10.60%), sabinyl acetate (9.13%) and α-thujone (8.73%) were the principal compounds. Oxygenated sesquiterpenes, particularly arbusculones, were identified in the essential oil at relatively high rates. The Aha essential oil was found to have an interesting antioxidant activity as evaluated by the 2,2-diphenyl-1-picrylhydrazyl and the β-carotene bleaching methods. The Aha essential oil also exhibited an inhibitory activity towards the ACE. The antimicrobial activities of Aha essential oil were evaluated against six bacterial strains and three fungal strains by the agar diffusion method and by determining the inhibition zone. The inhibition zones were in the range of 8-51 mm. The essential oil exhibited a strong growth inhibitory activity on all the studied fungi. Our findings demonstrated that Aha growing wild in South-Western Tunisia seems to be a new chemotype and its essential oil might be a natural potential source for food preservation and for further investigation by developing new bioactive substances.", "which Collection site ?", "Wild", 1474.0, 1478.0], ["The background and the literature in liner fleet scheduling are reviewed and the objectives and assumptions of our approach are explained. We develop a detailed and realistic model for the estimation of the operating costs of liner ships on various routes, and present a linear programming formulation for the liner fleet deployment problem. Independent approaches for fixing both the service frequencies in the different routes and the speeds of the ships are presented.", "which Main question ?", "Rout", NaN, NaN], ["Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question of the detailed composition of the groups not reached, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and "feeling left out" can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means, as well as specifically attributable factors such as language skills, influence the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.", "which Process ?", "“feeling left out”", NaN, NaN], ["Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering and dialogue. 
Rooted in a long tradition in knowledge representation, all popular KBs only store positive information, but abstain from taking any stance towards statements not contained in them. In this paper, we make the case for explicitly stating interesting statements which are not true. Negative statements would be important to overcome current limitations of question answering, yet due to their potential abundance, any effort towards compiling them needs a tight coupling with ranking. We introduce two approaches towards automatically compiling negative statements. (i) In peer-based statistical inferences, we compare entities with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In pattern-based query log extraction, we use a pattern-based approach for harvesting search engine query logs. Experimental results show that both approaches hold promising and complementary potential. Along with this paper, we publish the first datasets on interesting negative information, containing over 1.4M statements for 130K popular Wikidata entities.", "which Process ?", "question answering", 137.0, 155.0], ["With the rapid growth of online social media content, and the impact it has had on people's behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of their text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. To this end, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores, which we judge to be conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components, study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Process ?", "automatic identification", 276.0, 300.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process, the project leader has to organize participants, tasks and building data. For this purpose, modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. 
Within the research project “Relation Based Process Modelling of Co-operative Building Planning”, we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.", "which Process ?", "Network-based Co-operative Planning Processes", 753.0, 798.0], ["Abstract Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software has received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (release cycles of 12 to 17 months), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered; its new primary focus is on productivity and agility, to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. monthly) releases with over 21 billion triples with minimal publishing effort.", "which Process ?", "agile, cost-efficient processes and scale up productivity", 728.0, 785.0], ["Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information – meaningful secondary or derived data – about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearers of information are the commonly known “(elaborated) data products,” for instance maps. In such form, meaning is generally implicit, e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation – as machine-readable data and their machine-readable meaning – is not common practice among environmental research infrastructures. 
For a use case in aerosol science, we elucidate these problems and present a Jupyter-based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.", "which Process ?", "further processing", 1041.0, 1059.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process, the project leader has to organize participants, tasks and building data. For this purpose, modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning”, we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.", "which Process ?", "The planning process", 0.0, 20.0], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. 
We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Process ?", "attrition", 1176.0, 1185.0], ["A large research community has developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as emotions, sarcasm, stance, rumor, and hate speech detection in user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and a convolutional neural network (convNet) applying global vectors for word representation (GloVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40,000 random tweets using the Sarcasm Detector tool with 15,000 sarcastic and 25,000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention, and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Process ?", "sarcasm, stance, rumor, and hate speech detection", 346.0, 395.0], ["It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable, reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. 
The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.", "which Process ?", "Positive effects", 305.0, 321.0], ["The essential and significant components of one's job performance, such as facts, principles, and concepts are considered as job knowledge. This paper provides a framework for forging links between the knowledge, skills, and abilities taught in vocational education and training (VET) and competence prerequisites of jobs. Specifically, the study is aimed at creating an ontology for the semantic representation of that which is taught in the VET, that which is required on the job, and how the two are related. In particular, the creation of a job knowledge (Job-Know) ontology, which represents task and knowledge domains, and the relation between these two domains is discussed. Deploying the Job-Know ontology facilitates bridging job and knowledge elements collected from various sources (such as job descriptions), the identification of knowledge shortages and the determination of mismatches between the task and the knowledge domains that, in a broader perspective, facilitate the bridging requirements of labor market and education systems.", "which Process ?", "creation of a job knowledge (Job-Know) ontology", NaN, NaN], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Process ?", "Co-operative Building Planning", 492.0, 522.0], ["Background Visual atypicalities in autism spectrum disorder (ASD) are a well documented phenomenon, beginning as early as 2\u20136 months of age and manifesting in a significantly decreased attention to the eyes, direct gaze and socially salient information. Early emerging neurobiological deficits in perceiving social stimuli as rewarding or its active avoidance due to the anxiety it entails have been widely purported as potential reasons for this atypicality. Parallel research evidence also points to the significant benefits of animal presence for reducing social anxiety and enhancing social interaction in children with autism. While atypicality in social attention in ASD has been widely substantiated, whether this atypicality persists equally across species types or is confined to humans has not been a key focus of research insofar. 
Methods We attempted a comprehensive examination of the differences in visual attention to static images of human and animal faces (40 images; 20 human faces and 20 animal faces) among children with ASD using an eye tracking paradigm. 44 children (ASD n = 21; TD n = 23) participated in the study (10,362 valid observations) across five regions of interest (left eye, right eye, eye region, face and screen). Results Results obtained revealed significantly greater social attention across human and animal stimuli in typical controls when compared to children with ASD. However in children with ASD, a significantly greater attention allocation was seen to animal faces and eye region and lesser attention to the animal mouth when compared to human faces, indicative of a clear attentional preference to socially salient regions of animal stimuli. The positive attentional bias toward animals was also seen in terms of a significantly greater visual attention to direct gaze in animal images. Conclusion Our results suggest the possibility that atypicalities in social attention in ASD may not be uniform across species. It adds to the current neural and biomarker evidence base of the potentially greater social reward processing and lesser social anxiety underlying animal stimuli as compared to human stimuli in children with ASD.", "which Process ?", "autism spectrum disorder", 35.0, 59.0], ["Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information \u2013 meaningful secondary or derived data \u2013 about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known \u201c(elaborated) data products,\u201d for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation \u2013 as machine readable data and their machine-readable meaning \u2013 is not common practice among environmental research infrastructures. For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.", "which Process ?", "a studied natural phenomenon", 1583.0, 1611.0], ["This paper discusses the potential of current advancements in Information Communication Technologies (ICT) for cultural heritage preservation, valorization and management within contemporary cities. The paper highlights the potential of virtual environments to assess the impacts of heritage policies on urban development. 
It does so by discussing the implications of virtual globes and crowdsourcing to support the participatory valuation and management of cultural heritage assets. To this purpose, a review of available valuation techniques is here presented together with a discussion on how these techniques might be coupled with ICT tools to promote inclusive governance.\u00a0", "which Process ?", "participatory valuation and management", 424.0, 462.0], ["It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.", "which Process ?", "design problem", 289.0, 303.0], ["This paper discusses the potential of current advancements in Information Communication Technologies (ICT) for cultural heritage preservation, valorization and management within contemporary cities. The paper highlights the potential of virtual environments to assess the impacts of heritage policies on urban development. It does so by discussing the implications of virtual globes and crowdsourcing to support the participatory valuation and management of cultural heritage assets. To this purpose, a review of available valuation techniques is here presented together with a discussion on how these techniques might be coupled with ICT tools to promote inclusive governance.\u00a0", "which Process ?", "cultural heritage preservation", 119.0, 149.0], ["Africa has over 2000 languages. Despite this, African languages account for a small portion of available resources and publications in Natural Language Processing (NLP). 
This is due to multiple factors, including: a lack of focus from government and funding, discoverability, a lack of community, sheer language complexity, difficulty in reproducing papers and no benchmarks to compare techniques. To begin to address the identified problems, MASAKHANE, an open-source, continent-wide, distributed, online research effort for machine translation for African languages, was founded. In this paper, we discuss our methodology for building the community and spurring research from the African continent, as well as outline the success of the community in terms of addressing the identified problems affecting African NLP.", "which Process ?", "Natural Language Processing (NLP)", NaN, NaN], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Process ?", "disciplinary under-specialization", 1205.0, 1238.0], ["Abstract. In 2017 we published a seminal research study in the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences about how smart city tools, solutions and applications underpinned historical and cultural heritage of cities at that time (Angelidou et al. 2017). 
We now return to investigate the progress that has been made during the past three years, and specifically whether the weak substantiation of cultural heritage in smart city strategies that we observed in 2017 has been improved. The newest literature suggests that smart cities should capitalize on local strengths and give prominence to local culture and traditions and provides a handful of solutions to this end. However, a more thorough examination of what has been actually implemented reveals a (still) rather immature approach. The smart city cases that were selected for the purposes of this research include Tarragona (Spain), Budapest (Hungary) and Karlsruhe (Germany). For each one we collected information regarding the overarching structure of the initiative, the positioning of cultural heritage and the inclusion of heritage-related smart city applications. We then performed a comparative analysis based on a simplified version of the Digital Strategy Canvas. Our findings suggest that a rich cultural heritage and a broader strategic focus on touristic branding and promotion are key ingredients of smart city development in this domain; this is a commonality of all the investigated cities. Moreover, three different strategy architectures emerge, representing the different interplays among the smart city, cultural heritage and sustainable urban development. We conclude that a new generation of smart city initiatives is emerging, in which cultural heritage is of increasing importance. This generation tends to associate cultural heritage with social and cultural values, liveability and sustainable urban development.", "which Process ?", "liveability and sustainable urban development", 1897.0, 1942.0], ["It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.", "which Process ?", "process of creating", 101.0, 120.0], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. 
We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Process ?", "self-citation", 502.0, 515.0], ["It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.", "which Process ?", "structured methods", 30.0, 48.0], ["The proliferation of embedded devices in modern vehicles has opened the traditionally-closed vehicular system to the risk of cybersecurity attacks through physical and remote access to the in-vehicle network such as the controller area network (CAN). The CAN bus does not implement a security protocol that can protect the vehicle against the increasing cyber and physical attacks. To address this risk, we introduce a novel algorithm to extract the real-time model parameters of the CAN bus and develop SAIDuCANT, a specification-based intrusion detection system (IDS) using anomaly-based supervised learning with the real-time model as input. We evaluate the effectiveness of SAIDuCANT with real CAN logs collected from two passenger cars and on an open-source CAN dataset collected from real-world scenarios. Experimental results show that SAIDuCANT can effectively detect data injection attacks with low false positive rates. Over four real attack scenarios from the open-source dataset, SAIDuCANT observes at most one false positive before detecting an attack whereas other detection approaches using CAN timing features detect on average more than a hundred false positives before a real attack occurs.", "which Intrusion Detection Type ?", "Specification-based", 517.0, 536.0], ["Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. 
The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.", "which Terms learning ?", "Features", 662.0, 670.0], ["Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.", "which Terms learning ?", "Form", 1138.0, 1142.0], ["Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. 
The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.", "which Terms learning ?", "Shape", 1125.0, 1130.0], ["Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.", "which Terms learning ?", "Size", 1132.0, 1136.0], ["Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. 
Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.", "which Terms learning ?", "Conceptual entities", 1063.0, 1082.0], ["Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.", "which Properties learning ?", "Form", 1138.0, 1142.0], ["Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.", "which Properties learning ?", "Shape", 1125.0, 1130.0], ["Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. 
In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.", "which Properties learning ?", "Size", 1132.0, 1136.0], ["In this work, we offer an approach to combine standard multimedia analysis techniques with knowledge drawn from conceptual metadata provided by domain experts of a specialized scholarly domain, to learn a domain-specific multimedia ontology from a set of annotated examples. A standard Bayesian network learning algorithm that learns structure and parameters of a Bayesian network is extended to include media observables in the learning. An expert group provides domain knowledge to construct a basic ontology of the domain as well as to annotate a set of training videos. These annotations help derive the associations between high-level semantic concepts of the domain and low-level MPEG-7 based features representing audio-visual content of the videos. We construct a more robust and refined version of this ontology by learning from this set of conceptually annotated videos. To encode this knowledge, we use MOWL, a multimedia extension of Web Ontology Language (OWL) which is capable of describing domain concepts in terms of their media properties and of capturing the inherent uncertainties involved. We use the ontology specified knowledge for recognizing concepts relevant to a video to annotate fresh addition to the video database with relevant concepts in the ontology. These conceptual annotations are used to create hyperlinks in the video collection, to provide an effective video browsing interface to the user.", "which Learning method ?", "Bayesian network", 286.0, 302.0], ["Ontologies are used in the integration of information resources by describing the semantics of the information sources with machine understandable terms and definitions. But, creating an ontology is a difficult and time-consuming process, especially in the early stage of extracting key concepts and relations. This paper proposes a method for domain ontology building by extracting ontological knowledge from UML models of existing systems. We compare the UML model elements with the OWL ones and derive transformation rules between the corresponding model elements. Based on these rules, we define an XSLT document which implements the transformation processes. 
We expect that the proposed method reduce the cost and time for building domain ontologies with the reuse of existing UML models", "which Learning method ?", "transformation rules", 505.0, 525.0], ["As the Semantic Web initiative gains momentum, a fundamental problem of integrating existing data-intensive WWW applications into the Semantic Web emerges. In order for today\u2019s relational database supported Web applications to transparently participate in the Semantic Web, their associated database schemas need to be converted into semantically equivalent ontologies. In this paper we present a solution to an important special case of the automatic mapping problem with wide applicability: mapping well-formed Entity-Relationship (ER) schemas to semantically equivalent OWL Lite ontologies. We present a set of mapping rules that fully capture the ER schema semantics, along with an overview of an implementation of the complete mapping algorithm integrated into the current SFSU ER Design Tools software.", "which Learning method ?", "Automatic mapping", 442.0, 459.0], ["One of the main holdbacks towards a wide use of ontologies is the high building cost. In order to reduce this effort, reuse of existing Knowledge Organization Systems (KOSs), and in particular thesauri, is a valuable and much cheaper alternative to build ontologies from scratch. In the literature tools to support such reuse and conversion of thesauri as well as re-engineering patterns already exist. However, few of these tools rely on a sort of semi-automatic reasoning on the structure of the thesaurus being converted. Furthermore, patterns proposed in the literature are not updated considering the new ISO 25964 standard on thesauri. This paper introduces a new application framework aimed to convert thesauri into OWL ontologies, differing from the existing approaches for taking into consideration ISO 25964 compliant thesauri and for applying completely automatic conversion rules.", "which Learning method ?", "Rules", 886.0, 891.0], ["Web 2.0 is an evolution toward a more social, interactive and collaborative web, where user is at the center of service in terms of publications and reactions. This transforms the user from his old status as a consumer to a new one as a producer. Folksonomies are one of the technologies of Web 2.0 that permit users to annotate resources on the Web. This is done by allowing users to use any keyword or tag that they find relevant. Although folksonomies require a context-independent and inter-subjective definition of meaning, many researchers have proven the existence of an implicit semantics in these unstructured data. In this paper, we propose an improvement of our previous approach to extract ontological structures from folksonomies. The major contributions of this paper are a Normalized Co-occurrences in Distinct Users (NCDU) similarity measure, and a new algorithm to define context of tags and detect ambiguous ones. We compared our similarity measure to a widely used method for identifying similar tags based on the cosine measure. We also compared the new algorithm with the Fuzzy Clustering Algorithm (FCM) used in our original approach. The evaluation shows promising results and emphasizes the advantage of our approach.", "which Learning method ?", "Fuzzy clustering", 1093.0, 1109.0], ["Existing taxonomies are valuable input for creating ontologies, because they reflect some degree of community consensus and contain, readily available, a wealth of concept definitions plus a hierarchy. 
However, the transformation of such taxonomies into useful ontologies is not as straightforward as it appears, because simply taking the hierarchy of concepts, which was originally developed for some external purpose other than ontology engineering, as the subsumption hierarchy using rdfs:subClassOf can yield useless ontologies. In this paper, we (1) illustrate the problem by analyzing OWL and RDF-S ontologies derived from UNSPSC (a products and services taxonomy), (2) detail how the interpretation and representation of the original taxonomic relationship is an important modeling decision when deriving ontologies from existing taxonomies, (3) propose a novel \u201cgen/tax\u201d approach to capture the original semantics of taxonomies in OWL, based on the split of each category in the taxonomy into two concepts, a generic concept and a taxonomy concept, and (4) show the usefulness of this approach by transforming eCl@ss into a fully-fledged products and services ontology.", "which Learning method ?", "gen/tax", 870.0, 877.0], ["Ontology provides a shared and reusable piece of knowledge about a specific domain, and has been applied in many fields, such as semantic Web, e-commerce and information retrieval, etc. However, building ontology by hand is a very hard and error-prone task. Learning ontology from existing resources is a good solution. Because relational database is widely used for storing data and OWL is the latest standard recommended by W3C, this paper proposes an approach of learning OWL ontology from data in relational database. Compared with existing methods, the approach can acquire ontology from relational database automatically by using a group of learning rules instead of using a middle model. In addition, it can obtain OWL ontology, including the classes, properties, properties characteristics, cardinality and instances, while none of existing methods can acquire all of them. The proposed learning rules have been proven to be correct by practice.", "which Learning method ?", "Learning rule", NaN, NaN], ["In order to deal with heterogeneous knowledge in the medical field, this paper proposes a method which can learn a heavy-weighted medical ontology based on medical glossaries and Web resources. Firstly, terms and taxonomic relations are extracted based on disease and drug glossaries and a light-weighted ontology is constructed, Secondly, non-taxonomic relations are automatically learned from Web resources with linguistic patterns, and the two ontologies (disease and drug) are expanded from light-weighted level towards heavy-weighted level, At last, the disease ontology and drug ontology are integrated to create a practical medical ontology. Experiment shows that this method can integrate and expand medical terms with taxonomic and different kinds of non-taxonomic relations. Our experiments show that the performance is promising.", "which Learning method ?", "Linguistic patterns", 414.0, 433.0], ["One of the main challenges in content-based or semantic image retrieval is still to bridge the gap between low-level features and semantic information. In this paper, An approach is presented using integrated multi-level image features in ontology fusion construction by a fusion framework, which based on the latent semantic analysis. The proposed method promotes images ontology fusion efficiently and broadens the application fields of image ontology retrieval system. 
The relevant experiment shows that this method ameliorates the problem, such as too many redundant data and relations, in the traditional ontology system construction, as well as improves the performance of semantic images retrieval.", "which Learning method ?", "Latent semantic analysis", 310.0, 334.0], ["As the Semantic Web initiative gains momentum, a fundamental problem of integrating existing data-intensive WWW applications into the Semantic Web emerges. In order for today\u2019s relational database supported Web applications to transparently participate in the Semantic Web, their associated database schemas need to be converted into semantically equivalent ontologies. In this paper we present a solution to an important special case of the automatic mapping problem with wide applicability: mapping well-formed Entity-Relationship (ER) schemas to semantically equivalent OWL Lite ontologies. We present a set of mapping rules that fully capture the ER schema semantics, along with an overview of an implementation of the complete mapping algorithm integrated into the current SFSU ER Design Tools software.", "which Learning method ?", "mapping rules", 614.0, 627.0], ["One of the main challenges in content-based or semantic image retrieval is still to bridge the gap between low-level features and semantic information. In this paper, An approach is presented using integrated multi-level image features in ontology fusion construction by a fusion framework, which based on the latent semantic analysis. The proposed method promotes images ontology fusion efficiently and broadens the application fields of image ontology retrieval system. The relevant experiment shows that this method ameliorates the problem, such as too many redundant data and relations, in the traditional ontology system construction, as well as improves the performance of semantic images retrieval.", "which Knowledge source ?", "Image", 56.0, 61.0], ["Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. 
Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.", "which Knowledge source ?", "Image", 32.0, 37.0], ["With the increase in smart devices and abundance of video contents, efficient techniques for the indexing, analysis and retrieval of videos are becoming more and more desirable. Improved indexing and automated analysis of millions of videos could be accomplished by getting videos tagged automatically. A lot of existing methods fail to precisely tag videos because of their lack of ability to capture the video context. The context in a video represents the interactions of objects in a scene and their overall meaning. In this work, we propose a novel approach that integrates the video scene ontology with CNN (Convolutional Neural Network) for improved video tagging. Our method captures the content of a video by extracting the information from individual key frames. The key frames are then fed to a CNN based deep learning model to train its parameters. The trained parameters are used to generate the most frequent tags. Highly frequent tags are used to summarize the input video. The proposed technique is benchmarked on the most widely used dataset of video activities, namely, UCF-101. Our method managed to achieve an overall accuracy of 99.8% with an F1- score of 96.2%.", "which Knowledge source ?", "Video", 52.0, 57.0], ["In this work, we offer an approach to combine standard multimedia analysis techniques with knowledge drawn from conceptual metadata provided by domain experts of a specialized scholarly domain, to learn a domain-specific multimedia ontology from a set of annotated examples. A standard Bayesian network learning algorithm that learns structure and parameters of a Bayesian network is extended to include media observables in the learning. An expert group provides domain knowledge to construct a basic ontology of the domain as well as to annotate a set of training videos. These annotations help derive the associations between high-level semantic concepts of the domain and low-level MPEG-7 based features representing audio-visual content of the videos. We construct a more robust and refined version of this ontology by learning from this set of conceptually annotated videos. To encode this knowledge, we use MOWL, a multimedia extension of Web Ontology Language (OWL) which is capable of describing domain concepts in terms of their media properties and of capturing the inherent uncertainties involved. We use the ontology specified knowledge for recognizing concepts relevant to a video to annotate fresh addition to the video database with relevant concepts in the ontology. These conceptual annotations are used to create hyperlinks in the video collection, to provide an effective video browsing interface to the user.", "which Knowledge source ?", "Video", 1189.0, 1194.0], ["Ontologies have proven beneficial in different settings that make use of textual reviews. However, manually constructing ontologies is a laborious and time-consuming process in need of automation. We propose a novel methodology for automatically extracting ontologies, in the form of meronomies, from product reviews, using a very limited amount of hand-annotated training data. We show that the ontologies generated by our method outperform hand-crafted ontologies (WordNet) and ontologies extracted by existing methods (Text2Onto and COMET) in several, diverse settings. 
Specifically, our generated ontologies outperform the others when evaluated by human annotators as well as on an existing Q&A dataset from Amazon. Moreover, our method is better able to generalise, in capturing knowledge about unseen products. Finally, we consider a real-world setting, showing that our method is better able to determine recommended products based on their reviews, in alternative to using Amazon\u2019s standard score aggregations.", "which Knowledge source ?", "Text", NaN, NaN], ["The objective of this paper is to present the role of Ontology Learning Process in supporting an ontology engineer for creating and maintaining ontologies from textual resources. The knowledge structures that interest us are legal domain-specific ontologies. We will use these ontologies to build legal domain ontology for a Lebanese legal knowledge based system. The domain application of this work is the Lebanese criminal system. Ontologies can be learnt from various sources, such as databases, structured and unstructured documents. Here, the focus is on the acquisition of ontologies from unstructured text, provided as input. In this work, the Ontology Learning Process represents a knowledge extraction phase using Natural Language Processing techniques. The resulted ontology is considered as inexpressive ontology. There is a need to reengineer it in order to build a complete, correct and more expressive domain-specific ontology.", "which Knowledge source ?", "Text", 608.0, 612.0], ["Ontologies play an increasingly important role in Knowledge Management. One of the main problems associated with ontologies is that they need to be constructed and maintained. Manual construction of larger ontologies is usually not feasible within companies because of the effort and costs required. Therefore, a semi-automatic approach to ontology construction and maintenance is what everybody is wishing for. The paper presents a framework for semi-automatically learning ontologies from domainspecific texts by applying machine learning techniques. The TEXT-TO-ONTO framework integrates manual engineering facilities to follow a balanced cooperative modelling paradigm.", "which Knowledge source ?", "Text", 557.0, 561.0], ["Existing taxonomies are valuable input for creating ontologies, because they reflect some degree of community consensus and contain, readily available, a wealth of concept definitions plus a hierarchy. However, the transformation of such taxonomies into useful ontologies is not as straightforward as it appears, because simply taking the hierarchy of concepts, which was originally developed for some external purpose other than ontology engineering, as the subsumption hierarchy using rdfs:subClassOf can yield useless ontologies. 
In this paper, we (1) illustrate the problem by analyzing OWL and RDF-S ontologies derived from UNSPSC (a products and services taxonomy), (2) detail how the interpretation and representation of the original taxonomic relationship is an important modeling decision when deriving ontologies from existing taxonomies, (3) propose a novel \u201cgen/tax\u201d approach to capture the original semantics of taxonomies in OWL, based on the split of each category in the taxonomy into two concepts, a generic concept and a taxonomy concept, and (4) show the usefulness of this approach by transforming eCl@ss into a fully-fledged products and services ontology.", "which Relationships ?", "Subclass", NaN, NaN], ["In this work, pure and IIIA element doped ZnO thin films were grown on p type silicon (Si) with (100) orientated surface by sol-gel method, and were characterized for comparing their electrical characteristics. The heterojunction parameters were obtained from the current-voltage (I-V) and capacitance-voltage (C-V) characteristics at room temperature. The ideality factor (n), saturation current (Io) and junction resistance of ZnO/p-Si heterojunction for both pure and doped (with Al or In) cases were determined by using different methods at room ambient. Other electrical parameters such as Fermi energy level (EF), barrier height (\u03a6B), acceptor concentration (Na), built-in potential (\u03a6i) and voltage dependence of surface states (Nss) profile were obtained from the C-V measurements. The results reveal that doping ZnO with IIIA (Al or In) elements to fabricate n-ZnO/p-Si heterojunction can result in high performance diode characteristics.", "which ZnO film deposition method ?", "sol-gel", 124.0, 131.0], ["In this study, temperature-dependent electrical properties of n-type Ga-doped ZnO thin film / p-type Si nanowire heterojunction diodes were reported. Metal-assisted chemical etching (MACE) process was performed to fabricate Si nanowires. Ga-doped ZnO films were then deposited onto nanowires through chemical bath deposition (CBD) technique to build three-dimensional nanowire-based heterojunction diodes. Fabricated devices revealed significant diode characteristics in the temperature range of 220 - 360 K. Electrical measurements shown that diodes had a well-defined rectifying behavior with a good rectification ratio of 103 \u00b13 V at room temperature. Ideality factor (n) were changed from 2.2 to 1.2 with increasing temperature.", "which ZnO film deposition method ?", "chemical bath deposition ", NaN, NaN], ["Many governments react to the current coronavirus/COVID\u201019 pandemic by restricting daily (work) life. 
On the basis of theories from occupational health, we propose that the duration of the pandemic, its demands (e.g., having to work from home, closing of childcare facilities, job insecurity, work\u2010privacy conflicts, privacy\u2010work conflicts) and personal\u2010 and job\u2010related resources (co\u2010worker social support, job autonomy, partner support and corona self\u2010efficacy) interact in their effect on employee exhaustion. We test the hypotheses with a three\u2010wave sample of German employees during the pandemic from April to June 2020 (N w1 = 2900, N w12 = 1237, N w123 = 789). Our findings show a curvilinear effect of pandemic duration on working women's exhaustion. The data also show that the introduction and the easing of lockdown measures affect exhaustion, and that women with children who work from home while childcare is unavailable are especially exhausted. Job autonomy and partner support mitigated some of these effects. In sum, women's psychological health was more strongly affected by the pandemic than men's. We discuss implications for occupational health theories and that interventions targeted at mitigating the psychological consequences of the COVID\u201019 pandemic should target women specifically.", "which Indicator for well-being ?", "Exhaustion", 501.0, 511.0], ["ABSTRACT This paper provides a timely evaluation of whether the main COVID-19 lockdown policies \u2013 remote work, short-time work and closure of schools and childcare \u2013 have an immediate effect on the German population in terms of changes in satisfaction with work and family life. Relying on individual level panel data collected before and during the lockdown, we examine (1) how family satisfaction and work satisfaction of individuals have changed over the lockdown period, and (2) how lockdown-driven changes in the labour market situation (i.e. working remotely and being sent on short-time work) have affected satisfactions. We apply first-difference regressions for mothers, fathers, and persons without children. Our results show a general decrease in family satisfaction. We also find an overall decline in work satisfaction which is most pronounced for mothers and those without children who have to switch to short-time work. In contrast, fathers' well-being is less affected negatively and their family satisfaction even increased after changing to short-time work. We conclude that while the lockdown circumstances generally have a negative effect on the satisfaction with work and family of individuals in Germany, effects differ between childless persons, mothers, and fathers with the latter being least negatively affected.", "which Indicator for well-being ?", "Family satisfaction", 379.0, 398.0], ["Abstract Objectives To investigate early effects of the COVID-19 pandemic related to (a) levels of worry, risk perception, and social distancing; (b) longitudinal effects on well-being; and (c) effects of worry, risk perception, and social distancing on well-being. Methods We analyzed annual changes in four aspects of well-being over 5 years (2015\u20132020): life satisfaction, financial satisfaction, self-rated health, and loneliness in a subsample (n = 1,071, aged 65\u201371) from a larger survey of Swedish older adults. The 2020 wave, collected March 26\u2013April 2, included measures of worry, risk perception, and social distancing in response to COVID-19. 
Results (a) In relation to COVID-19: 44.9% worried about health, 69.5% about societal consequences, 25.1% about financial consequences; 86.4% perceived a high societal risk, 42.3% a high risk of infection, and 71.2% reported high levels of social distancing. (b) Well-being remained stable (life satisfaction and loneliness) or even increased (self-rated health and financial satisfaction) in 2020 compared to previous years. (c) More worry about health and financial consequences was related to lower scores in all four well-being measures. Higher societal worry and more social distancing were related to higher well-being. Discussion In the early stage of the pandemic, Swedish older adults on average rated their well-being as high as, or even higher than, previous years. However, those who worried more reported lower well-being. Our findings speak to the resilience, but also heterogeneity, among older adults during the pandemic. Further research, on a broad range of health factors and long-term psychological consequences, is needed.", "which Indicator for well-being ?", "Life satisfaction", 357.0, 374.0], ["ABSTRACT This paper provides a timely evaluation of whether the main COVID-19 lockdown policies \u2013 remote work, short-time work and closure of schools and childcare \u2013 have an immediate effect on the German population in terms of changes in satisfaction with work and family life. Relying on individual level panel data collected before and during the lockdown, we examine (1) how family satisfaction and work satisfaction of individuals have changed over the lockdown period, and (2) how lockdown-driven changes in the labour market situation (i.e. working remotely and being sent on short-time work) have affected satisfactions. We apply first-difference regressions for mothers, fathers, and persons without children. Our results show a general decrease in family satisfaction. We also find an overall decline in work satisfaction which is most pronounced for mothers and those without children who have to switch to short-time work. In contrast, fathers' well-being is less affected negatively and their family satisfaction even increased after changing to short-time work. We conclude that while the lockdown circumstances generally have a negative effect on the satisfaction with work and family of individuals in Germany, effects differ between childless persons, mothers, and fathers with the latter being least negatively affected.", "which Indicator for well-being ?", "Work satisfaction", 403.0, 420.0], ["The authors assess levels and within-person changes in psychological well-being (i.e., depressive symptoms and life satisfaction) from before to during the COVID-19 pandemic for individuals in the United States, in general and by socioeconomic status (SES). The data is from 2 surveys of 1,143 adults from RAND Corporation's nationally representative American Life Panel, the first administered between April-June, 2019 and the second during the initial peak of the pandemic in the United States in April, 2020. Depressive symptoms during the pandemic were higher than population norms before the pandemic. Depressive symptoms increased from before to during COVID-19 and life satisfaction decreased. Individuals with higher education experienced a greater increase in depressive symptoms and a greater decrease in life satisfaction from before to during COVID-19 in comparison to those with lower education. 
Supplemental analysis illustrates that income had a curvilinear relationship with changes in well-being, such that individuals at the highest levels of income experienced a greater decrease in life satisfaction from before to during COVID-19 than individuals with lower levels of income. We draw on conservation of resources theory and the theory of fundamental social causes to examine four key mechanisms (perceived financial resources, perceived control, interpersonal resources, and COVID-19-related knowledge/news consumption) underlying the relationship between SES and well-being during COVID-19. These resources explained changes in well-being for the sample as a whole but did not provide insight into why individuals of higher education experienced a greater decline in well-being from before to during COVID-19. (PsycInfo Database Record (c) 2020 APA, all rights reserved).", "which Indicator for well-being ?", "Life satisfaction", 111.0, 128.0], ["The COVID-19 pandemic has considerably impacted many people's lives. This study examined changes in subjective wellbeing between December 2019 and May 2020 and how stress appraisals and coping strategies relate to individual differences and changes in subjective wellbeing during the early stages of the pandemic. Data were collected at 4 time points from 979 individuals in Germany. Results showed that, on average, life satisfaction, positive affect, and negative affect did not change significantly between December 2019 and March 2020 but decreased between March and May 2020. Across the latter timespan, individual differences in life satisfaction were positively related to controllability appraisals, active coping, and positive reframing, and negatively related to threat and centrality appraisals and planning. Positive affect was positively related to challenge and controllable-by-self appraisals, active coping, using emotional support, and religion, and negatively related to threat appraisal and humor. Negative affect was positively related to threat and centrality appraisals, denial, substance use, and self-blame, and negatively related to controllability appraisals and emotional support. Contrary to expectations, the effects of stress appraisals and coping strategies on changes in subjective wellbeing were small and mostly nonsignificant. These findings imply that the COVID-19 pandemic represents not only a major medical and economic crisis, but also has a psychological dimension, as it can be associated with declines in key facets of people's subjective wellbeing. Psychological practitioners should address potential declines in subjective wellbeing with their clients and attempt to enhance clients' general capability to use functional stress appraisals and effective coping strategies. (PsycInfo Database Record (c) 2020 APA, all rights reserved).", "which Indicator for well-being ?", "Life satisfaction", 417.0, 434.0], ["ABSTRACT This study analyses the consequences of the Covid-19 crisis on stress and well-being in Switzerland. In particular, we assess whether vulnerable groups in terms of social isolation, increased workload and limited socioeconomic resources are affected more than others. Using longitudinal data from the Swiss Household Panel, including a specific Covid-19 study, we estimate change score models to predict changes in perceived stress and life satisfaction at the end of the semi-lockdown in comparison to before the crisis. We find no general change in life satisfaction and a small decrease in stress. 
Yet, in line with our expectations, more vulnerable groups in terms of social isolation (young adults, Covid-19 risk group members, individuals without a partner), workload (women) and socioeconomic resources (unemployed and those who experienced a deteriorating financial situation) reported a decrease in life satisfaction. Stress levels decreased most strongly among high earners, workers on short-time work and the highly educated.", "which Indicator for well-being ?", "Life satisfaction", 445.0, 462.0], ["The coronavirus outbreak has caused significant disruptions to people\u2019s lives. We document the impact of state-wide stay-at-home orders on mental health using real time survey data in the US. The lockdown measures lowered mental health by 0.085 standard deviations. This large negative effect is entirely driven by women. As a result of the lockdown measures, the existing gender gap in mental health has increased by 66%. The negative effect on women\u2019s mental health cannot be explained by an increase in financial worries or childcare responsibilities.", "which Indicator for well-being ?", "Mental health", 139.0, 152.0], ["We document a decline in mental well-being after the onset of the Covid-19 pandemic in the UK. This decline is twice as large for women as for men. We seek to explain this gender gap by exploring gender differences in: family and caring responsibilities; financial and work situation; social engagement; health situation, and health behaviours, including exercise. Differences in family and caring responsibilities play some role, but the bulk of the gap is explained by social factors. Women reported more close friends before the pandemic than men, and increased loneliness after the pandemic's onset. Other factors are similarly distributed across genders and so play little role. Finally, we document larger declines in well-being for the young, of both genders, than the old.", "which Indicator for well-being ?", "Mental well-being", 25.0, 42.0], ["Abstract In a humanitarian response, leaders are often tasked with making large numbers of decisions, many of which have significant consequences, in situations of urgency and uncertainty. These conditions have an impact on the decision-maker (causing stress, for example) and subsequently on how decisions get made. Evaluations of humanitarian action suggest that decision-making is an area of weakness in many operations. There are examples of important decisions being missed and of decision-making processes that are slow and ad hoc. As part of a research process to address these challenges, this article considers literature from the humanitarian and emergency management sectors that relates to decision-making. It outlines what the literature tells us about the nature of the decisions that leaders at the country level are taking during humanitarian operations, and the circumstances under which these decisions are taken. It then considers the potential application of two different types of decision-making process in these contexts: rational/analytical decision-making and naturalistic decision-making. 
The article concludes with broad hypotheses that can be drawn from the literature and with the recommendation that these be further tested by academics with an interest in the topic.", "which Individual factors influencing evidence-based decision-making ?", "stress", 252.0, 258.0], ["BACKGROUND Depression is a frequently occurring condition in family practice patients, but time limitations may hamper the physician's ability to it treat effectively. Referrals to mental health professionals are frequently resisted by patients. The need for more effective treatment strategies led to the development and evaluation of a telephone-based, problem-solving intervention. METHODS Patients in a family practice residency practice were evaluated through the Medical Outcomes Study Depression Screening Scale and the Diagnostic Interview Schedule to identify those with subthreshold or minor depression. Twenty-nine subjects were randomly assigned to either a treatment or comparison group. Initial scores on the Hamilton Depression Rating Scale were equivalent for the groups and were in the mildly depressed range. Six problem-solving therapy sessions were conducted over the telephone by graduate student therapists supervised by a psychiatrist. RESULTS Treatment group subjects had significantly lower post-intervention scores on the Hamilton Depression Rating Scale compared with their pre-intervention scores (P < .05). Scores did not differ significantly over time in the comparison group. Post-intervention, treatment group subjects also had lower Beck Depression Inventory scores than did the comparison group (P < .02), as well as more positive scores for social health (P < .002), mental health (P < .05), and self-esteem (P < .05) on the Duke Health Profile. CONCLUSIONS The findings indicate that brief, telephone-based treatment for minor depression in family practice settings may be an efficient and effective method to decrease symptoms of depression and improve functioning. Nurses in these settings with appropriate training and supervision may also be able to provide this treatment.", "which Recruitment ?", "Screening", 503.0, 512.0], ["Software-intensive systems often consist of cooperating reactive components. In mobile and reconfigurable systems, their topology changes at run-time, which influences how the components must cooperate. The Scenario Modeling Language (SML) offers a formal approach for specifying the reactive behavior such systems that aligns with how humans conceive and communicate behavioral requirements. Simulation and formal checks can find specification flaws early. We present a framework for the Scenario-based Programming (SBP) that reflects the concepts of SML in Java and makes the scenario modeling approach available for programming. SBP code can also be generated from SML and extended with platform-specific code, thus streamlining the transition from design to implementation. As an example serves a car-to-x communication system. Demo video and artifact: http://scenariotools.org/esecfse-2017-tool-demo/", "which Has application in ?", "car-to-x", 801.0, 809.0], ["Users of today's online software services are often diversified and distributed, whose needs are hard to elicit using conventional RE approaches. As a consequence, crowd-based, data intensive requirements engineering approaches are considered important. In this paper, we have conducted an experimental study on a dataset of 2,966 requirements statements to evaluate the performance of three text clustering algorithms. 
The purpose of the study is to aggregate similar requirement statements suggested by the crowd users, and also to identify domain objects and operations, as well as required features from the given requirements statements dataset. The experimental results are then cross-checked with original tags provided by data providers for validation.", "which Utilities in CrowdRE ?", "Crowd", 164.0, 169.0], ["MyERP is a fictional developer of an Enterprise Resource Planning (ERP) system. Driven by the competition, they face the challenge of losing market share if they fail to de-ploy a Software as a Service (SaaS) ERP system to the European market quickly, but with high quality product. This also means that the requirements engineering (RE) activities will have to be performed efficiently and provide solid results. An additional problem they face is that their (potential) stakeholders are phys-ically distributed, it makes sense to consider them a \"crowd\". This competition paper suggests a Crowd-based RE approach that first identifies the crowd, then collects and analyzes their feedback to derive wishes and needs, and validate the results through prototyping. For this, techniques are introduced that have so far been rarely employed within RE, but more \"traditional\" RE techniques, will also be integrated and/or adapted to attain the best possible result in the case of MyERP.", "which Utilities in CrowdRE ?", "Crowd", 549.0, 554.0], ["Crowdsourcing is an emerging method to collect requirements for software systems. Applications seeking global acceptance need to meet the expectations of a wide range of users. Collecting requirements and arriving at consensus with a wide range of users is difficult using traditional method of requirements elicitation. This paper presents crowdsourcing based approach for German medium-size software company MyERP that might help the company to get access to requirements from non-German customers. We present the tasks involved in the proposed solution that would help the company meet the goal of eliciting requirements at a fast pace with non-German customers.", "which Utilities in CrowdRE ?", "Crowd", NaN, NaN], ["Software systems are designed to support their users in performing tasks that are parts of more general processes. Unfortunately, software designers often make invalid assumptions about the users' processes and therefore about the requirements to support such processes. Eliciting and validating such assumptions through manual means (e.g., through observations, interviews, and workshops) is expensive, time-consuming, and may fail to identify the users' real processes. Using process mining may reduce these problems by automating the monitoring and discovery of the actual processes followed by a crowd of users. The Crowd provides an opportunity to involve diverse groups of users to interact with a system and conduct their intended processes. This implicit feedback in the form of discovered processes can then be used to modify the existing system's functionalities and ensure whether or not a software product is used as initially designed. In addition, the analysis of user-system interactions may reveal lacking functionalities and quality issues. These ideas are illustrated on the GreenSoft personal energy management system.", "which Utilities in CrowdRE ?", "Crowd", 600.0, 605.0], ["Requirements engineering is a preliminary and crucial phase for the correctness and quality of software systems. 
Despite the agreement on the positive correlation between user involvement in requirements engineering and software success, current development methods employ a too narrow concept of that user and rely on a recruited set of users considered to be representative. Such approaches might not cater for the diversity and dynamism of the actual users and the context of software usage. This is especially true in new paradigms such as cloud and mobile computing. To overcome these limitations, we propose crowd-centric requirements engineering (CCRE) as a revised method for requirements engineering where users become primary contributors, resulting in higher-quality requirements and increased user satisfaction. CCRE relies on crowd sourcing to support a broader user involvement, and on gamification to motivate that voluntary involvement.", "which Utilities in CrowdRE ?", "Crowd", 614.0, 619.0], ["The ever increasing accessibility of the web for the crowd offered by various electronic devices such as smartphones has facilitated the communication of the needs, ideas, and wishes of millions of stakeholders. To cater for the scale of this input and reduce the overhead of manual elicitation methods, data mining and text mining techniques have been utilised to automatically capture and categorise this stream of feedback, which is also used, amongst other things, by stakeholders to communicate their requirements to software developers. Such techniques, however, fall short of identifying some of the peculiarities and idiosyncrasies of the natural language that people use colloquially. This paper proposes CRAFT, a technique that utilises the power of the crowd to support richer, more powerful text mining by enabling the crowd to categorise and annotate feedback through a context menu. This, in turn, helps requirements engineers to better identify user requirements within such feedback. This paper presents the theoretical foundations as well as the initial evaluation of this crowd-based feedback annotation technique for requirements identification.", "which Utilities in CrowdRE ?", "Crowd", 53.0, 58.0], ["Due to the pervasive use of online forums and social media, users' feedback are more accessible today and can be used within a requirements engineering context. However, such information is often fragmented, with multiple perspectives from multiple parties involved during on\u2010going interactions. In this paper, the authors propose a Crowd\u2010based Requirements Engineering approach by Argumentation (CrowdRE\u2010Arg). The framework is based on the analysis of the textual conversations found in user forums, identification of features, issues and the arguments that are in favour or opposing a given requirements statement. The analysis is to generate an argumentation model of the involved user statements, retrieve the conflicting\u2010viewpoints, reason about the winning\u2010arguments and present that to systems analysts to make informed\u2010requirements decisions. For this purpose, the authors adopted a bipolar argumentation framework and a coalition\u2010based meta\u2010argumentation framework as well as user voting techniques. The CrowdRE\u2010Arg approach and its algorithms are illustrated through two sample conversations threads taken from the Reddit forum. Additionally, the authors devised algorithms that can identify conflict\u2010free features or issues based on their supporting and attacking arguments. 
The authors tested these machine learning algorithms on a set of 3,051 user comments, preprocessed using the content analysis technique. The results show that the proposed algorithms correctly and efficiently identify conflict\u2010free features and issues along with their winning arguments.", "which Utilities in CrowdRE ?", "Crowd", 333.0, 338.0], ["Crowd-based requirements engineering (CrowdRE) is promising to derive requirements by gathering and analyzing information from the crowd. Setting up CrowdRE in practice seems challenging, although first solutions to support CrowdRE exist. In this paper, we report on a German software company's experience on crowd involvement by using feedback communication channels and a monitoring solution for user-event data. In our case study, we identified several problem areas that a software company is confronted with to setup an environment for gathering requirements from the crowd. We conclude that a CrowdRE process cannot be implemented ad-hoc and that future work is needed to create and analyze a continuous feedback and monitoring data stream.", "which Utilities in CrowdRE ?", "Crowd", 0.0, 5.0], ["Crowd-based requirements engineering (CrowdRE) could significantly change RE. Performing RE activities such as elicitation with the crowd of stakeholders turns RE into a participatory effort, leads to more accurate requirements, and ultimately boosts software quality. Although any stakeholder in the crowd can contribute, CrowdRE emphasizes one stakeholder group whose role is often trivialized: users. CrowdRE empowers the management of requirements, such as their prioritization and segmentation, in a dynamic, evolved style through collecting and harnessing a continuous flow of user feedback and monitoring data on the usage context. To analyze the large amount of data obtained from the crowd, automated approaches are key. This article presents current research topics in CrowdRE; discusses the benefits, challenges, and lessons learned from projects and experiments; and assesses how to apply the methods and tools in industrial contexts. This article is part of a special issue on Crowdsourcing for Software Engineering.", "which Utilities in CrowdRE ?", "Crowd", 0.0, 5.0], ["The Internet is a social space that is shaped by humans through the development of websites, the release of web services, the collaborative creation of encyclopedias and forums, the exchange of information through social networks, the provision of work through crowdsourcing platforms, etc. This landscape offers novel possibilities for software systems to satisfy their requirements, e.g., by retrieving and aggregating the information from Internet websites as well as by crowdsourcing the execution of certain functions. In this paper, we present a special type of functional requirements (called unbounded) that is not fully satisfiable and whose satisfaction is increased by gathering evidence from multiple sources. In addition to charac- terizing unbounded requirements, we explain how to maximize their satisfaction by asking and by combining opinions of mul- tiple sources: people, services, information, and algorithms. 
We provide evidence of the existence of these requirements through examples by studying a modern Web application (Spotify) and from a traditional system (Microsoft Word).", "which Utilities in CrowdRE ?", "Crowd", NaN, NaN], ["This paper proposes a gradual approach to crowd-based requirements engineering (RE) for supporting the establishment of a more engaged crowd, hence, mitigating the low involvement risk in crowd-based RE. Our approach advocates involving micro-crowds (MCs), where in each micro-crowd, the population is relatively cohesive and familiar with each other. Using this approach, the evolving product is developed iteratively. At each iteration, a new MC can join the already established crowd to enhance the requirements for the next version, while adding terminology to an evolving folksonomy. We are currently using this approach in an on-going research project to develop an online social network (OSN) for academic researchers that will facilitate discussions and knowledge sharing around conferences.", "which Utilities in CrowdRE ?", "Crowd", 42.0, 47.0], ["Twitter enables large populations of end-users of software to publicly share their experiences and concerns about software systems in the form of micro-blogs. Such data can be collected and classified to help software developers infer users' needs, detect bugs in their code, and plan for future releases of their systems. However, automatically capturing, classifying, and presenting useful tweets is not a trivial task. Challenges stem from the scale of the data available, its unique format, diverse nature, and high percentage of irrelevant information and spam. Motivated by these challenges, this paper reports on a three-fold study that is aimed at leveraging Twitter as a main source of software user requirements. The main objective is to enable a responsive, interactive, and adaptive data-driven requirements engineering process. Our analysis is conducted using 4,000 tweets collected from the Twitter feeds of 10 software systems sampled from a broad range of application domains. The results reveal that around 50% of collected tweets contain useful technical information. The results also show that text classifiers such as Support Vector Machines and Naive Bayes can be very effective in capturing and categorizing technically informative tweets. Additionally, the paper describes and evaluates multiple summarization strategies for generating meaningful summaries of informative software-relevant tweets.", "which Utilities in CrowdRE ?", "Task", 416.0, 420.0], ["Crowdsourcing is an emerging method to collect requirements for software systems. Applications seeking global acceptance need to meet the expectations of a wide range of users. Collecting requirements and arriving at consensus with a wide range of users is difficult using traditional method of requirements elicitation. This paper presents crowdsourcing based approach for German medium-size software company MyERP that might help the company to get access to requirements from non-German customers. We present the tasks involved in the proposed solution that would help the company meet the goal of eliciting requirements at a fast pace with non-German customers.", "which RE activities with crowd involvement ?", "Elicitation", 308.0, 319.0], ["Crowd-based requirements engineering (CrowdRE) could significantly change RE. 
Performing RE activities such as elicitation with the crowd of stakeholders turns RE into a participatory effort, leads to more accurate requirements, and ultimately boosts software quality. Although any stakeholder in the crowd can contribute, CrowdRE emphasizes one stakeholder group whose role is often trivialized: users. CrowdRE empowers the management of requirements, such as their prioritization and segmentation, in a dynamic, evolved style through collecting and harnessing a continuous flow of user feedback and monitoring data on the usage context. To analyze the large amount of data obtained from the crowd, automated approaches are key. This article presents current research topics in CrowdRE; discusses the benefits, challenges, and lessons learned from projects and experiments; and assesses how to apply the methods and tools in industrial contexts. This article is part of a special issue on Crowdsourcing for Software Engineering.", "which RE activities with crowd involvement ?", "Elicitation", 111.0, 122.0], ["The ever increasing accessibility of the web for the crowd offered by various electronic devices such as smartphones has facilitated the communication of the needs, ideas, and wishes of millions of stakeholders. To cater for the scale of this input and reduce the overhead of manual elicitation methods, data mining and text mining techniques have been utilised to automatically capture and categorise this stream of feedback, which is also used, amongst other things, by stakeholders to communicate their requirements to software developers. Such techniques, however, fall short of identifying some of the peculiarities and idiosyncrasies of the natural language that people use colloquially. This paper proposes CRAFT, a technique that utilises the power of the crowd to support richer, more powerful text mining by enabling the crowd to categorise and annotate feedback through a context menu. This, in turn, helps requirements engineers to better identify user requirements within such feedback. This paper presents the theoretical foundations as well as the initial evaluation of this crowd-based feedback annotation technique for requirements identification.", "which RE activities with crowd involvement ?", "Elicitation", 283.0, 294.0], ["To build needed mobile applications in specific domains, requirements should be collected and analyzed in holistic approach. However, resource is limited for small vendor groups to perform holistic requirement acquisition and elicitation. The rise of crowdsourcing and crowdfunding gives small vendor groups new opportunities to build needed mobile applications for the crowd. By finding prior stakeholders and gathering requirements effectively from the crowd, mobile application projects can establish sound foundation in early phase of software process. Therefore, integration of crowd-based requirement engineering into software process is important for small vendor groups. Conventional requirement acquisition and elicitation methods are analyst-centric. Very little discussion is in adapting requirement acquisition tools for crowdcentric context. In this study, several tool features of use case documentation are revised in crowd-centric context. These features constitute a use case-based framework, called UCFrame, for crowd-centric requirement acquisition. 
An instantiation of UCFrame is also presented to demonstrate the effectiveness of UCFrame in collecting crowd requirements for building two mobile applications.", "which RE activities with crowd involvement ?", "Elicitation", 226.0, 237.0], ["Internetware is required to respond quickly to emergent user requirements or requirements changes by providing application upgrade or making context-aware recommendations. As user requirements in Internet computing environment are often changing fast and new requirements emerge more and more in a creative way, traditional requirements engineering approaches based on requirements elicitation and analysis cannot ensure the quick response of Internetware. In this paper, we propose an approach for mining context-aware user requirements from crowd contributed mobile data. The approach captures behavior records contributed by a crowd of mobile users and automatically mines context-aware user behavior patterns (i.e., when, where and under what conditions users require a specific service) from them using Apriori-M algorithm. Based on the mined user behaviors, emergent requirements or requirements changes can be inferred from the mined user behavior patterns and solutions that satisfy the requirements can be recommended to users. To evaluate the proposed approach, we conduct an experimental study and show the effectiveness of the requirements mining approach.", "which RE activities with crowd involvement ?", "Elicitation", 382.0, 393.0], ["In this paper we provide empirical evidence that the rating that an app attracts can be accurately predicted from the features it offers. Our results, based on an analysis of 11,537 apps from the Samsung Android and BlackBerry World app stores, indicate that the rating of 89% of these apps can be predicted with 100% accuracy. Our prediction model is built by using feature and rating information from the existing apps offered in the App Store and it yields highly accurate rating predictions, using only a few (11-12) existing apps for case-based prediction. These findings may have important implications for requirements engineering in app stores: They indicate that app developers may be able to obtain (very accurate) assessments of the customer reaction to their proposed feature sets (requirements), thereby providing new opportunities to support the requirements elicitation process for app developers.", "which RE activities with crowd involvement ?", "Elicitation", 873.0, 884.0], ["Twitter is one of the most popular social networks. Previous research found that users employ Twitter to communicate about software applications via short messages, commonly referred to as tweets, and that these tweets can be useful for requirements engineering and software evolution. However, due to their large number---in the range of thousands per day for popular applications---a manual analysis is unfeasible.In this work we present ALERTme, an approach to automatically classify, group and rank tweets about software applications. We apply machine learning techniques for automatically classifying tweets requesting improvements, topic modeling for grouping semantically related tweets and a weighted function for ranking tweets according to specific attributes, such as content category, sentiment and number of retweets. We ran our approach on 68,108 collected tweets from three software applications and compared its results against software practitioners' judgement. 
Our results show that ALERTme is an effective approach for filtering, summarizing and ranking tweets about software applications. ALERTme enables the exploitation of Twitter as a feedback channel for information relevant to software evolution, including end-user requirements.", "which RE activities with crowd involvement ?", "Evolution", 275.0, 284.0], ["Crowd-based requirements engineering (CrowdRE) could significantly change RE. Performing RE activities such as elicitation with the crowd of stakeholders turns RE into a participatory effort, leads to more accurate requirements, and ultimately boosts software quality. Although any stakeholder in the crowd can contribute, CrowdRE emphasizes one stakeholder group whose role is often trivialized: users. CrowdRE empowers the management of requirements, such as their prioritization and segmentation, in a dynamic, evolved style through collecting and harnessing a continuous flow of user feedback and monitoring data on the usage context. To analyze the large amount of data obtained from the crowd, automated approaches are key. This article presents current research topics in CrowdRE; discusses the benefits, challenges, and lessons learned from projects and experiments; and assesses how to apply the methods and tools in industrial contexts. This article is part of a special issue on Crowdsourcing for Software Engineering.", "which RE activities with crowd involvement ?", "Prioritization", 467.0, 481.0], ["Verification and validation (V&V) is only marginally addressed in software process improvement models like CMM and CMMI. A roadmap for the establishment of a sound verification and validation process in software development organizations is badly needed. This paper presents a basis for a roadmap; it describes a framework for improvement of the V&V process, based on the Testing Maturity Model (TMM), but with considerable enhancements. The model, tentatively named MB-V/sup 2/M/sup 2/ (Metrics Based Verification and Validation Maturity Model), has been initiated by a consortium of industrial companies, consultancy & service agencies and an academic institute, operating and residing in the Netherlands. MB-V/sup 2/M/sup 2/ is designed to be universally applicable, to unite the strengths of known (verification and validation) improvement models and to reflect proven work practices. It recommends a metrics base to select process improvements and to track and control implementation of improvement actions. This paper outlines the model and addresses the current status.", "which Domain Name ?", "Software", 66.0, 74.0], ["Despite the long tradition on the study of human values, the impact of this field in the software engineering domain is rarely studied. To these regards, this study focuses on applying human values to agile software development process, more specifically to scrum roles. Thus, the goal of the study is to explore possible associations between human values and scrum roles preferences among students. Questionnaires are designed by employing the Short Schwartz's Value Survey and are distributed among 57 students. 
The results of the quantitative analysis process consisting of descriptive statistics, linear regression models and Pearson correlation coefficients, revealed that values such as power and self-direction influence the preference for the product owner role, the value of hedonism influences the preference for scrum masters and self-direction is associated with team members' preference.", "which Subjects ?", "students", 390.0, 398.0], ["Background: Human values, such as prestige, social justice, and financial success, influence software production decision-making processes. While their subjectivity makes some values difficult to measure, their impact on software motivates our research. Aim: To contribute to the scientific understanding and the empirical investigation of human values in Software Engineering (SE). Approach: Drawing from social psychology, we consider values as mental representations to be investigated on three levels: at a system (L1), personal (L2), and instantiation level (L3). Method: We design and develop a selection of tools for the investigation of values at each level, and focus on the design, development, and use of the Values Q-Sort. Results: From our study with 12 software practitioners, it is possible to extract three values `prototypes' indicative of an emergent typology of values considerations in SE. Conclusions: The Values Q-Sort generates quantitative values prototypes indicating values relations (L1) as well as rich personal narratives (L2) that reflect specific software practices (L3). It thus offers a systematic, empirical approach to capturing values in SE.", "which Subjects ?", "software practitioners", 767.0, 789.0], ["Workshops are an established technique for requirements elicitation. A lot of information is revealed during a workshop, which is generally captured via textual minutes. The scribe suffers from a cognitive overload due to the difficulty of gathering all information, listening and writing at the same time. Video recording is used as additional option to capture more information, including non-verbal gestures. Since a workshop can take several hours, the recorded video will be long and may be disconnected from the scribe's notes. Therefore, the weak and unclear structure of the video complicates the access to the recorded information, for example in subsequent requirements engineering activities. We propose the combination of textual minutes and video with a software tool. Our objective is connecting textual notes with the corresponding part of the video. By highlighting relevant sections of a video and attaching notes that summarize those sections, a more useful structure can be achieved. This structure allows an easy and fast access to the relevant information and their corresponding video context. Thus, a scribe's overload can be mitigated and further use of a video can be simplified. Tool-supported analysis of such an enriched video can facilitate the access to all communicated information of a workshop. This allows an easier elicitation of high-quality requirements. We performed a preliminary evaluation of our approach in an experimental set-up with 12 participants. They were able to elicit higher-quality requirements with our software tool.", "which Has research method ?", "Experiment", NaN, NaN], ["Overgrazing and climate warming may be important drivers of alpine rangeland degradation in the Qinghai-Tibetan Plateau (QTP). 
In this study, the effects of grazing and experimental warming on the vegetation of cultivated grasslands, alpine steppe and alpine meadows on the QTP were investigated. The three treatments were a control, a warming treatment and a grazing treatment and were replicated three times on each vegetation type. The warming treatment was applied using fibreglass open-top chambers and the grazing treatment was continuous grazing by yaks at a moderately high stocking rate. Both grazing and warming negatively affected vegetation cover. Grazing reduced vegetation height while warming increased vegetation height. Grazing increased but warming reduced plant diversity. Grazing decreased and warming increased the aboveground plant biomass. Grazing increased the preferred forage species in native rangelands (alpine steppe and alpine meadow), while warming increased the preferred forage species in the cultivated grassland. Grazing reduced the vegetation living state (VLS) of all three alpine grasslands by nearly 70%, while warming reduced the VLS of the cultivated grassland and the alpine steppe by 32% and 56%, respectively, and promoted the VLS of the alpine meadow by 20.5%. It was concluded that overgrazing was the main driver of change to the alpine grassland vegetation on the QTP. The findings suggest that grazing regimes should be adapted in order for them to be sustainable in a warmer future.", "which Experimental treatment ?", "Warming", 24.0, 31.0], ["Overgrazing and climate warming may be important drivers of alpine rangeland degradation in the Qinghai-Tibetan Plateau (QTP). In this study, the effects of grazing and experimental warming on the vegetation of cultivated grasslands, alpine steppe and alpine meadows on the QTP were investigated. The three treatments were a control, a warming treatment and a grazing treatment and were replicated three times on each vegetation type. The warming treatment was applied using fibreglass open-top chambers and the grazing treatment was continuous grazing by yaks at a moderately high stocking rate. Both grazing and warming negatively affected vegetation cover. Grazing reduced vegetation height while warming increased vegetation height. Grazing increased but warming reduced plant diversity. Grazing decreased and warming increased the aboveground plant biomass. Grazing increased the preferred forage species in native rangelands (alpine steppe and alpine meadow), while warming increased the preferred forage species in the cultivated grassland. Grazing reduced the vegetation living state (VLS) of all three alpine grasslands by nearly 70%, while warming reduced the VLS of the cultivated grassland and the alpine steppe by 32% and 56%, respectively, and promoted the VLS of the alpine meadow by 20.5%. It was concluded that overgrazing was the main driver of change to the alpine grassland vegetation on the QTP. The findings suggest that grazing regimes should be adapted in order for them to be sustainable in a warmer future.", "which Study type ?", "Experiment", NaN, NaN], ["Thesauri are used for document referencing. They define hierarchies of domains. We show how document and domain contents can be used to validate and update a classification based on a thesaurus. We use document indexing and classification techniques to automate these operations. We also draft a methodology to systematically address those issues. 
Our techniques are applied to Urbamet, a thesaurus in the field of town planning.", "which Ontology type ?", "Thesaurus", 184.0, 193.0], ["Thesauri are used for document referencing. They define hierarchies of domains. We show how document and domain contents can be used to validate and update a classification based on a thesaurus. We use document indexing and classification techniques to automate these operations. We also draft a methodology to systematically address those issues. Our techniques are applied to Urbamet, a thesaurus in the field of town planning.", "which Ontology name ?", "Urbamet", 378.0, 385.0], ["Entrepreneurs and start-up founders using innovation spaces and hubs often find themselves inside a filter bubble or echo chamber, where like-minded people tend to come up with similar ideas and recommend similar approaches to innovation. This trend towards homophily and a polarisation of like-mindedness is aggravated by algorithmic filtering and recommender systems embedded in mobile technology and social media platforms. Yet, genuine innovation thrives on social inclusion fostering a diversity of ideas. To escape these echo chambers, we designed and tested the Skunkworks Finder - an exploratory tool that employs social network analysis to help users discover spaces of difference and otherness in their local urban innovation ecosystem.", "which uses Recommendation Method ?", "Algorithm", NaN, NaN], ["With the advancement of smart city, the development of intelligent mobile terminal and wireless network, the traditional text information service no longer meet the needs of the community residents, community image service appeared as a new media service. \u201cThere are pictures of the truth\u201d has become a community residents to understand and master the new dynamic community, image information service has become a new information service. However, there are two major problems in image information service. Firstly, the underlying eigenvalues extracted by current image feature extraction techniques are difficult for users to understand, and there is a semantic gap between the image content itself and the user\u2019s understanding; secondly, in community life of the image data increasing quickly, it is difficult to find their own interested image data. Aiming at the two problems, this paper proposes a unified image semantic scene model to express the image content. On this basis, a collaborative filtering recommendation model of fusion scene semantics is proposed. In the recommendation model, a comprehensiveness and accuracy user interest model is proposed to improve the recommendation quality. The results of the present study have achieved good results in the pilot cities of Wenzhou and Yan'an, and it is applied normally.", "which uses Recommendation Method ?", "Collaborative filtering", 985.0, 1008.0], ["Cloud computing has taken the limelight with respect to the present industry scenario due to its multi-tenant and pay-as-you-use models, where users need not bother about buying resources like hardware, software, infrastructure, etc. on an permanently basis. As much as the technological benefits, cloud computing also has its downside. By looking at its financial benefits, customers who cannot afford initial investments, choose cloud by compromising on its concerns, like security, performance, estimation, availability, etc. At the same time due to its risks, customers - relatively majority in number, avoid migration towards cloud. 
Considering this fact, performance and estimation are being the major critical factors for any application deployment in cloud environment; this paper brings the roadmap for an improved performance-centric cloud storage estimation approach, which is based on balanced PCTFree allocation technique for database systems deployment in cloud environment. Objective of this approach is to highlight the set of key activities that have to be jointly done by the database technical team and business users of the software system in order to perform an accurate analysis to arrive at estimation for sizing of the database. For the evaluation of this approach, an experiment has been performed through varied-size PCTFree allocations on an experimental setup with 100000 data records. The result of this experiment shows the impact of PCTFree configuration on database performance. Basis this fact, we propose an improved performance-centric cloud storage estimation approach in cloud. Further, this paper applies our improved performance-centric storage estimation approach on decision support system (DSS) as a case study.", "which Components ?", "Application", 733.0, 744.0], ["Digital transformation is an emerging trend in developing the way how the work is being done, and it is present in the private and public sector, in all industries and fields of work. Smart cities, as one of the concepts related to digital transformation, is usually seen as a matter of local governments, as it is their responsibility to ensure a better quality of life for the citizens. Some cities have already taken advantages of possibilities offered by the concept of smart cities, creating new values to all stakeholders interacting in the living city ecosystems, thus serving as examples of good practice, while others are still developing and growing on their intentions to become smart. This paper provides a structured literature analysis and investigates key scope, services and technologies related to smart cities and digital transformation as concepts of empowering social and collaboration interactions, in order to identify leading factors in most smart city initiatives.", "which Components ?", "Services", 778.0, 786.0], ["Digital transformation is an emerging trend in developing the way how the work is being done, and it is present in the private and public sector, in all industries and fields of work. Smart cities, as one of the concepts related to digital transformation, is usually seen as a matter of local governments, as it is their responsibility to ensure a better quality of life for the citizens. Some cities have already taken advantages of possibilities offered by the concept of smart cities, creating new values to all stakeholders interacting in the living city ecosystems, thus serving as examples of good practice, while others are still developing and growing on their intentions to become smart. This paper provides a structured literature analysis and investigates key scope, services and technologies related to smart cities and digital transformation as concepts of empowering social and collaboration interactions, in order to identify leading factors in most smart city initiatives.", "which Components ?", "Technologies", 791.0, 803.0], ["The digital transformation of our life changes the way we work, learn, communicate, and collaborate. Enterprises are presently transforming their strategy, culture, processes, and their information systems to become digital. 
The digital transformation deeply disrupts existing enterprises and economies. Digitization fosters the development of IT systems with many rather small and distributed structures, like Internet of Things, Microservices and mobile services. Since years a lot of new business opportunities appear using the potential of services computing, Internet of Things, mobile systems, big data with analytics, cloud computing, collaboration networks, and decision support. Biological metaphors of living and adaptable ecosystems provide the logical foundation for self-optimizing and resilient run-time environments for intelligent business services and adaptable distributed information systems with service-oriented enterprise architectures. This has a strong impact for architecting digital services and products following both a value-oriented and a service perspective. The change from a closed-world modeling world to a more flexible open-world composition and evolution of enterprise architectures defines the moving context for adaptable and high distributed systems, which are essential to enable the digital transformation. The present research paper investigates the evolution of Enterprise Architecture considering new defined value-oriented mappings between digital strategies, digital business models and an improved digital enterprise architecture.", "which Components ?", "Service perspective", 1069.0, 1088.0], ["The digital transformation of our life changes the way we work, learn, communicate, and collaborate. Enterprises are presently transforming their strategy, culture, processes, and their information systems to become digital. The digital transformation deeply disrupts existing enterprises and economies. Digitization fosters the development of IT systems with many rather small and distributed structures, like Internet of Things, Microservices and mobile services. Since years a lot of new business opportunities appear using the potential of services computing, Internet of Things, mobile systems, big data with analytics, cloud computing, collaboration networks, and decision support. Biological metaphors of living and adaptable ecosystems provide the logical foundation for self-optimizing and resilient run-time environments for intelligent business services and adaptable distributed information systems with service-oriented enterprise architectures. This has a strong impact for architecting digital services and products following both a value-oriented and a service perspective. The change from a closed-world modeling world to a more flexible open-world composition and evolution of enterprise architectures defines the moving context for adaptable and high distributed systems, which are essential to enable the digital transformation. The present research paper investigates the evolution of Enterprise Architecture considering new defined value-oriented mappings between digital strategies, digital business models and an improved digital enterprise architecture.", "which Components ?", "Perspective", 1077.0, 1088.0], ["Manufacturing industry based on steam know as Industry 1.0 is evolving to Industry 4.0 a digital ecosystem consisting of an interconnected automated system with real-time data. 
This paper investigates and proposes, how the digital ecosystem complemented with Enterprise Architecture practice will ensure the success of digital transformation.", "which Components ?", "Data", 171.0, 175.0], ["Cloud computing has taken the limelight with respect to the present industry scenario due to its multi-tenant and pay-as-you-use models, where users need not bother about buying resources like hardware, software, infrastructure, etc. on an permanently basis. As much as the technological benefits, cloud computing also has its downside. By looking at its financial benefits, customers who cannot afford initial investments, choose cloud by compromising on its concerns, like security, performance, estimation, availability, etc. At the same time due to its risks, customers - relatively majority in number, avoid migration towards cloud. Considering this fact, performance and estimation are being the major critical factors for any application deployment in cloud environment; this paper brings the roadmap for an improved performance-centric cloud storage estimation approach, which is based on balanced PCTFree allocation technique for database systems deployment in cloud environment. Objective of this approach is to highlight the set of key activities that have to be jointly done by the database technical team and business users of the software system in order to perform an accurate analysis to arrive at estimation for sizing of the database. For the evaluation of this approach, an experiment has been performed through varied-size PCTFree allocations on an experimental setup with 100000 data records. The result of this experiment shows the impact of PCTFree configuration on database performance. Basis this fact, we propose an improved performance-centric cloud storage estimation approach in cloud. Further, this paper applies our improved performance-centric storage estimation approach on decision support system (DSS) as a case study.", "which Components ?", "Business", 1122.0, 1130.0], ["Digital transformation is an emerging trend in developing the way how the work is being done, and it is present in the private and public sector, in all industries and fields of work. Smart cities, as one of the concepts related to digital transformation, is usually seen as a matter of local governments, as it is their responsibility to ensure a better quality of life for the citizens. Some cities have already taken advantages of possibilities offered by the concept of smart cities, creating new values to all stakeholders interacting in the living city ecosystems, thus serving as examples of good practice, while others are still developing and growing on their intentions to become smart. This paper provides a structured literature analysis and investigates key scope, services and technologies related to smart cities and digital transformation as concepts of empowering social and collaboration interactions, in order to identify leading factors in most smart city initiatives.", "which Components ?", "Scope", 771.0, 776.0], ["The article focuses on the range of problems arising on the way of innovative technologies implementation in the structure of existing cities. The concept of intellectualization of historic cities, as illustrated by Samara, is offered, which was chosen for the realization of a large Russian project \u201cSmart City. Successful Region\u201d in 2018. 
One of the problems was to study the experience of designing information hubs with the purpose of determining their priority functional directions. The following typology of information hubs was made: scientific and research ones, scientific and technical ones, innovative and cultural ones, cultural and informational ones, scientific and informational ones, technological ones, centres for data processing, scientific centres with experimental and production laboratories. As a result of the conducted research, a suggestion on smart city\u2019s infrastructure is developed, and the final levels of innovative technologies implementation in the structure of historic territories are determined. A model suggestion on the formation of a scientific and project centre with experimental and production laboratories branded as \u201cPark-plant\u201d is developed. Smart (as well as real) city technologies, which are supposed to be placed on the territory of \u201cPark-plant\u201d, are systematized. The organizational structure of the promotion of model projects is offered according to the concept of \u201ctriad of development agents\u201d, in which the flagship university \u2013 urban community \u2013 park-plant interact within the project programme. The effects of the development of the territory of the historic city centre being renovated are enumerated.", "which Components ?", "Information hub", NaN, NaN], ["Cloud computing has taken the limelight with respect to the present industry scenario due to its multi-tenant and pay-as-you-use models, where users need not bother about buying resources like hardware, software, infrastructure, etc. on a permanent basis. As much as the technological benefits, cloud computing also has its downside. By looking at its financial benefits, customers who cannot afford initial investments choose cloud by compromising on its concerns, like security, performance, estimation, availability, etc. At the same time due to its risks, customers - relatively the majority in number - avoid migration towards cloud. Considering this fact, performance and estimation are the major critical factors for any application deployment in cloud environment; this paper presents the roadmap for an improved performance-centric cloud storage estimation approach, which is based on balanced PCTFree allocation technique for database systems deployment in cloud environment. The objective of this approach is to highlight the set of key activities that have to be jointly done by the database technical team and business users of the software system in order to perform an accurate analysis to arrive at estimation for sizing of the database. For the evaluation of this approach, an experiment has been performed through varied-size PCTFree allocations on an experimental setup with 100000 data records. The result of this experiment shows the impact of PCTFree configuration on database performance. Based on this fact, we propose an improved performance-centric cloud storage estimation approach in cloud. Further, this paper applies our improved performance-centric storage estimation approach on decision support system (DSS) as a case study.", "which Components ?", "Data", 1400.0, 1404.0], ["The literature on managerial competences has not sufficiently addressed the value contents of competences and the generic features of public managers. 
This article presents a model of five competence areas: task competence, professional competence in substantive policy field, professional competence in administration, political competence and ethical competence. Each competence area includes both value and instrumental competences. Relatively permanent value competences are understood as commitments. The assumptions of new public management question not only the instrumental competences but also the commitments of traditional public service. The efficacy of human resource development is limited in learning new commitments. Apart from structural reforms that speed up the process, the friction in the change of commitments is seen as slow cultural change in many public organisations. This is expressed by transitional tensions in task commitment, professional commitment, political commitment, and ethical commitment of public managers.", "which has character traits ?", "commitment", 945.0, 955.0], ["Our world and our lives are changing in many ways. Communication, networking, and computing technologies are among the most influential enablers that shape our lives today. Digital data and connected worlds of physical objects, people, and devices are rapidly changing the way we work, travel, socialize, and interact with our surroundings, and they have a profound impact on different domains, such as healthcare, environmental monitoring, urban systems, and control and management applications, among several other areas. Cities currently face an increasing demand for providing services that can have an impact on people's everyday lives. The CityPulse framework supports smart city service creation by means of a distributed system for semantic discovery, data analytics, and interpretation of large-scale (near-)real-time Internet of Things data and social media data streams. The goal is to break away from silo applications and enable cross-domain data integration. The CityPulse framework integrates multimodal, mixed quality, uncertain and incomplete data to create reliable, dependable information and continuously adapts data processing techniques to meet the quality of information requirements from end users. Different than existing solutions that mainly offer unified views of the data, the CityPulse framework is also equipped with powerful data analytics modules that perform intelligent data aggregation, event detection, quality assessment, contextual filtering, and decision support. This paper presents the framework, describes its components, and demonstrates how they interact to support easy development of custom-made applications for citizens. The benefits and the effectiveness of the framework are demonstrated in a use-case scenario implementation presented in this paper.", "which Ontology domains ?", "Event", 1422.0, 1427.0], ["With the advances in new-generation information technologies, especially big data and digital twin, smart manufacturing is becoming the focus of global manufacturing transformation and upgrading. Intelligence comes from data. Integrated analysis for the manufacturing big data is beneficial to all aspects of manufacturing. Besides, the digital twin paves a way for the cyber-physical integration of manufacturing, which is an important bottleneck to achieve smart manufacturing. In this paper, the big data and digital twin in manufacturing are reviewed, including their concept as well as their applications in product design, production planning, manufacturing, and predictive maintenance. 
On this basis, the similarities and differences between big data and digital twin are compared from the general and data perspectives. Since the big data and digital twin can be complementary, how they can be integrated to promote smart manufacturing is discussed.", "which has environment ?", "General", 797.0, 804.0], ["Future generations of NASA and U.S. Air Force vehicles will require lighter mass while being subjected to higher loads and more extreme service conditions over longer time periods than the present generation. Current approaches for certification, fleet management and sustainment are largely based on statistical distributions of material properties, heuristic design philosophies, physical testing and assumed similitude between testing and operational conditions and will likely be unable to address these extreme requirements. To address the shortcomings of conventional approaches, a fundamental paradigm shift is needed. This paradigm shift, the Digital Twin, integrates ultra-high fidelity simulation with the vehicle's on-board integrated vehicle health management system, maintenance history and all available historical and fleet data to mirror the life of its flying twin and enable unprecedented levels of safety and reliability.", "which Has health ?", "management", 253.0, 263.0], ["Technological advances have driven dramatic increases in industrial productivity since the dawn of the Industrial Revolution. The steam engine powered factories in the nineteenth century, electrification led to mass production in the early part of the twentieth century, and industry became automated in the 1970s. In the decades that followed, however, industrial technological advancements were only incremental, especially compared with the breakthroughs that transformed IT, mobile communications, and e-commerce.", "which has performance ?", "Part", 240.0, 244.0], ["Traditional sources of information for small and rural communities have been disappearing over the past decade. A lot of the information and discussion related to such local geographic areas is now scattered across websites of numerous local organizations, individual blogs, social media and other user-generated media (YouTube, Flickr). It is important to capture this information and make it easily accessible to local citizens to facilitate citizen engagement and social interaction. Furthermore, a system that has location-based support can provide local citizens with an engaging way to interact with this information and identify the local issues most relevant to them. A location-based interface for a local geographic area enables people to identify and discuss local issues related to specific locations such as a particular street or a road construction site. We created an information aggregator, called the Virtual Town Square (VTS), to support and facilitate local discussion and interaction. We created a location-based interface for users to access the information collected by VTS. In this paper, we discuss focus group interviews with local citizens that motivated our design of a local news and information aggregator to facilitate civic participation. We then discuss the unique design challenges in creating such a local news aggregator and our design approach to create a local information ecosystem. 
We describe VTS and the initial evaluation and feedback we received from local users and through weekly meetings with community partners.", "which has Implementation level ?", "System", 502.0, 508.0], ["Abstract SARS-CoV-2 emerged in China at the end of 2019 and has rapidly become a pandemic with roughly 2.7 million recorded COVID-19 cases and greater than 189,000 recorded deaths by April 23rd, 2020 (www.WHO.org). There are no FDA approved antivirals or vaccines for any coronavirus, including SARS-CoV-2. Current treatments for COVID-19 are limited to supportive therapies and off-label use of FDA approved drugs. Rapid development and human testing of potential antivirals is greatly needed. A quick way to test compounds with potential antiviral activity is through drug repurposing. Numerous drugs are already approved for human use and subsequently there is a good understanding of their safety profiles and potential side effects, making them easier to fast-track to clinical studies in COVID-19 patients. Here, we present data on the antiviral activity of 20 FDA approved drugs against SARS-CoV-2 that also inhibit SARS-CoV and MERS-CoV. We found that 17 of these inhibit SARS-CoV-2 at a range of IC50 values at non-cytotoxic concentrations. We directly follow up with seven of these to demonstrate all are capable of inhibiting infectious SARS-CoV-2 production. Moreover, we have evaluated two of these, chloroquine and chlorpromazine, in vivo using a mouse-adapted SARS-CoV model and found both drugs protect mice from clinical disease.", "which has role ?", "drug", 696.0, 700.0], ["We describe here the design, synthesis, molecular modeling, and biological evaluation of a series of small molecule, nonpeptide inhibitors of SARS-CoV PLpro. Our initial lead compound was identified via high-throughput screening of a diverse chemical library. We subsequently carried out structure-activity relationship studies and optimized the lead structure to potent inhibitors that have shown antiviral activity against SARS-CoV infected Vero E6 cells. Upon the basis of the X-ray crystal structure of inhibitor 24-bound to SARS-CoV PLpro, a drug design template was created. Our structure-based modification led to the design of a more potent inhibitor, 2 (enzyme IC(50) = 0.46 microM; antiviral EC(50) = 6 microM). Interestingly, its methylamine derivative, 49, displayed good enzyme inhibitory potency (IC(50) = 1.3 microM) and the most potent SARS antiviral activity (EC(50) = 5.2 microM) in the series. We have carried out computational docking studies and generated a predictive 3D-QSAR model for SARS-CoV PLpro inhibitors.", "which has role ?", "small molecule", 101.0, 115.0], ["The ongoing COVID-19 pandemic has stimulated a search for reservoirs and species potentially involved in back and forth transmission. Studies have postulated bats as one of the key reservoirs of coronaviruses (CoVs), and different CoVs have been detected in bats. So far, CoVs have not been found in bats in Sweden and we therefore tested whether they carry CoVs. In summer 2020, we sampled a total of 77 adult bats comprising 74 Myotis daubentonii, 2 Pipistrellus pygmaeus, and 1 M. mystacinus bats in southern Sweden. Blood, saliva and feces were sampled, processed and subjected to a virus next-generation sequencing target enrichment protocol. An Alphacoronavirus was detected and sequenced from feces of a M. daubentonii adult female bat. Phylogenetic analysis of the almost complete virus genome revealed a close relationship with Finnish and Danish strains. 
This was the first finding of a CoV in bats in Sweden, and bats may play a role in the transmission cycle of CoVs in Sweden. Focused and targeted surveillance of CoVs in bats is warranted, with consideration of potential conflicts between public health and nature conservation required as many bat species in Europe are threatened and protected.", "which Has Host ?", "bats", 158.0, 162.0], ["The ecology of ebolaviruses is still poorly understood and the role of bats in outbreaks needs to be further clarified. Straw-colored fruit bats (Eidolon helvum) are the most common fruit bats in Africa and antibodies to ebolaviruses have been documented in this species. Between December 2018 and November 2019, samples were collected at approximately monthly intervals in roosting and feeding sites from 820 bats from an Eidolon helvum colony. Dried blood spots (DBS) were tested for antibodies to Zaire, Sudan, and Bundibugyo ebolaviruses. The proportion of samples reactive with GP antigens increased significantly with age from 0\u20139/220 (0\u20134.1%) in juveniles to 26\u2013158/225 (11.6\u201370.2%) in immature adults and 10\u2013225/372 (2.7\u201360.5%) in adult bats. Antibody responses were lower in lactating females. Viral RNA was not detected in 456 swab samples collected from 152 juvenile and 214 immature adult bats. Overall, our study shows that antibody levels increase in young bats suggesting that seroconversion to Ebola or related viruses occurs in older juvenile and immature adult bats. Multiple year monitoring would be needed to confirm this trend. Knowledge of the periods of the year with the highest risk of Ebolavirus circulation can guide the implementation of strategies to mitigate spill-over events.", "which Has Host ?", "bats", 71.0, 75.0], ["The ongoing COVID-19 pandemic has stimulated a search for reservoirs and species potentially involved in back and forth transmission. Studies have postulated bats as one of the key reservoirs of coronaviruses (CoVs), and different CoVs have been detected in bats. So far, CoVs have not been found in bats in Sweden and we therefore tested whether they carry CoVs. In summer 2020, we sampled a total of 77 adult bats comprising 74 Myotis daubentonii, 2 Pipistrellus pygmaeus, and 1 M. mystacinus bats in southern Sweden. Blood, saliva and feces were sampled, processed and subjected to a virus next-generation sequencing target enrichment protocol. An Alphacoronavirus was detected and sequenced from feces of a M. daubentonii adult female bat. Phylogenetic analysis of the almost complete virus genome revealed a close relationship with Finnish and Danish strains. This was the first finding of a CoV in bats in Sweden, and bats may play a role in the transmission cycle of CoVs in Sweden. Focused and targeted surveillance of CoVs in bats is warranted, with consideration of potential conflicts between public health and nature conservation required as many bat species in Europe are threatened and protected.", "which Has Host ?", "Myotis", 430.0, 436.0], ["The ongoing COVID-19 pandemic has stimulated a search for reservoirs and species potentially involved in back and forth transmission. Studies have postulated bats as one of the key reservoirs of coronaviruses (CoVs), and different CoVs have been detected in bats. So far, CoVs have not been found in bats in Sweden and we therefore tested whether they carry CoVs. In summer 2020, we sampled a total of 77 adult bats comprising 74 Myotis daubentonii, 2 Pipistrellus pygmaeus, and 1 M. 
mystacinus bats in southern Sweden. Blood, saliva and feces were sampled, processed and subjected to a virus next-generation sequencing target enrichment protocol. An Alphacoronavirus was detected and sequenced from feces of a M. daubentonii adult female bat. Phylogenetic analysis of the almost complete virus genome revealed a close relationship with Finnish and Danish strains. This was the first finding of a CoV in bats in Sweden, and bats may play a role in the transmission cycle of CoVs in Sweden. Focused and targeted surveillance of CoVs in bats is warranted, with consideration of potential conflicts between public health and nature conservation required as many bat species in Europe are threatened and protected.", "which Has Host ?", "myotis", 430.0, 436.0], ["The ongoing COVID-19 pandemic has stimulated a search for reservoirs and species potentially involved in back and forth transmission. Studies have postulated bats as one of the key reservoirs of coronaviruses (CoVs), and different CoVs have been detected in bats. So far, CoVs have not been found in bats in Sweden and we therefore tested whether they carry CoVs. In summer 2020, we sampled a total of 77 adult bats comprising 74 Myotis daubentonii, 2 Pipistrellus pygmaeus, and 1 M. mystacinus bats in southern Sweden. Blood, saliva and feces were sampled, processed and subjected to a virus next-generation sequencing target enrichment protocol. An Alphacoronavirus was detected and sequenced from feces of a M. daubentonii adult female bat. Phylogenetic analysis of the almost complete virus genome revealed a close relationship with Finnish and Danish strains. This was the first finding of a CoV in bats in Sweden, and bats may play a role in the transmission cycle of CoVs in Sweden. Focused and targeted surveillance of CoVs in bats is warranted, with consideration of potential conflicts between public health and nature conservation required as many bat species in Europe are threatened and protected.", "which Has Host ?", "Myotis daubentonii", 430.0, 448.0], ["Seven new chromones, siamchromones A-G (1-7), and 12 known chromones (8-19) were isolated from the stems of Cassia siamea. Compounds 1-19 were evaluated for their antitobacco mosaic virus (anti-TMV) and anti-HIV-1 activities. Compound 6 showed antitobacco mosaic virus (anti-TMV) activity with an inhibition rate of 35.3% and IC50 value of 31.2 \u03bcM, which is higher than that of the positive control, ningnamycin. Compounds 1, 10, 13, and 16 showed anti-TMV activities with inhibition rates above 10%. Compounds 4, 6, 13, and 19 showed anti-HIV-1 activities with therapeutic index values above 50.", "which Class of compound ?", "Chromone", NaN, NaN], ["The association of heptamethine cyanine cation 1(+) with various counterions A (A = Br(-), I(-), PF(6)(-), SbF(6)(-), B(C(6)F(5))(4)(-), TRISPHAT) was realized. The six different ion pairs have been characterized by X-ray diffraction, and their absorption properties were studied in polar (DCM) and apolar (toluene) solvents. A small, hard anion (Br(-)) is able to strongly polarize the polymethine chain, resulting in the stabilization of an asymmetric dipolar-like structure in the crystal and in nondissociating solvents. On the contrary, in more polar solvents or when it is associated with a bulky soft anion (TRISPHAT or B(C(6)F(5))(4)(-)), the same cyanine dye adopts preferentially the ideal polymethine state. 
The solid-state and solution absorption properties of heptamethine dyes are therefore strongly correlated to the nature of the counterion.", "which Class of compound ?", "Heptamethine cyanine ", 19.0, 40.0], ["To identify possible candidates for progression towards clinical studies against SARS-CoV-2, we screened a well-defined collection of 5632 compounds including 3488 compounds which have undergone clinical investigations (marketed drugs, phases 1-3, and withdrawn) across 600 indications. Compounds were screened for their inhibition of virus-induced cytotoxicity using the human epithelial colorectal adenocarcinoma cell line Caco-2 and a SARS-CoV-2 isolate. The primary screen of 5632 compounds gave 271 hits. A total of 64 compounds with IC50 <20 \u00b5M were identified, including 19 compounds with IC50 < 1 \u00b5M. Of this confirmed hit population, 90% have not yet been previously reported as active against SARS-CoV-2 in in-vitro cell assays. Some 37 of the actives are launched drugs, 19 are in phases 1-3 and 10 pre-clinical. Several inhibitors were associated with modulation of host pathways including kinase signaling, P53 activation, ubiquitin pathways and PDE activity modulation, while long chain acyl transferases were effective viral inhibitors.", "which has mode of action ?", "inhibition", 322.0, 332.0], ["The rising threat of pandemic viruses, such as SARS-CoV-2, requires development of new preclinical discovery platforms that can more rapidly identify therapeutics that are active in vitro and also translate in vivo. Here we show that human organ-on-a-chip (Organ Chip) microfluidic culture devices lined by highly differentiated human primary lung airway epithelium and endothelium can be used to model virus entry, replication, strain-dependent virulence, host cytokine production, and recruitment of circulating immune cells in response to infection by respiratory viruses with great pandemic potential. We provide a first demonstration of drug repurposing by using oseltamivir in influenza A virus-infected organ chip cultures and show that co-administration of the approved anticoagulant drug, nafamostat, can double oseltamivir\u2019s therapeutic time window. With the emergence of the COVID-19 pandemic, the Airway Chips were used to assess the inhibitory activities of approved drugs that showed inhibition in traditional cell culture assays only to find that most failed when tested in the Organ Chip platform. When administered in human Airway Chips under flow at a clinically relevant dose, one drug \u2013 amodiaquine \u2013 significantly inhibited infection by a pseudotyped SARS-CoV-2 virus. Proof of concept was provided by showing that amodiaquine and its active metabolite (desethylamodiaquine) also significantly reduce viral load in both direct infection and animal-to-animal transmission models of native SARS-CoV-2 infection in hamsters. These data highlight the value of Organ Chip technology as a more stringent and physiologically relevant platform for drug repurposing, and suggest that amodiaquine should be considered for future clinical testing.", "which has mode of action ?", "inhibition", 998.0, 1008.0], ["ABSTRACT This study has investigated farm households' simultaneous use of social networks, field extension, traditional media, and modern information and communication technologies (ICTs) to access information on cotton crop production. The study was based on a field survey, conducted in Punjab, Pakistan. 
Data were collected from 399 cotton farm households using the multistage sampling technique. Important combinations of information sources were found in terms of their simultaneous use to access information. The study also examined the factors influencing the use of various available information sources. A multivariate probit model was used considering the correlation among the use of social networks, field extension, traditional media, and modern ICTs. The findings indicated the importance of different socioeconomic and institutional factors affecting farm households' use of available information sources on cotton production. Important policy conclusions are drawn based on findings.", "which data collection ?", " Field survey", 261.0, 274.0], ["ABSTRACT This study has investigated farm households' simultaneous use of social networks, field extension, traditional media, and modern information and communication technologies (ICTs) to access information on cotton crop production. The study was based on a field survey, conducted in Punjab, Pakistan. Data were collected from 399 cotton farm households using the multistage sampling technique. Important combinations of information sources were found in terms of their simultaneous use to access information. The study also examined the factors influencing the use of various available information sources. A multivariate probit model was used considering the correlation among the use of social networks, field extension, traditional media, and modern ICTs. The findings indicated the importance of different socioeconomic and institutional factors affecting farm households' use of available information sources on cotton production. Important policy conclusions are drawn based on findings.", "which Econometric model ?", "Multivariate probit model", 615.0, 640.0], ["Abstract Purpose: The paper analyzes factors that affect the likelihood of adoption of different agriculture-related information sources by farmers. Design/Methodology/Approach: The paper links the theoretical understanding of the existing multiple sources of information that farmers use, with the empirical model to analyze the factors that affect the farmer's adoption of different agriculture-related information sources. The analysis is done using a multivariate probit model and primary survey data of 1,200 farmer households of five Indo-Gangetic states of India, covering 120 villages. Findings: The results of the study highlight that farmer's age, education level and farm size influence farmer's behaviour in selecting different sources of information. The results show that farmers use multiple information sources that may be complementary or substitutes to each other, and this also implies that any single source does not satisfy all information needs of the farmer. Practical implication: If we understand the likelihood of farmer's choice of source of information, then direction can be provided and policies can be developed to provide information through those sources in targeted regions with the most effective impact. Originality/Value: Information plays a key role in a farmer's life by enhancing their knowledge and strengthening their decision-making ability. 
Farmers use multiple sources of information as no one source is sufficient in itself.", "which Econometric model ?", "Multivariate probit model", 454.0, 479.0], ["Precision farming information demanded by cotton producers is provided by various suppliers, including consultants, farm input dealerships, University Extension systems, and media sources. Factors associated with the decisions to select among information sources to search for precision farming information are analyzed using a multivariate probit regression accounting for correlation among the different selection decisions. Factors influencing these decisions are age, education, and income. These findings should be valuable to precision farming information providers who may be able to better meet their target clientele needs.", "which Econometric model ?", "Multivariate probit regression", 336.0, 366.0], ["A radio-frequency atmospheric pressure argon plasma jet is used for the inactivation of bacteria (Pseudomonas aeruginosa) in solutions. The source is characterized by measurements of power dissipation, gas temperature, absolute UV irradiance as well as mass spectrometry measurements of emitted ions. The plasma-induced liquid chemistry is studied by performing liquid ion chromatography and hydrogen peroxide concentration measurements on treated distilled water samples. Additionally, a quantitative estimation of an extensive liquid chemistry induced by the plasma is made by solution kinetics calculations. The role of the different active components of the plasma is evaluated based on either measurements, as mentioned above, or estimations based on published data of measurements of those components. For the experimental conditions being considered in this work, it is shown that the bactericidal effect can be solely ascribed to plasma-induced liquid chemistry, leading to the production of stable and transient chemical species. It is shown that HNO2, ONOO \u2212 and H2O2 are present in the liquid phase in similar quantities to concentrations which are reported in the literature to cause bacterial inactivation. The importance of plasma-induced chemistry at the gas\u2010liquid interface is illustrated and discussed in detail. (Some figures may appear in colour only in the online journal)", "which Intended_Application ?", "Bacterial inactivation", 1196.0, 1218.0], ["Single-shot digital holographic microscopy with an adjustable field of view and magnification was demonstrated by using a tabletop 32.8 nm soft-x-ray laser. The holographic images were reconstructed with a two-dimensional fast-Fourier-transform algorithm, and a new configuration of imaging was developed to overcome the pixel-size limit of the recording device without reducing the effective NA. The image of an atomic-force-microscope cantilever was reconstructed with a lateral resolution of 480 nm, and the phase contrast image of a 20 nm carbon mesh foil demonstrated that profiles of sample thickness can be reconstructed with few-nanometers uncertainty. The ultrashort x-ray pulse duration combined with single-shot capability offers great advantage for flash imaging of delicate samples.", "which Research objective ?", "Holographic microscopy", 20.0, 42.0], ["Neural-symbolic systems combine the strengths of neural networks and symbolic formalisms. In this paper, we introduce a neural-symbolic system which combines restricted Boltzmann machines and probabilistic semi-abstract argumentation. 
We propose to train networks on argument labellings explaining the data, so that any sampled data outcome is associated with an argument labelling. Argument labellings are integrated as constraints within restricted Boltzmann machines, so that the neural networks are used to learn probabilistic dependencies amongst argument labels. Given a dataset and an argumentation graph as prior knowledge, for every example/case K in the dataset, we use a so-called K-maxconsistent labelling of the graph, and an explanation of case K refers to a K-maxconsistent labelling of the given argumentation graph. The abilities of the proposed system to predict correct labellings were evaluated and compared with standard machine learning techniques. Experiments revealed that such argumentation Boltzmann machines can outperform other classification models, especially in noisy settings.", "which has Input ?", " an argumentation graph", 596.0, 619.0], ["In this paper, we present electronic participatory budgeting (ePB) as a novel application domain for recommender systems. On public data from the ePB platforms of three major US cities - Cambridge, Miami and New York City-, we evaluate various methods that exploit heterogeneous sources and models of user preferences to provide personalized recommendations of citizen proposals. We show that depending on characteristics of the cities and their participatory processes, particular methods are more effective than others for each city. This result, together with open issues identified in the paper, call for further research in the area.", "which has Recommended items ?", "Citizen proposals", 361.0, 378.0], ["This paper aims at studying the exploitation of intelligent agents for supporting citizens to access e-government services. To this purpose, it proposes a multi-agent system capable of suggesting to the users the most interesting services for them; specifically, these suggestions are computed by taking into account both their exigencies/preferences and the capabilities of the devices they are currently exploiting. The paper first describes the proposed system and, then, reports various experimental results. Finally, it presents a comparison between our system and other related ones already presented in the literature.", "which has Recommended items ?", "Government services", 103.0, 122.0], ["Neuro-symbolic artificial intelligence is a novel area of AI research which seeks to combine traditional rules-based AI approaches with modern deep learning techniques. Neurosymbolic models have already demonstrated the capability to outperform state-of-the-art deep learning models in domains such as image and video reasoning. They have also been shown to obtain high accuracy with significantly less training data than traditional models. Due to the recency of the field\u2019s emergence and relative sparsity of published results, the performance characteristics of these models are not well understood. In this paper, we describe and analyze the performance characteristics of three recent neuro-symbolic models. We find that symbolic models have less potential parallelism than traditional neural models due to complex control flow and low-operational-intensity operations, such as scalar multiplication and tensor addition. However, the neural aspect of computation dominates the symbolic part in cases where they are clearly separable. 
We also find that data movement poses a potential bottleneck, as it does in many ML workloads.", "which Workload taxonomy ?", "Data Movement", 1057.0, 1070.0], ["We propose a new dataset for the evaluation of food recognition algorithms that can be used in dietary monitoring applications. Each image depicts a real canteen tray with dishes and foods arranged in different ways. Each tray contains multiple instances of food classes. The dataset contains 1027 canteen trays for a total of 3616 food instances belonging to 73 food classes. The food on the tray images has been manually segmented using carefully drawn polygonal boundaries. We have benchmarked the dataset by designing an automatic tray analysis pipeline that takes a tray image as input, finds the regions of interest, and predicts for each region the corresponding food class. We have experimented with three different classification strategies using also several visual descriptors. We achieve about 79% of food and tray recognition accuracy using convolutional-neural-networks-based features. The dataset, as well as the benchmark framework, are available to the research community.", "which Task ?", "Food recognition", 47.0, 63.0], ["We introduce the first visual dataset of fast foods with a total of 4,545 still images, 606 stereo pairs, 303 360\u00b0 videos for structure from motion, and 27 privacy-preserving videos of eating events of volunteers. This work was motivated by research on fast food recognition for dietary assessment. The data was collected by obtaining three instances of 101 foods from 11 popular fast food chains, and capturing images and videos in both restaurant conditions and a controlled lab setting. We benchmark the dataset using two standard approaches, color histogram and bag of SIFT features in conjunction with a discriminative classifier. Our dataset and the benchmarks are designed to stimulate research in this area and will be released freely to the research community.", "which Task ?", "Food recognition", 258.0, 274.0], ["Computer vision-based food recognition could be used to estimate a meal's carbohydrate content for diabetic patients. This study proposes a methodology for automatic food recognition, based on the bag-of-features (BoF) model. An extensive technical investigation was conducted for the identification and optimization of the best performing components involved in the BoF architecture, as well as the estimation of the corresponding parameters. For the design and evaluation of the prototype system, a visual dataset with nearly 5000 food images was created and organized into 11 classes. The optimized system computes dense local features, using the scale-invariant feature transform on the HSV color space, builds a visual dictionary of 10000 visual words by using the hierarchical k-means clustering and finally classifies the food images with a linear support vector machine classifier. The system achieved classification accuracy of the order of 78%, thus proving the feasibility of the proposed approach in a very challenging image dataset.", "which Task ?", "Food recognition", 22.0, 38.0], ["We propose a mobile food recognition system, the purposes of which are estimating the calories and nutrition of foods and recording a user's eating habits. Since all the image recognition processes are performed on a smart-phone, the system does not need to send images to a server and runs on an ordinary smartphone in a real-time way. 
To recognize food items, a user draws bounding boxes by touching the screen first, and then the system starts food item recognition within the indicated bounding boxes. To recognize them more accurately, we segment each food item region by GrabCut, extract a color histogram and SURF-based bag-of-features, and finally classify it into one of the fifty food categories with linear SVM and fast \u03c7\u00b2 kernel. In addition, the system estimates the direction of food regions where the higher SVM output score is expected to be obtained, and shows it as an arrow on the screen in order to ask a user to move a smartphone camera. This recognition process is performed repeatedly about once a second. We implemented this system as an Android smartphone application so as to use multiple CPU cores effectively for real-time recognition. In the experiments, we have achieved an 81.55% classification rate for the top 5 category candidates when the ground-truth bounding boxes are given. In addition, we obtained positive evaluation by user study compared to the food recording system without object recognition.", "which Task ?", "Food recognition", 20.0, 36.0], ["The aims of XML data conversion to ontologies are the indexing, integration and enrichment of existing ontologies with knowledge acquired from these sources. The contribution of this paper consists in providing a classification of the approaches used for the conversion of XML documents into OWL ontologies. This classification underlines the usage profile of each conversion method, providing a clear description of the advantages and drawbacks belonging to each method. Hence, this paper focuses on two main processes, which are ontology enrichment and ontology population using XML data. Ontology enrichment is related to the schema of the ontology (TBox), and ontology population is related to an individual (Abox). In addition, the ontologies described in these methods are based on formal languages of the Semantic Web such as OWL (Ontology Web Language) or RDF (Resource Description Framework). These languages are formal because the semantics are formally defined and take advantage of the Description Logics. In contrast, XML data sources are without formal semantics. The XML language is used to store, export and share data between processes able to process the specific data structure. However, even if the semantics is not explicitly expressed, data structure contains the universe of discourse by using a qualified vocabulary regarding a consensual agreement. In order to formalize this semantics, the OWL language provides rich logical constraints. Therefore, these logical constraints are evolved in the transformation of XML documents into OWL documents, allowing the enrichment and the population of the target ontology. To design such a transformation, the current research field establishes connections between OWL constructs (classes, predicates, simple or complex data types, etc.) and XML constructs (elements, attributes, element lists, etc.). Two different approaches for the transformation process are exposed. The instance approaches are based on XML documents without any schema associated. The validation approaches are based on the XML schema and document validated by the associated schema. The second approaches benefit from the schema definition to provide automated transformations with logic constraints. 
Both approaches are discussed in the text.", "which Learning purpose ?", "Ontology enrichment", 531.0, 550.0], ["The aims of XML data conversion to ontologies are the indexing, integration and enrichment of existing ontologies with knowledge acquired from these sources. The contribution of this paper consists in providing a classification of the approaches used for the conversion of XML documents into OWL ontologies. This classification underlines the usage profile of each conversion method, providing a clear description of the advantages and drawbacks belonging to each method. Hence, this paper focuses on two main processes, which are ontology enrichment and ontology population using XML data. Ontology enrichment is related to the schema of the ontology (TBox), and ontology population is related to an individual (Abox). In addition, the ontologies described in these methods are based on formal languages of the Semantic Web such as OWL (Ontology Web Language) or RDF (Resource Description Framework). These languages are formal because the semantics are formally defined and take advantage of the Description Logics. In contrast, XML data sources are without formal semantics. The XML language is used to store, export and share data between processes able to process the specific data structure. However, even if the semantics is not explicitly expressed, data structure contains the universe of discourse by using a qualified vocabulary regarding a consensual agreement. In order to formalize this semantics, the OWL language provides rich logical constraints. Therefore, these logical constraints are evolved in the transformation of XML documents into OWL documents, allowing the enrichment and the population of the target ontology. To design such a transformation, the current research field establishes connections between OWL constructs (classes, predicates, simple or complex data types, etc.) and XML constructs (elements, attributes, element lists, etc.). Two different approaches for the transformation process are exposed. The instance approaches are based on XML documents without any schema associated. The validation approaches are based on the XML schema and document validated by the associated schema. The second approaches benefit from the schema definition to provide automated transformations with logic constraints. Both approaches are discussed in the text.", "which Learning purpose ?", "Ontology population", 555.0, 574.0], ["With the increase in smart devices and abundance of video contents, efficient techniques for the indexing, analysis and retrieval of videos are becoming more and more desirable. Improved indexing and automated analysis of millions of videos could be accomplished by getting videos tagged automatically. A lot of existing methods fail to precisely tag videos because of their lack of ability to capture the video context. The context in a video represents the interactions of objects in a scene and their overall meaning. In this work, we propose a novel approach that integrates the video scene ontology with CNN (Convolutional Neural Network) for improved video tagging. Our method captures the content of a video by extracting the information from individual key frames. The key frames are then fed to a CNN based deep learning model to train its parameters. The trained parameters are used to generate the most frequent tags. Highly frequent tags are used to summarize the input video. 
The proposed technique is benchmarked on the most widely used dataset of video activities, namely, UCF-101. Our method managed to achieve an overall accuracy of 99.8% with an F1-score of 96.2%.", "which Learning purpose ?", "Improved video tagging", 648.0, 670.0], ["Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In the medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction, (3) the local ontology modeling, and (4) the global ontology construction based on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists of extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their corresponding features (shape, size, form, etc.) discovered in the previous step. The last module consists of merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.", "which Learning purpose ?", "Ontology construction", 733.0, 754.0], ["Ontologies play an increasingly important role in Knowledge Management. One of the main problems associated with ontologies is that they need to be constructed and maintained. Manual construction of larger ontologies is usually not feasible within companies because of the effort and costs required. Therefore, a semi-automatic approach to ontology construction and maintenance is what everybody is wishing for. The paper presents a framework for semi-automatically learning ontologies from domain-specific texts by applying machine learning techniques. The TEXT-TO-ONTO framework integrates manual engineering facilities to follow a balanced cooperative modelling paradigm.", "which Learning purpose ?", "Ontology construction", 340.0, 361.0], ["Learning ontologies requires the acquisition of relevant domain concepts and taxonomic, as well as non-taxonomic, relations. In this chapter, we present a methodology for automatic ontology enrichment and document annotation with concepts and relations of an existing domain core ontology. Natural language definitions from available glossaries in a given domain are processed and regular expressions are applied to identify general-purpose and domain-specific relations. We evaluate the methodology performance in extracting hypernymy and non-taxonomic relations. To this end, we annotated and formalized a relevant fragment of the glossary of Art and Architecture (AAT) with a set of 10 relations (plus the hypernymy relation) defined in the CRM CIDOC cultural heritage core ontology, a recent W3C standard. 
Finally, we assessed the generality of the approach on a set of web pages from the domains of history and biography.", "which Learning purpose ?", "Ontology enrichment", 181.0, 200.0], ["The aims of XML data conversion to ontologies are the indexing, integration and enrichment of existing ontologies with knowledge acquired from these sources. The contribution of this paper consists in providing a classification of the approaches used for the conversion of XML documents into OWL ontologies. This classification underlines the usage profile of each conversion method, providing a clear description of the advantages and drawbacks belonging to each method. Hence, this paper focuses on two main processes, which are ontology enrichment and ontology population using XML data. Ontology enrichment is related to the schema of the ontology (TBox), and ontology population is related to an individual (Abox). In addition, the ontologies described in these methods are based on formal languages of the Semantic Web such as OWL (Ontology Web Language) or RDF (Resource Description Framework). These languages are formal because the semantics are formally defined and take advantage of the Description Logics. In contrast, XML data sources are without formal semantics. The XML language is used to store, export and share data between processes able to process the specific data structure. However, even if the semantics is not explicitly expressed, data structure contains the universe of discourse by using a qualified vocabulary regarding a consensual agreement. In order to formalize this semantics, the OWL language provides rich logical constraints. Therefore, these logical constraints are evolved in the transformation of XML documents into OWL documents, allowing the enrichment and the population of the target ontology. To design such a transformation, the current research field establishes connections between OWL constructs (classes, predicates, simple or complex data types, etc.) and XML constructs (elements, attributes, element lists, etc.). Two different approaches for the transformation process are exposed. The instance approaches are based on XML documents without any schema associated. The validation approaches are based on the XML schema and document validated by the associated schema. The second approaches benefit from the schema definition to provide automated transformations with logic constraints. Both approaches are discussed in the text.", "which Individual extraction/learning ?", "XML document", NaN, NaN], ["DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema become apparent. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendations. But nowadays the amount of data presented in OWL is small in comparison with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current research challenge. In this work we present an effective solution for transforming XML document into OWL domain knowledge. 
While keeping the original structure, our work also adds more semantics to the XML document. Moreover, the whole transformation process is done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.", "which Individual extraction/learning ?", "XML document", 908.0, 920.0], ["The aims of XML data conversion to ontologies are the indexing, integration and enrichment of existing ontologies with knowledge acquired from these sources. The contribution of this paper consists in providing a classification of the approaches used for the conversion of XML documents into OWL ontologies. This classification underlines the usage profile of each conversion method, providing a clear description of the advantages and drawbacks belonging to each method. Hence, this paper focuses on two main processes, which are ontology enrichment and ontology population using XML data. Ontology enrichment is related to the schema of the ontology (TBox), and ontology population is related to an individual (Abox). In addition, the ontologies described in these methods are based on formal languages of the Semantic Web such as OWL (Ontology Web Language) or RDF (Resource Description Framework). These languages are formal because the semantics are formally defined and take advantage of the Description Logics. In contrast, XML data sources are without formal semantics. The XML language is used to store, export and share data between processes able to process the specific data structure. However, even if the semantics is not explicitly expressed, data structure contains the universe of discourse by using a qualified vocabulary regarding a consensual agreement. In order to formalize this semantics, the OWL language provides rich logical constraints. Therefore, these logical constraints are evolved in the transformation of XML documents into OWL documents, allowing the enrichment and the population of the target ontology. To design such a transformation, the current research field establishes connections between OWL constructs (classes, predicates, simple or complex data types, etc.) and XML constructs (elements, attributes, element lists, etc.). Two different approaches for the transformation process are exposed. The instance approaches are based on XML documents without any schema associated. The validation approaches are based on the XML schema and document validated by the associated schema. The second approaches benefit from the schema definition to provide automated transformations with logic constraints. Both approaches are discussed in the text.", "which RDF Graph ?", "XML document", NaN, NaN], ["Abstract The novel coronavirus disease 2019 (COVID-19) caused by SARS-COV-2 has raised a myriad of global concerns. There is currently no FDA approved antiviral strategy to alleviate the disease burden. The conserved 3-chymotrypsin-like protease (3CLpro), which controls coronavirus replication, is a promising drug target for combating the coronavirus infection. This study screens some African plant-derived alkaloids and terpenoids as potential inhibitors of coronavirus 3CLpro using an in silico approach. Bioactive alkaloids (62) and terpenoids (100) of plants native to Africa were docked to the 3CLpro of the novel SARS-CoV-2. 
The top twenty alkaloids and terpenoids with high binding affinities to the SARS-CoV-2 3CLpro were further docked to the 3CLpro of SARS-CoV and MERS-CoV. The docking scores were compared with 3CLpro-referenced inhibitors (Lopinavir and Ritonavir). The top docked compounds were further subjected to ADEM/Tox and Lipinski filtering analyses for drug-likeness prediction analysis. This ligand-protein interaction study revealed that more than half of the top twenty alkaloids and terpenoids interacted favourably with the coronaviruses 3CLpro, and had binding affinities that surpassed that of lopinavir and ritonavir. Also, a highly defined hit-list of seven compounds (10-Hydroxyusambarensine, Cryptoquindoline, 6-Oxoisoiguesterin, 22-Hydroxyhopan-3-one, Cryptospirolepine, Isoiguesterin and 20-Epibryonolic acid) were identified. Furthermore, four non-toxic, druggable plant derived alkaloids (10-Hydroxyusambarensine, and Cryptoquindoline) and terpenoids (6-Oxoisoiguesterin and 22-Hydroxyhopan-3-one), that bind to the receptor-binding site and catalytic dyad of SARS-CoV-2 3CLpro were identified from the predictive ADME/tox and Lipinski filter analysis. However, further experimental analyses are required for developing these possible leads into natural anti-COVID-19 therapeutic agents for combating the pandemic. Communicated by Ramaswamy H. Sarma", "which Standard drugs used ?", "Lopinavir and Ritonavir", 851.0, 874.0], ["Depression is a serious mental disorder that affects millions of people all over the world. Traditional clinical diagnosis methods are subjective, complicated and need extensive participation of experts. Audio-visual automatic depression analysis systems predominantly base their predictions on very brief sequential segments, sometimes as little as one frame. Such data contains much redundant information, causes a high computational load, and negatively affects the detection accuracy. Final decision making at the sequence level is then based on the fusion of frame or segment level predictions. However, this approach loses longer term behavioural correlations, as the behaviours themselves are abstracted away by the frame-level predictions. We propose to on the one hand use automatically detected human behaviour primitives such as Gaze directions, Facial action units (AU), etc. as low-dimensional multi-channel time series data, which can then be used to create two sequence descriptors. The first calculates the sequence-level statistics of the behaviour primitives and the second casts the problem as a Convolutional Neural Network problem operating on a spectral representation of the multichannel behaviour signals. The results of depression detection (binary classification) and severity estimation (regression) experiments conducted on the AVEC 2016 DAIC-WOZ database show that both methods achieved significant improvement compared to the previous state of the art in terms of the depression severity estimation.", "which Aims ?", "depression severity", 1498.0, 1517.0], ["Accurate diagnosis of psychiatric disorders plays a critical role in improving the quality of life for patients and potentially supports the development of new treatments. Many studies have been conducted on machine learning techniques that seek brain imaging data for specific biomarkers of disorders. These studies have encountered the following dilemma: A direct classification overfits to a small number of high-dimensional samples but unsupervised feature-extraction has the risk of extracting a signal of no interest. 
In addition, such studies often provided only diagnoses for patients without presenting the reasons for these diagnoses. This study proposed a deep neural generative model of resting-state functional magnetic resonance imaging (fMRI) data. The proposed model is conditioned by the assumption of the subject's state and estimates the posterior probability of the subject's state given the imaging data, using Bayes\u2019 rule. This study applied the proposed model to diagnose schizophrenia and bipolar disorders. Diagnostic accuracy was improved by a large margin over competitive approaches, namely classifications of functional connectivity, discriminative/generative models of regionwise signals, and those with unsupervised feature-extractors. The proposed model visualizes brain regions largely related to the disorders, thus motivating further biological investigation.", "which Aims ?", "Diagnosis of psychiatric disorder", NaN, NaN], ["It is of significant importance to detect and manage stress before it turns into severe problems. However, existing stress detection methods usually rely on psychological scales or physiological devices, making the detection complicated and costly. In this paper, we explore how to automatically detect individuals' psychological stress via social media. Employing real online micro-blog data, we first investigate the correlations between users' stress and their tweeting content, social engagement and behavior patterns. Then we define two types of stress-related attributes: 1) low-level content attributes from a single tweet, including text, images and social interactions; 2) user-scope statistical attributes from their weekly micro-blog postings, leveraging information of tweeting time, tweeting types and linguistic styles. To combine content attributes with statistical attributes, we further design a convolutional neural network (CNN) with cross autoencoders to generate user-scope content attributes from low-level content attributes. Finally, we propose a deep neural network (DNN) model to incorporate the two types of user-scope attributes to detect users' psychological stress. We test the trained model on four different datasets from major micro-blog platforms including Sina Weibo, Tencent Weibo and Twitter. Experimental results show that the proposed model is effective and efficient in detecting psychological stress from micro-blog data. We believe our model would be useful in developing stress detection tools for mental health agencies and individuals.", "which Aims ?", "Stress detection", 116.0, 132.0], ["Long-term stress may lead to many severe physical and mental problems. Traditional psychological stress detection usually relies on active individual participation, which makes the detection labor-intensive, time-consuming and delayed. With the rapid development of social networks, people are becoming more and more willing to share moods via microblog platforms. In this paper, we propose an automatic stress detection method from cross-media microblog data. We construct a three-level framework to formulate the problem. We first obtain a set of low-level features from the tweets. Then we define and extract middle-level representations based on psychological and art theories: linguistic attributes from tweets' texts, visual attributes from tweets' images, and social attributes from tweets' comments, retweets and favorites. Finally, a Deep Sparse Neural Network is designed to learn the stress categories incorporating the cross-media attributes. 
Experimental results show that the proposed method is effective and efficient in detecting psychological stress from microblog data.", "which Aims ?", "Stress detection", 97.0, 113.0], ["Psychological stress is threatening people\u2019s health. It is non-trivial to detect stress in a timely manner for proactive care. With the popularity of social media, people are used to sharing their daily activities and interacting with friends on social media platforms, making it feasible to leverage online social network data for stress detection. In this paper, we find that a user\u2019s stress state is closely related to that of his/her friends in social media, and we employ a large-scale dataset from real-world social platforms to systematically study the correlation of users\u2019 stress states and social interactions. We first define a set of stress-related textual, visual, and social attributes from various aspects, and then propose a novel hybrid model - a factor graph model combined with a Convolutional Neural Network to leverage tweet content and social interaction information for stress detection. Experimental results show that the proposed model can improve the detection performance by 6-9 percent in F1-score. By further analyzing the social interaction data, we also discover several intriguing phenomena, i.e., the number of social structures of sparse connections (i.e., with no delta connections) of stressed users is around 14 percent higher than that of non-stressed users, indicating that the social structure of stressed users\u2019 friends tends to be less connected and less complicated than that of non-stressed users.", "which Aims ?", "Stress detection", 320.0, 336.0], ["The health disorders due to stress and depression should not be considered trivial because they have a negative impact on health. Prolonged stress not only triggers mental fatigue but also affects physical health. Therefore, we must be able to identify stress early. In this paper, we propose new methods for stress recognition on three classes (neutral, low stress, high stress) from a frontal facial image. Each image is divided into three parts, i.e. pairs of eyes, nose, and mouth. Facial features are extracted from each image pixel using DoG, HOG, and DWT. The strength of orthonormality features is considered by the RICA. The GDA distributes the nonlinear covariance. Furthermore, the histogram features of the image parts are applied to a depth-based learning ConvNet to model the facial stress expression. The proposed method uses the FERET database for training and validation. The k-fold validation method is used with k=5. Based on the experimental results, the proposed method outperforms other works in accuracy.", "which Aims ?", "Stress recognition", 311.0, 329.0], ["The slime mould Physarum polycephalum, an aneural organism, uses information from previous experiences to adjust its behaviour, but the mechanisms by which this is accomplished remain unknown. This article examines the possible role of oscillations in learning and memory in slime moulds. Slime moulds share surprising similarities with the network of synaptic connections in animal brains. First, their topology derives from a network of interconnected, vein-like tubes in which signalling molecules are transported. Second, network motility, which generates slime mould behaviour, is driven by distinct oscillations that organize into spatio-temporal wave patterns. 
Likewise, neural activity in the brain is organized in a variety of oscillations characterized by different frequencies. Interestingly, the oscillating networks of slime moulds are not precursors of nervous systems but, rather, an alternative architecture. Here, we argue that comparable information-processing operations can be realized on different architectures sharing similar oscillatory properties. After describing learning abilities and oscillatory activities of P. polycephalum, we explore the relation between network oscillations and learning, and evaluate the organism's global architecture with respect to information-processing potential. We hypothesize that, as in the brain, modulation of spontaneous oscillations may sustain learning in slime mould. This article is part of the theme issue \u2018Basal cognition: conceptual tools and the view from the single cell\u2019.", "which about species ?", "Physarum polycephalum", 16.0, 37.0], ["Membrane lysis caused by antibiotic peptides is often rationalized by means of two different models: the so-called carpet model and the pore-forming model. We report here on the lytic activity of antibiotic peptides from Australian tree frogs, maculatin 1.1, citropin 1.1, and aurein 1.2, on POPC or POPC/POPG model membranes. Leakage experiments using fluorescence spectroscopy indicated that the peptide/lipid mol ratio necessary to induce 50% of probe leakage was smaller for maculatin compared with aurein or citropin, regardless of lipid membrane composition. To gain further insight into the lytic mechanism of these peptides we performed single vesicle experiments using confocal fluorescence microscopy. In these experiments, the time course of leakage for different molecular weight (water soluble) fluorescent markers incorporated inside of single giant unilamellar vesicles is observed after peptide exposure. We conclude that maculatin and its related peptides demonstrate a pore-forming mechanism (differential leakage of small fluorescent probe compared with high molecular weight markers). Conversely, citropin and aurein provoke a total membrane destabilization with vesicle burst without sequential probe leakage, an effect that can be assigned to a carpeting mechanism of lytic action. Additionally, to study the relevance of the proline residue on the membrane-action properties of maculatin, the same experimental approach was used for maculatin-Ala and maculatin-Gly (Pro-15 was replaced by Ala or Gly, respectively). Although a similar peptide/lipid mol ratio was necessary to induce 50% of leakage for POPC membranes, the lytic activity of maculatin-Ala and maculatin-Gly decreased in POPC/POPG (1:1 mol) membranes compared with that observed for the naturally occurring maculatin sequence. As observed for maculatin, the lytic action of Maculatin-Ala and maculatin-Gly is in keeping with the formation of pore-like structures at the membrane independently of lipid composition.", "which has target ?", "POPC/POPG model membranes", 300.0, 325.0], ["Background and Aims Niche divergence between polyploids and their lower ploidy progenitors is one of the primary mechanisms fostering polyploid establishment and adaptive divergence. However, within-species chromosomal and reproductive variability have usually been neglected in community ecology and biodiversity analyses even though they have been recognized to play a role in the adaptive diversification of lineages. Methods We used Paspalum intermedium, a grass species with diverging genetic systems (diploidy vs. 
autopolyploidy, allogamy vs. autogamy and sexuality vs. apomixis), to recognize the causality of biogeographic patterns, adaptation and ecological flexibility of cytotypes. Chromosome counts and flow cytometry were used to characterize within-species genetic systems diversity. Environmental niche modelling was used to evaluate intraspecific ecological attributes associated with environmental and climatic factors and to assess correlations among ploidy, reproductive modes and ecological conditions ruling species' population dynamics, range expansion, adaptation and evolutionary history. Key Results Two dominant cytotypes non-randomly distributed along local and regional geographical scales displayed niche differentiation, a directional shift in niche optima and signs of disruptive selection on ploidy-related ecological aptitudes for the exploitation of environmental resources. Ecologically specialized allogamous sexual diploids were found in northern areas associated with higher temperature, humidity and productivity, while generalist autogamous apomictic tetraploids occurred in southern areas, occupying colder and less productive environments. Four localities with a documented shift in ploidy and four mixed populations in a zone of ecological transition revealed an uneven replacement between cytotypes. Conclusions Polyploidy and contrasting reproductive traits between cytotypes have promoted shifts in niche optima, and increased ecological tolerance and niche divergence. Ecologically specialized diploids maintain cytotype stability in core areas by displacing tetraploids, while broader ecological preferences and a shift from sexuality to apomixis favoured polyploid colonization in peripheral areas where diploids are displaced, and fostered the ecological opportunity for autotetraploids supporting range expansion to open southern habitats.", "which Species ?", "Paspalum intermedium", 437.0, 457.0], ["BACKGROUND AND AIMS Pilosella officinarum (syn. Hieracium pilosella) is a highly structured species with respect to the ploidy level, with obvious cytogeographic trends. Previous non-collated data indicated a possible differentiation in the frequency of particular ploidy levels in the Czech Republic and Slovakia. Therefore, detailed sampling and ploidy level analyses were undertaken to reveal a boundary of common occurrence of tetraploids on the one hand and higher ploids on the other. For a better understanding of the cytogeographic differentiation of P. officinarum in central Europe, a search was made for a general cytogeographic pattern in Europe based on published data. METHODS DNA-ploidy level and/or chromosome number were identified for 1059 plants using flow cytometry and/or chromosome counting on root meristem preparations. Samples were collected from 336 localities in the Czech Republic, Slovakia and north-eastern Hungary. In addition, ploidy levels were determined for plants from 18 localities in Bulgaria, Georgia, Ireland, Italy, Romania and Ukraine. KEY RESULTS Four ploidy levels were found in the studied area with a contrasting pattern of distribution. The most widespread cytotype in the western part of the Czech Republic is tetraploid (4x) reproducing sexually, while the apomictic pentaploids and mostly apomictic hexaploids (5x and 6x, respectively) clearly prevail in Slovakia and the eastern part of the Czech Republic. 
The boundary between the common occurrence of tetraploids and higher ploids is very obvious and represents the geomorphologic boundary between the Bohemian Massif and the Western Carpathians with the adjacent part of Pannonia. Mixed populations consisting of two different ploidy levels were recorded in nearly 11% of localities. A statistically significant difference in the vertical distribution of penta- and hexaploids was observed in the Western Carpathians and the adjacent Pannonian Plain. Hexaploid populations tend to occur at lower elevations (usually below 500 m), while the pentaploid level is more or less evenly distributed up to 1000 m a.s.l. For the first time the heptaploid level (7x) was found at one site in Slovakia. In Europe, the sexual tetraploid level clearly has a sub-Atlantic character of distribution. The plants of higher ploidy level (penta- and hexa-) with mostly apomictic reproduction prevail in the northern part of Scandinavia and the British Isles, the Alps and the Western Carpathians with the adjacent part of Pannonia. A detailed overview of published data shows that extremely rare records of the existence of diploid populations in the south-west Alps are very probably erroneous and most likely refer to the closely related diploid species P. peleteriana. CONCLUSIONS The recent distribution of P. officinarum in Europe is complex and probably reflects the climatic changes during the Pleistocene and consequent postglacial migrations. Probably both penta- and hexaploids arose independently in central Europe (Alps and Carpathian Mountains) and in northern Europe (Scandinavia, Great Britain, Ireland), where the apomictic plants colonized deglaciated areas. We suggest that P. officinarum is in fact an amphidiploid species with a basic tetraploid level, which probably originated from hybridizations of diploid taxa from the section Pilosellina.", "which Species ?", "Pilosella officinarum", 20.0, 41.0], ["Abstract Apomicts tend to have larger geographical distributional ranges and to occur in ecologically more extreme environments than their sexual progenitors. However, the expression of apomixis is typically linked to polyploidy. Thus, it is a priori not clear whether intrinsic effects related to the change in the reproductive mode or rather in the ploidy drive ecological differentiation. We used sympatric sexual and apomictic populations of Potentilla puberula to test for ecological differentiation. To distinguish the effects of reproductive mode and ploidy on the ecology of cytotypes, we compared the niches (a) of sexuals (tetraploids) and autopolyploid apomicts (penta\u2010, hepta\u2010, and octoploids) and (b) of the three apomictic cytotypes. We based comparisons on a ploidy screen of 238 populations along a latitudinal transect through the Eastern European Alps and associated bioclimatic, soil and topographic data. Sexual tetraploids preferred primary habitats at drier, steeper, more south\u2010oriented slopes, while apomicts mostly occurred in human\u2010made habitats with higher water availability. Contrariwise, we found no or only marginal ecological differentiation among the apomictic higher ploids. Based on the pronounced ecological differences found between sexuals and apomicts, in addition to the lack of niche differentiation among cytotypes of the same reproductive mode, we conclude that reproductive mode rather than ploidy is the main driver of the observed differences. 
Moreover, we compared our system with others from the literature, to stress the importance of identifying alternative confounding effects (such as hybrid origin). Finally, we underline the relevance of studying ecological parthenogenesis in sympatry, to minimize the effects of differential migration abilities.", "which Species ?", "Potentilla puberula", 446.0, 465.0], ["Sources and implications of genetic diversity in agamic complexes are still under debate. Population studies (amplified fragment length polymorphisms, microsatellites) and karyological methods (Feulgen DNA image densitometry and flow cytometry) were employed for characterization of genetic diversity and ploidy levels of 10 populations of Ranunculus carpaticola in central Slovakia. Whereas two diploid populations showed high levels of genetic diversity, as expected for sexual reproduction, eight populations are hexaploid and harbour lower degrees of genotypic variation, but maintain high levels of heterozygosity at many loci, as is typical for apomicts. Polyploid populations consist either of a single AFLP genotype or of one dominant and a few deviating genotypes. GenoType/GenoDive and character incompatibility analyses suggest that genotypic variation within apomictic populations is caused by mutations, but in one population probably also by recombination. This local facultative sexuality may have a great impact on regional genotypic diversity. Two microsatellite loci discriminated genotypes separated by the accumulation of few mutations (\u2018clone mates\u2019) within each AFLP clone. Genetic diversity is partitioned mainly among apomictic populations and is not geographically structured, which may be due to facultative sexuality and/or multiple colonizations of sites by different clones. Habitat differentiation and a tendency to inhabit artificial meadows are more pronounced in apomictic than in sexual populations. We hypothesize that maintenance of genetic diversity and superior colonizing abilities of apomicts in temporally and spatially heterogeneous environments are important for their distributional success.", "which Species ?", "Ranunculus carpaticola", 340.0, 362.0], ["A novel and simple method for preparing highly photoactive nanocrystalline F--doped TiO2 photocatalyst with anatase and brookite phase was developed by hydrolysis of titanium tetraisopropoxide in a mixed NH4F\u2212H2O solution. The prepared F--doped TiO2 powders were characterized by differential thermal analysis-thermogravimetry (DTA-TG), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), UV\u2212vis absorption spectroscopy, photoluminescence spectra (PL), transmission electron microscopy (TEM), and BET surface areas. The photocatalytic activity was evaluated by the photocatalytic oxidation of acetone in air. The results showed that the crystallinity of anatase was improved upon F- doping. Moreover, fluoride ions not only suppressed the formation of the brookite phase but also prevented the phase transition of anatase to rutile. The F--doped TiO2 samples exhibited stronger absorption in the UV\u2212visible range with a red shift in the band gap transition. The photocatalytic activity of F--doped TiO2 powders prep...
The doping mode (lattice Ti substituted by Sn4+ ions) and the doping energy level of Sn4+ were determined by X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), surface photovoltage spectroscopy (SPS) and electric field induced surface photovoltage spectroscopy (EFISPS). It is found that the introduction of a doping energy level of Sn4+ ions is beneficial to the separation of photogenerated carriers under both UV and visible light excitation. Characterization of the films with XRD and SPS indicates that after doping with Sn, more surface defects are present on the surface. Consequently, the photocatalytic activity for photodegradation of phenol in the presence of the TiO2\u2013Sn4+ film is higher than that of the pure TiO2 film under both UV and visible light irradiation.", "which visible-light driven photocatalysis ?", "photodegradation of phenol", 816.0, 842.0], ["Through a facile one-step combustion method, partially reduced TiO(2) has been synthesized. Electron paramagnetic resonance (EPR) spectra confirm the presence of Ti(3+) in the bulk of an as-prepared sample. The UV-vis spectra show that the Ti(3+) here extends the photoresponse of TiO(2) from the UV to the visible light region, which leads to high visible-light photocatalytic activity for the generation of hydrogen gas from water. It is worth noting that the Ti(3+) sites in the sample are highly stable in air or water under irradiation and the photocatalyst can be repeatedly used without degradation in the activity.", "which visible-light driven photocatalysis ?", "high visible-light photocatalytic activity for the generation of hydrogen gas from water", 344.0, 432.0], ["By comparing the transient absorption spectra of nanosized anatase TiO2 colloidal systems with and without SCN\u2212, the broad absorption band around 520 nm observed immediately after band-gap excitation for the system without SCN\u2212 has been assigned to shallowly trapped holes. In the presence of SCN\u2212, the absorption from the trapped holes at 520 nm cannot be observed because of the ultrafast interfacial hole transfer between TiO2 nanoparticles and SCN\u2212. The hole and electron trapping times were estimated to be <50 and 260 fs, respectively, by the analysis of rise and decay dynamics of transient absorption spectra. The rate of the hole transfer from nanosized TiO2 colloid to SCN\u2212 is comparable to that of the hole trapping and the time of formation of a weakly coupled (SCN\u00b7\u00b7\u00b7SCN)\u2022\u2212 is estimated to be \u223d2.3 ps with 0.3 M KSCN. A further structural change to form a stable (SCN)2\u2022\u2212 is observed on a timescale of 100\u223d150 ps, which is almost independent of the concentration of SCN\u2212.", "which Has output ?", "Transient absorption", 17.0, 37.0], ["Reactive species, holes, and electrons in photoexcited nanocrystalline TiO2 films were studied by transient absorption spectroscopy in the wavelength range from 400 to 2500 nm. The electron spectrum was obtained through a hole-scavenging reaction under steady-state light irradiation. The spectrum can be analyzed by a superposition of the free-electron and trapped-electron spectra. By subtracting the electron spectrum from the transient absorption spectrum, the spectrum of trapped holes was obtained. As a result, three reactive species (trapped holes and free and trapped electrons) were identified in the transient absorption spectrum. 
The reactivity of these species was evaluated through transient absorption spectroscopy in the presence of hole- and electron-scavenger molecules. The spectra indicate that trapped holes and electrons are localized at the surface of the particles and free electrons are distributed in the bulk.", "which Has output ?", "Transient absorption", 98.0, 118.0], ["Query optimization in RDF Stores is a challenging problem as SPARQL queries typically contain many more joins than equivalent relational plans, and hence lead to a large join order search space. In such cases, cost-based query optimization often is not possible. One practical reason for this is that statistics typically are missing in web-scale settings such as the Linked Open Datasets (LOD). The more profound reason is that due to the absence of schematic structure in RDF, join-hit ratio estimation requires complicated forms of correlated join statistics; and currently there are no methods to identify the relevant correlations beforehand. For this reason, the use of good heuristics is essential in SPARQL query optimization, even in the case that they are partially used with cost-based statistics (i.e., hybrid query optimization). In this paper we describe a set of useful heuristics for SPARQL query optimizers. We present these in the context of a new Heuristic SPARQL Planner (HSP) that is capable of exploiting the syntactic and the structural variations of the triple patterns in a SPARQL query in order to choose an execution plan without the need for any cost model. For this, we define the variable graph and we show a reduction of the SPARQL query optimization problem to the maximum weight independent set problem. We implemented our planner on top of the MonetDB open source column-store and evaluated its effectiveness against the state-of-the-art RDF-3X engine as well as comparing the plan quality with a relational (SQL) equivalent of the benchmarks.", "which Has output ?", "variable graph", 1203.0, 1217.0], ["Urban performance currently depends not only on a city's endowment of hard infrastructure (physical capital), but also, and increasingly so, on the availability and quality of knowledge communication and social infrastructure (human and social capital). The latter form of capital is decisive for urban competitiveness. Against this background, the concept of the \u201csmart city\u201d has recently been introduced as a strategic device to encompass modern urban production factors in a common framework and, in particular, to highlight the importance of Information and Communication Technologies (ICTs) in the last 20 years for enhancing the competitive profile of a city. The present paper aims to shed light on the often elusive definition of the concept of the \u201csmart city.\u201d We provide a focused and operational definition of this construct and present consistent evidence on the geography of smart cities in the EU27. Our statistical and graphical analyses exploit in depth, for the first time to our knowledge, the most recent version of the Urban Audit data set in order to analyze the factors determining the performance of smart cities. We find that the presence of a creative class, the quality of and dedicated attention to the urban environment, the level of education, and the accessibility to and use of ICTs for public administration are all positively correlated with urban wealth. 
This result prompts the formulation of a new strategic agenda for European cities that will allow them to achieve sustainable urban development and a better urban landscape.", "which Has output ?", "operational definition of this construct and present consistent evidence on the geography of smart cities in the EU27", 796.0, 913.0], ["Abstract. Climate sensitivity to CO2 remains the key uncertainty in projections of future climate change. Transient climate response (TCR) is the metric of temperature sensitivity that is most relevant to warming in the next few decades and contributes the biggest uncertainty to estimates of the carbon budgets consistent with the Paris targets. Equilibrium climate sensitivity (ECS) is vital for understanding longer-term climate change and stabilisation targets. In the IPCC 5th Assessment Report (AR5), the stated \u201clikely\u201d ranges (16 %\u201384 % confidence) of TCR (1.0\u20132.5 K) and ECS (1.5\u20134.5 K) were broadly consistent with the ensemble of CMIP5 Earth system models (ESMs) available at the time. However, many of the latest CMIP6 ESMs have larger climate sensitivities, with 5 of 34 models having TCR values above 2.5 K and an ensemble mean TCR of 2.0\u00b10.4 K. Even starker, 12 of 34 models have an ECS value above 4.5 K. On the face of it, these latest ESM results suggest that the IPCC likely ranges may need revising upwards, which would cast further doubt on the feasibility of the Paris targets. Here we show that rather than increasing the uncertainty in climate sensitivity, the CMIP6 models help to constrain the likely range of TCR to 1.3\u20132.1 K, with a central estimate of 1.68 K. We reach this conclusion through an emergent constraint approach which relates the value of TCR linearly to the global warming from 1975 onwards. This is a period when the signal-to-noise ratio of the net radiative forcing increases strongly, so that uncertainties in aerosol forcing become progressively less problematic. We find a consistent emergent constraint on TCR when we apply the same method to CMIP5 models. Our constraints on TCR are in good agreement with other recent studies which analysed CMIP ensembles. The relationship between ECS and the post-1975 warming trend is less direct and also non-linear. However, we are able to derive a likely range of ECS of 1.9\u20133.4 K from the CMIP6 models by assuming an underlying emergent relationship based on a two-box energy balance model. Despite some methodological differences, this is consistent with a previously published ECS constraint derived from warming trends in CMIP5 models to 2005. Our results seem to be part of a growing consensus amongst studies that have applied the emergent constraint approach to different model ensembles and to different aspects of the record of global warming.", "which Global and annual mean surface air temperature ?", "Climate response", 116.0, 132.0], ["Abstract. Climate sensitivity to CO2 remains the key uncertainty in projections of future climate change. Transient climate response (TCR) is the metric of temperature sensitivity that is most relevant to warming in the next few decades and contributes the biggest uncertainty to estimates of the carbon budgets consistent with the Paris targets. Equilibrium climate sensitivity (ECS) is vital for understanding longer-term climate change and stabilisation targets. 
In the IPCC 5th Assessment Report (AR5), the stated \u201clikely\u201d ranges (16 %\u201384 % confidence) of TCR (1.0\u20132.5 K) and ECS (1.5\u20134.5 K) were broadly consistent with the ensemble of CMIP5 Earth system models (ESMs) available at the time. However, many of the latest CMIP6 ESMs have larger climate sensitivities, with 5 of 34 models having TCR values above 2.5 K and an ensemble mean TCR of 2.0\u00b10.4 K. Even starker, 12 of 34 models have an ECS value above 4.5 K. On the face of it, these latest ESM results suggest that the IPCC likely ranges may need revising upwards, which would cast further doubt on the feasibility of the Paris targets. Here we show that rather than increasing the uncertainty in climate sensitivity, the CMIP6 models help to constrain the likely range of TCR to 1.3\u20132.1 K, with a central estimate of 1.68 K. We reach this conclusion through an emergent constraint approach which relates the value of TCR linearly to the global warming from 1975 onwards. This is a period when the signal-to-noise ratio of the net radiative forcing increases strongly, so that uncertainties in aerosol forcing become progressively less problematic. We find a consistent emergent constraint on TCR when we apply the same method to CMIP5 models. Our constraints on TCR are in good agreement with other recent studies which analysed CMIP ensembles. The relationship between ECS and the post-1975 warming trend is less direct and also non-linear. However, we are able to derive a likely range of ECS of 1.9\u20133.4 K from the CMIP6 models by assuming an underlying emergent relationship based on a two-box energy balance model. Despite some methodological differences, this is consistent with a previously published ECS constraint derived from warming trends in CMIP5 models to 2005. Our results seem to be part of a growing consensus amongst studies that have applied the emergent constraint approach to different model ensembles and to different aspects of the record of global warming.", "which Global and annual mean surface air temperature ?", "Climate sensitivity", 10.0, 29.0], ["OBJECTIVE Previous studies indicate that people respond defensively to threatening health information, especially when the information challenges self-relevant goals. The authors investigated whether reduced acceptance of self-relevant health risk information is already visible in early attention processes, that is, attention disengagement processes. DESIGN In a randomized, controlled trial with 29 smoking and nonsmoking students, a variant of Posner's cueing task was used in combination with the high-temporal resolution method of event-related brain potentials (ERPs). MAIN OUTCOME MEASURES Reaction times and P300 ERP. RESULTS Smokers showed lower P300 amplitudes in response to high- as opposed to low-threat invalid trials when moving their attention to a target in the opposite visual field, indicating more efficient attention disengagement processes. Furthermore, both smokers and nonsmokers showed increased P300 amplitudes in response to the presentation of high- as opposed to low-threat valid trials, indicating threat-induced attention-capturing processes. Reaction time measures did not support the ERP data, indicating that the ERP measure can be extremely informative for measuring low-level attention biases in health communication. 
CONCLUSION The findings provide the first neuroscientific support for the hypothesis that threatening health information causes more efficient disengagement among those for whom the health threat is self-relevant.", "which has_analysis_approach ?", "P300 amplitude", NaN, NaN], ["Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While researchers and practitioners have only recently addressed the question of the detailed composition of the groups not reached, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and \u201cfeeling left out\u201d can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that, on the one hand, material exclusion factors such as available infrastructure or financial means, as well as specifically attributable factors such as language skills, influence the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.", "which problem ?", "emotional impact on underserved audiences", 406.0, 447.0], ["In the present economic climate, it is often the case that profits can only be improved, or for that matter maintained, by improving efficiency and cutting costs. This is particularly notorious in the shipping business, where competition among carriers has been getting tougher; thus alliances and partnerships have been formed in recent years to provide cost-effective services. In this scenario, effective planning methods are important not only for strategic but also operational tasks, covering their entire transportation systems. Container fleet size planning is an important part of the strategy of any shipping line. This paper addresses the problem of fleet size planning for refrigerated containers, to achieve cost-effective services in a competitive maritime shipping market. An analytical model is first discussed to determine the optimal size of an owned dry container fleet. Then, this is extended to an owned refrigerated container fleet, which is the case when an extremely unbalanced trade represents one of the major investment decisions to be taken by liner operators. Next, a simulation model is developed for fleet sizing in a more practical situation and, by using this, various scenarios are analysed to determine the most convenient composition of the refrigerated fleet between owned and leased containers for the transpacific cargo trade.", "which problem ?", "Fleet sizing", 1136.0, 1148.0], ["We introduce the ACL Anthology Network (AAN), a manually curated networked database of citations, collaborations, and summaries in the field of Computational Linguistics. 
We also present a number of statistics about the network, including the most cited authors, the most central collaborators, as well as network statistics about the paper citation, author citation, and author collaboration networks.", "which consists ?", "Author collaboration network", NaN, NaN], ["A linguistically annotated corpus based on texts in the biomedical domain has been constructed to tune natural language processing (NLP) tools for biotextmining. As the focus of information extraction is shifting from \"nominal\" information such as named entities to \"verbal\" information such as function and interaction of substances, the application of parsers has become one of the key technologies, and thus a corpus annotated for the syntactic structure of sentences is in demand. A subset of the GENIA corpus consisting of 500 MEDLINE abstracts has been annotated for syntactic structure in an XML-based format based on the Penn Treebank II (PTB) scheme. An inter-annotator agreement test indicated that the writing style rather than the contents of the research abstracts is the source of the difficulty in tree annotation, and that annotation can be stably done by linguists without much knowledge of biology with appropriate guidelines regarding linguistic phenomena particular to scientific texts.", "which Annotation scheme ?", "Penn Treebank II (PTB) scheme", NaN, NaN], ["Optimization is a promising way to generate new animations from a minimal amount of input data. Physically based optimization techniques, however, are difficult to scale to complex animated characters, in part because evaluating and differentiating physical quantities becomes prohibitively slow. Traditional approaches often require optimizing or constraining parameters involving joint torques; obtaining first derivatives for these parameters is generally an O(D^2) process, where D is the number of degrees of freedom of the character. In this paper, we describe a set of objective functions and constraints that lead to linear time analytical first derivatives. The surprising finding is that this set includes constraints on physical validity, such as ground contact constraints. Considering only constraints and objective functions that lead to linear time first derivatives results in fast per-iteration computation times and an optimization problem that appears to scale well to more complex characters. We show that qualities such as squash-and-stretch that are expected from physically based optimization result from our approach. Our animation system is particularly useful for synthesizing highly dynamic motions, and we show examples of swinging and leaping motions for characters having from 7 to 22 degrees of freedom.", "which has parameters ?", "Joint torques", 382.0, 395.0], ["Since acquiring pixel-wise annotations for training convolutional neural networks for semantic image segmentation is time-consuming, weakly supervised approaches that only require class tags have been proposed. In this work, we propose another form of supervision, namely image captions as they can be found on the Internet. These captions have two advantages. They do not require additional curation, as is the case for the clean class tags used by current weakly supervised approaches, and they provide textual context for the classes present in an image. To leverage such textual context, we deploy a multi-modal network that learns a joint embedding of the visual representation of the image and the textual representation of the caption. 
The network estimates text activation maps (TAMs) for class names as well as compound concepts, i.e. combinations of nouns and their attributes. The TAMs of compound concepts describing classes of interest substantially improve the quality of the estimated class activation maps, which are then used to train a network for semantic segmentation. We evaluate our method on the COCO dataset, where it achieves state-of-the-art results for weakly supervised image segmentation.", "which Major Contributions ?", "weakly supervised image segmentation", 1179.0, 1215.0], ["The Web contains vast amounts of HTML tables. Most of these tables are used for layout purposes, but a small subset of the tables is relational, meaning that they contain structured data describing a set of entities [2]. As these relational Web tables cover a very wide range of different topics, there is a growing body of research investigating the utility of Web table data for completing cross-domain knowledge bases [6], for extending arbitrary tables with additional attributes [7, 4], as well as for translating data values [5]. The existing research shows the potential of Web tables. However, comparing the performance of the different systems is difficult as, up till now, each system has been evaluated using a different corpus of Web tables and as most of the corpora are owned by large search engine companies and are thus not accessible to the public. In this poster, we present a large public corpus of Web tables which contains over 233 million tables and has been extracted from the July 2015 version of the CommonCrawl. By publishing the corpus as well as all tools that we used to extract it from the crawled data, we intend to provide a common ground for evaluating Web table systems. The main difference of the corpus compared to an earlier corpus that we extracted from the 2012 version of the CommonCrawl as well as the corpus extracted by Eberius et al. [3] from the 2014 version of the CommonCrawl is that the current corpus contains a richer set of metadata for each table. This metadata includes table-specific information such as table orientation, table caption, header row, and key column, but also context information such as the text before and after the table, the title of the HTML page, as well as timestamp information that was found before and after the table. The context information can be useful for recovering the semantics of a table [7]. The timestamp information is crucial for fusing time-dependent data, such as alternative population numbers for a city [8].", "which Scope ?", "Web tables", 241.0, 251.0], ["Metabolic pathways are an important part of systems biology research since they illustrate complex interactions between metabolites, enzymes, and regulators. Pathway maps are drawn to elucidate metabolism or to set data in a metabolic context. We present MetaboMAPS, a web-based platform to visualize numerical data on individual metabolic pathway maps. Metabolic maps can be stored, distributed and downloaded in SVG-format. MetaboMAPS was designed for users without computational background and supports pathway sharing without strict conventions. 
In addition to existing applications that established standards for well-studied pathways, MetaboMAPS offers a niche for individual, customized pathways beyond common knowledge, supporting ongoing research by creating publication-ready visualizations of experimental data.", "which Scope ?", "Pathway sharing", 506.0, 521.0], ["In this article, we share our experiences of using digital technologies and various media to present historical narratives of a museum object collection, aiming to provide an engaging experience on multiple platforms. Based on P. Joseph\u2019s article, Dawson presented multiple interpretations and historical views of the Markham car collection across various platforms using multimedia resources. Through her creative production, she explored how to use cylindrical panoramas and rich media to offer new ways of telling the controversial story of the contested heritage of a museum\u2019s veteran and vintage car collection. The production\u2019s usability was first evaluated with five experts before it was published online, and the general users\u2019 experience was then investigated. In this article, we present an important component of our findings, which indicates that virtual panorama tours featuring multimedia elements could be successful in attracting new audiences and that using this type of storytelling technique can be effective in the museum sector. The storyteller panorama tour presented here may stimulate GLAM (galleries, libraries, archives, and museums) professionals to think of new approaches, implement new strategies or services to engage their audiences more effectively. The research may also improve the education of future professionals.", "which Models technology ?", "cylindrical panoramas and rich media", 450.0, 486.0], ["Abstract. In 2017 we published a seminal research study in the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences about how smart city tools, solutions and applications underpinned the historical and cultural heritage of cities at that time (Angelidou et al. 2017). We now return to investigate the progress that has been made during the past three years, and specifically whether the weak substantiation of cultural heritage in smart city strategies that we observed in 2017 has been improved. The newest literature suggests that smart cities should capitalize on local strengths and give prominence to local culture and traditions and provides a handful of solutions to this end. However, a more thorough examination of what has actually been implemented reveals a (still) rather immature approach. The smart city cases that were selected for the purposes of this research include Tarragona (Spain), Budapest (Hungary) and Karlsruhe (Germany). For each one we collected information regarding the overarching structure of the initiative, the positioning of cultural heritage and the inclusion of heritage-related smart city applications. We then performed a comparative analysis based on a simplified version of the Digital Strategy Canvas. Our findings suggest that a rich cultural heritage and a broader strategic focus on touristic branding and promotion are key ingredients of smart city development in this domain; this is a commonality of all the investigated cities. Moreover, three different strategy architectures emerge, representing the different interplays among the smart city, cultural heritage and sustainable urban development. 
We conclude that a new generation of smart city initiatives is emerging, in which cultural heritage is of increasing importance. This generation tends to associate cultural heritage with social and cultural values, liveability and sustainable urban development.", "which uses framework ?", "Digital Strategy Canvas", 1254.0, 1277.0], ["A diverse and changing array of digital media has been used to present heritage online. While websites have been created for online heritage outreach for nearly two decades, social media are increasingly employed to complement and in some cases replace the use of websites. These same social media are used by stakeholders as a form of participatory culture, to create communities and to discuss heritage independently of narratives offered by official institutions such as museums, memorials, and universities. With difficult or \u201cdark\u201d heritage\u2014places of memory centering on deaths, disasters, and atrocities\u2014these online representations and conversations can be deeply contested. Examining the websites and social media of difficult heritage, with an emphasis on the trans-Atlantic slave trade, provides insights into the efficacy of online resources provided by official institutions, as well as the unofficial, participatory communities of stakeholders who use social media for collective memories.", "which materal ?", "heritage online", 72.0, 87.0], ["This article presents a case study of a collaborative public history project between participants in two countries, the United Kingdom and Italy. Its subject matter is the bombing war in Europe, 1939-1945, which is remembered and commemorated in very different ways in these two countries: the sensitivities involved thus constitute not only a case of public history conducted at the national level but also one involving contested heritage. An account of the ways in which public history has developed in the UK and Italy is presented. This is followed by an explanation of how the bombing war has been remembered in each country. In the UK, veterans of RAF Bomber Command have long felt a sense of neglect, largely because the deliberate targeting of civilians has not fitted comfortably into the dominant victor narrative. In Italy, recollections of being bombed have remained profoundly dissonant within the received liberation discourse. The International Bomber Command Centre Digital Archive (or Archive) is then described as a case study that employs a public history approach, focusing on various aspects of its inclusive ethos, intended to preserve multiple perspectives. The Italian component of the project is highlighted, problematising the digitisation of contested heritage within the broader context of twentieth-century history. Reflections on the use of digital archiving practices and working in partnership are offered, as well as a brief account of user analytics of the Archive through its first eighteen months online.", "which Institution ?", "International Bomber Command Centre", 947.0, 982.0], ["This article presents a case study of a collaborative public history project between participants in two countries, the United Kingdom and Italy. Its subject matter is the bombing war in Europe, 1939-1945, which is remembered and commemorated in very different ways in these two countries: the sensitivities involved thus constitute not only a case of public history conducted at the national level but also one involving contested heritage. 
An account of the ways in which public history has developed in the UK and Italy is presented. This is followed by an explanation of how the bombing war has been remembered in each country. In the UK, veterans of RAF Bomber Command have long felt a sense of neglect, largely because the deliberate targeting of civilians has not fitted comfortably into the dominant victor narrative. In Italy, recollections of being bombed have remained profoundly dissonant within the received liberation discourse. The International Bomber Command Centre Digital Archive (or Archive) is then described as a case study that employs a public history approach, focusing on various aspects of its inclusive ethos, intended to preserve multiple perspectives. The Italian component of the project is highlighted, problematising the digitisation of contested heritage within the broader context of twentieth-century history. Reflections on the use of digital archiving practices and working in partnership are offered, as well as a brief account of user analytics of the Archive through its first eighteen months online.", "which Institution ?", "RAF Bomber Command", 655.0, 673.0], ["In this article, we share our experiences of using digital technologies and various media to present historical narratives of a museum object collection, aiming to provide an engaging experience on multiple platforms. Based on P. Joseph\u2019s article, Dawson presented multiple interpretations and historical views of the Markham car collection across various platforms using multimedia resources. Through her creative production, she explored how to use cylindrical panoramas and rich media to offer new ways of telling the controversial story of the contested heritage of a museum\u2019s veteran and vintage car collection. The production\u2019s usability was first evaluated with five experts before it was published online, and the general users\u2019 experience was then investigated. In this article, we present an important component of our findings, which indicates that virtual panorama tours featuring multimedia elements could be successful in attracting new audiences and that using this type of storytelling technique can be effective in the museum sector. The storyteller panorama tour presented here may stimulate GLAM (galleries, libraries, archives, and museums) professionals to think of new approaches, implement new strategies or services to engage their audiences more effectively. The research may also improve the education of future professionals.", "which museum collection ?", "Markham car collection", 317.0, 339.0], ["Most Semantic Web applications rely on querying graphs, typically by using SPARQL with a triple store. Increasingly, applications also analyze properties of the graph structure to compute statistical inferences. The current Semantic Web infrastructure, however, does not efficiently support such operations. This forces developers to extract the relevant data for external statistical post-processing. In this paper we propose to rethink query execution in a triple store as a highly parallelized asynchronous graph exploration on an active index data structure. This approach also allows the integration of SPARQL querying with the sampling of graph properties. To evaluate this architecture we implemented Random Walk TripleRush, which is built on a distributed graph processing system. 
Our evaluations show that this architecture enables both competitive graph querying and the ability to execute various types of random walks with restarts that sample interesting graph properties. Thanks to the asynchronous architecture, first results are sometimes returned in a fraction of the full execution time. We also evaluate the scalability and show that the architecture supports fast query-times on a dataset with more than a billion triples.", "which Compares ?", "Execution time", 1092.0, 1106.0], ["Name ambiguity has long been viewed as a challenging problem in many applications, such as scientific literature management, people search, and social network analysis. When we search for a person name in these systems, many documents (e.g., papers, web pages) containing that person's name may be returned. It is hard to determine which documents are about the person we care about. Although much research has been conducted, the problem remains largely unsolved, especially with the rapid growth of people information available on the Web. In this paper, we try to study this problem from a new perspective and propose ADANA, a method for disambiguating person names via active user interactions. In ADANA, we first introduce a pairwise factor graph (PFG) model for person name disambiguation. The model is flexible and can be easily extended by incorporating various features. Based on the PFG model, we propose an active name disambiguation algorithm, aiming to improve the disambiguation performance by maximizing the utility of the user's correction. Experimental results on three different genres of data sets show that with only a few user corrections, the error rate of name disambiguation can be reduced to 3.1%. A real system has been developed based on the proposed method and is available online.", "which Graph ?", "Pairwise Factor Graph", 730.0, 751.0], ["Massive open online courses (MOOCs), a unique form of online education enabled by web-based learning technologies, allow learners from anywhere in the world with any level of educational background to enjoy an online education experience provided by many top universities all around the world. Traditionally, MOOC learning content has been delivered as text-based or video-based materials. Although introducing an immersive learning experience for MOOCs may sound exciting and potentially significant, there are a number of challenges given this unique setting. In this paper, we present the design and evaluation methodologies for delivering an immersive learning experience to MOOC learners via multiple media. Specifically, we have applied the techniques in the production of a MOOC entitled Virtual Hong Kong: New World, Old Traditions, led by AIMtech Centre, City University of Hong Kong, which is the first MOOC (to our knowledge) that delivers immersive learning content for distance learners to appreciate and experience how the traditional culture and folklore of Hong Kong impact upon the lives of its inhabitants in the 21st Century. The methodologies applied here can be further generalized as the fundamental framework for delivering immersive learning for future MOOCs.", "which Type of MOOC ?", "Immersive learning", 411.0, 429.0], ["Based on the progress of hyperspectral data acquisition, information processing and geological requirements, the current status and trends of hyperspectral remote sensing technology for geological applications are reviewed. 
The advantages and prospects of hyperspectral remote sensing applications to mineral recognition and mapping, lithologic mapping, mineral resource prospecting, mining environment monitoring, and leakage monitoring of oil and gas, are summarized and analyzed. Finally the open problems and future trends for this technology are pointed out.", "which Outcome ?", " Mining environment monitoring", NaN, NaN], ["Hyperspectral technology is useful for urban studies due to its capability in examining detailed spectral characteristics of urban materials. This study aims to develop a spectral library of urban materials and demonstrate its application in remote sensing analysis of an urban environment. Field measurements were conducted by using ASD FieldSpec 3 Spectroradiometer with wavelength range from 350 to 2500 nm. The spectral reflectance curves of urban materials were interpreted and analyzed. A collection of 22 spectral data was compiled into a spectral library. The spectral library was put to practical use by utilizing the reference spectra for WorldView-2 satellite image classification which demonstrates the usability of such infrastructure to facilitate further progress of remote sensing applications in Malaysia.", "which Outcome ?", " Spectral Library", 170.0, 187.0], ["Abstract The study demonstrates a methodology for mapping various hematite ore classes based on their reflectance and absorption spectra, using Hyperion satellite imagery. Substantial validation is carried out, using the spectral feature fitting technique, with the field spectra measured over the Bailadila hill range in Chhattisgarh State in India. The results of the study showed a good correlation between the concentration of iron oxide with the depth of the near-infrared absorption feature (R 2 = 0.843) and the width of the near-infrared absorption feature (R 2 = 0.812) through different empirical models, with a root-mean-square error (RMSE) between < 0.317 and < 0.409. The overall accuracy of the study is 88.2% with a Kappa coefficient value of 0.81. Geochemical analysis and X-ray fluorescence (XRF) of field ore samples are performed to ensure different classes of hematite ore minerals. Results showed a high content of Fe > 60 wt% in most of the hematite ore samples, except banded hematite quartzite (BHQ) (< 47 wt%).", "which Analysis ?", "Kappa coefficient", 737.0, 754.0], ["The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a research facility instrument launched on NASA's Terra spacecraft in December 1999. Spectral indices, a kind of orthogonal transformation in the five-dimensional space formed by the five ASTER short-wave-infrared (SWIR) bands, were proposed for discrimination and mapping of surface rock types. These include Alunite Index, Kaolinite Index, Calcite Index, and Montmorillonite Index, and can be calculated by linear combination of reflectance values of the five SWIR bands. The transform coefficients were determined so as to direct transform axes to the average spectral pattern of the typical minerals. The spectral indices were applied to the simulated ASTER dataset of Cuprite, Nevada, USA after converting its digital numbers to surface reflectance. The resultant spectral index images were useful for lithologic mapping and were easy to interpret geologically. 
An advantage of this method is that we can use the pre-determined transform coefficients, as long as image data are converted to surface reflectance.", "which Analysis ?", "Alunite Index", 388.0, 401.0], ["ASTER is an advanced Thermal Emission and Reflection Radiometer, a multispectral sensor, which measures reflected and emitted electromagnetic radiation of earth surface with 14 bands. The present study aims to delineate different rock types in the Sittampundi Anorthositic Complex (SAC), Tamil Nadu using Visible (VIS), near-infrared (NIR) and short wave infrared (SWIR) reflectance data of ASTER 9 band data. We used different band ratioing, band combinations in the VNIR and SWIR region for discriminating lithological boundaries. SAC is also considered as a lunar highland analog rock. Anorthosite is a plagioclase-rich igneous rock with subordinate amounts of pyroxenes, olivine and other minerals. A methodology has been applied to correct the cross talk effect and radiance to reflectance. Principal Component Analysis (PCA) has been realized on the 9 ASTER bands in order to reduce the redundancy information in highly correlated bands. PCA derived FCC results enable the validation and support to demarcate the different lithological boundaries defined on previous geological map. The image derived spectral profiles for anorthosite are compared with the ASTER resampled laboratory spectra, JHU spectral library spectra and Apollo 14 lunar anorthosites spectra. The Spectral Angle Mapping imaging spectroscopy technique has been practiced to classify the ASTER image of the study area and found that, the processing of ASTER remote sensing data set can be used as a powerful tool for mapping the terrestrial Anorthositic regions and similar kind of process could be applied to map the planetary surfaces (E.g. Moon).", "which Analysis ?", "Principal Component Analysis (PCA)", NaN, NaN], ["ASTER is an advanced Thermal Emission and Reflection Radiometer, a multispectral sensor, which measures reflected and emitted electromagnetic radiation of earth surface with 14 bands. The present study aims to delineate different rock types in the Sittampundi Anorthositic Complex (SAC), Tamil Nadu using Visible (VIS), near-infrared (NIR) and short wave infrared (SWIR) reflectance data of ASTER 9 band data. We used different band ratioing, band combinations in the VNIR and SWIR region for discriminating lithological boundaries. SAC is also considered as a lunar highland analog rock. Anorthosite is a plagioclase-rich igneous rock with subordinate amounts of pyroxenes, olivine and other minerals. A methodology has been applied to correct the cross talk effect and radiance to reflectance. Principal Component Analysis (PCA) has been realized on the 9 ASTER bands in order to reduce the redundancy information in highly correlated bands. PCA derived FCC results enable the validation and support to demarcate the different lithological boundaries defined on previous geological map. The image derived spectral profiles for anorthosite are compared with the ASTER resampled laboratory spectra, JHU spectral library spectra and Apollo 14 lunar anorthosites spectra. The Spectral Angle Mapping imaging spectroscopy technique has been practiced to classify the ASTER image of the study area and found that, the processing of ASTER remote sensing data set can be used as a powerful tool for mapping the terrestrial Anorthositic regions and similar kind of process could be applied to map the planetary surfaces (E.g. Moon).", "which Analysis ?", "Band Ratio", NaN, NaN], ["Abstract The study demonstrates a methodology for mapping various hematite ore classes based on their reflectance and absorption spectra, using Hyperion satellite imagery. Substantial validation is carried out, using the spectral feature fitting technique, with the field spectra measured over the Bailadila hill range in Chhattisgarh State in India. The results of the study showed a good correlation between the concentration of iron oxide with the depth of the near-infrared absorption feature (R 2 = 0.843) and the width of the near-infrared absorption feature (R 2 = 0.812) through different empirical models, with a root-mean-square error (RMSE) between < 0.317 and < 0.409. The overall accuracy of the study is 88.2% with a Kappa coefficient value of 0.81. Geochemical analysis and X-ray fluorescence (XRF) of field ore samples are performed to ensure different classes of hematite ore minerals. Results showed a high content of Fe > 60 wt% in most of the hematite ore samples, except banded hematite quartzite (BHQ) (< 47 wt%).", "which Analysis ?", "Geochemical analysis", 770.0, 790.0], ["The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a research facility instrument launched on NASA's Terra spacecraft in December 1999. Spectral indices, a kind of orthogonal transformation in the five-dimensional space formed by the five ASTER short-wave-infrared (SWIR) bands, were proposed for discrimination and mapping of surface rock types. These include Alunite Index, Kaolinite Index, Calcite Index, and Montmorillonite Index, and can be calculated by linear combination of reflectance values of the five SWIR bands. The transform coefficients were determined so as to direct transform axes to the average spectral pattern of the typical minerals. The spectral indices were applied to the simulated ASTER dataset of Cuprite, Nevada, USA after converting its digital numbers to surface reflectance. The resultant spectral index images were useful for lithologic mapping and were easy to interpret geologically. An advantage of this method is that we can use the pre-determined transform coefficients, as long as image data are converted to surface reflectance.", "which Analysis ?", "Kaolinite Index", 403.0, 418.0], ["Abstract Spatial distribution of altered minerals in rocks and soils in the Gadag Schist Belt (GSB) is carried out using Hyperion data of March 2013. The entire spectral range is processed with emphasis on VNIR (0.4\u20131.0 \u03bcm) and SWIR regions (2.0\u20132.4 \u03bcm). Processing methodology includes Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes correction, minimum noise fraction transformation, spectral feature fitting (SFF) and spectral angle mapper (SAM) in conjunction with spectra collected, using an analytical spectral device spectroradiometer. A total of 155 bands were analysed to identify and map the major altered minerals by studying the absorption bands between the 0.4\u20131.0-\u03bcm and 2.0\u20132.3-\u03bcm wavelength regions. The most important and diagnostic spectral absorption features occur at 0.6\u20130.7 \u03bcm, 0.86 and at 0.9 \u03bcm in the VNIR region due to charge transfer of crystal field effect in the transition elements, whereas absorption near 2.1, 2.2, 2.25 and 2.33 \u03bcm in the SWIR region is related to the bending and stretching of the bonds in hydrous minerals (Al-OH, Fe-OH and Mg-OH), particularly in clay minerals. 
SAM and SFF techniques are implemented to identify the minerals present. A score of 0.33\u20131 was assigned for both SAM and SFF, where a value of 1 indicates the exact mineral type. However, endmember spectra were compared with United States Geological Survey and John Hopkins University spectral libraries for minerals and soils. Five minerals, i.e. kaolinite-5, kaolinite-2, muscovite, haematite, kaosmec and one soil, i.e. greyish brown loam have been identified. Greyish brown loam and kaosmec have been mapped as the major weathering/altered products present in soils and rocks of the GSB. This was followed by haematite and kaolinite. The SAM classifier was then applied on a Hyperion image to produce a mineral map. The dominant lithology of the area included greywacke, argillite and granite gneiss.", "which Analysis ?", "Spectral feature fitting (SFF) ", NaN, NaN], ["The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a research facility instrument launched on NASA's Terra spacecraft in December 1999. Spectral indices, a kind of orthogonal transformation in the five-dimensional space formed by the five ASTER short-wave-infrared (SWIR) bands, were proposed for discrimination and mapping of surface rock types. These include Alunite Index, Kaolinite Index, Calcite Index, and Montmorillonite Index, and can be calculated by linear combination of reflectance values of the five SWIR bands. The transform coefficients were determined so as to direct transform axes to the average spectral pattern of the typical minerals. The spectral indices were applied to the simulated ASTER dataset of Cuprite, Nevada, USA after converting its digital numbers to surface reflectance. The resultant spectral index images were useful for lithologic mapping and were easy to interpret geologically. An advantage of this method is that we can use the pre-determined transform coefficients, as long as image data are converted to surface reflectance.", "which Analysis ?", "Montmorillonite Index", 439.0, 460.0], ["This study describes the utility of Earth Observation (EO)-1 Hyperion data for sub-pixel mineral investigation using Mixture Tuned Target Constrained Interference Minimized Filter (MTTCIMF) algorithm in hostile mountainous terrain of Rajsamand district of Rajasthan, which hosts economic mineralization such as lead, zinc, and copper etc. The study encompasses pre-processing, data reduction, Pixel Purity Index (PPI) and endmember extraction from reflectance image of surface minerals such as illite, montmorillonite, phlogopite, dolomite and chlorite. These endmembers were then assessed with USGS mineral spectral library and lab spectra of rock samples collected from field for spectral inspection. Subsequently, MTTCIMF algorithm was implemented on processed image to obtain mineral distribution map of each detected mineral. A virtual verification method has been adopted to evaluate the classified image, which uses directly image information to evaluate the result and confirm the overall accuracy and kappa coefficient of 68 % and 0.6 respectively. The sub-pixel level mineral information with reasonable accuracy could be a valuable guide to geological and exploration community for expensive ground and/or lab experiments to discover economic deposits. 
Thus, the study demonstrates the feasibility of Hyperion data for sub-pixel mineral mapping using MTTCIMF algorithm with cost and time effective approach.", "which Analysis ?", "Pixel Purity Index", 393.0, 411.0], ["In the present study, we have attempted the delineation of limestone using different spectral mapping algorithms in ASTER data. Each spectral mapping algorithm derives limestone exposure map independently. Although these spectral maps are broadly similar to each other, they are also different at places in terms of spatial disposition of limestone pixels. Therefore, an attempt is made to integrate the results of these spectral maps to derive an integrated map using minimum noise fraction (MNF) method. The first MNF image is the result of two cascaded principal component methods suitable for preserving complementary information derived from each spectral map. While implementing MNF, noise or non-coherent pixels occurring within a homogeneous patch of limestone are removed first using shift difference method, before attempting principal component analysis on input spectral maps for deriving composite spectral map of limestone exposures. The limestone exposure map is further validated based on spectral data and ancillary geological data.", "which Analysis ?", "Minimum Noise Fraction (MNF)", NaN, NaN], ["The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a research facility instrument launched on NASA's Terra spacecraft in December 1999. Spectral indices, a kind of orthogonal transformation in the five-dimensional space formed by the five ASTER short-wave-infrared (SWIR) bands, were proposed for discrimination and mapping of surface rock types. These include Alunite Index, Kaolinite Index, Calcite Index, and Montmorillonite Index, and can be calculated by linear combination of reflectance values of the five SWIR bands. The transform coefficients were determined so as to direct transform axes to the average spectral pattern of the typical minerals. The spectral indices were applied to the simulated ASTER dataset of Cuprite, Nevada, USA after converting its digital numbers to surface reflectance. The resultant spectral index images were useful for lithologic mapping and were easy to interpret geologically. An advantage of this method is that we can use the pre-determined transform coefficients, as long as image data are converted to surface reflectance.", "which Analysis ?", "Calcite Index", 420.0, 433.0], ["The coordination of humanitarian relief, e.g. in a natural disaster or a conflict situation, is often complicated by a scarcity of data to inform planning. Remote sensing imagery, from satellites or drones, can give important insights into conditions on the ground, including in areas which are difficult to access. Applications include situation awareness after natural disasters, structural damage assessment in conflict, monitoring human rights violations or population estimation in settlements. We review machine learning approaches for automating these problems, and discuss their potential and limitations. We also provide a case study of experiments using deep learning methods to count the numbers of structures in multiple refugee settlements in Africa and the Middle East. We find that while high levels of accuracy are possible, there is considerable variation in the characteristics of imagery collected from different sensors and regions. 
In this, as in the other applications discussed in the paper, critical inferences must be made from a relatively small amount of pixel data. We, therefore, consider that using machine learning systems as an augmentation of human analysts is a reasonable strategy to transition from current fully manual operational pipelines to ones which are both more efficient and have the necessary levels of quality control. This article is part of a discussion meeting issue \u2018The growing ubiquity of algorithms in society: implications, impacts and innovations\u2019.", "which Output/Application ?", "Damage assessment", 393.0, 410.0], ["The coordination of humanitarian relief, e.g. in a natural disaster or a conflict situation, is often complicated by a scarcity of data to inform planning. Remote sensing imagery, from satellites or drones, can give important insights into conditions on the ground, including in areas which are difficult to access. Applications include situation awareness after natural disasters, structural damage assessment in conflict, monitoring human rights violations or population estimation in settlements. We review machine learning approaches for automating these problems, and discuss their potential and limitations. We also provide a case study of experiments using deep learning methods to count the numbers of structures in multiple refugee settlements in Africa and the Middle East. We find that while high levels of accuracy are possible, there is considerable variation in the characteristics of imagery collected from different sensors and regions. In this, as in the other applications discussed in the paper, critical inferences must be made from a relatively small amount of pixel data. We, therefore, consider that using machine learning systems as an augmentation of human analysts is a reasonable strategy to transition from current fully manual operational pipelines to ones which are both more efficient and have the necessary levels of quality control. This article is part of a discussion meeting issue \u2018The growing ubiquity of algorithms in society: implications, impacts and innovations\u2019.", "which Output/Application ?", "Natural disasters", 363.0, 380.0], ["Remote sensing data processing deals with real-life applications with great societal values. For instance urban monitoring, fire detection or flood prediction from remotely sensed multispectral or radar images have a great impact on economical and environmental issues. To treat efficiently the acquired data and provide accurate products, remote sensing has evolved into a multidisciplinary field, where machine learning and signal processing algorithms play an important role nowadays. This paper serves as a survey of methods and applications, and reviews the latest methodological advances in machine learning for remote sensing data analysis.", "which Output/Application ?", "Urban monitoring", 106.0, 122.0], ["The Naipa and Muiane mines are located on the Nampula complex, a stratigraphic tectonic subdivision of the Mozambique Belt, in the Alto Ligonha region. The pegmatites are of the Li-Cs-Ta type, intrude a chlorite phyllite and gneisses with amphibole and biotite. The mines are still active. The main objective of this work was to analyze the pegmatite\u2019s spectral behavior considering ASTER and Landsat 8 OLI data. An ASTER image from 27/05/2005, and an image Landsat OLI image from 02/02/2018 were considered. 
The data were radiometric calibrated and after atmospheric corrected considered the Dark Object Subtraction algorithm available in the Semi-Automatic Classification Plugin accessible in QGIS software. In the field, samples were collected from lepidolite waste pile in Naipa and Muiane mines. A spectroradiometer was used in order to analyze the spectral behavior of several pegmatite\u2019s samples collected in the field in Alto Ligonha (Naipa and Muiane mines). In addition, QGIS software was also used for the spectral mapping of the hypothetical hydrothermal alterations associated with occurrences of basic metals, beryl gemstones, tourmalines, columbite-tantalites, and lithium minerals. A supervised classification algorithm was employed - Spectral Angle Mapper for the data processing, and the overall accuracy achieved was 80%. The integration of ASTER and Landsat 8 OLI data have proved very useful for pegmatite\u2019s mapping. From the results obtained, we can conclude that: (i) the combination of ASTER and Landsat 8 OLI data allows us to obtain more information about mineral composition than just one sensor, i.e., these two sensors are complementary; (ii) the alteration spots identified in the mines area are composed of clay minerals. In the future, more data and others image processing algorithms can be applied in order to identify the different Lithium minerals, as spodumene, petalite, amblygonite and lepidolite.", "which Preprocesing ?", "Dark Object Subtraction", 593.0, 616.0], ["Raman spectroscopy, complemented by infrared spectroscopy has been used to characterise the ferroaxinite minerals of theoretical formula Ca2Fe2+Al2BSi4O15(OH), a ferrous aluminium borosilicate. The Raman spectra are complex but are subdivided into sections based upon the vibrating units. The Raman spectra are interpreted in terms of the addition of borate and silicate spectra. Three characteristic bands of ferroaxinite are observed at 1082, 1056 and 1025 cm-1 and are attributed to BO4 stretching vibrations. Bands at 1003, 991, 980 and 963 cm-1 are assigned to SiO4 stretching vibrations. Bands are found in these positions for each of the ferroaxinites studied. No Raman bands were found above 1100 cm-1 showing that ferroaxinites contain only tetrahedral boron. The hydroxyl stretching region of ferroaxinites is characterised by a single Raman band between 3368 and 3376 cm-1, the position of which is sample dependent. Bands for ferroaxinite at 678, 643, 618, 609, 588, 572, 546 cm-1 may be attributed to the \u03bd4 bending modes and the three bands at 484, 444 and 428 cm-1 may be attributed to the \u03bd2 bending modes of the (SiO4)2-.", "which Techniques ?", "Infrared spectroscopy", 36.0, 57.0], ["Gale Crater on Mars has the layered structure of deposit covered by the Noachian/Hesperian boundary. Mineral identification and classification at this region can provide important constraints on environment and geological evolution for Mars. Although the Curiosity rover has provided in-situ mineralogical analysis in Gale, it is restricted to small areas. Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) aboard the Mars Reconnaissance Orbiter (MRO) with enhanced spectral resolution can provide more information in spatial and time scale. In this paper, CRISM near-infrared spectral data are used to identify mineral classes and groups at Martian Gale region. 
By using diagnostic absorptions features analysis in conjunction with spectral angle mapper (SAM), detailed mineral species are identified at Gale region, e.g., kaolinite, chlorites, smectite, jarosite, and northupite. The clay minerals' diversity in Gale Crater suggests the variation of aqueous alteration. The detection of northupite suggests that the Gale region has experienced the climate change from moist condition with mineral dissolution to dryer climate with water evaporation. The presence of ferric sulfate mineral jarosite formed through the oxidation of iron sulfides in acidic environments shows the experience of acidic sulfur-rich condition in Gale history.", "which Techniques ?", "Spectral Angle Mapper (SAM)", NaN, NaN], ["The larger synoptic view and contiguous channels arrangement of Hyperion hyperspectral remote sensing data enhance the minor spectral identification of earth\u2019s features such as minerals, atmospheric gasses, vegetation and so on. Hydrothermal alteration minerals mostly associated with vicinity of geological structural features such as lineaments and fractures. In this study Hyperion data is used for identification of hydrothermally altered minerals and alteration facies near Chhabadiya village of Jahajpur area, Bhilwara, Rajasthan. There are some minerals such as talc minerals identified through Hyperion imagery. The identified talc minerals correlated and evaluated through petrographic analysis, XRD analysis and spectroscopic analysis. The validation of identified minerals completed by field survey, field sample spectra and USGS spectral library talc mineral spectra. The conclusion is that Hyperion hyperspectral remote sensing data have capability to identify the minerals, mineral assemblage, alteration minerals and alteration facies.", "which Supplementary sources ?", "USGS Spectral Library", 836.0, 857.0], ["Summary Biological invasions threaten ecosystem integrity and biodiversity, with numerous adverse implications for native flora and fauna. Established populations of two notorious freshwater invaders, the snail Tarebia granifera and the fish Pterygoplichthys disjunctivus, have been reported on three continents and are frequently predicted to be in direct competition with native species for dietary resources. Using comparisons of species' isotopic niche widths and stable isotope community metrics, we investigated whether the diets of the invasive T. granifera and P. disjunctivus overlapped with those of native species in a highly invaded river. We also attempted to resolve diet composition for both species, providing some insight into the original pathway of invasion in the Nseleni River, South Africa. Stable isotope metrics of the invasive species were similar to or consistently mid-range in comparison with their native counterparts, with the exception of markedly more uneven spread in isotopic space relative to indigenous species. Dietary overlap between the invasive P. disjunctivus and native fish was low, with the majority of shared food resources having overlaps of <0.26. The invasive T. granifera showed effectively no overlap with the native planorbid snail. However, there was a high degree of overlap between the two invasive species (\u02dc0.86). Bayesian mixing models indicated that detrital mangrove Barringtonia racemosa leaves contributed the largest proportion to P. disjunctivus diet (0.12\u20130.58), while the diet of T. granifera was more variable with high proportions of detrital Eichhornia crassipes (0.24\u20130.60) and Azolla filiculoides (0.09\u20130.33) as well as detrital Barringtonia racemosa leaves (0.00\u20130.30). Overall, although the invasive T. granifera and P. disjunctivus were not in direct competition for dietary resources with native species in the Nseleni River system, their spread in isotopic space suggests they are likely to restrict energy available to higher consumers in the food web. Establishment of these invasive populations in the Nseleni River is thus probably driven by access to resources unexploited or unavailable to native residents.", "which Type of effect description ?", "Dietary overlap", 1048.0, 1063.0], ["The quantification of invader impacts remains a major hurdle to understanding and managing invasions. Here, we demonstrate a method for quantifying the community-level impact of multiple plant invaders by applying Parker et al.'s (1999) equation (impact = range x local abundance x per capita effect or per unit effect) using data from 620 survey plots from 31 grasslands across west-central Montana, USA. In testing for interactive effects of multiple invaders on native plant abundance (percent cover), we found no evidence for invasional meltdown or synergistic interactions for the 25 exotics tested. While much concern exists regarding impact thresholds, we also found little evidence for nonlinear relationships between invader abundance and impacts. These results suggest that management actions that reduce invader abundance should reduce invader impacts monotonically in this system. Eleven of 25 invaders had significant per unit impacts (negative local-scale relationships between invader and native cover). In decomposing the components of impact, we found that local invader abundance had a significant influence on the likelihood of impact, but range (number of plots occupied) did not. This analysis helped to differentiate measures of invasiveness (local abundance and range) from impact to distinguish high-impact invaders from invaders that exhibit negligible impacts, even when widespread. Distinguishing between high- and low-impact invaders should help refine trait-based prediction of problem species. Despite the unique information derived from evaluation of per unit effects of invaders, invasiveness scores based on range and local abundance produced similar rankings to impact scores that incorporated estimates of per unit effects. Hence, information on range and local abundance alone was sufficient to identify problematic plant invaders at the regional scale. In comparing empirical data on invader impacts to the state noxious weed list, we found that the noxious weed list captured 45% of the high impact invaders but missed 55% and assigned the lowest risk category to the highest-impact invader. While such subjective weed lists help to guide invasive species management, empirical data are needed to develop more comprehensive rankings of ecological impacts. Using weed lists to classify invaders for testing invasion theory is not well supported.", "which Type of effect description ?", "interactive effects of multiple invaders on native plant abundance (percent cover), we found no evidence for invasional meltdown or synergistic interactions", NaN, NaN], ["Biological invasions are a growing aspect of global biodiversity change. In many regions, introduced species richness increases supralinearly over time. 
This does not, however, necessarily indicate increasing introduction rates or invasion success. We develop a simple null model to identify the expected trend in invasion records over time. For constant introduction rates and success, the expected trend is exponentially increasing. Model extensions with varying introduction rate and success can also generate exponential distributions. We then analyse temporal trends in aquatic, marine and terrestrial invasion records. Most data sets support an exponential distribution (15/16) and the null invasion model (12/16). Thus, our model shows that no change in introduction rate or success need be invoked to explain the majority of observed trends. Further, an exponential trend does not necessarily indicate increasing invasion success or 'invasional meltdown', and a saturating trend does not necessarily indicate decreasing success or biotic resistance.", "which Type of effect description ?", "null model to identify the expected trend in invasion records over time", 269.0, 340.0], ["We sampled the understory community in an old-growth, temperate forest to test alternative hypotheses explaining the establishment of exotic plants. We quantified the individual and net importance of distance from areas of human disturbance, native plant diversity, and environmental gradients in determining exotic plant establishment. Distance from disturbed areas, both within and around the reserve, was not correlated to exotic species richness. Numbers of native and exotic species were positively correlated at large (50 m\u00b2) and small (10 m\u00b2) plot sizes, a trend that persisted when relationships to environmental gradients were controlled statistically. Both native and exotic species richness increased with soil pH and decreased along a gradient of increasing nitrate availability. Exotic species were restricted to the upper portion of the pH gradient and had individualistic responses to the availability of soil resources. These results are inconsistent with both the diversity-resistance and resource-enrichment hypotheses for invasibility. Environmental conditions favoring native species richness also favor exotic species richness, and competitive interactions with the native flora do not appear to limit the entry of additional species into the understory community at this site. It appears that exotic species with niche requirements poorly represented in the regional flora of native species may establish with relatively little resistance or consequence for native species richness.", "which Measure of disturbance ?", "distance from areas of human disturbance", 200.0, 240.0], ["Few invaded ecosystems are free from habitat loss and disturbance, leading to uncertainty whether dominant invasive species are driving community change or are passengers along for the environmental ride. The 'driver' model predicts that invaded communities are highly interactive, with subordinate native species being limited or excluded by competition from the exotic dominants. The 'passenger' model predicts that invaded communities are primarily structured by noninteractive factors (environmental change, dispersal limitation) that are less constraining on the exotics, which thus dominate. We tested these alternative hypotheses in an invaded, fragmented, and fire-suppressed oak savanna. We examined the impact of two invasive dominant perennial grasses on community structure using a reduction (mowing of aboveground biomass) and removal (weeding of above- and belowground biomass) experiment conducted at different seasons and soil depths. We examined the relative importance of competition vs. dispersal limitation with experimental seed additions. Competition by the dominants limits the abundance and reproduction of many native and exotic species based on their increased performance with removals and mowing. The treatments resulted in increased light availability and bare soil; soil moisture and N were unaffected. Although competition was limiting for some, 36 of 79 species did not respond to the treatments or declined in the absence of grass cover. Seed additions revealed that some subordinates are dispersal limited; competition alone was insufficient to explain their rarity even though it does exacerbate dispersal inefficiencies by lowering reproduction. While the net effects of the dominants were negative, their presence restricted woody plants, facilitated seedling survival with moderate disturbance (i.e., treatments applied in the fall), or was not the primary limiting factor for the occurrence of some species. Finally, the species most functionally distinct from the dominants (forbs, woody plants) responded most significantly to the treatments. This suggests that relative abundance is determined more by trade-offs relating to environmental conditions (long-term fire suppression) than to traits relating to resource capture (which should most impact functionally similar species). This points toward the passenger model as the underlying cause of exotic dominance, although their combined effects (suppressive and facilitative) on community structure are substantial.", "which Measure of disturbance ?", "reduction (mowing of aboveground biomass) ", NaN, NaN], ["Abstract: The reed Phragmites australis Cav. is aggressively invading salt marshes along the Atlantic Coast of North America. We examined the interactive role of habitat alteration (i.e., shoreline development) in driving this invasion and its consequences for plant richness in New England salt marshes. We surveyed 22 salt marshes in Narragansett Bay, Rhode Island, and quantified shoreline development, Phragmites cover, soil salinity, and nitrogen availability. Shoreline development, operationally defined as removal of the woody vegetation bordering marshes, explained >90% of intermarsh variation in Phragmites cover. Shoreline development was also significantly correlated with reduced soil salinities and increased nitrogen availability, suggesting that removing woody vegetation bordering marshes increases nitrogen availability and decreases soil salinities, thus facilitating Phragmites invasion. Soil salinity (64%) and nitrogen availability (56%) alone explained a large proportion of variation in Phragmites cover, but together they explained 80% of the variation in Phragmites invasion success. Both univariate and aggregate (multidimensional scaling) analyses of plant community composition revealed that Phragmites dominance in developed salt marshes resulted in an almost three\u2010fold decrease in plant species richness. Our findings illustrate the importance of maintaining integrity of habitat borders in conserving natural communities and provide an example of the critical role that local conservation can play in preserving these systems. 
In addition, our findings provide ecologists and natural resource managers with a mechanistic understanding of how human habitat alteration in one vegetation community can interact with species introductions in adjacent communities (i.e., flow\u2010on or adjacency effects) to hasten ecosystem degradation.", "which Measure of disturbance ?", "shoreline development", 188.0, 209.0], ["1. Disturbance and anthropogenic land use changes are usually considered to be key factors facilitating biological invasions. However, specific comparisons of invasion success between sites affected to different degrees by these factors are rare. 2. In this study we related the large-scale distribution of the invading New Zealand mud snail (Potamopyrgus antipodarum) in southern Victorian streams, Australia, to anthropogenic land use, flow variability, water quality and distance from the site to the sea along the stream channel. 3. The presence of P. antipodarum was positively related to an index of flow-driven disturbance, the coefficient of variability of mean daily flows for the year prior to the study. 4. Furthermore, we found that the invader was more likely to occur at sites with multiple land uses in the catchment, in the forms of grazing, forestry and anthropogenic developments (e.g. towns and dams), compared with sites with low-impact activities in the catchment. However, this relationship was confounded by a higher likelihood of finding this snail in lowland sites close to the sea. 5. We conclude that P. antipodarum could potentially be found worldwide at sites with similar ecological characteristics. We hypothesise that its success as an invader may be related to an ability to quickly re-colonise denuded areas and that population abundances may respond to increased food resources. Disturbances could facilitate this invader by creating spaces for colonisation (e.g. a possible consequence of floods) or changing resource levels (e.g. increased nutrient levels in streams with intense human land use in their catchments).", "which Measure of disturbance ?", "anthropogenic developments (e.g. towns and dams)", NaN, NaN], ["Summary 1. Urban and agricultural activities are not part of natural disturbance regimes and may bear little resemblance to them. Such disturbances are common in densely populated semi-arid shrub communities of the south-western US, yet successional studies in these regions have been limited primarily to natural successional change and the impact of human-induced changes on natural disturbance regimes. Although these communities are resilient to recurrent and large-scale disturbance by fire, they are not necessarily well-adapted to recover from exotic disturbances. 2. This study investigated the effects of severe exotic disturbance (construction, heavy-vehicle activity, landfill operations, soil excavation and tillage) on shrub communities in southern California. These disturbances led to the conversion of indigenous shrublands to exotic annual communities with low native species richness. 3. Nearly 60% of the cover on disturbed sites consisted of exotic annual species, while undisturbed sites were primarily covered by native shrub species (68%). Annual species dominant on disturbed sites included Erodium botrys, Hypochaeris glabra, Bromus spp., Vulpia myuros and Avena spp. 4. The cover of native species remained low on disturbed sites even 71 years after initial exotic disturbance ceased. 
Native shrub seedlings were also very infrequent on disturbed sites, despite the presence of nearby seed sources. Only two native shrubs, Eriogonum fasciculatum and Baccharis sarothroides, colonized some disturbed sites in large numbers. 5. Although some disturbed sites had lower total soil nitrogen and percentage organic matter and higher pH than undisturbed sites, soil variables measured in this study were not sufficient to explain variations in species abundances on these sites. 6. Non-native annual communities observed in this study did not recover to a predisturbed state within typical successional time (< 25 years), supporting the hypothesis that altered stable states can occur if a community is pushed beyond its threshold of resilience.", "which Measure of disturbance ?", "landfill operations, soil excavation and tillage", 679.0, 727.0], ["Abstract Extensive areas in the mountain grasslands of central Argentina are heavily invaded by alien species from Europe. A decrease in biodiversity and a loss of palatable species is also observed. The invasibility of the tall-grass mountain grassland community was investigated in an experiment of factorial design. Six alien species which are widely distributed in the region were sown in plots where soil disturbance, above-ground biomass removal by cutting and burning were used as treatments. Alien species did not establish in undisturbed plots. All three types of disturbances increased the number and cover of alien species; the effects of soil disturbance and biomass removal was cumulative. Cirsium vulgare and Oenothera erythrosepala were the most efficient alien colonizers. In conditions where disturbances did not continue the cover of aliens started to decrease in the second year, by the end of the third season, only a few adults were established. Consequently, disturbances are needed to maintain ali...", "which Measure of disturbance ?", "soil disturbance, above-ground biomass removal by cutting and burning", 405.0, 474.0], ["What determines the number of alien species in a given region? \u2018Native biodiversity\u2019 and \u2018human impact\u2019 are typical answers to this question. Indeed, studies comparing different regions have frequently found positive relationships between number of alien species and measures of both native biodiversity (e.g. the number of native species) and human impact (e.g. human population). These relationships are typically explained by biotic acceptance or resistance, i.e. by influence of native biodiversity and human impact on the second step of the invasion process, establishment. The first step of the invasion process, introduction, has often been ignored. Here we investigate whether relationships between number of alien mammals and native biodiversity or human impact in 43 European countries are mainly shaped by differences in number of introduced mammals or establishment success. Our results suggest that correlation between number of native and established mammals is spurious, as it is simply explainable by the fact that both quantities are linked to country area. We also demonstrate that countries with higher human impact host more alien mammals than other countries because they received more introductions than other countries. Differences in number of alien mammals cannot be explained by differences in establishment success. 
Our findings highlight importance of human activities and question, at least for mammals in Europe, importance of biotic acceptance and resistance.", "which Measure of resistance/susceptibility ?", "Establishment success", 864.0, 885.0], ["Theory suggests that introduction effort (propagule size or number) should be a key determinant of establishment success for exotic species. Unfortunately, however, propagule pressure is not recorded for most introductions. Studies must therefore either use proxies whose efficacy must be largely assumed, or ignore effort altogether. The results of such studies will be flawed if effort is not distributed at random with respect to other characteristics that are predicted to influence success. We use global data for more than 600 introduction events for birds to show that introduction effort is both the strongest correlate of introduction success, and correlated with a large number of variables previously thought to influence success. Apart from effort, only habitat generalism relates to establishment success in birds.", "which Measure of resistance/susceptibility ?", "Establishment success", 99.0, 120.0], ["Abstract Context. According to the tens rule, 10% of introduced species establish themselves. Aims. We tested this component of the tens rule for amphibians and reptiles globally, in Europe and North America, where data are presumably of good quality, and on islands versus continents. We also tested whether there was a taxonomic difference in establishment success between amphibians and reptiles. Methods. We examined data comprising 206 successful and 165 failed introduction records for 161 species of amphibians to 55 locations, and 560 successful and 641 failed introduction records for 469 species of reptiles to 116 locations around the world. Key results. Globally, establishment success was not different between amphibians (67%) and reptiles (62%). Both means were well above the 10% value predicted by the tens rule. In Europe and North America, establishment success was lower, although still higher than 10%. For reptiles, establishment success was higher on islands than on continents. Our results question the tens rule and do not show taxonomic differences in establishment success. Implications. Similar to studies on other taxa (birds and mammals), we found that establishment success was generally above 40%. This suggests that we should focus management on reducing the number of herptile species introduced because both reptiles and amphibians have a high likelihood of establishing. As data collection on invasions continue, testing establishment success in light of other factors, including propagule pressure, climate matching and taxonomic classifications, may provide additional insight into which species are most likely to establish in particular areas.", "which Measure of resistance/susceptibility ?", "Establishment success", 345.0, 366.0], ["Abstract: Concern over the impact of invaders on biodiversity and on the functioning of ecosystems has generated a rising tide of comparative analyses aiming to unveil the factors that shape the success of introduced species across different regions. One limitation of these studies is that they often compare geographically rather than ecologically defined regions. We propose an approach that can help address this limitation: comparison of invasions across convergent ecosystems that share similar climates. We compared avian invasions in five convergent mediterranean climate systems around the globe. 
Based on a database of 180 introductions representing 121 avian species, we found that the proportion of bird species successfully established was high in all mediterranean systems (more than 40% for all five regions). Species differed in their likelihood to become established, although success was not higher for those originating from mediterranean systems than for those from nonmediterranean regions. Controlling for this taxonomic effect with generalized linear mixed models, species introduced into mediterranean islands did not show higher establishment success than those introduced to the mainland. Susceptibility to avian invaders, however, differed substantially among the different mediterranean regions. The probability that a species will become established was highest in the Mediterranean Basin and lowest in mediterranean Australia and the South African Cape. Our results suggest that many of the birds recently introduced into mediterranean systems, and especially into the Mediterranean Basin, have a high potential to establish self\u2010sustaining populations. This finding has important implications for conservation in these biologically diverse hotspots.", "which Measure of resistance/susceptibility ?", "Establishment success", 1154.0, 1175.0], ["Darwin\u2019s naturalization hypothesis predicts that the success of alien invaders will decrease with increasing taxonomic similarity to the native community. Alternatively, shared traits between aliens and the native assemblage may preadapt aliens to their novel surroundings, thereby facilitating establishment (the preadaptation hypothesis). Here we examine successful and failed introductions of amphibian species across the globe and find that the probability of successful establishment is higher when congeneric species are present at introduction locations and increases with increasing congener species richness. After accounting for positive effects of congeners, residence time, and propagule pressure, we also find that invader establishment success is higher on islands than on mainland areas and is higher in areas with abiotic conditions similar to the native range. These findings represent the first example in which the preadaptation hypothesis is supported in organisms other than plants and suggest that preadaptation has played a critical role in enabling introduced species to succeed in novel environments.", "which Measure of resistance/susceptibility ?", "Establishment success", 736.0, 757.0], ["Biological invasions as drivers of biodiversity loss have recently been challenged. Fundamentally, we must know where species that are threatened by invasive alien species (IAS) live, and the degree to which they are threatened. We report the first study linking 1372 vertebrates threatened by more than 200 IAS from the completely revised Global Invasive Species Database. New maps of the vulnerability of threatened vertebrates to IAS permit assessments of whether IAS have a major influence on biodiversity, and if so, which taxonomic groups are threatened and where they are threatened. We found that centres of IAS-threatened vertebrates are concentrated in the Americas, India, Indonesia, Australia and New Zealand. The areas in which IAS-threatened species are located do not fully match the current hotspots of invasions, or the current hotspots of threatened species. 
The relative importance of biological invasions as drivers of biodiversity loss clearly varies across regions and taxa, and changes over time, with mammals from India, Indonesia, Australia and Europe are increasingly being threatened by IAS. The chytrid fungus primarily threatens amphibians, whereas invasive mammals primarily threaten other vertebrates. The differences in IAS threats between regions and taxa can help efficiently target IAS, which is essential for achieving the Strategic Plan 2020 of the Convention on Biological Diversity.", "which Measure of resistance/susceptibility ?", "IAS-threatened species", 741.0, 763.0], ["Abstract We investigated some of the factors influencing exotic invasion of native sub-alpine plant communities at a site in southeast Australia. Structure, floristic composition and invasibility of the plant communities and attributes of the invasive species were studied. To determine the plant characteristics correlated with invasiveness, we distinguished between roadside invaders, native community invaders and non-invasive exotic species, and compared these groups across a range of traits including functional group, taxonomic affinity, life history, mating system and morphology. Poa grasslands and Eucalyptus-Poa woodlands contained the largest number of exotic species, although all communities studied appeared resilient to invasion by most species. Most community invaders were broad-leaved herbs while roadside invaders contained both herbs and a range of grass species. Over the entire study area the richness and cover of native and exotic herbaceous species were positively related, but exotic herbs were more negatively related to cover of specific functional groups (e.g. trees) than native herbs. Compared with the overall pool of exotic species, those capable of invading native plant communities were disproportionately polycarpic, Asteracean and cross-pollinating. Our data support the hypothesis that strong ecological filtering of exotic species generates an exotic assemblage containing few dominant species and which functionally converges on the native assemblage. These findings contrast with those observed in the majority of invaded natural systems. We conclude that the invasion of closed sub-alpine communities must be viewed in terms of the unique attributes of the invading species, the structure and composition of the invaded communities and the strong extrinsic physical and climatic factors typical of the sub-alpine environment. Nomenclature: Australian Plant Name Index (APNI); http://www.anbg.gov.au/cgi-bin/apni Abbreviations: KNP = Kosciuszko National Park; MRPP = Multi response permutation procedure; VE = Variance explained.", "which Measure of resistance/susceptibility ?", "Number of exotic species", 655.0, 679.0], ["Some theories and experimental studies suggest that areas of low plant spe- cies richness may be invaded more easily than areas of high plant species richness. We gathered nested-scale vegetation data on plant species richness, foliar cover, and frequency from 200 1-m 2 subplots (20 1000-m 2 modified-Whittaker plots) in the Colorado Rockies (USA), and 160 1-m 2 subplots (16 1000-m 2 plots) in the Central Grasslands in Colorado, Wyoming, South Dakota, and Minnesota (USA) to test the generality of this paradigm. At the 1-m 2 scale, the paradigm was supported in four prairie types in the Central Grasslands, where exotic species richness declined with increasing plant species richness and cover. 
At the 1-m 2 scale, five forest and meadow vegetation types in the Colorado Rockies contradicted the paradigm; exotic species richness increased with native-plant species richness and foliar cover. At the 1000-m 2 plot scale (among vegetation types), 83% of the variance in exotic species richness in the Central Grasslands was explained by the total percentage of nitrogen in the soil and the cover of native plant species. In the Colorado Rockies, 69% of the variance in exotic species richness in 1000-m 2 plots was explained by the number of native plant species and the total percentage of soil carbon. At landscape and biome scales, exotic species primarily invaded areas of high species richness in the four Central Grasslands sites and in the five Colorado Rockies vegetation types. For the nine vegetation types in both biomes, exotic species cover was positively correlated with mean foliar cover, mean soil percentage N, and the total number of exotic species. These patterns of invasibility depend on spatial scale, biome and vegetation type, spatial autocorrelation effects, availability of resources, and species-specific responses to grazing and other disturbances. We conclude that: (1) sites high in herbaceous foliar cover and soil fertility, and hot spots of plant diversity (and biodiversity), are invasible in many landscapes; and (2) this pattern may be more closely related to the degree resources are available in native plant communities, independent of species richness. Exotic plant in- vasions in rare habitats and distinctive plant communities pose a significant challenge to land managers and conservation biologists.", "which Measure of resistance/susceptibility ?", "Number of exotic species", 1647.0, 1671.0], ["The invasion of exotic species into assemblages of native plants is a pervasive and widespread phenomenon. Many theoretical and observational studies suggest that diverse communities are more resistant to invasion by exotic species than less diverse ones. However, experimental results do not always support such a relationship. Therefore, the hypothesis of diversity-community invasibility is still a focus of controversy in the field of invasion ecology. In this study, we established and manipulated communities with different species diversity and different species functional groups (16 species belong to C3, C4, forbs and legumes, respectively) to test Elton's hypothesis and other relevant hypotheses by studying the process of invasion. Alligator weed (Alternanthera philoxeroides) was chosen as the invader. We found that the correlation between the decrement of extractable soil nitrogen and biomass of alligator weed was not significant, and that species diversity, independent of functional groups diversity, did not show a significant correlation with invasibility. However, the communities with higher functional groups diversity significantly reduced the biomass of alligator weed by decreasing its resource opportunity. Functional traits of species also influenced the success of the invasion. Alternanthera sessilis, in the same morphological and functional group as alligator weed, was significantly resistant to alligator weed invasion. Because community invasibility is influenced by many factors and interactions among them, the pattern and mechanisms of community invasibility are likely to be far subtler than we found in this study. 
More carefully designed manipulative experiments, coupled with theoretical modelling studies, are essential steps toward a more profound understanding of community invasibility.", "which Measure of species similarity ?", "Functional groups", 570.0, 587.0], ["Summary 1. Although observed functional differences between alien and native plant species support the idea that invasions are favoured by niche differentiation (ND), when considering invasions along large ecological gradients, habitat filtering (HF) has been proposed to constrain alien species such that they exhibit similar trait values to natives. 2. To reconcile these contrasting observations, we used a multiscale approach using plant functional traits to evaluate how biotic interactions with native species and grazing might determine the functional structure of highly invaded grasslands along an elevation gradient in New Zealand. 3. At a regional scale, functional differences between alien and native plant species translated into nonrandom community assembly and high ND. Alien and native species showed contrasting responses to elevation and the degree of ND between them decreased as elevation increased, suggesting a role for HF. At the plant-neighbourhood scale, species with contrasting traits were generally spatially segregated, highlighting the impact of biotic interactions in structuring local plant communities. A confirmatory multilevel path analysis showed that the effect of elevation and grazing was moderated by the presence of native species, which in turn influenced the local abundance of alien species. 4. Our study showed that functional differences between aliens and natives are fundamental to understanding the interplay between multiple mechanisms driving alien species success and their coexistence with natives. In particular, the success of alien species is driven by the presence of native species, which can have a negative (biotic resistance) or a positive (facilitation) effect depending on the functional identity of alien species.", "which Measure of species similarity ?", "Functional traits", 442.0, 459.0], ["Understanding the functional traits that allow invasives to outperform natives is a necessary first step in improving our ability to predict and manage the spread of invaders. In nutrient-limited systems, plant competitive ability is expected to be closely tied to the ability of a plant to exploit nutrient-rich microsites and use these captured nutrients efficiently. The broad objective of this work was to compare the ability of native and invasive perennial forbs to acquire and use nutrients from nutrient-rich microsites. We evaluated morphological and physiological responses among four native and four invasive species exposed to heterogeneous (patch) or homogeneous (control) nutrient distribution. Invasives, on average, allocated more biomass to roots and allocated proportionately more root length to nutrient-rich microsites than did natives. Invasives also had higher leaf N, photosynthetic rates, and photosynthetic nitrogen use efficiency than natives, regardless of treatment. While these results suggest multiple traits may contribute to the success of invasive forbs in low-nutrient environments, we also observed large variation in these traits among native forbs.
These observations support the idea that functional trait variation in the plant community may be a better predictor of invasion resistance than the functional group composition of the plant community.", "which Measure of species similarity ?", "Functional traits", 18.0, 35.0], ["The limiting similarity hypothesis predicts that communities should be more resistant to invasion by non-natives when they include natives with a diversity of traits from more than one functional group. In restoration, planting natives with a diversity of traits may result in competition between natives of different functional groups and may influence the efficacy of different seeding and maintenance methods, potentially impacting native establishment. We compare initial establishment and first-year performance of natives and the effectiveness of maintenance techniques in uniform versus mixed functional group plantings. We seeded ruderal herbaceous natives, longer-lived shrubby natives, or a mixture of the two functional groups using drill- and hand-seeding methods. Non-natives were left undisturbed, removed by hand-weeding and mowing, or treated with herbicide to test maintenance methods in a factorial design. Native functional groups had the highest establishment, growth, and reproduction when planted alone, and hand-seeding resulted in more natives as well as more of the most common invasive, Brassica nigra. Wick herbicide removed more non-natives and resulted in greater reproduction of natives, while hand-weeding and mowing increased native density. Our results point to the importance of considering competition among native functional groups as well as between natives and invasives in restoration. Interactions among functional groups, seeding methods, and maintenance techniques indicate restoration will be easier to implement when natives with different traits are planted separately.", "which Measure of species similarity ?", "Functional groups", 318.0, 335.0], ["Biotic resistance, the ability of species in a community to limit invasion, is central to our understanding of how communities at risk of invasion assemble after disturbances, but it has yet to translate into guiding principles for the restoration of invasion-resistant plant communities. We combined experimental, functional, and modelling approaches to investigate processes of community assembly contributing to biotic resistance to an introduced lineage of Phragmites australis, a model invasive species in North America. We hypothesized that (i) functional group identity would be a good predictor of biotic resistance to P. australis, while species identity effects would be redundant within functional groups, and (ii) mixtures of species would be more invasion resistant than monocultures. We classified 36 resident wetland plants into four functional groups based on eight functional traits. We conducted two competition experiments based on the additive competition design with P. australis and monocultures or mixtures of wetland plants. As an indicator of biotic resistance, we calculated a relative competition index (RCIavg) based on the average performance of P. australis in the competition treatment compared with the control. To explain the diversity effect further, we partitioned it into selection effect and complementarity effect and tested several diversity–interaction models.
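An aside on the RCIavg used in the Phragmites entry above: the abstract names the index but does not spell out its formula, so the sketch below assumes the common form RCI = (B_control − B_mixture) / B_control, computed over hypothetical replicate biomasses.

```python
import numpy as np

def relative_competition_index(biomass_control, biomass_mixture):
    """RCI = (B_control - B_mixture) / B_control (assumed common form).

    B_control: invader performance grown alone; B_mixture: with residents.
    Values near 1 mean strong suppression (high biotic resistance);
    values near 0 mean the residents barely affect the invader.
    """
    b_ctrl = np.asarray(biomass_control, dtype=float)
    b_mix = np.asarray(biomass_mixture, dtype=float)
    return (b_ctrl - b_mix) / b_ctrl

# Hypothetical replicate biomasses (g) of the invader alone vs. in mixture
rci = relative_competition_index([12.1, 10.8, 11.5], [3.2, 4.1, 2.9])
print(round(rci.mean(), 2))  # averaging replicates gives an RCIavg-style score
```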
In monoculture treatments, RCIavg of wetland plants was significantly different among functional groups, but not within each functional group. We found the highest RCIavg for fast-growing annuals, suggesting a priority effect. RCIavg of wetland plants was significantly greater in mixture than in monoculture, mainly due to the complementarity–diversity effect among functional groups. In diversity–interaction models, species interaction patterns in mixtures were described best by interactions between functional groups when fitted to RCIavg or biomass, implying niche partitioning. Synthesis. Functional group identity and diversity of resident plant communities are good indicators of biotic resistance to invasion by introduced Phragmites australis, suggesting niche pre-emption (priority effect) and niche partitioning (diversity effect) as underlying mechanisms. Guiding principles to understand and/or manage biological invasion could emerge from advances in community theory and the use of a functional framework. Targeting widely distributed invasive plants in different contexts and scaling up to field situations will facilitate generalization.", "which Measure of species similarity ?", "Functional traits", 875.0, 892.0], ["Summary 1 Biological invasion can permanently alter ecosystem structure and function. Invasive species are difficult to eradicate, so methods for constraining invasions would be ecologically valuable. We examined the potential of ecological restoration to constrain invasion of an old field by Agropyron cristatum, an introduced C3 grass. 2 A field experiment was conducted in the northern Great Plains of North America. One hundred and forty restored plots were planted in 1994–96 with a mixture of C3 and C4 native grass seed, while 100 unrestored plots were not. Vegetation on the plots was measured periodically between 1994 and 2002. 3 Agropyron cristatum invaded the old field between 1994 and 2002, occurring in 5% of plots in 1994 and 66% of plots in 2002, and increasing in mean cover from 0.2% in 1994 to 17.1% in 2002. However, A. cristatum invaded one-third fewer restored than unrestored plots between 1997 and 2002, suggesting that restoration constrained invasion. Further, A. cristatum cover in restored plots decreased with increasing planted grass cover. Stepwise regression indicated that A. cristatum cover was more strongly correlated with planted grass cover than with distance from the A. cristatum source, species richness, percentage bare ground or percentage litter. 4 The strength of the negative relationship between A. cristatum and planted native grasses varied among functional groups: the correlation was stronger with species with phenology and physiology similar to A. cristatum (i.e. C3 grasses) than with dissimilar species (C4 grasses). 5 Richness and cover of naturally establishing native species decreased with increasing A. cristatum cover. In contrast, restoration had little effect on the establishment and colonization of naturally establishing native species. Thus, A. cristatum hindered colonization by native species while planted native grasses did not. 6 Synthesis and applications. To our knowledge, this study provides the first indication that restoration can act as a filter, constraining invasive species while allowing colonization by native species.
These results suggest that resistance to invasion depends on the identity of species in the community and that restoration seed mixes might be tailored to constrain selected invaders. Restoring areas before invasive species become established can reduce the magnitude of biological invasion.", "which Measure of species similarity ?", "Functional groups", 1398.0, 1415.0], ["Although many theoretical and observational studies suggest that diverse systems are more resistant to invasion by novel species than are less diverse systems, experimental data are uncommon. In this experiment, I manipulated the functional group richness and composition of a grassland community to test two related hypotheses: (1) Diversity and invasion resistance are positively related through diversity's effects on the resources necessary for invading plants' growth. (2) Plant communities resist invasion by species in functional groups already present in the community. To test these hypotheses, I removed plant functional groups (forbs, C3 graminoids, and C4 graminoids) from existing grassland vegetation to create communities that contained all possible combinations of one, two, or three functional groups. After three years of growth, I added seeds of 16 different native prairie species (legumes, nonleguminous forbs, C3 graminoids, and C4 graminoids) to a 1 × 1 m portion of each 4 × 8 m plot. Overall invasion success was negatively related to resident functional group richness, but there was only weak evidence that resident species repelled functionally similar invaders. A weak effect of functional group richness on some resources did not explain the significant diversity-invasibility relationship. Other factors, particularly the different responses of resident functional groups to the initial disturbance of the experimental manipulation, seem to have been more important to community invasibility.", "which Measure of species similarity ?", "Functional groups", 526.0, 543.0], ["Fox's assembly rule, that a relative dearth of certain functional groups in a community will facilitate invasion of that particular functional group, serves as the basis for investigation into the functional group effects of invasion resistance. We explored resistance to plant invaders by eliminating or decreasing the number of understory plant species in particular functional groups from plots at a riparian site in southwestern Virginia, USA. Our functional groups comprise combinations of aboveground biomass and rooting structure type. Manipulated plots were planted with 10 randomly chosen species from widespread native and introduced plants commonly found throughout the floodplains of Big Stony Creek. We assessed success of an invasion by plant survivorship and growth. We analyzed survivorship of functional groups with loglinear models for the analysis of categorical data in a 4-way table. There was a significant interaction between functional groups removed in a plot and survivorship in the functional groups added to that plot. However, survivorship of species in functional groups introduced into plots with their respective functional group removed did not differ from survivorship when any other functional group was removed. Additionally, growth of each of the most abundant species did not differ significantly among plots with different functional groups manipulated. Specifically, species did not fare better in those plots that had representatives of their own functional group removed.
Fox's assembly rule does not hold for these functional groups in this plant community; however, composition of the recipient community is a significant factor in community assembly.", "which Measure of species similarity ?", "Functional groups", 53.0, 70.0], ["What determines the number of alien species in a given region? 'Native biodiversity' and 'human impact' are typical answers to this question. Indeed, studies comparing different regions have frequently found positive relationships between number of alien species and measures of both native biodiversity (e.g. the number of native species) and human impact (e.g. human population). These relationships are typically explained by biotic acceptance or resistance, i.e. by the influence of native biodiversity and human impact on the second step of the invasion process, establishment. The first step of the invasion process, introduction, has often been ignored. Here we investigate whether relationships between the number of alien mammals and native biodiversity or human impact in 43 European countries are mainly shaped by differences in the number of introduced mammals or establishment success. Our results suggest that the correlation between the number of native and established mammals is spurious, as it is simply explainable by the fact that both quantities are linked to country area. We also demonstrate that countries with higher human impact host more alien mammals than other countries because they received more introductions than other countries. Differences in the number of alien mammals cannot be explained by differences in establishment success. Our findings highlight the importance of human activities and question, at least for mammals in Europe, the importance of biotic acceptance and resistance.", "which Measure of native biodiversity ?", "Number of native species", 314.0, 338.0], ["Darwin acknowledged contrasting, plausible arguments for how species invasions are influenced by phylogenetic relatedness to the native community. These contrasting arguments persist today without clear resolution. Using data on the naturalization and abundance of exotic plants in the Auckland region, we show how different expectations can be accommodated through attention to scale, assumptions about niche overlap, and stage of invasion. Probability of naturalization was positively related to the number of native species in a genus but negatively related to native congener abundance, suggesting the importance of both niche availability and biotic resistance. Once naturalized, however, exotic abundance was not related to the number of native congeners, but positively related to native congener abundance. Changing the scale of analysis altered this outcome: within habitats exotic abundance was negatively related to native congener abundance, implying that native and exotic species respond similarly to broad-scale environmental variation across habitats, with biotic resistance occurring within habitats.", "which Measure of native biodiversity ?", "Number of native species", 502.0, 526.0], ["Besides exacerbated exploitation, pollution, flow alteration and habitat degradation, freshwater biodiversity is also threatened by biological invasions. This paper addresses how native aquatic macrophyte communities are affected by the non-native species Urochloa arrecta, a currently successful invader in Brazilian freshwater systems. We compared the native macrophytes colonizing patches dominated and non-dominated by this invader species.
We surveyed eight streams in Northwest Parana State (Brazil). In each stream, we recorded native macrophytes' richness and biomass in sites where U. arrecta was dominant and in sites where it was not dominant or absent. No native species were found in seven out of the eight investigated sites where U. arrecta was dominant. Thus, we found higher native species richness, Shannon index and native biomass values in sites without dominance of U. arrecta than in sites dominated by this invader. Although it is difficult to draw conclusions about the causes of such differences, we infer that the elevated biomass production by this grass might be the primary reason for alterations in invaded environments and for the consequent impacts on macrophytes' native communities. However, biotic resistance offered by richer native sites could be an alternative explanation for our results. To mitigate potential impacts and to prevent future environmental perturbations, we propose mechanical removal of the invasive species and maintenance or restoration of riparian vegetation, as freshwater ecosystems have vital importance for the maintenance of ecological services and biodiversity and should be preserved.", "which Measure of native biodiversity ?", "Shannon index", 817.0, 830.0], ["Assessing the colonizing ability of a species is important for predicting its future distribution or for planning the introduction or reintroduction of that species for conservation purposes. The best way to assess colonizing ability is by making experimental introductions of the species and monitoring the outcome. In this study, different-sized propagules of Roesel's bush-cricket, Metrioptera roeseli, were experimentally introduced into 70 habitat islands, previously uninhabited by the species, in farmland fields in south-eastern Sweden. The areas of introduction were carefully monitored for 2-3 yr to determine whether the propagules had successfully colonized the patches. The study showed that large propagules resulted in larger local populations during the years following introduction. Probability of colonization for each propagule size was measured and showed that propagule size had a significant effect on colonization success, i.e., large propagules were more successful in colonizing new patches. If future introductions were to be made with this or a similar species, a propagule size of at least 32 individuals would be required to establish a viable population with a high degree of certainty.", "which Measure of propagule pressure ?", "Propagule size", 838.0, 852.0], ["Ecological theory on biological invasions attempts to characterize the predictors of invasion success and the relative importance of the different drivers of population establishment. An outstanding question is how propagule pressure determines the probability of population establishment, where propagule pressure is the number of individuals of a species introduced into a specific location (propagule size) and their frequency of introduction (propagule number). Here, we used large-scale replicated mesocosm ponds over three reproductive seasons to identify how propagule size and number predict the probability of establishment of one of the world's most invasive fish, Pseudorasbora parva, as well as its effect on the somatic growth of individuals during establishment.
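A brief aside on the Shannon index reported in the stream-macrophyte entry above: it is the standard diversity measure H' = −Σ p_i ln p_i over species proportions. A minimal sketch with hypothetical abundance counts:

```python
import math

def shannon_index(abundances):
    """Shannon diversity H' = -sum(p_i * ln p_i) over species proportions."""
    total = sum(abundances)
    return -sum((n / total) * math.log(n / total) for n in abundances if n > 0)

# Hypothetical macrophyte counts: an uninvaded site vs. an invader-dominated one
print(shannon_index([12, 8, 5, 3, 2]))  # more, evenly spread species -> higher H'
print(shannon_index([30, 2]))           # dominance by one species -> lower H'
```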
We demonstrated that, although a threshold of 11 introduced pairs of fish (a pair is 1 male, 1 female) was required for establishment probability to exceed 95%, establishment also occurred at low propagule size (1-5 pairs). Although single introduction events were as effective as multiple events at enabling establishment, the propagule sizes used in the multiple introductions were above the detected threshold for establishment. After three reproductive seasons, population abundance was also a function of propagule size, with rapid increases in abundance only apparent when propagule size exceeded 25 pairs. This was initially assisted by adapted biological traits, including rapid individual somatic growth that helped to overcome demographic bottlenecks.", "which Measure of propagule pressure ?", "Propagule size", 394.0, 408.0], ["Theory suggests that introduction effort (propagule size or number) should be a key determinant of establishment success for exotic species. Unfortunately, however, propagule pressure is not recorded for most introductions. Studies must therefore either use proxies whose efficacy must be largely assumed, or ignore effort altogether. The results of such studies will be flawed if effort is not distributed at random with respect to other characteristics that are predicted to influence success. We use global data for more than 600 introduction events for birds to show that introduction effort is both the strongest correlate of introduction success, and correlated with a large number of variables previously thought to influence success. Apart from effort, only habitat generalism relates to establishment success in birds.", "which Measure of propagule pressure ?", "Propagule size", 42.0, 56.0], ["We are now beginning to understand the role of intraspecific diversity on fundamental ecological phenomena. There exists a paucity of knowledge, however, regarding how intraspecific, or genetic, diversity may covary with other important factors such as propagule pressure. A combination of theoretical modelling and experimentation was used to explore the way propagule pressure and genetic richness may interact. We compare colonization rates of the Australian bivalve Saccostrea glomerata (Gould 1885). We cross propagule size and genetic richness in a factorial design in order to examine the generalities of our theoretical model. Modelling showed that diversity and propagule pressure should generally interact synergistically when positive feedbacks occur (e.g. aggregation). The strength of genotype effects depended on propagule size, or the numerical abundance of arriving individuals. When propagule size was very small (<4 individuals), however, greater genetic richness unexpectedly reduced colonization. The probability of S. glomerata colonization was 76% in genetically rich, larger propagules, almost 39 percentage points higher than in genetically poor propagules of similar size. This pattern was not observed in less dense, smaller propagules. We predict that density-dependent interactions between larvae in the water column may explain this pattern.", "which Measure of propagule pressure ?", "Propagule size", 514.0, 528.0], ["Abstract: Factors that contribute to the successful establishment of invasive species are often poorly understood. Propagule size is considered a key determinant of establishment success, but experimental tests of its importance are rare.
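An illustrative aside: the 95% establishment threshold in the Pseudorasbora parva entry above is the kind of quantity typically read off a fitted dose–response curve. A minimal sketch under hypothetical introduction data (scikit-learn assumed; the study's actual data and model may differ):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical replicated introductions: pairs released per pond and
# whether a population established (1) or failed (0).
pairs = np.array([1, 1, 2, 3, 5, 5, 8, 11, 11, 15, 20, 25, 32, 40]).reshape(-1, 1)
established = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1])

model = LogisticRegression().fit(np.log(pairs), established)

# Smallest propagule size on a grid whose predicted probability exceeds 95%
grid = np.arange(1, 101).reshape(-1, 1)
p_est = model.predict_proba(np.log(grid))[:, 1]
above = grid[p_est > 0.95]
print(int(above.min()) if above.size else "no grid size reaches 95%")
```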
We used experimental colonies of the invasive Argentine ant (Linepithema humile) that differed both in worker and queen number to test how these attributes influence the survivorship and growth of incipient colonies. All propagules without workers experienced queen mortality, in contrast to only 6% of propagules with workers. In small propagules (10–1,000 workers), brood production increased with worker number but not queen number. In contrast, per capita measures of colony growth decreased with worker number over these colony sizes. In larger propagules (1,000–11,000 workers), brood production also increased with increasing worker number, but per capita brood production appeared independent of colony size. Our results suggest that queens need workers to establish successfully but that propagules with as few as 10 workers can grow quickly. Given the requirements for propagule success in Argentine ants, it is not surprising how easily they spread via human commerce.", "which Measure of propagule pressure ?", "Propagule size", 115.0, 129.0], ["High propagule pressure is arguably the only consistent predictor of colonization success. More individuals enhance colonization success because they aid in overcoming demographic consequences of small population size (e.g. stochasticity and Allee effects). The number of founders can also have direct genetic effects: with fewer individuals, more inbreeding and thus inbreeding depression will occur, whereas more individuals typically harbour greater genetic variation. Thus, the demographic and genetic components of propagule pressure are interrelated, making it difficult to understand which mechanisms are most important in determining colonization success. We experimentally disentangled the demographic and genetic components of propagule pressure by manipulating the number of founders (fewer or more) and the genetic background (inbred or outbred) of individuals released in a series of three complementary experiments. We used Bemisia whiteflies and released them onto either their natal host (benign) or a novel host (challenging). Our experiments revealed that having more founding individuals and those individuals being outbred both increased the number of adults produced, but that only genetic background consistently shaped the net reproductive rate of experimental populations. Environment was also important and interacted with propagule size to determine the number of adults produced. The quality of the environment also interacted with genetic background to determine establishment success, with a more pronounced effect of inbreeding depression in harsh environments. This interaction did not hold for the net reproductive rate. These data show that the positive effect of propagule pressure on founding success can be driven as much by underlying genetic processes as by demographics. Genetic effects can be immediate and have sizable effects on fitness.", "which Measure of propagule pressure ?", "Propagule size", 1341.0, 1355.0], ["Summary 1. The number of individuals involved in an invasion event, or 'propagule size', has a strong theoretical basis for influencing invasion success. However, rarely has propagule size been experimentally manipulated to examine changes in invader behaviour, and propagule longevity and success. 2. We manipulated propagule size of the invasive Argentine ant Linepithema humile in laboratory and field studies. Laboratory experiments involved L. humile propagules containing two queens and 10, 100, 200 or 1000 workers.
Propagules were introduced into arenas containing colonies of queens and 200 workers of the competing native ant Monomorium antarcticum. The effects of food availability were investigated via treatments of either one central resource or 10 separated resources. 3. In laboratory studies, small propagules of L. humile were quickly annihilated. Only the larger propagule size survived and killed the native ant colony in some replicates. Aggression was largely independent of food availability, but the behaviour of L. humile changed substantially with propagule size. In larger propagules, aggressive behaviour was significantly more frequent, while L. humile were much more likely to avoid conflict in smaller propagules. 4. In field studies, however, propagule size did not influence colony persistence. Linepithema humile colonies persisted for up to 2 months, even in small propagules of 10 workers. Factors such as temperature or competitor abundance had no effect, although some colonies were decimated by M. antarcticum. 5. Synthesis and applications. Although propagule size has been correlated with invasion success in a wide variety of taxa, our results indicate that it will have limited predictive power with species displaying behavioural plasticity. We recommend that aspects of animal behaviour be given much more consideration in attempts to model invasion success. Secondly, areas of high biodiversity are thought to offer biotic resistance to invasion via the abundance of predators and competitors. Invasive pests such as L. humile appear to modify their behaviour according to local conditions, and establishment was not related to resource availability. We cannot necessarily rely on high levels of native biodiversity to repel invasions.", "which Measure of propagule pressure ?", "Propagule size", 72.0, 86.0], ["Hybrid energy systems (HESs) generate electricity from multiple energy sources that complement each other. Recently, due to the reduction in costs of photovoltaic (PV) modules and wind turbines, these types of systems have become economically competitive. In this study, a mathematical programming model is applied to evaluate the techno-economic feasibility of autonomous units located in two isolated areas of Ecuador: first, the province of Galapagos (subtropical island) and second, the province of Morona Santiago (Amazonian tropical forest). The two case studies suggest that HESs are potential solutions to reduce the dependence of rural villages on fossil fuels and viable mechanisms to bring electrical power to isolated communities in Ecuador. Our results reveal that, not only from the economic but also from the environmental point of view, a hybrid energy system with a PV–wind–battery configuration and a levelized cost of energy (LCOE) equal to 0.36 $/kWh is the optimal energy supply system for the case of the Galapagos province. For the case of Morona Santiago, a hybrid energy system with a PV–diesel–battery configuration and an LCOE equal to 0.37 $/kWh is the most suitable configuration to meet the load of a typical isolated community in Ecuador.
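The LCOE figures quoted in the hybrid-energy entry above follow from a standard definition: discounted lifetime cost divided by discounted lifetime energy. A minimal sketch with hypothetical system numbers (not the study's inputs):

```python
def lcoe(capex, annual_opex, annual_energy_kwh, discount_rate, lifetime_years):
    """Levelized cost of energy = discounted lifetime cost / discounted energy."""
    years = range(1, lifetime_years + 1)
    cost = capex + sum(annual_opex / (1 + discount_rate) ** t for t in years)
    energy = sum(annual_energy_kwh / (1 + discount_rate) ** t for t in years)
    return cost / energy

# Hypothetical PV-wind-battery microgrid: $180k upfront, $6k/yr O&M,
# 60 MWh/yr delivered, 8% discount rate, 20-year horizon
print(f"LCOE = {lcoe(180_000, 6_000, 60_000, 0.08, 20):.2f} $/kWh")  # ~0.41 $/kWh
```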
The proposed optimization model can be used as a decision-support tool for evaluating the viability of autonomous HES projects at any other location.", "which System components ?", "Wind turbine", NaN, NaN], ["A capacitance-voltage (C–V) model is developed for RF microelectromechanical systems (MEMS) switches in the up-state and down-state. The transient capacitance response of the RF MEMS switches at different switch states was measured for different humidity levels. By using the C–V model as well as the dependence of the voltage shift on trapped charges, the transient trapped charges at different switch states and humidity levels are obtained. Charging models at different switch states are explored in detail. It is shown that the injected charges increase linearly with humidity levels and the internal polarization increases with increasing humidity in the down-state. The speed of charge injection at 80% relative humidity (RH) is about ten times faster than that at 20% RH. A measurement of pull-in voltage shifts by C–V sweep cycles at 20% and 80% RH gives reasonable evidence. The present model is useful to understand the pull-in voltage shift of the RF MEMS switch.", "which Study Area ?", " Microelectromechanical Systems (MEMS", NaN, NaN], ["This paper, the first of two parts, presents an electromagnetic model for membrane microelectromechanical systems (MEMS) shunt switches for microwave/millimeter-wave applications. The up-state capacitance can be accurately modeled using three-dimensional static solvers, and full-wave solvers are used to predict the current distribution and inductance of the switch. The loss in the up-state position is equivalent to the coplanar waveguide line loss and is 0.01-0.02 dB at 10-30 GHz for a 2-μm-thick Au MEMS shunt switch. It is seen that the capacitance, inductance, and series resistance can be accurately extracted from DC-40 GHz S-parameter measurements. It is also shown that a dramatic increase in the down-state isolation (20+ dB) can be achieved with the choice of the correct LC series resonant frequency of the switch. In part 2 of this paper, the equivalent capacitor-inductor-resistor model is used in the design of tuned high isolation switches at 10 and 30 GHz.", "which Study Area ?", "Microelectromechanical Systems (MEMS)", NaN, NaN], ["This paper reports on the experimental and theoretical characterization of RF microelectromechanical systems (MEMS) switches for high-power applications. First, we investigate the problem of self-actuation due to high RF power and we demonstrate switches that do not self-actuate or catastrophically fail with a measured RF power of up to 5.5 W. Second, the problem of switch stiction to the down state as a function of the applied RF power is also theoretically and experimentally studied. Finally, a novel switch design with a top electrode is introduced and its advantages related to RF power-handling capabilities are presented. By applying this technology, we demonstrate hot-switching measurements with a maximum power of 0.8 W. Our results, backed by theory and measurements, illustrate that careful design can significantly improve the power-handling capabilities of RF MEMS switches.", "which Study Area ?", "Microelectromechanical Systems (MEMS)", NaN, NaN], ["A new technique for the fabrication of radio frequency (RF) microelectromechanical systems (MEMS) shunt switches in recessed coplanar waveguide (CPW) configuration on glass substrates is presented.
Membranes with a low spring constant are used for reducing the pull-in voltage. A layer of silicon dioxide is deposited on a glass wafer and is used to form the recess, which partially defines the gap between the membrane and signal line. Positive photoresist S1813 is used as a sacrificial layer and gold as the membrane material. The membranes are released with the help of Piranha solution and finally rinsed in a low-surface-tension liquid to avoid stiction during release. Switches with 500-µm-long two-meander membranes show very high isolation of greater than 40 dB at their resonant frequency of 61 GHz and a pull-in voltage of less than 15 V, while switches with 700-µm-long six-strip membranes show isolation greater than 30 dB at the frequency of 65 GHz and a pull-in voltage of less than 10 V. Both types of switches show an insertion loss of less than 0.65 dB up to 65 GHz.", "which Study Area ?", "Microelectromechanical Systems (MEMS)", NaN, NaN], ["RF microelectromechanical systems (MEMS) capacitive switches for two different dielectrics, aluminum nitride (AlN) and silicon nitride (Si3N4), are presented. The switches have been characterized and compared in terms of DC and RF performance (5-40 GHz). Switches based on AlN have higher down-state capacitance for similar dielectric thicknesses and provide better isolation and smaller insertion losses compared to Si3N4 switches. Experiments were carried out on RF MEMS switches with stiffening bars to prevent membrane deformation due to residual stress and with different spring and meander-type anchor designs. For a ~300-nm dielectric thickness, an air gap of 2.3 μm and identical spring-type designs, the AlN switches systematically show an improvement in the isolation by more than -12 dB (-35.8 dB versus -23.7 dB) and a better insertion loss (-0.68 dB versus -0.90 dB) at 40 GHz compared to Si3N4. DC measurements show small leakage current densities for both dielectrics (<10⁻⁸ A/cm² at 1 MV/cm). However, the resulting leakage current for AlN devices is ten times higher than for Si3N4 when applying a larger electric field. The fabricated switches were also stressed by applying different voltages in air and vacuum, and dielectric charging effects were investigated. AlN switches eliminate the residual or injected charge faster than the Si3N4 devices do.", "which Study Area ?", "Microelectromechanical Systems (MEMS)", NaN, NaN], ["This paper studies the effect of surface roughness on up-state and down-state capacitances of microelectromechanical systems (MEMS) capacitive switches. When the root-mean-square (RMS) roughness is 10 nm, the up-state capacitance is approximately 9% higher than the theoretical value. When the metal bridge is driven down, the normalized contact area between the metal bridge and the surface of the dielectric layer is less than 1% if the RMS roughness is larger than 2 nm. Therefore, the down-state capacitance is actually determined by the non-contact part of the metal bridge. The normalized isolation is only 62% for RMS roughness of 10 nm when the hold-down voltage is 30 V. The analysis also shows that the down-state capacitance and the isolation increase with the hold-down voltage. The normalized isolation increases from 58% to 65% when the hold-down voltage increases from 10 V to 60 V for RMS roughness of 10 nm.", "which Study Area ?", "Microelectromechanical Systems (MEMS) ", NaN, NaN], ["High-performance p-type oxide thin film transistors (TFTs) have great potential for many semiconductor applications.
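The pull-in voltages in the MEMS switch entries above follow from the standard parallel-plate electrostatic actuation model, V_PI = sqrt(8·k·g0³ / (27·ε0·A)). A minimal sketch with hypothetical membrane dimensions (not the fabricated devices' values):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def pull_in_voltage(k, gap, area):
    """Parallel-plate pull-in voltage V_PI = sqrt(8*k*g0^3 / (27*eps0*A)).

    k: effective spring constant (N/m), gap: zero-bias gap g0 (m),
    area: electrostatic actuation area A (m^2).
    """
    return math.sqrt(8 * k * gap**3 / (27 * EPS0 * area))

# Hypothetical low-spring-constant membrane: k = 4 N/m, 2.5 um gap,
# 100 um x 100 um actuation pad -> roughly 14.5 V, i.e. a sub-15 V switch
print(f"V_PI = {pull_in_voltage(4, 2.5e-6, 100e-6 * 100e-6):.1f} V")
```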
However, these devices typically suffer from low hole mobility and high off-state currents. We fabricated p-type TFTs with a phase-pure polycrystalline Cu2O semiconductor channel grown by atomic layer deposition (ALD). The TFT switching characteristics were improved by applying a thin ALD Al2O3 passivation layer on the Cu2O channel, followed by vacuum annealing at 300 °C. Detailed characterization by transmission electron microscopy-energy dispersive X-ray analysis and X-ray photoelectron spectroscopy shows that the surface of Cu2O is reduced following Al2O3 deposition and indicates the formation of a 1-2 nm thick CuAlO2 interfacial layer. This, together with field-effect passivation caused by the high negative fixed charge of the ALD Al2O3, leads to an improvement in the TFT performance by reducing the density of deep trap states as well as by reducing the accumulation of electrons in the semiconducting layer in the device off-state.", "which Film deposition method ?", "Atomic layer deposition (ALD)", NaN, NaN], ["In this study, we successfully deposit c-axis oriented aluminum nitride (AlN) piezoelectric films at low temperature (100 °C) via the DC sputtering method with a tilted gun. The X-ray diffraction (XRD) observations prove that the deposited films have a c-axis preferred orientation. The effective d33 value of the proposed films is 5.92 pm/V, which is better than most of the reported data using DC sputtering or other processing methods. It is found that the gun placed at 25° helped the films to rearrange at low temperature, and c-axis oriented AlN films were successfully grown at 100 °C. This temperature is much lower than the reported growing temperature. It means the piezoelectric films can be deposited on flexible substrates and the photoresist can remain stable at this temperature. Cantilever-beam-type microelectromechanical systems (MEMS) piezoelectric accelerometers are then fabricated based on the proposed AlN films with a lift-off process. The results show that the responsivity of the proposed devices is 8.12 mV/g, and the resonance frequency is 460 Hz, which indicates they can be used for machine tools.", "which Film deposition method ?", "DC sputtering method", 134.0, 154.0], ["This paper presents the effect of thermal annealing on a β-Ga2O3 thin-film solar-blind (SB) photodetector (PD) synthesized on c-plane sapphire substrates by low pressure chemical vapor deposition (LPCVD). The thin films were synthesized using high-purity gallium (Ga) and oxygen (O2) as source precursors. The annealing was performed ex situ under an oxygen atmosphere, which helped to reduce oxygen or oxygen-related vacancies in the thin film. Metal/semiconductor/metal (MSM) type photodetectors were fabricated using both the as-grown and annealed films. The PDs fabricated on the annealed films had lower dark current, higher photoresponse and an improved rejection ratio (R250/R370 and R250/R405) compared to the ones fabricated on the as-grown films. These improved PD performances are due to the significant reduction of the photo-generated carriers trapped by oxygen or oxygen-related vacancies.", "which Film deposition method ?", "Low pressure chemical vapor deposition (LPCVD)", NaN, NaN], ["Highly robust poly-Si thin-film transistor (TFT) on polyimide (PI) substrate using blue laser annealing (BLA) of amorphous silicon (a-Si) for lateral crystallization is demonstrated.
Its foldability is compared with the conventional excimer laser annealing (ELA) poly-Si TFT on PI used for foldable displays, exhibiting a field-effect mobility of 85 cm² (V s)⁻¹. The BLA poly-Si TFT on PI exhibits a field-effect mobility, threshold voltage (VTH), and subthreshold swing of 153 cm² (V s)⁻¹, −2.7 V, and 0.2 V dec⁻¹, respectively. The most important finding is the excellent foldability of the BLA TFT compared with the ELA poly-Si TFTs on PI substrates. The VTH shift of the BLA poly-Si TFT is ≈0.1 V, which is much smaller than that (≈2 V) of the ELA TFT on PI upon 30 000 cycle folding. The defects are generated at the grain boundary region of ELA poly-Si during folding. However, BLA poly-Si has no protrusion in the poly-Si channel and thus no defect generation during folding. This leads to excellent foldability of BLA poly-Si on PI substrate.", "which Film deposition method ?", "Blue laser annealing (BLA) of amorphous silicon ", NaN, NaN], ["Highly c-axis oriented aluminum nitride (AlN) films were successfully deposited on flexible Hastelloy tapes by middle-frequency magnetron sputtering. The microstructure and piezoelectric properties of the AlN films were investigated. The results show that the AlN films deposited directly on the bare Hastelloy substrate have a rough surface, with a root-mean-square (RMS) roughness of 32.43 nm, and the full width at half maximum (FWHM) of their AlN (0002) peak is 12.5°. However, the AlN films deposited on the Hastelloy substrate with a Y2O3 buffer layer show a smooth surface, with an RMS roughness of 5.46 nm, and the FWHM of their AlN (0002) peak is only 3.7°. The piezoelectric coefficient d33 of the AlN films deposited on the Y2O3/Hastelloy substrate is more than three times that of the AlN films deposited on the bare Hastelloy substrate. The prepared highly c-axis oriented AlN films can be used to develop high-temperature flexible SAW sensors.", "which Device type ?", "Flexible SAW sensor", NaN, NaN], ["In this study, we successfully deposit c-axis oriented aluminum nitride (AlN) piezoelectric films at low temperature (100 °C) via the DC sputtering method with a tilted gun. The X-ray diffraction (XRD) observations prove that the deposited films have a c-axis preferred orientation. The effective d33 value of the proposed films is 5.92 pm/V, which is better than most of the reported data using DC sputtering or other processing methods. It is found that the gun placed at 25° helped the films to rearrange at low temperature, and c-axis oriented AlN films were successfully grown at 100 °C. This temperature is much lower than the reported growing temperature. It means the piezoelectric films can be deposited on flexible substrates and the photoresist can remain stable at this temperature. Cantilever-beam-type microelectromechanical systems (MEMS) piezoelectric accelerometers are then fabricated based on the proposed AlN films with a lift-off process. The results show that the responsivity of the proposed devices is 8.12 mV/g, and the resonance frequency is 460 Hz, which indicates they can be used for machine tools.", "which Device type ?", "Piezoelectric accelerometers ", 847.0, 876.0], ["Highly sensitive, transparent, and durable pressure sensors are fabricated using sea-urchin-shaped metal nanoparticles and insulating polyurethane elastomer.
The pressure sensors exhibit outstanding sensitivity (2.46 kPa⁻¹), superior optical transmittance (84.8% at 550 nm), fast response/relaxation time (30 ms), and excellent operational durability. In addition, the pressure sensors successfully detect minute movements of human muscles.", "which Piezoresistive Material ?", "Metal Nanoparticles", 99.0, 118.0], ["Registration systems for diseases and other health outcomes provide an important resource for biomedical research, as well as tools for public health surveillance and improvement of quality of care. The Ministry of Health and Medical Education (MOHME) of Iran launched a national program to establish registration systems for different diseases and health outcomes. Based on the national program, we organized several workshops and training programs and disseminated the concepts and knowledge of the registration systems. Following a call for proposals, we received 100 applications and, after thorough evaluation and corrections by the principal investigators, we approved and granted about 80 registries for three years. Having a strong steering committee and a committed executive and scientific group, establishing national and international collaboration, stating clear objectives, applying feasible software, and considering stable financing were key components for a successful registry and were considered in the evaluation processes. We paid particular attention to non-communicable diseases, which constitute an emerging public health problem. We prioritized the establishment of regional population-based cancer registries (PBCRs) in 10 provinces in collaboration with the International Agency for Research on Cancer. This initiative was successful, and registry programs became popular among researchers and research centers and created several national and international collaborations in different areas to answer important public health and clinical questions. In this paper, we report the details of the program and the list of registries that were granted in the first round.", "which Executive ?", "Ministry of Health and Medical Education", 200.0, 240.0], ["Focusing on the archival records of the production and performance of Dance in Trees and Church by the Swedish independent dance group Rubicon, this article conceptualizes a records-oriented costume ethics. Theorizations of costume as a co-creative agent of performance are brought into the dance archive to highlight the productivity of paying attention to costume in the making of performance history. Addressing recent developments within archival studies, a feminist ethics of care and radical empathy is employed, which is the capability to empathically engage with others, even if it can be difficult, as a means of exploring how a records-centred costume ethics can be conceptualized for the dance archive. The exploration resulted in two ethical stances useful for better attending to costume-bodies in the dance archive: (1) caring for costume-body relations in the dance archive means that a conventional, so-called static understanding of records as neutral carriers of facts is replaced by a more inclusive, expanding and infinite process. By moving across time and space, and with a caring attitude finding and exploring fragments from various, sometimes contradictory production processes, one can help scattered and poorly represented dance and costume histories to emerge and contribute to the formation of identity and memory.
(2) The use of bodily empathy with records can respectfully bring together the understanding of costume in performance as inseparable from the performer\u2019s body with dance as an art form that explicitly uses the dancing costume-body as an expressive tool. It is argued that bodily empathy with records in the dance archive helps one access bodily holisms that create possibilities for exploring the potential of art to critically expose and render strange ideological systems and normativities.", "which performance title ?", "Dance in Trees and Church", 91.0, 116.0], ["Focusing on the archival records of the production and performance of Dance in Trees and Church by the Swedish independent dance group Rubicon, this article conceptualizes a records-oriented costume ethics. Theorizations of costume as a co-creative agent of performance are brought into the dance archive to highlight the productivity of paying attention to costume in the making of performance history. Addressing recent developments within archival studies, a feminist ethics of care and radical empathy is employed, which is the capability to empathically engage with others, even if it can be difficult, as a means of exploring how a records-centred costume ethics can be conceptualized for the dance archive. The exploration resulted in two ethical stances useful for better attending to costume-bodies in the dance archive: (1) caring for costume-body relations in the dance archive means that a conventional, so-called static understanding of records as neutral carriers of facts is replaced by a more inclusive, expanding and infinite process. By moving across time and space, and with a caring attitude finding and exploring fragments from various, sometimes contradictory production processes, one can help scattered and poorly represented dance and costume histories to emerge and contribute to the formation of identity and memory. (2) The use of bodily empathy with records can respectfully bring together the understanding of costume in performance as inseparable from the performer\u2019s body with dance as an art form that explicitly uses the dancing costume-body as an expressive tool. It is argued that bodily empathy with records in the dance archive helps one access bodily holisms that create possibilities for exploring the potential of art to critically expose and render strange ideological systems and normativities.", "which has research domain ?", "Theorizations of costume as a co-creative agent of performance", 242.0, 304.0], ["Urban performance currently depends not only on a city's endowment of hard infrastructure (physical capital), but also, and increasingly so, on the availability and quality of knowledge communication and social infrastructure (human and social capital). The latter form of capital is decisive for urban competitiveness. Against this background, the concept of the \u201csmart city\u201d has recently been introduced as a strategic device to encompass modern urban production factors in a common framework and, in particular, to highlight the importance of Information and Communication Technologies (ICTs) in the last 20 years for enhancing the competitive profile of a city. The present paper aims to shed light on the often elusive definition of the concept of the \u201csmart city.\u201d We provide a focused and operational definition of this construct and present consistent evidence on the geography of smart cities in the EU27. 
Our statistical and graphical analyses exploit in depth, for the first time to our knowledge, the most recent version of the Urban Audit data set in order to analyze the factors determining the performance of smart cities. We find that the presence of a creative class, the quality of and dedicated attention to the urban environment, the level of education, and the accessibility to and use of ICTs for public administration are all positively correlated with urban wealth. This result prompts the formulation of a new strategic agenda for European cities that will allow them to achieve sustainable urban development and a better urban landscape.", "which has research domain ?", "definition of the concept of the “smart city.”", NaN, NaN], ["An optimization study of the mix ratio for substitution of Wheat flour with Cassava and African Yam Bean flours (AYB) was carried out and reported in this paper. The aim was to obtain a mix ratio that would optimise selected physical properties of the bread. Wheat flour was substituted with Cassava and African Yam Bean flours at different levels: 80% to 100% of wheat, 0% to 10% of cassava flour and 0% to 10% of AYB flour. The experiment was conducted using a mixture design generated and analysed with Design-Expert Software version 11. The composite dough was prepared in different mix ratios according to the design matrix and subsequently baked under the same conditions and analysed for the following loaf quality attributes: Loaf Specific Volume, Bread Crumb Hardness and Crumb Colour Index as response variables. The objective functions were to maximize Loaf Specific Volume, minimize Wheat flour, Bread Crumb Hardness and Crumb Colour Index to obtain the most suitable substitution ratio acceptable to consumers. Predictive models for the response variables were developed with a coefficient of determination (R²) of 0.991 for Loaf Specific Volume (LSV), while those of Bread Crumb Hardness (BCH) and Crumb Colour Index (CCI) were 0.834 and 0.895, respectively, at a 95% confidence interval (CI). The predicted optimal substitution ratio was obtained as follows: 88% Wheat flour, 10% Cassava flour, and 2% AYB flour. At this formulation, the predicted Loaf Specific Volume was 2.11 cm³/g, Bread Crumb Hardness was 25.12 N, and Crumb Colour Index was 18.88. The study shows that the addition of 2% AYB flour in the formulation would help to optimise the LSV, BCH and the CCI of the Wheat-Cassava flour bread at the mix ratio of 88:10. Application of the results of this study in bread industries will reduce the cost of bread in Nigeria, which is influenced by the rising cost of imported wheat. This is a significant development because wheat flour was the sole baking flour in Nigeria before the wheat substitution initiative.", "which non wheat flour ?", "Cassava and African Yam Bean", 76.0, 104.0], ["Acute lymphoblastic leukemia (ALL) is the most common childhood malignancy, and implementation of risk-adapted therapy has been instrumental in the dramatic improvements in clinical outcomes. A key to risk-adapted therapies includes the identification of genomic features of individual tumors, including chromosome number (for hyper- and hypodiploidy) and gene fusions, notably ETV6-RUNX1, TCF3-PBX1, and BCR-ABL1 in B-cell ALL (B-ALL). RNA-sequencing (RNA-seq) of large ALL cohorts has expanded the number of recurrent gene fusions recognized as drivers in ALL, and identification of these new entities will contribute to refining ALL risk stratification.
We used RNA-seq on 126 ALL patients from our clinical service to test the utility of including RNA-seq in standard-of-care diagnostic pipelines to detect gene rearrangements and IKZF1 deletions. RNA-seq identified 86% of rearrangements detected by standard-of-care diagnostics. KMT2A (MLL) rearrangements, although usually identified, were the most commonly missed by RNA-seq as a result of low expression. RNA-seq identified rearrangements that were not detected by standard-of-care testing in 9 patients. These were found in patients who were not classifiable using standard molecular assessment. We developed an approach to detect the most common IKZF1 deletion from RNA-seq data and validated this using an RQ-PCR assay. We applied an expression classifier to identify Philadelphia chromosome-like B-ALL patients. T-ALL proved a rich source of novel gene fusions, which have clinical implications or provide insights into disease biology. Our experience shows that RNA-seq can be implemented within an individual clinical service to enhance the current molecular diagnostic risk classification of ALL.", "which Population size ?", "b-cell ALL", 417.0, 427.0], ["Remote sensing techniques have emerged as an asset for various geological studies. Satellite images obtained by different sensors contain plenty of information related to the terrain. Digital image processing further helps in customized ways for the prospecting of minerals. In this study, an attempt has been made to map the hydrothermally altered zones using multispectral and hyperspectral datasets of South East Rajasthan. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Hyperion (Level1R) datasets have been processed to generate different Band Ratio Composites (BRCs). For this study, ASTER-derived BRCs were generated to delineate the alteration zones, gossans, abundant clays and host rocks. ASTER and Hyperion images were further processed to extract mineral end members, and classified mineral maps have been produced using the Spectral Angle Mapper (SAM) method. Results were validated with the geological map of the area, which shows positive agreement with the image processing outputs. Thus, this study concludes that band ratios and image processing in combination play a significant role in the demarcation of alteration zones, which may provide pathfinders for mineral prospecting studies. Keywords—Advanced space-borne thermal emission and reflection radiometer, ASTER, Hyperion, Band ratios, Alteration zones, spectral angle mapper.", "which Band Parameters ?", "Band Ratio", 571.0, 581.0], ["Reducing the number of image bands input for principal component analysis (PCA) ensures that certain materials will not be mapped and increases the likelihood that others will be unequivocally mapped into only one of the principal component images. In arid terrain, PCA of four TM bands will avoid iron-oxide and thus more reliably detect hydroxyl-bearing minerals if only one input band is from the visible spectrum. PCA for iron-oxide mapping will avoid hydroxyls if only one of the SWIR bands is used. A simple principal component color composite image can then be created in which anomalous concentrations of hydroxyl, hydroxyl plus iron-oxide, and iron-oxide are displayed brightly in red-green-blue (RGB) color space.
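The SAM classification named in the remote-sensing entry above scores each pixel by the angle between its spectrum and a reference (endmember) spectrum; the smallest angle wins. A minimal numpy sketch with hypothetical 6-band spectra:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral Angle Mapper: angle (radians) between two spectra."""
    p = np.asarray(pixel, dtype=float)
    r = np.asarray(reference, dtype=float)
    cos = np.dot(p, r) / (np.linalg.norm(p) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical endmember library; classify the pixel by the smallest angle
pixel = [0.12, 0.18, 0.22, 0.35, 0.40, 0.31]
endmembers = {
    "kaolinite": [0.10, 0.16, 0.20, 0.37, 0.43, 0.30],
    "hematite":  [0.25, 0.30, 0.28, 0.22, 0.20, 0.18],
}
print(min(endmembers, key=lambda m: spectral_angle(pixel, endmembers[m])))
```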
This composite allows qualitative inferences on alteration type and intensity to be made which can be widely applied.", "which Data Dimensionality Reduction Methods ?", "Principal Component Analysis (PCA)", NaN, NaN], ["Principal component analysis (PCA) is an image processing technique that has been commonly applied to Landsat Thematic Mapper (TM) data to locate hydrothermal alteration zones related to metallic deposits. With the advent of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), a 14-band multispectral sensor operating onboard the Earth Observation System (EOS)-Terra satellite, the availability of spectral information in the shortwave infrared (SWIR) portion of the electromagnetic spectrum has been greatly increased. This allows detailed spectral characterization of surface targets, particularly of those belonging to the groups of minerals with diagnostic spectral features in this wavelength range, including phyllosilicates (\u2018clay\u2019 minerals), sulphates and carbonates, among others. In this study, PCA was applied to ASTER bands covering the SWIR with the objective of mapping the occurrence of mineral endmembers related to an epithermal gold prospect in Patagonia, Argentina. The results illustrate ASTER's ability to provide information on alteration minerals which are valuable for mineral exploration activities and support the role of PCA as a very effective and robust image processing technique for that purpose.", "which Data Dimensionality Reduction Methods ?", "Principal Component Analysis (PCA)", NaN, NaN], ["Remote sensing techniques have emerged as an asset for various geological studies. Satellite images obtained by different sensors contain plenty of information related to the terrain. Digital image processing further helps in customized ways for the prospecting of minerals. In this study, an attempt has been made to map the hydrothermally altered zones using multispectral and hyperspectral datasets of South East Rajasthan. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Hyperion (Level1R) dataset have been processed to generate different Band Ratio Composites (BRCs). For this study, ASTER derived BRCs were generated to delineate the alteration zones, gossans, abundant clays and host rocks. ASTER and Hyperion images were further processed to extract mineral end members and classified mineral maps have been produced using Spectral Angle Mapper (SAM) method. Results were validated with the geological map of the area which shows positive agreement with the image processing outputs. Thus, this study concludes that the band ratios and image processing in combination play significant role in demarcation of alteration zones which may provide pathfinders for mineral prospecting studies. Keywords\u2014Advanced space-borne thermal emission and reflection radiometer, ASTER, Hyperion, Band ratios, Alteration zones, spectral angle mapper.", "which Spectral Mapping Technique ?", "Spectral Angle Mapper (SAM)", NaN, NaN], ["Abstract The objective of the present study is to estimate total organic carbon (TOC) content over the entire thickness of Cambay Shale, in the boreholes of Jambusar\u2013Broach block of Cambay Basin, India. To achieve this objective, support vector regression (SVR), a supervised data mining technique, has been utilized using five basic wireline logs as input variables. 
Suitable SVR model has been developed by selecting epsilon-SVR algorithm and varying three different kernel functions and parameters like gamma and cost on a sample dataset. The best result is obtained when the radial-basis kernel function with gamma = 1 and cost = 1, are used. Finally, the performance of developed SVR model is compared with the \u0394logR method. The TOC computed by SVR method is found to be more precise than the \u0394logR method, as it has better agreement with the core-TOC. Thus, in the present study area, the SVR method is found to be a powerful tool for estimating TOC of Cambay Shale in a continuous and rapid manner.", "which Area of study ?", "Cambay Basin, India", 182.0, 201.0], ["Abstract The objective of the present study is to estimate total organic carbon (TOC) content over the entire thickness of Cambay Shale, in the boreholes of Jambusar\u2013Broach block of Cambay Basin, India. To achieve this objective, support vector regression (SVR), a supervised data mining technique, has been utilized using five basic wireline logs as input variables. Suitable SVR model has been developed by selecting epsilon-SVR algorithm and varying three different kernel functions and parameters like gamma and cost on a sample dataset. The best result is obtained when the radial-basis kernel function with gamma = 1 and cost = 1, are used. Finally, the performance of developed SVR model is compared with the \u0394logR method. The TOC computed by SVR method is found to be more precise than the \u0394logR method, as it has better agreement with the core-TOC. Thus, in the present study area, the SVR method is found to be a powerful tool for estimating TOC of Cambay Shale in a continuous and rapid manner.", "which Machine learning algorithms ?", "support vector regression", 230.0, 255.0], ["Over the past decade, people have been expressing more and more of their personalities online. Online social networks such as Facebook.com capture much of individuals' personalities through their published interests, attributes and social interactions. Knowledge of an individual's personality can be of wide utility, either for social research, targeted marketing or a variety of other fields A key problem to predicting and utilizing personality information is the myriad of ways it is expressed across various people, locations and cultures. Similarly, a model predicting personality based on online data which cannot be extrapolated to \"real world\" situations is of limited utility for researchers. This paper presents initial work done on generating a probabilistic model of personality which uses representations of people's connections to other people, places, cultures, and ideas, as expressed through Face book. To this end, personality was predicted using a machine learning method known as a Bayesian Network. The model was trained using Face book data combined with external data sources to allow further inference. The results of this paper present one predictive model of personality that this project has produced. This model demonstrates the potential of this methodology in two ways: First, it is able to explain up to 56% of all variation in a personality trait from a sample of 615 individuals. 
Second it is able to clearly present how this variability is explained through findings such as how to determine how agreeable a man is based on his age, number of Face book wall posts, and his willingness to disclose his preference for music made by Lady Gaga.", "which Machine learning algorithms ?", "Bayesian Network", 1003.0, 1019.0], ["Twitter enables large populations of end-users of software to publicly share their experiences and concerns about software systems in the form of micro-blogs. Such data can be collected and classified to help software developers infer users' needs, detect bugs in their code, and plan for future releases of their systems. However, automatically capturing, classifying, and presenting useful tweets is not a trivial task. Challenges stem from the scale of the data available, its unique format, diverse nature, and high percentage of irrelevant information and spam. Motivated by these challenges, this paper reports on a three-fold study that is aimed at leveraging Twitter as a main source of software user requirements. The main objective is to enable a responsive, interactive, and adaptive data-driven requirements engineering process. Our analysis is conducted using 4,000 tweets collected from the Twitter feeds of 10 software systems sampled from a broad range of application domains. The results reveal that around 50% of collected tweets contain useful technical information. The results also show that text classifiers such as Support Vector Machines and Naive Bayes can be very effective in capturing and categorizing technically informative tweets. Additionally, the paper describes and evaluates multiple summarization strategies for generating meaningful summaries of informative software-relevant tweets.", "which Machine learning algorithms ?", "Naive bayes", 1166.0, 1177.0], ["Context: The leading App distribution platforms, Apple App Store, Google Play, and Windows Phone Store, have over 4 million Apps. Research shows that user reviews contain abundant useful information which may help developers to improve their Apps. Extracting and considering Non-Functional Requirements (NFRs), which describe a set of quality attributes wanted for an App and are hidden in user reviews, can help developers to deliver a product which meets users' expectations. Objective: Developers need to be aware of the NFRs from massive user reviews during software maintenance and evolution. Automatic user reviews classification based on an NFR standard provides a feasible way to achieve this goal. Method: In this paper, user reviews were automatically classified into four types of NFRs (reliability, usability, portability, and performance), Functional Requirements (FRs), and Others. We combined four classification techniques BoW, TF-IDF, CHI2, and AUR-BoW (proposed in this work) with three machine learning algorithms Naive Bayes, J48, and Bagging to classify user reviews. We conducted experiments to compare the F-measures of the classification results through all the combinations of the techniques and algorithms. Results: We found that the combination of AUR-BoW with Bagging achieves the best result (a precision of 71.4%, a recall of 72.3%, and an F-measure of 71.8%) among all the combinations. 
Conclusion: Our finding shows that augmented user reviews can lead to better classification results, and the machine learning algorithm Bagging is more suitable for NFRs classification from user reviews than Na\u00efve Bayes and J48.", "which Machine learning algorithms ?", "Naive bayes", 1033.0, 1044.0], ["Twitter enables large populations of end-users of software to publicly share their experiences and concerns about software systems in the form of micro-blogs. Such data can be collected and classified to help software developers infer users' needs, detect bugs in their code, and plan for future releases of their systems. However, automatically capturing, classifying, and presenting useful tweets is not a trivial task. Challenges stem from the scale of the data available, its unique format, diverse nature, and high percentage of irrelevant information and spam. Motivated by these challenges, this paper reports on a three-fold study that is aimed at leveraging Twitter as a main source of software user requirements. The main objective is to enable a responsive, interactive, and adaptive data-driven requirements engineering process. Our analysis is conducted using 4,000 tweets collected from the Twitter feeds of 10 software systems sampled from a broad range of application domains. The results reveal that around 50% of collected tweets contain useful technical information. The results also show that text classifiers such as Support Vector Machines and Naive Bayes can be very effective in capturing and categorizing technically informative tweets. Additionally, the paper describes and evaluates multiple summarization strategies for generating meaningful summaries of informative software-relevant tweets.", "which Machine learning algorithms ?", "Support vector machines", 1138.0, 1161.0], ["Alt text (short for \"alternative text\") is descriptive text associated with an image in HTML and other document formats. Screen reader technologies speak the alt text aloud to people who are visually impaired. Introduced with HTML 2.0 in 1995, the alt attribute has not evolved despite significant changes in technology over the past two decades. In light of the expanding volume, purpose, and importance of digital imagery, we reflect on how alt text could be supplemented to offer a richer experience of visual content to screen reader users. Our contributions include articulating the design space of representations of visual content for screen reader users, prototypes illustrating several points within this design space, and evaluations of several of these new image representations with people who are blind. We close by discussing the implications of our taxonomy, prototypes, and user study findings.", "which User recommendation ?", "Design space", 588.0, 600.0], ["There is an urgent need for vaccines to counter the COVID-19 pandemic due to infections with severe acute respiratory syndrome coronavirus (SARS-CoV-2). Evidence from convalescent sera and preclinical studies has identified the viral Spike (S) protein as a key antigenic target for protective immune responses. We have applied an mRNA-based technology platform, RNActive, to develop CVnCoV which contains sequence optimized mRNA coding for a stabilized form of S protein encapsulated in lipid nanoparticles (LNP). Following demonstration of protective immune responses against SARS-CoV-2 in animal models we performed a dose-escalation phase 1 study in healthy 18-60 year-old volunteers. 
This interim analysis shows that two doses of CVnCoV ranging from 2 \u03bcg to 12 \u03bcg per dose, administered 28 days apart were safe. No vaccine-related serious adverse events were reported. There were dose-dependent increases in frequency and severity of solicited systemic adverse events, and to a lesser extent of local reactions, but the majority were mild or moderate and transient in duration. Immune responses when measured as IgG antibodies against S protein or its receptor-binding domain (RBD) by ELISA, and SARS-CoV-2-virus neutralizing antibodies measured by micro-neutralization, displayed dose-dependent increases. Median titers measured in these assays two weeks after the second 12 \u03bcg dose were comparable to the median titers observed in convalescent sera from COVID-19 patients. Seroconversion (defined as a 4-fold increase over baseline titer) of virus neutralizing antibodies two weeks after the second vaccination occurred in all participants who received 12 \u03bcg doses. Preliminary results in the subset of subjects who were enrolled with known SARS-CoV-2 seropositivity at baseline show that CVnCoV is also safe and well tolerated in this population, and is able to boost the pre-existing immune response even at low dose levels. Based on these results, the 12 \u03bcg dose is selected for further clinical investigation, including a phase 2b/3 study that will investigate the efficacy, safety, and immunogenicity of the candidate vaccine CVnCoV.", "which Delivery Vehicle ?", "Lipid nanoparticles", 487.0, 506.0], ["This article aims to identify whether different weighted PageRank algorithms can be applied to author citation networks to measure the popularity and prestige of a scholar from a citation perspective. Information retrieval (IR) was selected as a test field and data from 1956\u20132008 were collected from Web of Science. Weighted PageRank with citation and publication as weighted vectors were calculated on author citation networks. The results indicate that both popularity rank and prestige rank were highly correlated with the weighted PageRank. Principal component analysis was conducted to detect relationships among these different measures. For capturing prize winners within the IR field, prestige rank outperformed all the other measures. \u00a9 2011 Wiley Periodicals, Inc.", "which Scientific network(s) ?", "Author Citation", 95.0, 110.0], ["BioPortal is a repository of biomedical ontologies-the largest such repository, with more than 300 ontologies to date. This set includes ontologies that were developed in OWL, OBO and other formats, as well as a large number of medical terminologies that the US National Library of Medicine distributes in its own proprietary format. We have published the RDF version of all these ontologies at http://sparql.bioontology.org. This dataset contains 190M triples, representing both metadata and content for the 300 ontologies. We use the metadata that the ontology authors provide and simple RDFS reasoning in order to provide dataset users with uniform access to key properties of the ontologies, such as lexical properties for the class names and provenance data. The dataset also contains 9.8M cross-ontology mappings of different types, generated both manually and automatically, which come with their own metadata.", "which Application field ?", "Biomedical ontologies", 29.0, 50.0], ["INTRODUCTION: The 2019 coronavirus disease (COVID-19) is a major global health concern. 
Joint efforts for effective surveillance of COVID-19 require immediate transmission of reliable data. In this regard, a standardized and interoperable reporting framework is essential in a consistent and timely manner. Thus, this research aimed at to determine data requirements towards interoperability. MATERIALS AND METHODS: In this cross-sectional and descriptive study, a combination of literature study and expert consensus approach was used to design COVID-19 Minimum Data Set (MDS). A MDS checklist was extracted and validated. The definitive data elements of the MDS were determined by applying the Delphi technique. Then, the existing messaging and data standard templates (Health Level Seven-Clinical Document Architecture [HL7-CDA] and SNOMED-CT) were used to design the surveillance interoperable framework. RESULTS: The proposed MDS was divided into administrative and clinical sections with three and eight data classes and 29 and 40 data fields, respectively. Then, for each data field, structured data values along with SNOMED-CT codes were defined and structured according HL7-CDA standard. DISCUSSION AND CONCLUSION: The absence of effective and integrated system for COVID-19 surveillance can delay critical public health measures, leading to increased disease prevalence and mortality. The heterogeneity of reporting templates and lack of uniform data sets hamper the optimal information exchange among multiple systems. Thus, developing a unified and interoperable reporting framework is more effective to prompt reaction to the COVID-19 outbreak.", "which Epidemiological surveillance system purpose ?", "COVID-19 surveillance", 1275.0, 1296.0], ["An increasing trend toward product development in a collaborative environment has resulted in the use of various software tools to enhance the product design. This requires a meaningful representation and exchange of product data semantics across different application domains. This paper proposes an ontology-based framework to enable such semantic interoperability. A standards-based approach is used to develop a Product Semantic Representation Language (PSRL). Formal description logic (DAML+OIL) is used to encode the PSRL. Mathematical logic and corresponding reasoning is used to determine semantic equivalences between an application ontology and the PSRL. The semantic equivalence matrix enables resolution of ambiguities created due to differences in syntaxes and meanings associated with terminologies in different application domains. Successful semantic interoperability will form the basis of seamless communication and thereby enable better integration of product development systems. Note to Practitioners-Semantic interoperability of product information refers to automating the exchange of meaning associated with the data, among information resources throughout the product development. This research is motivated by the problems in enabling such semantic interoperability. First, product information is formalized into an explicit, extensible, and comprehensive product semantics representation language (PSRL). The PSRL is open and based on standard W3C constructs. Next, in order to enable semantic translation, the paper describes a procedure to semi-automatically determine mappings between exactly equivalent concepts across representations of the interacting applications. The paper demonstrates that this approach to translation is feasible, but it has not yet been implemented commercially. 
Current limitations and the directions for further research are discussed. Future research addresses the determination of semantic similarities (not exact equivalences) between the interacting information resources.", "which hasRepresentationMasterData ?", "Description Logic", 472.0, 489.0], ["In this paper, we present a novel approach to estimate the relative depth of regions in monocular images. There are several contributions. First, the task of monocular depth estimation is considered as a learning-to-rank problem which offers several advantages compared to regression approaches. Second, monocular depth clues of human perception are modeled in a systematic manner. Third, we show that these depth clues can be modeled and integrated appropriately in a Rankboost framework. For this purpose, a space-efficient version of Rankboost is derived that makes it applicable to rank a large number of objects, as posed by the given problem. Finally, the monocular depth clues are combined with results from a deep learning approach. Experimental results show that the error rate is reduced by adding the monocular features while outperforming state-of-the-art systems.", "which Has metric ?", "Error rate", 776.0, 786.0], ["Two modalities are often used to convey information in a complementary and beneficial manner, e.g., in online news, videos, educational resources, or scientific publications. The automatic understanding of semantic correlations between text and associated images as well as their interplay has a great potential for enhanced multimodal web search and recommender systems. However, automatic understanding of multimodal information is still an unsolved research problem. Recent approaches such as image captioning focus on precisely describing visual content and translating it to text, but typically address neither semantic interpretations nor the specific role or purpose of an image-text constellation. In this paper, we go beyond previous work and investigate, inspired by research in visual communication, useful semantic image-text relations for multimodal information retrieval. We derive a categorization of eight semantic image-text classes (e.g., \"illustration\" or \"anchorage\") and show how they can systematically be characterized by a set of three metrics: cross-modal mutual information, semantic correlation, and the status relation of image and text. Furthermore, we present a deep learning system to predict these classes by utilizing multimodal embeddings. To obtain a sufficiently large amount of training data, we have automatically collected and augmented data from a variety of datasets and web resources, which enables future research on this topic. Experimental results on a demanding test set demonstrate the feasibility of the approach.", "which Has metric ?", "Cross-Modal Mutual Information", 1069.0, 1099.0], ["Two modalities are often used to convey information in a complementary and beneficial manner, e.g., in online news, videos, educational resources, or scientific publications. The automatic understanding of semantic correlations between text and associated images as well as their interplay has a great potential for enhanced multimodal web search and recommender systems. However, automatic understanding of multimodal information is still an unsolved research problem. 
Recent approaches such as image captioning focus on precisely describing visual content and translating it to text, but typically address neither semantic interpretations nor the specific role or purpose of an image-text constellation. In this paper, we go beyond previous work and investigate, inspired by research in visual communication, useful semantic image-text relations for multimodal information retrieval. We derive a categorization of eight semantic image-text classes (e.g., \"illustration\" or \"anchorage\") and show how they can systematically be characterized by a set of three metrics: cross-modal mutual information, semantic correlation, and the status relation of image and text. Furthermore, we present a deep learning system to predict these classes by utilizing multimodal embeddings. To obtain a sufficiently large amount of training data, we have automatically collected and augmented data from a variety of datasets and web resources, which enables future research on this topic. Experimental results on a demanding test set demonstrate the feasibility of the approach.", "which Has metric ?", "Semantic Correlation", 1101.0, 1121.0], ["The impacts of alien plants on native richness are usually assessed at small spatial scales and in locations where the alien is at high abundance. But this raises two questions: to what extent do impacts occur where alien species are at low abundance, and do local impacts translate to effects at the landscape scale? In an analysis of 47 widespread alien plant species occurring across a 1,000 km2 landscape, we examined the relationship between their local abundance and native plant species richness in 594 grassland plots. We first defined the critical abundance at which these focal alien species were associated with a decline in native \u03b1\u2010richness (plot\u2010scale species numbers), and then assessed how this local decline was translated into declines in native species \u03b3\u2010richness (landscape\u2010scale species numbers). After controlling for sampling biases and environmental gradients that might lead to spurious relationships, we found that eight out of 47 focal alien species were associated with a significant decline in native \u03b1\u2010richness as their local abundance increased. Most of these significant declines started at low to intermediate classes of abundance. For these eight species, declines in native \u03b3\u2010richness were, on average, an order of magnitude (32.0 vs. 2.2 species) greater than those found for native \u03b1\u2010richness, mostly due to spatial homogenization of native communities. The magnitude of the decrease at the landscape scale was best explained by the number of plots where an alien species was found above its critical abundance. Synthesis. Even at low abundance, alien plants may impact native plant richness at both local and landscape scales. Local impacts may result in much greater declines in native richness at larger spatial scales. Quantifying impact at the landscape scale requires consideration of not only the prevalence of an alien plant, but also its critical abundance and its effect on native community homogenization. This suggests that management approaches targeting only those locations dominated by alien plants might not mitigate impacts effectively. 
Our integrated approach will improve the ranking of alien species risks at a spatial scale appropriate for prioritizing management and designing conservation policies.", "which Has metric ?", "critical abundance", 548.0, 566.0], ["This Editorial describes the rationale, focus, scope and technology behind the newly launched, open access, innovative Food Modelling Journal (FMJ). The Journal is designed to publish those outputs of the research cycle that usually precede the publication of the research article, but have their own value and re-usability potential. Such outputs are methods, models, software and data. The Food Modelling Journal is launched by the AGINFRA+ community and is integrated with the AGINFRA+ Virtual Research Environment (VRE) to facilitate and streamline the authoring, peer review and publication of the manuscripts via the ARPHA Publishing Platform.", "which used in ?", "Food Modelling Journal", 127.0, 149.0], ["Information systems that build on sensor networks often process data produced by measuring physical properties. These data can serve in the acquisition of knowledge for real-world situations that are of interest to information services and, ultimately, to people. Such systems face a common challenge, namely the considerable gap between the data produced by measurement and the abstract terminology used to describe real-world situations. We present and discuss the architecture of a software system that utilizes sensor data, digital signal processing, machine learning, and knowledge representation and reasoning to acquire, represent, and infer knowledge about real-world situations observable by a sensor network. We demonstrate the application of the system to vehicle detection and classification by measurement of road pavement vibration. Thus, real-world situations involve vehicles and information for their type, speed, and driving direction.", "which employs ?", "Machine Learning", 555.0, 571.0], ["Multi-agent systems (MASs) have received tremendous attention from scholars in different disciplines, including computer science and civil engineering, as a means to solve complex problems by subdividing them into smaller tasks. The individual tasks are allocated to autonomous entities, known as agents. Each agent decides on a proper action to solve the task using multiple inputs, e.g., history of actions, interactions with its neighboring agents, and its goal. The MAS has found multiple applications, including modeling complex systems, smart grids, and computer networks. Despite their wide applicability, there are still a number of challenges faced by MAS, including coordination between agents, security, and task allocation. This survey provides a comprehensive discussion of all aspects of MAS, starting from definitions, features, applications, challenges, and communications to evaluation. A classification on MAS applications and challenges is provided along with references for further studies. We expect this paper to serve as an insightful and comprehensive resource on the MAS for researchers and practitioners in the area.", "which applicable in ?", "Modeling Complex Systems", 517.0, 541.0], ["Multi-agent systems (MASs) have received tremendous attention from scholars in different disciplines, including computer science and civil engineering, as a means to solve complex problems by subdividing them into smaller tasks. The individual tasks are allocated to autonomous entities, known as agents. 
Each agent decides on a proper action to solve the task using multiple inputs, e.g., history of actions, interactions with its neighboring agents, and its goal. The MAS has found multiple applications, including modeling complex systems, smart grids, and computer networks. Despite their wide applicability, there are still a number of challenges faced by MAS, including coordination between agents, security, and task allocation. This survey provides a comprehensive discussion of all aspects of MAS, starting from definitions, features, applications, challenges, and communications to evaluation. A classification on MAS applications and challenges is provided along with references for further studies. We expect this paper to serve as an insightful and comprehensive resource on the MAS for researchers and practitioners in the area.", "which applicable in ?", "Smart Grids", 543.0, 554.0], ["In this poster, we propose a capability-maturity model (CMM) for scientific data management that includes a set of process areas required for data management, grouped at three levels of organizational capability maturity. The goal is to provide a framework for comparing and improving project and organizational data management practices.", "which has research field ?", "Scientific Data Management", 65.0, 91.0], ["An increasing trend toward product development in a collaborative environment has resulted in the use of various software tools to enhance the product design. This requires a meaningful representation and exchange of product data semantics across different application domains. This paper proposes an ontology-based framework to enable such semantic interoperability. A standards-based approach is used to develop a Product Semantic Representation Language (PSRL). Formal description logic (DAML+OIL) is used to encode the PSRL. Mathematical logic and corresponding reasoning is used to determine semantic equivalences between an application ontology and the PSRL. The semantic equivalence matrix enables resolution of ambiguities created due to differences in syntaxes and meanings associated with terminologies in different application domains. Successful semantic interoperability will form the basis of seamless communication and thereby enable better integration of product development systems. Note to Practitioners-Semantic interoperability of product information refers to automating the exchange of meaning associated with the data, among information resources throughout the product development. This research is motivated by the problems in enabling such semantic interoperability. First, product information is formalized into an explicit, extensible, and comprehensive product semantics representation language (PSRL). The PSRL is open and based on standard W3C constructs. Next, in order to enable semantic translation, the paper describes a procedure to semi-automatically determine mappings between exactly equivalent concepts across representations of the interacting applications. The paper demonstrates that this approach to translation is feasible, but it has not yet been implemented commercially. Current limitations and the directions for further research are discussed. 
Future research addresses the determination of semantic similarities (not exact equivalences) between the interacting information resources.", "which hasMappingtoSource ?", "Semantic Translation", 1512.0, 1532.0], ["Recently, Semantic Web Technologies (SWT) have been introduced and adopted to address the problem of enterprise data integration (e.g., to solve the problem of terms and concepts heterogeneity within large organizations). One of the challenges of adopting SWT for enterprise data integration is to provide the means to define and validate structural constraints over Resource Description Framework (RDF) graphs. This is difficult since RDF graph axioms behave like implications instead of structural constraints. SWT researchers and practitioners have proposed several solutions to address this challenge (e.g., SPIN and Shape Expression). However, to the best of our knowledge, none of them provide an integrated solution within open source ontology editors (e.g., Prot\u00e9g\u00e9). We identified this absence of the integrated solution and developed SHACL4P, a Prot\u00e9g\u00e9 plugin for defining and validating Shapes Constraint Language (SHACL), the upcoming W3C standard for constraint validation within Prot\u00e9g\u00e9 ontology editor.", "which supported data modelling language ?", "Shapes Constraint Language (SHACL)", NaN, NaN], ["Background The Electronic Surveillance System for the Early Notification of Community-Based Epidemics (ESSENCE) is a secure web-based tool that enables health care practitioners to monitor health indicators of public health importance for the detection and tracking of disease outbreaks, consequences of severe weather, and other events of concern. The ESSENCE concept began in an internally funded project at the Johns Hopkins University Applied Physics Laboratory, advanced with funding from the State of Maryland, and broadened in 1999 as a collaboration with the Walter Reed Army Institute for Research. Versions of the system have been further developed by Johns Hopkins University Applied Physics Laboratory in multiple military and civilian programs for the timely detection and tracking of health threats. Objective This study aims to describe the components and development of a biosurveillance system increasingly coordinating all-hazards health surveillance and infectious disease monitoring among large and small health departments, to list the key features and lessons learned in the growth of this system, and to describe the range of initiatives and accomplishments of local epidemiologists using it. Methods The features of ESSENCE include spatial and temporal statistical alerting, custom querying, user-defined alert notifications, geographical mapping, remote data capture, and event communications. To expedite visualization, configurable and interactive modes of data stratification and filtering, graphical and tabular customization, user preference management, and sharing features allow users to query data and view geographic representations, time series and data details pages, and reports. These features allow ESSENCE users to gather and organize the resulting wealth of information into a coherent view of population health status and communicate findings among users. Results The resulting broad utility, applicability, and adaptability of this system led to the adoption of ESSENCE by the Centers for Disease Control and Prevention, numerous state and local health departments, and the Department of Defense, both nationally and globally. 
The open-source version of Suite for Automated Global Electronic bioSurveillance is available for global, resource-limited settings. Resourceful users of the US National Syndromic Surveillance Program ESSENCE have applied it to the surveillance of infectious diseases, severe weather and natural disaster events, mass gatherings, chronic diseases and mental health, and injury and substance abuse. Conclusions With emerging high-consequence communicable diseases and other health conditions, the continued user requirement\u2013driven enhancements of ESSENCE demonstrate an adaptable disease surveillance capability focused on the everyday needs of public health. The challenge of a live system for widely distributed users with multiple different data sources and high throughput requirements has driven a novel, evolving architecture design.", "which Statistical analysis techniques ?", "Time series", 1668.0, 1679.0], ["Scholarly information usually contains millions of raw data, such as authors, papers, citations, as well as scholarly networks. With the rapid growth of the digital publishing and harvesting, how to visually present the data efficiently becomes challenging. Nowadays, various visualization techniques can be easily applied on scholarly data visualization and visual analysis, which enables scientists to have a better way to represent the structure of scholarly data sets and reveal hidden patterns in the data. In this paper, we first introduce the basic concepts and the collection of scholarly data. Then, we provide a comprehensive overview of related data visualization tools, existing techniques, as well as systems for the analyzing volumes of diverse scholarly data. Finally, open issues are discussed to pursue new solutions for abundant and complicated scholarly data visualization, as well as techniques, that support a multitude of facets.", "which has elements ?", "Visualization tools", 661.0, 680.0], ["This article aims to identify whether different weighted PageRank algorithms can be applied to author citation networks to measure the popularity and prestige of a scholar from a citation perspective. Information retrieval (IR) was selected as a test field and data from 1956\u20132008 were collected from Web of Science. Weighted PageRank with citation and publication as weighted vectors were calculated on author citation networks. The results indicate that both popularity rank and prestige rank were highly correlated with the weighted PageRank. Principal component analysis was conducted to detect relationships among these different measures. For capturing prize winners within the IR field, prestige rank outperformed all the other measures. \u00a9 2011 Wiley Periodicals, Inc.", "which Social network analysis ?", "Weighted PageRank", 48.0, 65.0], ["Abstract Hackathons, time-bounded events where participants write computer code and build apps, have become a popular means of socializing tech students and workers to produce \u201cinnovation\u201d despite little promise of material reward. Although they offer participants opportunities for learning new skills and face-to-face networking and set up interaction rituals that create an emotional \u201chigh,\u201d potential advantage is even greater for the events\u2019 corporate sponsors, who use them to outsource work, crowdsource innovation, and enhance their reputation. 
Ethnographic observations and informal interviews at seven hackathons held in New York during the course of a single school year show how the format of the event and sponsors\u2019 discursive tropes, within a dominant cultural frame reflecting the appeal of Silicon Valley, reshape unpaid and precarious work as an extraordinary opportunity, a ritual of ecstatic labor, and a collective imaginary for fictional expectations of innovation that benefits all, a powerful strategy for manufacturing workers\u2019 consent in the \u201cnew\u201d economy.", "which Has finding ?", "appeal of Silicon Valley", 796.0, 820.0], ["ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.", "which Has finding ?", "most smart city strategies fail to incorporate bottom-up approaches", 814.0, 881.0], ["Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19\u2013related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. 
Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.", "which Has finding ?", "hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype", 1989.0, 2193.0], ["\nPurpose\nIn terms of entrepreneurship, open data benefits include economic growth, innovation, empowerment and new or improved products and services. Hackathons encourage the development of new applications using open data and the creation of startups based on these applications. Researchers focus on factors that affect nascent entrepreneurs\u2019 decision to create a startup but researches in the field of open data hackathons have not been fully investigated yet. This paper aims to suggest a model that incorporates factors that affect the decision of establishing a startup by developers who have participated in open data hackathons.\n\n\nDesign/methodology/approach\nIn total, 70 papers were examined and analyzed using a three-phased literature review methodology, which was suggested by Webster and Watson (2002). These surveys investigated several factors that affect a nascent entrepreneur to create a startup.\n\n\nFindings\nEventually, by identifying the motivations for developers to participate in a hackathon, and understanding the benefits of the use of open data, researchers will be able to elaborate the proposed model and evaluate if the contest has contributed to the decision of establish a startup and what factors affect the decision to establish a startup apply to open data developers, and if the participants of the contest agree with these factors.\n\n\nOriginality/value\nThe paper expands the scope of open data research on entrepreneurship field, stating the need for more research to be conducted regarding the open data in entrepreneurship through hackathons.\n", "which Has finding ?", "need for more research to be conducted regarding the open data in entrepreneurship through hackathons", 1848.0, 1949.0], ["Through a redox-transmetallation procedure, divalent NHC-LnII (NHC = N-heterocyclic carbene; Ln = Eu, Yb) complexes were obtained from the corresponding NHC-AgI. The lability of the NHC-LnII bond was investigated and treatment with CO2 led to insertion reactions without oxidation of the metal centre. 
The EuII complex [EuI2(IMes)(THF)3] (IMes = 1,3-dimesitylimidazol-2-ylidene) exhibits photoluminescence with a quantum yield reaching 53%.", "which Metal used ?", "Eu, Yb", 98.0, 104.0], ["Wet bulk micromachining is a popular technique for the fabrication of microstructures in research labs as well as in industry. However, increasing the throughput still remains an active area of research, and can be done by increasing the etching rate. Moreover, the release time of a freestanding structure can be reduced if the undercutting rate at convex corners can be improved. In this paper, we investigate a non-conventional etchant in the form of NH2OH added in 5 wt% tetramethylammonium hydroxide (TMAH) to determine its etching characteristics. Our analysis is focused on a Si{1 0 0} wafer as this is the most widely used in the fabrication of planer devices (e.g. complementary metal oxide semiconductors) and microelectromechanical systems (e.g. inertial sensors). We perform a systematic and parametric analysis with concentrations of NH2OH varying from 5% to 20% in step of 5%, all in 5 wt% TMAH, to obtain the optimum concentration for achieving improved etching characteristics including higher etch rate, undercutting at convex corners, and smooth etched surface morphology. Average surface roughness (R a), etch depth, and undercutting length are measured using a 3D scanning laser microscope. Surface morphology of the etched Si{1 0 0} surface is examined using a scanning electron microscope. Our investigation has revealed a two-fold increment in the etch rate of a {1 0 0} surface with the addition of NH2OH in the TMAH solution. Additionally, the incorporation of NH2OH significantly improves the etched surface morphology and the undercutting at convex corners, which is highly desirable for the quick release of microstructures from the substrate. The results presented in this paper are extremely useful for engineering applications and will open a new direction of research for scientists in both academic and industrial laboratories.", "which Primary etching solution ?", "tetramethylammonium hydroxide", 475.0, 504.0], ["Abstract Semantic embedding of knowledge graphs has been widely studied and used for prediction and statistical analysis tasks across various domains such as Natural Language Processing and the Semantic Web. However, less attention has been paid to developing robust methods for embedding OWL (Web Ontology Language) ontologies, which contain richer semantic information than plain knowledge graphs, and have been widely adopted in domains such as bioinformatics. In this paper, we propose a random walk and word embedding based ontology embedding method named , which encodes the semantics of an OWL ontology by taking into account its graph structure, lexical information and logical constructors. Our empirical evaluation with three real world datasets suggests that benefits from these three different aspects of an ontology in class membership prediction and class subsumption prediction tasks. Furthermore, often significantly outperforms the state-of-the-art methods in our experiments.", "which Has evaluation task ?", "Class subsumption prediction", 864.0, 892.0], ["Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. 
However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.", "which Has evaluation task ?", "Link prediction", 1184.0, 1199.0], ["Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.", "which Has evaluation task ?", "multi-label classification", 1153.0, 1179.0], ["The widespread of Coronavirus has led to a worldwide pandemic with a high mortality rate. Currently, the knowledge accumulated from different studies about this virus is very limited. Leveraging a wide-range of biological knowledge, such as gene on-tology and protein-protein interaction (PPI) networks from other closely related species presents a vital approach to infer the molecular impact of a new species. In this paper, we propose the transferred multi-relational embedding model Bio-JOIE to capture the knowledge of gene ontology and PPI networks, which demonstrates superb capability in modeling the SARS-CoV-2-human protein interactions. Bio-JOIE jointly trains two model components. The knowledge model encodes the relational facts from the protein and GO domains into separated embedding spaces, using a hierarchy-aware encoding technique employed for the GO terms. 
On top of that, the transfer model learns a non-linear transformation to transfer the knowledge of PPIs and gene ontology annotations across their embedding spaces. By leveraging only structured knowledge, Bio-JOIE significantly outperforms existing state-of-the-art methods in PPI type prediction on multiple species. Furthermore, we also demonstrate the potential of leveraging the learned representations on clustering proteins with enzymatic function into enzyme commission families. Finally, we show that Bio-JOIE can accurately identify PPIs between the SARS-CoV-2 proteins and human proteins, providing valuable insights for advancing research on this new disease.", "which Has evaluation task ?", "PPI type prediction", 1156.0, 1175.0], ["Heat is of fundamental importance in many cellular processes such as cell metabolism, cell division and gene expression. (1-3) Accurate and noninvasive monitoring of temperature changes in individual cells could thus help clarify intricate cellular processes and develop new applications in biology and medicine. Here we report the use of green fluorescent proteins (GFP) as thermal nanoprobes suited for intracellular temperature mapping. Temperature probing is achieved by monitoring the fluorescence polarization anisotropy of GFP. The method is tested on GFP-transfected HeLa and U-87 MG cancer cell lines where we monitored the heat delivery by photothermal heating of gold nanorods surrounding the cells. A spatial resolution of 300 nm and a temperature accuracy of about 0.4 \u00b0C are achieved. Benefiting from its full compatibility with widely used GFP-transfected cells, this approach provides a noninvasive tool for fundamental and applied research in areas ranging from molecular biology to therapeutic and diagnostic studies.", "which Readout ?", "Polarization anisotropy", 503.0, 526.0], ["Cryogenic temperature detection plays an irreplaceable role in exploring nature. Developing high sensitivity, accurate, observable and convenient measurements of cryogenic temperature is not only a challenge but also an opportunity for the thermometer field. The small molecule 9-(9,9-dimethyl-9H-fluoren-3yl)-14-phenyl-9,14-dihydrodibenzo[a,c]phenazine (FIPAC) in 2-methyl-tetrahydrofuran (MeTHF) solution is utilized for the detection of cryogenic temperature with a wide range from 138 K to 343 K. This system possesses significantly high sensitivity at low temperature, which reaches as high as 19.4% K(-1) at 138 K. The temperature-dependent ratio of the dual emission intensity can be fitted as a single-exponential curve as a function of temperature. This single-exponential curve can be explained by the mechanism that the dual emission feature of FIPAC results from the excited-state configuration transformations upon heating or cooling, which is very different from the previously reported mechanisms. Here, our work gives an overall interpretation for this mechanism. Therefore, application of FIPAC as a cryogenic thermometer is experimentally and theoretically feasible.", "which Readout ?", "Emission intensity", 665.0, 683.0], ["The development of simple fluorescent and colorimetric assays that enable point-of-care DNA and RNA detection has been a topic of significant research because of the utility of such assays in resource limited settings. The most common motifs utilize hybridization to a complementary detection strand coupled with a sensitive reporter molecule. 
Here, a paper-based colorimetric assay for DNA detection based on pyrrolidinyl peptide nucleic acid (acpcPNA)-induced nanoparticle aggregation is reported as an alternative to traditional colorimetric approaches. PNA probes are an attractive alternative to DNA and RNA probes because they are chemically and biologically stable, easily synthesized, and hybridize efficiently with the complementary DNA strands. The acpcPNA probe contains a single positive charge from the lysine at C-terminus and causes aggregation of citrate anion-stabilized silver nanoparticles (AgNPs) in the absence of complementary DNA. In the presence of target DNA, formation of the anionic DNA-acpcPNA duplex results in dispersion of the AgNPs as a result of electrostatic repulsion, giving rise to a detectable color change. Factors affecting the sensitivity and selectivity of this assay were investigated, including ionic strength, AgNP concentration, PNA concentration, and DNA strand mismatches. The method was used for screening of synthetic Middle East respiratory syndrome coronavirus (MERS-CoV), Mycobacterium tuberculosis (MTB), and human papillomavirus (HPV) DNA based on a colorimetric paper-based analytical device developed using the aforementioned principle. The oligonucleotide targets were detected by measuring the color change of AgNPs, giving detection limits of 1.53 (MERS-CoV), 1.27 (MTB), and 1.03 nM (HPV). The acpcPNA probe exhibited high selectivity for the complementary oligonucleotides over single-base-mismatch, two-base-mismatch, and noncomplementary DNA targets. The proposed paper-based colorimetric DNA sensor has potential to be an alternative approach for simple, rapid, sensitive, and selective DNA detection.", "which Mechanism of Antiviral Action ?", "color change", 1132.0, 1144.0], ["Worldwide outbreaks of infectious diseases necessitate the development of rapid and accurate diagnostic methods. Colorimetric assays are a representative tool to simply identify the target molecules in specimens through color changes of an indicator (e.g., nanosized metallic particle, and dye molecules). The detection method is used to confirm the presence of biomarkers visually and measure absorbance of the colored compounds at a specific wavelength. In this study, we propose a colorimetric assay based on an extended form of double-stranded DNA (dsDNA) self-assembly shielded gold nanoparticles (AuNPs) under positive electrolyte (e.g., 0.1 M MgCl2) for detection of Middle East respiratory syndrome coronavirus (MERS-CoV). This platform is able to verify the existence of viral molecules through a localized surface plasmon resonance (LSPR) shift and color changes of AuNPs in the UV\u2013vis wavelength range. We designed a pair of thiol-modified probes at either the 5\u2032 end or 3\u2032 end to organize complementary base pairs with upstream of the E protein gene (upE) and open reading frames (ORF) 1a on MERS-CoV. The dsDNA of the target and probes forms a disulfide-induced long self-assembled complex, which protects AuNPs from salt-induced aggregation and transition of optical properties. 
This colorimetric assay could discriminate down to 1 pmol/\u03bcL of 30 bp MERS-CoV and further be adapted for convenient on-site detection of other infectious diseases, especially in resource-limited settings.", "which Mechanism of Antiviral Action ?", "Color changes of AuNPs", 859.0, 881.0], ["Middle East respiratory syndrome (MERS) coronavirus (MERS-CoV), an infectious coronavirus first reported in 2012, has a mortality rate greater than 35%. Therapeutic antibodies are key tools for preventing and treating MERS-CoV infection, but to date no such agents have been approved for treatment of this virus. Nanobodies (Nbs) are camelid heavy chain variable domains with properties distinct from those of conventional antibodies and antibody fragments. We generated two oligomeric Nbs by linking two or three monomeric Nbs (Mono-Nbs) targeting the MERS-CoV receptor-binding domain (RBD), and compared their RBD-binding affinity, RBD\u2013receptor binding inhibition, stability, and neutralizing and cross-neutralizing activity against MERS-CoV. Relative to Mono-Nb, dimeric Nb (Di-Nb) and trimeric Nb (Tri-Nb) had significantly greater ability to bind MERS-CoV RBD proteins with or without mutations in the RBD, thereby potently blocking RBD\u2013MERS-CoV receptor binding. The engineered oligomeric Nbs were very stable under extreme conditions, including low or high pH, protease (pepsin), chaotropic denaturant (urea), and high temperature. Importantly, Di-Nb and Tri-Nb exerted significantly elevated broad-spectrum neutralizing activity against at least 19 human and camel MERS-CoV strains isolated in different countries and years. Overall, the engineered Nbs could be developed into effective therapeutic agents for prevention and treatment of MERS-CoV infection.", "which Mechanism of Antiviral Action ?", "RBD\u2013receptor binding inhibition", 634.0, 665.0], ["ABSTRACT Camelid heavy-chain variable domains (VHHs) are the smallest, intact, antigen-binding units to occur in nature. VHHs possess high degrees of solubility and robustness enabling generation of multivalent constructs with increased avidity \u2013 characteristics that mark their superiority to other antibody fragments and monoclonal antibodies. Capable of effectively binding to molecular targets inaccessible to classical immunotherapeutic agents and easily produced in microbial culture, VHHs are considered promising tools for pharmaceutical biotechnology. With the aim to demonstrate the perspective and potential of VHHs for the development of prophylactic and therapeutic drugs to target diseases caused by bacterial and viral infections, this review article will initially describe the structural features that underlie the unique properties of VHHs and explain the methods currently used for the selection and recombinant production of pathogen-specific VHHs, and then thoroughly summarize the experimental findings of five distinct studies that employed VHHs as inhibitors of host\u2013pathogen interactions or neutralizers of infectious agents. Past and recent studies suggest the potential of camelid heavy-chain variable domains as a novel modality of immunotherapeutic drugs and a promising alternative to monoclonal antibodies. VHHs demonstrate the ability to interfere with bacterial pathogenesis by preventing adhesion to host tissue and sequestering disease-causing bacterial toxins. To protect from viral infections, VHHs may be employed as inhibitors of viral entry by binding to viral coat proteins or blocking interactions with cell-surface receptors. 
The implementation of VHHs as immunotherapeutic agents for infectious diseases is of considerable potential and set to contribute to public health in the near future.", "which Mechanism of Antiviral Action ?", "Binding to viral coat proteins or blocking interactions with cell-surface receptors", 1584.0, 1667.0], ["Virus like particles (VLPs) produced by the expression of viral structural proteins can serve as versatile nanovectors or potential vaccine candidates. In this study we describe for the first time the generation of HCoV-NL63 VLPs using baculovirus system. Major structural proteins of HCoV-NL63 have been expressed in tagged or native form, and their assembly to form VLPs was evaluated. Additionally, a novel procedure for chromatography purification of HCoV-NL63 VLPs was developed. Interestingly, we show that these nanoparticles may deliver cargo and selectively transduce cells expressing the ACE2 protein such as ciliated cells of the respiratory tract. Production of a specific delivery vector is a major challenge for research concerning targeting molecules. The obtained results show that HCoV-NL63 VLPs may be efficiently produced, purified, modified and serve as a delivery platform. This study constitutes an important basis for further development of a promising viral vector displaying narrow tissue tropism.", "which Mechanism of Antiviral Action ?", "Selectively transduce cells expressing the ACE2 protein", 555.0, 610.0], ["Infectious bronchitis virus (IBV) affects poultry respiratory, renal and reproductive systems. Currently the efficacy of available live attenuated or killed vaccines against IBV has been challenged. We designed a novel IBV vaccine alternative using a highly innovative platform called Self-Assembling Protein Nanoparticle (SAPN). In this vaccine, B cell epitopes derived from the second heptad repeat (HR2) region of IBV spike proteins were repetitively presented in its native trimeric conformation. In addition, flagellin was co-displayed in the SAPN to achieve a self-adjuvanted effect. Three groups of chickens were immunized at four weeks of age with the vaccine prototype, IBV-Flagellin-SAPN, a negative-control construct Flagellin-SAPN or a buffer control. The immunized chickens were challenged with 5x10(4.7) EID50 IBV M41 strain. High antibody responses were detected in chickens immunized with IBV-Flagellin-SAPN. In ex vivo proliferation tests, peripheral mononuclear cells (PBMCs) derived from IBV-Flagellin-SAPN immunized chickens had a significantly higher stimulation index than that of PBMCs from chickens receiving Flagellin-SAPN. Chickens immunized with IBV-Flagellin-SAPN had a significant reduction of tracheal virus shedding and lesser tracheal lesion scores than did negative control chickens. The data demonstrated that the IBV-Flagellin-SAPN holds promise as a vaccine for IBV.", "which Mechanism of Antiviral Action ?", "The second heptad repeat (HR2) region of IBV spike proteins", NaN, NaN], ["Rationale: Anti-tumor necrosis factor (TNF) therapy is a very effective way to treat inflammatory bowel disease. However, systemic exposure to anti-TNF-\u03b1 antibodies through current clinical systemic administration can cause serious adverse effects in many patients. Here, we report a facile prepared self-assembled supramolecular nanoparticle based on natural polyphenol tannic acid and poly(ethylene glycol) containing polymer for oral antibody delivery. 
Method: This supramolecular nanoparticle was fabricated within minutes in aqueous solution and easily scaled up to gram level due to their pH-dependent reversible assembly. DSS-induced colitis model was prepared to evaluate the ability of inflammatory colon targeting ability and therapeutic efficacy of this antibody-loaded nanoparticles. Results: This polyphenol-based nanoparticle can be aqueous assembly without organic solvent and thus scaled up easily. The oral administration of antibody loaded nanoparticle achieved high accumulation in the inflamed colon and low systemic exposure. The novel formulation of anti-TNF-\u03b1 antibodies administrated orally achieved high efficacy in the treatment of colitis mice compared with free antibodies administered orally. The average weight, colon length, and inflammatory factors in colon and serum of colitis mice after the treatment of novel formulation of anti-TNF-\u03b1 antibodies even reached the similar level to healthy controls. Conclusion: This polyphenol-based supramolecular nanoparticle is a promising platform for oral delivery of antibodies for the treatment of inflammatory bowel diseases, which may have promising clinical translation prospects.", "which Colitis model ?", "DSS-induced colitis", 629.0, 648.0], ["Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179\u00b122 nm were produced. Res was loaded with high EE of 73\u00b10.9% and DL content of 6.2\u00b10.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.", "which Cytotoxicity assay ?", "MTT assay", 1165.0, 1174.0], ["Breast cancer is a major form of cancer, with a high mortality rate in women. It is crucial to achieve more efficient and safe anticancer drugs. 
Recent developments in medical nanotechnology have resulted in novel advances in cancer drug delivery. Cisplatin, doxorubicin, and 5-fluorouracil are three important anti-cancer drugs which have poor water-solubility. In this study, we used cisplatin, doxorubicin, and 5-fluorouracil-loaded polycaprolactone-polyethylene glycol (PCL-PEG) nanoparticles to improve the stability and solubility of molecules in drug delivery systems. The nanoparticles were prepared by a double emulsion method and characterized with Fourier Transform Infrared (FTIR) spectroscopy and Hydrogen-1 nuclear magnetic resonance (1HNMR). Cells were treated with equal concentrations of cisplatin, doxorubicin and 5-fluorouracil-loaded PCL-PEG nanoparticles, and free cisplatin, doxorubicin and 5-fluorouracil. The 3-[4,5-dimethylthiazol-2yl]-2,5-diphenyl tetrazolium bromide (MTT) assay confirmed that cisplatin, doxorubicin, and 5-fluorouracil-loaded PCL-PEG nanoparticles enhanced cytotoxicity and drug delivery in T47D and MCF7 breast cancer cells. However, the IC50 value of doxorubicin was lower than the IC50 values of both cisplatin and 5-fluorouracil, where the difference was statistically considered significant (p<0.05). However, the IC50 value of all drugs on T47D were lower than those on MCF7.", "which Polymer charachterisation ?", "nuclear magnetic resonance", 721.0, 747.0], ["Drug delivery has become an important strategy for improving the chemotherapy efficiency. Here we developed a multifunctionalized nanosized albumin-based drug-delivery system with tumor-targeting, cell-penetrating, and endolysosomal pH-responsive properties. cRGD-BSA/KALA/DOX nanoparticles were fabricated by self-assembly through electrostatic interaction between cell-penetrating peptide KALA and cRGD-BSA, with cRGD as a tumor-targeting ligand. Under endosomal/lysosomal acidic conditions, the changes in the electric charges of cRGD-BSA and KALA led to the disassembly of the nanoparticles to accelerate intracellular drug release. cRGD-BSA/KALA/DOX nanoparticles showed an enhanced inhibitory effect in the growth of \u03b1v\u03b23-integrin-overexpressed tumor cells, indicating promising application in cancer treatments.", "which Nanoparticles preparation method ?", "Self-assembly through electrostatic interaction", 310.0, 357.0], ["Background: Paclitaxel (PTX) is one of the most important and effective anticancer drugs for the treatment of human cancer. However, its low solubility and severe adverse effects limited clinical use. To overcome this limitation, nanotechnology has been used to overcome tumors due to its excellent antimicrobial activity. Objective: This study was to demonstrate the anticancer properties of functionalization silver nanoparticles loaded with paclitaxel (Ag@PTX) induced A549 cells apoptosis through ROS-mediated signaling pathways. Methods: The Ag@PTX nanoparticles were charged with a zeta potential of about -17 mV and characterized around 2 nm with a narrow size distribution. Results: Ag@PTX significantly decreased the viability of A549 cells and possessed selectivity between cancer and normal cells. Ag@PTX induced A549 cells apoptosis was confirmed by nuclear condensation, DNA fragmentation, and activation of caspase-3. Furthermore, Ag@PTX enhanced the anti-cancer activity of A549 cells through ROS-mediated p53 and AKT signalling pathways. Finally, in a xenograft nude mice model, Ag@PTX suppressed the growth of tumors. 
Conclusion: Our findings suggest that Ag@PTX may be a candidate as a chemopreventive agent and could be a highly efficient way to achieve anticancer synergism for human cancers.", "which Type of inorganic nanoparticles ?", "Silver nanoparticles", 411.0, 431.0], ["Microorganisms are well adapted to their habitat but are partially sensitive to toxic metabolites or abiotic compounds secreted by other organisms or chemically formed under the respective environmental conditions. Thermoacidophiles are challenged by pyroglutamate, a lactam that is spontaneously formed by cyclization of glutamate under aerobic thermoacidophilic conditions. It is known that growth of the thermoacidophilic crenarchaeon Saccharolobus solfataricus (formerly Sulfolobus solfataricus) is completely inhibited by pyroglutamate. In the present study, we investigated the effect of pyroglutamate on the growth of S. solfataricus and the closely related crenarchaeon Sulfolobus acidocaldarius. In contrast to S. solfataricus, S. acidocaldarius was successfully cultivated with pyroglutamate as a sole carbon source. Bioinformatical analyses showed that both members of the Sulfolobaceae have at least one candidate for a 5-oxoprolinase, which catalyses the ATP-dependent conversion of pyroglutamate to glutamate. In S. solfataricus, we observed the intracellular accumulation of pyroglutamate and crude cell extract assays showed a less effective degradation of pyroglutamate. Apparently, S. acidocaldarius seems to be less versatile regarding carbohydrates and prefers peptidolytic growth compared to S. solfataricus. Concludingly, S. acidocaldarius exhibits a more efficient utilization of pyroglutamate and is not inhibited by this compound, making it a better candidate for applications with glutamate-containing media at high temperatures.", "which Organism ?", "Saccharolobus solfataricus", 438.0, 464.0], ["Microorganisms are well adapted to their habitat but are partially sensitive to toxic metabolites or abiotic compounds secreted by other organisms or chemically formed under the respective environmental conditions. Thermoacidophiles are challenged by pyroglutamate, a lactam that is spontaneously formed by cyclization of glutamate under aerobic thermoacidophilic conditions. It is known that growth of the thermoacidophilic crenarchaeon Saccharolobus solfataricus (formerly Sulfolobus solfataricus) is completely inhibited by pyroglutamate. In the present study, we investigated the effect of pyroglutamate on the growth of S. solfataricus and the closely related crenarchaeon Sulfolobus acidocaldarius. In contrast to S. solfataricus, S. acidocaldarius was successfully cultivated with pyroglutamate as a sole carbon source. Bioinformatical analyses showed that both members of the Sulfolobaceae have at least one candidate for a 5-oxoprolinase, which catalyses the ATP-dependent conversion of pyroglutamate to glutamate. In S. solfataricus, we observed the intracellular accumulation of pyroglutamate and crude cell extract assays showed a less effective degradation of pyroglutamate. Apparently, S. acidocaldarius seems to be less versatile regarding carbohydrates and prefers peptidolytic growth compared to S. solfataricus. Concludingly, S. 
acidocaldarius exhibits a more efficient utilization of pyroglutamate and is not inhibited by this compound, making it a better candidate for applications with glutamate-containing media at high temperatures.", "which Organism ?", "Sulfolobus acidocaldarius", 678.0, 703.0], ["SnO2 nanowire gas sensors have been fabricated on Cd\u2212Au comb-shaped interdigitating electrodes using thermal evaporation of the mixed powders of SnO2 and active carbon. The self-assembly grown sensors have excellent performance in sensor response to hydrogen concentration in the range of 10 to 1000 ppm. This high response is attributed to the large portion of undercoordinated atoms on the surface of the SnO2 nanowires. The influence of the Debye length of the nanowires and the gap between electrodes in the gas sensor response is examined and discussed.", "which Sensing element nanomaterial ?", "SnO2 nanowires", 407.0, 421.0], ["The BioCreative NLM-Chem track calls for a community effort to fine-tune automated recognition of chemical names in biomedical literature. Chemical names are one of the most searched biomedical entities in PubMed and \u2013 as highlighted during the COVID-19 pandemic \u2013 their identification may significantly advance research in multiple biomedical subfields. While previous community challenges focused on identifying chemical names mentioned in titles and abstracts, the full text contains valuable additional detail. We organized the BioCreative NLM-Chem track to call for a community effort to address automated chemical entity recognition in full-text articles. The track consisted of two tasks: 1) Chemical Identification task, and 2) Chemical Indexing prediction task. For the Chemical Identification task, participants were expected to predict with high accuracy all chemicals mentioned in recently published full-text articles, both span (i.e., named entity recognition) and normalization (i.e., entity linking) using MeSH. For the Chemical Indexing task, participants identified which chemicals should be indexed as topics for the article's topic terms in the NLM article and indexing, i.e., appear in the listing of MeSH terms for the document. This manuscript summarizes the BioCreative NLM-Chem track. We received a total of 88 submissions in total from 17 teams worldwide. The highest performance achieved for the Chemical Identification task was 0.8672 f-score (0.8759 precision, 0.8587 recall) for strict NER performance and 0.8136 f-score (0.8621 precision, 0.7702 recall) for strict normalization performance. The highest performance achieved for the Chemical Indexing task was 0.4825 f-score (0.4397 precision, 0.5344 recall). The NLM-Chem track dataset and other challenge materials are publicly available at https://ftp.ncbi.nlm.nih.gov/pub/lu/BC7-NLM-Chem-track/. This community challenge demonstrated 1) the current substantial achievements in deep learning technologies can be utilized to further improve automated prediction accuracy, and 2) the Chemical Indexing task is substantially more challenging. We look forward to further development of biomedical text mining methods to respond to the rapid growth of biomedical literature. 
", "which subtasks ?", "Chemical Indexing prediction task", 736.0, 769.0], ["Abstract The Precision Medicine Initiative is a multicenter effort aiming at formulating personalized treatments leveraging on individual patient data (clinical, genome sequence and functional genomic data) together with the information in large knowledge bases (KBs) that integrate genome annotation, disease association studies, electronic health records and other data types. The biomedical literature provides a rich foundation for populating these KBs, reporting genetic and molecular interactions that provide the scaffold for the cellular regulatory systems and detailing the influence of genetic variants in these interactions. The goal of BioCreative VI Precision Medicine Track was to extract this particular type of information and was organized in two tasks: (i) document triage task, focused on identifying scientific literature containing experimentally verified protein\u2013protein interactions (PPIs) affected by genetic mutations and (ii) relation extraction task, focused on extracting the affected interactions (protein pairs). To assist system developers and task participants, a large-scale corpus of PubMed documents was manually annotated for this task. Ten teams worldwide contributed 22 distinct text-mining models for the document triage task, and six teams worldwide contributed 14 different text-mining systems for the relation extraction task. When comparing the text-mining system predictions with human annotations, for the triage task, the best F-score was 69.06%, the best precision was 62.89%, the best recall was 98.0% and the best average precision was 72.5%. For the relation extraction task, when taking homologous genes into account, the best F-score was 37.73%, the best precision was 46.5% and the best recall was 54.1%. Submitted systems explored a wide range of methods, from traditional rule-based, statistical and machine learning systems to state-of-the-art deep learning methods. Given the level of participation and the individual team results we find the precision medicine track to be successful in engaging the text-mining research community. In the meantime, the track produced a manually annotated corpus of 5509 PubMed documents developed by BioGRID curators and relevant for precision medicine. The data set is freely available to the community, and the specific interactions have been integrated into the BioGRID data set. In addition, this challenge provided the first results of automatically identifying PubMed articles that describe PPI affected by mutations, as well as extracting the affected relations from those articles. Still, much progress is needed for computer-assisted precision medicine text mining to become mainstream. 
Future work should focus on addressing the remaining technical challenges and incorporating the practical benefits of text-mining tools into real-world precision medicine information-related curation.", "which subtasks ?", "document triage task", 775.0, 795.0], ["Abstract The Precision Medicine Initiative is a multicenter effort aiming at formulating personalized treatments leveraging on individual patient data (clinical, genome sequence and functional genomic data) together with the information in large knowledge bases (KBs) that integrate genome annotation, disease association studies, electronic health records and other data types. The biomedical literature provides a rich foundation for populating these KBs, reporting genetic and molecular interactions that provide the scaffold for the cellular regulatory systems and detailing the influence of genetic variants in these interactions. The goal of BioCreative VI Precision Medicine Track was to extract this particular type of information and was organized in two tasks: (i) document triage task, focused on identifying scientific literature containing experimentally verified protein\u2013protein interactions (PPIs) affected by genetic mutations and (ii) relation extraction task, focused on extracting the affected interactions (protein pairs). To assist system developers and task participants, a large-scale corpus of PubMed documents was manually annotated for this task. Ten teams worldwide contributed 22 distinct text-mining models for the document triage task, and six teams worldwide contributed 14 different text-mining systems for the relation extraction task. When comparing the text-mining system predictions with human annotations, for the triage task, the best F-score was 69.06%, the best precision was 62.89%, the best recall was 98.0% and the best average precision was 72.5%. For the relation extraction task, when taking homologous genes into account, the best F-score was 37.73%, the best precision was 46.5% and the best recall was 54.1%. Submitted systems explored a wide range of methods, from traditional rule-based, statistical and machine learning systems to state-of-the-art deep learning methods. Given the level of participation and the individual team results we find the precision medicine track to be successful in engaging the text-mining research community. In the meantime, the track produced a manually annotated corpus of 5509 PubMed documents developed by BioGRID curators and relevant for precision medicine. The data set is freely available to the community, and the specific interactions have been integrated into the BioGRID data set. In addition, this challenge provided the first results of automatically identifying PubMed articles that describe PPI affected by mutations, as well as extracting the affected relations from those articles. Still, much progress is needed for computer-assisted precision medicine text mining to become mainstream. Future work should focus on addressing the remaining technical challenges and incorporating the practical benefits of text-mining tools into real-world precision medicine information-related curation.", "which subtasks ?", "relation extraction task", 952.0, 976.0], ["We describe the design and use of the Stanford CoreNLP toolkit, an extensible pipeline that provides core natural language analysis. This toolkit is quite widely used, both in the research NLP community and also among commercial and government users of open source NLP technology. 
We suggest that this follows from a simple, approachable design, straightforward interfaces, the inclusion of robust and good quality analysis components, and not requiring use of a large amount of associated baggage.", "which Tool ?", "Stanford CoreNLP", 38.0, 54.0], ["Water scarcity in mountain regions such as the Himalaya has been studied with a pre-existing notion of scarcity justified by decades of communities' suffering from physical water shortages combined by difficulties of access. The Eastern Himalayan Region (EHR) of India receives significantly high amounts of annual precipitation. Studies have nonetheless shown that this region faces a strange dissonance: an acute water scarcity in a supposedly \u2018water-rich\u2019 region. The main objective of this paper is to decipher various drivers of water scarcity by locating the contemporary history of water institutions within the development trajectory of the Darjeeling region, particularly Darjeeling Municipal Town in West Bengal, India. A key feature of the region's urban water governance that defines the water scarcity narrative is the multiplicity of water institutions and the intertwining of formal and informal institutions at various scales. These factors affect the availability of and basic access to domestic water by communities in various ways resulting in the creation of a preferred water bundle consisting of informal water markets over and above traditional sourcing from springs and the formal water supply from the town municipality.", "which Region of data collection ?", "Eastern Himalaya", NaN, NaN], ["Abstract. We computed high-resolution (1\u00b0 latitude \u00d7 1\u00b0 longitude) seasonal and annual nitrous oxide (N2O) concentration fields for the Arabian Sea surface layer using a database containing more than 2400 values measured between December 1977 and July 1997. N2O concentrations are highest during the southwest (SW) monsoon along the southern Indian continental shelf. Annual emissions range from 0.33 to 0.70 Tg N2O and are dominated by fluxes from coastal regions during the SW and northeast monsoons. Our revised estimate for the annual N2O flux from the Arabian Sea is much more tightly constrained than the previous consensus derived using averaged in-situ data from a smaller number of studies. However, the tendency to focus on measurements in locally restricted features in combination with insufficient seasonal data coverage leads to considerable uncertainties of the concentration fields and thus in the flux estimates, especially in the coastal zones of the northern and eastern Arabian Sea. The overall mean relative error of the annual N2O emissions from the Arabian Sea was estimated to be at least 65%.", "which Region of data collection ?", "Arabian Sea", 136.0, 147.0], ["During the CARIACO time series program, microbial standing stocks, bacterial production, and acetate turnover were consistently elevated in the redox transition zone (RTZ) of the Cariaco Basin, the depth interval (~240\u2013450 m) of steepest gradient in oxidation\u2010reduction potential. Anomalously high fluxes of particulate carbon were captured in sediment traps below this zone (455 m) in 16 of 71 observations. Here we present new evidence that bacterial chemoautotrophy, fueled by reduced sulfur species, supports an active secondary microbial food web in the RTZ and is potentially a large midwater source of labile, chemically unique, sedimenting biogenic debris to the basin's interior. 
Dissolved inorganic carbon assimilation (27\u2013159 mmol C m\u22122 d\u22121) in this zone was equivalent to 10%\u2013333% of contemporaneous primary production, depending on the season. However, vertical diffusion rates to the RTZ of electron donors and electron acceptors were inadequate to support this production. Therefore, significant lateral intrusions of oxic waters, mixing processes, or intensive cycling of C, S, N, Mn, and Fe across the RTZ are necessary to balance electron equivalents. Chemoautotrophic production appears to be decoupled temporally from short\u2010term surface processes, such as seasonal upwelling and blooms, and potentially is more responsive to long\u2010term changes in surface productivity and deep\u2010water ventilation on interannual to decadal timescales. Findings suggest that midwater production of organic carbon may contribute a unique signature to the basin's sediment record, thereby altering its paleoclimatological interpretation.", "which Region of data collection ?", "Cariaco Basin", 179.0, 192.0], ["Total carbon dioxide (TCO2) and computations of partial pressure of carbon dioxide (pCO2) had been examined in Northeastern region of Indian Ocean. It exhibits seasonal and spatial variability. North-south gradients in the pCO2 levels were closely related to gradients in salinity caused by fresh water discharge received from rivers. Eddies observed in this region helped to elevate the nutrients availability and the biological controls by increasing the productivity. These phenomena elevated the carbon dioxide draw down during the fair seasons. Seasonal fluxes estimated from local wind speed and air-sea carbon dioxide difference indicate that during southwest monsoon, the northeastern Indian Ocean acts as a strong sink of carbon dioxide (-20.04 mmol m\u22122 d\u22121). Also during fall intermonsoon the area acts as a weak sink of carbon dioxide (-4.69 mmol m\u22122 d\u22121). During winter monsoon, this region behaves as a weak carbon dioxide source with an average sea to air flux of 4.77 mmol m\u22122 d\u22121. In the northern region, salinity levels in the surface level are high during winter compared to the other two seasons. Northeastern Indian Ocean shows significant intraseasonal variability in carbon dioxide fluxes that are mediated by eddies which provide carbon dioxide and nutrients from the subsurface waters to the mixed layer.", "which Region of data collection ?", "Northeastern Indian Ocean", 685.0, 710.0], ["Nutrient addition bioassay experiments were performed in the low\u2010nutrient, low\u2010chlorophyll oligotrophic subtropical North Atlantic Ocean to investigate the influence of nitrogen (N), phosphorus (P), and/or iron (Fe) on phytoplankton physiology and the limitation of primary productivity or picophytoplankton biomass. Additions of N alone resulted in 1.5\u20102 fold increases in primary productivity and chlorophyll after 48 h, with larger (~threefold) increases observed for the addition of P in combination with N (NP). Measurements of cellular chlorophyll contents permitted evaluation of the physiological response of the photosynthetic apparatus to N and P additions in three picophytoplankton groups. In both Prochlorococcus and the picoeukaryotes, cellular chlorophyll increased by similar amounts in N and NP treatments relative to all other treatments, suggesting that pigment synthesis was N limited. 
In contrast, the increase of cellular chlorophyll was greater in NP than in N treatments in Synechococcus, suggestive of NP co\u2010limitation. Relative increases in cellular nucleic acid were also only observed in Synechococcus for NP treatments, indicating co\u2010limitation of net nucleic acid synthesis. A lack of response to relief of nutrient stress for the efficiency of photosystem II photochemistry, Fv:Fm, suggests that the low nutrient supply to this region resulted in a condition of balanced nutrient limited growth, rather than starvation. N thus appears to be the proximal (i.e. direct physiological) limiting nutrient in the oligotrophic sub\u2010tropical North Atlantic. In addition, some major picophytoplankton groups, as well as overall autotrophic community biomass, appears to be co\u2010limited by N and P.", "which Region of data collection ?", "North Atlantic Ocean", 116.0, 136.0], ["The spatiotemporal variability of upper ocean inorganic carbon parameters and air\u2010sea CO2 exchange in the Indian Ocean was examined using inorganic carbon data collected as part of the World Ocean Circulation Experiment (WOCE) cruises in 1995. Multiple linear regression methods were used to interpolate and extrapolate the temporally and geographically limited inorganic carbon data set to the entire Indian Ocean basin using other climatological hydrographic and biogeochemical data. 
The spatiotemporal distributions of total carbon dioxide (TCO2), alkalinity, and seawater pCO2 were evaluated for the Indian Ocean and regions of interest including the Arabian Sea, Bay of Bengal, and 10\u00b0N\u201335\u00b0S zones. The Indian Ocean was a net source of CO2 to the atmosphere, and a net sea\u2010to\u2010air CO2 flux of +237 \u00b1 132 Tg C yr\u22121 (+0.24 Pg C yr\u22121) was estimated. Regionally, the Arabian Sea, Bay of Bengal, and 10\u00b0N\u201310\u00b0S zones were perennial sources of CO2 to the atmosphere. In the 10\u00b0S\u201335\u00b0S zone, the CO2 sink or source status of the surface ocean shifts seasonally, although the region is a net oceanic sink of atmospheric CO2.", "which Region of data collection ?", "Indian Ocean", 106.0, 118.0], ["Extensive measurements of nitrous oxide (N2O) have been made during April\u2013May 1994 (intermonsoon), February\u2013March 1995 (northeast monsoon), July\u2013August 1995 and August 1996 (southwest monsoon) in the Arabian Sea. Low N2O supersaturations in the surface waters are observed during intermonsoon compared to those in northeast and southwest monsoons. Spatial distributions of supersaturations manifest the effects of larger mixing during winter cooling and wind\u2010driven upwelling during monsoon period off the Indian west coast. A net positive flux is observable during all the seasons, with no discernible differences from the open ocean to coastal regions. The average ocean\u2010to\u2010atmosphere fluxes of N2O are estimated, using wind speed dependent gas transfer velocity, to be of the order of 0.26, 0.003, and 0.51, and 0.78 pg (pico grams) cm\u22122 s\u22121 during northeast monsoon, intermonsoon, and southwest monsoon in 1995 and 1996, respectively. The lower range of annual emission of N2O is estimated to be 0.56\u20130.76 Tg N2O per year which constitutes 13\u201317% of the net global oceanic source. However, N2O emission from the Arabian Sea can be as high as 1.0 Tg N2O per year using different gas transfer models.", "which Region of data collection ?", "Arabian Sea", 200.0, 211.0], ["Abstract Oligotrophic ocean gyre ecosystems may be expanding due to rising global temperatures [1\u20135]. Models predicting carbon flow through these changing ecosystems require accurate descriptions of phytoplankton communities and their metabolic activities [6]. We therefore measured distributions and activities of cyanobacteria and small photosynthetic eukaryotes throughout the euphotic zone on a zonal transect through the South Pacific Ocean, focusing on the ultraoligotrophic waters of the South Pacific Gyre (SPG). Bulk rates of CO2 fixation were low (0.1 \u00b5mol C l\u22121 d\u22121) but pervasive throughout both the surface mixed-layer (upper 150 m), as well as the deep chlorophyll a maximum of the core SPG. Chloroplast 16S rRNA metabarcoding, and single-cell 13CO2 uptake experiments demonstrated niche differentiation among the small eukaryotes and picocyanobacteria. Prochlorococcus abundances, activity, and growth were more closely associated with the rims of the gyre. Small, fast-growing, photosynthetic eukaryotes, likely related to the Pelagophyceae, characterized the deep chlorophyll a maximum. In contrast, a slower growing population of photosynthetic eukaryotes, likely comprised of Dictyochophyceae and Chrysophyceae, dominated the mixed layer that contributed 65\u201388% of the areal CO2 fixation within the core SPG. 
Small photosynthetic eukaryotes may thus play an underappreciated role in CO2 fixation in the surface mixed-layer waters of ultraoligotrophic ecosystems.", "which Region of data collection ?", "South Pacific Ocean", 426.0, 445.0], ["The global budget of marine nitrogen (N) is not balanced, with N removal largely exceeding N fixation. One of the major causes of this imbalance is our inadequate understanding of the diversity and distribution of marine N2 fixers (diazotrophs) as well as their contribution to N2 fixation. Here, we performed a large\u2010scale cross\u2010system study spanning the South China Sea, Luzon Strait, Philippine Sea, and western tropical Pacific Ocean to compare the biogeography of seven major diazotrophic groups and N2 fixation rates in these ecosystems. Distinct spatial niche differentiation was observed. Trichodesmium was dominant in the South China Sea and western equatorial Pacific, whereas the unicellular cyanobacterium UCYN\u2010B dominated in the Philippine Sea. Furthermore, contrasting diel patterns of Trichodesmium nifH genes and UCYN\u2010B nifH gene transcript activity were observed. The heterotrophic diazotroph Gamma A phylotype was widespread throughout the western Pacific Ocean and occupied an ecological niche that overlapped with that of UCYN\u2010B. Moreover, Gamma A (or other possible unknown/undetected diazotrophs) rather than Trichodesmium and UCYN\u2010B may have been responsible for the high N2 fixation rates in some samples. Regional biogeochemistry analyses revealed cross\u2010system variations in N2\u2010fixing community composition and activity constrained by sea surface temperature, aerosol optical thickness, current velocity, mixed\u2010layer depth, and chlorophyll a concentration. These factors except for temperature essentially control/reflected iron supply/bioavailability and thus drive diazotroph biogeography. This study highlights biogeographical controls on marine N2 fixers and increases our understanding of global diazotroph biogeography.", "which Region of data collection ?", "Western Pacific Ocean", 958.0, 979.0], ["Depth profiles of dissolved nitrous oxide (N2O) were measured in the central and western Arabian Sea during four cruises in May and July\u2013August 1995 and May\u2013July 1997 as part of the German contribution to the Arabian Sea Process Study of the Joint Global Ocean Flux Study. The vertical distribution of N2O in the water column on a transect along 65\u00b0E showed a characteristic double-peak structure, indicating production of N2O associated with steep oxygen gradients at the top and bottom of the oxygen minimum zone. We propose a general scheme consisting of four ocean compartments to explain the N2O cycling as a result of nitrification and denitrification processes in the water column of the Arabian Sea. We observed a seasonal N2O accumulation at 600\u2013800 m near the shelf break in the western Arabian Sea. We propose that, in the western Arabian Sea, N2O might also be formed during bacterial oxidation of organic matter by the reduction of IO3\u2212 to I\u2212, indicating that the biogeochemical cycling of N2O in the Arabian Sea during the SW monsoon might be more complex than previously thought. 
A compilation of sources and sinks of N2O in the Arabian Sea suggested that the N2O budget is reasonably balanced.", "which Region of data collection ?", "Arabian Sea", 89.0, 100.0], ["Abstract Picophytoplankton were investigated during spring 2015 and 2016 extending from near\u2010shore coastal waters to oligotrophic open waters in the eastern Indian Ocean (EIO). They were typically composed of Prochlorococcus (Pro), Synechococcus (Syn), and picoeukaryotes (PEuks). Pro dominated most regions of the entire EIO and were approximately 1\u20132 orders of magnitude more abundant than Syn and PEuks. Under the influence of physicochemical conditions induced by annual variations of circulations and water masses, no coherent abundance and horizontal distributions of picophytoplankton were observed between spring 2015 and 2016. Although previous studies reported the limited effects of nutrients and heavy metals around coastal waters or upwelling zones could constrain Pro growth, Pro abundance showed strong positive correlation with nutrients, indicating the increase in nutrient availability particularly in the oligotrophic EIO could appreciably elevate their abundance. The exceptional appearance of picophytoplankton with high abundance along the equator appeared to be associated with the advection processes supported by the Wyrtki jets. For vertical patterns of picophytoplankton, a simple conceptual model was built based upon physicochemical parameters. However, Pro and PEuks simultaneously formed a subsurface maximum, while Syn generally restricted to the upper waters, significantly correlating with the combined effects of temperature, light, and nutrient availability. The average chlorophyll a concentrations (Chl a) of picophytoplankton accounted for above 49.6% and 44.9% of the total Chl a during both years, respectively, suggesting that picophytoplankton contributed a significant proportion of the phytoplankton community in the whole EIO.", "which Region of data collection ?", "Eastern Indian Ocean", 149.0, 169.0], ["Abstract. The Indian Ocean (44\u00b0 S\u201330\u00b0 N) plays an important role in the global carbon cycle, yet it remains one of the most poorly sampled ocean regions. Several approaches have been used to estimate net sea\u2013air CO2 fluxes in this region: interpolated observations, ocean biogeochemical models, atmospheric and ocean inversions. As part of the RECCAP (REgional Carbon Cycle Assessment and Processes) project, we combine these different approaches to quantify and assess the magnitude and variability in Indian Ocean sea\u2013air CO2 fluxes between 1990 and 2009. Using all of the models and inversions, the median annual mean sea\u2013air CO2 uptake of \u22120.37 \u00b1 0.06 PgC yr\u22121 is consistent with the \u22120.24 \u00b1 0.12 PgC yr\u22121 calculated from observations. The fluxes from the southern Indian Ocean (18\u201344\u00b0 S; \u22120.43 \u00b1 0.07 PgC yr\u22121) are similar in magnitude to the annual uptake for the entire Indian Ocean. All models capture the observed pattern of fluxes in the Indian Ocean with the following exceptions: underestimation of upwelling fluxes in the northwestern region (off Oman and Somalia), overestimation in the northeastern region (Bay of Bengal) and underestimation of the CO2 sink in the subtropical convergence zone. These differences were mainly driven by lack of atmospheric CO2 data in atmospheric inversions, and poor simulation of monsoonal currents and freshwater discharge in ocean biogeochemical models. 
Overall, the models and inversions do capture the phase of the observed seasonality for the entire Indian Ocean but overestimate the magnitude. The predicted sea\u2013air CO2 fluxes by ocean biogeochemical models (OBGMs) respond to seasonal variability with strong phase lags with reference to climatological CO2 flux, whereas the atmospheric inversions predicted an order of magnitude higher seasonal flux than OBGMs. The simulated interannual variability by the OBGMs is weaker than that found by atmospheric inversions. Prediction of such weak interannual variability in CO2 fluxes by atmospheric inversions was mainly caused by a lack of atmospheric data in the Indian Ocean. The OBGM models suggest a small strengthening of the sink over the period 1990\u20132009 of \u22120.01 PgC decade\u22121. This is inconsistent with the observations in the southwestern Indian Ocean that shows the growth rate of oceanic pCO2 was faster than the observed atmospheric CO2 growth, a finding attributed to the trend of the Southern Annular Mode (SAM) during the 1990s.\n ", "which Region of data collection ?", "Indian Ocean", 22.0, 34.0], ["The Eastern Tropical North Pacific Ocean hosts one of the world's largest oceanic oxygen deficient zones (ODZs). Hot spots for reactive nitrogen (Nr) removal processes, ODZs generate conditions proposed to promote Nr inputs via dinitrogen (N2) fixation. In this study, we quantified N2 fixation rates by 15N tracer bioassay across oxygen, nutrient, and light gradients within and adjacent to the ODZ. Within subeuphotic oxygen\u2010deplete waters, N2 fixation was largely undetectable; however, addition of dissolved organic carbon stimulated N2 fixation in suboxic (<20 \u03bcmol/kg O2) waters, suggesting that diazotroph communities are likely energy limited or carbon limited and able to fix N2 despite high ambient concentrations of dissolved inorganic nitrogen. Elevated rates (>9 nmol N\u00b7L\u22121\u00b7day\u22121) were also observed in suboxic waters near volcanic islands where N2 fixation was quantifiable to 3,000 m. Within the overlying euphotic waters, N2 fixation rates were highest near the continent, exceeding 500 \u03bcmol N\u00b7m\u22122\u00b7day\u22121 at one third of inshore stations. These findings support the expansion of the known range of diazotrophs to deep, cold, and dissolved inorganic nitrogen\u2010replete waters. Additionally, this work bolsters calls for the reconsideration of ocean margins as important sources of Nr. Despite high rates at some inshore stations, regional N2 fixation appears insufficient to compensate for Nr loss locally as observed previously in the Eastern Tropical South Pacific ODZ.", "which Region of data collection ?", "Eastern tropical north Pacific Ocean", 4.0, 40.0], ["Dissolved and atmospheric nitrous oxide (N2O) were measured on the legs 3 and 5 of the R/V Meteor cruise 32 in the Arabian Sea. A cruise track along 65\u00b0E was followed during both the intermonsoon (May 1995) and the southwest (SW) monsoon (July/August 1995) periods. During the second leg the coastal and open ocean upwelling regions off the Arabian Peninsula were also investigated. Mean N2O saturations for the oceanic regions of the Arabian Sea were in the range of 99\u2013103% during the intermonsoon and 103\u2013230% during the SW monsoon. 
Computed annual emissions of 0.8\u20131.5 Tg N2O for the Arabian Sea are considerably higher than previous estimates, indicating that the role of upwelling regions, such as the Arabian Sea, may be more important than previously assumed in global budgets of oceanic N2O emissions.", "which Region of data collection ?", "Arabian Sea", 115.0, 126.0], ["Abstract. We performed N budgets at three stations in the western tropical South Pacific (WTSP) Ocean during austral summer conditions (Feb. Mar. 2015) and quantified all major N fluxes both entering the system (N2 fixation, nitrate eddy diffusion, atmospheric deposition) and leaving the system (PN export). Thanks to a Lagrangian strategy, we sampled the same water mass for the entire duration of each long duration (5 days) station, allowing to consider only vertical exchanges. Two stations located at the western end of the transect (Melanesian archipelago (MA) waters, LD A and LD B) were oligotrophic and characterized by a deep chlorophyll maximum (DCM) located at 51\u2009\u00b1\u200918\u2009m and 81\u2009\u00b1\u20099\u2009m at LD A and LD B. Station LD C was characterized by a DCM located at 132\u2009\u00b1\u20097\u2009m, representative of the ultra-oligotrophic waters of the South Pacific gyre (SPG water). N2 fixation rates were extremely high at both LD A (593\u2009\u00b1\u200951\u2009\u00b5mol\u2009N\u2009m\u22122\u2009d\u22121) and LD B (706\u2009\u00b1\u2009302\u2009\u00b5mol\u2009N\u2009m\u22122\u2009d\u22121), and the diazotroph community was dominated by Trichodesmium. N2 fixation rates were lower (59\u2009\u00b1\u200916\u2009\u00b5mol\u2009N\u2009m\u22122\u2009d\u22121) at LD C and the diazotroph community was dominated by unicellular N2-fixing cyanobacteria (UCYN). At all stations, N2 fixation was the major source of new N (>\u200990\u2009%) before atmospheric deposition and upward nitrate fluxes induced by turbulence. N2 fixation contributed circa 8\u201312\u2009% of primary production in the MA region and 3\u2009% in the SPG water and sustained nearly all new primary production at all stations. The e-ratio (e-ratio\u2009=\u2009PC export/PP) was maximum at LD A (9.7\u2009%) and was higher than the e-ratio in most studied oligotrophic regions (~\u20091\u2009%), indicating a high efficiency of the WTSP to export carbon relative to primary production. The direct export of diazotrophs assessed by qPCR of the nifH gene in sediment traps represented up to 30.6\u2009% of the PC export at LD A, while their contribution was 5 and \n ", "which Season ?", "Austral summer", 117.0, 131.0], ["We report the first measurements of new production (15N tracer technique), the component of primary production that sustains on extraneous nutrient inputs to the euphotic zone, in the Bay of Bengal. Experiments done in two different seasons consistently show high new production (averaging around 4 mmol N m\u22122 d\u22121 during post monsoon and 5.4 mmol N m\u22122 d\u22121 during pre monsoon), validating the earlier conjecture of high new production, based on pCO2 measurements, in the Bay. Averaged over annual time scales, higher new production could cause higher rate of removal of organic carbon. This could also be one of the reasons for comparable organic carbon fluxes observed in the sediment traps of the Bay of Bengal and the eastern Arabian Sea. 
Thus, oceanic regions like Bay of Bengal may play a more significant role in removing the excess CO2 from the atmosphere than hitherto believed.", "which Sampling depth covered (m) ?", "Euphotic zone", 162.0, 175.0], ["The uptake of dissolved inorganic nitrogen by phytoplankton is an important aspect of the nitrogen cycle of oceans. Here, we present nitrate () and ammonium () uptake rates in the northeastern Arabian Sea using tracer technique. In this relatively underexplored region, productivity is high during winter due to supply of nutrients by convective mixing caused by the cooling of the surface by the northeast monsoon winds. Studies done during different months (January and late February-early March) of the northeast monsoon 2003 revealed a fivefold increase in the average euphotic zone integrated uptake from January (2.3 mmolN ) to late February-early March (12.7 mmolN ). The f-ratio during January appeared to be affected by the winter cooling effect and increased by more than 50% from the southernmost station to the northern open ocean stations, indicating hydrographic and meteorological control. Estimates of residence time suggested that nitrate entrained in the water column during January contributed to the development of blooms during late February-early March.", "which Sampling depth covered (m) ?", "Euphotic zone", 573.0, 586.0], ["Diazotrophy in the Indian Ocean is poorly understood compared to that in the Atlantic and Pacific Oceans. We first examined the basin‐scale community structure of diazotrophs and their nitrogen fixation activity within the euphotic zone during the northeast monsoon period along about 69°E from 17°N to 20°S in the oligotrophic Indian Ocean, where a shallow nitracline (49–59 m) prevailed widely and the sea surface temperature (SST) was above 25°C. Phosphate was detectable at the surface throughout the study area. The dissolved iron concentration and the ratio of iron to nitrate + nitrite at the surface were significantly higher in the Arabian Sea than in the equatorial and southern Indian Ocean. Nitrogen fixation in the Arabian Sea (24.6–47.1 μmolN m−2 d−1) was also significantly greater than that in the equatorial and southern Indian Ocean (6.27–16.6 μmolN m−2 d−1), indicating that iron could control diazotrophy in the Indian Ocean. Phylogenetic analysis of nifH showed that most diazotrophs belonged to the Proteobacteria and that cyanobacterial diazotrophs were absent in the study area except in the Arabian Sea. Furthermore, nitrogen fixation was not associated with light intensity throughout the study area. These results are consistent with nitrogen fixation in the Indian Ocean, being largely performed by heterotrophic bacteria and not by cyanobacteria. The low cyanobacterial diazotrophy was attributed to the shallow nitracline, which is rarely observed in the Pacific and Atlantic oligotrophic oceans. Because the shallower nitracline favored enhanced upward nitrate flux, the competitive advantage of cyanobacterial diazotrophs over nondiazotrophic phytoplankton was not as significant as it is in other oligotrophic oceans.", "which Water column zone ?", "Euphotic zone", 223.0, 236.0], ["Abstract. 
Prochlorococcus, Synechococcus, picophytoeukaryotes and bacterioplankton abundances and contributions to the total particulate organic carbon concentration, derived from the total particle beam attenuation coefficient (cp), were determined across the eastern South Pacific between the Marquesas Islands and the coast of Chile. All flow cytometrically derived abundances decreased towards the hyper-oligotrophic centre of the gyre and were highest at the coast, except for Prochlorococcus, which was not detected under eutrophic conditions. Temperature and nutrient availability appeared important in modulating picophytoplankton abundance, according to the prevailing trophic conditions. Although the non-vegetal particles tended to dominate the cp signal everywhere along the transect (50 to 83%), this dominance seemed to weaken from oligo- to eutrophic conditions, the contributions by vegetal and non-vegetal particles being about equal under mature upwelling conditions. Spatial variability in the vegetal compartment was more important than the non-vegetal one in shaping the water column particle beam attenuation coefficient. Spatial variability in picophytoplankton biomass could be traced by changes in both total chlorophyll a (i.e. mono + divinyl chlorophyll a) concentration and cp. Finally, picophytoeukaryotes contributed ~38% on average to the total integrated phytoplankton carbon biomass or vegetal attenuation signal along the transect, as determined by size measurements (i.e. equivalent spherical diameter) on cells sorted by flow cytometry and optical theory. Although there are some uncertainties associated with these estimates, the new approach used in this work further supports the idea that picophytoeukaryotes play a dominant role in carbon cycling in the upper open ocean, even under hyper-oligotrophic conditions.", "which Material/Method ?", "Flow cytometry", 1557.0, 1571.0], ["Purpose This paper aims to present the results of energy management and optimization studies in one Turkish textile factory. In a case study of a print and dye factory in Istanbul, the authors identified energy-sensitive processes and proposed energy management applications. Design/methodology/approach Appropriate energy management methods have been implemented in the factory, and the results were examined in terms of energy efficiency and cost reduction. Findings By applying the methods for fuel distribution optimization, the authors demonstrated that energy costs could be decreased by approximately. Originality/value Energy management is a vital issue for industries particularly in developing countries such as Turkey. Turkey is an energy poor country and imports more than half of its energy to satisfy its increasing domestic demands. An important share of these demands stems from the presence of a strong textile industry that operates throughout the country.", "which Type of industry ?", "Textile industry", 920.0, 936.0], ["This paper studies a problem in the knitting process of the textile industry. In such a production system, each job has a number of attributes and each attribute has one or more levels. Because there is at least one different attribute level between two adjacent jobs, it is necessary to make a set-up adjustment whenever there is a switch to a different job. The problem can be formulated as a scheduling problem with multi-attribute set-up times on unrelated parallel machines. The objective of the problem is to assign jobs to different machines to minimise the makespan. 
A constructive heuristic is developed to obtain a qualified solution. To improve the solution further, a meta-heuristic that uses a genetic algorithm with a new crossover operator and three local searches are proposed. The computational experiments show that the proposed constructive heuristic outperforms two existing heuristics and the current scheduling method used by the case textile plant.", "which Type of industry ?", "Textile industry", 60.0, 76.0], ["We study localized dissipative structures in a generalized Lugiato-Lefever equation, exhibiting normal group-velocity dispersion and anomalous quartic group-velocity dispersion. In the conservative system, this parameter-regime has proven to enable generalized dispersion Kerr solitons. Here, we demonstrate via numerical simulations that our dissipative system also exhibits equivalent localized states, including special molecule-like two-color bound states recently reported. We investigate their generation, characterize the observed steady-state solution, and analyze their propagation dynamics under perturbations.", "which Mathematical model ?", "Generalized Lugiato-Lefever equation", 47.0, 83.0], ["Abstract Cleavage of C–O bonds in lignin can afford the renewable aryl sources for fine chemicals. However, the high bond energies of these C–O bonds, especially the 4-O-5-type diaryl ether C–O bonds (~314 kJ/mol) make the cleavage very challenging. Here, we report visible-light photoredox-catalyzed C–O bond cleavage of diaryl ethers by an acidolysis with an aryl carboxylic acid and a following one-pot hydrolysis. Two molecules of phenols are obtained from one molecule of diaryl ether at room temperature. The aryl carboxylic acid used for the acidolysis can be recovered. The key to success of the acidolysis is merging visible-light photoredox catalysis using an acridinium photocatalyst and Lewis acid catalysis using Cu(TMHD)2. Preliminary mechanistic studies indicate that the catalytic cycle occurs via a rare selective electrophilic attack of the generated aryl carboxylic radical on the electron-rich aryl ring of the diphenyl ether. This transformation is applied to a gram-scale reaction and the model of 4-O-5 lignin linkages.", "which Photoredox catalyst ?", "Acridinium photocatalyst", 670.0, 694.0], ["We present a new method for the preparation of cobinamide (CN)2Cbi, a vitamin B12 precursor, that should allow its broader utility. Treatment of vitamin B12 with only NaCN and heating in a microwave reactor affords (CN)2Cbi as the sole product. The purification procedure was greatly simplified, allowing for easy isolation of the product in 94% yield. The use of microwave heating proved beneficial also for (CN)2Cbi(c-lactone) synthesis. Treatment of (CN)2Cbi with triethanolamine led to (CN)2Cbi(c-lactam).", "which Special conditions ?", "Microwave reactor", 189.0, 206.0], ["Abstract Granitic lunar samples largely consist of granophyric intergrowths of silica and K-feldspar. The identification of the silica polymorph present in the granophyre can clarify the petrogenesis of the lunar granites. The presence of tridymite or cristobalite would indicate rapid crystallization at high temperature. Quartz would indicate crystallization at low temperature or perhaps intrusive, slow crystallization, allowing for the orderly transformation from high-temperature silica polymorphs (tridymite or cristobalite). 
We identify the silica polymorphs present in four granitic lunar samples from the Apollo 12 regolith using laser Raman spectroscopy. Typically, lunar silica occurs with a hackle fracture pattern. We did an initial density calculation on the hackle fracture pattern of quartz and determined that the volume of quartz and fracture space is consistent with a molar volume contraction from tridymite or cristobalite, both of which are less dense than quartz. Moreover, we analyzed the silica in the granitic fragments from Apollo 12 by electron-probe microanalysis and found it contains up to 0.7 wt% TiO2, consistent with initial formation as the high-temperature silica polymorphs, which have more open crystal structures that can more readily accommodate cations other than Si. The silica in Apollo 12 granitic samples crystallized rapidly as tridymite or cristobalite, consistent with extrusive volcanism. The silica then inverted to quartz at a later time, causing it to contract and fracture. A hackle fracture pattern is common in silica occurring in extrusive lunar lithologies (e.g., mare basalt). The extrusive nature of these granitic samples makes them excellent candidates to be similar to the rocks that compose positive relief silicic features such as the Gruithuisen Domes.", "which Techniques/ Analysis ?", "Laser Raman", 640.0, 651.0], ["Quantification of mineral proportions in rocks and soils by Raman spectroscopy on a planetary surface is best done by taking many narrow-beam spectra from different locations on the rock or soil, with each spectrum yielding peaks from only one or two minerals. The proportion of each mineral in the rock or soil can then be determined from the fraction of the spectra that contain its peaks, in analogy with the standard petrographic technique of point counting. The method can also be used for nondestructive laboratory characterization of rock samples. Although Raman peaks for different minerals seldom overlap each other, it is impractical to obtain proportions of constituent minerals by Raman spectroscopy through analysis of peak intensities in a spectrum obtained by broad-beam sensing of a representative area of the target material. That is because the Raman signal strength produced by a mineral in a rock or soil is not related in a simple way through the Raman scattering cross section of that mineral to its proportion in the rock, and the signal-to-noise ratio of a Raman spectrum is poor when a sample is stimulated by a low-power laser beam of broad diameter. Results obtained by the Raman point-count method are demonstrated for a lunar thin section (14161,7062) and a rock fragment (15273,7039). Major minerals (plagioclase and pyroxene), minor minerals (cristobalite and K-feldspar), and accessory minerals (whitlockite, apatite, and baddeleyite) were easily identified. Identification of the rock types, KREEP basalt or melt rock, from the 100-location spectra was straightforward.", "which Techniques/ Analysis ?", "Raman spectroscopy", 60.0, 78.0], ["The Spectral Profiler (SP) onboard the Japanese SELENE (KAGUYA) spacecraft is now providing global high spectral resolution visible\u2010near infrared continuous reflectance spectra of the Moon. The reflectance spectra of impact craters on the farside of the Moon reveal lithologies that were not previously identified. 
The achievements of SP so far include: the most definite detection of crystalline iron\u2010bearing plagioclase with its characteristic 1.3 \u03bcm absorption band on the Moon; a new interpretation of the lithology of Tsiolkovsky crater central peaks, previously classified as \u201colivine\u2010rich,\u201d as mixtures of plagioclase and pyroxene; and the lower limit of Mg number of low\u2010Ca pyroxene found at Antoniadi crater central peak and peak ring which were estimated through direct comparison with laboratory spectra of natural and synthetic pyroxene samples.", "which Instrument ?", "Spectral Profiler (SP)", NaN, NaN], ["Orbital data acquired by the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) and High Resolution Imaging Science Experiment instruments on the Mars Reconnaissance Orbiter (MRO) provide a synoptic view of compositional stratigraphy on the floor of Gale crater surrounding the area where the Mars Science Laboratory (MSL) Curiosity landed. Fractured, light\u2010toned material exhibits a 2.2 \u00b5m absorption consistent with enrichment in hydroxylated silica. This material may be distal sediment from the Peace Vallis fan, with cement and fracture fill containing the silica. This unit is overlain by more basaltic material, which has 1 \u00b5m and 2 \u00b5m absorptions due to pyroxene that are typical of Martian basaltic materials. Both materials are partially obscured by aeolian dust and basaltic sand. Dunes to the southeast exhibit differences in mafic mineral signatures, with barchan dunes enhanced in olivine relative to pyroxene\u2010containing longitudinal dunes. This compositional difference may be related to aeolian grain sorting.", "which Instrument ?", "Compact Reconnaissance Imaging Spectrometer for Mars (CRISM)", NaN, NaN], ["Over 100 Martian gully sites were analyzed using orbital data collected by the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) and High Resolution Imaging Science Experiment on the Mars Reconnaissance Orbiter (MRO). Most gullies are spectrally indistinct from their surroundings, due to mantling by dust. Where spectral information on gully sediments was obtained, a variety of mineralogies were identified. Their relationship to the source rock suggests that gully\u2010forming processes transported underlying material downslope. There is no evidence for specific compositions being more likely to be associated with gullies or with the formation of hydrated minerals in situ as a result of recent liquid water activity. Seasonal CO2 and H2O frosts were observed in gullies at middle to high latitudes, consistent with seasonal frost\u2010driven processes playing important roles in the evolution of gullies. Our results do not clearly indicate a role for long\u2010lived liquid water in gully formation and evolution.", "which Instrument ?", "Compact Reconnaissance Imaging Spectrometer for Mars (CRISM)", NaN, NaN], ["Stark broadening and atomic data calculations have been developed for the recent years, especially atomic and line broadening data for highly ionized ions of argon. We present in this paper atomic data (such as energy levels, line strengths, oscillator strengths and radiative decay rates) for Ar XVI ion and quantum Stark broadening calculations for 10 Ar XVI lines. Radiative atomic data for this ion have been calculated using the University College London (UCL) codes (SUPERSTRUCTURE, DISTORTED WAVE, JAJOM) and have been compared with other results. 
Using our quantum mechanical method, our Stark broadening calculations for Ar XVI lines are performed at electron density Ne = 10^20 cm−3 and for electron temperature varying from 7.5×10 to 7.5×10 K. No Stark broadening results are available in the literature to compare with. So, our results come to fill this lack of data.", "which Ionization_state ?", "Ar XVI", 294.0, 300.0], ["The complex dynamics of radio-frequency driven atmospheric pressure plasma jets is investigated using various optical diagnostic techniques and numerical simulations. Absolute number densities of ground state atomic oxygen radicals in the plasma effluent are measured by two-photon absorption laser induced fluorescence spectroscopy (TALIF). Spatial profiles are compared with (vacuum) ultra-violet radiation from excited states of atomic oxygen and molecular oxygen, respectively. The excitation and ionization dynamics in the plasma core are dominated by electron impact and observed by space and phase resolved optical emission spectroscopy (PROES). The electron dynamics is governed through the motion of the plasma boundary sheaths in front of the electrodes as illustrated in numerical simulations using a hybrid code based on fluid equations and kinetic treatment of electrons.", "which Research_plan ?", "Electron dynamics", 657.0, 674.0], ["A new computationally assisted diagnostic to measure NO densities in atmospheric-pressure microplasmas by Optical Emission Spectroscopy (OES) is developed and validated against absorption spectroscopy in a volume Dielectric Barrier Discharge (DBD). The OES method is then applied to a twin surface DBD operated in N2 to measure the NO density as a function of the O2 admixture (0.1%–1%). The underlying rate equation model reveals that NO(A2Σ+) is primarily excited by reactions of the ground state NO(X2Π) with metastables N2(A3Σu+).", "which Research_plan ?", "NO densities", 53.0, 65.0], ["Polyethylene terephthalate (PET) is the most important mass‐produced thermoplastic polyester used as a packaging material. Recently, thermophilic polyester hydrolases such as TfCut2 from Thermobifida fusca have emerged as promising biocatalysts for an eco‐friendly PET recycling process. In this study, postconsumer PET food packaging containers are treated with TfCut2 and show weight losses of more than 50% after 96 h of incubation at 70 °C. Differential scanning calorimetry analysis indicates that the high linear degradation rates observed in the first 72 h of incubation is due to the high hydrolysis susceptibility of the mobile amorphous fraction (MAF) of PET. The physical aging process of PET occurring at 70 °C is shown to gradually convert MAF to polymer microstructures with limited accessibility to enzymatic hydrolysis. 
Analysis of the chain\u2010length distribution of degraded PET by nuclear magnetic resonance spectroscopy reveals that MAF is rapidly hydrolyzed via a combinatorial exo\u2010 and endo\u2010type degradation mechanism whereas the remaining PET microstructures are slowly degraded only by endo\u2010type chain scission causing no detectable weight loss. Hence, efficient thermostable biocatalysts are required to overcome the competitive physical aging process for the complete degradation of postconsumer PET materials close to the glass transition temperature of PET.", "which has result ?", "weight loss", 1154.0, 1165.0], ["Alternative migratory pathways in the life histories of fishes can be difficult to assess but may have great importance to the dynamics of spatially structured populations. We used Sr/Ca in otoliths as a tracer of time spent in freshwater and brackish habitats to study the ontogenetic mov- ments of white perch Morone americana in the Patuxent River estuary. We observed that, soon after the larvae metamorphose, juveniles either move to brackish habitats (brackish contingent) or take up residency in tidal fresh water (freshwater contingent) for the first year of life. In one intensively stud- ied cohort of juveniles, the mean age at which individuals moved into brackish environments was 45 d (post-hatch), corresponding to the metamorphosis of lavae to juveniles and settlement in littoral habitats. Back-calculated growth rates of the freshwater contingent at this same age (median = 0.6 mm d -1 ) were significantly higher than the brackish contingent (median = 0.5 mm d -1 ). Strong year-class variability (>100-fold) was evident from juvenile surveys and from the age composition of adults sampled during spawning. Adult samples were dominated by the brackish contingent (93% of n = 363), which exhibited a significantly higher growth rate (von Bertalanffy, k = 0.67 yr -1 ) than the freshwater contingent (k = 0.39 yr -1 ). Combined with evidence that the relative frequency of the brackish contingent has increased in year-classes with high juvenile recruitment, these results impli- cate brackish environments as being important for maintaining abundance and productivity of the population. By comparison, disproportionately greater recruitment to the adult population by the freshwater contingent during years of low juvenile abundance suggested that freshwater habitats sustain a small but crucial reproductive segment of the population. Thus, both contingents appeared to have unique and complementary roles in the population dynamics of white perch.", "which Species Order ?", "Morone americana", 312.0, 328.0], ["To confirm the occurrence of marine residents of the Japanese eel, Anguilla japonica, which have never entered freshwater ('sea eels'), we measured Sr and Ca concentrations by X-ray electron microprobe analysis of the otoliths of 69 yellow and silver eels, collected from 10 localities in seawater and freshwater habitats around Japan, and classified their migratory histories. Two-dimen- sional images of the Sr concentration in the otoliths showed that all specimens generally had a high Sr core at the center of their otolith, which corresponded to a period of their leptocephalus and early glass eel stages in the ocean, but there were a variety of different patterns of Sr concentration and concentric rings outside the central core. Line analysis of Sr/Ca ratios along the radius of each otolith showed peaks (ca 15 \u00d7 10 -3 ) between the core and out to about 150 \u00b5m (elver mark). 
The pattern change of the Sr/Ca ratio outside of 150 \u00b5m indicated 3 general categories of migratory history: 'river eels', 'estuarine eels' and 'sea eels'. These 3 categories corresponded to mean values of Sr/Ca ratios of \u2265 6.0 \u00d7 10 -3 for sea eels, which spent most of their life in the sea and did not enter freshwater, of 2.5 to 6.0 \u00d7 10 -3 for estuarine eels, which inhabited estuaries or switched between different habitats, and of <2.5 \u00d7 10 -3 for river eels, which entered and remained in freshwater river habitats after arrival in the estuary. The occurrence of sea eels was 20% of all specimens examined and that of river eels, 23%, while estuarine eels were the most prevalent (57%). The occurrence of sea eels was confirmed at 4 localities in Japanese coastal waters, including offshore islands, a small bay and an estuary. The finding of estuarine eels as an intermediate type, which appear to frequently move between different habitats, and their presence at almost all localities, suggested that A. japonica has a flexible pattern of migration, with an ability to adapt to various habitats and salinities. Thus, anguillid eel migrations into freshwater are clearly not an obligatory migratory pathway, and this form of diadromy should be defined as facultative catadromy, with the sea eel as one of several ecophenotypes. Furthermore, this study indicates that eels which utilize the marine environment to various degrees during their juve- nile growth phase may make a substantial contribution to the spawning stock each year.", "which Species Order ?", "Anguilla japonica", 67.0, 84.0], ["To understand the migratory behavior and habitat use of the Japanese eel Anguilla japonica in the Kaoping River, SW Taiwan, the temporal changes of strontium (Sr) and calcium (Ca) contents in otoliths of the eels in combination with age data were examined by wavelength dispersive X-ray spectrometry with an electron probe microanalyzer. Ages of the eel were determined by the annulus mark in their otolith. The pattern of the Sr:Ca ratios in the otoliths, before the elver stage, was similar among all specimens. Post-elver stage Sr:Ca ratios indicated that the eels experienced different salinity histories in their growth phase yellow stage. The mean (\u00b1SD) Sr:Ca ratios in otoliths beyond elver check of the 6 yellow eels from the freshwater middle reach were 1.8 \u00b1 0.2 x 10 -3 with a maximum value of 3.73 x 10 -3 . Sr:Ca ratios of less than 4 x 10-3 were used to discriminate the freshwater from seawater resident eels. Eels from the lower reach of the river were classified into 3 types: (1) freshwater contingents, Sr:Ca ratio <4 x 10 -3 , constituted 14 % of the eels examined; (2) seawater contingent, Sr:Ca ratio 5.1 \u00b1 1.1 x 10-3 (5%); and (3) estuarine contingent, Sr:Ca ratios ranged from 0 to 10 x 10 -3 , with migration between freshwater and seawater (81 %). The frequency distribution of the 3 contingents differed between yellow and silver eel stages (0.01 < p < 0.05 for each case) and changed with age of the eel, indicating that most of the eels stayed in the estuary for the first year then migrated to the freshwater until 6 yr old. 
The eel population in the river system was dominated by the estuarine contingent, probably because the estuarine environment was more stable and had a larger carrying capacity than the freshwater middle reach did, and also due to a preference for brackish water by the growth-phase, yellow eel.", "which Species Order ?", "Anguilla japonica", 73.0, 90.0], ["The ability to identify past patterns of salinity habitat use in coastal fishes is viewed as a critical development in evaluating nursery habitats and their role in population dynamics. The utility of otolith tracers (δ 13 C, δ 18 O, and Sr/Ca) as proxies for environmental salinity was tested for the estuarine-dependent juvenile white perch Morone americana. Analysis of water samples revealed a positive relationship between the salinity gradient and δ 18 O, δ 13 C, and Sr/Ca values of water in the Patuxent River estuary. Similarly, analysis of otolith material from young-of-the-year white perch (2001, 2004, 2005) revealed a positive relationship between salinity and otolith δ 13 C, δ 18 O, and Sr/Ca values. In classifying fish to their known salinity habitat, δ 18 O and Sr/Ca were moderately accurate tracers (53 to 79% and 75% correct classification, respectively), and δ 13 C provided near complete discrimination between habitats (93 to 100% correct classification). Further, δ 13 C exhibited the lowest inter-annual variability and the largest range of response across salinity habitats. Thus, across estuaries, it is expected that resolution and reliability of salinity histories of juvenile white perch will be improved through the application of stable isotopes as tracers of salinity history.", "which Species Order ?", "Morone americana", 343.0, 359.0], ["Strontium isotope and Sr/Ca ratios measured in situ by ion microprobe along radial transects of otoliths of juvenile chinook salmon (Oncorhynchus tshawytscha) vary between watersheds with contrasting geology. Otoliths from ocean-type chinook from Skagit River estuary, Washington, had prehatch regions with 87Sr/86Sr ratios of ~0.709, suggesting a maternally inherited marine signature, extensive fresh water growth zones with 87Sr/86Sr ratios similar to those of the Skagit River at ~0.705, and marine-like 87Sr/86Sr ratios near their edges. Otoliths from stream-type chinook from central Idaho had prehatch 87Sr/86Sr ratios ≥0.711, indicating that a maternal marine Sr isotopic signature is not preserved after the ~1000- to 1400-km migration from the Pacific Ocean. 87Sr/86Sr ratios in the outer portions of otoliths from these Idaho juveniles were similar to those of their respective streams (~0.708–0.722). For Skagit juveniles, fresh water growth was marked by small decreases in otolith Sr/Ca, with increases in ...", "which Species Order ?", "Oncorhynchus tshawytscha", 133.0, 157.0], ["The migratory history of the white-spotted charr Salvelinus leucomaenis was examined using otolith microchemical analysis. The fish migrated between freshwater and marine environments multiple times during their life history. Some white-spotted charr used an estuarine habitat prior to smolting and repeated seaward migration within a year.", "which Species Order ?", "Salvelinus leucomaenis", 49.0, 71.0], ["Dispersal during the early life history of the anadromous rainbow smelt, Osmerus mordax, was examined using assignment testing and mixture analysis of multilocus genotypes and otolith elemental composition. 
Six spawning areas and associated estuarine nurseries were sampled throughout southeastern Newfoundland. Samples of adults and juveniles isolated by > 25 km displayed moderate genetic differentiation (FST ~ 0.05), whereas nearby (< 25 km) spawning and nursery samples displayed low differentiation (FST < 0.01). Self\u2010assignment and mixture analysis of adult spawning samples supported the hypothesis of independence of isolated spawning locations (> 80% self\u2010assignment) with nearby runs self\u2010assigning at rates between 50 % and 70%. Assignment and mixture analysis of juveniles using adult baselines indicated high local recruitment at several locations (70\u201390%). Nearby (< 25 km) estuaries at the head of St Mary's Bay showed mixtures of individuals (i.e. 20\u201340% assignment to adjacent spawning location). Laser ablation inductively coupled mass spectrometry transects across otoliths of spawning adults of unknown dispersal history were used to estimate dispersal among estuaries across the first year of life. Single\u2010element trends and multivariate discriminant function analysis (Sr:Ca and Ba:Ca) classified the majority of samples as estuarine suggesting limited movement between estuaries (< 0.5%). The mixtures of juveniles evident in the genetic data at nearby sites and a lack of evidence of straying in the otolith data support a hypothesis of selective mortality of immigrants. If indeed selective mortality of immigrants reduces the survivorship of dispersers, estimates of dispersal in marine environments that neglect survival may significantly overestimate gene flow.", "which Species Order ?", "Osmerus mordax", 73.0, 87.0], ["The environmental history of American eels Anguilla rostrata from the East River, Nova Scotia, was investigated by electron microprobe analysis of the Sr:Ca ratio along transects of the eel otolith. The mean (\u00b1SD) Sr:Ca ratio in the otoliths of juvenile American eels was 5.42 \u00d7 10 -3 \u00b1 1.22 \u00d7 10 -3 at the elver check and decreased to 2.38 \u00d7 10 -3 \u00b1 0.99 \u00d7 10 -3 at the first annulus for eels that migrated directly into the river but increased to 7.28 \u00d7 10 -3 \u00b1 1.09 \u00d7 10 -3 for eels that had remained in the estuary for 1 yr or more before entering the river. At the otolith edge, Sr:Ca ratios of 4.0 \u00d7 10 -3 or less indicated freshwater residence and ratios of 5.0 \u00d7 10 -3 or more indicated estuarine residence. Four distinct but interrelated behavioural groups were identified by the temporal changes in Sr:Ca ratios in their otoliths: (1) entrance into freshwater as an elver, (2) coastal or estuarine residence for 1y r or more before entering freshwater, and, after entering freshwater, (3) continuous freshwater res- idence until the silver eel stage and (4) freshwater residence for 1 yr or more before engaging in peri- odic, seasonal movements between estuary and freshwater until the silver eel stage. Small (< 70 mm total length), highly pigmented elvers that arrived early in the elver run were confirmed as slow growing age-1 juvenile eels. 
Juvenile eels that remained 1 yr or more in the estuary before entering the river contributed to the production of silver eels to a relatively greater extent than did elvers that entered the river during the year of continental arrival.", "which Species Order ?", "Anguilla rostrata", 43.0, 60.0], ["Reproductive isolation between steelhead and resident rainbow trout (Oncorhynchus mykiss) was examined in the Deschutes River, Oregon, through surveys of spawning timing and location. Otolith microchemistry was used to determine the occurrence of steelhead and resident rainbow trout progeny in the adult populations of steelhead and resident rainbow trout in the Deschutes River and in the Babine River, British Columbia. In the 3 years studied, steelhead spawning occurred from mid March through May and resident rainbow trout spawning occurred from mid March through August. The timing of 50% spawning was 9-10 weeks earlier for steelhead than for resident rainbow trout. Spawning sites selected by steelhead were in deeper water and had larger substrate than those selected by resident rainbow trout. Maternal origin was identified by comparing Sr/Ca ratios in the primordia and freshwater growth regions of the otolith with a wavelength-dispersive electron microprobe. In the Deschutes River, only steelhead of steelhead maternal origin and resident rainbow trout of resident rainbow trout origin were observed. In the Babine River, steelhead of resident rainbow trout origin and resident rainbow trout of steelhead maternal origin were also observed. Based on these findings, we suggest that steelhead and resident rainbow trout in the Deschutes River may constitute reproductively isolated populations.", "which Species Order ?", "Oncorhynchus mykiss", 69.0, 88.0], ["The environmental history of the shirauo, Salangichthys microdon, was examined in terms of strontium (Sr) and calcium (Ca) uptake in the otolith, by means of wavelength dispersive X-ray spectrometry on an electron microprobe. Anadromous and lacustrine type of the shirauo were found to occur sympatric. Otolith Sr concentration or Sr : Ca ratios of anadromous shirauo fluctuated strongly along the life-history transect in accordance with the migration (habitat) pattern from sea to freshwater. In contrast, the Sr concentration or the Sr : Ca ratios of lacustrine shirauo remained at consistently low levels throughout the otolith. The higher ratios in anadromous shirauo, in the otolith region from the core to 90–230 μm, corresponded to the initial sea-going period, probably reflecting the ambient salinity or the seawater–freshwater gradient in Sr concentration. The findings clearly indicated that otolith Sr : Ca ratios reflected individual life histories, enabling these anadromous shirauo to be distinguished from lacustrine shirauo.", "which Species Order ?", "Salangichthys microdon", 42.0, 64.0], ["MOTIVATION Array Comparative Genomic Hybridization (CGH) can reveal chromosomal aberrations in the genomic DNA. These amplifications and deletions at the DNA level are important in the pathogenesis of cancer and other diseases. While a large number of approaches have been proposed for analyzing the large array CGH datasets, the relative merits of these methods in practice are not clear. RESULTS We compare 11 different algorithms for analyzing array CGH data. 
These include both segment detection methods and smoothing methods, based on diverse techniques such as mixture models, Hidden Markov Models, maximum likelihood, regression, wavelets and genetic algorithms. We compute the Receiver Operating Characteristic (ROC) curves using simulated data to quantify sensitivity and specificity for various levels of signal-to-noise ratio and different sizes of abnormalities. We also characterize their performance on chromosomal regions of interest in a real dataset obtained from patients with Glioblastoma Multiforme. While comparisons of this type are difficult due to possibly sub-optimal choice of parameters in the methods, they nevertheless reveal general characteristics that are helpful to the biological investigator.", "which Platform ?", "array CGH", 306.0, 315.0], ["Abstract Background Genomic deletions and duplications are important in the pathogenesis of diseases, such as cancer and mental retardation, and have recently been shown to occur frequently in unaffected individuals as polymorphisms. Affymetrix GeneChip whole genome sampling analysis (WGSA) combined with 100 K single nucleotide polymorphism (SNP) genotyping arrays is one of several microarray-based approaches that are now being used to detect such structural genomic changes. The popularity of this technology and its associated open source data format have resulted in the development of an increasing number of software packages for the analysis of copy number changes using these SNP arrays. Results We evaluated four publicly available software packages for high throughput copy number analysis using synthetic and empirical 100 K SNP array data sets, the latter obtained from 107 mental retardation (MR) patients and their unaffected parents and siblings. We evaluated the software with regards to overall suitability for high-throughput 100 K SNP array data analysis, as well as effectiveness of normalization, scaling with various reference sets and feature extraction, as well as true and false positive rates of genomic copy number variant (CNV) detection. Conclusion We observed considerable variation among the numbers and types of candidate CNVs detected by different analysis approaches, and found that multiple programs were needed to find all real aberrations in our test set. The frequency of false positive deletions was substantial, but could be greatly reduced by using the SNP genotype information to confirm loss of heterozygosity.", "which Platform ?", "SNP array", 839.0, 848.0], ["Background The detection of copy number variants (CNVs) and the results of CNV-disease association studies rely on how CNVs are defined, and because array-based technologies can only infer CNVs, CNV-calling algorithms can produce vastly different findings. Several authors have noted the large-scale variability between CNV-detection methods, as well as the substantial false positive and false negative rates associated with those methods. In this study, we use variations of four common algorithms for CNV detection (PennCNV, QuantiSNP, HMMSeg, and cnvPartition) and two definitions of overlap (any overlap and an overlap of at least 40% of the smaller CNV) to illustrate the effects of varying algorithms and definitions of overlap on CNV discovery. Methodology and Principal Findings We used a 56 K Illumina genotyping array enriched for CNV regions to generate hybridization intensities and allele frequencies for 48 Caucasian schizophrenia cases and 48 age-, ethnicity-, and gender-matched control subjects. 
No algorithm found a difference in CNV burden between the two groups. However, the total number of CNVs called ranged from 102 to 3,765 across algorithms. The mean CNV size ranged from 46 kb to 787 kb, and the average number of CNVs per subject ranged from 1 to 39. The number of novel CNVs not previously reported in normal subjects ranged from 0 to 212. Conclusions and Significance Motivated by the availability of multiple publicly available genome-wide SNP arrays, investigators are conducting numerous analyses to identify putative additional CNVs in complex genetic disorders. However, the number of CNVs identified in array-based studies, and whether these CNVs are novel or valid, will depend on the algorithm(s) used. Thus, given the variety of methods used, there will be many false positives and false negatives. Both guidelines for the identification of CNVs inferred from high-density arrays and the establishment of a gold standard for validation of CNVs are needed.", "which Platform ?", "SNP array", NaN, NaN], ["Several computer programs are available for detecting copy number variants (CNVs) using genome-wide SNP arrays. We evaluated the performance of four CNV detection software suites\u2014Birdsuite, Partek, HelixTree, and PennCNV-Affy\u2014in the identification of both rare and common CNVs. Each program's performance was assessed in two ways. The first was its recovery rate, i.e., its ability to call 893 CNVs previously identified in eight HapMap samples by paired-end sequencing of whole-genome fosmid clones, and 51,440 CNVs identified by array Comparative Genome Hybridization (aCGH) followed by validation procedures, in 90 HapMap CEU samples. The second evaluation was program performance calling rare and common CNVs in the Bipolar Genome Study (BiGS) data set (1001 bipolar cases and 1033 controls, all of European ancestry) as measured by the Affymetrix SNP 6.0 array. Accuracy in calling rare CNVs was assessed by positive predictive value, based on the proportion of rare CNVs validated by quantitative real-time PCR (qPCR), while accuracy in calling common CNVs was assessed by false positive/false negative rates based on qPCR validation results from a subset of common CNVs. Birdsuite recovered the highest percentages of known HapMap CNVs containing >20 markers in two reference CNV datasets. The recovery rate increased with decreased CNV frequency. In the tested rare CNV data, Birdsuite and Partek had higher positive predictive values than the other software suites. In a test of three common CNVs in the BiGS dataset, Birdsuite's call was 98.8% consistent with qPCR quantification in one CNV region, but the other two regions showed an unacceptable degree of accuracy. We found relatively poor consistency between the two \u201cgold standards,\u201d the sequence data of Kidd et al., and aCGH data of Conrad et al. Algorithms for calling CNVs especially common ones need substantial improvement, and a \u201cgold standard\u201d for detection of CNVs remains to be established.", "which Platform ?", "SNP array", NaN, NaN], ["Chlorella virus PBCV-1 encodes two putative chitinase genes, a181/182r and a260r, and one chitosanase gene, a292l. The three genes were cloned and expressed in Escherichia coli. The recombinant A181/182R protein has endochitinase activity, recombinant A260R has both endochitinase and exochitinase activities, and recombinant A292L has chitosanase activity. 
Transcription of a181/182r, a260r, and a292l genes begins at 30, 60, and 60 min p.i., respectively; transcription of all three genes continues until the cells lyse. A181/182R, A260R, and A292L proteins are first detected by Western blots at 60, 90, and 120 min p.i., respectively. Therefore, a181/182r is an early gene and a260r and a292l are late genes. All three genes are widespread in chlorella viruses. Phylogenetic analyses indicate that the ancestral condition of the a181/182r gene arose from the most recent common ancestor of a gene found in tobacco, whereas the genealogical position of the a260r gene could not be unambiguously resolved.", "which Sources ?", "Chlorella virus", 0.0, 15.0], ["Customization is one of the known challenges in traditional ERP systems. With the advent of Cloud ERP systems, a question of determining the state of such systems regarding customization and configuration ability arises. As there are only a few literature sources partially covering this topic, a more comprehensive and systematic literature review is needed. Thus, this paper presents a literature review performed in order to give an overview of reported research on \"Cloud ERP Customization\" topic performed in the last 5 years. In two search iterations, a total of 32 relevant papers are identified and analyzed. The results show that several dominant research trends are identified along with 12 challenges and issues. Additionally, based on the results, the possible future researches are proposed.", "which field ?", "Cloud ERP", 92.0, 101.0], ["In this paper we aim to answer the following two questions: 1) has the Common Monetary Area in Southern Africa (henceforth CMA) ever been an optimal currency area (OCA)? 2) What are the costs and benefits of the CMA for its participating countries? In order to answer these questions, we carry out a two-step econometric exercise based on the theory of generalised purchasing power parity (G-PPP). The econometric evidence shows that the CMA (but also Botswana as a de facto member) form an OCA given the existence of common long-run trends in their bilateral real exchange rates. Second, we also test that in the case of the CMA and Botswana the smoothness of the operation of the common currency area \u2014 measured through the degree of relative price correlation \u2014 depends on a variety of factors. These factors signal both the advantages and disadvantages of joining a monetary union. On the one hand, the more open and more similarly diversified the economies are, the higher the benefits they ... Ce Document de travail s'efforce de repondre a deux questions : 1) la zone monetaire commune de l'Afrique australe (Common Monetary Area - CMA) a-t-elle vraiment reussi a devenir une zone monetaire optimale ? 2) quels sont les couts et les avantages de la CMA pour les pays participants ? Nous avons effectue un exercice econometrique en deux etapes base sur la theorie des parites de pouvoir d'achat generalisees. D'apres les resultats econometriques, la CMA (avec le Botswana comme membre de facto) est effectivement une zone monetaire optimale etant donne les evolutions communes sur le long terme de leurs taux de change bilateraux. Nous avons egalement mis en evidence que le bon fonctionnement de l'union monetaire \u2014 mesure par le degre de correlation des prix relatifs \u2014 depend de plusieurs facteurs. Ces derniers revelent a la fois les couts et les avantages de l'appartenance a une union monetaire. 
D'un cote, plus les economies sont ouvertes et diversifiees de facon comparable, plus ...", "which Justification/ recommendation ?", "Common long-run trends", 518.0, 540.0], [" The East African Community\u2019s (EAC) economic integration has gained momentum recently, with the EAC countries aiming to adopt a single currency in 2015. This article evaluates empirically the readiness of the EAC countries for monetary union. First, structural similarity in terms of similarity of production and exports of the EAC countries is measured. Second, the symmetry of shocks is examined with structural vector auto-regression analysis (SVAR). The lack of macroeconomic convergence gives evidence against a hurried transition to a monetary union. Given the divergent macroeconomic outcomes, structural reforms, including closing infrastructure gaps and harmonizing macroeconomic policies that would raise synchronization of business cycles, need to be in place before moving to monetary union. ", "which Justification/ recommendation ?", "Lack of macroeconomic convergence", 466.0, 499.0], ["There is a proposal for a fast-tracked approach to the African Community (EAC) monetary union. This paper uses cointegration techniques to determine whether the member countries would form a successful monetary union based on the long-run behavior of nominal and real exchange rates and monetary base. The three variables are each analyzed for co-movements among the five countries. The empirical results indicate only partial convergence for the variables considered, suggesting there could be substantial costs for the member countries from a fast-tracked process. This implies the EAC countries need significant adjustments to align their monetary policies and to allow a period of monetary policy coordination to foster convergence that will improve the chances of a sustainable currency union.", "which Justification/ recommendation ?", "Only partial convergence", 414.0, 438.0], ["This paper compares different nominal anchors to promote internal and external competitiveness in the case of a fixed exchange rate regime for the future single regional currency of the Economic Community of the West African States (ECOWAS). We use counterfactual analyses and estimate a model of dependent economy for small commodity exporting countries. We consider four foreign anchor currencies: the US dollar, the euro, the yen and the yuan. Our simulations show little support for a dominant peg in the ECOWAS area if they pursue several goals: maximizing the export revenues, minimizing their variability, stabilizing them and minimizing the real exchange rate misalignments from the fundamental value.", "which Justification/ recommendation ?", "Simulations show little support for a dominant peg", 451.0, 501.0], ["The increasing number of campus-related emergency incidents, in combination with the requirements imposed by the Clery Act, have prompted college campuses to develop emergency notification systems to inform community members of extreme events that may affect them. Merely deploying emergency notification systems on college campuses, however, does not guarantee that these systems will be effective; student compliance plays a very important role in establishing such effectiveness. Immediate compliance with alerts, as opposed to delayed compliance or noncompliance, is a key factor in improving student safety on campuses. 
This paper investigates the critical antecedents that motivate students to comply immediately with messages from campus emergency notification systems. Drawing on Etzioni's compliance theory, a model is developed. Using a scenario-based survey method, the model is tested in five types of events--snowstorm, active shooter, building fire, health-related, and robbery--and with more than 800 college students from the Northern region of the United States. The results from this study suggest that subjective norm and information quality trust are, in general, the most important factors that promote immediate compliance. This research contributes to the literature on compliance, emergency notification systems, and emergency response policies.", "which paper: Theory / Concept / Model ?", "Compliance theory", 798.0, 815.0], ["This research questions how social media use affords new forms of organizing and collective engagement. The concept of connective action has been introduced to characterize such new forms of collective engagement in which actors coproduce and circulate content based upon an issue of mutual interest. Yet, how the use of social media actually affords connective action still needed to be investigated. Mixed methods analyses of microblogging use during the Gulf of Mexico oil spill bring insights onto this question and reveal in particular how multiple actors enacted emerging and interdependent roles with their distinct patterns of feature use. The findings allow us to elaborate upon the concept of connective affordances as collective level affordances actualized by actors in team interdependent roles. Connective affordances extend research on affordances as a relational concept by considering not only the relationships between technology and users but also the interdependence type among users and the effects of this interdependence onto what users can do with the technology. This study contributes to research on social media use by paying close attention to how distinct patterns of feature use enact emerging roles. Adding to IS scholarship on the collective use of technology, it considers how the patterns of feature use for emerging groups of actors are intricately and mutually related to each other.", "which paper: Theory / Concept / Model ?", "Connective affordances", 703.0, 725.0], ["Research problem: A construct mediated in digital environments, information communication technology (ICT) literacy is operationally defined as the ability of individuals to participate effectively in transactions that invoke illocutionary action. This study investigates ICT literacy through a simulation designed to capture that construct, to deploy the construct model to measure participant improvement of ICT literacy under experimental conditions, and to estimate the potential for expanded model development. Research questions: How might a multidisciplinary literature review inform a model for ICT literacy? How might a simulation be designed that enables sufficient construct representation for modeling? How might prepost testing simulation be designed to investigate the potential for improved command of ICT literacy? How might a regression model account for variance within the model by the addition of affective elements to a cognitive model? Literature review: Existing conceptualizations of the ICT communication environment demonstrate the need for a new communication model that is sensitive to short text messaging demands in crisis communication settings. 
As a result of this perfect storm of limits requiring the communicator to rely on critical thinking, awareness of context, and information integration, we designed a cognitive-affective model informed by genre theory to capture the ICT construct: A sociocognitive ability that, at its most effective, facilitates illocutionary action (to confirm and warn, to advise and ask, and to thank and request) for specific audiences of emergency responders. Methodology: A prepost design with practitioner subjects (N=50) allowed investigation of performance improvement on tasks demanding illocutionary action after training on tasks of high, moderate, and low demand. Through a model based on the independent variables character count, word count, and decreased time on task (X) as related to the dependent variable of an overall episode score (Y), we were able to examine the internal construct strength with and without the addition of affective independent variables. Results and discussion: Of the three prepost models used to study the impact of training, participants demonstrated statistically significant improvement on episodes of high demand on all cognitive model variables. The addition of affective variables, such as attitudes toward text messaging, allowed increased model strength on tasks of high and moderate complexity. These findings suggest that an empirical basis for the construct of ICT literacy is possible and that, under simulation conditions, practitioner improvement may be demonstrated. Practically, it appears that it is possible to train emergency responders to improve their command of ICT literacy so that those most in need of humanitarian response during a crisis may receive it. Future research focusing on communication in digital environments will undoubtedly extend these findings in terms of construct validation and deployment in crisis settings.", "which paper: Theory / Concept / Model ?", "ICT literacy", 272.0, 284.0], ["Research problem: This study investigates the factors influencing students' intentions to use emergency notification services to receive news about campus emergencies through short-message systems (SMS) and social network sites (SNS). Research questions: (1) What are the critical factors that influence students' intention to use SMS to receive emergency notifications? (2) What are the critical factors that influence students' intention to use SNS to receive emergency notifications? Literature review: By adapting Media Richness theory and prior research on emergency notifications, we propose that perceived media richness, perceived trust in information, perceived risk, perceived benefit, and perceived social influence impact the intention to use SMS and SNS to receive emergency notifications. Methodology: We conducted a quantitative, survey-based study that tested our model in five different scenarios, using logistic regression to test the research hypotheses with 574 students of a large research university in the northeastern US. Results and discussion: Results suggest that students' intention to use SNS is impacted by media richness, perceived benefit, and social influence, while students' intention to use SMS is influenced by trust and perceived benefit. Implications to emergency managers suggest how to more effectively manage and market the service through both channels. 
The results also suggest using SNS as an additional means of providing emergency notifications at academic institutions.", "which paper: Theory / Concept / Model ?", "media richness", 518.0, 532.0], ["Recent extreme events show that Twitter, a micro-blogging service, is emerging as the dominant social reporting tool to spread information on social crises. It is elevating the online public community to the status of first responders who can collectively cope with social crises. However, at the same time, many warnings have been raised about the reliability of community intelligence obtained through social reporting by the amateur online community. Using rumor theory, this paper studies citizen-driven information processing through Twitter services using data from three social crises: the Mumbai terrorist attacks in 2008, the Toyota recall in 2010, and the Seattle cafe shooting incident in 2012. We approach social crises as communal efforts for community intelligence gathering and collective information processing to cope with and adapt to uncertain external situations. We explore two issues: (1) collective social reporting as an information processing mechanism to address crisis problems and gather community intelligence, and (2) the degeneration of social reporting into collective rumor mills. Our analysis reveals that information with no clear source provided was the most important, personal involvement next in importance, and anxiety the least yet still important rumor causing factor on Twitter under social crisis situations.", "which paper: Theory / Concept / Model ?", "Rumor theory", 460.0, 472.0], ["Products can be transported in containers from one port to another. At a container terminal these containers are transshipped from one mode of transportation to another. Cranes remove containers from a ship and put them at a certain time (i.e., release time) into a buffer area with limited capacity. A vehicle lifts a container from the buffer area before the buffer area is full (i.e., in due time) and transports the container from the buffer area to the storage area. At the storage area the container is placed in another buffer area. The advantage of using these buffer areas is the resultant decoupling of the unloading and transportation processes. We study the case in which each container has a time window [release time, due time] in which the transportation should start.The objective is to minimize the vehicle fleet size such that the transportation of each container starts within its time window. No literature has been found studying this relevant problem. We have developed an integer linear programming model to solve the problem of determining vehicle requirements under time-window constraints. We use simulation to validate the estimates of the vehicle fleet size by the analytical model. We test the ability of the model under various conditions. From these numerical experiments we conclude that the results of the analytical model are close to the results of the simulation model. Furthermore, we conclude that the analytical model performs well in the context of a container terminal.", "which Modality ?", "Container terminal", 73.0, 91.0], ["The body of research relating to the implementation of enterprise resource planning (ERP) systems in small- and medium-sized enterprises (SMEs) has been increasing rapidly over the last few years. It is important, particularly for SMEs, to recognize the elements for a successful ERP implementation in their environments. 
This research aims to examine the critical elements that constitute a successful ERP implementation in SMEs. The objective is to identify the constituents within the critical elements. A comprehensive literature review and interviews with eight SMEs in the UK were carried out. The results serve as the basic input into the formation of the critical elements and their constituents. Three main critical elements are formed: critical success factors, critical people and critical uncertainties. Within each critical element, the related constituents are identified. Using the process theory approach, the constituents within each critical element are linked to their specific phase(s) of ERP implementation. Ten constituents for critical success factors were found, nine constituents for critical people and 21 constituents for critical uncertainties. The research suggests that a successful ERP implementation often requires the identification and management of the critical elements and their constituents at each phase of implementation. The results are constructed as a reference framework that aims to provide researchers and practitioners with indicators and guidelines to improve the success rate of ERP implementation in SMEs.", "which Foci ?", "Critical elements", 356.0, 373.0], ["It is consensual that Enterprise Resource Planning (ERP) after a successful implementation has significant effects on the productivity of firm as well small and medium-sized enterprises (SMEs) recognized as fundamentally different environments compared to large enterprises. There are few reviews in the literature about the post-adoption phase and even fewer at SME level. Furthermore, to the best of our knowledge there is none with focus in ERP value stage. This review will fill this gap. It provides an updated bibliography of ERP publications published in the IS journal and conferences during the period of 2000 and 2012. A total of 33 articles from 21 journals and 12 conferences are reviewed. The main focus of this paper is to shed the light on the areas that lack sufficient research within the ERP in SME domain, in particular in ERP business value stage, suggest future research avenues, as well as, present the current research findings that could support researchers and practitioners when embarking on ERP projects.", "which Foci ?", "ERP value", 444.0, 453.0], ["This paper studies the operational logic in an inter-bay automated material handling system (AMHS) in semiconductor wafer fabrication. This system consists of stockers located in a two-floor layout. Automated moving devices transfer lots between stockers within the same floor (intra-floor lot transfer) or between different floors (inter-floor lot transfer). Intra-floor lot-transferring transports use a two-rail one-directional system, whereas inter-floor lot-transferring transports use lifters. The decision problem consists of selecting rails and lifters that minimize average lot-delivery time. Several operation rules to deliver lots from source stocker to destination stocker are proposed and their performance is evaluated by discrete event simulation.", "which Objective function(s) ?", "Delivery time", 587.0, 600.0], ["This paper presents an efficient policy for AGV and part routing in semiconductor and LCD production bays using information on the future state of systems where AGVs play a central role in material handling. 
These highly informative systems maintain a great deal of information on current and near-future status, such as the arrival and operation completion times of parts, thereby enabling a new approach for production shop control. Efficient control of AGVs is vital in semiconductor and LCD plants because AGV systems often limit the total production capacity of these very expensive plants. With the proposed procedure, the cell controller records the future events chronologically and uses this information to determine the destination and source of parts between the parts' operation machine and temporary storage. It is shown by simulation that the new control policy reduces AGV requirements and flow time of parts.", "which Objective function(s) ?", "Flow time", 905.0, 914.0], ["Here, the performance evaluation of a double-loop interbay automated material handling system (AMHS) in wafer fab was analysed by considering the effects of the dispatching rules. Discrete event simulation models based on SIMPLE++ were developed to implement the heuristic dispatching rules in such an AMHS system with a zone control scheme to avoid vehicle collision. The layout of an interbay system is a combination configuration in which the hallway contains double loops and the vehicles have double capacity. The results show that the dispatching rule has a significant impact on average transport time, waiting time, throughput and vehicle utilization. The combination of the shortest distance with nearest vehicle and the first encounter first served rule outperformed the other rules. Furthermore, the relationship between vehicle number and material flow rate by experimenting with a simulation model was investigated. The optimum combination of these two factors can be obtained by response surface methodology.", "which Objective function(s) ?", "Transport time", 594.0, 608.0], ["Here, the performance evaluation of a double-loop interbay automated material handling system (AMHS) in wafer fab was analysed by considering the effects of the dispatching rules. Discrete event simulation models based on SIMPLE++ were developed to implement the heuristic dispatching rules in such an AMHS system with a zone control scheme to avoid vehicle collision. The layout of an interbay system is a combination configuration in which the hallway contains double loops and the vehicles have double capacity. The results show that the dispatching rule has a significant impact on average transport time, waiting time, throughput and vehicle utilization. The combination of the shortest distance with nearest vehicle and the first encounter first served rule outperformed the other rules. Furthermore, the relationship between vehicle number and material flow rate by experimenting with a simulation model was investigated. The optimum combination of these two factors can be obtained by response surface methodology.", "which Objective function(s) ?", "waiting time", 610.0, 622.0], ["Automation is an essential component in today's semiconductor manufacturing. As factories migrate to 300 mm technology, automated handling becomes increasingly important for variety of ergonomic, safety, and yield considerations. Traditional semiconductor AMHS systems, such as the Overhead Monorail Vehicles (OMV) or Overhead Hoist, can be overly expensive. Cost projections for a 300 mm inter/intrabay AMHS installation are in the range of $50 M-$100 M. As an alternative, a lower cost alternative AMHS, called Continuous Flow Transport has been proposed. 
The CFT system is similar to what has historically been identified as a conveyor based movement system. The CFT system provides cost savings at reduced flexibility and longer delivery time. This study compares the CFT to Overhead Monorail transport, determining a cumulative delivery time distribution. As expected, the CFT system requires a longer average delivery time interval than OMV, but may provide total savings through reduced transport variability.", "which Performance measures ?", "Delivery time distribution", 833.0, 859.0], ["The aim of this study was to evaluate the prevalence, clinical manifestations, and etiology of dental erosion among children. A total of 153 healthy, 11-year-old children were sampled from a downtown public school in Istanbul, Turkey, comprised of middle-class children. Data were obtained via: (1) clinical examination; (2) questionnaire; and (3) standardized data records. A new dental erosion index for children designed by O'Sullivan (2000) was used. Twenty-eight percent (N=43) of the children exhibited dental erosion. Of children who consumed orange juice, 32% showed erosion, while 40% who consumed carbonated beverages showed erosion. Of children who consumed fruit yogurt, 36% showed erosion. Of children who swam professionally in swimming pools, 60% showed erosion. Multiple regression analysis revealed no relationship between dental erosion and related erosive sources (P > .05).", "which Aim of the study ?", "Dental erosion", 95.0, 109.0], ["OBJECTIVES The aims of this study were to (1) investigate prevalence and severity of erosive tooth wear among kindergarten children and (2) determine the relationship between dental erosion and dietary intake, oral hygiene behaviour, systemic diseases and salivary concentration of calcium and phosphate. MATERIALS AND METHODS A sample of 463 children (2-7 years old) from 21 kindergartens were examined under standardized conditions by a calibrated examiner. Dental erosion of primary and permanent teeth was recorded using a scoring system based on O'Sullivan Index [Eur J Paediatr Dent 2 (2000) 69]. Data on the rate and frequency of dietary intake, systemic diseases and oral hygiene behaviour were obtained from a questionnaire completed by the parents. Unstimulated saliva samples of 355 children were analysed for calcium and phosphate concentration by colorimetric assessment. Descriptive statistics and multiple regression analysis were applied to the data. RESULTS Prevalence of erosion amounted to 32% and increased with increasing age of the children. Dentine erosion affecting at least one tooth could be observed in 13.2% of the children. The most affected teeth were the primary maxillary first and second incisors (15.5-25%) followed by the canines (10.5-12%) and molars (1-5%). Erosions on primary mandibular teeth were as follows: incisors: 1.5-3%, canines: 5.5-6% and molars: 3.5-5%. Erosions of the primary first and second molars were mostly seen on the occlusal surfaces (75.9%) involving enamel or enamel-dentine but not the pulp. In primary first and second incisors and canines, erosive lesions were often located incisally (51.2%) or affected multiple surfaces (28.9%). None of the permanent incisors (n = 93) or first molars (n=139) showed signs of erosion. Dietary factors, oral hygiene behaviour, systemic diseases and salivary calcium and phosphate concentration were not associated with the presence of erosion. CONCLUSIONS Erosive tooth wear of primary teeth was frequently seen in primary dentition. 
As several children showed progressive erosion into dentine or exhibited severe erosion affecting many teeth, preventive and therapeutic measures are recommended.", "which Aim of the study ?", "Erosive tooth wear", 85.0, 103.0], ["OBJECTIVE The aim of this study was to assess the prevalence and severity of dental erosion among 12-year-old schoolchildren in Joaçaba, southern Brazil, and to compare prevalence between boys and girls, and between public and private school students. METHODS A cross-sectional study was carried out involving all of the municipality's 499, 12-year-old schoolchildren. The dental erosion index proposed by O'Sullivan was used for the four maxillary incisors. Data analysis included descriptive statistics, location, distribution, and extension of affected area and severity of dental erosion. RESULTS The prevalence of dental erosion was 13.0% (95% confidence interval = 9.0-17.0). There was no statistically significant difference in prevalence between boys and girls, but prevalence was higher in private schools (21.1%) than in public schools (9.7%) (P < 0.001). Labial surfaces were less often affected than palatal surfaces. Enamel loss was the most prevalent type of dental erosion (4.86 of 100 incisors). Sixty-three per cent of affected teeth showed more than a half of their surface affected. CONCLUSION The prevalence of dental erosion in 12-year-old schoolchildren living in a small city in southern Brazil appears to be lower than that seen in most of epidemiological studies carried out in different parts of the world. Further longitudinal studies should be conducted in Brazil in order to measure the incidence of dental erosion and its impact on children's quality of life.", "which Aim of the study ?", "Prevalence of dental erosion", 605.0, 633.0], ["Summary Objective. To describe the prevalence of dental erosion and associated factors in preschool children in Guangxi and Hubei provinces of China. Methods. Dental examinations were carried out on 1949 children aged 3–5 years. Measurement of erosion was confined to primary maxillary incisors. The erosion index used was based upon the 1993 UK National Survey of Children's Dental Health. The children's general information as well as social background and dietary habits were collected based on a structured questionnaire. Results. A total of 112 children (5.7%) showed erosion on their maxillary incisors. Ninety-five (4.9%) was scored as being confined to enamel and 17 (0.9%) as erosion extending into dentine or pulp. There was a positive association between erosion and social class in terms of parental education. A significantly higher prevalence of erosion was observed in children whose parents had post-secondary education than those whose parents had secondary or lower level of education. There was also a correlation between the presence of dental erosion and intake of fruit drink from a feeding bottle or consumption of fruit drinks at bedtime. Conclusion. Erosion is not a serious problem for dental health in Chinese preschool children. The prevalence of erosion is associated with social and dietary factors in this sample of children. © 2004 Elsevier Ltd. 
All rights reserved.", "which Aim of the study ?", "Prevalence of dental erosion", 176.0, 204.0], ["Textural characterization and heavy mineral studies of beach sediments in Ibeno and Eastern Obolo Local Government Areas of Akwa Ibom State were carried out in the present study. The main aim was to infer their provenance, transport history and environment of deposition. Sediment samples were collected at the water–sediment contact along the shoreline at an interval of about 3 m. Ten samples were collected from study location 1 (Ibeno Beach) and twelve samples were collected from study location 2 (Eastern Obolo Beach). A total of twenty-two samples were collected from the field and brought to the laboratory for textural and compositional analyses. The results showed that the value of graphic mean size ranged from 1.70φ to 2.83φ, sorting values ranged from 0.39φ to 0.60φ, skewness values ranged from -0.02 to 0.10 while kurtosis values ranged from 1.02 to 2.46, indicating medium to fine grained and well sorted sediments. This suggested that the sediments have been transported far from their source. Longshore current and onshore–offshore movements of sediment are primarily responsible in sorting of the heavy minerals. The histogram charts for the different samples and standard deviation versus skewness indicated a beach environment of deposition. This implies that the sediments are dominated by one class of grain size; a phenomenon characteristic of beach environments. The heavy mineral assemblages identified in this research work were rutile, zircon, tourmaline, hornblende, apatite, diopside, glauconite, pumpellyite, cassiterite, epidote, garnet, augite, enstatite, andalusite and opaque minerals. The zircon-tourmaline-rutile (ZTR) index ranged from 47.30% to 87.00% with most of the samples showing a ZTR index greater than 50%. These indicated that the sediments were mineralogically sub-mature and have been transported far from their source. The heavy minerals identified are indicative of being products of reworked sediments of both metamorphic (high rank) and igneous (both mafic and sialic) origin probably derived from the basement rocks of the Oban Massif as well as reworked sediments of the Benue Trough. Therefore, findings from the present study indicated that erosion, accretion, and stability of beaches are controlled by strong hydrodynamic and hydraulic processes.", "which Aim of the study ?", "The main aim was to infer their provenance, transport history and environment of deposition.", NaN, NaN], ["High-resolution remote sensing imagery provides an important data source for ship detection and classification. However, due to shadow effect, noise and low-contrast between objects and background existing in this kind of data, traditional segmentation approaches have much difficulty in separating ship targets from complex sea-surface background. In this paper, we propose a novel coarse-to-fine segmentation strategy for identifying ships in 1-meter resolution imagery. This approach starts from a coarse segmentation by selecting local intensity variance as detection feature to segment ship objects from background. After roughly obtaining the regions containing ship candidates, a shape-driven level-set segmentation is used to extract precise boundary of each object which is good for the following stages such as detection and classification. 
Experimental results show that the proposed approach outperforms other algorithms in terms of recognition accuracy.", "which Main purpose ?", "Detection and classification", 82.0, 110.0], ["Sea-land segmentation is a key step for target detection. Due to the complex texture and uneven gray value of the land in optical remote sensing image, traditional sea-land segmentation algorithms often recognize land as sea incorrectly. A new segmentation scheme is presented in this paper to solve this problem. This scheme determines the threshold according to the adaptively established statistical model of the sea area, and removes the incorrectly classified land according to the difference of the variance in the statistical model between land and sea. Experimental results show our segmentation scheme has small computation complexity, and it has better performance and higher robustness compared to the traditional algorithms.", "which Main purpose ?", "Sea-land segmentation", 0.0, 21.0], ["Abstract Recreational boating activities represent one of the highest risk populations in the marine environment. Moreover, there is a trend of increased risk exposure by recreational boaters such as those who undertake adventure tourism, sport fishing/hunting, and personal watercraft (PWC) activities. When trying to plan search and rescue activities, there are data deficiencies regarding inventories, activity type, and spatial location of small, recreational boats. This paper examines the current body of research in the application of remote sensing technology in marine search and rescue. The research suggests commercially available very high spatial resolution satellite (VHSR) imagery can be used to detect small recreational vessels using a sub\u2010pixel detection methodology. The sub\u2010pixel detection method utilizes local image statistics based on spatio\u2010spectral considerations. This methodology would have to be adapted for use with VHSR imagery as it was originally used in hyperspectral imaging. Further, the authors examine previous research on \u2018target characterization\u2019 which uses a combination of spectral based classification, and context based feature extraction to generate information such as: length, heading, position, and material of construction for target vessels. This technique is based on pixel\u2010based processing used in generic digital image processing and computer vision. Finally, a preliminary recreational vessel surveillance system \u2010 called Marine Recreational Vessel Reconnaissance (MRV Recon) is tested on some modified VHSR imagery.", "which Main purpose ?", "Search and rescue", 324.0, 341.0], ["This paper examines the performance of a spatiospectral template on Ikonos imagery to automatically detect small recreational boats. The spatiospectral template is utilized and then enhanced through the use of a weighted Euclidean distance metric adapted from the Mahalanobis distance metric. The aim is to assist the Canadian Coast Guard in gathering data on recreational boating for the modeling of search and rescue incidence risk. To test the detection accuracy of the enhanced spatiospectral template, a dataset was created by gathering position and attribute data for 53 recreational vessel targets purposely moored for this research within Cadboro Bay, British Columbia, Canada. The Cadboro Bay study site containing the targets was imaged using Ikonos. Overall detection accuracy was 77%. 
Targets were broken down into 2 categories: (1) Category A, less than 6 m in length, and (2) Category B, more than 6 m long. The detection rate for Category B targets was 100%, while the detection rate for Category A targets was 61%. It is important to note that some Category A targets were intentionally selected for their small size to test the detection limits of the enhanced spatiospectral template. The smallest target detected was 2.2 m long and 1.1 m wide. The analysis also revealed that the ability to detect targets between 2.2 and 6 m long was diminished if the target was dark in color.", "which Main purpose ?", "Search and rescue", 401.0, 418.0], ["Maritime situation awareness is supported by a combination of satellite, airborne, and terrestrial sensor systems. This paper presents several solutions to process that sensor data into information that supports operator decisions. Examples are vessel detection algorithms based on multispectral image techniques in combination with background subtraction, feature extraction techniques that estimate the vessel length to support vessel classification, and data fusion techniques to combine image based information, detections from coastal radar, and reports from cooperative systems such as (satellite) AIS. Other processing solutions include persistent tracking techniques that go beyond kinematic tracking, and include environmental information from navigation charts, and if available, ELINT reports. And finally rule-based and statistical solutions for the behavioural analysis of anomalous vessels. With that, trends and future work will be presented.", "which Main purpose ?", "Vessel detection", 245.0, 261.0], ["Abstract. Vessel monitoring and surveillance is important for maritime safety and security, environment protection and border control. Ship monitoring systems based on Synthetic-aperture Radar (SAR) satellite images are operational. On SAR images the ships made of metal with sharp edges appear as bright dots and edges, therefore they can be well distinguished from the water. Since the radar is independent from the sun light and can acquire images also by cloudy weather and rain, it provides a reliable service. Vessel detection from spaceborne optical images (VDSOI) can extend the SAR based systems by providing more frequent revisit times and overcoming some drawbacks of the SAR images (e.g. lower spatial resolution, difficult human interpretation). Optical satellite images (OSI) can have a higher spatial resolution thus enabling the detection of smaller vessels and enhancing the vessel type classification. The human interpretation of an optical image is also easier than that of a SAR image. In this paper I present a rapid automatic vessel detection method which uses pattern recognition methods, originally developed in the computer vision field. In the first step I train a binary classifier from image samples of vessels and background. The classifier uses simple features which can be calculated very fast. For the detection the classifier is slid along the image in various directions and scales. The detector has a cascade structure which rejects most of the background in the early stages which leads to faster execution. The detections are grouped together to avoid multiple detections. Finally the position, size (i.e. length and width) and heading of the vessels are extracted from the contours of the vessel. 
The presented method is parallelized, thus it runs fast (in minutes for a 16000 × 16000 pixel image) on a multicore computer, enabling near real-time applications, e.g. one hour from image acquisition to end user.", "which Main purpose ?", "Vessel detection", 516.0, 532.0], ["Satellite imagery provides a valuable source of information for maritime surveillance. The vast majority of the research regarding satellite imagery for maritime surveillance focuses on vessel detection and image enhancement, whilst vessel classification remains a largely unexplored research topic. This paper presents a vessel classifier for spaceborne electro-optical imagery based on a feature representative across all satellite imagery, texture. Local Binary Patterns were selected to represent vessels for their high distinctiveness and low computational complexity. Considering vessels' characteristic super-structure, the extracted vessel signatures are sub-divided into three sections: bow, middle and stern. A hierarchical decision-level classification is proposed, analysing first each vessel section individually and then combining the results in the second stage. The proposed approach is evaluated with the electro-optical satellite image dataset presented in [1]. Experimental results reveal an accuracy of 85.64% across four vessel categories.", "which Main purpose ?", "Vessel detection", 186.0, 202.0], ["Background Sepsis is one of the leading causes of mortality in hospitalized patients. Despite this fact, a reliable means of predicting sepsis onset remains elusive. Early and accurate sepsis onset predictions could allow more aggressive and targeted therapy while maintaining antimicrobial stewardship. Existing detection methods suffer from low performance and often require time-consuming laboratory test results. Objective To study and validate a sepsis prediction method, InSight, for the new Sepsis-3 definitions in retrospective data, make predictions using a minimal set of variables from within the electronic health record data, compare the performance of this approach with existing scoring systems, and investigate the effects of data sparsity on InSight performance. Methods We apply InSight, a machine learning classification system that uses multivariable combinations of easily obtained patient data (vitals, peripheral capillary oxygen saturation, Glasgow Coma Score, and age), to predict sepsis using the retrospective Multiparameter Intelligent Monitoring in Intensive Care (MIMIC)-III dataset, restricted to intensive care unit (ICU) patients aged 15 years or more. Following the Sepsis-3 definitions of the sepsis syndrome, we compare the classification performance of InSight versus quick sequential organ failure assessment (qSOFA), modified early warning score (MEWS), systemic inflammatory response syndrome (SIRS), simplified acute physiology score (SAPS) II, and sequential organ failure assessment (SOFA) to determine whether or not patients will become septic at a fixed period of time before onset. We also test the robustness of the InSight system to random deletion of individual input observations. Results In a test dataset with 11.3% sepsis prevalence, InSight produced superior classification performance compared with the alternative scores as measured by area under the receiver operating characteristic curves (AUROC) and area under precision-recall curves (APR). 
In detection of sepsis onset, InSight attains AUROC = 0.880 (SD 0.006) at onset time and APR = 0.595 (SD 0.016), both of which are superior to the performance attained by SIRS (AUROC: 0.609; APR: 0.160), qSOFA (AUROC: 0.772; APR: 0.277), and MEWS (AUROC: 0.803; APR: 0.327) computed concurrently, as well as SAPS II (AUROC: 0.700; APR: 0.225) and SOFA (AUROC: 0.725; APR: 0.284) computed at admission (P<.001 for all comparisons). Similar results are observed for 1-4 hours preceding sepsis onset. In experiments where approximately 60% of input data are deleted at random, InSight attains an AUROC of 0.781 (SD 0.013) and APR of 0.401 (SD 0.015) at sepsis onset time. Even with 60% of data missing, InSight remains superior to the corresponding SIRS scores (AUROC and APR, P<.001), qSOFA scores (P=.0095; P<.001) and superior to SOFA and SAPS II computed at admission (AUROC and APR, P<.001), where all of these comparison scores (except InSight) are computed without data deletion. Conclusions Despite using little more than vitals, InSight is an effective tool for predicting sepsis onset and performs well even with randomly missing data.", "which Setting ?", "Early warning", 1365.0, 1378.0], ["Early pathogen exposure detection allows better patient care and faster implementation of public health measures (patient isolation, contact tracing). Existing exposure detection most frequently relies on overt clinical symptoms, namely fever, during the infectious prodromal period. We have developed a robust machine learning based method to better detect asymptomatic states during the incubation period using subtle, sub-clinical physiological markers. Starting with high-resolution physiological waveform data from non-human primate studies of viral (Ebola, Marburg, Lassa, and Nipah viruses) and bacterial (Y. pestis) exposure, we processed the data to reduce short-term variability and normalize diurnal variations, then provided these to a supervised random forest classification algorithm and post-classifier declaration logic step to reduce false alarms. In most subjects detection is achieved well before the onset of fever; subject cross-validation across exposure studies (varying viruses, exposure routes, animal species, and target dose) lead to 51h mean early detection (at 0.93 area under the receiver-operating characteristic curve [AUCROC]). Evaluating the algorithm against entirely independent datasets for Lassa, Nipah, and Y. pestis exposures un-used in algorithm training and development yields a mean 51h early warning time (at AUCROC=0.95). We discuss which physiological indicators are most informative for early detection and options for extending this capability to limited datasets such as those available from wearable, non-invasive, ECG-based sensors.", "which Setting ?", "Early warning", 1330.0, 1343.0], ["OBJECTIVE The researchers evaluated the effectiveness of paroxetine and Problem-Solving Treatment for Primary Care (PST-PC) for patients with minor depression or dysthymia. STUDY DESIGN This was an 11-week randomized placebo-controlled trial conducted in primary care practices in 2 communities (Lebanon, NH, and Seattle, Wash). Paroxetine (n=80) or placebo (n=81) therapy was started at 10 mg per day and increased to a maximum 40 mg per day, or PST-PC was provided (n=80). There were 6 scheduled visits for all treatment conditions. POPULATION A total of 241 primary care patients with minor depression (n=114) or dysthymia (n=127) were included. Of these, 191 patients (79.3%) completed all treatment visits. 
OUTCOMES Depressive symptoms were measured using the 20-item Hopkins Depression Scale (HSCL-D-20). Remission was scored on the Hamilton Depression Rating Scale (HDRS) as less than or equal to 6 at 11 weeks. Functional status was measured with the physical health component (PHC) and mental health component (MHC) of the 36-item Medical Outcomes Study Short Form. RESULTS All treatment conditions showed a significant decline in depressive symptoms over the 11-week period. There were no significant differences between the interventions or by diagnosis. For dysthymia the remission rate for paroxetine (80%) and PST-PC (57%) was significantly higher than for placebo (44%, P=.008). The remission rate was high for minor depression (64%) and similar for each treatment group. For the MHC there were significant outcome differences related to baseline level for paroxetine compared with placebo. For the PHC there were no significant differences between the treatment groups. CONCLUSIONS For dysthymia, paroxetine and PST-PC improved remission compared with placebo plus nonspecific clinical management. Results varied for the other outcomes measured. For minor depression, the 3 interventions were equally effective; general clinical management (watchful waiting) is an appropriate treatment option.", "which Setting ?", "Primary care", 102.0, 114.0], ["This study provides an empirical evaluation of Cognitive Behaviour Therapy (CBT) alone vs Treatment as usual (TAU) alone (generally pharmacotherapy) for late life depression in a UK primary care setting.", "which Setting ?", "Primary care", 182.0, 194.0], ["Abstract Objective: To determine whether, in the treatment of major depression in primary care, a brief psychological treatment (problem solving) was (a) as effective as antidepressant drugs and more effective than placebo; (b) feasible in practice; and (c) acceptable to patients. Design: Randomised controlled trial of problem solving treatment, amitriptyline plus standard clinical management, and drug placebo plus standard clinical management. Each treatment was delivered in six sessions over 12 weeks. Setting: Primary care in Oxfordshire. Subjects: 91 patients in primary care who had major depression. Main outcome measures: Observer and self reported measures of severity of depression, self reported measure of social outcome, and observer measure of psychological symptoms at six and 12 weeks; self reported measure of patient satisfaction at 12 weeks. Numbers of patients recovered at six and 12 weeks. Results: At six and 12 weeks the difference in score on the Hamilton rating scale for depression between problem solving and placebo treatments was significant (5.3 (95% confidence interval 1.6 to 9.0) and 4.7 (0.4 to 9.0) respectively), but the difference between problem solving and amitriptyline was not significant (1.8 (\u22121.8 to 5.5) and 0.9 (\u22123.3 to 5.2) respectively). At 12 weeks 60% (18/30) of patients given problem solving treatment had recovered on the Hamilton scale compared with 52% (16/31) given amitriptyline and 27% (8/30) given placebo. Patients were satisfied with problem solving treatment; all patients who completed treatment (28/30) rated the treatment as helpful or very helpful. The six sessions of problem solving treatment totalled a mean therapy time of 3 1/2 hours. Conclusions: As a treatment for major depression in primary care, problem solving treatment is effective, feasible, and acceptable to patients. 
Key messages: Patient compliance with antidepressant treatment is often poor, so there is a need for a psychological treatment. This study found that problem solving is an effective psychological treatment for major depression in primary care—as effective as amitriptyline and more effective than placebo. Problem solving is a feasible treatment in primary care, being effective when given over six sessions by a general practitioner. Problem solving treatment is acceptable to patients.", "which Setting ?", "Primary care", 82.0, 94.0], ["Abstract Objectives: To determine whether problem solving treatment combined with antidepressant medication is more effective than either treatment alone in the management of major depression in primary care. To assess the effectiveness of problem solving treatment when given by practice nurses compared with general practitioners when both have been trained in the technique. Design: Randomised controlled trial with four treatment groups. Setting: Primary care in Oxfordshire. Participants: Patients aged 18-65 years with major depression on the research diagnostic criteria—a score of 13 or more on the 17 item Hamilton rating scale for depression and a minimum duration of illness of four weeks. Interventions: Problem solving treatment by research general practitioner or research practice nurse or antidepressant medication or a combination of problem solving treatment and antidepressant medication. Main outcome measures: Hamilton rating scale for depression, Beck depression inventory, clinical interview schedule (revised), and the modified social adjustment schedule assessed at 6, 12, and 52 weeks. Results: Patients in all groups showed a clear improvement over 12 weeks. The combination of problem solving treatment and antidepressant medication was no more effective than either treatment alone. There was no difference in outcome irrespective of who delivered the problem solving treatment. Conclusions: Problem solving treatment is an effective treatment for depressive disorders in primary care. The treatment can be delivered by suitably trained practice nurses or general practitioners. The combination of this treatment with antidepressant medication is no more effective than either treatment alone. Key messages: Problem solving treatment is an effective treatment for depressive disorders in primary care. Problem solving treatment can be delivered by suitably trained practice nurses as effectively as by general practitioners. The combination of problem solving treatment and antidepressant medication is no more effective than either treatment alone. Problem solving treatment is most likely to benefit patients who have a depressive disorder of moderate severity and who wish to participate in an active psychological treatment", "which Setting ?", "Primary care", 195.0, 207.0], ["Background The consensus statement on the treatment of depression (Paykel & Priest, 1992) advocates the use of cognitive therapy techniques as an adjunct to medication. Method This paper describes a randomised controlled trial of brief cognitive therapy (BCT) plus ‘treatment as usual’ versus treatment as usual in the management of 48 patients with major depressive disorder presenting in primary care. Results At the end of the acute phase, significantly more subjects (P < 0.05) met recovery criteria in the intervention group (n=15) compared with the control group (n=8). 
When initial neuroticism scores were controlled for, reductions in Beck Depression Inventory and Hamilton Rating Scale for Depression scores favoured the BCT group throughout the 12 months of follow-up. Conclusions BCT may be beneficial, but given the time constraints, therapists need to be more rather than less skilled in cognitive therapy. This, plus methodological limitations, leads us to advise caution before applying this approach more widely in primary care.", "which Setting ?", "Primary care", 390.0, 402.0], ["CONTEXT Both antidepressant medication and structured psychotherapy have been proven efficacious, but less than one third of people with depressive disorders receive effective levels of either treatment. OBJECTIVE To compare usual primary care for depression with 2 intervention programs: telephone care management and telephone care management plus telephone psychotherapy. DESIGN Three-group randomized controlled trial with allocation concealment and blinded outcome assessment conducted between November 2000 and May 2002. SETTING AND PARTICIPANTS A total of 600 patients beginning antidepressant treatment for depression were systematically sampled from 7 group-model primary care clinics; patients already receiving psychotherapy were excluded. INTERVENTIONS Usual primary care; usual care plus a telephone care management program including at least 3 outreach calls, feedback to the treating physician, and care coordination; usual care plus care management integrated with a structured 8-session cognitive-behavioral psychotherapy program delivered by telephone. MAIN OUTCOME MEASURES Blinded telephone interviews at 6 weeks, 3 months, and 6 months assessed depression severity (Hopkins Symptom Checklist Depression Scale and the Patient Health Questionnaire), patient-rated improvement, and satisfaction with treatment. Computerized administrative data examined use of antidepressant medication and outpatient visits. RESULTS Treatment participation rates were 97% for telephone care management and 93% for telephone care management plus psychotherapy. Compared with usual care, the telephone psychotherapy intervention led to lower mean Hopkins Symptom Checklist Depression Scale depression scores (P =.02), a higher proportion of patients reporting that depression was \"much improved\" (80% vs 55%, P<.001), and a higher proportion of patients \"very satisfied\" with depression treatment (59% vs 29%, P<.001). The telephone care management program had smaller effects on patient-rated improvement (66% vs 55%, P =.04) and satisfaction (47% vs 29%, P =.001); effects on mean depression scores were not statistically significant. CONCLUSIONS For primary care patients beginning antidepressant treatment, a telephone program integrating care management and structured cognitive-behavioral psychotherapy can significantly improve satisfaction and clinical outcomes. These findings suggest a new public health model of psychotherapy for depression including active outreach and vigorous efforts to improve access to and motivation for treatment.", "which Setting ?", "Primary care", 231.0, 243.0], ["CONTEXT Insufficient evidence exists for recommendation of specific effective treatments for older primary care patients with minor depression or dysthymia. OBJECTIVE To compare the effectiveness of pharmacotherapy and psychotherapy in primary care settings among older persons with minor depression or dysthymia. DESIGN Randomized, placebo-controlled trial (November 1995-August 1998). 
SETTING Four geographically and clinically diverse primary care practices. PARTICIPANTS A total of 415 primary care patients (mean age, 71 years) with minor depression (n = 204) or dysthymia (n = 211) and a Hamilton Depression Rating Scale (HDRS) score of at least 10 were randomized; 311 (74.9%) completed all study visits. INTERVENTIONS Patients were randomly assigned to receive paroxetine (n = 137) or placebo (n = 140), starting at 10 mg/d and titrated to a maximum of 40 mg/d, or problem-solving treatment-primary care (PST-PC; n = 138). For the paroxetine and placebo groups, the 6 visits over 11 weeks included general support and symptom and adverse effects monitoring; for the PST-PC group, visits were for psychotherapy. MAIN OUTCOME MEASURES Depressive symptoms, by the 20-item Hopkins Symptom Checklist Depression Scale (HSCL-D-20) and the HDRS; and functional status, by the Medical Outcomes Study Short-Form 36 (SF-36) physical and mental components. RESULTS Paroxetine patients showed greater (difference in mean [SE] 11-week change in HSCL-D-20 scores, 0.21 [0.07]; P =.004) symptom resolution than placebo patients. Patients treated with PST-PC did not show more improvement than placebo (difference in mean [SE] change in HSCL-D-20 scores, 0.11 [0.13]; P =.13), but their symptoms improved more rapidly than those of placebo patients during the latter treatment weeks (P =.01). For dysthymia, paroxetine improved mental health functioning vs placebo among patients whose baseline functioning was high (difference in mean [SE] change in SF-36 mental component scores, 5.8 [2.02]; P =.01) or intermediate (difference in mean [SE] change in SF-36 mental component scores, 4.4 [1.74]; P =.03). Mental health functioning in dysthymia patients was not significantly improved by PST-PC compared with placebo (P>=.12 for low-, intermediate-, and high-functioning groups). For minor depression, both paroxetine and PST-PC improved mental health functioning in patients in the lowest tertile of baseline functioning (difference vs placebo in mean [SE] change in SF-36 mental component scores, 4.7 [2.03] for those taking paroxetine; 4.7 [1.96] for the PST-PC treatment; P =.02 vs placebo). CONCLUSIONS Paroxetine showed moderate benefit for depressive symptoms and mental health function in elderly patients with dysthymia and more severely impaired elderly patients with minor depression. The benefits of PST-PC were smaller, had slower onset, and were more subject to site differences than those of paroxetine.", "which Setting ?", "Primary care", 99.0, 111.0], ["Procurement is an important component in the field of operating resource management and e-procurement is the golden key to optimizing the supply chains system. Global firms are optimistic on the level of savings that can be achieved through full implementation of e-procurement strategies. E-procurement is an Internet-based business process for obtaining materials and services and managing their inflow into the organization. In this paper, the subjects of supply chains and e-procurement and its benefits to organizations have been studied. Also, e-procurement in construction and its drivers and barriers have been discussed and a framework of supplier selection in an e-procurement environment has been demonstrated. This paper also has addressed critical success factors in adopting e-procurement in supply chains. 
Keywords—E-Procurement, Supply Chain, Benefits, Construction, Drivers, Barriers, Supplier Selection, CSFs.", "which SCM field ?", "e-Procurement in supply chain", NaN, NaN], ["Extranet is an enabler/system that enriches the information service quality in e-supply chain. This paper uses factor analysis to determine four extranet success factors: system quality, information quality, service quality, and work performance quality. A critical analysis of areas that require improvement is also conducted.", "which SCM field ?", "e-Supply chain", 79.0, 93.0], ["Presents a framework for distribution companies to establish and improve their logistics systems continuously. Recently, much attention has been given to automation in services, the use of new information technology and the integration of the supply chain. Discusses these areas, which have great potential to increase logistics productivity and provide customers with high level service. The exploration of each area is enriched with Taiwanese logistics management practices and experiences. Includes a case study of one prominent food processor and retailer in Taiwan in order to demonstrate the pragmatic operations of the integrated logistics management system. Also, a survey of 45 Taiwanese retailers was conducted to investigate the extent of logistics management in Taiwan. Concludes by suggesting how distribution companies can overcome noticeable logistics management barriers, build store automation systems, and follow the key steps to logistics success.", "which SCM field ?", "Integrated logistics management system", 626.0, 664.0], ["Describes the elements of a successful logistics partnership. Looks at what can cause failure and questions whether the benefits of a logistics partnership are worth the effort required. Concludes that strategic alliances are increasingly becoming a matter of survival, not merely a matter of competitive advantage. Refers to the example of the long-term relationship between Kimberly-Clark Corporation and Interamerican group's Tricor Warehousing, Inc.", "which SCM field ?", "Logistics partnership", 39.0, 60.0], ["SUMMARY One important factor in the design of an organization's supply chain is the number of suppliers used for a given product or service. Supply base reduction is one option useful in managing the supply base. The current paper reports the results of case studies in 10 organizations that recently implemented supply base reduction activities. Specifically, the paper identifies the key success factors in supply base reduction efforts and prescribes processes to capture the benefits of supply base reduction.", "which SCM field ?", "Supply base reduction", 141.0, 162.0], ["The purpose of this paper is to shed the light on the critical success factors that lead to high supply chain performance outcomes in a Malaysian manufacturing company. The critical success factors consist of relationship with customer and supplier, information communication and technology (ICT), material flow management, corporate culture and performance measurement. Questionnaire was the main instrument for the study and it was distributed to 84 staff from departments of purchasing, planning, logistics and operation. Data analysis was conducted by employing descriptive analysis (mean and standard deviation), reliability analysis, Pearson correlation analysis and multiple regression. 
The findings show that relationships exist between relationship with customer and supplier, ICT, material flow management, performance measurement and supply chain management (SCM) performance, but not for corporate culture. Forming a good customer and supplier relationship is the main predictor of SCM performance, followed by performance measurement, material flow management and ICT. It is recommended that future studies determine additional success factors that are pertinent to firms' current SCM strategies and directions, competitive advantages and missions. Logic suggests that further studies include more geographical data coverage, other types of business and research instruments. Key words: Supply chain management, critical success factor.", "which SCM field ?", "Supply chain performance", 97.0, 121.0], ["This paper describes a strategic framework for the development of supply chain quality management (SCQM). The framework integrates both vision- and gap-driven change approaches to evaluate not only the implementation gaps but also their potential countermeasures. Based on literature review, drivers of supply chain quality are identified. They are: supply chain competence, critical success factors (CSF), strategic components, and SCQ practices/activities/programmes. Based on SCQM literature, five survey items are also presented in this study for each driver. The Analytic Hierarchy Process (AHP) is used to develop priority indices for these survey items. Knowledge of these critical dimensions and possible implementation discrepancies could help multinational enterprises and their supply chain partners lay out effective and efficient SCQM plans.", "which SCM field ?", "Supply chain quality management", 66.0, 97.0], ["Purpose – The aim of this paper is threefold: first, to examine the content of supply chain quality management (SCQM); second, to identify the structure of SCQM; and third, to show ways for finding improvement opportunities and organizing individual institution's resources/actions into collective performance outcomes. Design/methodology/approach – To meet the goals of this work, the paper uses abductive reasoning and two qualitative methods: content analysis and formal concept analysis (FCA). Primary data were collected from both original design manufacturers (ODMs) and original equipment manufacturers (OEMs) in Taiwan. Findings – According to the qualitative empirical study, modern enterprises need to pay immediate attention to the following two pathways: a compliance approach and a voluntary approach. For the former, three strategic content variables are identified: training programs, ISO, and supplier quality audit programs. As for initiating a voluntary effort, modern lead firms need to instill "motivat...", "which SCM field ?", "Supply chain quality management", 79.0, 110.0], ["Recent studies have reported that organizations are often unable to identify the key success factors of Sustainable Supply Chain Management (SSCM) and to understand their implications for management practice. For this reason, the implementation of SSCM often does not result in noticeable benefits. So far, research has failed to offer any explanations for this discrepancy. In view of this fact, our study aims at identifying and analyzing the factors that underlie successful SSCM. Success factors are identified by means of a systematic literature review and are then integrated into an explanatory model. 
Consequently, the proposed success factor model is tested on the basis of an empirical study focusing on recycling networks of the electrics and electronics industry. We found that signaling, information provision and the adoption of standards are crucial preconditions for strategy commitment, mutual learning, the establishment of ecological cycles and hence for the overall success of SSCM. Copyright \u00a9 2011 John Wiley & Sons, Ltd and ERP Environment.", "which SCM field ?", "Sustainable supply chain", 104.0, 128.0], ["Identifying the risks associated with the implementation of clinical information systems (CIS) in health care organizations can be a major challenge for managers, clinicians, and IT specialists, as there are numerous ways in which they can be described and categorized. Risks vary in nature, severity, and consequence, so it is important that those considered to be high-level risks be identified, understood, and managed. This study addresses this issue by first reviewing the extant literature on IT/CIS project risks, and second conducting a Delphi survey among 21 experts highly involved in CIS projects in Canada. In addition to providing a comprehensive list of risk factors and their relative importance, this study is helpful in unifying the literature on IT implementation and health informatics. Our risk factor-oriented research actually confirmed many of the factors found to be important in both these streams.", "which Application area studied ?", "Health care", 98.0, 109.0], ["Methylation of DNA cytosines affects whether transposons are silenced and genes are expressed, and is a major epigenetic mechanism whereby plants respond to environmental change. Analyses of methylation\u2010sensitive amplification polymorphism (MS\u2010AFLP or MSAP) have been often used to assess methyl\u2010cytosine changes in response to stress treatments and, more recently, in ecological studies of wild plant populations. MSAP technique does not require a sequenced reference genome and provides many anonymous loci randomly distributed over the genome for which the methylation status can be ascertained. Scoring of MSAP data, however, is not straightforward, and efforts are still required to standardize this step to make use of the potential to distinguish between methylation at different nucleotide contexts. Furthermore, it is not known how accurately MSAP infers genome\u2010wide cytosine methylation levels in plants. Here, we analyse the relationship between MSAP results and the percentage of global cytosine methylation in genomic DNA obtained by HPLC analysis. A screening of literature revealed that methylation of cytosines at cleavage sites assayed by MSAP was greater than genome\u2010wide estimates obtained by HPLC, and percentages of methylation at different nucleotide contexts varied within and across species. Concurrent HPLC and MSAP analyses of DNA from 200 individuals of the perennial herb Helleborus foetidus confirmed that methyl\u2010cytosine was more frequent in CCGG contexts than in the genome as a whole. In this species, global methylation was unrelated to methylation at the inner CG site. 
We suggest that global HPLC and context\u2010specific MSAP methylation estimates provide complementary information whose combination can improve our current understanding of methylation\u2010based epigenetic processes in nonmodel plants.", "which Sp ?", "Helleborus foetidus", 1400.0, 1419.0], ["Quantitative and qualitative levels of DNA methylation were evaluated in leaves and callus of Pennisetum purpureum Schum. The level of methylation did not change during leaf differentiation or aging and similar levels of methylation were found in embryogenic and nonembryogenic callus.", "which Sp ?", "Pennisetum purpureum", 94.0, 114.0], ["The aim of the present study was to investigate the chemical composition, antioxidant, angiotensin I-converting enzyme (ACE) inhibitory, antibacterial and antifungal activities of the essential oil of Artemisia herba alba Asso (Aha), a traditional medicinal plant widely growing in Tunisia. The essential oil from the air dried leaves and flowers of Aha was extracted by hydrodistillation and analyzed by GC and GC/MS. More than fifty compounds were detected, out of which 48 were identified. The main chemical class of the oil was represented by oxygenated monoterpenes (50.53%). These were represented by 21 derivatives, among which the cis-chrysantenyl acetate (10.60%), the sabinyl acetate (9.13%) and the \u03b1-thujone (8.73%) were the principal compounds. Oxygenated sesquiterpenes, particularly arbusculones were identified in the essential oil at relatively high rates. The Aha essential oil was found to have an interesting antioxidant activity as evaluated by the 2,2-diphenyl-1-picrylhydrazyl and the \u03b2-carotene bleaching methods. The Aha essential oil also exhibited an inhibitory activity towards the ACE. The antimicrobial activities of Aha essential oil were evaluated against six bacterial strains and three fungal strains by the agar diffusion method and by determining the inhibition zone. The inhibition zones were in the range of 8-51 mm. The essential oil exhibited a strong growth inhibitory activity on all the studied fungi. Our findings demonstrated that Aha growing wild in the South-West of Tunisia seems to be a new chemotype and its essential oil might be a natural potential source for food preservation and for further investigation by developing new bioactive substances.", "which Country / Plant part CS ?", "Leaves and flowers", 327.0, 345.0], ["A successful surgical case of malignant undifferentiated (embryonal) sarcoma of the liver (USL), a rare tumor normally found in children, is reported. The patient was a 21-year-old woman, complaining of epigastric pain and abdominal fullness. Chemical analyses of the blood and urine and complete blood counts revealed no significant changes, and serum alpha-fetoprotein levels were within normal limits. A physical examination demonstrated a firm, slightly tender lesion at the liver's edge palpable 10 cm below the xiphoid process. CT scan and ultrasonography showed an oval mass, confined to the left lobe of the liver, which proved to be hypovascular on angiography. At laparotomy, a large, 18 x 15 x 13 cm tumor, found in the left hepatic lobe was resected. The lesion was dark red in color, encapsulated, smooth surfaced and of an elastic firm consistency. No metastasis was apparent. Histological examination resulted in a diagnosis of undifferentiated sarcoma of the liver. 
Three courses of adjuvant chemotherapy, including adriamycin, cis-diaminodichloroplatinum, vincristine and dacarbazine were administered following the surgery with no serious adverse effects. The patient remains well with no evidence of recurrence 12 months after her operation.", "which Surgery ?", "Left lobe", NaN, NaN], ["Acidic soft drinks, including sports drinks, have been implicated in dental erosion with limited supporting data in scarce erosion studies worldwide. The purpose of this study was to determine the prevalence of dental erosion in a sample of athletes at a large Midwestern state university in the USA, and to evaluate whether regular consumption of sports drinks was associated with dental erosion. A cross-sectional, observational study was done using a convenience sample of 304 athletes, selected irrespective of sports drinks usage. The Lussi Index was used in a blinded clinical examination to grade the frequency and severity of erosion of all tooth surfaces excluding third molars and incisal surfaces of anterior teeth. A self-administered questionnaire was used to gather details on sports drink usage, lifestyle, health problems, dietary and oral health habits. Intraoral color slides were taken of all teeth with erosion. Sports drinks usage was found in 91.8% of athletes and the total prevalence of erosion was 36.5%. Nonparametric tests and stepwise regression analysis using history variables showed no association between dental erosion and the use of sports drinks, quantity and frequency of consumption, years of usage and nonsport usage of sports drinks. The most significant predictor of erosion was found to be not belonging to the African race (p < 0.0001). The results of this study reveal no relationship between consumption of sports drinks and dental erosion.", "which efered index ?", "Lussi Index", 540.0, 551.0], ["Most paratransit agencies use a mix of different types of vehicles ranging from small sedans to large converted vans as a cost-effective way to meet the diverse travel needs and seating requirements of their clients. Currently, decisions on what types of vehicles and how many vehicles to use are mostly made by service managers on an ad hoc basis without much systematic analysis and optimization. The objective of this research is to address the underlying fleet size and mix problem and to develop a practical procedure that can be used to determine the optimal fleet mix for a given application. A real-life example illustrates the relationship between the performance of a paratransit service system and the size of its service vehicles. A heuristic procedure identifies the optimal fleet mix that maximizes the operating efficiency of a service system. A set of recommendations is offered for future research; the most important is the need to incorporate a life-cycle cost framework into the paratransit service planning process.", "which Industry ?", "Paratransit service", 678.0, 697.0], ["This paper describes a successful implementation of a decision support system that is used by the fleet management division at North American Van Lines to plan fleet configuration. At the heart of the system is a large linear programming (LP) model that helps management decide what type of tractors to sell to owner/operators or to trade in each week. 
The system is used to answer a wide variety of \u201cWhat if\u201d questions, many of which have significant financial impact.", "which Industry ?", "Van lines", 142.0, 151.0], ["This paper addresses empty container reposition planning by plainly considering safety stock management and geographical regions. This plan could avoid a drawback in practice, whereby masses of empty containers are collected at a port and then repositioned all at once. Empty containers occupy slots on the vessel, and the liner shipping company loses the chance to yield freight revenue. The problem is drawn up as a two-stage problem. The upper problem is identified to estimate the empty container stock at each port and the lower problem models the empty container reposition planning with shipping service network as the Transportation Problem by Liner Problem. We looked at case studies of the Taiwan Liner Shipping Company to show the application of the proposed model. The results show the model provides optimization techniques to minimize cost of empty container reposition and to provide evidence to adjust the strategy of restructuring the shipping service network.", "which Inventory policy ?", "Safety stock", 80.0, 92.0], ["Publishing data about individuals without revealing sensitive information about them is an important problem. In recent years, a new definition of privacy called \\kappa-anonymity has gained popularity. In a \\kappa-anonymized dataset, each record is indistinguishable from at least k-1 other records with respect to certain \"identifying\" attributes. In this paper we show with two simple attacks that a \\kappa-anonymized dataset has some subtle, but severe privacy problems. First, we show that an attacker can discover the values of sensitive attributes when there is little diversity in those sensitive attributes. Second, attackers often have background knowledge, and we show that \\kappa-anonymity does not guarantee privacy against attackers using background knowledge. We give a detailed analysis of these two attacks and we propose a novel and powerful privacy definition called \\ell-diversity. In addition to building a formal foundation for \\ell-diversity, we show in an experimental evaluation that \\ell-diversity is practical and can be implemented efficiently.", "which Background information ?", "Sensitive attributes", 533.0, 553.0], ["Edges in social network graphs may represent sensitive relationships. In this paper, we consider the problem of edges anonymity in graphs. We propose a probabilistic notion of edge anonymity, called graph confidence, which is general enough to capture the privacy breach made by an adversary who can pinpoint target persons in a graph partition based on any given set of topological features of vertexes. We consider a special type of edge anonymity problem which uses vertex degree to partition a graph. We analyze edge disclosure in real-world social networks and show that although some graphs can preserve vertex anonymity, they may still not preserve edge anonymity. We present three heuristic algorithms that protect edge anonymity using edge swap or edge deletion. Our experimental results, based on three real-world social networks and several utility measures, show that these algorithms can effectively preserve edge anonymity yet obtain anonymous graphs of acceptable utility.", "which Background information ?", "Vertex degree", 469.0, 482.0], ["TerraPy, Magic Wet and Chitosan are soil and plant revitalizers based on natural renewable raw materials. 
These products stimulate microbial activity in the soil and promote plant growth. Their importance to practical agriculture can be seen in their ability to improve soil health, especially where intensive cultivation has shifted the biological balance in the soil ecosystem to high numbers of plant pathogens. The objective of this study was to investigate the plant beneficial capacities of TerraPy, Magic Wet and Chitosan and to evaluate their effect on bacterial and nematode communities in soils. Tomato seedlings (Lycopersicum esculentum cv. Hellfrucht Fr\u00fchstamm) were planted into pots containing a sand/soil mixture (1:1, v/v) and were treated with TerraPy, Magic Wet and Chitosan at 200 kg/ha. At 0, 1, 3, 7 and 14 days after inoculation the following soil parameters were evaluated: soil pH, bacterial and fungal population density (cfu/g soil), total number of saprophytic and plant-parasitic nematodes. At the final sampling date tomato shoot and root fresh weight as well as Meloidogyne infestation was recorded. Plant growth was lowest and nematode infestation was highest in the control. Soil bacterial population densities increased within 24 hours after treatment between 4-fold (Magic Wet) and 19-fold (Chitosan). Bacterial richness and diversity were not significantly altered. Dominant bacterial genera were Acinetobacter (41%) and Pseudomonas (22%) for TerraPy, Pseudomonas (30%) and Acinetobacter (13%) for Magic Wet, Acinetobacter (8.9%) and Pseudomonas (81%) for Chitosan and Bacillus (42%) and Pseudomonas (32%) for the control. Increased microbial activity also was associated with higher numbers of saprophytic nematodes. The results demonstrated the positive effects of natural products in stimulating soil microbial activity and thereby the antagonistic potential in soils leading to a reduction in nematode infestation and improved plant growth.", "which Applications ?", "Soil and plant revitalizer", NaN, NaN], ["Software testing is a crucial measure used to assure the quality of software. Path testing can detect bugs earlier because it performs higher error coverage. This paper presents a model of generating test data based on an improved ant colony optimization and path coverage criteria. Experiments show that the algorithm has a better performance than the other two algorithms and improves the efficiency of test data generation notably.", "which Fields ?", "Test Data Generation", 403.0, 423.0], ["This paper incorporates the multilevel selection (MLS) theory into the genetic algorithm. Based on this theory, a Multilevel Cooperative Genetic Algorithm (MLGA) is presented. In MLGA, a species is subdivided in a set of populations, each population is subdivided in groups, and evolution occurs at two levels so called individual and group level. A fast population dynamics occurs at individual level. At this level, selection occurs between individuals of the same group. The popular genetic operators such as mutation and crossover are applied within groups. A slow population dynamics occurs at group level. At this level, selection occurs between groups of a population. A group level operator so called colonization is applied between groups in which a group is selected as extinct, and replaced by offspring of a colonist group. We used a set of well known numerical functions in order to evaluate performance of the proposed algorithm. 
The results showed that the MLGA is robust, and provides an efficient way for numerical function optimization.", "which Recombination Within island ?", "Within group", NaN, NaN], ["In this paper, we present DistSim, a Scalable Distributed in-Memory Semantic Similarity Estimation framework for Knowledge Graphs. DistSim provides a multitude of state-of-the-art similarity estimators. We have developed the Similarity Estimation Pipeline by combining generic software modules. For large scale RDF data, DistSim proposes MinHash with locality sensitivity hashing to achieve better scalability over all-pair similarity estimations. The modules of DistSim can be set up using a multitude of (hyper)-parameters allowing to adjust the tradeoff between information taken into account, and processing time. Furthermore, the output of the Similarity Estimation Pipeline is native RDF. DistSim is integrated into the SANSA stack, documented in scala-docs, and covered by unit tests. Additionally, the variables and provided methods follow the Apache Spark MLlib name-space conventions. The performance of DistSim was tested over a distributed cluster, for the dimensions of data set size and processing power versus processing time, which shows the scalability of DistSim w.r.t. increasing data set sizes and processing power. DistSim is already in use for solving several RDF data analytics related use cases. Additionally, DistSim is available and integrated into the open-source GitHub project SANSA.", "which Scalability Framework ?", "Apache Spark", 852.0, 864.0], ["In this paper, we present DistSim, a Scalable Distributed in-Memory Semantic Similarity Estimation framework for Knowledge Graphs. DistSim provides a multitude of state-of-the-art similarity estimators. We have developed the Similarity Estimation Pipeline by combining generic software modules. For large scale RDF data, DistSim proposes MinHash with locality sensitivity hashing to achieve better scalability over all-pair similarity estimations. The modules of DistSim can be set up using a multitude of (hyper)-parameters allowing to adjust the tradeoff between information taken into account, and processing time. Furthermore, the output of the Similarity Estimation Pipeline is native RDF. DistSim is integrated into the SANSA stack, documented in scala-docs, and covered by unit tests. Additionally, the variables and provided methods follow the Apache Spark MLlib name-space conventions. The performance of DistSim was tested over a distributed cluster, for the dimensions of data set size and processing power versus processing time, which shows the scalability of DistSim w.r.t. increasing data set sizes and processing power. DistSim is already in use for solving several RDF data analytics related use cases. Additionally, DistSim is available and integrated into the open-source GitHub project SANSA.", "which Data types ?", "Knowledge Graph", NaN, NaN], ["In order to deal with heterogeneous knowledge in the medical field, this paper proposes a method which can learn a heavy-weighted medical ontology based on medical glossaries and Web resources. 
Firstly, terms and taxonomic relations are extracted based on disease and drug glossaries and a light-weighted ontology is constructed. Secondly, non-taxonomic relations are automatically learned from Web resources with linguistic patterns, and the two ontologies (disease and drug) are expanded from light-weighted level towards heavy-weighted level. At last, the disease ontology and drug ontology are integrated to create a practical medical ontology. Experiments show that this method can integrate and expand medical terms with taxonomic and different kinds of non-taxonomic relations. Our experiments show that the performance is promising.", "which Relationship learning ?", "Taxonomic relations", 213.0, 232.0], ["Abstract Previous research on the relationship between students\u2019 home and school Information and Communication Technology (ICT) resources and academic performance has shown ambiguous results. The availability of ICT resources at school has been found to be unrelated or negatively related to academic performance, whereas the availability of ICT resources at home has been found to be both positively and negatively related to academic performance. In addition, the frequency of use of ICT is related to students\u2019 academic achievement. This relationship has been found to be negative for ICT use at school, however, for ICT use at home the literature on the relationship with academic performance is again ambiguous. In addition to ICT availability and ICT use, students\u2019 attitudes towards ICT have also been found to play a role in student performance. In the present study, we examine how availability of ICT resources, students\u2019 use of those resources (at school, outside school for schoolwork, outside school for leisure), and students\u2019 attitudes toward ICT (interest in ICT, perceived ICT competence, perceived ICT autonomy) relate to individual differences in performance on a digital assessment of reading in one comprehensive model using the Dutch PISA 2015 sample of 5183 15-year-olds (49.2% male). Student gender and students\u2019 economic, social, and cultural status accounted for a substantial part of the variation in digitally assessed reading performance. Controlling for these relationships, results indicated that students with moderate access to ICT resources, moderate use of ICT at school or outside school for schoolwork, and moderate interest in ICT had the highest digitally assessed reading performance. In contrast, students who reported moderate competence in ICT had the lowest digitally assessed reading performance. In addition, frequent use of ICT outside school for leisure was negatively related to digitally assessed reading performance, whereas perceived autonomy was positively related. Taken together, the findings suggest that excessive access to ICT resources, excessive use of ICT, and excessive interest in ICT is associated with lower digitally assessed reading performance.", "which includes ?", "ICT autonomy", 1116.0, 1128.0], ["Abstract Previous research on the relationship between students\u2019 home and school Information and Communication Technology (ICT) resources and academic performance has shown ambiguous results. The availability of ICT resources at school has been found to be unrelated or negatively related to academic performance, whereas the availability of ICT resources at home has been found to be both positively and negatively related to academic performance. 
In addition, the frequency of use of ICT is related to students\u2019 academic achievement. This relationship has been found to be negative for ICT use at school, however, for ICT use at home the literature on the relationship with academic performance is again ambiguous. In addition to ICT availability and ICT use, students\u2019 attitudes towards ICT have also been found to play a role in student performance. In the present study, we examine how availability of ICT resources, students\u2019 use of those resources (at school, outside school for schoolwork, outside school for leisure), and students\u2019 attitudes toward ICT (interest in ICT, perceived ICT competence, perceived ICT autonomy) relate to individual differences in performance on a digital assessment of reading in one comprehensive model using the Dutch PISA 2015 sample of 5183 15-year-olds (49.2% male). Student gender and students\u2019 economic, social, and cultural status accounted for a substantial part of the variation in digitally assessed reading performance. Controlling for these relationships, results indicated that students with moderate access to ICT resources, moderate use of ICT at school or outside school for schoolwork, and moderate interest in ICT had the highest digitally assessed reading performance. In contrast, students who reported moderate competence in ICT had the lowest digitally assessed reading performance. In addition, frequent use of ICT outside school for leisure was negatively related to digitally assessed reading performance, whereas perceived autonomy was positively related. Taken together, the findings suggest that excessive access to ICT resources, excessive use of ICT, and excessive interest in ICT is associated with lower digitally assessed reading performance.", "which includes ?", "ICT competence", 1090.0, 1104.0], ["Abstract As a relevant cognitive-motivational aspect of ICT literacy, a new construct ICT Engagement is theoretically based on self-determination theory and involves the factors ICT interest, Perceived ICT competence, Perceived autonomy related to ICT use, and ICT as a topic in social interaction. In this manuscript, we present different sources of validity supporting the construct interpretation of test scores in the ICT Engagement scale, which was used in PISA 2015. Specifically, we investigated the internal structure by dimensional analyses and investigated the relation of ICT Engagement aspects to other variables. The analyses are based on public data from PISA 2015 main study from Switzerland ( n = 5860) and Germany ( n = 6504). First, we could confirm the four-dimensional structure of ICT Engagement for the Swiss sample using a structural equation modelling approach. Second, ICT Engagement scales explained the highest amount of variance in ICT Use for Entertainment, followed by Practical use. Third, we found significantly lower values for girls in all ICT Engagement scales except ICT Interest. Fourth, we found a small negative correlation between the scores in the subscale \u201cICT as a topic in social interaction\u201d and reading performance in PISA 2015. We could replicate most results for the German sample. 
Overall, the obtained results support the construct interpretation of the four ICT Engagement subscales.", "which includes ?", "ICT competence", 202.0, 216.0], ["The aim of the present study is twofold: (1) to identify a factor structure between variables\u2014interest in broad science topics, perceived information and communications technology (ICT) competence, environmental awareness and optimism; and (2) to explore the relations between these variables at the country level. The first part of the aim is addressed using exploratory factor analysis with data from the Program for International Student Assessment (PISA) for 15-year-old students from Singapore and Finland. The results show that a comparable structure with four factors was verified in both countries. Correlation analyses and linear regression were used to address the second part of the aim. The results show that adolescents\u2019 interest in broad science topics can predict perceived ICT competence. Their interest in broad science topics and perceived ICT competence can predict environmental awareness in both countries. However, there is a difference in predicting environmental optimism. Singaporean students\u2019 interest in broad science topics and their perceived ICT competences are positive predictors, whereas environmental awareness is a negative predictor. Finnish students\u2019 environmental awareness negatively predicted environmental optimism.", "which includes ?", "ICT competence", 797.0, 811.0], ["The present study investigated the factor structure of and measurement invariance in the information and communication technology (ICT) engagement construct, and the relationship between ICT engagement and students' performance on science, mathematics and reading in China and Germany. Samples were derived from the Programme for International Student Assessment (PISA) 2015 survey. Configural, metric and scalar equivalence were found in a multigroup exploratory structural equation model. In the regression model, a significantly positive association between interest in ICT and student achievement was found in China, in contrast to a significantly negative association in Germany. All achievement scores were negatively and significantly correlated with perceived ICT competence scores in China, whereas science and mathematics achievement scores were not predicted by scores on ICT competence in Germany. Similar patterns were found in China and Germany in terms of perceived autonomy in using ICT and social relatedness in using ICT to predict students' achievement. The implications of all the findings were discussed.", "which includes ?", "ICT competence", 768.0, 782.0], ["Abstract As a relevant cognitive-motivational aspect of ICT literacy, a new construct ICT Engagement is theoretically based on self-determination theory and involves the factors ICT interest, Perceived ICT competence, Perceived autonomy related to ICT use, and ICT as a topic in social interaction. In this manuscript, we present different sources of validity supporting the construct interpretation of test scores in the ICT Engagement scale, which was used in PISA 2015. Specifically, we investigated the internal structure by dimensional analyses and investigated the relation of ICT Engagement aspects to other variables. The analyses are based on public data from PISA 2015 main study from Switzerland ( n = 5860) and Germany ( n = 6504). 
First, we could confirm the four-dimensional structure of ICT Engagement for the Swiss sample using a structural equation modelling approach. Second, ICT Engagement scales explained the highest amount of variance in ICT Use for Entertainment, followed by Practical use. Third, we found significantly lower values for girls in all ICT Engagement scales except ICT Interest. Fourth, we found a small negative correlation between the scores in the subscale \u201cICT as a topic in social interaction\u201d and reading performance in PISA 2015. We could replicate most results for the German sample. Overall, the obtained results support the construct interpretation of the four ICT Engagement subscales.", "which includes ?", "ICT interest", 178.0, 190.0], ["Abstract Objectives: To determine the acceptability of two psychological interventions for depressed adults in the community and their effect on caseness, symptoms, and subjective function. Design: A pragmatic multicentre randomised controlled trial, stratified by centre. Setting: Nine urban and rural communities in Finland, Republic of Ireland, Norway, Spain, and the United Kingdom. Participants: 452 participants aged 18 to 65, identified through a community survey with depressive or adjustment disorders according to the international classification of diseases, 10th revision or Diagnostic and Statistical Manual of Mental Disorders, fourth edition. Interventions: Six individual sessions of problem solving treatment (n=128), eight group sessions of the course on prevention of depression (n=108), and controls (n=189). Main outcome measures: Completion rates for each intervention, diagnosis of depression, and depressive symptoms and subjective function. Results: 63% of participants assigned to problem solving and 44% assigned to prevention of depression completed their intervention. The proportion of problem solving participants depressed at six months was 17% less than that for controls, giving a number needed to treat of 6; the mean difference in Beck depression inventory score was \u22122.63 (95% confidence interval \u22124.95 to \u22120.32), and there were significant improvements in SF-36 scores. For depression prevention, the difference in proportions of depressed participants was 14% (number needed to treat of 7); the mean difference in Beck depression inventory score was \u22121.50 (\u22124.16 to 1.17), and there were significant improvements in SF-36 scores. Such differences were not observed at 12 months. Neither specific diagnosis nor treatment with antidepressants affected outcome. Conclusions: When offered to adults with depressive disorders in the community, problem solving treatment was more acceptable than the course on prevention of depression. Both interventions reduced caseness and improved subjective function.", "which Depression outcomes (sources) ?", "Beck Depression Inventory", 1267.0, 1292.0], ["CONTEXT Both antidepressant medication and structured psychotherapy have been proven efficacious, but less than one third of people with depressive disorders receive effective levels of either treatment. OBJECTIVE To compare usual primary care for depression with 2 intervention programs: telephone care management and telephone care management plus telephone psychotherapy. DESIGN Three-group randomized controlled trial with allocation concealment and blinded outcome assessment conducted between November 2000 and May 2002. 
SETTING AND PARTICIPANTS A total of 600 patients beginning antidepressant treatment for depression were systematically sampled from 7 group-model primary care clinics; patients already receiving psychotherapy were excluded. INTERVENTIONS Usual primary care; usual care plus a telephone care management program including at least 3 outreach calls, feedback to the treating physician, and care coordination; usual care plus care management integrated with a structured 8-session cognitive-behavioral psychotherapy program delivered by telephone. MAIN OUTCOME MEASURES Blinded telephone interviews at 6 weeks, 3 months, and 6 months assessed depression severity (Hopkins Symptom Checklist Depression Scale and the Patient Health Questionnaire), patient-rated improvement, and satisfaction with treatment. Computerized administrative data examined use of antidepressant medication and outpatient visits. RESULTS Treatment participation rates were 97% for telephone care management and 93% for telephone care management plus psychotherapy. Compared with usual care, the telephone psychotherapy intervention led to lower mean Hopkins Symptom Checklist Depression Scale depression scores (P =.02), a higher proportion of patients reporting that depression was \"much improved\" (80% vs 55%, P<.001), and a higher proportion of patients \"very satisfied\" with depression treatment (59% vs 29%, P<.001). The telephone care management program had smaller effects on patient-rated improvement (66% vs 55%, P =.04) and satisfaction (47% vs 29%, P =.001); effects on mean depression scores were not statistically significant. CONCLUSIONS For primary care patients beginning antidepressant treatment, a telephone program integrating care management and structured cognitive-behavioral psychotherapy can significantly improve satisfaction and clinical outcomes. These findings suggest a new public health model of psychotherapy for depression including active outreach and vigorous efforts to improve access to and motivation for treatment.", "which Depression outcomes (sources) ?", "Symptom Checklist", 1195.0, 1212.0], ["Background The consensus statement on the treatment of depression (Paykel & Priest, 1992) advocates the use of cognitive therapy techniques as an adjunct to medication. Method This paper describes a randomised controlled trial of brief cognitive therapy (BCT) plus \u2018treatment as usual\u2019 versus treatment as usual in the management of 48 patients with major depressive disorder presenting in primary care. Results At the end of the acute phase, significantly more subjects (P < 0.05) met recovery criteria in the intervention group (n=15) compared with the control group (n=8). When initial neuroticism scores were controlled for, reductions in Beck Depression Inventory and Hamilton Rating Scale for Depression scores favoured the BCT group throughout the 12 months of follow-up. Conclusions BCT may be beneficial, but given the time constraints, therapists need to be more rather than less skilled in cognitive therapy. This, plus methodological limitations, leads us to advise caution before applying this approach more widely in primary care.", "which Depressive disorder ?", "Major depressive disorder", 350.0, 375.0], ["OBJECTIVE The researchers evaluated the effectiveness of paroxetine and Problem-Solving Treatment for Primary Care (PST-PC) for patients with minor depression or dysthymia. 
STUDY DESIGN This was an 11-week randomized placebo-controlled trial conducted in primary care practices in 2 communities (Lebanon, NH, and Seattle, Wash). Paroxetine (n=80) or placebo (n=81) therapy was started at 10 mg per day and increased to a maximum 40 mg per day, or PST-PC was provided (n=80). There were 6 scheduled visits for all treatment conditions. POPULATION A total of 241 primary care patients with minor depression (n=114) or dysthymia (n=127) were included. Of these, 191 patients (79.3%) completed all treatment visits. OUTCOMES Depressive symptoms were measured using the 20-item Hopkins Depression Scale (HSCL-D-20). Remission was scored on the Hamilton Depression Rating Scale (HDRS) as less than or equal to 6 at 11 weeks. Functional status was measured with the physical health component (PHC) and mental health component (MHC) of the 36-item Medical Outcomes Study Short Form. RESULTS All treatment conditions showed a significant decline in depressive symptoms over the 11-week period. There were no significant differences between the interventions or by diagnosis. For dysthymia the remission rate for paroxetine (80%) and PST-PC (57%) was significantly higher than for placebo (44%, P=.008). The remission rate was high for minor depression (64%) and similar for each treatment group. For the MHC there were significant outcome differences related to baseline level for paroxetine compared with placebo. For the PHC there were no significant differences between the treatment groups. CONCLUSIONS For dysthymia, paroxetine and PST-PC improved remission compared with placebo plus nonspecific clinical management. Results varied for the other outcomes measured. For minor depression, the 3 interventions were equally effective; general clinical management (watchful waiting) is an appropriate treatment option.", "which Depressive disorder ?", "Minor depression or dysthymia", 142.0, 171.0], ["Errors are prevalent in time series data, which is particularly common in the industrial field. Data with errors could not be stored in the database, which results in the loss of data assets. At present, to deal with these time series containing errors, besides keeping original erroneous data, discarding erroneous data and manually checking erroneous data, we can also use the cleaning algorithm widely used in the database to automatically clean the time series data. This survey provides a classification of time series data cleaning techniques and comprehensively reviews the state-of-the-art methods of each type. Besides, we summarize data cleaning tools, systems and evaluation criteria from research and industry. Finally, we highlight possible directions for time series data cleaning.", "which Definition ?", "time series data", 24.0, 40.0], ["This paper addresses the problem of low impact of smart city applications observed in the fields of energy and transport, which constitute high-priority domains for the development of smart cities. However, these are not the only fields where the impact of smart cities has been limited. The paper provides an explanation for the low impact of various individual applications of smart cities and discusses ways of improving their effectiveness. We argue that the impact of applications depends primarily on their ontology, and secondarily on smart technology and programming features. 
Consequently, we start by creating an overall ontology for the smart city, defining the building blocks of this ontology with respect to the most cited definitions of smart cities, and structuring this ontology with the Prot\u00e9g\u00e9 5.0 editor, defining entities, class hierarchy, object properties, and data type properties. We then analyze how the ontologies of a sample of smart city applications fit into the overall Smart City Ontology, the consistency between digital spaces, knowledge processes, city domains targeted by the applications, and the types of innovation that determine their impact. In conclusion, we underline the relationships between innovation and ontology, and discuss how we can improve the effectiveness of smart city applications, combining expert and user-driven ontology design with the integration and orchestration of applications over platforms and larger city entities such as neighborhoods, districts, clusters, and sectors of city activities.", "which Technology level ?", "Digital Space", NaN, NaN], ["Digital transformation is an emerging trend in developing the way how the work is being done, and it is present in the private and public sector, in all industries and fields of work. Smart cities, as one of the concepts related to digital transformation, is usually seen as a matter of local governments, as it is their responsibility to ensure a better quality of life for the citizens. Some cities have already taken advantages of possibilities offered by the concept of smart cities, creating new values to all stakeholders interacting in the living city ecosystems, thus serving as examples of good practice, while others are still developing and growing on their intentions to become smart. This paper provides a structured literature analysis and investigates key scope, services and technologies related to smart cities and digital transformation as concepts of empowering social and collaboration interactions, in order to identify leading factors in most smart city initiatives.", "which Issue(s) Addressed\t ?", "Empowering social and collaboration interactions", 870.0, 918.0], ["The advancement of technology has influenced all the enterprises. Enterprises should come up with the evolving approaches to face the challenges. With an evolving approach, the enterprise will be able to adapt to successive changes. Enterprise architecture is introduced as an approach to confront these challenges. The main issue is the generalization of this evolving approach to enterprise architecture. In an evolving approach, all aspects of the enterprise, as well as the ecosystem of the enterprise are considered. In this study, the notion of Internet of Things is considered as a transition factor in enterprise and enterprise architecture. Industry 4.0 and digital transformation have also been explored in the enterprise. Common challenges are extracted and defined.", "which Technologies Deployed ?", "Internet of Things", 551.0, 569.0], ["Digital transformation from \u201cSmart\u201d to \u201cIntelligent city\u201d is based on new information technologies and knowledge, as well as on organizational and security processes. The authors of this paper will present the legal and regulatory framework and challenges of Internet of things in development of smart cities on the way to become intelligent cities. 
The special contribution of the paper will be an overview of the new legal and regulatory framework, the General Data Protection Regulation (GDPR), which is of great importance for the European Union legal and regulatory framework and brings novelties in citizens' privacy and protection of personal data.", "which Technologies Deployed ?", "Internet of Things", 259.0, 277.0], ["The digital transformation of our life changes the way we work, learn, communicate, and collaborate. Enterprises are presently transforming their strategy, culture, processes, and their information systems to become digital. The digital transformation deeply disrupts existing enterprises and economies. Digitization fosters the development of IT systems with many rather small and distributed structures, like Internet of Things, Microservices and mobile services. In recent years, a lot of new business opportunities have appeared using the potential of services computing, Internet of Things, mobile systems, big data with analytics, cloud computing, collaboration networks, and decision support. Biological metaphors of living and adaptable ecosystems provide the logical foundation for self-optimizing and resilient run-time environments for intelligent business services and adaptable distributed information systems with service-oriented enterprise architectures. This has a strong impact for architecting digital services and products following both a value-oriented and a service perspective. The change from a closed-world modeling world to a more flexible open-world composition and evolution of enterprise architectures defines the moving context for adaptable and highly distributed systems, which are essential to enable the digital transformation. The present research paper investigates the evolution of Enterprise Architecture considering new defined value-oriented mappings between digital strategies, digital business models and an improved digital enterprise architecture.", "which Technologies Deployed ?", "Internet of Things", 411.0, 429.0], ["Abstract Background As a reaction to the novel coronavirus disease (COVID-19), countries around the globe have implemented various measures to reduce the spread of the virus. The transportation sector is particularly affected by the pandemic situation. The current study aims to contribute to the empirical knowledge regarding the effects of the coronavirus situation on the mobility of people by (1) broadening the perspective to the mobility of rural area\u2019s residents and (2) providing subjective data concerning the perceived changes of affected persons\u2019 mobility practices, as these two aspects have scarcely been considered in research so far. Methods To address these research gaps, a mixed-methods study was conducted that integrates a qualitative telephone interview study ( N = 15) and a quantitative household survey ( N = 301). The rural district of Altmarkkreis Salzwedel in Northern Germany was chosen as a model region. Results The results provide in-depth insights into the changing mobility practices of residents of a rural area during the legal restrictions to stem the spread of the virus. A high share of respondents (62.6%) experienced no changes in their mobility behavior due to the COVID-19 pandemic situation. However, nearly one third of trips were also cancelled overall. A modal shift was observed towards the reduction of trips by car and bus, and an increase of trips by bike. The share of trips by foot was unchanged. 
The majority of respondents did not predict strong long-term effects of the corona pandemic on their mobility behavior.", "which Area ?", "rural area", 444.0, 454.0], ["This paper addresses the problem of low impact of smart city applications observed in the fields of energy and transport, which constitute high-priority domains for the development of smart cities. However, these are not the only fields where the impact of smart cities has been limited. The paper provides an explanation for the low impact of various individual applications of smart cities and discusses ways of improving their effectiveness. We argue that the impact of applications depends primarily on their ontology, and secondarily on smart technology and programming features. Consequently, we start by creating an overall ontology for the smart city, defining the building blocks of this ontology with respect to the most cited definitions of smart cities, and structuring this ontology with the Prot\u00e9g\u00e9 5.0 editor, defining entities, class hierarchy, object properties, and data type properties. We then analyze how the ontologies of a sample of smart city applications fit into the overall Smart City Ontology, the consistency between digital spaces, knowledge processes, city domains targeted by the applications, and the types of innovation that determine their impact. In conclusion, we underline the relationships between innovation and ontology, and discuss how we can improve the effectiveness of smart city applications, combining expert and user-driven ontology design with the integration and orchestration of applications over platforms and larger city entities such as neighborhoods, districts, clusters, and sectors of city activities.", "which Linked Ontology ?", "Smart City Ontology", 1009.0, 1028.0], [" Information technology (IT) is often less emphasized in coursework related to public administration education, despite the growing need for technological capabilities in those joining the public sector workforce. This coupled with a lesser emphasis on e-government/IT skills by accreditation standards adds to the widening gap between theory and practice in the field. This study examines the emphasis placed on e-government/IT concepts in Master of Public Administration (MPA) and Master of Public Policy (MPP) programs, either through complete course offerings or through related courses such as public management, strategic planning, performance measurement and organization theory. Based on a content analysis of their syllabi, the paper analyzes the extent to which the IT/e-government courses in MPA/Master of Public Policy programs address the Network of Schools of Public Policy, Affairs, and Administration competency standards, and further discusses the orientation of the courses with two of the competencies: management and policy. Specifically, are e-government/IT courses more management-oriented or policy-oriented? Do public management, strategic planning, performance measurement, and organization theory courses address IT concerns? ", "which has business competence ?", "strategic planning", 626.0, 644.0], ["Abstract Background During the current worldwide pandemic, coronavirus disease 2019 (Covid-19) was first diagnosed in Iceland at the end of February. However, data are limited on how SARS-CoV-2, the virus that causes Covid-19, enters and spreads in a population. 
Methods We targeted testing to persons living in Iceland who were at high risk for infection (mainly those who were symptomatic, had recently traveled to high-risk countries, or had contact with infected persons). We also carried out population screening using two strategies: issuing an open invitation to 10,797 persons and sending random invitations to 2283 persons. We sequenced SARS-CoV-2 from 643 samples. Results As of April 4, a total of 1221 of 9199 persons (13.3%) who were recruited for targeted testing had positive results for infection with SARS-CoV-2. Of those tested in the general population, 87 (0.8%) in the open-invitation screening and 13 (0.6%) in the random-population screening tested positive for the virus. In total, 6% of the population was screened. Most persons in the targeted-testing group who received positive tests early in the study had recently traveled internationally, in contrast to those who tested positive later in the study. Children under 10 years of age were less likely to receive a positive result than were persons 10 years of age or older, with percentages of 6.7% and 13.7%, respectively, for targeted testing; in the population screening, no child under 10 years of age had a positive result, as compared with 0.8% of those 10 years of age or older. Fewer females than males received positive results both in targeted testing (11.0% vs. 16.7%) and in population screening (0.6% vs. 0.9%). The haplotypes of the sequenced SARS-CoV-2 viruses were diverse and changed over time. The percentage of infected participants that was determined through population screening remained stable for the 20-day duration of screening. Conclusions In a population-based study in Iceland, children under 10 years of age and females had a lower incidence of SARS-CoV-2 infection than adolescents or adults and males. The proportion of infected persons identified through population screening did not change substantially during the screening period, which was consistent with a beneficial effect of containment efforts. (Funded by deCODE Genetics\u2013Amgen.)", "which has study design ?", "population screening", 497.0, 517.0], ["Abstract Background During the current worldwide pandemic, coronavirus disease 2019 (Covid-19) was first diagnosed in Iceland at the end of February. However, data are limited on how SARS-CoV-2, the virus that causes Covid-19, enters and spreads in a population. Methods We targeted testing to persons living in Iceland who were at high risk for infection (mainly those who were symptomatic, had recently traveled to high-risk countries, or had contact with infected persons). We also carried out population screening using two strategies: issuing an open invitation to 10,797 persons and sending random invitations to 2283 persons. We sequenced SARS-CoV-2 from 643 samples. Results As of April 4, a total of 1221 of 9199 persons (13.3%) who were recruited for targeted testing had positive results for infection with SARS-CoV-2. Of those tested in the general population, 87 (0.8%) in the open-invitation screening and 13 (0.6%) in the random-population screening tested positive for the virus. In total, 6% of the population was screened. Most persons in the targeted-testing group who received positive tests early in the study had recently traveled internationally, in contrast to those who tested positive later in the study. 
Children under 10 years of age were less likely to receive a positive result than were persons 10 years of age or older, with percentages of 6.7% and 13.7%, respectively, for targeted testing; in the population screening, no child under 10 years of age had a positive result, as compared with 0.8% of those 10 years of age or older. Fewer females than males received positive results both in targeted testing (11.0% vs. 16.7%) and in population screening (0.6% vs. 0.9%). The haplotypes of the sequenced SARS-CoV-2 viruses were diverse and changed over time. The percentage of infected participants that was determined through population screening remained stable for the 20-day duration of screening. Conclusions In a population-based study in Iceland, children under 10 years of age and females had a lower incidence of SARS-CoV-2 infection than adolescents or adults and males. The proportion of infected persons identified through population screening did not change substantially during the screening period, which was consistent with a beneficial effect of containment efforts. (Funded by deCODE Genetics\u2013Amgen.)", "which has study design ?", "targeted testing", 274.0, 290.0], ["Echovirus 7, 13 and 19 are part of the disease-causing enteroviruses identified in Nigeria. Presently, no treatment modality is clinically available against these enteric viruses. Herein, we investigated the ability of two anthraquinones (physcion and chrysophanol) and two lupane triterpenoids (betulinic acid and lupeol), isolated from the stem bark of Senna siamea, to reduce the viral-induced cytopathic effect on rhabdomyosarcoma cells using MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) colorimetric method. Viral-induced CPE by E7 and E19 was inhibited in the presence of all tested compounds, whereas E13 was resistant to all the compounds except betulinic acid. Physcion was the most active with IC50 of 0.42 and 0.33 \u03bcg/mL on E7 and E19, respectively. We concluded that these compounds from Senna siamea possess anti-enteroviral activities and betulinic acid may represent a potential therapeutic agent to control E7, E13, and E19 infections, especially due to its ability to inhibit CPE caused by the impervious E13.", "which Source name ?", "Senna siamea", 356.0, 368.0], ["Previous research explored workplace climate as a factor of workplace bullying and coping with workplace bullying, but these concepts were not closely related to workplace bullying behaviors (WBBs). To examine whether the perceived exposure to bullying mediates the relationship between the climate of accepting WBBs and job satisfaction under the condition of different levels of WBBs coping self-efficacy beliefs, we performed moderated mediation analysis. The Negative Acts Questionnaire \u2013 Revised was given to 329 employees from Serbia for assessing perceived exposure to bullying. Leaving the original scale items, the instruction of the original Negative Acts Questionnaire \u2013 Revised was modified for assessing (1) the climate of accepting WBBs and (2) WBBs coping self-efficacy beliefs. There was a significant negative relationship between exposure to bullying and job satisfaction. WBB acceptance climate was positively related to exposure to workplace bullying and negatively related to job satisfaction. 
WBB acceptance climate had an indirect relationship with job satisfaction through bullying exposure, and the relationship between WBB acceptance and exposure to bullying was weaker among those who believed that they were more efficient in coping with workplace bullying. Workplace bullying could be sustained by WBB acceptance climate which threatens the job-related outcomes. WBBs coping self-efficacy beliefs have some buffering effects.", "which Variables ?", "Coping self-efficacy beliefs", 386.0, 414.0], ["Previous research explored workplace climate as a factor of workplace bullying and coping with workplace bullying, but these concepts were not closely related to workplace bullying behaviors (WBBs). To examine whether the perceived exposure to bullying mediates the relationship between the climate of accepting WBBs and job satisfaction under the condition of different levels of WBBs coping self-efficacy beliefs, we performed moderated mediation analysis. The Negative Acts Questionnaire \u2013 Revised was given to 329 employees from Serbia for assessing perceived exposure to bullying. Leaving the original scale items, the instruction of the original Negative Acts Questionnaire \u2013 Revised was modified for assessing (1) the climate of accepting WBBs and (2) WBBs coping self-efficacy beliefs. There was a significant negative relationship between exposure to bullying and job satisfaction. WBB acceptance climate was positively related to exposure to workplace bullying and negatively related to job satisfaction. WBB acceptance climate had an indirect relationship with job satisfaction through bullying exposure, and the relationship between WBB acceptance and exposure to bullying was weaker among those who believed that they were more efficient in coping with workplace bullying. Workplace bullying could be sustained by WBB acceptance climate which threatens the job-related outcomes. WBBs coping self-efficacy beliefs have some buffering effects.", "which Variables ?", "Job satisfaction", 321.0, 337.0], ["Previous research explored workplace climate as a factor of workplace bullying and coping with workplace bullying, but these concepts were not closely related to workplace bullying behaviors (WBBs). To examine whether the perceived exposure to bullying mediates the relationship between the climate of accepting WBBs and job satisfaction under the condition of different levels of WBBs coping self-efficacy beliefs, we performed moderated mediation analysis. The Negative Acts Questionnaire \u2013 Revised was given to 329 employees from Serbia for assessing perceived exposure to bullying. Leaving the original scale items, the instruction of the original Negative Acts Questionnaire \u2013 Revised was modified for assessing (1) the climate of accepting WBBs and (2) WBBs coping self-efficacy beliefs. There was a significant negative relationship between exposure to bullying and job satisfaction. WBB acceptance climate was positively related to exposure to workplace bullying and negatively related to job satisfaction. WBB acceptance climate had an indirect relationship with job satisfaction through bullying exposure, and the relationship between WBB acceptance and exposure to bullying was weaker among those who believed that they were more efficient in coping with workplace bullying. Workplace bullying could be sustained by WBB acceptance climate which threatens the job-related outcomes. 
WBBs coping self-efficacy beliefs have some buffering effects.", "which Variables ?", "Workplace bullying", 60.0, 78.0], ["Systematic studies on the influence of crystalline vs disordered nanocomposite structures on barrier properties and water vapor sensitivity are scarce as it is difficult to switch between the two morphologies without changing other critical parameters. By combining water-soluble poly(vinyl alcohol) (PVOH) and ultrahigh aspect ratio synthetic sodium fluorohectorite (Hec) as filler, we were able to fabricate nanocomposites from a single nematic aqueous suspension by slot die coating that, depending on the drying temperature, forms different desired morphologies. Increasing the drying temperature from 20 to 50 \u00b0C for the same formulation triggers phase segregation and disordered nanocomposites are obtained, while at room temperature, one-dimensional (1D) crystalline, intercalated hybrid Bragg Stacks form. The onset of swelling of the crystalline morphology is pushed to significantly higher relative humidity (RH). This disorder-order transition renders PVOH/Hec a promising barrier material at RH of up to 65%, which is relevant for food packaging. The oxygen permeability (OP) of the 1D crystalline PVOH/Hec is an order of magnitude lower compared to the OP of the disordered nanocomposite at this elevated RH (OP = 0.007 cm3 \u03bcm m-2 day-1 bar-1 cf. OP = 0.047 cm3 \u03bcm m-2 day-1 bar-1 at 23 \u00b0C and 65% RH).", "which Fabrication method ?", "Slot die", 469.0, 477.0], ["Abstract Flower-like palladium nanoclusters (FPNCs) are electrodeposited onto graphene electrode that are prepared by chemical vapor deposition (CVD). The CVD graphene layer is transferred onto a poly(ethylene naphthalate) (PEN) film to provide a mechanical stability and flexibility. The surface of the CVD graphene is functionalized with diaminonaphthalene (DAN) to form flower shapes. Palladium nanoparticles act as templates to mediate the formation of FPNCs, which increase in size with reaction time. The population of FPNCs can be controlled by adjusting the DAN concentration as functionalization solution. These FPNCs_CG electrodes are sensitive to hydrogen gas at room temperature. The sensitivity and response time as a function of the FPNCs population are investigated, resulted in improved performance with increasing population. Furthermore, the minimum detectable level (MDL) of hydrogen is 0.1 ppm, which is at least 2 orders of magnitude lower than that of chemical sensors based on other Pd-based hybrid materials.", "which Limit of detection (ppm) ?", "0.1", 906.0, 909.0], ["Abstract Flower-like palladium nanoclusters (FPNCs) are electrodeposited onto graphene electrode that are prepared by chemical vapor deposition (CVD). The CVD graphene layer is transferred onto a poly(ethylene naphthalate) (PEN) film to provide a mechanical stability and flexibility. The surface of the CVD graphene is functionalized with diaminonaphthalene (DAN) to form flower shapes. Palladium nanoparticles act as templates to mediate the formation of FPNCs, which increase in size with reaction time. The population of FPNCs can be controlled by adjusting the DAN concentration as functionalization solution. These FPNCs_CG electrodes are sensitive to hydrogen gas at room temperature. The sensitivity and response time as a function of the FPNCs population are investigated, resulted in improved performance with increasing population. 
Furthermore, the minimum detectable level (MDL) of hydrogen is 0.1 ppm, which is at least 2 orders of magnitude lower than that of chemical sensors based on other Pd-based hybrid materials.", "which Minimum experimental range (ppm) ?", "0.1", 906.0, 909.0], ["In this article, cupric oxide (CuO) leafletlike nanosheets have been synthesized by a facile, low-cost, and surfactant-free method, and they have further been successfully developed for sensitive and selective determination of hydrogen sulfide (H2S) with high recovery ability. The experimental results have revealed that the sensitivity and recovery time of the present H2S gas sensor are strongly dependent on the working temperature. The best H2S sensing performance has been achieved with a low detection limit of 2 ppb and broad linear range from 30 ppb to 1.2 ppm. The gas sensor is reversible, with a quick response time of 4 s and a short recovery time of 9 s. In addition, negligible responses can be observed exposed to 100-fold concentrations of other gases which may exist in the atmosphere such as nitrogen (N2), oxygen (O2), nitric oxide (NO), carbon monoxide (CO), nitrogen dioxide (NO2), hydrogen (H2), and so on, indicating relatively high selectivity of the present H2S sensor. The H2S sensor based on t...", "which Maximum experimental range (ppm) ?", "1.2", 562.0, 565.0], ["Purpose To determine the effects of hypercholesterolemia in pregnant mice on the susceptibility to atherosclerosis in adult life through a new animal modeling approach. Methods Male offspring from apoE\u2212/\u2212 mice fed with regular (R) or high (H) cholesterol chow during pregnancy were randomly subjected to regular (Groups R\u2013R and H\u2013R, n = 10) or high cholesterol diet (Groups R\u2013H and H\u2013H, n = 10) for 14 weeks. Plasma lipid profiles were determined in all rats. The abdominal aorta was examined for the severity of atherosclerotic lesions in offspring. Results Lipids significantly increased while high-density lipoprotein-cholesterol/low-density lipoprotein-cholesterol decreased in mothers fed high cholesterol chow after delivery compared with before pregnancy (p < 0.01). Groups R\u2013H and H\u2013R indicated dyslipidemia and significant atherosclerotic lesions. Group H\u2013H demonstrated the highest lipids, lowest high-density lipoprotein-cholesterol/low-density lipoprotein-cholesterol, highest incidence (90%), plaque area to luminal area ratio (0.78 \u00b1 0.02) and intima to media ratio (1.57 \u00b1 0.05). Conclusion Hypercholesterolemia in pregnant mice may increase susceptibility to atherosclerosis in their adult offspring.", "which p-value of atherosclerotic lesion sizes ?", "0.01", 767.0, 771.0], ["Background Recognition of salient MRI morphologic and kinetic features of various malignant tumor subtypes and benign diseases, either visually or with artificial intelligence (AI), allows radiologists to improve diagnoses that may improve patient treatment. Purpose To evaluate whether the diagnostic performance of radiologists in the differentiation of cancer from noncancer at dynamic contrast material-enhanced (DCE) breast MRI is improved when using an AI system compared with conventionally available software. Materials and Methods In a retrospective clinical reader study, images from breast DCE MRI examinations were interpreted by 19 breast imaging radiologists from eight academic and 11 private practices. Readers interpreted each examination twice.
In the \"first read,\" they were provided with conventionally available computer-aided evaluation software, including kinetic maps. In the \"second read,\" they were also provided with AI analytics through computer-aided diagnosis software. Reader diagnostic performance was evaluated with receiver operating characteristic (ROC) analysis, with the area under the ROC curve (AUC) as a figure of merit in the task of distinguishing between malignant and benign lesions. The primary study end point was the difference in AUC between the first-read and the second-read conditions. Results One hundred eleven women (mean age, 52 years \u00b1 13 [standard deviation]) were evaluated with a total of 111 breast DCE MRI examinations (54 malignant and 57 nonmalignant lesions). The average AUC of all readers improved from 0.71 to 0.76 (P = .04) when using the AI system. The average sensitivity improved when Breast Imaging Reporting and Data System (BI-RADS) category 3 was used as the cut point (from 90% to 94%; 95% confidence interval [CI] for the change: 0.8%, 7.4%) but not when using BI-RADS category 4a (from 80% to 85%; 95% CI: -0.9%, 11%). The average specificity showed no difference when using either BI-RADS category 4a or category 3 as the cut point (52% and 52% [95% CI: -7.3%, 6.0%], and from 29% to 28% [95% CI: -6.4%, 4.3%], respectively). Conclusion Use of an artificial intelligence system improves radiologists' performance in the task of differentiating benign and malignant MRI breast lesions. \u00a9 RSNA, 2020 Online supplemental material is available for this article. See also the editorial by Krupinski in this issue.", "which AUC without AI ?", "0.71", 1569.0, 1573.0], ["Purpose To evaluate the benefits of an artificial intelligence (AI)-based tool for two-dimensional mammography in the breast cancer detection process. Materials and Methods In this multireader, multicase retrospective study, 14 radiologists assessed a dataset of 240 digital mammography images, acquired between 2013 and 2016, using a counterbalance design in which half of the dataset was read without AI and the other half with the help of AI during a first session and vice versa during a second session, which was separated from the first by a washout period. Area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and reading time were assessed as endpoints. Results The average AUC across readers was 0.769 (95% CI: 0.724, 0.814) without AI and 0.797 (95% CI: 0.754, 0.840) with AI. The average difference in AUC was 0.028 (95% CI: 0.002, 0.055, P = .035). Average sensitivity was increased by 0.033 when using AI support (P = .021). Reading time changed dependently to the AI-tool score. For low likelihood of malignancy (< 2.5%), the time was about the same in the first reading session and slightly decreased in the second reading session. For higher likelihood of malignancy, the reading time was on average increased with the use of AI. Conclusion This clinical investigation demonstrated that the concurrent use of this AI tool improved the diagnostic performance of radiologists in the detection of breast cancer without prolonging their workflow.Supplemental material is available for this article.\u00a9 RSNA, 2020.", "which AUC without AI ?", "0.769", 740.0, 745.0], ["We aimed to evaluate an artificial intelligence (AI) system that can detect and diagnose lesions of maximum intensity projection (MIP) in dynamic contrast-enhanced (DCE) breast magnetic resonance imaging (MRI). 
We retrospectively gathered MIPs of DCE breast MRI for training and validation data from 30 and 7 normal individuals, 49 and 20 benign cases, and 135 and 45 malignant cases, respectively. Breast lesions were indicated with a bounding box and labeled as benign or malignant by a radiologist, while the AI system was trained to detect and calculate possibilities of malignancy using RetinaNet. The AI system was analyzed using test sets of 13 normal, 20 benign, and 52 malignant cases. Four human readers also scored these test data with and without the assistance of the AI system for the possibility of a malignancy in each breast. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were 0.926, 0.828, and 0.925 for the AI system; 0.847, 0.841, and 0.884 for human readers without AI; and 0.889, 0.823, and 0.899 for human readers with AI using a cutoff value of 2%, respectively. The AI system showed better diagnostic performance compared to the human readers (p = 0.002), and because of the increased performance of human readers with the assistance of the AI system, the AUC of human readers was significantly higher with than without the AI system (p = 0.039). Our AI system showed a high performance ability in detecting and diagnosing lesions in MIPs of DCE breast MRI and increased the diagnostic performance of human readers.", "which AUC without AI ?", "0.884", 1000.0, 1005.0], ["Background Recognition of salient MRI morphologic and kinetic features of various malignant tumor subtypes and benign diseases, either visually or with artificial intelligence (AI), allows radiologists to improve diagnoses that may improve patient treatment. Purpose To evaluate whether the diagnostic performance of radiologists in the differentiation of cancer from noncancer at dynamic contrast material-enhanced (DCE) breast MRI is improved when using an AI system compared with conventionally available software. Materials and Methods In a retrospective clinical reader study, images from breast DCE MRI examinations were interpreted by 19 breast imaging radiologists from eight academic and 11 private practices. Readers interpreted each examination twice. In the \"first read,\" they were provided with conventionally available computer-aided evaluation software, including kinetic maps. In the \"second read,\" they were also provided with AI analytics through computer-aided diagnosis software. Reader diagnostic performance was evaluated with receiver operating characteristic (ROC) analysis, with the area under the ROC curve (AUC) as a figure of merit in the task of distinguishing between malignant and benign lesions. The primary study end point was the difference in AUC between the first-read and the second-read conditions. Results One hundred eleven women (mean age, 52 years \u00b1 13 [standard deviation]) were evaluated with a total of 111 breast DCE MRI examinations (54 malignant and 57 nonmalignant lesions). The average AUC of all readers improved from 0.71 to 0.76 (P = .04) when using the AI system. The average sensitivity improved when Breast Imaging Reporting and Data System (BI-RADS) category 3 was used as the cut point (from 90% to 94%; 95% confidence interval [CI] for the change: 0.8%, 7.4%) but not when using BI-RADS category 4a (from 80% to 85%; 95% CI: -0.9%, 11%). The average specificity showed no difference when using either BI-RADS category 4a or category 3 as the cut point (52% and 52% [95% CI: -7.3%, 6.0%], and from 29% to 28% [95% CI: -6.4%, 4.3%], respectively). 
Conclusion Use of an artificial intelligence system improves radiologists' performance in the task of differentiating benign and malignant MRI breast lesions. \u00a9 RSNA, 2020 Online supplemental material is available for this article. See also the editorial by Krupinski in this issue.", "which AUC with AI ?", "0.76", 1577.0, 1581.0], ["Purpose To evaluate the benefits of an artificial intelligence (AI)-based tool for two-dimensional mammography in the breast cancer detection process. Materials and Methods In this multireader, multicase retrospective study, 14 radiologists assessed a dataset of 240 digital mammography images, acquired between 2013 and 2016, using a counterbalance design in which half of the dataset was read without AI and the other half with the help of AI during a first session and vice versa during a second session, which was separated from the first by a washout period. Area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and reading time were assessed as endpoints. Results The average AUC across readers was 0.769 (95% CI: 0.724, 0.814) without AI and 0.797 (95% CI: 0.754, 0.840) with AI. The average difference in AUC was 0.028 (95% CI: 0.002, 0.055, P = .035). Average sensitivity was increased by 0.033 when using AI support (P = .021). Reading time changed dependently to the AI-tool score. For low likelihood of malignancy (< 2.5%), the time was about the same in the first reading session and slightly decreased in the second reading session. For higher likelihood of malignancy, the reading time was on average increased with the use of AI. Conclusion This clinical investigation demonstrated that the concurrent use of this AI tool improved the diagnostic performance of radiologists in the detection of breast cancer without prolonging their workflow.Supplemental material is available for this article.\u00a9 RSNA, 2020.", "which AUC with AI ?", "0.797", 784.0, 789.0], ["We aimed to evaluate an artificial intelligence (AI) system that can detect and diagnose lesions of maximum intensity projection (MIP) in dynamic contrast-enhanced (DCE) breast magnetic resonance imaging (MRI). We retrospectively gathered MIPs of DCE breast MRI for training and validation data from 30 and 7 normal individuals, 49 and 20 benign cases, and 135 and 45 malignant cases, respectively. Breast lesions were indicated with a bounding box and labeled as benign or malignant by a radiologist, while the AI system was trained to detect and calculate possibilities of malignancy using RetinaNet. The AI system was analyzed using test sets of 13 normal, 20 benign, and 52 malignant cases. Four human readers also scored these test data with and without the assistance of the AI system for the possibility of a malignancy in each breast. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were 0.926, 0.828, and 0.925 for the AI system; 0.847, 0.841, and 0.884 for human readers without AI; and 0.889, 0.823, and 0.899 for human readers with AI using a cutoff value of 2%, respectively. The AI system showed better diagnostic performance compared to the human readers (p = 0.002), and because of the increased performance of human readers with the assistance of the AI system, the AUC of human readers was significantly higher with than without the AI system (p = 0.039). 
Our AI system showed a high performance ability in detecting and diagnosing lesions in MIPs of DCE breast MRI and increased the diagnostic performance of human readers.", "which AUC with AI ?", "0.899", 1058.0, 1063.0], ["There is currently a gap between the natural language expression of scholarly publications and their structured semantic content modeling to enable intelligent content search. With the volume of research growing exponentially every year, a search feature operating over semantically structured content is compelling. The SemEval-2021 Shared Task NLPContributionGraph (a.k.a. \u2018the NCG task\u2019) tasks participants to develop automated systems that structure contributions from NLP scholarly articles in the English language. Being the first-of-its-kind in the SemEval series, the task released structured data from NLP scholarly articles at three levels of information granularity, i.e. at sentence-level, phrase-level, and phrases organized as triples toward Knowledge Graph (KG) building. The sentence-level annotations comprised the few sentences about the article\u2019s contribution. The phrase-level annotations were scientific term and predicate phrases from the contribution sentences. Finally, the triples constituted the research overview KG. For the Shared Task, participating systems were then expected to automatically classify contribution sentences, extract scientific terms and relations from the sentences, and organize them as KG triples. Overall, the task drew a strong participation demographic of seven teams and 27 participants. The best end-to-end task system classified contribution sentences at 57.27% F1, phrases at 46.41% F1, and triples at 22.28% F1. While the absolute performance to generate triples remains low, as conclusion to the article, the difficulty of producing such data and as a consequence of modeling it is highlighted.", "which Pipelined Triples Extraction Performance F-score ?", "22.28", 1459.0, 1464.0], ["There is currently a gap between the natural language expression of scholarly publications and their structured semantic content modeling to enable intelligent content search. With the volume of research growing exponentially every year, a search feature operating over semantically structured content is compelling. The SemEval-2021 Shared Task NLPContributionGraph (a.k.a. \u2018the NCG task\u2019) tasks participants to develop automated systems that structure contributions from NLP scholarly articles in the English language. Being the first-of-its-kind in the SemEval series, the task released structured data from NLP scholarly articles at three levels of information granularity, i.e. at sentence-level, phrase-level, and phrases organized as triples toward Knowledge Graph (KG) building. The sentence-level annotations comprised the few sentences about the article\u2019s contribution. The phrase-level annotations were scientific term and predicate phrases from the contribution sentences. Finally, the triples constituted the research overview KG. For the Shared Task, participating systems were then expected to automatically classify contribution sentences, extract scientific terms and relations from the sentences, and organize them as KG triples. Overall, the task drew a strong participation demographic of seven teams and 27 participants. The best end-to-end task system classified contribution sentences at 57.27% F1, phrases at 46.41% F1, and triples at 22.28% F1. 
While the absolute performance to generate triples remains low, as conclusion to the article, the difficulty of producing such data and as a consequence of modeling it is highlighted.", "which Pipelined Phrases Extraction Performance F-score ?", "46.4", NaN, NaN], ["There is currently a gap between the natural language expression of scholarly publications and their structured semantic content modeling to enable intelligent content search. With the volume of research growing exponentially every year, a search feature operating over semantically structured content is compelling. The SemEval-2021 Shared Task NLPContributionGraph (a.k.a. \u2018the NCG task\u2019) tasks participants to develop automated systems that structure contributions from NLP scholarly articles in the English language. Being the first-of-its-kind in the SemEval series, the task released structured data from NLP scholarly articles at three levels of information granularity, i.e. at sentence-level, phrase-level, and phrases organized as triples toward Knowledge Graph (KG) building. The sentence-level annotations comprised the few sentences about the article\u2019s contribution. The phrase-level annotations were scientific term and predicate phrases from the contribution sentences. Finally, the triples constituted the research overview KG. For the Shared Task, participating systems were then expected to automatically classify contribution sentences, extract scientific terms and relations from the sentences, and organize them as KG triples. Overall, the task drew a strong participation demographic of seven teams and 27 participants. The best end-to-end task system classified contribution sentences at 57.27% F1, phrases at 46.41% F1, and triples at 22.28% F1. While the absolute performance to generate triples remains low, as conclusion to the article, the difficulty of producing such data and as a consequence of modeling it is highlighted.", "which Sentences Extraction Performance F-score ?", "57.27", 1411.0, 1416.0], ["This paper proposes a robust and real-time capable algorithm for classification of the first and second heart sounds. The classification algorithm is based on the evaluation of the envelope curve of the phonocardiogram. For the evaluation, in contrast to other studies, measurements on 12 probands were conducted in different physiological conditions. Moreover, for each measurement the auscultation point, posture and physical stress were varied. The proposed envelope-based algorithm is tested with two different methods for envelope curve extraction: the Hilbert transform and the short-time Fourier transform. The performance of the classification of the first heart sounds is evaluated by using a reference electrocardiogram. Overall, by using the Hilbert transform, the algorithm has a better performance regarding the F1-score and computational effort. The proposed algorithm achieves for the S1 classification an F1-score up to 95.7% and in average 90.5%. The algorithm is robust against the age, BMI, posture, heart rate and auscultation point (except measurements on the back) of the subjects.", "which Average F1-Score ?", "90.5", 965.0, 969.0], ["This paper proposes a robust and real-time capable algorithm for classification of the first and second heart sounds. The classification algorithm is based on the evaluation of the envelope curve of the phonocardiogram. For the evaluation, in contrast to other studies, measurements on 12 probands were conducted in different physiological conditions. 
Moreover, for each measurement the auscultation point, posture and physical stress were varied. The proposed envelope-based algorithm is tested with two different methods for envelope curve extraction: the Hilbert transform and the short-time Fourier transform. The performance of the classification of the first heart sounds is evaluated by using a reference electrocardiogram. Overall, by using the Hilbert transform, the algorithm has a better performance regarding the F1-score and computational effort. The proposed algorithm achieves for the S1 classification an F1-score up to 95.7% and in average 90.5%. The algorithm is robust against the age, BMI, posture, heart rate and auscultation point (except measurements on the back) of the subjects.", "which F1-score ?", "95.7", 944.0, 948.0], ["Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.", "which F1-score ?", "88.6", 933.0, 937.0], ["The auscultation of heart sounds has been for decades a fundamental diagnostic tool in clinical practice. Higher effectiveness can be achieved by recording the corresponding biomedical signal, namely the phonocardiographic signal, and processing it by means of traditional signal processing techniques. An unavoidable processing step is the heart sound segmentation, which is still a challenging task from a technical viewpoint\u2014a limitation of state-of-the-art approaches is the unavailability of trustworthy techniques for the detection of heart sound components. The aim of this work is to design a reliable algorithm for the identification and the classification of heart sounds\u2019 main components. The proposed methodology was tested on a sample population of 24 healthy subjects over 10-min-long simultaneous electrocardiographic and phonocardiographic recordings and it was found capable of correctly detecting and classifying an average of 99.2% of the heart sounds along with their components. 
Moreover, the delay of each component with respect to the corresponding R-wave peak and the delay among the components of the same heart sound were computed: the resulting experimental values are coherent with what is expected from the literature and what was obtained by other studies.", "which Accuracy ?", "99.2", 945.0, 949.0], ["Background and Aims Prediction of severe clinical outcomes in Clostridium difficile infection (CDI) is important to inform management decisions for optimum patient care. Currently, treatment recommendations for CDI vary based on disease severity but validated methods to predict severe disease are lacking. The aim of the study was to derive and validate a clinical prediction tool for severe outcomes in CDI. Methods A cohort totaling 638 patients with CDI was prospectively studied at three tertiary care clinical sites (Boston, Dublin and Houston). The clinical prediction rule (CPR) was developed by multivariate logistic regression analysis using the Boston cohort and the performance of this model was then evaluated in the combined Houston and Dublin cohorts. Results The CPR included the following three binary variables: age \u2265 65 years, peak serum creatinine \u22652 mg/dL and peak peripheral blood leukocyte count of \u226520,000 cells/\u03bcL. The Clostridium difficile severity score (CDSS) correctly classified 76.5% (95% CI: 70.87-81.31) and 72.5% (95% CI: 67.52-76.91) of patients in the derivation and validation cohorts, respectively. In the validation cohort, CDSS scores of 0, 1, 2 or 3 were associated with severe clinical outcomes of CDI in 4.7%, 13.8%, 33.3% and 40.0% of cases respectively. Conclusions We prospectively derived and validated a clinical prediction rule for severe CDI that is simple, reliable and accurate and can be used to identify high-risk patients most likely to benefit from measures to prevent complications of CDI.", "which Accuracy ?", "72.5", 1041.0, 1045.0], ["We report the in situ measurement of the ultraviolet/vacuum-ultraviolet (UV/VUV) emission from a plasma produced by high power impulse magnetron sputtering with aluminum target, using argon as background gas. The UV/VUV detection system is based upon the quantification of the re-emitted fluorescence from a sodium salicylate layer that is placed in a housing inside the vacuum chamber, at 11 cm from the center of the cathode. The detector is equipped with filters that allow for differentiating various spectral regions, and with a front collimating tube that provides a spatial resolution \u2248 0.5 cm. Using various views of the plasma, the measured absolutely calibrated photon rates enable to calculate emissivities and irradiances based on a model of the ionization region. We present results that demonstrate that Al++ ions are responsible for most of the VUV irradiance. We also discuss the photoelectric emission due to irradiances on the target ~ 2\u00d71018 s-1 cm-2 produced by high energy photons from resonance lines of Ar+.", "which Pressure ?", "0.5", 594.0, 597.0], ["Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. 
The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.", "which AUC ?", "0.941", 984.0, 989.0], ["Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.", "which Classifier accuracy ?", "86.2", 963.0, 967.0], ["Leukemia is a fatal cancer and has two main types: Acute and chronic. Each type has two more subtypes: Lymphoid and myeloid. Hence, in total, there are four subtypes of leukemia. This study proposes a new approach for diagnosis of all subtypes of leukemia from microscopic blood cell images using convolutional neural networks (CNN), which requires a large training data set. Therefore, we also investigated the effects of data augmentation for an increasing number of training samples synthetically. We used two publicly available leukemia data sources: ALL-IDB and ASH Image Bank. Next, we applied seven different image transformation techniques as data augmentation. We designed a CNN architecture capable of recognizing all subtypes of leukemia. Besides, we also explored other well-known machine learning algorithms such as naive Bayes, support vector machine, k-nearest neighbor, and decision tree. 
To evaluate our approach, we set up a set of experiments and used 5-fold cross-validation. The results we obtained from experiments showed that our CNN model performance has 88.25% and 81.74% accuracy, in leukemia versus healthy and multi-class classification of all subtypes, respectively. Finally, we also showed that the CNN model has a better performance than other well-known machine learning algorithms.", "which Classifier accuracy ?", "88.25", 1079.0, 1084.0], ["Transparency, flexibility, and especially ultralow oxygen (OTR) and water vapor (WVTR) transmission rates are the key issues to be addressed for packaging of flexible organic photovoltaics and organic light-emitting diodes. Concomitant optimization of all essential features is still a big challenge. Here we present a thin (1.5 \u03bcm), highly transparent, and at the same time flexible nanocomposite coating with an exceptionally low OTR and WVTR (1.0 \u00d7 10(-2) cm(3) m(-2) day(-1) bar(-1) and <0.05 g m(-2) day(-1) at 50% RH, respectively). A commercially available polyurethane (Desmodur N 3600 and Desmophen 670 BA, Bayer MaterialScience AG) was filled with a delaminated synthetic layered silicate exhibiting huge aspect ratios of about 25,000. Functional films were prepared by simple doctor-blading a suspension of the matrix and the organophilized clay. This preparation procedure is technically benign, is easy to scale up, and may readily be applied for encapsulation of sensitive flexible electronics.", "which Coating thickness, micrometers ?", "1.5", 325.0, 328.0], ["Wetlands mapping using multispectral imagery from Landsat multispectral scanner (MSS) and thematic mapper (TM) and Syst\u00e8me pour l'observation de la Terre (SPOT) does not in general provide high classification accuracies because of poor spectral and spatial resolutions. This study tests the feasibility of using high-resolution hyperspectral imagery to map wetlands in Iowa with two nontraditional classification techniques: the spectral angle mapper (SAM) method and a new nonparametric object-oriented (OO) classification. The software programs used were ENVI and eCognition. Accuracies of these classified images were assessed by using the information collected through a field survey with a global positioning system and high-resolution color infrared images. Wetlands were identified more accurately with the OO method (overall accuracy 92.3%) than with SAM (63.53%). This paper also discusses the limitations of these classification techniques for wetlands, as well as discussing future directions for study.", "which SAM Accuracy (%) ?", "63.53", 864.0, 869.0], ["Hyperspectral Imaging (HSI) is used to provide a wealth of information which can be used to address a variety of problems in different applications. The main requirement in all applications is the classification of HSI data. In this paper, supervised HSI classification algorithms are used to extract agriculture areas that specialize in wheat growing and get a classified image. In particular, Parallelepiped and Spectral Angle Mapper (SAM) algorithms are used. They are implemented by a software tool used to analyse and process geospatial images that is an Environment of Visualizing Images (ENVI). They are applied on Al-Kharj, Saudi Arabia as the study area. The overall accuracy after applying the algorithms on the image of the study area for SAM classification was 66.67%, and 33.33% for Parallelepiped classification. 
Therefore, SAM algorithm has provided a better study area image classification.", "which SAM Accuracy (%) ?", "66.67", 773.0, 778.0], ["Raman spectroscopy, complemented by infrared spectroscopy has been used to characterise the ferroaxinite minerals of theoretical formula Ca2Fe2+Al2BSi4O15(OH), a ferrous aluminium borosilicate. The Raman spectra are complex but are subdivided into sections based upon the vibrating units. The Raman spectra are interpreted in terms of the addition of borate and silicate spectra. Three characteristic bands of ferroaxinite are observed at 1082, 1056 and 1025 cm-1 and are attributed to BO4 stretching vibrations. Bands at 1003, 991, 980 and 963 cm-1 are assigned to SiO4 stretching vibrations. Bands are found in these positions for each of the ferroaxinites studied. No Raman bands were found above 1100 cm-1 showing that ferroaxinites contain only tetrahedral boron. The hydroxyl stretching region of ferroaxinites is characterised by a single Raman band between 3368 and 3376 cm-1, the position of which is sample dependent. Bands for ferroaxinite at 678, 643, 618, 609, 588, 572, 546 cm-1 may be attributed to the \u03bd4 bending modes and the three bands at 484, 444 and 428 cm-1 may be attributed to the \u03bd2 bending modes of the (SiO4)2-.", "which ferroaxinite (OH) ?", "3376", 874.0, 878.0], ["Selected joaquinite minerals have been studied by Raman spectroscopy. The minerals are categorised into two groups depending upon whether bands occur in the 3250 to 3450 cm\u22121 region and in the 3450 to 3600 cm\u22121 region, or in the latter region only. The first set of bands is attributed to water stretching vibrations and the second set to OH stretching bands. In the literature, X-ray diffraction could not identify the presence of OH units in the structure of joaquinite. Raman spectroscopy proves that the joaquinite mineral group contains OH units in their structure, and in some cases both water and OH units. A series of bands at 1123, 1062, 1031, 971, 912 and 892 cm\u22121 are assigned to SiO stretching vibrations. Bands above 1000 cm\u22121 are attributable to the \u03bdas modes of the (SiO4)4\u2212 and (Si2O7)6\u2212 units. Bands that are observed at 738, around 700, 682 and around 668, 621 and 602 cm\u22121 are attributed to OSiO bending modes. The patterns do not appear to match the published infrared spectral patterns of either (SiO4)4\u2212 or (Si2O7)6\u2212 units. The reason is attributed to the actual formulation of the joaquinite mineral, in which significant amounts of Ti or Nb and Fe are found. Copyright \u00a9 2007 John Wiley & Sons, Ltd.", "which Joaquinite (OH) ?", "3600", 201.0, 205.0], ["Abstract A short fragment of mt DNA from the cytochrome c oxidase 1 (CO1) region was used to provide the first CO1 barcodes for 37 species of Canadian mosquitoes (Diptera: Culicidae) from the provinces Ontario and New Brunswick. Sequence variation was analysed in a 617\u2010bp fragment from the 5\u2032 end of the CO1 region. Sequences of each mosquito species formed barcode clusters with tight cohesion that were usually clearly distinct from those of allied species. 
CO1 sequence divergences were, on average, nearly 20 times higher for congeneric species than for members of a species; divergences between congeneric species averaged 10.4% (range 0.2\u201317.2%), whereas those for conspecific individuals averaged 0.5% (range 0.0\u20133.9%).", "which nearest neighbor distance lower limit ?", "0.2", 642.0, 645.0], ["Background With about 1,000 species in the Neotropics, the Eumaeini (Theclinae) are one of the most diverse butterfly tribes. Correct morphology-based identifications are challenging in many genera due to relatively little interspecific differences in wing patterns. Geographic infraspecific variation is sometimes more substantial than variation between species. In this paper we present a large DNA barcode dataset of South American Lycaenidae. We analyze how well DNA barcode BINs match morphologically delimited species. Methods We compare morphology-based species identifications with the clustering of molecular operational taxonomic units (MOTUs) delimitated by the RESL algorithm in BOLD, which assigns Barcode Index Numbers (BINs). We examine intra- and interspecific divergences for genera represented by at least four morphospecies. We discuss the existence of local barcode gaps in a genus by genus analysis. We also note differences in the percentage of species with barcode gaps in groups of lowland and high mountain genera. Results We identified 2,213 specimens and obtained 1,839 sequences of 512 species in 90 genera. Overall, the mean intraspecific divergence value of CO1 sequences was 1.20%, while the mean interspecific divergence between nearest congeneric neighbors was 4.89%, demonstrating the presence of a barcode gap. However, the gap seemed to disappear from the entire set when comparing the maximum intraspecific distance (8.40%) with the minimum interspecific distance (0.40%). Clear barcode gaps are present in many genera but absent in others. From the set of specimens that yielded COI fragment lengths of at least 650 bp, 75% of the a priori morphology-based identifications were unambiguously assigned to a single Barcode Index Number (BIN). However, after a taxonomic a posteriori review, the percentage of matched identifications rose to 85%. BIN splitting was observed for 17% of the species and BIN sharing for 9%. We found that genera that contain primarily lowland species show higher percentages of local barcode gaps and congruence between BINs and morphology than genera that contain exclusively high montane species. The divergence values to the nearest neighbors were significantly lower in high Andean species while the intra-specific divergence values were significantly lower in the lowland species. These results raise questions regarding the causes of observed low inter and high intraspecific genetic variation. We discuss incomplete lineage sorting and hybridization as most likely causes of this phenomenon, as the montane species concerned are relatively young and hybridization is probable. 
The release of our data set represents an essential baseline for a reference library for biological assessment studies of butterflies in mega diverse countries using modern high-throughput technologies and highlights the necessity of taxonomic revisions for various genera combining both molecular and morphological data.", "which nearest neighbor distance lower limit ?", "0.4", NaN, NaN], ["Abstract Background Various methods have been proposed to assign unknown specimens to known species using their DNA barcodes, while others have focused on using genetic divergence thresholds to estimate \u201cspecies\u201d diversity for a taxon, without a well-developed taxonomy and/or an extensive reference library of DNA barcodes. The major goals of the present work were to: a) conduct the largest species-level barcoding study of the Muscidae to date and characterize the range of genetic divergence values in the northern Nearctic fauna; b) evaluate the correspondence between morphospecies and barcode groupings defined using both clustering-based and threshold-based approaches; and c) use the reference library produced to address taxonomic issues. Results Our data set included 1114 individuals and their COI sequences (951 from Churchill, Manitoba), representing 160 morphologically-determined species from 25 genera, covering 89% of the known fauna of Churchill and 23% of the Nearctic fauna. Following an iterative process through which all specimens belonging to taxa with anomalous divergence values and/or monophyly issues were re-examined, identity was modified for 9 taxa, including the reinstatement of Phaonia luteva (Walker) stat. nov. as a species distinct from Phaonia errans (Meigen). In the post-reassessment data set, no distinct gap was found between maximum pairwise intraspecific distances (range 0.00-3.01%) and minimum interspecific distances (range: 0.77-11.33%). Nevertheless, using a clustering-based approach, all individuals within 98% of species grouped with their conspecifics with high (>95%) bootstrap support; in contrast, a maximum species discrimination rate of 90% was obtained at the optimal threshold of 1.2%. DNA barcoding enabled the determination of females from 5 ambiguous species pairs and confirmed that 16 morphospecies were genetically distinct from named taxa. There were morphological differences among all distinct genetic clusters; thus, no cases of cryptic species were detected. Conclusions Our findings reveal the great utility of building a well-populated, species-level reference barcode database against which to compare unknowns. When such a library is unavailable, it is still possible to obtain a fairly accurate (within ~10%) rapid assessment of species richness based upon a barcode divergence threshold alone, but this approach is most accurate when the threshold is tuned to a particular taxon.", "which nearest neighbor distance lower limit ?", "0.77", 1473.0, 1477.0], ["DNA barcodes were obtained for 81 butterfly species belonging to 52 genera from sites in north\u2010central Pakistan to test the utility of barcoding for their identification and to gain a better understanding of regional barcode variation. These species represent 25% of the butterfly fauna of Pakistan and belong to five families, although the Nymphalidae were dominant, comprising 38% of the total specimens. Barcode analysis showed that maximum conspecific divergence was 1.6%, while there was 1.7\u201314.3% divergence from the nearest neighbour species. 
Barcode records for 55 species showed <2% sequence divergence to records in the Barcode of Life Data Systems (BOLD), but only 26 of these cases involved specimens from neighbouring India and Central Asia. Analysis revealed that most species showed little incremental sequence variation when specimens from other regions were considered, but a threefold increase was noted in a few cases. There was a clear gap between maximum intraspecific and minimum nearest neighbour distance for all 81 species. Neighbour\u2010joining cluster analysis showed that members of each species formed a monophyletic cluster with strong bootstrap support. The barcode results revealed two provisional species that could not be clearly linked to known taxa, while 24 other species gained their first coverage. Future work should extend the barcode reference library to include all butterfly species from Pakistan as well as neighbouring countries to gain a better understanding of regional variation in barcode sequences in this topographically and climatically complex region.", "which nearest neighbor distance lower limit ?", "1.7", 493.0, 496.0], ["Background Although they are important disease vectors mosquito biodiversity in Pakistan is poorly known. Recent epidemics of dengue fever have revealed the need for more detailed understanding of the diversity and distributions of mosquito species in this region. DNA barcoding improves the accuracy of mosquito inventories because morphological differences between many species are subtle, leading to misidentifications. Methodology/Principal Findings Sequence variation in the barcode region of the mitochondrial COI gene was used to identify mosquito species, reveal genetic diversity, and map the distribution of the dengue-vector species in Pakistan. Analysis of 1684 mosquitoes from 491 sites in Punjab and Khyber Pakhtunkhwa during 2010\u20132013 revealed 32 species with the assemblage dominated by Culex quinquefasciatus (61% of the collection). The genus Aedes (Stegomyia) comprised 15% of the specimens, and was represented by six taxa with the two dengue vector species, Ae. albopictus and Ae. aegypti, dominant and broadly distributed. Anopheles made up another 6% of the catch with An. subpictus dominating. Barcode sequence divergence in conspecific specimens ranged from 0\u20132.4%, while congeneric species showed from 2.3\u201317.8% divergence. A global haplotype analysis of disease-vectors showed the presence of multiple haplotypes, although a single haplotype of each dengue-vector species was dominant in most countries. Geographic distribution of Ae. aegypti and Ae. albopictus showed the later species was dominant and found in both rural and urban environments. Conclusions As the first DNA-based analysis of mosquitoes in Pakistan, this study has begun the construction of a barcode reference library for the mosquitoes of this region. Levels of genetic diversity varied among species. Because of its capacity to differentiate species, even those with subtle morphological differences, DNA barcoding aids accurate tracking of vector populations.", "which nearest neighbor distance lower limit ?", "2.3", 1228.0, 1231.0], ["In this paper we investigate the utility of the COI DNA barcoding region for species identification and for revealing hidden diversity within the subgenus Trichodagmia and related taxa in the New World. In total, 24 morphospecies within the current expanded taxonomic concept of Trichodagmia were analyzed. 
Three species in the subgenus Aspathia and 10 species in the subgenus Simulium s.str. were also included in the analysis because of their putative phylogenetic relationship with Trichodagmia. In the Neighbour Joining analysis tree (NJ) derived from the DNA barcodes most of the specimens grouped together according to species or species groups as recognized by other morphotaxonomic studies. The interspecific genetic divergence averaged 11.2% (range 2.8\u201319.5%), whereas intraspecific genetic divergence within morphologically distinct species averaged 0.5% (range 0\u20131.2%). Higher values of genetic divergence (3.2\u20133.7%) in species complexes suggest the presence of cryptic diversity. The existence of well defined groups within S. piperi, S. duodenicornium, S. canadense and S. rostratum indicate the possible presence of cryptic species within these taxa. Also, the suspected presence of a sibling species in S. tarsatum and S. paynei is supported. DNA barcodes also showed that specimens from species that were taxonomically difficult to delimit such as S. hippovorum, S. rubrithorax, S. paynei, and other related taxa (S. solarii), grouped together in the NJ analysis, confirming the validity of their species status. The recovery of partial barcodes from specimens in collections was time consuming and PCR success was low from specimens more than 10 years old. However, when a sequence was obtained, it provided good resolution for species identification. Larvae preserved in \u2018weak\u2019 Carnoy\u2019s solution (9:1 ethanol:acetic acid) provided full DNA barcodes. Adding legs directly to the PCR mix from recently collected and preserved adults was an inexpensive, fast methodology to obtain full barcodes. In summary, DNA barcoding combined with a sound morphotaxonomic framework provides an effective approach for the delineation of species and for the discovery of hidden diversity in the subgenus Trichodagmia.", "which nearest neighbor distance lower limit ?", "2.8", 766.0, 769.0], ["DNA barcoding has gained increased recognition as a molecular tool for species identification in various groups of organisms. In this preliminary study, we tested the efficacy of a 615\u2010bp fragment of the cytochrome c oxidase I (COI) as a DNA barcode in the medically important family Simuliidae, or black flies. A total of 65 (25%) morphologically distinct species and sibling species in species complexes of the 255 recognized Nearctic black fly species were used to create a preliminary barcode profile for the family. Genetic divergence among congeners averaged 14.93% (range 2.83\u201315.33%), whereas intraspecific genetic divergence between morphologically distinct species averaged 0.72% (range 0\u20133.84%). DNA barcodes correctly identified nearly 100% of the morphologically distinct species (87% of the total sampled taxa), whereas in species complexes (13% of the sampled taxa) maximum values of divergence were comparatively higher (max. 4.58\u20136.5%), indicating cryptic diversity. The existence of sibling species in Prosimulium travisi and P. neomacropyga was also demonstrated, thus confirming previous cytological evidence about the existence of such cryptic diversity in these two taxa. 
We conclude that DNA barcoding is an effective method for species identification and discovery of cryptic diversity in black flies.", "which nearest neighbor distance lower limit ?", "2.83", 579.0, 583.0], ["This paper reports the first tests of the suitability of the standardized mitochondrial cytochrome c oxidase subunit I (COI) barcoding system for the identification of Canadian deerflies and horseflies. Two additional mitochondrial molecular markers were used to determine whether unambiguous species recognition in tabanids can be achieved. Our 332 Canadian tabanid samples yielded 650 sequences from five genera and 42 species. Standard COI barcodes demonstrated a strong A + T bias (mean 68.1%), especially at third codon positions (mean 93.0%). Our preliminary test of this system showed that the standard COI barcode worked well for Canadian Tabanidae: the target DNA can be easily recovered from small amounts of insect tissue and aligned for all tabanid taxa. Each tabanid species possessed distinctive sets of COI haplotypes which discriminated well among species. Average conspecific Kimura two\u2010parameter (K2P) divergence (0.49%) was 12 times lower than the average divergence within species. Both the neighbour\u2010joining and the Bayesian methods produced trees with identical monophyletic species groups. Two species, Chrysops dawsoni Philip and Chrysops montanus Osten Sacken (Diptera: Tabanidae), showed relatively deep intraspecific sequence divergences (\u223c10 times the average) for all three mitochondrial gene regions analysed. We suggest provisional differentiation of Ch. montanus into two haplotypes, namely, Ch. montanus haplomorph 1 and Ch. montanus haplomorph 2, both defined by their molecular sequences and by newly discovered differences in structural features near their ocelli.", "which Maximum intraspecific distances averaged ?", "0.49", 932.0, 936.0], ["Abstract A short fragment of mt DNA from the cytochrome c oxidase 1 (CO1) region was used to provide the first CO1 barcodes for 37 species of Canadian mosquitoes (Diptera: Culicidae) from the provinces Ontario and New Brunswick. Sequence variation was analysed in a 617\u2010bp fragment from the 5\u2032 end of the CO1 region. Sequences of each mosquito species formed barcode clusters with tight cohesion that were usually clearly distinct from those of allied species. CO1 sequence divergences were, on average, nearly 20 times higher for congeneric species than for members of a species; divergences between congeneric species averaged 10.4% (range 0.2\u201317.2%), whereas those for conspecific individuals averaged 0.5% (range 0.0\u20133.9%).", "which Maximum intraspecific distances averaged ?", "0.5", 705.0, 708.0], ["In this paper we investigate the utility of the COI DNA barcoding region for species identification and for revealing hidden diversity within the subgenus Trichodagmia and related taxa in the New World. In total, 24 morphospecies within the current expanded taxonomic concept of Trichodagmia were analyzed. Three species in the subgenus Aspathia and 10 species in the subgenus Simulium s.str. were also included in the analysis because of their putative phylogenetic relationship with Trichodagmia. In the Neighbour Joining analysis tree (NJ) derived from the DNA barcodes most of the specimens grouped together according to species or species groups as recognized by other morphotaxonomic studies. 
The interspecific genetic divergence averaged 11.2% (range 2.8\u201319.5%), whereas intraspecific genetic divergence within morphologically distinct species averaged 0.5% (range 0\u20131.2%). Higher values of genetic divergence (3.2\u20133.7%) in species complexes suggest the presence of cryptic diversity. The existence of well defined groups within S. piperi, S. duodenicornium, S. canadense and S. rostratum indicate the possible presence of cryptic species within these taxa. Also, the suspected presence of a sibling species in S. tarsatum and S. paynei is supported. DNA barcodes also showed that specimens from species that were taxonomically difficult to delimit such as S. hippovorum, S. rubrithorax, S. paynei, and other related taxa (S. solarii), grouped together in the NJ analysis, confirming the validity of their species status. The recovery of partial barcodes from specimens in collections was time consuming and PCR success was low from specimens more than 10 years old. However, when a sequence was obtained, it provided good resolution for species identification. Larvae preserved in \u2018weak\u2019 Carnoy\u2019s solution (9:1 ethanol:acetic acid) provided full DNA barcodes. Adding legs directly to the PCR mix from recently collected and preserved adults was an inexpensive, fast methodology to obtain full barcodes. In summary, DNA barcoding combined with a sound morphotaxonomic framework provides an effective approach for the delineation of species and for the discovery of hidden diversity in the subgenus Trichodagmia.", "which Maximum intraspecific distances averaged ?", "0.5", 868.0, 871.0], ["Because the tropical regions of America harbor the highest concentration of butterfly species, its fauna has attracted considerable attention. Much less is known about the butterflies of southern South America, particularly Argentina, where over 1,200 species occur. To advance understanding of this fauna, we assembled a DNA barcode reference library for 417 butterfly species of Argentina, focusing on the Atlantic Forest, a biodiversity hotspot. We tested the efficacy of this library for specimen identification, used it to assess the frequency of cryptic species, and examined geographic patterns of genetic variation, making this study the first large-scale genetic assessment of the butterflies of southern South America. The average sequence divergence to the nearest neighbor (i.e. minimum interspecific distance) was 6.91%, ten times larger than the mean distance to the furthest conspecific (0.69%), with a clear barcode gap present in all but four of the species represented by two or more specimens. As a consequence, the DNA barcode library was extremely effective in the discrimination of these species, allowing a correct identification in more than 95% of the cases. Singletons (i.e. species represented by a single sequence) were also distinguishable in the gene trees since they all had unique DNA barcodes, divergent from those of the closest non-conspecific. The clustering algorithms implemented recognized from 416 to 444 barcode clusters, suggesting that the actual diversity of butterflies in Argentina is 3%\u20139% higher than currently recognized. Furthermore, our survey added three new records of butterflies for the country (Eurema agave, Mithras hannelore, Melanis hillapana). 
In summary, this study not only supported the utility of DNA barcoding for the identification of the butterfly species of Argentina, but also highlighted several cases of both deep intraspecific and shallow interspecific divergence that should be studied in more detail.", "which Maximum intraspecific distances averaged ?", "0.69", 903.0, 907.0], ["Abstract DNA barcoding is a modern species identification technique that can be used to distinguish morphologically similar species, and is particularly useful when using small amounts of starting material from partial specimens or from immature stages. In order to use DNA barcoding in a surveillance program, a database containing mosquito barcode sequences is required. This study obtained Cytochrome Oxidase I (COI) sequences for 113 morphologically identified specimens, representing 29 species, six tribes and 12 genera; 17 of these species have not been previously barcoded. Three of the 29 species \u2013 Culex palpalis, Macleaya macmillani, and an unknown species originally identified as Tripteroides atripes \u2013 were initially misidentified as they are difficult to separate morphologically, highlighting the utility of DNA barcoding. While most species grouped separately (reciprocally monophyletic), the Cx. pipiens subgroup could not be genetically separated using COI. The average conspecific and congeneric p\u2010distance was 0.8% and 7.6%, respectively. In our study, we also demonstrate the utility of DNA barcoding in distinguishing exotics from endemic mosquitoes by identifying a single intercepted Stegomyia aegypti egg at an international airport. The use of DNA barcoding dramatically reduced the identification time required compared with rearing specimens through to adults, thereby demonstrating the value of this technique in biosecurity surveillance. The DNA barcodes produced by this study have been uploaded to the \u2018Mosquitoes of Australia\u2013Victoria\u2019 project on the Barcode of Life Database (BOLD), which will serve as a resource for the Victorian Arbovirus Disease Control Program and other national and international mosquito surveillance programs.", "which Maximum intraspecific distances averaged ?", "0.8", 1031.0, 1034.0], ["Background With about 1,000 species in the Neotropics, the Eumaeini (Theclinae) are one of the most diverse butterfly tribes. Correct morphology-based identifications are challenging in many genera due to relatively little interspecific differences in wing patterns. Geographic infraspecific variation is sometimes more substantial than variation between species. In this paper we present a large DNA barcode dataset of South American Lycaenidae. We analyze how well DNA barcode BINs match morphologically delimited species. Methods We compare morphology-based species identifications with the clustering of molecular operational taxonomic units (MOTUs) delimitated by the RESL algorithm in BOLD, which assigns Barcode Index Numbers (BINs). We examine intra- and interspecific divergences for genera represented by at least four morphospecies. We discuss the existence of local barcode gaps in a genus by genus analysis. We also note differences in the percentage of species with barcode gaps in groups of lowland and high mountain genera. Results We identified 2,213 specimens and obtained 1,839 sequences of 512 species in 90 genera. 
Overall, the mean intraspecific divergence value of CO1 sequences was 1.20%, while the mean interspecific divergence between nearest congeneric neighbors was 4.89%, demonstrating the presence of a barcode gap. However, the gap seemed to disappear from the entire set when comparing the maximum intraspecific distance (8.40%) with the minimum interspecific distance (0.40%). Clear barcode gaps are present in many genera but absent in others. From the set of specimens that yielded COI fragment lengths of at least 650 bp, 75% of the a priori morphology-based identifications were unambiguously assigned to a single Barcode Index Number (BIN). However, after a taxonomic a posteriori review, the percentage of matched identifications rose to 85%. BIN splitting was observed for 17% of the species and BIN sharing for 9%. We found that genera that contain primarily lowland species show higher percentages of local barcode gaps and congruence between BINs and morphology than genera that contain exclusively high montane species. The divergence values to the nearest neighbors were significantly lower in high Andean species while the intra-specific divergence values were significantly lower in the lowland species. These results raise questions regarding the causes of observed low inter and high intraspecific genetic variation. We discuss incomplete lineage sorting and hybridization as most likely causes of this phenomenon, as the montane species concerned are relatively young and hybridization is probable. The release of our data set represents an essential baseline for a reference library for biological assessment studies of butterflies in mega diverse countries using modern high-throughput technologies an highlights the necessity of taxonomic revisions for various genera combining both molecular and morphological data.", "which Maximum intraspecific distances averaged ?", "1.2", NaN, NaN], ["DNA barcodes were obtained for 81 butterfly species belonging to 52 genera from sites in north\u2010central Pakistan to test the utility of barcoding for their identification and to gain a better understanding of regional barcode variation. These species represent 25% of the butterfly fauna of Pakistan and belong to five families, although the Nymphalidae were dominant, comprising 38% of the total specimens. Barcode analysis showed that maximum conspecific divergence was 1.6%, while there was 1.7\u201314.3% divergence from the nearest neighbour species. Barcode records for 55 species showed <2% sequence divergence to records in the Barcode of Life Data Systems (BOLD), but only 26 of these cases involved specimens from neighbouring India and Central Asia. Analysis revealed that most species showed little incremental sequence variation when specimens from other regions were considered, but a threefold increase was noted in a few cases. There was a clear gap between maximum intraspecific and minimum nearest neighbour distance for all 81 species. Neighbour\u2010joining cluster analysis showed that members of each species formed a monophyletic cluster with strong bootstrap support. The barcode results revealed two provisional species that could not be clearly linked to known taxa, while 24 other species gained their first coverage. 
Future work should extend the barcode reference library to include all butterfly species from Pakistan as well as neighbouring countries to gain a better understanding of regional variation in barcode sequences in this topographically and climatically complex region.", "which Maximum intraspecific distances upper limit ?", "1.6", 471.0, 474.0], ["Background Although they are important disease vectors mosquito biodiversity in Pakistan is poorly known. Recent epidemics of dengue fever have revealed the need for more detailed understanding of the diversity and distributions of mosquito species in this region. DNA barcoding improves the accuracy of mosquito inventories because morphological differences between many species are subtle, leading to misidentifications. Methodology/Principal Findings Sequence variation in the barcode region of the mitochondrial COI gene was used to identify mosquito species, reveal genetic diversity, and map the distribution of the dengue-vector species in Pakistan. Analysis of 1684 mosquitoes from 491 sites in Punjab and Khyber Pakhtunkhwa during 2010\u20132013 revealed 32 species with the assemblage dominated by Culex quinquefasciatus (61% of the collection). The genus Aedes (Stegomyia) comprised 15% of the specimens, and was represented by six taxa with the two dengue vector species, Ae. albopictus and Ae. aegypti, dominant and broadly distributed. Anopheles made up another 6% of the catch with An. subpictus dominating. Barcode sequence divergence in conspecific specimens ranged from 0\u20132.4%, while congeneric species showed from 2.3\u201317.8% divergence. A global haplotype analysis of disease-vectors showed the presence of multiple haplotypes, although a single haplotype of each dengue-vector species was dominant in most countries. Geographic distribution of Ae. aegypti and Ae. albopictus showed the later species was dominant and found in both rural and urban environments. Conclusions As the first DNA-based analysis of mosquitoes in Pakistan, this study has begun the construction of a barcode reference library for the mosquitoes of this region. Levels of genetic diversity varied among species. Because of its capacity to differentiate species, even those with subtle morphological differences, DNA barcoding aids accurate tracking of vector populations.", "which Maximum intraspecific distances upper limit ?", "2.4", 1185.0, 1188.0], ["In this paper we investigate the utility of the COI DNA barcoding region for species identification and for revealing hidden diversity within the subgenus Trichodagmia and related taxa in the New World. In total, 24 morphospecies within the current expanded taxonomic concept of Trichodagmia were analyzed. Three species in the subgenus Aspathia and 10 species in the subgenus Simulium s.str. were also included in the analysis because of their putative phylogenetic relationship with Trichodagmia. In the Neighbour Joining analysis tree (NJ) derived from the DNA barcodes most of the specimens grouped together according to species or species groups as recognized by other morphotaxonomic studies. The interspecific genetic divergence averaged 11.2% (range 2.8\u201319.5%), whereas intraspecific genetic divergence within morphologically distinct species averaged 0.5% (range 0\u20131.2%). Higher values of genetic divergence (3.2\u20133.7%) in species complexes suggest the presence of cryptic diversity. The existence of well defined groups within S. piperi, S. duodenicornium, S. canadense and S. 
rostratum indicate the possible presence of cryptic species within these taxa. Also, the suspected presence of a sibling species in S. tarsatum and S. paynei is supported. DNA barcodes also showed that specimens from species that were taxonomically difficult to delimit such as S. hippovorum, S. rubrithorax, S. paynei, and other related taxa (S. solarii), grouped together in the NJ analysis, confirming the validity of their species status. The recovery of partial barcodes from specimens in collections was time consuming and PCR success was low from specimens more than 10 years old. However, when a sequence was obtained, it provided good resolution for species identification. Larvae preserved in \u2018weak\u2019 Carnoy\u2019s solution (9:1 ethanol:acetic acid) provided full DNA barcodes. Adding legs directly to the PCR mix from recently collected and preserved adults was an inexpensive, fast methodology to obtain full barcodes. In summary, DNA barcoding combined with a sound morphotaxonomic framework provides an effective approach for the delineation of species and for the discovery of hidden diversity in the subgenus Trichodagmia.", "which Maximum intraspecific distances upper limit ?", "3.7", 930.0, 933.0], ["Abstract A short fragment of mt DNA from the cytochrome c oxidase 1 (CO1) region was used to provide the first CO1 barcodes for 37 species of Canadian mosquitoes (Diptera: Culicidae) from the provinces Ontario and New Brunswick. Sequence variation was analysed in a 617\u2010bp fragment from the 5\u2032 end of the CO1 region. Sequences of each mosquito species formed barcode clusters with tight cohesion that were usually clearly distinct from those of allied species. CO1 sequence divergences were, on average, nearly 20 times higher for congeneric species than for members of a species; divergences between congeneric species averaged 10.4% (range 0.2\u201317.2%), whereas those for conspecific individuals averaged 0.5% (range 0.0\u20133.9%).", "which Maximum intraspecific distances upper limit ?", "3.9", 721.0, 724.0], ["Although correct taxonomy is paramount for disease control programs and epidemiological studies, morphology-based taxonomy of black flies is extremely difficult. In the present study, the utility of a partial sequence of the COI gene, the DNA barcoding region, for the identification of species of black flies from Mesoamerica was assessed. A total of 32 morphospecies were analyzed, one belonging to the genus Gigantodax and 31 species to the genus Simulium and six of its subgenera (Aspathia, Eusimulium, Notolepria, Psaroniocompsa, Psilopelmia, Trichodagmia). The Neighbour Joining tree (NJ) derived from the DNA barcodes grouped most specimens according to species or species groups recognized by morphotaxonomic studies. Intraspecific sequence divergences within morphologically distinct species ranged from 0.07% to 1.65%, while higher divergences (2.05%-6.13%) in species complexes suggested the presence of cryptic diversity. The existence of well-defined groups within S. callidum (Dyar & Shannon), S. quadrivittatum Loew, and S. samboni Jennings revealed the likely inclusion of cryptic species within these taxa. In addition, the suspected presence of sibling species within S. paynei Vargas and S. tarsatum Macquart was supported. DNA barcodes also showed that specimens of species that are difficult to delimit morphologically such as S. callidum, S. pseudocallidum D\u00edaz N\u00e1jera, S. 
travisi Vargas, Vargas & Ram\u00edrez-P\u00e9rez, relatives of the species complexes such as S. metallicum Bellardi s.l. (e.g., S. horacioi Okazawa & Onishi, S. jobbinsi Vargas, Mart\u00ednez Palacios, D\u00edaz N\u00e1jera, and S. puigi Vargas, Mart\u00ednez Palacios & D\u00edaz N\u00e1jera), and S. virgatum Coquillett complex (e.g., S. paynei and S. tarsatum) grouped together in the NJ analysis, suggesting they represent valid species. DNA barcoding combined with a sound morphotaxonomic framework provided an effective approach for the identification of medically important black flies species in Mesoamerica and for the discovery of hidden diversity within this group.", "which Maximum intraspecific distances upper limit ?", "6.13", 861.0, 865.0], ["DNA barcoding has gained increased recognition as a molecular tool for species identification in various groups of organisms. In this preliminary study, we tested the efficacy of a 615\u2010bp fragment of the cytochrome c oxidase I (COI) as a DNA barcode in the medically important family Simuliidae, or black flies. A total of 65 (25%) morphologically distinct species and sibling species in species complexes of the 255 recognized Nearctic black fly species were used to create a preliminary barcode profile for the family. Genetic divergence among congeners averaged 14.93% (range 2.83\u201315.33%), whereas intraspecific genetic divergence between morphologically distinct species averaged 0.72% (range 0\u20133.84%). DNA barcodes correctly identified nearly 100% of the morphologically distinct species (87% of the total sampled taxa), whereas in species complexes (13% of the sampled taxa) maximum values of divergence were comparatively higher (max. 4.58\u20136.5%), indicating cryptic diversity. The existence of sibling species in Prosimulium travisi and P. neomacropyga was also demonstrated, thus confirming previous cytological evidence about the existence of such cryptic diversity in these two taxa. We conclude that DNA barcoding is an effective method for species identification and discovery of cryptic diversity in black flies.", "which Maximum intraspecific distances upper limit ?", "6.5", 181.0, 184.0], ["Background With about 1,000 species in the Neotropics, the Eumaeini (Theclinae) are one of the most diverse butterfly tribes. Correct morphology-based identifications are challenging in many genera due to relatively little interspecific differences in wing patterns. Geographic infraspecific variation is sometimes more substantial than variation between species. In this paper we present a large DNA barcode dataset of South American Lycaenidae. We analyze how well DNA barcode BINs match morphologically delimited species. Methods We compare morphology-based species identifications with the clustering of molecular operational taxonomic units (MOTUs) delimitated by the RESL algorithm in BOLD, which assigns Barcode Index Numbers (BINs). We examine intra- and interspecific divergences for genera represented by at least four morphospecies. We discuss the existence of local barcode gaps in a genus by genus analysis. We also note differences in the percentage of species with barcode gaps in groups of lowland and high mountain genera. Results We identified 2,213 specimens and obtained 1,839 sequences of 512 species in 90 genera. Overall, the mean intraspecific divergence value of CO1 sequences was 1.20%, while the mean interspecific divergence between nearest congeneric neighbors was 4.89%, demonstrating the presence of a barcode gap. 
However, the gap seemed to disappear from the entire set when comparing the maximum intraspecific distance (8.40%) with the minimum interspecific distance (0.40%). Clear barcode gaps are present in many genera but absent in others. From the set of specimens that yielded COI fragment lengths of at least 650 bp, 75% of the a priori morphology-based identifications were unambiguously assigned to a single Barcode Index Number (BIN). However, after a taxonomic a posteriori review, the percentage of matched identifications rose to 85%. BIN splitting was observed for 17% of the species and BIN sharing for 9%. We found that genera that contain primarily lowland species show higher percentages of local barcode gaps and congruence between BINs and morphology than genera that contain exclusively high montane species. The divergence values to the nearest neighbors were significantly lower in high Andean species while the intra-specific divergence values were significantly lower in the lowland species. These results raise questions regarding the causes of observed low inter and high intraspecific genetic variation. We discuss incomplete lineage sorting and hybridization as most likely causes of this phenomenon, as the montane species concerned are relatively young and hybridization is probable. The release of our data set represents an essential baseline for a reference library for biological assessment studies of butterflies in mega diverse countries using modern high-throughput technologies an highlights the necessity of taxonomic revisions for various genera combining both molecular and morphological data.", "which Maximum intraspecific distances upper limit ?", "8.4", NaN, NaN], ["The ecological and medical importance of black flies drives the need for rapid and reliable identification of these minute, structurally uniform insects. We assessed the efficiency of DNA barcoding for species identification of tropical black flies. A total of 351 cytochrome c oxidase subunit 1 sequences were obtained from 41 species in six subgenera of the genus Simulium in Thailand. Despite high intraspecific genetic divergence (mean = 2.00%, maximum = 9.27%), DNA barcodes provided 96% correct identification. Barcodes also differentiated cytoforms of selected species complexes, albeit with varying levels of success. Perfect differentiation was achieved for two cytoforms of Simulium feuerborni, and 91% correct identification was obtained for the Simulium angulistylum complex. Low success (33%), however, was obtained for the Simulium siamense complex. The differential efficiency of DNA barcodes to discriminate cytoforms was attributed to different levels of genetic structure and demographic histories of the taxa. DNA barcode trees were largely congruent with phylogenies based on previous molecular, chromosomal and morphological analyses, but revealed inconsistencies that will require further evaluation.", "which Maximum intraspecific distances upper limit ?", "9.27", 459.0, 463.0], ["Background With about 1,000 species in the Neotropics, the Eumaeini (Theclinae) are one of the most diverse butterfly tribes. Correct morphology-based identifications are challenging in many genera due to relatively little interspecific differences in wing patterns. Geographic infraspecific variation is sometimes more substantial than variation between species. In this paper we present a large DNA barcode dataset of South American Lycaenidae. We analyze how well DNA barcode BINs match morphologically delimited species. 
Methods We compare morphology-based species identifications with the clustering of molecular operational taxonomic units (MOTUs) delimitated by the RESL algorithm in BOLD, which assigns Barcode Index Numbers (BINs). We examine intra- and interspecific divergences for genera represented by at least four morphospecies. We discuss the existence of local barcode gaps in a genus by genus analysis. We also note differences in the percentage of species with barcode gaps in groups of lowland and high mountain genera. Results We identified 2,213 specimens and obtained 1,839 sequences of 512 species in 90 genera. Overall, the mean intraspecific divergence value of CO1 sequences was 1.20%, while the mean interspecific divergence between nearest congeneric neighbors was 4.89%, demonstrating the presence of a barcode gap. However, the gap seemed to disappear from the entire set when comparing the maximum intraspecific distance (8.40%) with the minimum interspecific distance (0.40%). Clear barcode gaps are present in many genera but absent in others. From the set of specimens that yielded COI fragment lengths of at least 650 bp, 75% of the a priori morphology-based identifications were unambiguously assigned to a single Barcode Index Number (BIN). However, after a taxonomic a posteriori review, the percentage of matched identifications rose to 85%. BIN splitting was observed for 17% of the species and BIN sharing for 9%. We found that genera that contain primarily lowland species show higher percentages of local barcode gaps and congruence between BINs and morphology than genera that contain exclusively high montane species. The divergence values to the nearest neighbors were significantly lower in high Andean species while the intra-specific divergence values were significantly lower in the lowland species. These results raise questions regarding the causes of observed low inter and high intraspecific genetic variation. We discuss incomplete lineage sorting and hybridization as most likely causes of this phenomenon, as the montane species concerned are relatively young and hybridization is probable. The release of our data set represents an essential baseline for a reference library for biological assessment studies of butterflies in mega diverse countries using modern high-throughput technologies an highlights the necessity of taxonomic revisions for various genera combining both molecular and morphological data.", "which nearest neighbor distance averaged ?", "4.89", 1294.0, 1298.0], ["Abstract The DNA barcode reference library for Lepidoptera holds much promise as a tool for taxonomic research and for providing the reliable identifications needed for conservation assessment programs. We gathered sequences for the barcode region of the mitochondrial cytochrome c oxidase subunit I gene from 160 of the 176 nominal species of Erebidae moths (Insecta: Lepidoptera) known from the Iberian Peninsula. These results arise from a research project which constructing a DNA barcode library for the insect species of Spain. New records for 271 specimens (122 species) are coupled with preexisting data for 38 species from the Iberian fauna. Mean interspecific distance was 12.1%, while the mean nearest neighbour divergence was 6.4%. All 160 species possessed diagnostic barcode sequences, but one pair of congeneric taxa (Eublemma rosea and Eublemma rietzi) were assigned to the same BIN. As well, intraspecific sequence divergences higher than 1.5% were detected in four species which likely represent species complexes. 
This study reinforces the effectiveness of DNA barcoding as a tool for monitoring biodiversity in particular geographical areas and the strong correspondence between sequence clusters delineated by BINs and species recognized through detailed taxonomic analysis.", "which nearest neighbor distance averaged ?", "6.4", 738.0, 741.0], ["Because the tropical regions of America harbor the highest concentration of butterfly species, its fauna has attracted considerable attention. Much less is known about the butterflies of southern South America, particularly Argentina, where over 1,200 species occur. To advance understanding of this fauna, we assembled a DNA barcode reference library for 417 butterfly species of Argentina, focusing on the Atlantic Forest, a biodiversity hotspot. We tested the efficacy of this library for specimen identification, used it to assess the frequency of cryptic species, and examined geographic patterns of genetic variation, making this study the first large-scale genetic assessment of the butterflies of southern South America. The average sequence divergence to the nearest neighbor (i.e. minimum interspecific distance) was 6.91%, ten times larger than the mean distance to the furthest conspecific (0.69%), with a clear barcode gap present in all but four of the species represented by two or more specimens. As a consequence, the DNA barcode library was extremely effective in the discrimination of these species, allowing a correct identification in more than 95% of the cases. Singletons (i.e. species represented by a single sequence) were also distinguishable in the gene trees since they all had unique DNA barcodes, divergent from those of the closest non-conspecific. The clustering algorithms implemented recognized from 416 to 444 barcode clusters, suggesting that the actual diversity of butterflies in Argentina is 3%\u20139% higher than currently recognized. Furthermore, our survey added three new records of butterflies for the country (Eurema agave, Mithras hannelore, Melanis hillapana). In summary, this study not only supported the utility of DNA barcoding for the identification of the butterfly species of Argentina, but also highlighted several cases of both deep intraspecific and shallow interspecific divergence that should be studied in more detail.", "which nearest neighbor distance averaged ?", "6.91", 827.0, 831.0], ["Abstract DNA barcoding is a modern species identification technique that can be used to distinguish morphologically similar species, and is particularly useful when using small amounts of starting material from partial specimens or from immature stages. In order to use DNA barcoding in a surveillance program, a database containing mosquito barcode sequences is required. This study obtained Cytochrome Oxidase I (COI) sequences for 113 morphologically identified specimens, representing 29 species, six tribes and 12 genera; 17 of these species have not been previously barcoded. Three of the 29 species \u2013 Culex palpalis, Macleaya macmillani, and an unknown species originally identified as Tripteroides atripes \u2013 were initially misidentified as they are difficult to separate morphologically, highlighting the utility of DNA barcoding. While most species grouped separately (reciprocally monophyletic), the Cx. pipiens subgroup could not be genetically separated using COI. The average conspecific and congeneric p\u2010distance was 0.8% and 7.6%, respectively. 
In our study, we also demonstrate the utility of DNA barcoding in distinguishing exotics from endemic mosquitoes by identifying a single intercepted Stegomyia aegypti egg at an international airport. The use of DNA barcoding dramatically reduced the identification time required compared with rearing specimens through to adults, thereby demonstrating the value of this technique in biosecurity surveillance. The DNA barcodes produced by this study have been uploaded to the \u2018Mosquitoes of Australia\u2013Victoria\u2019 project on the Barcode of Life Database (BOLD), which will serve as a resource for the Victorian Arbovirus Disease Control Program and other national and international mosquito surveillance programs.", "which nearest neighbor distance averaged ?", "7.6", 1040.0, 1043.0], ["Abstract A short fragment of mt DNA from the cytochrome c oxidase 1 (CO1) region was used to provide the first CO1 barcodes for 37 species of Canadian mosquitoes (Diptera: Culicidae) from the provinces Ontario and New Brunswick. Sequence variation was analysed in a 617\u2010bp fragment from the 5\u2032 end of the CO1 region. Sequences of each mosquito species formed barcode clusters with tight cohesion that were usually clearly distinct from those of allied species. CO1 sequence divergences were, on average, nearly 20 times higher for congeneric species than for members of a species; divergences between congeneric species averaged 10.4% (range 0.2\u201317.2%), whereas those for conspecific individuals averaged 0.5% (range 0.0\u20133.9%).", "which nearest neighbor distance averaged ?", "10.4", 629.0, 633.0], ["In this paper we investigate the utility of the COI DNA barcoding region for species identification and for revealing hidden diversity within the subgenus Trichodagmia and related taxa in the New World. In total, 24 morphospecies within the current expanded taxonomic concept of Trichodagmia were analyzed. Three species in the subgenus Aspathia and 10 species in the subgenus Simulium s.str. were also included in the analysis because of their putative phylogenetic relationship with Trichodagmia. In the Neighbour Joining analysis tree (NJ) derived from the DNA barcodes most of the specimens grouped together according to species or species groups as recognized by other morphotaxonomic studies. The interspecific genetic divergence averaged 11.2% (range 2.8\u201319.5%), whereas intraspecific genetic divergence within morphologically distinct species averaged 0.5% (range 0\u20131.2%). Higher values of genetic divergence (3.2\u20133.7%) in species complexes suggest the presence of cryptic diversity. The existence of well defined groups within S. piperi, S. duodenicornium, S. canadense and S. rostratum indicate the possible presence of cryptic species within these taxa. Also, the suspected presence of a sibling species in S. tarsatum and S. paynei is supported. DNA barcodes also showed that specimens from species that were taxonomically difficult to delimit such as S. hippovorum, S. rubrithorax, S. paynei, and other related taxa (S. solarii), grouped together in the NJ analysis, confirming the validity of their species status. The recovery of partial barcodes from specimens in collections was time consuming and PCR success was low from specimens more than 10 years old. However, when a sequence was obtained, it provided good resolution for species identification. Larvae preserved in \u2018weak\u2019 Carnoy\u2019s solution (9:1 ethanol:acetic acid) provided full DNA barcodes. 
Adding legs directly to the PCR mix from recently collected and preserved adults was an inexpensive, fast methodology to obtain full barcodes. In summary, DNA barcoding combined with a sound morphotaxonomic framework provides an effective approach for the delineation of species and for the discovery of hidden diversity in the subgenus Trichodagmia.", "which nearest neighbor distance averaged ?", "11.2", 753.0, 757.0], ["DNA barcoding has gained increased recognition as a molecular tool for species identification in various groups of organisms. In this preliminary study, we tested the efficacy of a 615\u2010bp fragment of the cytochrome c oxidase I (COI) as a DNA barcode in the medically important family Simuliidae, or black flies. A total of 65 (25%) morphologically distinct species and sibling species in species complexes of the 255 recognized Nearctic black fly species were used to create a preliminary barcode profile for the family. Genetic divergence among congeners averaged 14.93% (range 2.83\u201315.33%), whereas intraspecific genetic divergence between morphologically distinct species averaged 0.72% (range 0\u20133.84%). DNA barcodes correctly identified nearly 100% of the morphologically distinct species (87% of the total sampled taxa), whereas in species complexes (13% of the sampled taxa) maximum values of divergence were comparatively higher (max. 4.58\u20136.5%), indicating cryptic diversity. The existence of sibling species in Prosimulium travisi and P. neomacropyga was also demonstrated, thus confirming previous cytological evidence about the existence of such cryptic diversity in these two taxa. We conclude that DNA barcoding is an effective method for species identification and discovery of cryptic diversity in black flies.", "which nearest neighbor distance averaged ?", "14.93", 565.0, 570.0], ["Abstract Background Various methods have been proposed to assign unknown specimens to known species using their DNA barcodes, while others have focused on using genetic divergence thresholds to estimate \u201cspecies\u201d diversity for a taxon, without a well-developed taxonomy and/or an extensive reference library of DNA barcodes. The major goals of the present work were to: a) conduct the largest species-level barcoding study of the Muscidae to date and characterize the range of genetic divergence values in the northern Nearctic fauna; b) evaluate the correspondence between morphospecies and barcode groupings defined using both clustering-based and threshold-based approaches; and c) use the reference library produced to address taxonomic issues. Results Our data set included 1114 individuals and their COI sequences (951 from Churchill, Manitoba), representing 160 morphologically-determined species from 25 genera, covering 89% of the known fauna of Churchill and 23% of the Nearctic fauna. Following an iterative process through which all specimens belonging to taxa with anomalous divergence values and/or monophyly issues were re-examined, identity was modified for 9 taxa, including the reinstatement of Phaonia luteva (Walker) stat. nov. as a species distinct from Phaonia errans (Meigen). In the post-reassessment data set, no distinct gap was found between maximum pairwise intraspecific distances (range 0.00-3.01%) and minimum interspecific distances (range: 0.77-11.33%). 
Nevertheless, using a clustering-based approach, all individuals within 98% of species grouped with their conspecifics with high (>95%) bootstrap support; in contrast, a maximum species discrimination rate of 90% was obtained at the optimal threshold of 1.2%. DNA barcoding enabled the determination of females from 5 ambiguous species pairs and confirmed that 16 morphospecies were genetically distinct from named taxa. There were morphological differences among all distinct genetic clusters; thus, no cases of cryptic species were detected. Conclusions Our findings reveal the great utility of building a well-populated, species-level reference barcode database against which to compare unknowns. When such a library is unavailable, it is still possible to obtain a fairly accurate (within ~10%) rapid assessment of species richness based upon a barcode divergence threshold alone, but this approach is most accurate when the threshold is tuned to a particular taxon.", "which nearest neighbor distance upper limit ?", "11.33", 1478.0, 1483.0], ["DNA barcodes were obtained for 81 butterfly species belonging to 52 genera from sites in north\u2010central Pakistan to test the utility of barcoding for their identification and to gain a better understanding of regional barcode variation. These species represent 25% of the butterfly fauna of Pakistan and belong to five families, although the Nymphalidae were dominant, comprising 38% of the total specimens. Barcode analysis showed that maximum conspecific divergence was 1.6%, while there was 1.7\u201314.3% divergence from the nearest neighbour species. Barcode records for 55 species showed <2% sequence divergence to records in the Barcode of Life Data Systems (BOLD), but only 26 of these cases involved specimens from neighbouring India and Central Asia. Analysis revealed that most species showed little incremental sequence variation when specimens from other regions were considered, but a threefold increase was noted in a few cases. There was a clear gap between maximum intraspecific and minimum nearest neighbour distance for all 81 species. Neighbour\u2010joining cluster analysis showed that members of each species formed a monophyletic cluster with strong bootstrap support. The barcode results revealed two provisional species that could not be clearly linked to known taxa, while 24 other species gained their first coverage. Future work should extend the barcode reference library to include all butterfly species from Pakistan as well as neighbouring countries to gain a better understanding of regional variation in barcode sequences in this topographically and climatically complex region.", "which nearest neighbor distance upper limit ?", "14.3", 497.0, 501.0], ["Abstract A short fragment of mt DNA from the cytochrome c oxidase 1 (CO1) region was used to provide the first CO1 barcodes for 37 species of Canadian mosquitoes (Diptera: Culicidae) from the provinces Ontario and New Brunswick. Sequence variation was analysed in a 617\u2010bp fragment from the 5\u2032 end of the CO1 region. Sequences of each mosquito species formed barcode clusters with tight cohesion that were usually clearly distinct from those of allied species. 
CO1 sequence divergences were, on average, nearly 20 times higher for congeneric species than for members of a species; divergences between congeneric species averaged 10.4% (range 0.2\u201317.2%), whereas those for conspecific individuals averaged 0.5% (range 0.0\u20133.9%).", "which nearest neighbor distance upper limit ?", "17.2", 646.0, 650.0], ["Background Although they are important disease vectors mosquito biodiversity in Pakistan is poorly known. Recent epidemics of dengue fever have revealed the need for more detailed understanding of the diversity and distributions of mosquito species in this region. DNA barcoding improves the accuracy of mosquito inventories because morphological differences between many species are subtle, leading to misidentifications. Methodology/Principal Findings Sequence variation in the barcode region of the mitochondrial COI gene was used to identify mosquito species, reveal genetic diversity, and map the distribution of the dengue-vector species in Pakistan. Analysis of 1684 mosquitoes from 491 sites in Punjab and Khyber Pakhtunkhwa during 2010\u20132013 revealed 32 species with the assemblage dominated by Culex quinquefasciatus (61% of the collection). The genus Aedes (Stegomyia) comprised 15% of the specimens, and was represented by six taxa with the two dengue vector species, Ae. albopictus and Ae. aegypti, dominant and broadly distributed. Anopheles made up another 6% of the catch with An. subpictus dominating. Barcode sequence divergence in conspecific specimens ranged from 0\u20132.4%, while congeneric species showed from 2.3\u201317.8% divergence. A global haplotype analysis of disease-vectors showed the presence of multiple haplotypes, although a single haplotype of each dengue-vector species was dominant in most countries. Geographic distribution of Ae. aegypti and Ae. albopictus showed the later species was dominant and found in both rural and urban environments. Conclusions As the first DNA-based analysis of mosquitoes in Pakistan, this study has begun the construction of a barcode reference library for the mosquitoes of this region. Levels of genetic diversity varied among species. Because of its capacity to differentiate species, even those with subtle morphological differences, DNA barcoding aids accurate tracking of vector populations.", "which nearest neighbor distance upper limit ?", "17.8", 1232.0, 1236.0], ["DNA barcoding has been an effective tool for species identification in several animal groups. Here, we used DNA barcoding to discriminate between 47 morphologically distinct species of Brazilian sand flies. DNA barcodes correctly identified approximately 90% of the sampled taxa (42 morphologically distinct species) using clustering based on neighbor-joining distance, of which four species showed comparatively higher maximum values of divergence (range 4.23\u201319.04%), indicating cryptic diversity. The DNA barcodes also corroborated the resurrection of two species within the shannoni complex and provided an efficient tool to differentiate between morphologically indistinguishable females of closely related species. 
Taken together, our results validate the effectiveness of DNA barcoding for species identification and the discovery of cryptic diversity in sand flies from Brazil.", "which nearest neighbor distance upper limit ?", "19.04", 461.0, 466.0], ["In this paper we investigate the utility of the COI DNA barcoding region for species identification and for revealing hidden diversity within the subgenus Trichodagmia and related taxa in the New World. In total, 24 morphospecies within the current expanded taxonomic concept of Trichodagmia were analyzed. Three species in the subgenus Aspathia and 10 species in the subgenus Simulium s.str. were also included in the analysis because of their putative phylogenetic relationship with Trichodagmia. In the Neighbour Joining analysis tree (NJ) derived from the DNA barcodes most of the specimens grouped together according to species or species groups as recognized by other morphotaxonomic studies. The interspecific genetic divergence averaged 11.2% (range 2.8\u201319.5%), whereas intraspecific genetic divergence within morphologically distinct species averaged 0.5% (range 0\u20131.2%). Higher values of genetic divergence (3.2\u20133.7%) in species complexes suggest the presence of cryptic diversity. The existence of well defined groups within S. piperi, S. duodenicornium, S. canadense and S. rostratum indicate the possible presence of cryptic species within these taxa. Also, the suspected presence of a sibling species in S. tarsatum and S. paynei is supported. DNA barcodes also showed that specimens from species that were taxonomically difficult to delimit such as S. hippovorum, S. rubrithorax, S. paynei, and other related taxa (S. solarii), grouped together in the NJ analysis, confirming the validity of their species status. The recovery of partial barcodes from specimens in collections was time consuming and PCR success was low from specimens more than 10 years old. However, when a sequence was obtained, it provided good resolution for species identification. Larvae preserved in \u2018weak\u2019 Carnoy\u2019s solution (9:1 ethanol:acetic acid) provided full DNA barcodes. Adding legs directly to the PCR mix from recently collected and preserved adults was an inexpensive, fast methodology to obtain full barcodes. In summary, DNA barcoding combined with a sound morphotaxonomic framework provides an effective approach for the delineation of species and for the discovery of hidden diversity in the subgenus Trichodagmia.", "which nearest neighbor distance upper limit ?", "19.5", 770.0, 774.0], ["Mosquitoes are insects of the Diptera, Nematocera, and Culicidae families, some species of which are important disease vectors. Identifying mosquito species based on morphological characteristics is difficult, particularly the identification of specimens collected in the field as part of disease surveillance programs. Because of this difficulty, we constructed DNA barcodes of the cytochrome c oxidase subunit 1, the COI gene, for the more common mosquito species in China, including the major disease vectors. A total of 404 mosquito specimens were collected and assigned to 15 genera and 122 species and subspecies on the basis of morphological characteristics. Individuals of the same species grouped closely together in a Neighborhood-Joining tree based on COI sequence similarity, regardless of collection site. COI gene sequence divergence was approximately 30 times higher for species in the same genus than for members of the same species. 
Divergence in over 98% of congeneric species ranged from 2.3% to 21.8%, whereas divergence in conspecific individuals ranged from 0% to 1.67%. Cryptic species may be common and a few pseudogenes were detected.", "which nearest neighbor distance upper limit ?", "21.8", 1015.0, 1019.0], ["This study reports the assembly of a DNA barcode reference library for species in the lepidopteran superfamily Noctuoidea from Canada and the USA. Based on the analysis of 69,378 specimens, the library provides coverage for 97.3% of the noctuoid fauna (3565 of 3664 species). In addition to verifying the strong performance of DNA barcodes in the discrimination of these species, the results indicate close congruence between the number of species analyzed (3565) and the number of sequence clusters (3816) recognized by the Barcode Index Number (BIN) system. Distributional patterns across 12 North American ecoregions are examined for the 3251 species that have GPS data while BIN analysis is used to quantify overlap between the noctuoid faunas of North America and other zoogeographic regions. This analysis reveals that 90% of North American noctuoids are endemic and that just 7.5% and 1.8% of BINs are shared with the Neotropics and with the Palearctic, respectively. One third (29) of the latter species are recent introductions and, as expected, they possess low intraspecific divergences.", "which Number of identified species with current taxonomy ?", "3565", 253.0, 257.0], ["The proliferation of DNA data is revolutionizing all fields of systematic research. DNA barcode sequences, now available for millions of specimens and several hundred thousand species, are increasingly used in algorithmic species delimitations. This is complicated by occasional incongruences between species and gene genealogies, as indicated by situations where conspecific individuals do not form a monophyletic cluster in a gene tree. In two previous reviews, non-monophyly has been reported as being common in mitochondrial DNA gene trees. We developed a novel web service \u201cMonophylizer\u201d to detect non-monophyly in phylogenetic trees and used it to ascertain the incidence of species non-monophyly in COI (a.k.a. cox1) barcode sequence data from 4977 species and 41,583 specimens of European Lepidoptera, the largest data set of DNA barcodes analyzed from this regard. Particular attention was paid to accurate species identification to ensure data integrity. We investigated the effects of tree-building method, sampling effort, and other methodological issues, all of which can influence estimates of non-monophyly. We found a 12% incidence of non-monophyly, a value significantly lower than that observed in previous studies. Neighbor joining (NJ) and maximum likelihood (ML) methods yielded almost equal numbers of non-monophyletic species, but 24.1% of these cases of non-monophyly were only found by one of these methods. Non-monophyletic species tend to show either low genetic distances to their nearest neighbors or exceptionally high levels of intraspecific variability. Cases of polyphyly in COI trees arising as a result of deep intraspecific divergence are negligible, as the detected cases reflected misidentifications or methodological errors. Taking into consideration variation in sampling effort, we estimate that the true incidence of non-monophyly is \u223c23%, but with operational factors still being included. 
Within the operational factors, we separately assessed the frequency of taxonomic limitations (presence of overlooked cryptic and oversplit species) and identification uncertainties. We observed that operational factors are potentially present in more than half (58.6%) of the detected cases of non-monophyly. Furthermore, we observed that in about 20% of non-monophyletic species and entangled species, the lineages involved are either allopatric or parapatric\u2014conditions where species delimitation is inherently subjective and particularly dependent on the species concept that has been adopted. These observations suggest that species-level non-monophyly in COI gene trees is less common than previously supposed, with many cases reflecting misidentifications, the subjectivity of species delimitation or other operational factors.", "which Number of identified species with current taxonomy ?", "4977", 751.0, 755.0], ["This study reports the assembly of a DNA barcode reference library for species in the lepidopteran superfamily Noctuoidea from Canada and the USA. Based on the analysis of 69,378 specimens, the library provides coverage for 97.3% of the noctuoid fauna (3565 of 3664 species). In addition to verifying the strong performance of DNA barcodes in the discrimination of these species, the results indicate close congruence between the number of species analyzed (3565) and the number of sequence clusters (3816) recognized by the Barcode Index Number (BIN) system. Distributional patterns across 12 North American ecoregions are examined for the 3251 species that have GPS data while BIN analysis is used to quantify overlap between the noctuoid faunas of North America and other zoogeographic regions. This analysis reveals that 90% of North American noctuoids are endemic and that just 7.5% and 1.8% of BINs are shared with the Neotropics and with the Palearctic, respectively. One third (29) of the latter species are recent introductions and, as expected, they possess low intraspecific divergences.", "which higher number estimated species ?", "3816", 501.0, 505.0], ["Biodiversity research in tropical ecosystems-popularized as the most biodiverse habitats on Earth-often neglects invertebrates, yet invertebrates represent the bulk of local species richness. Insect communities in particular remain strongly impeded by both Linnaean and Wallacean shortfalls, and identifying species often remains a formidable challenge inhibiting the use of these organisms as indicators for ecological and conservation studies. Here we use DNA barcoding as an alternative to the traditional taxonomic approach for characterizing and comparing the diversity of moth communities in two different ecosystems in Gabon. Though sampling remains very incomplete, as evidenced by the high proportion (59%) of species represented by singletons, our results reveal an outstanding diversity. With about 3500 specimens sequenced and representing 1385 BINs (Barcode Index Numbers, used as a proxy to species) in 23 families, the diversity of moths in the two sites sampled is higher than the current number of species listed for the entire country, highlighting the huge gap in biodiversity knowledge for this country. Both seasonal and spatial turnovers are strikingly high (18.3% of BINs shared between seasons, and 13.3% between sites) and draw attention to the need to account for these when running regional surveys. 
Our results also highlight the richness and singularity of savannah environments and emphasize the status of Central African ecosystems as hotspots of biodiversity.", "which higher number estimated species ?", "1385", 852.0, 856.0], ["This study reports the assembly of a DNA barcode reference library for species in the lepidopteran superfamily Noctuoidea from Canada and the USA. Based on the analysis of 69,378 specimens, the library provides coverage for 97.3% of the noctuoid fauna (3565 of 3664 species). In addition to verifying the strong performance of DNA barcodes in the discrimination of these species, the results indicate close congruence between the number of species analyzed (3565) and the number of sequence clusters (3816) recognized by the Barcode Index Number (BIN) system. Distributional patterns across 12 North American ecoregions are examined for the 3251 species that have GPS data while BIN analysis is used to quantify overlap between the noctuoid faunas of North America and other zoogeographic regions. This analysis reveals that 90% of North American noctuoids are endemic and that just 7.5% and 1.8% of BINs are shared with the Neotropics and with the Palearctic, respectively. One third (29) of the latter species are recent introductions and, as expected, they possess low intraspecific divergences.", "which No. of estimated species ?", "3816", 501.0, 505.0], ["The proliferation of DNA data is revolutionizing all fields of systematic research. DNA barcode sequences, now available for millions of specimens and several hundred thousand species, are increasingly used in algorithmic species delimitations. This is complicated by occasional incongruences between species and gene genealogies, as indicated by situations where conspecific individuals do not form a monophyletic cluster in a gene tree. In two previous reviews, non-monophyly has been reported as being common in mitochondrial DNA gene trees. We developed a novel web service \u201cMonophylizer\u201d to detect non-monophyly in phylogenetic trees and used it to ascertain the incidence of species non-monophyly in COI (a.k.a. cox1) barcode sequence data from 4977 species and 41,583 specimens of European Lepidoptera, the largest data set of DNA barcodes analyzed from this regard. Particular attention was paid to accurate species identification to ensure data integrity. We investigated the effects of tree-building method, sampling effort, and other methodological issues, all of which can influence estimates of non-monophyly. We found a 12% incidence of non-monophyly, a value significantly lower than that observed in previous studies. Neighbor joining (NJ) and maximum likelihood (ML) methods yielded almost equal numbers of non-monophyletic species, but 24.1% of these cases of non-monophyly were only found by one of these methods. Non-monophyletic species tend to show either low genetic distances to their nearest neighbors or exceptionally high levels of intraspecific variability. Cases of polyphyly in COI trees arising as a result of deep intraspecific divergence are negligible, as the detected cases reflected misidentifications or methodological errors. Taking into consideration variation in sampling effort, we estimate that the true incidence of non-monophyly is \u223c23%, but with operational factors still being included. Within the operational factors, we separately assessed the frequency of taxonomic limitations (presence of overlooked cryptic and oversplit species) and identification uncertainties. 
We observed that operational factors are potentially present in more than half (58.6%) of the detected cases of non-monophyly. Furthermore, we observed that in about 20% of non-monophyletic species and entangled species, the lineages involved are either allopatric or parapatric\u2014conditions where species delimitation is inherently subjective and particularly dependent on the species concept that has been adopted. These observations suggest that species-level non-monophyly in COI gene trees is less common than previously supposed, with many cases reflecting misidentifications, the subjectivity of species delimitation or other operational factors.", "which No. of estimated species ?", "4977", 751.0, 755.0], ["Biodiversity research in tropical ecosystems-popularized as the most biodiverse habitats on Earth-often neglects invertebrates, yet invertebrates represent the bulk of local species richness. Insect communities in particular remain strongly impeded by both Linnaean and Wallacean shortfalls, and identifying species often remains a formidable challenge inhibiting the use of these organisms as indicators for ecological and conservation studies. Here we use DNA barcoding as an alternative to the traditional taxonomic approach for characterizing and comparing the diversity of moth communities in two different ecosystems in Gabon. Though sampling remains very incomplete, as evidenced by the high proportion (59%) of species represented by singletons, our results reveal an outstanding diversity. With about 3500 specimens sequenced and representing 1385 BINs (Barcode Index Numbers, used as a proxy to species) in 23 families, the diversity of moths in the two sites sampled is higher than the current number of species listed for the entire country, highlighting the huge gap in biodiversity knowledge for this country. Both seasonal and spatial turnovers are strikingly high (18.3% of BINs shared between seasons, and 13.3% between sites) and draw attention to the need to account for these when running regional surveys. Our results also highlight the richness and singularity of savannah environments and emphasize the status of Central African ecosystems as hotspots of biodiversity.", "which No. of estimated species ?", "1385", 852.0, 856.0], ["The proliferation of DNA data is revolutionizing all fields of systematic research. DNA barcode sequences, now available for millions of specimens and several hundred thousand species, are increasingly used in algorithmic species delimitations. This is complicated by occasional incongruences between species and gene genealogies, as indicated by situations where conspecific individuals do not form a monophyletic cluster in a gene tree. In two previous reviews, non-monophyly has been reported as being common in mitochondrial DNA gene trees. We developed a novel web service \u201cMonophylizer\u201d to detect non-monophyly in phylogenetic trees and used it to ascertain the incidence of species non-monophyly in COI (a.k.a. cox1) barcode sequence data from 4977 species and 41,583 specimens of European Lepidoptera, the largest data set of DNA barcodes analyzed from this regard. Particular attention was paid to accurate species identification to ensure data integrity. We investigated the effects of tree-building method, sampling effort, and other methodological issues, all of which can influence estimates of non-monophyly. We found a 12% incidence of non-monophyly, a value significantly lower than that observed in previous studies. 
Neighbor joining (NJ) and maximum likelihood (ML) methods yielded almost equal numbers of non-monophyletic species, but 24.1% of these cases of non-monophyly were only found by one of these methods. Non-monophyletic species tend to show either low genetic distances to their nearest neighbors or exceptionally high levels of intraspecific variability. Cases of polyphyly in COI trees arising as a result of deep intraspecific divergence are negligible, as the detected cases reflected misidentifications or methodological errors. Taking into consideration variation in sampling effort, we estimate that the true incidence of non-monophyly is \u223c23%, but with operational factors still being included. Within the operational factors, we separately assessed the frequency of taxonomic limitations (presence of overlooked cryptic and oversplit species) and identification uncertainties. We observed that operational factors are potentially present in more than half (58.6%) of the detected cases of non-monophyly. Furthermore, we observed that in about 20% of non-monophyletic species and entangled species, the lineages involved are either allopatric or parapatric\u2014conditions where species delimitation is inherently subjective and particularly dependent on the species concept that has been adopted. These observations suggest that species-level non-monophyly in COI gene trees is less common than previously supposed, with many cases reflecting misidentifications, the subjectivity of species delimitation or other operational factors.", "which lower number estimated species ?", "4977", 751.0, 755.0], ["Ultralight graphene-based cellular elastomers are found to exhibit nearly frequency-independent piezoresistive behaviors. Surpassing the mechanoreceptors in the human skin, these graphene elastomers can provide an instantaneous and high-fidelity electrical response to dynamic pressures ranging from quasi-static up to 2000 Hz, and are capable of detecting ultralow pressures as small as 0.082 Pa.", "which Detection limit (Pa) ?", "0.082", 388.0, 393.0], ["The effects of gallium doping into indium\u2013zinc\u2013tin oxide (IZTO) thin film transistors (TFTs) and Ar/O2 plasma treatment on the performance of a\u2010IZTO TFT are reported. The Ga doping ratio is varied from 0 to 20%, and it is found that 10% gallium doping in a\u2010IZTO TFT results in a saturation mobility (\u00b5sat) of 11.80 cm2 V\u22121 s\u22121, a threshold voltage (Vth) of 0.17 V, subthreshold swing (SS) of 94 mV dec\u22121, and on/off current ratio (Ion/Ioff) of 1.21 \u00d7 107. Additionally, the performance of 10% Ga\u2010doped IZTO TFT can be further improved by Ar/O2 plasma treatment. It is found that 30 s plasma treatment gives the best TFT performances such as \u00b5sat of 30.60 cm2 V\u22121 s\u22121, Vth of 0.12 V, SS of 92 mV dec\u22121, and Ion/Ioff ratio of 7.90 \u00d7 107. The bias\u2010stability of 10% Ga\u2010doped IZTO TFT is also improved by 30 s plasma treatment. The enhancement of the TFT performance appears to be due to the reduction in the oxygen vacancy and OH concentrations.", "which Threshold Voltage (V) ?", "0.12", 675.0, 679.0], ["Sensitivity of the sensor is of great importance in practical applications of wearable electronics or smart robotics. In the present study, a capacitive sensor enhanced by a tilted micropillar array-structured dielectric layer is developed. 
Because the tilted micropillars undergo bending deformation rather than compression deformation, the distance between the electrodes is easier to change, even discarding the contribution of the air gap at the interface of the structured dielectric layer and the electrode, thus resulting in high pressure sensitivity (0.42 kPa-1) and very small detection limit (1 Pa). In addition, eliminating the presence of uncertain air gap, the dielectric layer is strongly bonded with the electrode, which makes the structure robust and endows the sensor with high stability and reliable capacitance response. These characteristics allow the device to remain in normal use without the need for repair or replacement despite mechanical damage. Moreover, the proposed sensor can be tailored to any size and shape, which is further demonstrated in wearable application. This work provides a new strategy for sensors that are required to be sensitive and reliable in actual applications.", "which Sensibility of the pressure sensor ( /kPa) ?", "0.42", 559.0, 563.0], ["In recent years, the development of electronic skin and smart wearable body sensors has put forward high requirements for flexible pressure sensors with high sensitivity and large linear measuring range. However it turns out to be difficult to increase both of them simultaneously. In this paper, a flexible capacitive pressure sensor based on porous carbon conductive paste-PDMS composite is reported, the sensitivity and the linear measuring range of which were developed using multiple methods including adjusting the stiffness of the dielectric layer material, fabricating micro-structure and increasing dielectric permittivity of dielectric layer. The capacitive pressure sensor reported here has a relatively high sensitivity of 1.1 kPa-1 and a large linear measuring range of 10 kPa, making the product of the sensitivity and linear measuring range is 11, which is higher than that of the most reported capacitive pressure sensor to our best knowledge. The sensor has a detection limit of 4 Pa, response time of 60 ms and great stability. Some potential applications of the sensor were demonstrated such as arterial pulse wave measuring and breathe measuring, which shows a promising candidate for wearable biomedical devices. In addition, a pressure sensor array based on the material was also fabricated and it could identify objects in the shape of different letters clearly, which shows a promising application in the future electronic skins.", "which Sensibility of the pressure sensor ( /kPa) ?", "1.1", 735.0, 738.0], ["In recent times, polymer-based flexible pressure sensors have been attracting a lot of attention because of their various applications. A highly sensitive and flexible sensor is suggested, capable of being attached to the human body, based on a three-dimensional dielectric elastomeric structure of polydimethylsiloxane (PDMS) and microsphere composite. This sensor has maximal porosity due to macropores created by sacrificial layer grains and micropores generated by microspheres pre-mixed with PDMS, allowing it to operate at a wider pressure range (~150 kPa) while maintaining a sensitivity (of 0.124 kPa\u22121 in a range of 0~15 kPa) better than in previous studies. The maximized pores can cause deformation in the structure, allowing for the detection of small changes in pressure. 
In addition to exhibiting a fast rise time (~167 ms) and fall time (~117 ms), as well as excellent reproducibility, the fabricated pressure sensor exhibits reliability in its response to repeated mechanical stimuli (2.5 kPa, 1000 cycles). As an application, we develop a wearable device for monitoring repeated tiny motions, such as the pulse on the human neck and swallowing at the Adam\u2019s apple. This sensory device is also used to detect movements in the index finger and to monitor an insole system in real-time.", "which Sensibility of the pressure sensor ( /kPa) ?", "0.124", 607.0, 612.0], ["Compositional engineering of recently arising methylammonium (MA) lead (Pb) halide based perovskites is an essential approach for finding better perovskite compositions to resolve still remaining issues of toxic Pb, long-term instability, etc. In this work, we carried out crystallographic, morphological, optical, and photovoltaic characterization of compositional MASn0.6Pb0.4I3-xBrx by gradually introducing bromine (Br) into parental Pb-Sn binary perovskite (MASn0.6Pb0.4I3) to elucidate its function in Sn-rich (Sn:Pb = 6:4) perovskites. We found significant advances in crystallinity and dense coverage of the perovskite films by inserting the Br into Sn-rich perovskite lattice. Furthermore, light-intensity-dependent open circuit voltage (Voc) measurement revealed much suppressed trap-assisted recombination for a proper Br-added (x = 0.4) device. These contributed to attaining the unprecedented power conversion efficiency of 12.1% and Voc of 0.78 V, which are, to the best of our knowledge, the highest performance in the Sn-rich (\u226560%) perovskite solar cells reported so far. In addition, impressive enhancement of photocurrent-output stability and little hysteresis were found, which paves the way for the development of environmentally benign (Pb reduction), stable monolithic tandem cells using the developed low band gap (1.24-1.26 eV) MASn0.6Pb0.4I3-xBrx with suggested composition (x = 0.2-0.4).", "which Open circuit voltage, Voc (V) ?", "0.78", 954.0, 958.0], ["Mixed tin (Sn)-lead (Pb) perovskites with high Sn content exhibit low bandgaps suitable for fabricating the bottom cell of perovskite-based tandem solar cells. In this work, we report on the fabrication of efficient mixed Sn-Pb perovskite solar cells using precursors combining formamidinium tin iodide (FASnI3) and methylammonium lead iodide (MAPbI3). The best-performing cell fabricated using a (FASnI3)0.6(MAPbI3)0.4 absorber with an absorption edge of \u223c1.2 eV achieved a power conversion efficiency (PCE) of 15.08 (15.00)% with an open-circuit voltage of 0.795 (0.799) V, a short-circuit current density of 26.86(26.82) mA/cm(2), and a fill factor of 70.6(70.0)% when measured under forward (reverse) voltage scan. The average PCE of 50 cells we have fabricated is 14.39 \u00b1 0.33%, indicating good reproducibility.", "which Open circuit voltage, Voc (V) ?", "0.795", 559.0, 564.0], ["Inverted perovskite solar cells (PSCs) have been becoming more and more attractive, owing to their easy-fabrication and suppressed hysteresis, while the ion diffusion between metallic electrode and perovskite layer limit the long-term stability of devices. In this work, we employed a novel polyethylenimine (PEI) modified cross-stacked superaligned carbon nanotube (CSCNT) film in the inverted planar PSCs configurated FTO/NiO x/methylammonium lead tri-iodide (MAPbI3)/6, 6-phenyl C61-butyric acid methyl ester (PCBM)/CSCNT:PEI. 
By modifying CSCNT with a certain concentration of PEI (0.5 wt %), suitable energy level alignment and promoted interfacial charge transfer have been achieved, leading to a significant enhancement in the photovoltaic performance. As a result, a champion power conversion efficiency (PCE) of \u223c11% was obtained with a Voc of 0.95 V, a Jsc of 18.7 mA cm-2, a FF of 0.61 as well as negligible hysteresis. Moreover, CSCNT:PEI based inverted PSCs show superior durability in comparison to the standard silver based devices, remaining over 85% of the initial PCE after 500 h aging under various conditions, including long-term air exposure, thermal, and humid treatment. This work opens up a new avenue of facile modified carbon electrodes for highly stable and hysteresis suppressed PSCs.", "which Open circuit voltage, Voc (V) ?", "0.95", 853.0, 857.0], ["This paper reports on the experimental and theoretical characterization of RF microelectromechanical systems (MEMS) switches for high-power applications. First, we investigate the problem of self-actuation due to high RF power and we demonstrate switches that do not self-actuate or catastrophically fail with a measured RF power of up to 5.5 W. Second, the problem of switch stiction to the down state as a function of the applied RF power is also theoretically and experimentally studied. Finally, a novel switch design with a top electrode is introduced and its advantages related to RF power-handling capabilities are presented. By applying this technology, we demonstrate hot-switching measurements with a maximum power of 0.8 W. Our results, backed by theory and measurements, illustrate that careful design can significantly improve the power-handling capabilities of RF MEMS switches.", "which Real down-state capacitance - Cr (pF) ?", "0.8", 728.0, 731.0], ["RF microelectromechanical systems (MEMS) capacitive switches for two different dielectrics, aluminum nitride (AlN) and silicon nitride (Si3N4), are presented. The switches have been characterized and compared in terms of DC and RF performance (5-40 GHz). Switches based on AlN have higher down-state capacitance for similar dielectric thicknesses and provide better isolation and smaller insertion losses compared to Si3N4 switches. Experiments were carried out on RF MEMS switches with stiffening bars to prevent membrane deformation due to residual stress and with different spring and meander-type anchor designs. For a ~300-nm dielectric thickness, an air gap of 2.3 \u03bcm and identical spring-type designs, the AlN switches systematically show an improvement in the isolation by more than -12 dB (-35.8 dB versus -23.7 dB) and a better insertion loss (-0.68 dB versus -0.90 dB) at 40 GHz compared to Si3N4. DC measurements show small leakage current densities for both dielectrics (<10-8 A/cm2 at 1 MV/cm). However, the resulting leakage current for AlN devices is ten times higher than for Si3N4 when applying a larger electric field. The fabricated switches were also stressed by applying different voltages in air and vacuum, and dielectric charging effects were investigated. AlN switches eliminate the residual or injected charge faster than the Si3N4 devices do.", "which Real down-state capacitance - Cr (pF) ?", "0.9", NaN, NaN], ["In this paper, a high-temperature fiber sensor based on an optical fiber Fabry-Perot interferometer is fabricated by splicing a section of simplified hollow-core fiber between two single-mode fibers (SMFs) and cleaving one of the two SMFs to a certain length. 
With the superposition of three beams of light reflected from two splicing joints and end face of the cleaved SMF, the modified Vernier effect will be generated in the proposed structure and improve the sensitivity of temperature measurement. The envelope of spectrum reflected from the proposed sensor head is modulated by the ambient temperature of the sensor head. By monitoring and measuring the shift of spectrum envelope, the measurement of environment temperature is carried out experimentally, and high temperature sensitivity of 1.019 nm/\u00b0C for the envelope of the reflected spectrum was obtained. A temperature measurement as high as 1050 \u00b0C has been achieved with excellent repeatability.", "which Sensitivity (nm/\u00b0C) ?", "1.019", 798.0, 803.0], ["We have proposed and experimentally demonstrated an ultrasensitive fiber-optic temperature sensor based on two cascaded Fabry\u2013Perot interferometers (FPIs). Vernier effect that significantly improves the sensitivity is generated due to the slight cavity length difference of the sensing and reference FPI. The sensing FPI is composed of a cleaved fiber end-face and UV-cured adhesive while the reference FPI is fabricated by splicing SMF with hollow core fiber. Temperature sensitivity of the sensing FPI is much higher than the reference FPI, which means that the reference FPI need not to be thermally isolated. By curve fitting method, three different temperature sensitivities of 33.07, \u221258.60, and 67.35 nm/\u00b0C have been experimentally demonstrated with different cavity lengths ratio of the sensing and reference FPI, which can be flexibly adjusted to meet different application demands. The proposed probe-type ultrahigh sensitivity temperature sensor is compact and cost effective, which can be applied to special fields, such as biochemical engineering, medical treatment, and nuclear test.", "which Sensitivity (nm/\u00b0C) ?", "33.07", 683.0, 688.0], ["Highly sensitive, transparent, and durable pressure sensors are fabricated using sea-urchin-shaped metal nanoparticles and insulating polyurethane elastomer. The pressure sensors exhibit outstanding sensitivity (2.46 kPa-1 ), superior optical transmittance (84.8% at 550 nm), fast response/relaxation time (30 ms), and excellent operational durability. In addition, the pressure sensors successfully detect minute movements of human muscles.", "which Sensitivity (/kPa) ?", "2.46", 212.0, 216.0], ["A stretchable resistive pressure sensor is achieved by coating a compressible substrate with a highly stretchable electrode. The substrate contains an array of microscale pyramidal features, and the electrode comprises a polymer composite. When the pressure-induced geometrical change experienced by the electrode is maximized at 40% elongation, a sensitivity of 10.3 kPa(-1) is achieved.", "which Sensitivity (/kPa) ?", "10.3", 363.0, 367.0], ["The physics for integration of piezoelectric aluminum nitride (AlN) films with underlying insulating ultrananocrystalline diamond (UNCD), and electrically conductive grain boundary nitrogen-incorporated UNCD (N-UNCD) and boron-doped UNCD (B-UNCD) layers, as membranes for microelectromechanical system implantable drug delivery devices, has been investigated. AlN films deposited on platinum layers on as grown UNCD or N-UNCD layer (5\u201310 nm rms roughness) required thickness of \u223c400 nm to induce (002) AlN orientation with piezoelectric d33 coefficient \u223c1.91 pm/V at \u223c10 V. 
Chemical mechanical polished B-UNCD films (0.2 nm rms roughness) substrates enabled (002) AlN film 200 nm thick, yielding d33 = 5.3 pm/V.", "which Piezoelectric coefficient measured (pm/V) ?", "5.3", 702.0, 705.0], ["Thin-film piezoelectric materials are currently employed in micro- and nanodevices for energy harvesting and mechanical sensing. The deposition of these functional layers, however, is quite challenging onto non-rigid/non-flat substrates, such as optical fibers (OFs). Besides the recent novel applications of OFs as probes for biosensing and bioactuation, the possibility to combine them with piezoelectric thin films and metallic electrodes can pave the way for the employment of novel opto-electro-mechanical sensors (e.g., waveguides, optical phase modulators, tunable filters, energy harvesters or biosensors). In this work the deposition of a thin-film piezoelectric wurtzite-phase Aluminium Nitride (AlN), sandwiched between molybdenum (Mo) electrodes, on the curved lateral surface of an optical fiber with polymeric cladding, is reported for the first time, without the need of an orientation-promoting interlayer. The material surface properties and morphology are characterized by microscopy techniques. High orientation is demonstrated by SEM, PFM and X-ray diffraction analysis on a flat polymeric control, with a resulting piezoelectric coefficient (d33) of \u223c5.4 pm/V, while the surface roughness Rms measured by AFM is 9 \u00f7 16 nm. The output mechanical sensing capability of the resulting AlN-based piezo-optrode is investigated through mechanical buckling tests: the peak-to-peak voltage for weakly impulsive loads increases with increasing relative displacements (up to 30%), in the range of 20 \u00f7 35 mV. Impedance spectroscopy frequency sweeps (10 kHz-1 MHz, 1 V) demonstrate a sensor capacitance of \u223c8 pF, with an electrical Q factor as high as 150. The electrical response in the long-term period (two months) revealed good reliability and durability.", "which Piezoelectric coefficient measured (pm/V) ?", "5.4", 1172.0, 1175.0], ["A low-temperature sputter deposition process for the synthesis of aluminum nitride (AlN) thin films that is attractive for applications with a limited temperature budget is presented. Influence of the reactive gas concentration, plasma treatment of the nucleation surface and film thickness on the microstructural, piezoelectric and dielectric properties of AlN is investigated. An improved crystal quality with respect to the increased film thickness was observed; where full width at half maximum (FWHM) of the AlN films decreased from 2.88 \u00b1 0.16\u00b0 down to 1.25 \u00b1 0.07\u00b0 and the effective longitudinal piezoelectric coefficient (d33,f) increased from 2.30 \u00b1 0.32 pm/V up to 5.57 \u00b1 0.34 pm/V for film thicknesses in the range of 30 nm to 2 \u03bcm. Dielectric loss angle (tan \u03b4) decreased from 0.626% \u00b1 0.005% to 0.025% \u00b1 0.011% for the same thickness range. The average relative permittivity (er) was calculated as 10.4 \u00b1 0.05. An almost constant transversal piezoelectric coefficient (|e31,f|) of 1.39 \u00b1 0.01 C/m2 was measured for samples in the range of 0.5 \u03bcm to 2 \u03bcm. 
Transmission electron microscopy (TEM) investigations performed on thin (100 nm) and thick (1.6 \u03bcm) films revealed an (002) oriented AlN nucleation and growth starting directly from the AlN-Pt interface independent of the film thickness and exhibit comparable quality with the state-of-the-art AlN thin films sputtered at much higher substrate temperatures.", "which Piezoelectric coefficient measured (pm/V) ?", "5.57", 675.0, 679.0], ["In this study, we successfully deposit c-axis oriented aluminum nitride (AlN) piezoelectric films at low temperature (100 \u00b0C) via the DC sputtering method with tilt gun. The X-ray diffraction (XRD) observations prove that the deposited films have a c-axis preferred orientation. Effective d33 value of the proposed films is 5.92 pm/V, which is better than most of the reported data using DC sputtering or other processing methods. It is found that the gun placed at 25\u00b0 helped the films to rearrange at low temperature and c-axis orientation AlN films were successfully grown at 100 \u00b0C. This temperature is much lower than the reported growing temperature. It means the piezoelectric films can be deposited at flexible substrate and the photoresist can be stable at this temperature. The cantilever beam type microelectromechanical systems (MEMS) piezoelectric accelerometers are then fabricated based on the proposed AlN films with a lift-off process. The results show that the responsivity of the proposed devices is 8.12 mV/g, and the resonance frequency is 460 Hz, which indicates they can be used for machine tools.", "which Piezoelectric coefficient measured (pm/V) ?", "5.92", 324.0, 328.0], ["A new variant of the classic pulsed laser deposition (PLD) process is introduced as a room-temperature dry process for the growth and stoichiometry control of hybrid perovskite films through the use of nonstoichiometric single target ablation and off-axis growth. Mixed halide hybrid perovskite films nominally represented by CH3NH3PbI3\u2013xAx (A = Cl or F) are also grown and are shown to reveal interesting trends in the optical properties and photoresponse. Growth of good quality lead-free CH3NH3SnI3 films is also demonstrated, and the corresponding optical properties are presented. Finally, perovskite solar cells fabricated at room temperature (which makes the process adaptable to flexible substrates) are shown to yield a conversion efficiency of about 7.7%.", "which Maximum efficiency of the solar cell (%) ?", "7.7", 760.0, 763.0], ["Compositional engineering of recently arising methylammonium (MA) lead (Pb) halide based perovskites is an essential approach for finding better perovskite compositions to resolve still remaining issues of toxic Pb, long-term instability, etc. In this work, we carried out crystallographic, morphological, optical, and photovoltaic characterization of compositional MASn0.6Pb0.4I3-xBrx by gradually introducing bromine (Br) into parental Pb-Sn binary perovskite (MASn0.6Pb0.4I3) to elucidate its function in Sn-rich (Sn:Pb = 6:4) perovskites. We found significant advances in crystallinity and dense coverage of the perovskite films by inserting the Br into Sn-rich perovskite lattice. Furthermore, light-intensity-dependent open circuit voltage (Voc) measurement revealed much suppressed trap-assisted recombination for a proper Br-added (x = 0.4) device. 
These contributed to attaining the unprecedented power conversion efficiency of 12.1% and Voc of 0.78 V, which are, to the best of our knowledge, the highest performance in the Sn-rich (\u226560%) perovskite solar cells reported so far. In addition, impressive enhancement of photocurrent-output stability and little hysteresis were found, which paves the way for the development of environmentally benign (Pb reduction), stable monolithic tandem cells using the developed low band gap (1.24-1.26 eV) MASn0.6Pb0.4I3-xBrx with suggested composition (x = 0.2-0.4).", "which Maximum efficiency of the solar cell (%) ?", "12.1", 937.0, 941.0], ["A low-bandgap (1.33 eV) Sn-based MA0.5 FA0.5 Pb0.75 Sn0.25 I3 perovskite is developed via combined compositional, process, and interfacial engineering. It can deliver a high power conversion efficiency (PCE) of 14.19%. Finally, a four-terminal all-perovskite tandem solar cell is demonstrated by combining this low-bandgap cell with a semitransparent MAPbI3 cell to achieve a high efficiency of 19.08%.", "which Maximum efficiency of the solar cell (%) ?", "14.19", 211.0, 216.0], ["Mixed tin (Sn)-lead (Pb) perovskites with high Sn content exhibit low bandgaps suitable for fabricating the bottom cell of perovskite-based tandem solar cells. In this work, we report on the fabrication of efficient mixed Sn-Pb perovskite solar cells using precursors combining formamidinium tin iodide (FASnI3) and methylammonium lead iodide (MAPbI3). The best-performing cell fabricated using a (FASnI3)0.6(MAPbI3)0.4 absorber with an absorption edge of \u223c1.2 eV achieved a power conversion efficiency (PCE) of 15.08 (15.00)% with an open-circuit voltage of 0.795 (0.799) V, a short-circuit current density of 26.86(26.82) mA/cm(2), and a fill factor of 70.6(70.0)% when measured under forward (reverse) voltage scan. The average PCE of 50 cells we have fabricated is 14.39 \u00b1 0.33%, indicating good reproducibility.", "which Maximum efficiency of the solar cell (%) ?", "15.08", 512.0, 517.0], ["Realizing the theoretical limiting power conversion efficiency (PCE) in perovskite solar cells requires a better understanding and control over the fundamental loss processes occurring in the bulk of the perovskite layer and at the internal semiconductor interfaces in devices. One of the main challenges is to eliminate the presence of charge recombination centres throughout the film which have been observed to be most densely located at regions near the grain boundaries. Here, we introduce aluminium acetylacetonate to the perovskite precursor solution, which improves the crystal quality by reducing the microstrain in the polycrystalline film. At the same time, we achieve a reduction in the non-radiative recombination rate, a remarkable improvement in the photoluminescence quantum efficiency (PLQE) and a reduction in the electronic disorder deduced from an Urbach energy of only 12.6 meV in complete devices. As a result, we demonstrate a PCE of 19.1% with negligible hysteresis in planar heterojunction solar cells comprising all organic p and n-type charge collection layers. 
Our work shows that an additional level of control of perovskite thin film quality is possible via impurity cation doping, and further demonstrates the continuing importance of improving the electronic quality of the perovskite absorber and the nature of the heterojunctions to further improve the solar cell performance.", "which Maximum efficiency of the solar cell (%) ?", "19.1", 957.0, 961.0], ["A highly sensitive fiber temperature sensor based on in-line Mach-Zehnder interferometers (MZIs) and Vernier effect was proposed and experimentally demonstrated. The MZI was fabricated by splicing a section of hollow core fiber between two pieces of multimode fiber. The temperature sensitivity obtained by extracting envelope dip shift of the superimposed spectrum reaches to 528.5 pm/\u00b0C in the range of 0 \u00b0C\u2013100 \u00b0C, which is 17.5 times as high as that without enhanced by the Vernier effect. The experimental sensitivity amplification factor is close to the theoretical prediction (18.3 times). The proposed sensitivity enhancement system employs parallel connecting to implement the Vernier effect, which possesses the advantages of easy fabrication and high flexibility.", "which Amplification factor ?", "17.5", 427.0, 431.0], ["A hybrid cascaded configuration consisting of a fiber Sagnac interferometer (FSI) and a Fabry-Perot interferometer (FPI) was proposed and experimentally demonstrated to enhance the temperature intensity by the Vernier-effect. The FSI, which consists of a certain length of Panda fiber, is for temperature sensing, while the FPI acts as a filter due to its temperature insensitivity. The two interferometers have almost the same free spectral range, with the spectral envelope of the cascaded sensor shifting much more than the single FSI. Experimental results show that the temperature sensitivity is enhanced from \u22121.4 nm/\u00b0C (single FSI) to \u221229.0 (cascaded configuration). The enhancement factor is 20.7, which is basically consistent with theoretical analysis (19.9).", "which Amplification factor ?", "20.7", 700.0, 704.0], ["In this paper, we investigated the performance of an n-type tin-oxide (SnOx) thin film transistor (TFT) by experiments and simulation. The fabricated SnOx TFT device by oxygen plasma treatment on the channel exhibited n-type conduction with an on/off current ratio of 4.4x104, a high field-effect mobility of 18.5 cm2/V.s and a threshold swing of 405 mV/decade, which could be attributed to the excess reacted oxygen incorporated to the channel to form the oxygen-rich n-type SnOx. Furthermore, a TCAD simulation based on the n-type SnOx TFT device was performed by fitting the experimental data to investigate the effect of the channel traps on the device performance, indicating that performance enhancements were further achieved by suppressing the density of channel traps. In addition, the n-type SnOx TFT device exhibited high stability upon illumination with visible light. The results show that the n-type SnOx TFT device by channel plasma processing has considerable potential for next-generation high-performance display application.", "which Mobility (cm2 /V.s) ?", "18.5", 309.0, 313.0], ["The effects of gallium doping into indium\u2013zinc\u2013tin oxide (IZTO) thin film transistors (TFTs) and Ar/O2 plasma treatment on the performance of a\u2010IZTO TFT are reported. 
The Ga doping ratio is varied from 0 to 20%, and it is found that 10% gallium doping in a\u2010IZTO TFT results in a saturation mobility (\u00b5sat) of 11.80 cm2 V\u22121 s\u22121, a threshold voltage (Vth) of 0.17 V, subthreshold swing (SS) of 94 mV dec\u22121, and on/off current ratio (Ion/Ioff) of 1.21 \u00d7 107. Additionally, the performance of 10% Ga\u2010doped IZTO TFT can be further improved by Ar/O2 plasma treatment. It is found that 30 s plasma treatment gives the best TFT performances such as \u00b5sat of 30.60 cm2 V\u22121 s\u22121, Vth of 0.12 V, SS of 92 mV dec\u22121, and Ion/Ioff ratio of 7.90 \u00d7 107. The bias\u2010stability of 10% Ga\u2010doped IZTO TFT is also improved by 30 s plasma treatment. The enhancement of the TFT performance appears to be due to the reduction in the oxygen vacancy and OH concentrations.", "which Mobility (cm2 /V.s) ?", "30.6", NaN, NaN], ["The development of p-type metal-oxide semiconductors (MOSs) is of increasing interest for applications in next-generation optoelectronic devices, display backplane, and low-power-consumption complementary MOS circuits. Here, we report the high performance of solution-processed, p-channel copper-tin-sulfide-gallium oxide (CTSGO) thin-film transistors (TFTs) using UV/O3 exposure. Hall effect measurement confirmed the p-type conduction of CTSGO with Hall mobility of 6.02 \u00b1 0.50 cm2 V-1 s-1. The p-channel CTSGO TFT using UV/O3 treatment exhibited the field-effect mobility (\u03bcFE) of 1.75 \u00b1 0.15 cm2 V-1 s-1 and an on/off current ratio (ION/IOFF) of \u223c104 at a low operating voltage of -5 V. The significant enhancement in the device performance is due to the good p-type CTSGO material, smooth surface morphology, and fewer interfacial traps between the semiconductor and the Al2O3 gate insulator. Therefore, the p-channel CTSGO TFT can be applied for CMOS MOS TFT circuits for next-generation display.", "which Mobility (cm2 /V.s) ?", "1.75", 584.0, 588.0], ["The simultaneous doping effect of Gadolinium (Gd) and Lithium (Li) on zinc oxide (ZnO) thin\u2010film transistor (TFT) by spray pyrolysis using a ZrOx gate insulator is reported. Li doping in ZnO increases mobility significantly, whereas the presence of Gd improves the stability of the device. The Gd ratio in ZnO is varied from 0% to 20% and the Li ratio from 0% to 10%. The optimized ZnO TFT with codoping of 5% Li and 10% Gd exhibits the linear mobility of 25.87 cm2 V\u22121 s\u22121, the subthreshold swing of 204 mV dec\u22121, on/off current ratio of \u2248108, and zero hysteresis voltage. The enhancement of both mobility and stability is due to an increase in grain size by Li incorporation and decrease of defect states by Gd doping. The negligible threshold voltage shift (\u2206VTH) under gate bias and zero hysteresis are due to the reduced defects in an oxide semiconductor and decreased traps at the LiGdZnO/ZrOx interface. Li doping can balance the reduction of the carrier concentration by Gd doping, which improves the mobility and stability of the ZnO TFT. Therefore, LiGdZnO TFT shows excellent electrical performance with high stability.", "which Mobility (cm2 /V.s) ?", "25.87", 456.0, 461.0], ["Inverted perovskite solar cells (PSCs) have been becoming more and more attractive, owing to their easy-fabrication and suppressed hysteresis, while the ion diffusion between metallic electrode and perovskite layer limit the long-term stability of devices. 
In this work, we employed a novel polyethylenimine (PEI) modified cross-stacked superaligned carbon nanotube (CSCNT) film in the inverted planar PSCs configurated FTO/NiO x/methylammonium lead tri-iodide (MAPbI3)/6, 6-phenyl C61-butyric acid methyl ester (PCBM)/CSCNT:PEI. By modifying CSCNT with a certain concentration of PEI (0.5 wt %), suitable energy level alignment and promoted interfacial charge transfer have been achieved, leading to a significant enhancement in the photovoltaic performance. As a result, a champion power conversion efficiency (PCE) of \u223c11% was obtained with a Voc of 0.95 V, a Jsc of 18.7 mA cm-2, a FF of 0.61 as well as negligible hysteresis. Moreover, CSCNT:PEI based inverted PSCs show superior durability in comparison to the standard silver based devices, remaining over 85% of the initial PCE after 500 h aging under various conditions, including long-term air exposure, thermal, and humid treatment. This work opens up a new avenue of facile modified carbon electrodes for highly stable and hysteresis suppressed PSCs.", "which Short-circuit current density, Jsc (mA/cm2) ?", "18.7", 870.0, 874.0], ["Mixed tin (Sn)-lead (Pb) perovskites with high Sn content exhibit low bandgaps suitable for fabricating the bottom cell of perovskite-based tandem solar cells. In this work, we report on the fabrication of efficient mixed Sn-Pb perovskite solar cells using precursors combining formamidinium tin iodide (FASnI3) and methylammonium lead iodide (MAPbI3). The best-performing cell fabricated using a (FASnI3)0.6(MAPbI3)0.4 absorber with an absorption edge of \u223c1.2 eV achieved a power conversion efficiency (PCE) of 15.08 (15.00)% with an open-circuit voltage of 0.795 (0.799) V, a short-circuit current density of 26.86(26.82) mA/cm(2), and a fill factor of 70.6(70.0)% when measured under forward (reverse) voltage scan. The average PCE of 50 cells we have fabricated is 14.39 \u00b1 0.33%, indicating good reproducibility.", "which Short-circuit current density, Jsc (mA/cm2) ?", "26.82", 617.0, 622.0], ["Granular activated carbon (GAC) materials were prepared via simple gas activation of silkworm cocoons and were coated on ZnO nanorods (ZNRs) by the facile hydrothermal method. The present combination of GAC and ZNRs shows a core-shell structure (where the GAC is coated on the surface of ZNRs) and is exposed by systematic material analysis. The as-prepared samples were then fabricated as dual-functional sensors and, most fascinatingly, the as-fabricated core-shell structure exhibits better UV and H2 sensing properties than those of as-fabricated ZNRs and GAC. Thus, the present core-shell structure-based H2 sensor exhibits fast responses of 11% (10 ppm) and 23.2% (200 ppm) with ultrafast response and recovery. However, the UV sensor offers an ultrahigh photoresponsivity of 57.9 A W-1, which is superior to that of as-grown ZNRs (0.6 A W-1). Besides this, switching photoresponse of GAC/ZNR core-shell structures exhibits a higher switching ratio (between dark and photocurrent) of 1585, with ultrafast response and recovery, than that of as-grown ZNRs (40). Because of the fast adsorption ability of GAC, it was observed that the finest distribution of GAC on ZNRs results in rapid electron transportation between the conduction bands of GAC and ZNRs while sensing H2 and UV. Furthermore, the present core-shell structure-based UV and H2 sensors also well-retained excellent sensitivity, repeatability, and long-term stability. 
Thus, the salient feature of this combination is that it provides a dual-functional sensor with biowaste cocoon and ZnO, which is ecological and inexpensive.", "which Response (%) ?", "23.2", 664.0, 668.0], ["Mixed tin (Sn)-lead (Pb) perovskites with high Sn content exhibit low bandgaps suitable for fabricating the bottom cell of perovskite-based tandem solar cells. In this work, we report on the fabrication of efficient mixed Sn-Pb perovskite solar cells using precursors combining formamidinium tin iodide (FASnI3) and methylammonium lead iodide (MAPbI3). The best-performing cell fabricated using a (FASnI3)0.6(MAPbI3)0.4 absorber with an absorption edge of \u223c1.2 eV achieved a power conversion efficiency (PCE) of 15.08 (15.00)% with an open-circuit voltage of 0.795 (0.799) V, a short-circuit current density of 26.86(26.82) mA/cm(2), and a fill factor of 70.6(70.0)% when measured under forward (reverse) voltage scan. The average PCE of 50 cells we have fabricated is 14.39 \u00b1 0.33%, indicating good reproducibility.", "which Fill factor, FF (%) ?", "70.6", 655.0, 659.0], ["BACKGROUND On-farm crop species richness (CSR) may be important for maintaining the diversity and quality of diets of smallholder farming households. OBJECTIVES The objectives of this study were to 1) determine the association of CSR with the diversity and quality of household diets in Malawi and 2) assess hypothesized mechanisms for this association via both subsistence- and market-oriented pathways. METHODS Longitudinal data were assessed from nationally representative household surveys in Malawi between 2010 and 2013 (n = 3000 households). A household diet diversity score (DDS) and daily intake per adult equivalent of energy, protein, iron, vitamin A, and zinc were calculated from 7-d household consumption data. CSR was calculated from plot-level data on all crops cultivated during the 2009-2010 and 2012-2013 agricultural seasons in Malawi. Adjusted generalized estimating equations were used to assess the longitudinal relation of CSR with household diet quality and diversity. RESULTS CSR was positively associated with DDS (\u03b2: 0.08; 95% CI: 0.06, 0.12; P < 0.001), as well as daily intake per adult equivalent of energy (kilocalories) (\u03b2: 41.6; 95% CI: 20.9, 62.2; P < 0.001), protein (grams) (\u03b2: 1.78; 95% CI: 0.80, 2.75; P < 0.001), iron (milligrams) (\u03b2: 0.30; 95% CI: 0.16, 0.44; P < 0.001), vitamin A (micrograms of retinol activity equivalent) (\u03b2: 25.8; 95% CI: 12.7, 38.9; P < 0.001), and zinc (milligrams) (\u03b2: 0.26; 95% CI: 0.13, 0.38; P < 0.001). Neither proportion of harvest sold nor distance to nearest population center modified the relation between CSR and household diet diversity or quality (P \u2265 0.05). Households with greater CSR were more commercially oriented (least-squares mean proportion of harvest sold \u00b1 SE, highest tertile of CSR: 17.1 \u00b1 0.52; lowest tertile of CSR: 8.92 \u00b1 1.09) (P < 0.05). CONCLUSION Promoting on-farm CSR may be a beneficial strategy for simultaneously supporting enhanced diet quality and diversity while also creating opportunities for smallholder farmers to engage with markets in subsistence agricultural contexts.", "which Has lower limit for 95% confidence interval ?", "0.06", 1059.0, 1063.0], ["BACKGROUND On-farm crop species richness (CSR) may be important for maintaining the diversity and quality of diets of smallholder farming households. 
OBJECTIVES The objectives of this study were to 1) determine the association of CSR with the diversity and quality of household diets in Malawi and 2) assess hypothesized mechanisms for this association via both subsistence- and market-oriented pathways. METHODS Longitudinal data were assessed from nationally representative household surveys in Malawi between 2010 and 2013 (n = 3000 households). A household diet diversity score (DDS) and daily intake per adult equivalent of energy, protein, iron, vitamin A, and zinc were calculated from 7-d household consumption data. CSR was calculated from plot-level data on all crops cultivated during the 2009-2010 and 2012-2013 agricultural seasons in Malawi. Adjusted generalized estimating equations were used to assess the longitudinal relation of CSR with household diet quality and diversity. RESULTS CSR was positively associated with DDS (\u03b2: 0.08; 95% CI: 0.06, 0.12; P < 0.001), as well as daily intake per adult equivalent of energy (kilocalories) (\u03b2: 41.6; 95% CI: 20.9, 62.2; P < 0.001), protein (grams) (\u03b2: 1.78; 95% CI: 0.80, 2.75; P < 0.001), iron (milligrams) (\u03b2: 0.30; 95% CI: 0.16, 0.44; P < 0.001), vitamin A (micrograms of retinol activity equivalent) (\u03b2: 25.8; 95% CI: 12.7, 38.9; P < 0.001), and zinc (milligrams) (\u03b2: 0.26; 95% CI: 0.13, 0.38; P < 0.001). Neither proportion of harvest sold nor distance to nearest population center modified the relation between CSR and household diet diversity or quality (P \u2265 0.05). Households with greater CSR were more commercially oriented (least-squares mean proportion of harvest sold \u00b1 SE, highest tertile of CSR: 17.1 \u00b1 0.52; lowest tertile of CSR: 8.92 \u00b1 1.09) (P < 0.05). CONCLUSION Promoting on-farm CSR may be a beneficial strategy for simultaneously supporting enhanced diet quality and diversity while also creating opportunities for smallholder farmers to engage with markets in subsistence agricultural contexts.", "which Correlation Coefficient ?", "0.08", 1045.0, 1049.0], ["BACKGROUND On-farm crop species richness (CSR) may be important for maintaining the diversity and quality of diets of smallholder farming households. OBJECTIVES The objectives of this study were to 1) determine the association of CSR with the diversity and quality of household diets in Malawi and 2) assess hypothesized mechanisms for this association via both subsistence- and market-oriented pathways. METHODS Longitudinal data were assessed from nationally representative household surveys in Malawi between 2010 and 2013 (n = 3000 households). A household diet diversity score (DDS) and daily intake per adult equivalent of energy, protein, iron, vitamin A, and zinc were calculated from 7-d household consumption data. CSR was calculated from plot-level data on all crops cultivated during the 2009-2010 and 2012-2013 agricultural seasons in Malawi. Adjusted generalized estimating equations were used to assess the longitudinal relation of CSR with household diet quality and diversity. RESULTS CSR was positively associated with DDS (\u03b2: 0.08; 95% CI: 0.06, 0.12; P < 0.001), as well as daily intake per adult equivalent of energy (kilocalories) (\u03b2: 41.6; 95% CI: 20.9, 62.2; P < 0.001), protein (grams) (\u03b2: 1.78; 95% CI: 0.80, 2.75; P < 0.001), iron (milligrams) (\u03b2: 0.30; 95% CI: 0.16, 0.44; P < 0.001), vitamin A (micrograms of retinol activity equivalent) (\u03b2: 25.8; 95% CI: 12.7, 38.9; P < 0.001), and zinc (milligrams) (\u03b2: 0.26; 95% CI: 0.13, 0.38; P < 0.001). 
Neither proportion of harvest sold nor distance to nearest population center modified the relation between CSR and household diet diversity or quality (P \u2265 0.05). Households with greater CSR were more commercially oriented (least-squares mean proportion of harvest sold \u00b1 SE, highest tertile of CSR: 17.1 \u00b1 0.52; lowest tertile of CSR: 8.92 \u00b1 1.09) (P < 0.05). CONCLUSION Promoting on-farm CSR may be a beneficial strategy for simultaneously supporting enhanced diet quality and diversity while also creating opportunities for smallholder farmers to engage with markets in subsistence agricultural contexts.", "which Has upper limit for 95% confidence interval ?", "0.12", 1065.0, 1069.0], ["Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.", "which Number of entities ?", "3756", 621.0, 625.0], ["We develop an efficient fused-ring electron acceptor (ITIC-Th) based on indacenodithieno[3,2-b]thiophene core and thienyl side-chains for organic solar cells (OSCs). Relative to its counterpart with phenyl side-chains (ITIC), ITIC-Th shows lower energy levels (ITIC-Th: HOMO = -5.66 eV, LUMO = -3.93 eV; ITIC: HOMO = -5.48 eV, LUMO = -3.83 eV) due to the \u03c3-inductive effect of thienyl side-chains, which can match with high-performance narrow-band-gap polymer donors and wide-band-gap polymer donors. ITIC-Th has higher electron mobility (6.1 \u00d7 10(-4) cm(2) V(-1) s(-1)) than ITIC (2.6 \u00d7 10(-4) cm(2) V(-1) s(-1)) due to enhanced intermolecular interaction induced by sulfur-sulfur interaction. We fabricate OSCs by blending ITIC-Th acceptor with two different low-band-gap and wide-band-gap polymer donors. In one case, a power conversion efficiency of 9.6% was observed, which rivals some of the highest efficiencies for single junction OSCs based on fullerene acceptors.", "which HOMO ?", "-5.66", NaN, NaN], ["A novel non-fullerene acceptor, possessing a very low bandgap of 1.34 eV and a high-lying lowest unoccupied molecular orbital level of -3.95 eV, is designed and synthesized by introducing electron-donating alkoxy groups to the backbone of a conjugated small molecule. 
Impressive power conversion efficiencies of 8.4% and 10.7% are obtained for fabricated single and tandem polymer solar cells.", "which LUMO ?", "-3.95", NaN, NaN], ["We develop an efficient fused-ring electron acceptor (ITIC-Th) based on indacenodithieno[3,2-b]thiophene core and thienyl side-chains for organic solar cells (OSCs). Relative to its counterpart with phenyl side-chains (ITIC), ITIC-Th shows lower energy levels (ITIC-Th: HOMO = -5.66 eV, LUMO = -3.93 eV; ITIC: HOMO = -5.48 eV, LUMO = -3.83 eV) due to the \u03c3-inductive effect of thienyl side-chains, which can match with high-performance narrow-band-gap polymer donors and wide-band-gap polymer donors. ITIC-Th has higher electron mobility (6.1 \u00d7 10(-4) cm(2) V(-1) s(-1)) than ITIC (2.6 \u00d7 10(-4) cm(2) V(-1) s(-1)) due to enhanced intermolecular interaction induced by sulfur-sulfur interaction. We fabricate OSCs by blending ITIC-Th acceptor with two different low-band-gap and wide-band-gap polymer donors. In one case, a power conversion efficiency of 9.6% was observed, which rivals some of the highest efficiencies for single junction OSCs based on fullerene acceptors.", "which LUMO ?", "-3.93", NaN, NaN], ["Ladder-type dithienocyclopentacarbazole (DTCC) cores, which possess highly extended \u03c0-conjugated backbones and versatile modular structures for derivatization, were widely used to develop high-performance p-type polymeric semiconductors. However, an n-type DTCC-based organic semiconductor has not been reported to date. In this study, the first DTCC-based n-type organic semiconductor (DTCC\u2013IC) with a well-defined A\u2013D\u2013A backbone was designed, synthesized, and characterized, in which a DTCC derivative substituted by four p-octyloxyphenyl groups was used as the electron-donating core and two strongly electron-withdrawing 3-(dicyanomethylene)indan-1-one moieties were used as the terminal acceptors. It was found that DTCC\u2013IC has strong light-capturing ability in the range of 500\u2013720 nm and exhibits an impressively high molar absorption coefficient of 2.24 \u00d7 105 M\u22121 cm\u22121 at 669 nm owing to effective intramolecular charge transfer and a strong D\u2013A effect. Cyclic voltammetry measurements indicated that the HOMO and LUMO energy levels of DTCC\u2013IC are \u22125.50 and \u22123.87 eV, respectively. More importantly, a high electron mobility of 2.17 \u00d7 10\u22123 cm2 V\u22121 s\u22121 was determined by the space-charge-limited current method; this electron mobility can be comparable to that of fullerene derivative acceptors (\u03bce \u223c 10\u22123 cm2 V\u22121 s\u22121). To investigate its application potential in non-fullerene solar cells, we fabricated organic solar cells (OSCs) by blending a DTCC\u2013IC acceptor with a PTB7-Th donor under various conditions. The results suggest that the optimized device exhibits a maximum power conversion efficiency (PCE) of up to 6% and a rational high VOC of 0.95 V. These findings demonstrate that the ladder-type DTCC core is a promising building block for the development of high-mobility n-type organic semiconductors for OSCs.", "which Open circuit voltage, Voc ?", "0.95", 1655.0, 1659.0], ["Two cheliform non-fullerene acceptors, DTPC-IC and DTPC-DFIC, based on a highly electron-rich core, dithienopicenocarbazole (DTPC), are synthesized, showing ultra-narrow bandgaps (as low as 1.21 eV). 
The two-dimensional nitrogen-containing conjugated DTPC possesses strong electron-donating capability, which induces intense intramolecular charge transfer and intermolecular \u03c0-\u03c0 stacking in derived acceptors. The solar cell based on DTPC-DFIC and a spectrally complementary polymer donor, PTB7-Th, showed a high power conversion efficiency of 10.21% and an extremely low energy loss of 0.45 eV, which is the lowest among reported efficient OSCs.", "which Energy band gap ?", "1.21", 190.0, 194.0], ["A novel non-fullerene acceptor, possessing a very low bandgap of 1.34 eV and a high-lying lowest unoccupied molecular orbital level of -3.95 eV, is designed and synthesized by introducing electron-donating alkoxy groups to the backbone of a conjugated small molecule. Impressive power conversion efficiencies of 8.4% and 10.7% are obtained for fabricated single and tandem polymer solar cells.", "which Energy band gap ?", "1.34", 65.0, 69.0], ["A fused hexacyclic electron acceptor, IHIC, based on strong electron\u2010donating group dithienocyclopentathieno[3,2\u2010b]thiophene flanked by strong electron\u2010withdrawing group 1,1\u2010dicyanomethylene\u20103\u2010indanone, is designed, synthesized, and applied in semitransparent organic solar cells (ST\u2010OSCs). IHIC exhibits strong near\u2010infrared absorption with extinction coefficients of up to 1.6 \u00d7 105m\u22121 cm\u22121, a narrow optical bandgap of 1.38 eV, and a high electron mobility of 2.4 \u00d7 10\u22123 cm2 V\u22121 s\u22121. The ST\u2010OSCs based on blends of a narrow\u2010bandgap polymer donor PTB7\u2010Th and narrow\u2010bandgap IHIC acceptor exhibit a champion power conversion efficiency of 9.77% with an average visible transmittance of 36% and excellent device stability; this efficiency is much higher than any single\u2010junction and tandem ST\u2010OSCs reported in the literature.", "which Energy band gap ?", "1.38", 422.0, 426.0], ["A new electron\u2010rich central building block, 5,5,12,12\u2010tetrakis(4\u2010hexylphenyl)\u2010indacenobis\u2010(dithieno[3,2\u2010b:2\u2032,3\u2032\u2010d]pyrrol) (INP), and two derivative nonfullerene acceptors (INPIC and INPIC\u20104F) are designed and synthesized. The two molecules reveal broad (600\u2013900 nm) and strong absorption due to the satisfactory electron\u2010donating ability of INP. Compared with its counterpart INPIC, fluorinated nonfullerene acceptor INPIC\u20104F exhibits a stronger near\u2010infrared absorption with a narrower optical bandgap of 1.39 eV, an improved crystallinity with higher electron mobility, and down\u2010shifted highest occupied molecular orbital and lowest unoccupied molecular orbital energy levels. Organic solar cells (OSCs) based on INPIC\u20104F exhibit a high power conversion efficiency (PCE) of 13.13% and a relatively low energy loss of 0.54 eV, which is among the highest efficiencies reported for binary OSCs in the literature. The results demonstrate the great potential of the new INP as an electron\u2010donating building block for constructing high\u2010performance nonfullerene acceptors for OSCs.", "which Energy band gap ?", "1.39", 506.0, 510.0], ["Low-bandgap polymers/molecules are an interesting family of semiconductor materials, and have enabled many recent exciting breakthroughs in the field of organic electronics, especially for organic photovoltaics (OPVs). 
Here, such a low-bandgap (1.43 eV) non-fullerene electron acceptor (BT-IC) bearing a fused 7-heterocyclic ring with absorption edge extending to the near-infrared (NIR) region was specially designed and synthesized. Benefitted from its NIR light harvesting, high performance OPVs were fabricated with medium bandgap polymers (J61 and J71) as donors, showing power conversion efficiencies of 9.6% with J61 and 10.5% with J71 along with extremely low energy loss (0.56 eV for J61 and 0.53 eV for J71). Interestingly, femtosecond transient absorption spectroscopy studies on both systems show that efficient charge generation was observed despite the fact that the highest occupied molecular orbital (HOMO)\u2013HOMO offset (\u0394EH) in the blends was as low as 0.10 eV, suggesting that such a small \u0394EH is not a crucial limitation in realizing high performance of NIR non-fullerene based OPVs. Our results indicated that BT-IC is an interesting NIR non-fullerene acceptor with great potential application in tandem/multi-junction, semitransparent, and ternary blend solar cells.", "which Energy band gap ?", "1.43", 245.0, 249.0], ["With an indenoindene core, a new thieno[3,4\u2010b]thiophene\u2010based small\u2010molecule electron acceptor, 2,2\u2032\u2010((2Z,2\u2032Z)\u2010((6,6\u2032\u2010(5,5,10,10\u2010tetrakis(2\u2010ethylhexyl)\u20105,10\u2010dihydroindeno[2,1\u2010a]indene\u20102,7\u2010diyl)bis(2\u2010octylthieno[3,4\u2010b]thiophene\u20106,4\u2010diyl))bis(methanylylidene))bis(5,6\u2010difluoro\u20103\u2010oxo\u20102,3\u2010dihydro\u20101H\u2010indene\u20102,1\u2010diylidene))dimalononitrile (NITI), is successfully designed and synthesized. Compared with 12\u2010\u03c0\u2010electron fluorene, a carbon\u2010bridged biphenylene with an axial symmetry, indenoindene, a carbon\u2010bridged E\u2010stilbene with a centrosymmetry, shows elongated \u03c0\u2010conjugation with 14 \u03c0\u2010electrons and one more sp3 carbon bridge, which may increase the tunability of electronic structure and film morphology. Despite its twisted molecular framework, NITI shows a low optical bandgap of 1.49 eV in thin film and a high molar extinction coefficient of 1.90 \u00d7 105m\u22121 cm\u22121 in solution. By matching NITI with a large\u2010bandgap polymer donor, an extraordinary power conversion efficiency of 12.74% is achieved, which is among the best performance so far reported for fullerene\u2010free organic photovoltaics and is inspiring for the design of new electron acceptors.", "which Energy band gap ?", "1.49", 778.0, 782.0], ["Naphtho[1,2\u2010b:5,6\u2010b\u2032]dithiophene is extended to a fused octacyclic building block, which is end capped by strong electron\u2010withdrawing 2\u2010(5,6\u2010difluoro\u20103\u2010oxo\u20102,3\u2010dihydro\u20101H\u2010inden\u20101\u2010ylidene)malononitrile to yield a fused\u2010ring electron acceptor (IOIC2) for organic solar cells (OSCs). Relative to naphthalene\u2010based IHIC2, naphthodithiophene\u2010based IOIC2 with a larger \u03c0\u2010conjugation and a stronger electron\u2010donating core shows a higher lowest unoccupied molecular orbital energy level (IOIC2: \u22123.78 eV vs IHIC2: \u22123.86 eV), broader absorption with a smaller optical bandgap (IOIC2: 1.55 eV vs IHIC2: 1.66 eV), and a higher electron mobility (IOIC2: 1.0 \u00d7 10\u22123 cm2 V\u22121 s\u22121 vs IHIC2: 5.0 \u00d7 10\u22124 cm2 V\u22121 s\u22121). 
Thus, IOIC2\u2010based OSCs show higher values in open\u2010circuit voltage, short\u2010circuit current density, fill factor, and thereby much higher power conversion efficiency (PCE) values than those of the IHIC2\u2010based counterpart. In particular, as\u2010cast OSCs based on FTAZ: IOIC2 yield PCEs of up to 11.2%, higher than that of the control devices based on FTAZ: IHIC2 (7.45%). Furthermore, by using 0.2% 1,8\u2010diiodooctane as the processing additive, a PCE of 12.3% is achieved from the FTAZ:IOIC2\u2010based devices, higher than that of the FTAZ:IHIC2\u2010based devices (7.31%). These results indicate that incorporating extended conjugation into the electron\u2010donating fused\u2010ring units in nonfullerene acceptors is a promising strategy for designing high\u2010performance electron acceptors.", "which Energy band gap ?", "1.55", 575.0, 579.0], ["Abstract Dual functional fluorescence nanosensors have many potential applications in biology and medicine. Monitoring temperature with higher precision at localized small length scales or in a nanocavity is a necessity in various applications. As well as the detection of biologically interesting metal ions using low-cost and sensitive approach is of great importance in bioanalysis. In this paper, we describe the preparation of dual-function highly fluorescent B, N-co-doped carbon nanodots (CDs) that work as chemical and thermal sensors. The CDs emit blue fluorescence peaked at 450 nm and exhibit up to 70% photoluminescence quantum yield with showing excitation-independent fluorescence. We also show that water-soluble CDs display temperature-dependent fluorescence and can serve as highly sensitive and reliable nanothermometers with a thermo-sensitivity 1.8% \u00b0C \u22121 , and wide range thermo-sensing between 0\u201390 \u00b0C with excellent recovery. Moreover, the fluorescence emission of CDs are selectively quenched after the addition of Fe 2+ and Fe 3+ ions while show no quenching with adding other common metal cations and anions. The fluorescence emission shows a good linear correlation with concentration of Fe 2+ and Fe 3+ (R 2 = 0.9908 for Fe 2+ and R 2 = 0.9892 for Fe 3+ ) with a detection limit of of 80.0 \u00b1 0.5 nM for Fe 2+ and 110.0 \u00b1 0.5 nM for Fe 3+ . Considering the high quantum yield and selectivity, CDs are exploited to design a nanoprobe towards iron detection in a biological sample. The fluorimetric assay is used to detect Fe 2+ in iron capsules and total iron in serum samples successfully.", "which Sensitivity ?", "1.8", 865.0, 868.0], ["Background: Seasonal influenza virus outbreaks cause annual epidemics, mostly during winter in temperate zone countries, especially resulting in increased morbidity and higher mortality in children. In order to conduct rapid screening for influenza in pediatric outpatient units, we developed a pediatric infection screening system with a radar respiration monitor. Methods: The system conducts influenza screening within 10 seconds based on vital signs (i.e., respiration rate monitored using a 24 GHz microwave radar; facial temperature, using a thermopile array; and heart rate, using a pulse photosensor). A support vector machine (SVM) classification method was used to discriminate influenza children from healthy children based on vital signs. 
To assess the classification performance of the screening system that uses the SVM, we conducted influenza screening for 70 children (i.e., 27 seasonal influenza patients (11 \u00b1 2 years) at a pediatric clinic and 43 healthy control subjects (9 \u00b1 4 years) at a pediatric dental clinic) in the winter of 2013-2014. Results: The screening system using the SVM identified 26 subjects with influenza (22 of the 27 influenza patients and 4 of the 43 healthy subjects). The system discriminated 44 subjects as healthy (5 of the 27 influenza patients and 39 of the 43 healthy subjects), with sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 81.5%, 90.7%, 84.6%, and 88.6%, respectively. Conclusion: The SVM-based screening system achieved classification results for the outpatient children based on vital signs with comparatively high NPV within 10 seconds. At pediatric clinics and hospitals, our system seems potentially useful in the first screening step for infections in the future.", "which Sensitivity ?", "81.5", 1432.0, 1436.0], ["After the outbreak of severe acute respiratory syndrome (SARS) in 2003, many international airport quarantine stations conducted fever-based screening to identify infected passengers using infrared thermography for preventing global pandemics. Due to environmental factors affecting measurement of facial skin temperature with thermography, some previous studies revealed the limits of authenticity in detecting infectious symptoms. In order to implement more strict entry screening in the epidemic seasons of emerging infectious diseases, we developed an infection screening system for airport quarantines using multi-parameter vital signs. This system can automatically detect infected individuals within several tens of seconds by a neural-network-based discriminant function using measured vital signs, i.e., heart rate obtained by a reflective photo sensor, respiration rate determined by a 10-GHz non-contact respiration radar, and the ear temperature monitored by a thermography. In this paper, to reduce the environmental effects on thermography measurement, we adopted the ear temperature as a new screening indicator instead of facial skin. We tested the system on 13 influenza patients and 33 normal subjects. The sensitivity of the infection screening system in detecting influenza were 92.3%, which was higher than the sensitivity reported in our previous paper (88.0%) with average facial skin temperature.", "which Sensitivity ?", "92.3", 1299.0, 1303.0], ["Abstract Objective: Surveillance of surgical site infections (SSIs) is important for infection control and is usually performed through retrospective manual chart review. The aim of this study was to develop an algorithm for the surveillance of deep SSIs based on clinical variables to enhance efficiency of surveillance. Design: Retrospective cohort study (2012\u20132015). Setting: A Dutch teaching hospital. Participants: We included all consecutive patients who underwent colorectal surgery excluding those with contaminated wounds at the time of surgery. All patients were evaluated for deep SSIs through manual chart review, using the Centers for Disease Control and Prevention (CDC) criteria as the reference standard. Analysis: We used logistic regression modeling to identify predictors that contributed to the estimation of diagnostic probability. Bootstrapping was applied to increase generalizability, followed by assessment of statistical performance and clinical implications. 
Results: In total, 1,606 patients were included, of whom 129 (8.0%) acquired a deep SSI. The final model included postoperative length of stay, wound class, readmission, reoperation, and 30-day mortality. The model achieved 68.7% specificity and 98.5% sensitivity and an area under the receiver operator characteristic (ROC) curve (AUC) of 0.950 (95% CI, 0.932\u20130.969). Positive and negative predictive values were 21.5% and 99.8%, respectively. Applying the algorithm resulted in a 63.4% reduction in the number of records requiring full manual review (from 1,606 to 590). Conclusions: This 5-parameter model identified 98.5% of patients with a deep SSI. The model can be used to develop semiautomatic surveillance of deep SSIs after colorectal surgery, which may further improve efficiency and quality of SSI surveillance.", "which Sensitivity ?", "98.5", 1232.0, 1236.0], ["A nonfullerene electron acceptor (IEIC) based on indaceno[1,2-b:5,6-b\u2032]dithiophene and 2-(3-oxo-2,3-dihydroinden-1-ylidene)malononitrile was designed and synthesized, and fullerene-free polymer solar cells based on the IEIC acceptor showed power conversion efficiencies of up to 6.31%.
", "which Power conversion efficiency ?", "6.31", 316.0, 320.0], ["A fused hexacyclic electron acceptor, IHIC, based on strong electron\u2010donating group dithienocyclopentathieno[3,2\u2010b]thiophene flanked by strong electron\u2010withdrawing group 1,1\u2010dicyanomethylene\u20103\u2010indanone, is designed, synthesized, and applied in semitransparent organic solar cells (ST\u2010OSCs). IHIC exhibits strong near\u2010infrared absorption with extinction coefficients of up to 1.6 \u00d7 105m\u22121 cm\u22121, a narrow optical bandgap of 1.38 eV, and a high electron mobility of 2.4 \u00d7 10\u22123 cm2 V\u22121 s\u22121. The ST\u2010OSCs based on blends of a narrow\u2010bandgap polymer donor PTB7\u2010Th and narrow\u2010bandgap IHIC acceptor exhibit a champion power conversion efficiency of 9.77% with an average visible transmittance of 36% and excellent device stability; this efficiency is much higher than any single\u2010junction and tandem ST\u2010OSCs reported in the literature.", "which Power conversion efficiency ?", "9.77", 640.0, 644.0], ["Two cheliform non-fullerene acceptors, DTPC-IC and DTPC-DFIC, based on a highly electron-rich core, dithienopicenocarbazole (DTPC), are synthesized, showing ultra-narrow bandgaps (as low as 1.21 eV). The two-dimensional nitrogen-containing conjugated DTPC possesses strong electron-donating capability, which induces intense intramolecular charge transfer and intermolecular \u03c0-\u03c0 stacking in derived acceptors. The solar cell based on DTPC-DFIC and a spectrally complementary polymer donor, PTB7-Th, showed a high power conversion efficiency of 10.21% and an extremely low energy loss of 0.45 eV, which is the lowest among reported efficient OSCs.", "which Power conversion efficiency ?", "10.21", 544.0, 549.0], ["Low bandgap n-type organic semiconductor (n-OS) ITIC has attracted great attention for the application as an acceptor with medium bandgap p-type conjugated polymer as donor in nonfullerene polymer solar cells (PSCs) because of its attractive photovoltaic performance. Here we report a modification on the molecular structure of ITIC by side-chain isomerization with meta-alkyl-phenyl substitution, m-ITIC, to further improve its photovoltaic performance. In a comparison with its isomeric counterpart ITIC with para-alkyl-phenyl substitution, m-ITIC shows a higher film absorption coefficient, a larger crystalline coherence, and higher electron mobility. These inherent advantages of m-ITIC resulted in a higher power conversion efficiency (PCE) of 11.77% for the nonfullerene PSCs with m-ITIC as acceptor and a medium bandgap polymer J61 as donor, which is significantly improved over that (10.57%) of the corresponding devices with ITIC as acceptor. To the best of our knowledge, the PCE of 11.77% is one of the highest values reported in the literature to date for nonfullerene PSCs. More importantly, the m-ITIC-based device shows less thickness-dependent photovoltaic behavior than ITIC-based devices in the active-layer thickness range of 80-360 nm, which is beneficial for large area device fabrication. 
These results indicate that m-ITIC is a promising low bandgap n-OS for the application as an acceptor in PSCs, and the side-chain isomerization could be an easy and convenient way to further improve the photovoltaic performance of the donor and acceptor materials for high efficiency PSCs.", "which Power conversion efficiency ?", "11.77", 750.0, 755.0], ["Three novel non-fullerene small molecular acceptors ITOIC, ITOIC-F, and ITOIC-2F were designed and synthesized with easy chemistry. The concept of supramolecular chemistry was successfully used in the molecular design, which includes noncovalently conformational locking (via intrasupramolecular interaction) to enhance the planarity of backbone and electrostatic interaction (intersupramolecular interaction) to enhance the \u03c0\u2013\u03c0 stacking of terminal groups. Fluorination can further strengthen the intersupramolecular electrostatic interaction of terminal groups. As expected, the designed acceptors exhibited excellent device performance when blended with polymer donor PBDB-T. In comparison with the parent acceptor molecule DC-IDT2T reported in the literature with a power conversion efficiency (PCE) of 3.93%, ITOIC with a planar structure exhibited a PCE of 8.87% and ITOIC-2F with a planar structure and enhanced electrostatic interaction showed a quite impressive PCE of 12.17%. Our result demonstrates the import...", "which Power conversion efficiency ?", "12.17", 978.0, 983.0], ["Naphtho[1,2\u2010b:5,6\u2010b\u2032]dithiophene is extended to a fused octacyclic building block, which is end capped by strong electron\u2010withdrawing 2\u2010(5,6\u2010difluoro\u20103\u2010oxo\u20102,3\u2010dihydro\u20101H\u2010inden\u20101\u2010ylidene)malononitrile to yield a fused\u2010ring electron acceptor (IOIC2) for organic solar cells (OSCs). Relative to naphthalene\u2010based IHIC2, naphthodithiophene\u2010based IOIC2 with a larger \u03c0\u2010conjugation and a stronger electron\u2010donating core shows a higher lowest unoccupied molecular orbital energy level (IOIC2: \u22123.78 eV vs IHIC2: \u22123.86 eV), broader absorption with a smaller optical bandgap (IOIC2: 1.55 eV vs IHIC2: 1.66 eV), and a higher electron mobility (IOIC2: 1.0 \u00d7 10\u22123 cm2 V\u22121 s\u22121 vs IHIC2: 5.0 \u00d7 10\u22124 cm2 V\u22121 s\u22121). Thus, IOIC2\u2010based OSCs show higher values in open\u2010circuit voltage, short\u2010circuit current density, fill factor, and thereby much higher power conversion efficiency (PCE) values than those of the IHIC2\u2010based counterpart. In particular, as\u2010cast OSCs based on FTAZ: IOIC2 yield PCEs of up to 11.2%, higher than that of the control devices based on FTAZ: IHIC2 (7.45%). Furthermore, by using 0.2% 1,8\u2010diiodooctane as the processing additive, a PCE of 12.3% is achieved from the FTAZ:IOIC2\u2010based devices, higher than that of the FTAZ:IHIC2\u2010based devices (7.31%). 
These results indicate that incorporating extended conjugation into the electron\u2010donating fused\u2010ring units in nonfullerene acceptors is a promising strategy for designing high\u2010performance electron acceptors.", "which Power conversion efficiency ?", "12.3", 1146.0, 1150.0], ["With an indenoindene core, a new thieno[3,4\u2010b]thiophene\u2010based small\u2010molecule electron acceptor, 2,2\u2032\u2010((2Z,2\u2032Z)\u2010((6,6\u2032\u2010(5,5,10,10\u2010tetrakis(2\u2010ethylhexyl)\u20105,10\u2010dihydroindeno[2,1\u2010a]indene\u20102,7\u2010diyl)bis(2\u2010octylthieno[3,4\u2010b]thiophene\u20106,4\u2010diyl))bis(methanylylidene))bis(5,6\u2010difluoro\u20103\u2010oxo\u20102,3\u2010dihydro\u20101H\u2010indene\u20102,1\u2010diylidene))dimalononitrile (NITI), is successfully designed and synthesized. Compared with 12\u2010\u03c0\u2010electron fluorene, a carbon\u2010bridged biphenylene with an axial symmetry, indenoindene, a carbon\u2010bridged E\u2010stilbene with a centrosymmetry, shows elongated \u03c0\u2010conjugation with 14 \u03c0\u2010electrons and one more sp3 carbon bridge, which may increase the tunability of electronic structure and film morphology. Despite its twisted molecular framework, NITI shows a low optical bandgap of 1.49 eV in thin film and a high molar extinction coefficient of 1.90 \u00d7 105m\u22121 cm\u22121 in solution. By matching NITI with a large\u2010bandgap polymer donor, an extraordinary power conversion efficiency of 12.74% is achieved, which is among the best performance so far reported for fullerene\u2010free organic photovoltaics and is inspiring for the design of new electron acceptors.", "which Power conversion efficiency ?", "12.74", 975.0, 980.0], ["A new electron\u2010rich central building block, 5,5,12,12\u2010tetrakis(4\u2010hexylphenyl)\u2010indacenobis\u2010(dithieno[3,2\u2010b:2\u2032,3\u2032\u2010d]pyrrol) (INP), and two derivative nonfullerene acceptors (INPIC and INPIC\u20104F) are designed and synthesized. The two molecules reveal broad (600\u2013900 nm) and strong absorption due to the satisfactory electron\u2010donating ability of INP. Compared with its counterpart INPIC, fluorinated nonfullerene acceptor INPIC\u20104F exhibits a stronger near\u2010infrared absorption with a narrower optical bandgap of 1.39 eV, an improved crystallinity with higher electron mobility, and down\u2010shifted highest occupied molecular orbital and lowest unoccupied molecular orbital energy levels. Organic solar cells (OSCs) based on INPIC\u20104F exhibit a high power conversion efficiency (PCE) of 13.13% and a relatively low energy loss of 0.54 eV, which is among the highest efficiencies reported for binary OSCs in the literature. The results demonstrate the great potential of the new INP as an electron\u2010donating building block for constructing high\u2010performance nonfullerene acceptors for OSCs.", "which Power conversion efficiency ?", "13.13", 776.0, 781.0], ["A simple small molecule acceptor named DICTF, with fluorene as the central block and 2-(2,3-dihydro-3-oxo-1H-inden-1-ylidene)propanedinitrile as the end-capping groups, has been designed for fullerene-free organic solar cells. The new molecule was synthesized from widely available and inexpensive commercial materials in only three steps with a high overall yield of \u223c60%. 
Fullerene-free organic solar cells with DICTF as the acceptor material provide a high PCE of 7.93%.", "which Power conversion efficiency (%) ?", "7.93", 467.0, 471.0], ["Background: Paclitaxel (PTX) is one of the most important and effective anticancer drugs for the treatment of human cancer. However, its low solubility and severe adverse effects limited clinical use. To overcome this limitation, nanotechnology has been used to overcome tumors due to its excellent antimicrobial activity. Objective: This study was to demonstrate the anticancer properties of functionalization silver nanoparticles loaded with paclitaxel (Ag@PTX) induced A549 cells apoptosis through ROS-mediated signaling pathways. Methods: The Ag@PTX nanoparticles were charged with a zeta potential of about -17 mv and characterized around 2 nm with a narrow size distribution. Results: Ag@PTX significantly decreased the viability of A549 cells and possessed selectivity between cancer and normal cells. Ag@PTX induced A549 cells apoptosis was confirmed by nuclear condensation, DNA fragmentation, and activation of caspase-3. Furthermore, Ag@PTX enhanced the anti-cancer activity of A549 cells through ROS-mediated p53 and AKT signalling pathways. Finally, in a xenograft nude mice model, Ag@PTX suppressed the growth of tumors. Conclusion: Our findings suggest that Ag@PTX may be a candidate as a chemopreventive agent and could be a highly efficient way to achieve anticancer synergism for human cancers.", "which Zeta potential ?", "-17", NaN, NaN], ["Abstract Objective: Paclitaxel (PTX)-loaded polymer (Poly(lactic-co-glycolic acid), PLGA)-based nanoformulation was developed with the objective of formulating cremophor EL-free nanoformulation intended for intravenous use. Significance: The polymeric PTX nanoparticles free from the cremophor EL will help in eliminating the shortcomings of the existing delivery system as cremophor EL causes serious allergic reactions to the subjects after intravenous use. Methods and results: Paclitaxel-loaded nanoparticles were formulated by nanoprecipitation method. The diminutive nanoparticles (143.2 nm) with uniform size throughout (polydispersity index, 0.115) and high entrapment efficiency (95.34%) were obtained by employing the Box\u2013Behnken design for the optimization of the formulation with the aid of desirability approach-based numerical optimization technique. Optimized levels for each factor viz. polymer concentration (X1), amount of organic solvent (X2), and surfactant concentration (X3) were 0.23%, 5 ml %, and 1.13%, respectively. The results of the hemocompatibility studies confirmed the safety of PLGA-based nanoparticles for intravenous administration. Pharmacokinetic evaluations confirmed the longer retention of PTX in systemic circulation. Conclusion: In a nutshell, the developed polymeric nanoparticle formulation of PTX precludes the inadequacy of existing PTX formulation and can be considered as superior alternative carrier system of the same.", "which Poly dispercity index (PDI) ?", "0.115", 650.0, 655.0], ["Radioresistant hypoxic cells may contribute to the failure of radiation therapy in controlling certain tumors. Some studies have suggested the radiosensitizing effect of paclitaxel. The poly(D,L-lactide-co-glycolide)(PLGA) nanoparticles containing paclitaxel were prepared by o/w emulsification-solvent evaporation method. The physicochemical characteristics of the nanoparticles (i.e. 
encapsulation efficiency, particle size distribution, morphology, in vitro release) were studied. The morphology of the two human tumor cell lines: a carcinoma cervicis (HeLa) and a hepatoma (HepG2), treated with paclitaxel-loaded nanoparticles was photomicrographed. Flow cytometry was used to quantify the number of the tumor cells held in the G2/M phase of the cell cycle. The cellular uptake of nanoparticles was evaluated by transmission electronic microscopy. Cell viability was determined by the ability of single cell to form colonies in vitro. The prepared nanoparticles were spherical in shape with size between 200nm and 800nm. The encapsulation efficiency was 85.5\uff05. The release behaviour of paclitaxel from the nanoparticles exhibited a biphasic pattern characterised by a fast initial release during the first 24 h, followed by a slower and continuous release. Co-culture of the two tumor cell lines with paclitaxel-loaded nanoparticles demonstrated that the cell morphology was changed and the released paclitaxel retained its bioactivity to block cells in the G2/M phase. The cellular uptake of nanoparticles was observed. The free paclitaxel and paclitaxel-loaded nanoparticles effectively sensitized hypoxic HeLa and HepG2 cells to radiation. Under this experimental condition, the radiosensitization of paclitaxel-loaded nanoparticles was more significant than that of free paclitaxel.Keywords: Paclitaxel\uff1bDrug delivery\uff1bNanoparticle\uff1bRadiotherapy\uff1bHypoxia\uff1bHuman tumor cells\uff1bcellular uptake", "which Drug entrapment efficiency (%) ?", "85.5", 1058.0, 1062.0], ["Abstract Objective: Paclitaxel (PTX)-loaded polymer (Poly(lactic-co-glycolic acid), PLGA)-based nanoformulation was developed with the objective of formulating cremophor EL-free nanoformulation intended for intravenous use. Significance: The polymeric PTX nanoparticles free from the cremophor EL will help in eliminating the shortcomings of the existing delivery system as cremophor EL causes serious allergic reactions to the subjects after intravenous use. Methods and results: Paclitaxel-loaded nanoparticles were formulated by nanoprecipitation method. The diminutive nanoparticles (143.2 nm) with uniform size throughout (polydispersity index, 0.115) and high entrapment efficiency (95.34%) were obtained by employing the Box\u2013Behnken design for the optimization of the formulation with the aid of desirability approach-based numerical optimization technique. Optimized levels for each factor viz. polymer concentration (X1), amount of organic solvent (X2), and surfactant concentration (X3) were 0.23%, 5 ml %, and 1.13%, respectively. The results of the hemocompatibility studies confirmed the safety of PLGA-based nanoparticles for intravenous administration. Pharmacokinetic evaluations confirmed the longer retention of PTX in systemic circulation. Conclusion: In a nutshell, the developed polymeric nanoparticle formulation of PTX precludes the inadequacy of existing PTX formulation and can be considered as superior alternative carrier system of the same.", "which Drug entrapment efficiency (%) ?", "95.34", 689.0, 694.0], ["Abstract Objective: Paclitaxel (PTX)-loaded polymer (Poly(lactic-co-glycolic acid), PLGA)-based nanoformulation was developed with the objective of formulating cremophor EL-free nanoformulation intended for intravenous use. 
Significance: The polymeric PTX nanoparticles free from the cremophor EL will help in eliminating the shortcomings of the existing delivery system as cremophor EL causes serious allergic reactions to the subjects after intravenous use. Methods and results: Paclitaxel-loaded nanoparticles were formulated by nanoprecipitation method. The diminutive nanoparticles (143.2 nm) with uniform size throughout (polydispersity index, 0.115) and high entrapment efficiency (95.34%) were obtained by employing the Box\u2013Behnken design for the optimization of the formulation with the aid of desirability approach-based numerical optimization technique. Optimized levels for each factor viz. polymer concentration (X1), amount of organic solvent (X2), and surfactant concentration (X3) were 0.23%, 5 ml %, and 1.13%, respectively. The results of the hemocompatibility studies confirmed the safety of PLGA-based nanoparticles for intravenous administration. Pharmacokinetic evaluations confirmed the longer retention of PTX in systemic circulation. Conclusion: In a nutshell, the developed polymeric nanoparticle formulation of PTX precludes the inadequacy of existing PTX formulation and can be considered as superior alternative carrier system of the same.", "which Particle size of nanoparticles (nm) ?", "143.2", 588.0, 593.0], ["In this study, temperature-dependent electrical properties of n-type Ga-doped ZnO thin film / p-type Si nanowire heterojunction diodes were reported. Metal-assisted chemical etching (MACE) process was performed to fabricate Si nanowires. Ga-doped ZnO films were then deposited onto nanowires through chemical bath deposition (CBD) technique to build three-dimensional nanowire-based heterojunction diodes. Fabricated devices revealed significant diode characteristics in the temperature range of 220 - 360 K. Electrical measurements shown that diodes had a well-defined rectifying behavior with a good rectification ratio of 103 \u00b13 V at room temperature. Ideality factor (n) were changed from 2.2 to 1.2 with increasing temperature.", "which Ideality factor ?", "2.2", 693.0, 696.0], ["An electrically conductive ultralow percolation threshold of 0.1 wt% graphene was observed in the thermoplastic polyurethane (TPU) nanocomposites. The homogeneously dispersed graphene effectively enhanced the mechanical properties of TPU significantly at a low graphene loading of 0.2 wt%. These nanocomposites were subjected to cyclic loading to investigate the influences of graphene loading, strain amplitude and strain rate on the strain sensing performances. 
The two dimensional graphene and the flexible TPU matrix were found to endow these nanocomposites with a wide range of strain sensitivity (gauge factor ranging from 0.78 for TPU with 0.6 wt% graphene at the strain rate of 0.1 min\u22121 to 17.7 for TPU with 0.2 wt% graphene at the strain rate of 0.3 min\u22121) and good sensing stability for different strain patterns. In addition, these nanocomposites demonstrated good recoverability and reproducibility after stabilization by cyclic loading. An analytical model based on tunneling theory was used to simulate the resistance response to strain under different strain rates. The change in the number of conductive pathways and tunneling distance under strain was responsible for the observed resistance-strain behaviors. This study provides guidelines for the fabrication of graphene based polymer strain sensors.", "which Gauge Factor (GF) ?", "17.7", 699.0, 703.0], ["The construction of a continuous conductive network with a low percolation threshold plays a key role in fabricating a high performance strain sensor. Herein, a highly stretchable and sensitive strain sensor based on binary rubber blend/graphene was fabricated by a simple and effective assembly approach. A novel double-interconnected network composed of compactly continuous graphene conductive networks was designed and constructed using the composites, thereby resulting in an ultralow percolation threshold of 0.3 vol%, approximately 12-fold lower than that of the conventional graphene-based composites with a homogeneously dispersed morphology (4.0 vol%). Near the percolation threshold, the sensors could be stretched in excess of 100% applied strain, and exhibited a high stretchability, sensitivity (gauge factor \u223c82.5) and good reproducibility (\u223c300 cycles) of up to 100% strain under cyclic tensile tests. The proposed strategy provides a novel effective approach for constructing a double-interconnected conductive network using polymer composites, and is very competitive for developing and designing high performance strain sensors.", "which Gauge Factor (GF) ?", "82.5", 824.0, 828.0], ["Herein, we incorporated dual biotemplates, i.e., cellulose nanocrystals (CNC) and apoferritin, into electrospinning solution to achieve three distinct benefits, i.e., (i) facile synthesis of a WO3 nanotube by utilizing the self-agglomerating nature of CNC in the core of as-spun nanofibers, (ii) effective sensitization by partial phase transition from WO3 to Na2W4O13 induced by interaction between sodium-doped CNC and WO3 during calcination, and (iii) uniform functionalization with monodispersive apoferritin-derived Pt catalytic nanoparticles (2.22 \u00b1 0.42 nm). Interestingly, the sensitization effect of Na2W4O13 on WO3 resulted in highly selective H2S sensing characteristics against seven different interfering molecules. Furthermore, synergistic effects with a bioinspired Pt catalyst induced a remarkably enhanced H2S response ( Rair/ Rgas = 203.5), unparalleled selectivity ( Rair/ Rgas < 1.3 for the interfering molecules), and rapid response (<10 s)/recovery (<30 s) time at 1 ppm of H2S under 95% relative humidity level. 
This work paves the way for a new class of cosensitization routes to overcome critical shortcomings of SMO-based chemical sensors, thus providing a potential platform for diagnosis of halitosis.", "which Gas response (S=Ra/Rg) ?", "203.5", 851.0, 856.0], ["Objectives: To estimate the prevalence and incidence of epilepsy in Italy using a national database of general practitioners (GPs). Methods: The Health Search CSD Longitudinal Patient Database (HSD) has been established in 1998 by the Italian College of GPs. Participants were 700 GPs, representing a population of 912,458. For each patient, information on age and sex, EEG, CT scan, and MRI was included. Prevalent cases with a diagnosis of \u2018epilepsy' (ICD9CM: 345*) were selected in the 2011 population. Incident cases of epilepsy were identified in 2011 by excluding patients diagnosed for epilepsy and convulsions and those with EEG, CT scan, MRI prescribed for epilepsy and/or convulsions in the previous years. Crude and standardized (Italian population) prevalence and incidence were calculated. Results: Crude prevalence of epilepsy was 7.9 per 1,000 (men 8.1; women 7.7). The highest prevalence was in patients <25 years and \u226575 years. The incidence of epilepsy was 33.5 per 100,000 (women 35.3; men 31.5). The highest incidence was in women <25 years and in men 75 years or older. Conclusions: Prevalence and incidence of epilepsy in this study were similar to those of other industrialized countries. HSD appears as a reliable data source for the surveillance of epilepsy in Italy. i 2014 S. Karger AG, Basel", "which Females ?", "7.7", 875.0, 878.0], ["Objectives: To estimate the prevalence and incidence of epilepsy in Italy using a national database of general practitioners (GPs). Methods: The Health Search CSD Longitudinal Patient Database (HSD) has been established in 1998 by the Italian College of GPs. Participants were 700 GPs, representing a population of 912,458. For each patient, information on age and sex, EEG, CT scan, and MRI was included. Prevalent cases with a diagnosis of \u2018epilepsy' (ICD9CM: 345*) were selected in the 2011 population. Incident cases of epilepsy were identified in 2011 by excluding patients diagnosed for epilepsy and convulsions and those with EEG, CT scan, MRI prescribed for epilepsy and/or convulsions in the previous years. Crude and standardized (Italian population) prevalence and incidence were calculated. Results: Crude prevalence of epilepsy was 7.9 per 1,000 (men 8.1; women 7.7). The highest prevalence was in patients <25 years and \u226575 years. The incidence of epilepsy was 33.5 per 100,000 (women 35.3; men 31.5). The highest incidence was in women <25 years and in men 75 years or older. Conclusions: Prevalence and incidence of epilepsy in this study were similar to those of other industrialized countries. HSD appears as a reliable data source for the surveillance of epilepsy in Italy. i 2014 S. Karger AG, Basel", "which Overall ?", "7.9", 845.0, 848.0], ["Objectives: To estimate the prevalence and incidence of epilepsy in Italy using a national database of general practitioners (GPs). Methods: The Health Search CSD Longitudinal Patient Database (HSD) has been established in 1998 by the Italian College of GPs. Participants were 700 GPs, representing a population of 912,458. For each patient, information on age and sex, EEG, CT scan, and MRI was included. Prevalent cases with a diagnosis of \u2018epilepsy' (ICD9CM: 345*) were selected in the 2011 population. 
Incident cases of epilepsy were identified in 2011 by excluding patients diagnosed for epilepsy and convulsions and those with EEG, CT scan, MRI prescribed for epilepsy and/or convulsions in the previous years. Crude and standardized (Italian population) prevalence and incidence were calculated. Results: Crude prevalence of epilepsy was 7.9 per 1,000 (men 8.1; women 7.7). The highest prevalence was in patients <25 years and \u226575 years. The incidence of epilepsy was 33.5 per 100,000 (women 35.3; men 31.5). The highest incidence was in women <25 years and in men 75 years or older. Conclusions: Prevalence and incidence of epilepsy in this study were similar to those of other industrialized countries. HSD appears as a reliable data source for the surveillance of epilepsy in Italy. i 2014 S. Karger AG, Basel", "which Males ?", "8.1", 864.0, 867.0], ["Total carbon dioxide (TCO 2) and computations of partial pressure of carbon dioxide (pCO 2) had been examined in Northerneastern region of Indian Ocean. It exhibit seasonal and spatial variability. North-south gradients in the pCO 2 levels were closely related to gradients in salinity caused by fresh water discharge received from rivers. Eddies observed in this region helped to elevate the nutrients availability and the biological controls by increasing the productivity. These phenomena elevated the carbon dioxide draw down during the fair seasons. Seasonal fluxes estimated from local wind speed and air-sea carbon dioxide difference indicate that during southwest monsoon, the northeastern Indian Ocean acts as a strong sink of carbon dioxide (-20.04 mmol m \u20132 d -1 ). Also during fall intermonsoon the area acts as a weak sink of carbon dioxide (-4.69 mmol m \u20132 d -1 ). During winter monsoon, this region behaves as a weak carbon dioxide source with an average sea to air flux of 4.77 mmol m -2 d -1 . In the northern region, salinity levels in the surface level are high during winter compared to the other two seasons. Northeastern Indian Ocean shows significant intraseasonal variability in carbon dioxide fluxes that are mediated by eddies which provide carbon dioxide and nutrients from the subsurface waters to the mixed layer.", "which CO2 flux (lower limit) ?", "-20", NaN, NaN], ["Biological N2 fixation rates were quantified in the Eastern Tropical South Pacific (ETSP) during both El Ni\u00f1o (February 2010) and La Ni\u00f1a (March\u2013April 2011) conditions, and from Low\u2010Nutrient, Low\u2010Chlorophyll (20\u00b0S) to High\u2010Nutrient, Low\u2010Chlorophyll (HNLC) (10\u00b0S) conditions. N2 fixation was detected at all stations with rates ranging from 0.01 to 0.88 nmol N L\u22121 d\u22121, with higher rates measured during El Ni\u00f1o conditions compared to La Ni\u00f1a. High N2 fixations rates were reported at northern stations (HNLC conditions) at the oxycline and in the oxygen minimum zone (OMZ), despite nitrate concentrations up to 30 \u00b5mol L\u22121, indicating that inputs of new N can occur in parallel with N loss processes in OMZs. Water\u2010column integrated N2 fixation rates ranged from 4 to 53 \u00b5mol N m\u22122 d\u22121 at northern stations, and from 0 to 148 \u00b5mol m\u22122 d\u22121 at southern stations, which are of the same order of magnitude as N2 fixation rates measured in the oligotrophic ocean. N2 fixation rates responded significantly to Fe and organic carbon additions in the surface HNLC waters, and surprisingly by concomitant Fe and N additions in surface waters at the edge of the subtropical gyre. 
Recent studies have highlighted the predominance of heterotrophic diazotrophs in this area, and we hypothesize that N2 fixation could be directly limited by inorganic nutrient availability, or indirectly through the stimulation of primary production and the subsequent excretion of dissolved organic matter and/or the formation of micro\u2010environments favorable for heterotrophic N2 fixation.", "which Volumetric N2 fixation rate (lower limit) ?", "0.01", 340.0, 344.0], ["A combination of 15N2 labeling, Tyramide Signal Amplification\u2013Fluorescent in Situ Hybridization (TSA\u2010FISH) assay, and chemical analyses were performed along a trophic gradient (8000 km) in the equatorial Pacific. Nitrogen fixation rates were low (0.06 \u00b1 0.02 to 2.8 \u00b1 2.1 nmol L\u22121 d\u22121) in HNLC waters, higher in the warm pool (0.11 \u00b1 0.0 to 18.2 \u00b1 2.8 nmol L\u22121 d\u22121), and extremely high close to Papua New Guinea (38 \u00b1 9 to 610 \u00b1 46 nmol L\u22121 d\u22121). Rates attributed to the <10\u2010\u03bcm fraction accounted for 74% of total activity. Both unicellular and filamentous diazotrophs were detected and reached 17 cells mL\u22121 and 1.85 trichome mL\u22121. Unicellular diazotrophs were found to be free\u2010living in <10\u2010\u03bcm fraction or in association with mucilage, particles, or eukaryotes in the >10\u2010\u03bcm fraction, leading to a possible overestimation of this fraction to total N2 fixation. In oceanic waters, 98% of the unicellular diazotrophs were picoplanktonic. Finally, we found a clear longitudinal pattern of niche partitioning between diazotroph groups: while unicellular diazotrophs were present all along the transect, Trichodesmium spp. were detected only in coastal waters, where nitrogen fixation associated to both size fractions was greatly stimulated.", "which Volumetric N2 fixation rate (lower limit) ?", "0.06", 247.0, 251.0], ["Specialized prokaryotes performing biological dinitrogen (N2) fixation ('diazotrophs') provide an important source of fixed nitrogen in oligotrophic marine ecosystems such as tropical and subtropical oceans. In these waters, cyanobacterial photosynthetic diazotrophs are well known to be abundant and active, yet the role and contribution of non-cyanobacterial diazotrophs are currently unclear. The latter are not photosynthetic (here called 'heterotrophic') and hence require external sources of organic matter to sustain N2 fixation. Here we added the photosynthesis inhibitor 3-(3,4-dichlorophenyl)-1,1-dimethylurea (DCMU) to estimate the N2 fixation potential of heterotrophic diazotrophs as compared to autotrophic ones. Additionally, we explored the influence of dissolved organic matter (DOM) on these diazotrophs along a coast to open ocean gradient in the surface waters of a subtropical coral lagoon (New Caledonia). Total N2 fixation (samples not amended with DCMU) ranged from 0.66 to 1.32 nmol N L-1 d-1. The addition of DCMU reduced N2 fixation by >90%, suggesting that the contribution of heterotrophic diazotrophs to overall N2 fixation activity was minor in this environment. Higher contribution of heterotrophic diazotrophs occurred in stations closer to the shore and coincided with the decreasing lability of DOM, as shown by various colored DOM and fluorescent DOM (CDOM and FDOM) indices. This suggests that heterotrophic N2 fixation is favored when labile DOM compounds are available. 
We tested the response of diazotrophs (in terms of nifH gene expression and bulk N2 fixation rates) upon the addition of a mix of carbohydrates ('DOC' treatment), amino acids ('DON' treatment), and phosphonates and phosphomonesters ('DOP' treatment). While nifH expression increased significantly in Trichodesmium exposed to the DOC treatment, bulk N2 fixation rates increased significantly only in the DOP treatment. The lack of nifH expression by gammaproteobacteria, in any of the DOM addition treatments applied, questions the contribution of non-cyanobacterial diazotrophs to fixed nitrogen inputs in the New Caledonian lagoon. While the metabolism and ecology of heterotrophic diazotrophs is currently elusive, a deeper understanding of their ecology and relationship with DOM is needed in the light of increased DOM inputs in coastal zones due to anthropogenic pressure.", "which Volumetric N2 fixation rate (lower limit) ?", "0.66", 990.0, 994.0], ["Abstract. Diazotrophic activity and primary production (PP) were investigated along two transects (Belgica BG2014/14 and GEOVIDE cruises) off the western Iberian Margin and the Bay of Biscay in May 2014. Substantial N2 fixation activity was observed at 8 of the 10 stations sampled, ranging overall from 81 to 384 \u00b5mol N m\u22122 d\u22121 (0.7 to 8.2 nmol N L\u22121 d\u22121), with two sites close to the Iberian Margin situated between 38.8 and 40.7\u2218 N yielding rates reaching up to 1355 and 1533 \u00b5mol N m\u22122 d\u22121. Primary production was relatively lower along the Iberian Margin, with rates ranging from 33 to 59 mmol C m\u22122 d\u22121, while it increased towards the northwest away from the peninsula, reaching as high as 135 mmol C m\u22122 d\u22121. In agreement with the area-averaged Chl a satellite data contemporaneous with our study period, our results revealed that post-bloom conditions prevailed at most sites, while at the northwesternmost station the bloom was still ongoing. When converted to carbon uptake using Redfield stoichiometry, N2 fixation could support 1 % to 3 % of daily PP in the euphotic layer at most sites, except at the two most active sites where this contribution to daily PP could reach up to 25 %. At the two sites where N2 fixation activity was the highest, the prymnesiophyte\u2013symbiont Candidatus Atelocyanobacterium thalassa (UCYN-A) dominated the nifH sequence pool, while the remaining recovered sequences belonged to non-cyanobacterial phylotypes. At all the other sites, however, the recovered nifH sequences were exclusively assigned phylogenetically to non-cyanobacterial phylotypes. The intense N2 fixation activities recorded at the time of our study were likely promoted by the availability of phytoplankton-derived organic matter produced during the spring bloom, as evidenced by the significant surface particulate organic carbon concentrations. Also, the presence of excess phosphorus signature in surface waters seemed to contribute to sustaining N2 fixation, particularly at the sites with extreme activities. 
These results provide a mechanistic understanding of the unexpectedly high N2 fixation in productive waters of the temperate North Atlantic and highlight the importance of N2 fixation for future assessment of the global N inventory.", "which Volumetric N2 fixation rate (lower limit) ?", "0.7", 330.0, 333.0], ["One challenge in field-based marine microbial ecology is to achieve sufficient spatial resolution to obtain representative information about microbial distributions and biogeochemical processes. The challenges are exacerbated when conducting rate measurements of biological processes due to potential perturbations during sampling and incubation. Here we present the first application of a robotic microlaboratory, the 4 L-submersible incubation device (SID), for conducting in situ measurements of the rates of biological nitrogen (N2) fixation (BNF). The free-drifting autonomous instrument obtains samples from the water column that are incubated in situ after the addition of 15N2 tracer. After each of up to four consecutive incubation experiments, the 4-L sample is filtered and chemically preserved. Measured BNF rates from two deployments of the SID in the oligotrophic North Pacific ranged from 0.8 to 2.8 nmol N L?1 day?1, values comparable with simultaneous rate measurements obtained using traditional conductivity\u2013temperature\u2013depth (CTD)\u2013rosette sampling followed by on-deck or in situ incubation. Future deployments of the SID will help to better resolve spatial variability of oceanic BNF, particularly in areas where recovery of seawater samples by CTD compromises their integrity, e.g. anoxic habitats.", "which Volumetric N2 fixation rate (lower limit) ?", "0.8", 904.0, 907.0], ["A continuous record of atmospheric N2O measured from a tower in northern California captures strong pulses of N2O released by coastal upwelling events. The atmospheric record offers a unique, observation\u2010based method for quantifying the coastal N2O source. A coastal upwelling model is developed and compared to the constraints imposed by the atmospheric record in the Pacific Northwest coastal region. The upwelling model is based on Ekman theory and driven by high\u2010resolution wind and SST data and by relationships between subsurface N2O and temperature. A simplified version of the upwelling model is extended to the world's major eastern boundary regions to estimate a total coastal upwelling source of \u223c0.2 \u00b1 >70% Tg N2O\u2010N/yr. This flux represents \u223c5% of the total ocean source, estimated here at \u223c4 Tg N2O\u2010N/yr using traditional gas\u2010transfer methods, and is probably largely neglected in current N2O budgets.", "which N2O flux (upper limit) ?", "0.2", 708.0, 711.0], ["Abstract. We computed high-resolution (1o latitude x 1o longitude) seasonal and annual nitrous oxide (N2O) concentration fields for the Arabian Sea surface layer using a database containing more than 2400 values measured between December 1977 and July 1997. N2O concentrations are highest during the southwest (SW) monsoon along the southern Indian continental shelf. Annual emissions range from 0.33 to 0.70 Tg N2O and are dominated by fluxes from coastal regions during the SW and northeast monsoons. Our revised estimate for the annual N2O flux from the Arabian Sea is much more tightly constrained than the previous consensus derived using averaged in-situ data from a smaller number of studies. 
However, the tendency to focus on measurements in locally restricted features in combination with insufficient seasonal data coverage leads to considerable uncertainties of the concentration fields and thus in the flux estimates, especially in the coastal zones of the northern and eastern Arabian Sea. The overall mean relative error of the annual N2O emissions from the Arabian Sea was estimated to be at least 65%.", "which N2O flux (upper limit) ?", "0.7", NaN, NaN], ["Extensive measurements of nitrous oxide (N2O) have been made during April\u2013May 1994 (intermonsoon), February\u2013March 1995 (northeast monsoon), July\u2013August 1995 and August 1996 (southwest monsoon) in the Arabian Sea. Low N2O supersaturations in the surface waters are observed during intermonsoon compared to those in northeast and southwest monsoons. Spatial distributions of supersaturations manifest the effects of larger mixing during winter cooling and wind\u2010driven upwelling during monsoon period off the Indian west coast. A net positive flux is observable during all the seasons, with no discernible differences from the open ocean to coastal regions. The average ocean\u2010to\u2010atmosphere fluxes of N2O are estimated, using wind speed dependent gas transfer velocity, to be of the order of 0.26, 0.003, and 0.51, and 0.78 pg (pico grams) cm\u22122 s\u22121 during northeast monsoon, intermonsoon, and southwest monsoon in 1995 and 1996, respectively. The lower range of annual emission of N2O is estimated to be 0.56\u20130.76 Tg N2O per year which constitutes 13\u201317% of the net global oceanic source. However, N2O emission from the Arabian Sea can be as high as 1.0 Tg N2O per year using different gas transfer models.", "which N2O flux (upper limit) ?", "0.76", 1005.0, 1009.0], ["Dissolved and atmospheric nitrous oxide (N2O) were measured on the legs 3 and 5 of the R/V Meteor cruise 32 in the Arabian Sea. A cruise track along 65\u00b0E was followed during both the intermonsoon (May 1995) and the southwest (SW) monsoon (July/August 1995) periods. During the second leg the coastal and open ocean upwelling regions off the Arabian Peninsula were also investigated. Mean N2O saturations for the oceanic regions of the Arabian Sea were in the range of 99\u2013103% during the intermonsoon and 103\u2013230% during the SW monsoon. Computed annual emissions of 0.8\u20131.5 Tg N2O for the Arabian Sea are considerably higher than previous estimates, indicating that the role of upwelling regions, such as the Arabian Sea, may be more important than previously assumed in global budgets of oceanic N2O emissions.", "which N2O flux (upper limit) ?", "1.5", 569.0, 572.0], ["One of the shallowest, most intense oxygen minimum zones (OMZs) is found in the eastern tropical South Pacific, off northern Chile and southern Peru. It has a strong oxygen gradient (upper oxycline) and high N2O accumulation. N2O cycling by heterotrophic denitrification along the upper oxycline was studied by measuring N2O production and consumption rates using an improved acetylene blockage method. Dissolved N2O and its isotope (15N:14N ratio in N2O or \u03b415N) and isotopomer composition (intramolecular distribution of 15N in the N2O or \u03b415N\u03b1 and \u03b415N\u03b2), dissolved O2, nutrients, and other oceanographic variables were also measured. Strong N2O accumulation (up to 86 nmol L\u22121) was observed in the upper oxycline followed by a decline (around 8\u201012 nmol L\u22121) toward the OMZ core. 
N2O production rates by denitrification (NO2\u2212 reduction to N2O) were 2.25 to 50.0 nmol L\u22121 d\u22121, whereas N2O consumption rates (N2O reduction to N2) were 2.73 and 70.8 nmol L\u22121 d\u22121. \u03b415N in N2O increased from 8.57% in the middle oxycline (50\u2010m depth) to 14.87% toward the OMZ core (100\u2010m depth), indicating the progressive use of N2O as an electron acceptor by denitrifying organisms. Isotopomer signals of N2O (\u03b415N\u03b1 and \u03b415N\u03b2) showed an abrupt change at the middle oxycline, indicating different mechanisms of N2O production and consumption in this layer. Thus, partial denitrification along with aerobic ammonium oxidation appears to be responsible for N2O accumulation in the upper oxycline, where O2 levels fluctuate widely; N2O reduction, on the other hand, is an important pathway for N2 production. As a result, the proportion of N2O consumption relative to its production increased as O2 decreased toward the OMZ core. A N2O mass balance in the subsurface layer indicates that only a small amount of the gas could be effluxed into the atmosphere (12.7\u201030.7 \u00b5mol m\u22122 d\u22121) and that most N2O is used as an electron acceptor during denitrification (107\u2010468 \u00b5mol m\u22122 d\u22121).", "which N2O flux (upper limit) ?", "30.7", 1843.0, 1847.0], ["The spatiotemporal variability of upper ocean inorganic carbon parameters and air\u2010sea CO2 exchange in the Indian Ocean was examined using inorganic carbon data collected as part of the World Ocean Circulation Experiment (WOCE) cruises in 1995. Multiple linear regression methods were used to interpolate and extrapolate the temporally and geographically limited inorganic carbon data set to the entire Indian Ocean basin using other climatological hydrographic and biogeochemical data. The spatiotemporal distributions of total carbon dioxide (TCO2), alkalinity, and seawater pCO2 were evaluated for the Indian Ocean and regions of interest including the Arabian Sea, Bay of Bengal, and 10\u00b0N\u201335\u00b0S zones. The Indian Ocean was a net source of CO2 to the atmosphere, and a net sea\u2010to\u2010air CO2 flux of +237 \u00b1 132 Tg C yr\u22121 (+0.24 Pg C yr\u22121) was estimated. Regionally, the Arabian Sea, Bay of Bengal, and 10\u00b0N\u201310\u00b0S zones were perennial sources of CO2 to the atmosphere. In the 10\u00b0S\u201335\u00b0S zone, the CO2 sink or source status of the surface ocean shifts seasonally, although the region is a net oceanic sink of atmospheric CO2.", "which Average CO2 flux ?", "0.24", 820.0, 824.0], ["Abstract. We computed high-resolution (1o latitude x 1o longitude) seasonal and annual nitrous oxide (N2O) concentration fields for the Arabian Sea surface layer using a database containing more than 2400 values measured between December 1977 and July 1997. N2O concentrations are highest during the southwest (SW) monsoon along the southern Indian continental shelf. Annual emissions range from 0.33 to 0.70 Tg N2O and are dominated by fluxes from coastal regions during the SW and northeast monsoons. Our revised estimate for the annual N2O flux from the Arabian Sea is much more tightly constrained than the previous consensus derived using averaged in-situ data from a smaller number of studies. 
However, the tendency to focus on measurements in locally restricted features in combination with insufficient seasonal data coverage leads to considerable uncertainties of the concentration fields and thus in the flux estimates, especially in the coastal zones of the northern and eastern Arabian Sea. The overall mean relative error of the annual N2O emissions from the Arabian Sea was estimated to be at least 65%.", "which N2O flux (lower limit) ?", "0.33", 396.0, 400.0], ["Extensive measurements of nitrous oxide (N2O) have been made during April\u2013May 1994 (intermonsoon), February\u2013March 1995 (northeast monsoon), July\u2013August 1995 and August 1996 (southwest monsoon) in the Arabian Sea. Low N2O supersaturations in the surface waters are observed during intermonsoon compared to those in northeast and southwest monsoons. Spatial distributions of supersaturations manifest the effects of larger mixing during winter cooling and wind\u2010driven upwelling during monsoon period off the Indian west coast. A net positive flux is observable during all the seasons, with no discernible differences from the open ocean to coastal regions. The average ocean\u2010to\u2010atmosphere fluxes of N2O are estimated, using wind speed dependent gas transfer velocity, to be of the order of 0.26, 0.003, and 0.51, and 0.78 pg (pico grams) cm\u22122 s\u22121 during northeast monsoon, intermonsoon, and southwest monsoon in 1995 and 1996, respectively. The lower range of annual emission of N2O is estimated to be 0.56\u20130.76 Tg N2O per year which constitutes 13\u201317% of the net global oceanic source. However, N2O emission from the Arabian Sea can be as high as 1.0 Tg N2O per year using different gas transfer models.", "which N2O flux (lower limit) ?", "0.56", 1000.0, 1004.0], ["Dissolved and atmospheric nitrous oxide (N2O) were measured on the legs 3 and 5 of the R/V Meteor cruise 32 in the Arabian Sea. A cruise track along 65\u00b0E was followed during both the intermonsoon (May 1995) and the southwest (SW) monsoon (July/August 1995) periods. During the second leg the coastal and open ocean upwelling regions off the Arabian Peninsula were also investigated. Mean N2O saturations for the oceanic regions of the Arabian Sea were in the range of 99\u2013103% during the intermonsoon and 103\u2013230% during the SW monsoon. Computed annual emissions of 0.8\u20131.5 Tg N2O for the Arabian Sea are considerably higher than previous estimates, indicating that the role of upwelling regions, such as the Arabian Sea, may be more important than previously assumed in global budgets of oceanic N2O emissions.", "which N2O flux (lower limit) ?", "0.8", 565.0, 568.0], ["One of the shallowest, most intense oxygen minimum zones (OMZs) is found in the eastern tropical South Pacific, off northern Chile and southern Peru. It has a strong oxygen gradient (upper oxycline) and high N2O accumulation. N2O cycling by heterotrophic denitrification along the upper oxycline was studied by measuring N2O production and consumption rates using an improved acetylene blockage method. Dissolved N2O and its isotope (15N:14N ratio in N2O or \u03b415N) and isotopomer composition (intramolecular distribution of 15N in the N2O or \u03b415N\u03b1 and \u03b415N\u03b2), dissolved O2, nutrients, and other oceanographic variables were also measured. Strong N2O accumulation (up to 86 nmol L\u22121) was observed in the upper oxycline followed by a decline (around 8\u201012 nmol L\u22121) toward the OMZ core. 
N2O production rates by denitrification (NO2\u2212 reduction to N2O) were 2.25 to 50.0 nmol L\u22121 d\u22121, whereas N2O consumption rates (N2O reduction to N2) were 2.73 and 70.8 nmol L\u22121 d\u22121. \u03b415N in N2O increased from 8.57% in the middle oxycline (50\u2010m depth) to 14.87% toward the OMZ core (100\u2010m depth), indicating the progressive use of N2O as an electron acceptor by denitrifying organisms. Isotopomer signals of N2O (\u03b415N\u03b1 and \u03b415N\u03b2) showed an abrupt change at the middle oxycline, indicating different mechanisms of N2O production and consumption in this layer. Thus, partial denitrification along with aerobic ammonium oxidation appears to be responsible for N2O accumulation in the upper oxycline, where O2 levels fluctuate widely; N2O reduction, on the other hand, is an important pathway for N2 production. As a result, the proportion of N2O consumption relative to its production increased as O2 decreased toward the OMZ core. A N2O mass balance in the subsurface layer indicates that only a small amount of the gas could be effluxed into the atmosphere (12.7\u201030.7 \u00b5mol m\u22122 d\u22121) and that most N2O is used as an electron acceptor during denitrification (107\u2010468 \u00b5mol m\u22122 d\u22121).", "which N2O flux (lower limit) ?", "12.7", 1838.0, 1842.0], ["Biological N2 fixation rates were quantified in the Eastern Tropical South Pacific (ETSP) during both El Ni\u00f1o (February 2010) and La Ni\u00f1a (March\u2013April 2011) conditions, and from Low\u2010Nutrient, Low\u2010Chlorophyll (20\u00b0S) to High\u2010Nutrient, Low\u2010Chlorophyll (HNLC) (10\u00b0S) conditions. N2 fixation was detected at all stations with rates ranging from 0.01 to 0.88 nmol N L\u22121 d\u22121, with higher rates measured during El Ni\u00f1o conditions compared to La Ni\u00f1a. High N2 fixations rates were reported at northern stations (HNLC conditions) at the oxycline and in the oxygen minimum zone (OMZ), despite nitrate concentrations up to 30 \u00b5mol L\u22121, indicating that inputs of new N can occur in parallel with N loss processes in OMZs. Water\u2010column integrated N2 fixation rates ranged from 4 to 53 \u00b5mol N m\u22122 d\u22121 at northern stations, and from 0 to 148 \u00b5mol m\u22122 d\u22121 at southern stations, which are of the same order of magnitude as N2 fixation rates measured in the oligotrophic ocean. N2 fixation rates responded significantly to Fe and organic carbon additions in the surface HNLC waters, and surprisingly by concomitant Fe and N additions in surface waters at the edge of the subtropical gyre. Recent studies have highlighted the predominance of heterotrophic diazotrophs in this area, and we hypothesize that N2 fixation could be directly limited by inorganic nutrient availability, or indirectly through the stimulation of primary production and the subsequent excretion of dissolved organic matter and/or the formation of micro\u2010environments favorable for heterotrophic N2 fixation.", "which Volumetric N2 fixation rate (upper limit) ?", "0.88", 348.0, 352.0], ["Specialized prokaryotes performing biological dinitrogen (N2) fixation ('diazotrophs') provide an important source of fixed nitrogen in oligotrophic marine ecosystems such as tropical and subtropical oceans. In these waters, cyanobacterial photosynthetic diazotrophs are well known to be abundant and active, yet the role and contribution of non-cyanobacterial diazotrophs are currently unclear. 
The latter are not photosynthetic (here called 'heterotrophic') and hence require external sources of organic matter to sustain N2 fixation. Here we added the photosynthesis inhibitor 3-(3,4-dichlorophenyl)-1,1-dimethylurea (DCMU) to estimate the N2 fixation potential of heterotrophic diazotrophs as compared to autotrophic ones. Additionally, we explored the influence of dissolved organic matter (DOM) on these diazotrophs along a coast to open ocean gradient in the surface waters of a subtropical coral lagoon (New Caledonia). Total N2 fixation (samples not amended with DCMU) ranged from 0.66 to 1.32 nmol N L\u22121 d\u22121. The addition of DCMU reduced N2 fixation by >90%, suggesting that the contribution of heterotrophic diazotrophs to overall N2 fixation activity was minor in this environment. Higher contribution of heterotrophic diazotrophs occurred in stations closer to the shore and coincided with the decreasing lability of DOM, as shown by various colored DOM and fluorescent DOM (CDOM and FDOM) indices. This suggests that heterotrophic N2 fixation is favored when labile DOM compounds are available. We tested the response of diazotrophs (in terms of nifH gene expression and bulk N2 fixation rates) upon the addition of a mix of carbohydrates ('DOC' treatment), amino acids ('DON' treatment), and phosphonates and phosphomonesters ('DOP' treatment). While nifH expression increased significantly in Trichodesmium exposed to the DOC treatment, bulk N2 fixation rates increased significantly only in the DOP treatment. The lack of nifH expression by gammaproteobacteria, in any of the DOM addition treatments applied, questions the contribution of non-cyanobacterial diazotrophs to fixed nitrogen inputs in the New Caledonian lagoon. While the metabolism and ecology of heterotrophic diazotrophs is currently elusive, a deeper understanding of their ecology and relationship with DOM is needed in the light of increased DOM inputs in coastal zones due to anthropogenic pressure.", "which Volumetric N2 fixation rate (upper limit) ?", "1.32", 998.0, 1002.0], ["One challenge in field-based marine microbial ecology is to achieve sufficient spatial resolution to obtain representative information about microbial distributions and biogeochemical processes. The challenges are exacerbated when conducting rate measurements of biological processes due to potential perturbations during sampling and incubation. Here we present the first application of a robotic microlaboratory, the 4 L-submersible incubation device (SID), for conducting in situ measurements of the rates of biological nitrogen (N2) fixation (BNF). The free-drifting autonomous instrument obtains samples from the water column that are incubated in situ after the addition of 15N2 tracer. After each of up to four consecutive incubation experiments, the 4-L sample is filtered and chemically preserved. Measured BNF rates from two deployments of the SID in the oligotrophic North Pacific ranged from 0.8 to 2.8 nmol N L\u22121 day\u22121, values comparable with simultaneous rate measurements obtained using traditional conductivity\u2013temperature\u2013depth (CTD)\u2013rosette sampling followed by on-deck or in situ incubation. Future deployments of the SID will help to better resolve spatial variability of oceanic BNF, particularly in areas where recovery of seawater samples by CTD compromises their integrity, e.g. 
anoxic habitats.", "which Volumetric N2 fixation rate (upper limit) ?", "2.8", 911.0, 914.0], ["We coupled dinitrogen (N2) fixation rate estimates with molecular biological methods to determine the activity and abundance of diazotrophs in coastal waters along the temperate North American Mid\u2010Atlantic continental shelf during multiple seasons and cruises. Volumetric rates of N2 fixation were as high as 49.8 nmol N L\u22121 d\u22121 and areal rates as high as 837.9 \u00b5mol N m\u22122 d\u22121 in our study area. Our results suggest that N2 fixation occurs at high rates in coastal shelf waters that were previously thought to be unimportant sites of N2 fixation and so were excluded from calculations of pelagic marine N2 fixation. Unicellular N2\u2010fixing group A cyanobacteria were the most abundant diazotrophs in the Atlantic coastal waters and their abundance was comparable to, or higher than, that measured in oceanic regimes where they were discovered. High rates of N2 fixation and the high abundance of diazotrophs along the North American Mid\u2010Atlantic continental shelf highlight the need to revise marine N budgets to include coastal N2 fixation. Integrating areal rates of N2 fixation over the continental shelf area between Cape Hatteras and Nova Scotia, the estimated N2 fixation in this temperate shelf system is about 0.02 Tmol N yr\u22121, the amount previously calculated for the entire North Atlantic continental shelf. Additional studies should provide spatially, temporally, and seasonally resolved rate estimates from coastal systems to better constrain N inputs via N2 fixation from the neritic zone.", "which Volumetric N2 fixation rate (upper limit) ?", "49.8", 309.0, 313.0], ["Total carbon dioxide (TCO2) and computations of partial pressure of carbon dioxide (pCO2) had been examined in Northerneastern region of Indian Ocean. It exhibit seasonal and spatial variability. North-south gradients in the pCO2 levels were closely related to gradients in salinity caused by fresh water discharge received from rivers. Eddies observed in this region helped to elevate the nutrients availability and the biological controls by increasing the productivity. These phenomena elevated the carbon dioxide draw down during the fair seasons. Seasonal fluxes estimated from local wind speed and air-sea carbon dioxide difference indicate that during southwest monsoon, the northeastern Indian Ocean acts as a strong sink of carbon dioxide (-20.04 mmol m\u22122 d\u22121). Also during fall intermonsoon the area acts as a weak sink of carbon dioxide (-4.69 mmol m\u22122 d\u22121). During winter monsoon, this region behaves as a weak carbon dioxide source with an average sea to air flux of 4.77 mmol m\u22122 d\u22121. In the northern region, salinity levels in the surface level are high during winter compared to the other two seasons. Northeastern Indian Ocean shows significant intraseasonal variability in carbon dioxide fluxes that are mediated by eddies which provide carbon dioxide and nutrients from the subsurface waters to the mixed layer.", "which CO2 flux (upper limit) ?", "4.77", 989.0, 993.0], ["Nitrogen fixation is an essential process that biologically transforms atmospheric dinitrogen gas to ammonia, therefore compensating for nitrogen losses occurring via denitrification and anammox. 
Currently, inputs and losses of nitrogen to the ocean resulting from these processes are thought to be spatially separated: nitrogen fixation takes place primarily in open ocean environments (mainly through diazotrophic cyanobacteria), whereas nitrogen losses occur in oxygen-depleted intermediate waters and sediments (mostly via denitrifying and anammox bacteria). Here we report on rates of nitrogen fixation obtained during two oceanographic cruises in 2005 and 2007 in the eastern tropical South Pacific (ETSP), a region characterized by the presence of coastal upwelling and a major permanent oxygen minimum zone (OMZ). Our results show significant rates of nitrogen fixation in the water column; however, integrated rates from the surface down to 120 m varied by \u223c30 fold between cruises (7.5\u00b14.6 versus 190\u00b182.3 \u00b5mol m\u22122 d\u22121). Moreover, rates were measured down to 400 m depth in 2007, indicating that the contribution to the integrated rates of the subsurface oxygen-deficient layer was \u223c5 times higher (574\u00b1294 \u00b5mol m\u22122 d\u22121) than the oxic euphotic layer (48\u00b168 \u00b5mol m\u22122 d\u22121). Concurrent molecular measurements detected the dinitrogenase reductase gene nifH in surface and subsurface waters. Phylogenetic analysis of the nifH sequences showed the presence of a diverse diazotrophic community at the time of the highest measured nitrogen fixation rates. Our results thus demonstrate the occurrence of nitrogen fixation in nutrient-rich coastal upwelling systems and, importantly, within the underlying OMZ. They also suggest that nitrogen fixation is a widespread process that can sporadically provide a supplementary source of fixed nitrogen in these regions.", "which Depth integrated N2 fixation rate (lower limit) ?", "7.5", 992.0, 995.0], ["Diazotrophy in the Indian Ocean is poorly understood compared to that in the Atlantic and Pacific Oceans. We first examined the basin\u2010scale community structure of diazotrophs and their nitrogen fixation activity within the euphotic zone during the northeast monsoon period along about 69\u00b0E from 17\u00b0N to 20\u00b0S in the oligotrophic Indian Ocean, where a shallow nitracline (49\u201359 m) prevailed widely and the sea surface temperature (SST) was above 25\u00b0C. Phosphate was detectable at the surface throughout the study area. The dissolved iron concentration and the ratio of iron to nitrate + nitrite at the surface were significantly higher in the Arabian Sea than in the equatorial and southern Indian Ocean. Nitrogen fixation in the Arabian Sea (24.6\u201347.1 \u03bcmolN m\u22122 d\u22121) was also significantly greater than that in the equatorial and southern Indian Ocean (6.27\u201316.6 \u03bcmolN m\u22122 d\u22121), indicating that iron could control diazotrophy in the Indian Ocean. Phylogenetic analysis of nifH showed that most diazotrophs belonged to the Proteobacteria and that cyanobacterial diazotrophs were absent in the study area except in the Arabian Sea. Furthermore, nitrogen fixation was not associated with light intensity throughout the study area. These results are consistent with nitrogen fixation in the Indian Ocean, being largely performed by heterotrophic bacteria and not by cyanobacteria. The low cyanobacterial diazotrophy was attributed to the shallow nitracline, which is rarely observed in the Pacific and Atlantic oligotrophic oceans. 
Because the shallower nitracline favored enhanced upward nitrate flux, the competitive advantage of cyanobacterial diazotrophs over nondiazotrophic phytoplankton was not as significant as it is in other oligotrophic oceans.", "which Upper limit (nitrogen fixation rates) ?", "47.1", 746.0, 750.0], ["We coupled dinitrogen (N2) fixation rate estimates with molecular biological methods to determine the activity and abundance of diazotrophs in coastal waters along the temperate North American Mid\u2010Atlantic continental shelf during multiple seasons and cruises. Volumetric rates of N2 fixation were as high as 49.8 nmol N L\u22121 d\u22121 and areal rates as high as 837.9 \u00b5mol N m\u22122 d\u22121 in our study area. Our results suggest that N2 fixation occurs at high rates in coastal shelf waters that were previously thought to be unimportant sites of N2 fixation and so were excluded from calculations of pelagic marine N2 fixation. Unicellular N2\u2010fixing group A cyanobacteria were the most abundant diazotrophs in the Atlantic coastal waters and their abundance was comparable to, or higher than, that measured in oceanic regimes where they were discovered. High rates of N2 fixation and the high abundance of diazotrophs along the North American Mid\u2010Atlantic continental shelf highlight the need to revise marine N budgets to include coastal N2 fixation. Integrating areal rates of N2 fixation over the continental shelf area between Cape Hatteras and Nova Scotia, the estimated N2 fixation in this temperate shelf system is about 0.02 Tmol N yr\u22121, the amount previously calculated for the entire North Atlantic continental shelf. Additional studies should provide spatially, temporally, and seasonally resolved rate estimates from coastal systems to better constrain N inputs via N2 fixation from the neritic zone.", "which Depth integrated N2 fixation rate (upper limit) ?", "837.9", 356.0, 361.0], ["Abstract. Diazotrophic activity and primary production (PP) were investigated along two transects (Belgica BG2014/14 and GEOVIDE cruises) off the western Iberian Margin and the Bay of Biscay in May 2014. Substantial N2 fixation activity was observed at 8 of the 10 stations sampled, ranging overall from 81 to 384 \u00b5mol N m\u22122 d\u22121 (0.7 to 8.2 nmol N L\u22121 d\u22121), with two sites close to the Iberian Margin situated between 38.8 and 40.7\u2218 N yielding rates reaching up to 1355 and 1533 \u00b5mol N m\u22122 d\u22121. Primary production was relatively lower along the Iberian Margin, with rates ranging from 33 to 59 mmol C m\u22122 d\u22121, while it increased towards the northwest away from the peninsula, reaching as high as 135 mmol C m\u22122 d\u22121. In agreement with the area-averaged Chl a satellite data contemporaneous with our study period, our results revealed that post-bloom conditions prevailed at most sites, while at the northwesternmost station the bloom was still ongoing. When converted to carbon uptake using Redfield stoichiometry, N2 fixation could support 1 % to 3 % of daily PP in the euphotic layer at most sites, except at the two most active sites where this contribution to daily PP could reach up to 25 %. At the two sites where N2 fixation activity was the highest, the prymnesiophyte\u2013symbiont Candidatus Atelocyanobacterium thalassa (UCYN-A) dominated the nifH sequence pool, while the remaining recovered sequences belonged to non-cyanobacterial phylotypes. 
At all the other sites, however, the recovered nifH sequences were exclusively assigned phylogenetically to non-cyanobacterial phylotypes. The intense N2 fixation activities recorded at the time of our study were likely promoted by the availability of phytoplankton-derived organic matter produced during the spring bloom, as evidenced by the significant surface particulate organic carbon concentrations. Also, the presence of excess phosphorus signature in surface waters seemed to contribute to sustaining N2 fixation, particularly at the sites with extreme activities. These results provide a mechanistic understanding of the unexpectedly high N2 fixation in productive waters of the temperate North Atlantic and highlight the importance of N2 fixation for future assessment of the global N inventory.", "which Depth integrated N2 fixation rate (upper limit) ?", "1533", 474.0, 478.0], ["fiber bundle that carried the laser beam and returned the scattered radiation could be placed against surfaces at any desired angle by a deployment mechanism; otherwise, the instrument would need no moving parts. A modem micro-Raman spectrometer with its beam broadened (to .expand the spot to 50-gm diameter) and set for low resolution (7 cm '\u007f in the 100-1400 cm '\u007f region relative to 514.5-nm excitation), was used to simulate the spectra anticipated from a rover instrument. We present spectra for lunar mineral grains, <1 mm soil fines, breccia fragments, and glasses. From frequencies of olivine peaks, we derived sufficiently precise forsteritc contents to correlate the analyzed grains to known rock types and we obtained appropriate forsteritc contents from weak signals above background in soil fines and breccias. Peak positions of pyroxenes were sufficiently well determined to distinguish among orthorhombic, monoclinic, and triclinic (pyroxenoid) structures; additional information can be obtained from pyroxene spectra, but requires further laboratory calibration. Plagioclase provided sharp peaks in soil fines and most breccias even when the glass content was high.", "which Excitation Frequency/Wavelength (nm) ?", "514.5", 387.0, 392.0], ["[1] Soils within the impact crater Goldschmidt have been identified as spectrally distinct from the local highland material. High spatial and spectral resolution data from the Moon Mineralogy Mapper (M3) on the Chandrayaan-1 orbiter are used to examine the character of Goldschmidt crater in detail. Spectral parameters applied to a north polar mosaic of M3 data are used to discern large-scale compositional trends at the northern high latitudes, and spectra from three widely separated regions are compared to spectra from Goldschmidt. The results highlight the compositional diversity of the lunar nearside, in particular, where feldspathic soils with a low-Ca pyroxene component are pervasive, but exclusively feldspathic regions and small areas of basaltic composition are also observed. Additionally, we find that the relative strengths of the diagnostic OH/H2O absorption feature near 3000 nm are correlated with the mineralogy of the host material. On both global and local scales, the strongest hydrous absorptions occur on the more feldspathic surfaces. 
Thus, M3 data suggest that while the feldspathic soils within Goldschmidt crater are enhanced in OH/H2O compared to the relatively mafic nearside polar highlands, their hydration signatures are similar to those observed in the feldspathic highlands on the farside.", "which OH (nm) ?", "3000", 892.0, 896.0], ["A survey of anastomosis groups (AG) of Rhizoctonia spp. associated with potato diseases was conducted in South Africa. In total, 112 Rhizoctonia solani and 19 binucleate Rhizoctonia (BNR) isolates were recovered from diseased potato plants, characterized for AG and pathogenicity. The AG identity of the isolates was confirmed using phylogenetic analysis of the internal transcribed spacer region of ribosomal DNA. R. solani isolates recovered belonged to AG 3-PT, AG 2-2IIIB, AG 4HG-I, AG 4HG-III, and AG 5, while BNR isolates belonged to AG A and AG R, with frequencies of 74, 6.1, 2.3, 2.3, 0.8, 12.2, and 2.3%, respectively. R. solani AG 3-PT was the most predominant AG and occurred in all the potato-growing regions sampled, whereas the other AG occurred in distinct locations. Different AG grouped into distinct clades, with high maximum parsimony and maximum-likelihood bootstrap support for both R. solani and BNR. An experiment under greenhouse conditions with representative isolates from different AG showed differences in aggressiveness between and within AG. Isolates of AG 2-2IIIB, AG 4HG-III, and AG R were the most aggressive in causing stem canker while AG 3-PT, AG 5, and AG R caused black scurf. This is the first comprehensive survey of R. solani and BNR on potato in South Africa using a molecular-based approach. This is the first report of R. solani AG 2-2IIIB and AG 4 HG-I causing stem and stolon canker and BNR AG A and AG R causing stem canker and black scurf on potato in South Africa.", "which AG HG4III ?", "2.3", 584.0, 587.0], ["A survey of anastomosis groups (AG) of Rhizoctonia spp. associated with potato diseases was conducted in South Africa. In total, 112 Rhizoctonia solani and 19 binucleate Rhizoctonia (BNR) isolates were recovered from diseased potato plants, characterized for AG and pathogenicity. The AG identity of the isolates was confirmed using phylogenetic analysis of the internal transcribed spacer region of ribosomal DNA. R. solani isolates recovered belonged to AG 3-PT, AG 2-2IIIB, AG 4HG-I, AG 4HG-III, and AG 5, while BNR isolates belonged to AG A and AG R, with frequencies of 74, 6.1, 2.3, 2.3, 0.8, 12.2, and 2.3%, respectively. R. solani AG 3-PT was the most predominant AG and occurred in all the potato-growing regions sampled, whereas the other AG occurred in distinct locations. Different AG grouped into distinct clades, with high maximum parsimony and maximum-likelihood bootstrap support for both R. solani and BNR. An experiment under greenhouse conditions with representative isolates from different AG showed differences in aggressiveness between and within AG. Isolates of AG 2-2IIIB, AG 4HG-III, and AG R were the most aggressive in causing stem canker while AG 3-PT, AG 5, and AG R caused black scurf. This is the first comprehensive survey of R. solani and BNR on potato in South Africa using a molecular-based approach. This is the first report of R. solani AG 2-2IIIB and AG 4 HG-I causing stem and stolon canker and BNR AG A and AG R causing stem canker and black scurf on potato in South Africa.", "which AG R ?", "2.3", 584.0, 587.0], ["A survey of anastomosis groups (AG) of Rhizoctonia spp. associated with potato diseases was conducted in South Africa. 
In total, 112 Rhizoctonia solani and 19 binucleate Rhizoctonia (BNR) isolates were recovered from diseased potato plants, characterized for AG and pathogenicity. The AG identity of the isolates was confirmed using phylogenetic analysis of the internal transcribed spacer region of ribosomal DNA. R. solani isolates recovered belonged to AG 3-PT, AG 2-2IIIB, AG 4HG-I, AG 4HG-III, and AG 5, while BNR isolates belonged to AG A and AG R, with frequencies of 74, 6.1, 2.3, 2.3, 0.8, 12.2, and 2.3%, respectively. R. solani AG 3-PT was the most predominant AG and occurred in all the potato-growing regions sampled, whereas the other AG occurred in distinct locations. Different AG grouped into distinct clades, with high maximum parsimony and maximum-likelihood bootstrap support for both R. solani and BNR. An experiment under greenhouse conditions with representative isolates from different AG showed differences in aggressiveness between and within AG. Isolates of AG 2-2IIIB, AG 4HG-III, and AG R were the most aggressive in causing stem canker while AG 3-PT, AG 5, and AG R caused black scurf. This is the first comprehensive survey of R. solani and BNR on potato in South Africa using a molecular-based approach. This is the first report of R. solani AG 2-2IIIB and AG 4 HG-I causing stem and stolon canker and BNR AG A and AG R causing stem canker and black scurf on potato in South Africa.", "which AG 2.2IIIB ?", "6.1", 579.0, 582.0], ["A survey of anastomosis groups (AG) of Rhizoctonia spp. associated with potato diseases was conducted in South Africa. In total, 112 Rhizoctonia solani and 19 binucleate Rhizoctonia (BNR) isolates were recovered from diseased potato plants, characterized for AG and pathogenicity. The AG identity of the isolates was confirmed using phylogenetic analysis of the internal transcribed spacer region of ribosomal DNA. R. solani isolates recovered belonged to AG 3-PT, AG 2-2IIIB, AG 4HG-I, AG 4HG-III, and AG 5, while BNR isolates belonged to AG A and AG R, with frequencies of 74, 6.1, 2.3, 2.3, 0.8, 12.2, and 2.3%, respectively. R. solani AG 3-PT was the most predominant AG and occurred in all the potato-growing regions sampled, whereas the other AG occurred in distinct locations. Different AG grouped into distinct clades, with high maximum parsimony and maximum-likelihood bootstrap support for both R. solani and BNR. An experiment under greenhouse conditions with representative isolates from different AG showed differences in aggressiveness between and within AG. Isolates of AG 2-2IIIB, AG 4HG-III, and AG R were the most aggressive in causing stem canker while AG 3-PT, AG 5, and AG R caused black scurf. This is the first comprehensive survey of R. solani and BNR on potato in South Africa using a molecular-based approach. This is the first report of R. solani AG 2-2IIIB and AG 4 HG-I causing stem and stolon canker and BNR AG A and AG R causing stem canker and black scurf on potato in South Africa.", "which AG A ?", "12.2", 599.0, 603.0], ["A novel nanostructured Pd2Ga intermetallic catalyst is presented and compared to elemental Pd and a macroscopic bulk Pd2Ga material concerning physical and chemical properties. The new material was prepared by controlled co-precipitation from a single phase layered double hydroxide precursor or hydrotalcite-like compound, of the composition Pd0.025Mg0.675Ga0.3(OH)2(CO3)0.15 \u2219 m H2O. Upon thermal reduction in hydrogen, bimetallic nanoparticles of an average size less than 10 nm and a porous MgO/MgGa2O4 support are formed. 
HRTEM images confirmed the presence of the intermetallic compound Pd2Ga and are corroborated by XPS investigations which revealed an interaction between Pd and Ga. Due to the relatively high dispersion of the intermetallic compound, the catalytic activity of the sample in the semi-hydrogenation of acetylene was more than five thousand times higher than observed for a bulk Pd2Ga model catalyst. Interestingly, the high selectivity of the model catalysts towards the semi-hydrogenated product of 74% was only slightly lowered to 70% for the nano-structured catalyst, while an elemental Pd reference catalyst showed only a selectivity of around 20% under these testing conditions. This result indicates the structural integrity of the intermetallic compound and the absence of elemental Pd in the nano-sized particles. Thus, this work serves as an example of how the unique properties of an intermetallic compound, well-studied as a model catalyst, can be made accessible as a real high performing materials allowing establishment of structure-performance relationships and other application-related further investigations. The general synthesis approach is assumed to be applicable to several Pd-X intermetallic catalysts for X being elements forming hydrotalcite-like precursors in their ionic form.", "which P (MPa) ?", "0.1", NaN, NaN], ["BACKGROUND Predicting recurrent Clostridium difficile infection (rCDI) remains difficult. METHODS. We employed a retrospective cohort design. Granular electronic medical record (EMR) data had been collected from patients hospitalized at 21 Kaiser Permanente Northern California hospitals. The derivation dataset (2007\u20132013) included data from 9,386 patients who experienced incident CDI (iCDI) and 1,311 who experienced their first CDI recurrences (rCDI). The validation dataset (2014) included data from 1,865 patients who experienced incident CDI and 144 who experienced rCDI. Using multiple techniques, including machine learning, we evaluated more than 150 potential predictors. Our final analyses evaluated 3 models with varying degrees of complexity and 1 previously published model. RESULTS Despite having a large multicenter cohort and access to granular EMR data (eg, vital signs, and laboratory test results), none of the models discriminated well (c statistics, 0.591\u20130.605), had good calibration, or had good explanatory power. CONCLUSIONS Our ability to predict rCDI remains limited. Given currently available EMR technology, improvements in prediction will require incorporating new variables because currently available data elements lack adequate explanatory power. Infect Control Hosp Epidemiol 2017;38:1196\u20131203", "which C Statistic ?", "0.605", 979.0, 984.0], ["OBJECTIVE An estimated 293,300 healthcare-associated cases of Clostridium difficile infection (CDI) occur annually in the United States. To date, research has focused on developing risk prediction models for CDI that work well across institutions. However, this one-size-fits-all approach ignores important hospital-specific factors. We focus on a generalizable method for building facility-specific models. We demonstrate the applicability of the approach using electronic health records (EHR) from the University of Michigan Hospitals (UM) and the Massachusetts General Hospital (MGH). METHODS We utilized EHR data from 191,014 adult admissions to UM and 65,718 adult admissions to MGH. 
We extracted patient demographics, admission details, patient history, and daily hospitalization details, resulting in 4,836 features from patients at UM and 1,837 from patients at MGH. We used L2 regularized logistic regression to learn the models, and we measured the discriminative performance of the models on held-out data from each hospital. RESULTS Using the UM and MGH test data, the models achieved area under the receiver operating characteristic curve (AUROC) values of 0.82 (95% confidence interval [CI], 0.80\u20130.84) and 0.75 (95% CI, 0.73\u20130.78), respectively. Some predictive factors were shared between the 2 models, but many of the top predictive factors differed between facilities. CONCLUSION A data-driven approach to building models for estimating daily patient risk for CDI was used to build institution-specific models at 2 large hospitals with different patient populations and EHR systems. In contrast to traditional approaches that focus on developing models that apply across hospitals, our generalizable approach yields risk-stratification models tailored to an institution. These hospital-specific models allow for earlier and more accurate identification of high-risk patients and better targeting of infection prevention strategies. Infect Control Hosp Epidemiol 2018;39:425\u2013433", "which AUROC ?", "0.82", 1170.0, 1174.0], ["Background Surgical site infection (SSI) surveillance is a key factor in the elaboration of strategies to reduce SSI occurrence and in providing surgeons with appropriate data feedback (risk indicators, clinical prediction rule). Aim To improve the predictive performance of an individual-based SSI risk model by considering a multilevel hierarchical structure. Patients and Methods Data were collected anonymously by the French SSI active surveillance system in 2011. An SSI diagnosis was made by the surgical teams and infection control practitioners following standardized criteria. A random 20% sample comprising 151 hospitals, 502 wards and 62280 patients was used. Three-level (patient, ward, hospital) hierarchical logistic regression models were initially performed. Parameters were estimated using the simulation-based Markov Chain Monte Carlo procedure. Results A total of 623 SSI were diagnosed (1%). The hospital level was discarded from the analysis as it did not contribute to variability of SSI occurrence (p = 0.32). Established individual risk factors (patient history, surgical procedure and hospitalization characteristics) were identified. A significant heterogeneity in SSI occurrence between wards was found (median odds ratio [MOR] 3.59, 95% credibility interval [CI] 3.03 to 4.33) after adjusting for patient-level variables. The effects of the follow-up duration varied between wards (p<10\u22129), with an increased heterogeneity when follow-up was <15 days (MOR 6.92, 95% CI 5.31 to 9.07). The final two-level model significantly improved the discriminative accuracy compared to the single level reference model (p<10\u22129), with an area under the ROC curve of 0.84. Conclusion This study sheds new light on the respective contribution of patient-, ward- and hospital-levels to SSI occurrence and demonstrates the significant impact of the ward level over and above risk factors present at patient level (i.e., independently from patient case-mix).", "which AUROC ?", "0.84", 1681.0, 1685.0], ["Background Sepsis is one of the leading causes of mortality in hospitalized patients. 
Despite this fact, a reliable means of predicting sepsis onset remains elusive. Early and accurate sepsis onset predictions could allow more aggressive and targeted therapy while maintaining antimicrobial stewardship. Existing detection methods suffer from low performance and often require time-consuming laboratory test results. Objective To study and validate a sepsis prediction method, InSight, for the new Sepsis-3 definitions in retrospective data, make predictions using a minimal set of variables from within the electronic health record data, compare the performance of this approach with existing scoring systems, and investigate the effects of data sparsity on InSight performance. Methods We apply InSight, a machine learning classification system that uses multivariable combinations of easily obtained patient data (vitals, peripheral capillary oxygen saturation, Glasgow Coma Score, and age), to predict sepsis using the retrospective Multiparameter Intelligent Monitoring in Intensive Care (MIMIC)-III dataset, restricted to intensive care unit (ICU) patients aged 15 years or more. Following the Sepsis-3 definitions of the sepsis syndrome, we compare the classification performance of InSight versus quick sequential organ failure assessment (qSOFA), modified early warning score (MEWS), systemic inflammatory response syndrome (SIRS), simplified acute physiology score (SAPS) II, and sequential organ failure assessment (SOFA) to determine whether or not patients will become septic at a fixed period of time before onset. We also test the robustness of the InSight system to random deletion of individual input observations. Results In a test dataset with 11.3% sepsis prevalence, InSight produced superior classification performance compared with the alternative scores as measured by area under the receiver operating characteristic curves (AUROC) and area under precision-recall curves (APR). In detection of sepsis onset, InSight attains AUROC = 0.880 (SD 0.006) at onset time and APR = 0.595 (SD 0.016), both of which are superior to the performance attained by SIRS (AUROC: 0.609; APR: 0.160), qSOFA (AUROC: 0.772; APR: 0.277), and MEWS (AUROC: 0.803; APR: 0.327) computed concurrently, as well as SAPS II (AUROC: 0.700; APR: 0.225) and SOFA (AUROC: 0.725; APR: 0.284) computed at admission (P<.001 for all comparisons). Similar results are observed for 1-4 hours preceding sepsis onset. In experiments where approximately 60% of input data are deleted at random, InSight attains an AUROC of 0.781 (SD 0.013) and APR of 0.401 (SD 0.015) at sepsis onset time. Even with 60% of data missing, InSight remains superior to the corresponding SIRS scores (AUROC and APR, P<.001), qSOFA scores (P=.0095; P<.001) and superior to SOFA and SAPS II computed at admission (AUROC and APR, P<.001), where all of these comparison scores (except InSight) are computed without data deletion. Conclusions Despite using little more than vitals, InSight is an effective tool for predicting sepsis onset and performs well even with randomly missing data.", "which AUROC ?", "0.88", NaN, NaN], ["Objective. To develop and validate a risk prediction model that could identify patients at high risk for Clostridium difficile infection (CDI) before they develop disease. Design and Setting. Retrospective cohort study in a tertiary care medical center. Patients. Patients admitted to the hospital for at least 48 hours during the calendar year 2003. Methods. 
Data were collected electronically from the hospital's Medical Informatics database and analyzed with logistic regression to determine variables that best predicted patients' risk for development of CDI. Model discrimination and calibration were calculated. The model was bootstrapped 500 times to validate the predictive accuracy. A receiver operating characteristic curve was calculated to evaluate potential risk cutoffs. Results. A total of 35,350 admitted patients, including 329 with CDI, were studied. Variables in the risk prediction model were age, CDI pressure, times admitted to hospital in the previous 60 days, modified Acute Physiology Score, days of treatment with high-risk antibiotics, whether albumin level was low, admission to an intensive care unit, and receipt of laxatives, gastric acid suppressors, or antimotility drugs. The calibration and discrimination of the model were very good to excellent (C index, 0.88; Brier score, 0.009). Conclusions. The CDI risk prediction model performed well. Further study is needed to determine whether it could be used in a clinical setting to prevent CDI-associated outcomes and reduce costs.", "which AUROC ?", "0.88", 1292.0, 1296.0], ["Abstract Objective: Surveillance of surgical site infections (SSIs) is important for infection control and is usually performed through retrospective manual chart review. The aim of this study was to develop an algorithm for the surveillance of deep SSIs based on clinical variables to enhance efficiency of surveillance. Design: Retrospective cohort study (2012\u20132015). Setting: A Dutch teaching hospital. Participants: We included all consecutive patients who underwent colorectal surgery excluding those with contaminated wounds at the time of surgery. All patients were evaluated for deep SSIs through manual chart review, using the Centers for Disease Control and Prevention (CDC) criteria as the reference standard. Analysis: We used logistic regression modeling to identify predictors that contributed to the estimation of diagnostic probability. Bootstrapping was applied to increase generalizability, followed by assessment of statistical performance and clinical implications. Results: In total, 1,606 patients were included, of whom 129 (8.0%) acquired a deep SSI. The final model included postoperative length of stay, wound class, readmission, reoperation, and 30-day mortality. The model achieved 68.7% specificity and 98.5% sensitivity and an area under the receiver operator characteristic (ROC) curve (AUC) of 0.950 (95% CI, 0.932\u20130.969). Positive and negative predictive values were 21.5% and 99.8%, respectively. Applying the algorithm resulted in a 63.4% reduction in the number of records requiring full manual review (from 1,606 to 590). Conclusions: This 5-parameter model identified 98.5% of patients with a deep SSI. The model can be used to develop semiautomatic surveillance of deep SSIs after colorectal surgery, which may further improve efficiency and quality of SSI surveillance.", "which AUROC ?", "0.95", NaN, NaN], ["This study describes a novel approach to solve the surgical site infection (SSI) classification problem. Feature engineering has traditionally been one of the most important steps in solving complex classification problems, especially in cases with temporal data. The described novel approach is based on abstraction of temporal data recorded in three temporal windows. 
Maximum likelihood L1-norm (lasso) regularization was used in penalized logistic regression to predict the onset of surgical site infection occurrence based on available patient blood testing results up to the day of surgery. Prior knowledge of predictors (blood tests) was integrated in the modelling by introduction of penalty factors depending on blood test prices and an early stopping parameter limiting the maximum number of selected features used in predictive modelling. Finally, solutions resulting in higher interpretability and cost-effectiveness were demonstrated. Using repeated holdout cross-validation, the baseline C-reactive protein (CRP) classifier achieved a mean AUC of 0.801, whereas our best full lasso model achieved a mean AUC of 0.956. Best model testing results were achieved for full lasso model with maximum number of features limited at 20 features with an AUC of 0.967. Presented models showed the potential to not only support domain experts in their decision making but could also prove invaluable for improvement in prediction of SSI occurrence, which may even help setting new guidelines in the field of preoperative SSI prevention and surveillance.", "which AUROC ?", "0.967", 1263.0, 1268.0], ["Background Sepsis is one of the leading causes of mortality in hospitalized patients. Despite this fact, a reliable means of predicting sepsis onset remains elusive. Early and accurate sepsis onset predictions could allow more aggressive and targeted therapy while maintaining antimicrobial stewardship. Existing detection methods suffer from low performance and often require time-consuming laboratory test results. Objective To study and validate a sepsis prediction method, InSight, for the new Sepsis-3 definitions in retrospective data, make predictions using a minimal set of variables from within the electronic health record data, compare the performance of this approach with existing scoring systems, and investigate the effects of data sparsity on InSight performance. Methods We apply InSight, a machine learning classification system that uses multivariable combinations of easily obtained patient data (vitals, peripheral capillary oxygen saturation, Glasgow Coma Score, and age), to predict sepsis using the retrospective Multiparameter Intelligent Monitoring in Intensive Care (MIMIC)-III dataset, restricted to intensive care unit (ICU) patients aged 15 years or more. Following the Sepsis-3 definitions of the sepsis syndrome, we compare the classification performance of InSight versus quick sequential organ failure assessment (qSOFA), modified early warning score (MEWS), systemic inflammatory response syndrome (SIRS), simplified acute physiology score (SAPS) II, and sequential organ failure assessment (SOFA) to determine whether or not patients will become septic at a fixed period of time before onset. We also test the robustness of the InSight system to random deletion of individual input observations. Results In a test dataset with 11.3% sepsis prevalence, InSight produced superior classification performance compared with the alternative scores as measured by area under the receiver operating characteristic curves (AUROC) and area under precision-recall curves (APR). 
In detection of sepsis onset, InSight attains AUROC = 0.880 (SD 0.006) at onset time and APR = 0.595 (SD 0.016), both of which are superior to the performance attained by SIRS (AUROC: 0.609; APR: 0.160), qSOFA (AUROC: 0.772; APR: 0.277), and MEWS (AUROC: 0.803; APR: 0.327) computed concurrently, as well as SAPS II (AUROC: 0.700; APR: 0.225) and SOFA (AUROC: 0.725; APR: 0.284) computed at admission (P<.001 for all comparisons). Similar results are observed for 1-4 hours preceding sepsis onset. In experiments where approximately 60% of input data are deleted at random, InSight attains an AUROC of 0.781 (SD 0.013) and APR of 0.401 (SD 0.015) at sepsis onset time. Even with 60% of data missing, InSight remains superior to the corresponding SIRS scores (AUROC and APR, P<.001), qSOFA scores (P=.0095; P<.001) and superior to SOFA and SAPS II computed at admission (AUROC and APR, P<.001), where all of these comparison scores (except InSight) are computed without data deletion. Conclusions Despite using little more than vitals, InSight is an effective tool for predicting sepsis onset and performs well even with randomly missing data.", "which AUCPR ?", "0.88", NaN, NaN], ["To confirm the occurrence of marine residents of the Japanese eel, Anguilla japonica, which have never entered freshwater ('sea eels'), we measured Sr and Ca concentrations by X-ray electron microprobe analysis of the otoliths of 69 yellow and silver eels, collected from 10 localities in seawater and freshwater habitats around Japan, and classified their migratory histories. Two-dimensional images of the Sr concentration in the otoliths showed that all specimens generally had a high Sr core at the center of their otolith, which corresponded to a period of their leptocephalus and early glass eel stages in the ocean, but there were a variety of different patterns of Sr concentration and concentric rings outside the central core. Line analysis of Sr/Ca ratios along the radius of each otolith showed peaks (ca 15 \u00d7 10\u22123) between the core and out to about 150 \u00b5m (elver mark). The pattern change of the Sr/Ca ratio outside of 150 \u00b5m indicated 3 general categories of migratory history: 'river eels', 'estuarine eels' and 'sea eels'. These 3 categories corresponded to mean values of Sr/Ca ratios of \u2265 6.0 \u00d7 10\u22123 for sea eels, which spent most of their life in the sea and did not enter freshwater, of 2.5 to 6.0 \u00d7 10\u22123 for estuarine eels, which inhabited estuaries or switched between different habitats, and of <2.5 \u00d7 10\u22123 for river eels, which entered and remained in freshwater river habitats after arrival in the estuary. The occurrence of sea eels was 20% of all specimens examined and that of river eels, 23%, while estuarine eels were the most prevalent (57%). The occurrence of sea eels was confirmed at 4 localities in Japanese coastal waters, including offshore islands, a small bay and an estuary. The finding of estuarine eels as an intermediate type, which appear to frequently move between different habitats, and their presence at almost all localities, suggested that A. japonica has a flexible pattern of migration, with an ability to adapt to various habitats and salinities. Thus, anguillid eel migrations into freshwater are clearly not an obligatory migratory pathway, and this form of diadromy should be defined as facultative catadromy, with the sea eel as one of several ecophenotypes. 
Furthermore, this study indicates that eels which utilize the marine environment to various degrees during their juvenile growth phase may make a substantial contribution to the spawning stock each year.", "which Freshwater ?", "2.5", 1212.0, 1215.0], ["To understand the migratory behavior and habitat use of the Japanese eel Anguilla japonica in the Kaoping River, SW Taiwan, the temporal changes of strontium (Sr) and calcium (Ca) contents in otoliths of the eels in combination with age data were examined by wavelength dispersive X-ray spectrometry with an electron probe microanalyzer. Ages of the eel were determined by the annulus mark in their otolith. The pattern of the Sr:Ca ratios in the otoliths, before the elver stage, was similar among all specimens. Post-elver stage Sr:Ca ratios indicated that the eels experienced different salinity histories in their growth phase yellow stage. The mean (\u00b1SD) Sr:Ca ratios in otoliths beyond elver check of the 6 yellow eels from the freshwater middle reach were 1.8 \u00b1 0.2 x 10\u22123 with a maximum value of 3.73 x 10\u22123. Sr:Ca ratios of less than 4 x 10\u22123 were used to discriminate the freshwater from seawater resident eels. Eels from the lower reach of the river were classified into 3 types: (1) freshwater contingents, Sr:Ca ratio <4 x 10\u22123, constituted 14 % of the eels examined; (2) seawater contingent, Sr:Ca ratio 5.1 \u00b1 1.1 x 10\u22123 (5%); and (3) estuarine contingent, Sr:Ca ratios ranged from 0 to 10 x 10\u22123, with migration between freshwater and seawater (81 %). The frequency distribution of the 3 contingents differed between yellow and silver eel stages (0.01 < p < 0.05 for each case) and changed with age of the eel, indicating that most of the eels stayed in the estuary for the first year then migrated to the freshwater until 6 yr old. The eel population in the river system was dominated by the estuarine contingent, probably because the estuarine environment was more stable and had a larger carrying capacity than the freshwater middle reach did, and also due to a preference for brackish water by the growth-phase, yellow eel.", "which Seawater ?", "5.1", 1123.0, 1126.0], ["Abstract Objective: Surveillance of surgical site infections (SSIs) is important for infection control and is usually performed through retrospective manual chart review. The aim of this study was to develop an algorithm for the surveillance of deep SSIs based on clinical variables to enhance efficiency of surveillance. Design: Retrospective cohort study (2012\u20132015). Setting: A Dutch teaching hospital. Participants: We included all consecutive patients who underwent colorectal surgery excluding those with contaminated wounds at the time of surgery. All patients were evaluated for deep SSIs through manual chart review, using the Centers for Disease Control and Prevention (CDC) criteria as the reference standard. Analysis: We used logistic regression modeling to identify predictors that contributed to the estimation of diagnostic probability. Bootstrapping was applied to increase generalizability, followed by assessment of statistical performance and clinical implications. Results: In total, 1,606 patients were included, of whom 129 (8.0%) acquired a deep SSI. The final model included postoperative length of stay, wound class, readmission, reoperation, and 30-day mortality. The model achieved 68.7% specificity and 98.5% sensitivity and an area under the receiver operator characteristic (ROC) curve (AUC) of 0.950 (95% CI, 0.932\u20130.969). 
Positive and negative predictive values were 21.5% and 99.8%, respectively. Applying the algorithm resulted in a 63.4% reduction in the number of records requiring full manual review (from 1,606 to 590). Conclusions: This 5-parameter model identified 98.5% of patients with a deep SSI. The model can be used to develop semiautomatic surveillance of deep SSIs after colorectal surgery, which may further improve efficiency and quality of SSI surveillance.", "which Specificity ?", "68.7", 1210.0, 1214.0], ["Background: Seasonal influenza virus outbreaks cause annual epidemics, mostly during winter in temperate zone countries, especially resulting in increased morbidity and higher mortality in children. In order to conduct rapid screening for influenza in pediatric outpatient units, we developed a pediatric infection screening system with a radar respiration monitor. Methods: The system conducts influenza screening within 10 seconds based on vital signs (i.e., respiration rate monitored using a 24 GHz microwave radar; facial temperature, using a thermopile array; and heart rate, using a pulse photosensor). A support vector machine (SVM) classification method was used to discriminate influenza children from healthy children based on vital signs. To assess the classification performance of the screening system that uses the SVM, we conducted influenza screening for 70 children (i.e., 27 seasonal influenza patients (11 \u00b1 2 years) at a pediatric clinic and 43 healthy control subjects (9 \u00b1 4 years) at a pediatric dental clinic) in the winter of 2013-2014. Results: The screening system using the SVM identified 26 subjects with influenza (22 of the 27 influenza patients and 4 of the 43 healthy subjects). The system discriminated 44 subjects as healthy (5 of the 27 influenza patients and 39 of the 43 healthy subjects), with sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 81.5%, 90.7%, 84.6%, and 88.6%, respectively. Conclusion: The SVM-based screening system achieved classification results for the outpatient children based on vital signs with comparatively high NPV within 10 seconds. At pediatric clinics and hospitals, our system seems potentially useful in the first screening step for infections in the future.", "which Specificity ?", "90.7", 1439.0, 1443.0], ["Background: Seasonal influenza virus outbreaks cause annual epidemics, mostly during winter in temperate zone countries, especially resulting in increased morbidity and higher mortality in children. In order to conduct rapid screening for influenza in pediatric outpatient units, we developed a pediatric infection screening system with a radar respiration monitor. Methods: The system conducts influenza screening within 10 seconds based on vital signs (i.e., respiration rate monitored using a 24 GHz microwave radar; facial temperature, using a thermopile array; and heart rate, using a pulse photosensor). A support vector machine (SVM) classification method was used to discriminate influenza children from healthy children based on vital signs. To assess the classification performance of the screening system that uses the SVM, we conducted influenza screening for 70 children (i.e., 27 seasonal influenza patients (11 \u00b1 2 years) at a pediatric clinic and 43 healthy control subjects (9 \u00b1 4 years) at a pediatric dental clinic) in the winter of 2013-2014. 
Results: The screening system using the SVM identified 26 subjects with influenza (22 of the 27 influenza patients and 4 of the 43 healthy subjects). The system discriminated 44 subjects as healthy (5 of the 27 influenza patients and 39 of the 43 healthy subjects), with sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 81.5%, 90.7%, 84.6%, and 88.6%, respectively. Conclusion: The SVM-based screening system achieved classification results for the outpatient children based on vital signs with comparatively high NPV within 10 seconds. At pediatric clinics and hospitals, our system seems potentially useful in the first screening step for infections in the future.", "which Precision ?", "84.6", 1446.0, 1450.0], ["Background: Seasonal influenza virus outbreaks cause annual epidemics, mostly during winter in temperate zone countries, especially resulting in increased morbidity and higher mortality in children. In order to conduct rapid screening for influenza in pediatric outpatient units, we developed a pediatric infection screening system with a radar respiration monitor. Methods: The system conducts influenza screening within 10 seconds based on vital signs (i.e., respiration rate monitored using a 24 GHz microwave radar; facial temperature, using a thermopile array; and heart rate, using a pulse photosensor). A support vector machine (SVM) classification method was used to discriminate influenza children from healthy children based on vital signs. To assess the classification performance of the screening system that uses the SVM, we conducted influenza screening for 70 children (i.e., 27 seasonal influenza patients (11 \u00b1 2 years) at a pediatric clinic and 43 healthy control subjects (9 \u00b1 4 years) at a pediatric dental clinic) in the winter of 2013-2014. Results: The screening system using the SVM identified 26 subjects with influenza (22 of the 27 influenza patients and 4 of the 43 healthy subjects). The system discriminated 44 subjects as healthy (5 of the 27 influenza patients and 39 of the 43 healthy subjects), with sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 81.5%, 90.7%, 84.6%, and 88.6%, respectively. Conclusion: The SVM-based screening system achieved classification results for the outpatient children based on vital signs with comparatively high NPV within 10 seconds. At pediatric clinics and hospitals, our system seems potentially useful in the first screening step for infections in the future.", "which NPV ?", "88.6", 1457.0, 1461.0], ["This paper introduces a class of correlation filters called average of synthetic exact filters (ASEF). For ASEF, the correlation output is completely specified for each training image. This is in marked contrast to prior methods such as synthetic discriminant functions (SDFs) which only specify a single output value per training image. Advantages of ASEF training include: insensitivity to over-fitting, greater flexibility with regard to training images, and more robust behavior in the presence of structured backgrounds. The theory and design of ASEF filters is presented using eye localization on the FERET database as an example task. ASEF is compared to other popular correlation filters including SDF, MACE, OTF, and UMACE, and with other eye localization methods including Gabor Jets and the OpenCV cascade classifier. 
ASEF is shown to outperform all these methods, locating the eye to within the radius of the iris approximately 98.5% of the time.", "which Accuracy (%) deyeo0: ?", "98.5", 940.0, 944.0], ["This study utilizes standard- and nested-EKC models to investigate the income-environment relation for Nigeria, between 1960 and 2008. The results from the standard-EKC model provides weak evidence of an inverted-U shaped relationship with turning point (T.P) around $280.84, while the nested model presents strong evidence of an N-shaped relationship between income and emissions in Nigeria, with a T.P around $237.23. Tests for structural breaks caused by the 1973 oil price shocks and 1986 Structural Adjustment are not rejected, implying that these factors have not significantly affected the income-environment relationship in Nigeria. Further, results from the rolling interdecadal analysis shows that the observed relationship is stable and insensitive to the sample interval chosen. Overall, our findings imply that economic development is compatible with environmental improvements in Nigeria. However, tighter and concentrated environmental policy regimes will be required to ensure that the relationship is maintained around the first two-strands of the N-shape", "which EKC Turnaround point(s) 2 ?", "280.84", 268.0, 274.0], ["(1) Background: Although bullying victimization is a phenomenon that is increasingly being recognized as a public health and mental health concern in many countries, research attention on this aspect of youth violence in low- and middle-income countries, especially sub-Saharan Africa, is minimal. The current study examined the national prevalence of bullying victimization and its correlates among in-school adolescents in Ghana. (2) Methods: A sample of 1342 in-school adolescents in Ghana (55.2% males; 44.8% females) aged 12\u201318 was drawn from the 2012 Global School-based Health Survey (GSHS) for the analysis. Self-reported bullying victimization \u201cduring the last 30 days, on how many days were you bullied?\u201d was used as the central criterion variable. Three-level analyses using descriptive, Pearson chi-square, and binary logistic regression were performed. Results of the regression analysis were presented as adjusted odds ratios (aOR) at 95% confidence intervals (CIs), with a statistical significance pegged at p < 0.05. (3) Results: Bullying victimization was prevalent among 41.3% of the in-school adolescents. Pattern of results indicates that adolescents in SHS 3 [aOR = 0.34, 95% CI = 0.25, 0.47] and SHS 4 [aOR = 0.30, 95% CI = 0.21, 0.44] were less likely to be victims of bullying. Adolescents who had sustained injury [aOR = 2.11, 95% CI = 1.63, 2.73] were more likely to be bullied compared to those who had not sustained any injury. The odds of bullying victimization were higher among adolescents who had engaged in physical fight [aOR = 1.90, 95% CI = 1.42, 2.25] and those who had been physically attacked [aOR = 1.73, 95% CI = 1.32, 2.27]. Similarly, adolescents who felt lonely were more likely to report being bullied [aOR = 1.50, 95% CI = 1.08, 2.08] as against those who did not feel lonely. Additionally, adolescents with a history of suicide attempts were more likely to be bullied [aOR = 1.63, 95% CI = 1.11, 2.38] and those who used marijuana had higher odds of bullying victimization [aOR = 3.36, 95% CI = 1.10, 10.24]. 
(4) Conclusions: Current findings require the need for policy makers and school authorities in Ghana to design and implement policies and anti-bullying interventions (e.g., Social Emotional Learning (SEL), Emotive Behavioral Education (REBE), Marijuana Cessation Therapy (MCT)) focused on addressing behavioral issues, mental health and substance abuse among in-school adolescents.", "which Prevalence, % ?", "41.3", 1089.0, 1093.0], ["Introduction We aimed to examine if severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) polymerase chain reaction (PCR) cycle quantification (Cq) value, as a surrogate for SARS-CoV-2 viral load, could predict hospitalisation and disease severity in adult patients with coronavirus disease 2019 (COVID-19). Methods We performed a prospective cohort study of adult patients with PCR positive SARS-CoV-2 airway samples including all out-patients registered at the Department of Infectious Diseases, Odense University Hospital (OUH) March 9-March 17 2020, and all hospitalised patients at OUH March 10-April 21 2020. To identify associations between Cq-values and a) hospital admission and b) a severe outcome, logistic regression analyses were used to compute odds ratios (OR) and 95% Confidence Intervals (CI), adjusting for confounding factors (aOR). Results We included 87 non-hospitalised and 82 hospitalised patients. The median baseline Cq-value was 25.5 (interquartile range 22.3\u201329.0). We found a significant association between increasing Cq-value and hospital-admission in univariate analysis (OR 1.11, 95% CI 1.04\u20131.19). However, this was due to an association between time from symptom onset to testing and Cq-values, and no association was found in the adjusted analysis (aOR 1.08, 95% CI 0.94\u20131.23). In hospitalised patients, a significant association between lower Cq-values and higher risk of severe disease was found (aOR 0.89, 95% CI 0.81\u20130.98), independent of timing of testing. Conclusions SARS-CoV-2 PCR Cq-values in outpatients correlated with time after symptom onset, but was not a predictor of hospitalisation. However, in hospitalised patients lower Cq-values were associated with higher risk of severe disease.", "which Lower confidence limit ?", "0.81", 1458.0, 1462.0], ["Introduction We aimed to examine if severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) polymerase chain reaction (PCR) cycle quantification (Cq) value, as a surrogate for SARS-CoV-2 viral load, could predict hospitalisation and disease severity in adult patients with coronavirus disease 2019 (COVID-19). Methods We performed a prospective cohort study of adult patients with PCR positive SARS-CoV-2 airway samples including all out-patients registered at the Department of Infectious Diseases, Odense University Hospital (OUH) March 9-March 17 2020, and all hospitalised patients at OUH March 10-April 21 2020. To identify associations between Cq-values and a) hospital admission and b) a severe outcome, logistic regression analyses were used to compute odds ratios (OR) and 95% Confidence Intervals (CI), adjusting for confounding factors (aOR). Results We included 87 non-hospitalised and 82 hospitalised patients. The median baseline Cq-value was 25.5 (interquartile range 22.3\u201329.0). We found a significant association between increasing Cq-value and hospital-admission in univariate analysis (OR 1.11, 95% CI 1.04\u20131.19). 
However, this was due to an association between time from symptom onset to testing and Cq-values, and no association was found in the adjusted analysis (aOR 1.08, 95% CI 0.94\u20131.23). In hospitalised patients, a significant association between lower Cq-values and higher risk of severe disease was found (aOR 0.89, 95% CI 0.81\u20130.98), independent of timing of testing. Conclusions SARS-CoV-2 PCR Cq-values in outpatients correlated with time after symptom onset, but was not a predictor of hospitalisation. However, in hospitalised patients lower Cq-values were associated with higher risk of severe disease.", "which Lower confidence limit ?", "0.94", 1308.0, 1312.0], ["Introduction We aimed to examine if severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) polymerase chain reaction (PCR) cycle quantification (Cq) value, as a surrogate for SARS-CoV-2 viral load, could predict hospitalisation and disease severity in adult patients with coronavirus disease 2019 (COVID-19). Methods We performed a prospective cohort study of adult patients with PCR positive SARS-CoV-2 airway samples including all out-patients registered at the Department of Infectious Diseases, Odense University Hospital (OUH) March 9-March 17 2020, and all hospitalised patients at OUH March 10-April 21 2020. To identify associations between Cq-values and a) hospital admission and b) a severe outcome, logistic regression analyses were used to compute odds ratios (OR) and 95% Confidence Intervals (CI), adjusting for confounding factors (aOR). Results We included 87 non-hospitalised and 82 hospitalised patients. The median baseline Cq-value was 25.5 (interquartile range 22.3\u201329.0). We found a significant association between increasing Cq-value and hospital-admission in univariate analysis (OR 1.11, 95% CI 1.04\u20131.19). However, this was due to an association between time from symptom onset to testing and Cq-values, and no association was found in the adjusted analysis (aOR 1.08, 95% CI 0.94\u20131.23). In hospitalised patients, a significant association between lower Cq-values and higher risk of severe disease was found (aOR 0.89, 95% CI 0.81\u20130.98), independent of timing of testing. Conclusions SARS-CoV-2 PCR Cq-values in outpatients correlated with time after symptom onset, but was not a predictor of hospitalisation. However, in hospitalised patients lower Cq-values were associated with higher risk of severe disease.", "which higher confidence limit ?", "0.98", 1463.0, 1467.0], ["We conducted a comparative study of COVID-19 epidemic in three different settings: mainland China, the Guangdong province of China and South Korea, by formulating two disease transmission dynamics models incorporating epidemic characteristics and setting-specific interventions, and fitting the models to multi-source data to identify initial and effective reproduction numbers and evaluate effectiveness of interventions. We estimated the initial basic reproduction number for South Korea, the Guangdong province and mainland China as 2.6 (95% confidence interval (CI): (2.5, 2.7)), 3.0 (95%CI: (2.6, 3.3)) and 3.8 (95%CI: (3.5,4.2)), respectively, given a serial interval with mean of 5 days with standard deviation of 3 days. 
We found that the effective reproduction number for the Guangdong province and mainland China has fallen below the threshold 1 since February 8th and 18th respectively, while the effective reproduction number for South Korea remains high, suggesting that the interventions implemented need to be enhanced in order to halt further infections. We also project the epidemic trend in South Korea under different scenarios where a portion or the entirety of the integrated package of interventions in China is used. We show that a coherent and integrated approach with stringent public health interventions is the key to the success of containing the epidemic in China and specially its provinces outside its epicenter, and we show that this approach can also be effective to mitigate the burden of the COVID-19 epidemic in South Korea. The experience of outbreak control in mainland China should be a guiding reference for the rest of the world including South Korea.", "which R0 estimates (average) ?", "2.6", 536.0, 539.0], ["Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. Shorter serial intervals lead to lower estimates of R0, which suggest that half of all secondary infections should be prevented to control spread.", "which R0 estimates (average) ?", "1.87", 1184.0, 1188.0], ["Background: On January 23, 2020, a quarantine was imposed on travel in and out of Wuhan, where the 2019 novel coronavirus (2019-nCoV) outbreak originated from. Previous analyses estimated the basic epidemiological parameters using symptom onset dates of the confirmed cases in Wuhan and outside China. Methods: We obtained information on the 46 coronavirus cases who traveled from Wuhan before January 23 and have been subsequently confirmed in Hong Kong, Japan, Korea, Macau, Singapore, and Taiwan as of February 5, 2020. Most cases have detailed travel history and disease progress. Compared to previous analyses, an important distinction is that we used this data to informatively simulate the infection time of each case using the symptom onset time, previously reported incubation interval, and travel history. 
We then fitted a simple exponential growth model with adjustment for the January 23 travel ban to the distribution of the simulated infection time. We used a Bayesian analysis with diffuse priors to quantify the uncertainty of the estimated epidemiological parameters. We performed sensitivity analysis to different choices of incubation interval and the hyperparameters in the prior specification. Results: We found that our model provides good fit to the distribution of the infection time. Assuming the travel rate to the selected countries and regions is constant over the study period, we found that the epidemic was doubling in size every 2.9 days (95% credible interval [CrI], 2 days\u20134.1 days). Using previously reported serial interval for 2019-nCoV, the estimated basic reproduction number is 5.7 (95% CrI, 3.4\u20139.2). The estimates did not change substantially if we assumed the travel rate doubled in the last 3 days before January 23, when we used previously reported incubation interval for severe acute respiratory syndrome (SARS), or when we changed the hyperparameters in our prior specification. Conclusions: Our estimated epidemiological parameters are higher than an earlier report using confirmed cases in Wuhan. This indicates the 2019-nCoV could have been spreading faster than previous estimates.", "which R0 estimates (average) ?", "5.7", 1619.0, 1622.0], ["Background: In December 2019, an outbreak of coronavirus disease (COVID-19) was identified in Wuhan, China and, later on, detected in other parts of China. Our aim is to evaluate the effectiveness of the evolution of interventions and self-protection measures, estimate the risk of partial lifting control measures and predict the epidemic trend of the virus in mainland China excluding Hubei province based on the published data and a novel mathematical model. Methods: A novel COVID-19 transmission dynamic model incorporating the intervention measures implemented in China is proposed. We parameterize the model by using the Markov Chain Monte Carlo (MCMC) method and estimate the control reproduction number Rc, as well as the effective daily reproduction ratio Re(t), of the disease transmission in mainland China excluding Hubei province. Results: The estimation outcomes indicate that the control reproduction number is 3.36 (95% CI 3.20-3.64) and Re(t) has dropped below 1 since January 31st, 2020, which implies that the containment strategies implemented by the Chinese government in mainland China excluding Hubei province are indeed effective and magnificently suppressed COVID-19 transmission. Moreover, our results show that relieving personal protection too early may lead to the spread of disease for a longer time and more people would be infected, and may even cause epidemic or outbreak again. By calculating the effective reproduction ratio, we proved that the contact rate should be kept at least less than 30% of the normal level by April, 2020. Conclusions: To ensure the epidemic ending rapidly, it is necessary to maintain the current integrated restrict interventions and self-protection measures, including travel restriction, quarantine of entry, contact tracing followed by quarantine and isolation and reduction of contact, like wearing masks, etc. People should be fully aware of the real-time epidemic situation and keep sufficient personal protection until April. 
If all the above conditions are met, the outbreak is expected to be ended by April in mainland China apart from Hubei province.", "which R0 estimates (average) ?", "3.36", 926.0, 930.0], ["We conducted a comparative study of COVID-19 epidemic in three different settings: mainland China, the Guangdong province of China and South Korea, by formulating two disease transmission dynamics models incorporating epidemic characteristics and setting-specific interventions, and fitting the models to multi-source data to identify initial and effective reproduction numbers and evaluate effectiveness of interventions. We estimated the initial basic reproduction number for South Korea, the Guangdong province and mainland China as 2.6 (95% confidence interval (CI): (2.5, 2.7)), 3.0 (95%CI: (2.6, 3.3)) and 3.8 (95%CI: (3.5,4.2)), respectively, given a serial interval with mean of 5 days with standard deviation of 3 days. We found that the effective reproduction number for the Guangdong province and mainland China has fallen below the threshold 1 since February 8th and 18th respectively, while the effective reproduction number for South Korea remains high, suggesting that the interventions implemented need to be enhanced in order to halt further infections. We also project the epidemic trend in South Korea under different scenarios where a portion or the entirety of the integrated package of interventions in China is used. We show that a coherent and integrated approach with stringent public health interventions is the key to the success of containing the epidemic in China and specially its provinces outside its epicenter, and we show that this approach can also be effective to mitigate the burden of the COVID-19 epidemic in South Korea. The experience of outbreak control in mainland China should be a guiding reference for the rest of the world including South Korea.", "which R0 estimates (average) ?", "3.8", 612.0, 615.0], ["The exported cases of 2019 novel coronavirus (COVID-19) infection that were confirmed outside China provide an opportunity to estimate the cumulative incidence and confirmed case fatality risk (cCFR) in mainland China. Knowledge of the cCFR is critical to characterize the severity and understand the pandemic potential of COVID-19 in the early stage of the epidemic. Using the exponential growth rate of the incidence, the present study statistically estimated the cCFR and the basic reproduction number\u2014the average number of secondary cases generated by a single primary case in a na\u00efve population. We modeled epidemic growth either from a single index case with illness onset on 8 December 2019 (Scenario 1), or using the growth rate fitted along with the other parameters (Scenario 2) based on data from 20 exported cases reported by 24 January 2020. The cumulative incidence in China by 24 January was estimated at 6924 cases (95% confidence interval [CI]: 4885, 9211) and 19,289 cases (95% CI: 10,901, 30,158), respectively. The latest estimated values of the cCFR were 5.3% (95% CI: 3.5%, 7.5%) for Scenario 1 and 8.4% (95% CI: 5.3%, 12.3%) for Scenario 2. The basic reproduction number was estimated to be 2.1 (95% CI: 2.0, 2.2) and 3.2 (95% CI: 2.7, 3.7) for Scenarios 1 and 2, respectively. Based on these results, we argued that the current COVID-19 epidemic has a substantial potential for causing a pandemic. 
The proposed approach provides insights in early risk assessment using publicly available data.", "which R0 estimates (average) ?", "3.2", 1241.0, 1244.0], ["The novel coronavirus (2019-nCoV) is a recently emerged human pathogen that has spread widely since January 2020. Initially, the basic reproductive number, R0, was estimated to be 2.2 to 2.7. Here we provide a new estimate of this quantity. We collected extensive individual case reports and estimated key epidemiology parameters, including the incubation period. Integrating these estimates and high-resolution real-time human travel and infection data with mathematical models, we estimated that the number of infected individuals during early epidemic double every 2.4 days, and the R0 value is likely to be between 4.7 and 6.6. We further show that quarantine and contact tracing of symptomatic individuals alone may not be effective and early, strong control measures are needed to stop transmission of the virus.", "which R0 estimates (average) ?", "4.7", 619.0, 622.0], ["Background: In December 2019, an outbreak of respiratory illness caused by a novel coronavirus (2019-nCoV) emerged in Wuhan, China and has swiftly spread to other parts of China and a number of foreign countries. The 2019-nCoV cases might have been under-reported roughly from 1 to 15 January 2020, and thus we estimated the number of unreported cases and the basic reproduction number, R0, of 2019-nCoV. Methods: We modelled the epidemic curve of 2019-nCoV cases, in mainland China from 1 December 2019 to 24 January 2020 through the exponential growth. The number of unreported cases was determined by the maximum likelihood estimation. We used the serial intervals (SI) of infection caused by two other well-known coronaviruses (CoV), Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS) CoVs, as approximations of the unknown SI for 2019-nCoV to estimate R0. Results: We confirmed that the initial growth phase followed an exponential growth pattern. The under-reporting was likely to have resulted in 469 (95% CI: 403\u2013540) unreported cases from 1 to 15 January 2020. The reporting rate after 17 January 2020 was likely to have increased 21-fold (95% CI: 18\u201325) in comparison to the situation from 1 to 17 January 2020 on average. We estimated the R0 of 2019-nCoV at 2.56 (95% CI: 2.49\u20132.63). Conclusion: The under-reporting was likely to have occurred during the first half of January 2020 and should be considered in future investigation.", "which R0 estimates (average) ?", "2.56", 1303.0, 1307.0], ["Background The 2019 novel Coronavirus (COVID-19) emerged in Wuhan, China in December 2019 and has been spreading rapidly in China. Decisions about its pandemic threat and the appropriate level of public health response depend heavily on estimates of its basic reproduction number and assessments of interventions conducted in the early stages of the epidemic. Methods We conducted a mathematical modeling study using five independent methods to assess the basic reproduction number (R0) of COVID-19, using data on confirmed cases obtained from the China National Health Commission for the period 10th January to 8th February. We analyzed the data for the period before the closure of Wuhan city (10th January to 23rd January) and the post-closure period (23rd January to 8th February) and for the whole period, to assess both the epidemic risk of the virus and the effectiveness of the closure of Wuhan city on spread of COVID-19. 
Findings Before the closure of Wuhan city the basic reproduction number of COVID-19 was 4.38 (95% CI: 3.63-5.13), dropping to 3.41 (95% CI: 3.16-3.65) after the closure of Wuhan city. Over the entire epidemic period COVID-19 had a basic reproduction number of 3.39 (95% CI: 3.09-3.70), indicating it has a very high transmissibility. Interpretation COVID-19 is a highly transmissible virus with a very high risk of epidemic outbreak once it emerges in metropolitan areas. The closure of Wuhan city was effective in reducing the severity of the epidemic, but even after closure of the city and the subsequent expansion of that closure to other parts of Hubei the virus remained extremely infectious. Emergency planners in other cities should consider this high infectiousness when considering responses to this virus.", "which R0 estimates (average) ?", "3.39", 1191.0, 1195.0], ["Background The 2019 novel Coronavirus (COVID-19) emerged in Wuhan, China in December 2019 and has been spreading rapidly in China. Decisions about its pandemic threat and the appropriate level of public health response depend heavily on estimates of its basic reproduction number and assessments of interventions conducted in the early stages of the epidemic. Methods We conducted a mathematical modeling study using five independent methods to assess the basic reproduction number (R0) of COVID-19, using data on confirmed cases obtained from the China National Health Commission for the period 10th January to 8th February. We analyzed the data for the period before the closure of Wuhan city (10th January to 23rd January) and the post-closure period (23rd January to 8th February) and for the whole period, to assess both the epidemic risk of the virus and the effectiveness of the closure of Wuhan city on spread of COVID-19. Findings Before the closure of Wuhan city the basic reproduction number of COVID-19 was 4.38 (95% CI: 3.63-5.13), dropping to 3.41 (95% CI: 3.16-3.65) after the closure of Wuhan city. Over the entire epidemic period COVID-19 had a basic reproduction number of 3.39 (95% CI: 3.09-3.70), indicating it has a very high transmissibility. Interpretation COVID-19 is a highly transmissible virus with a very high risk of epidemic outbreak once it emerges in metropolitan areas. The closure of Wuhan city was effective in reducing the severity of the epidemic, but even after closure of the city and the subsequent expansion of that closure to other parts of Hubei the virus remained extremely infectious. Emergency planners in other cities should consider this high infectiousness when considering responses to this virus.", "which R0 estimates (average) ?", "4.38", 1019.0, 1023.0], ["ABSTRACT On December 31, 2019, the World Health Organization was notified about a cluster of pneumonia of unknown aetiology in the city of Wuhan, China. Chinese authorities later identified a new coronavirus (2019-nCoV) as the causative agent of the outbreak. As of January 23, 2020, 655 cases have been confirmed in China and several other countries. Understanding the transmission characteristics and the potential for sustained human-to-human transmission of 2019-nCoV is critically important for coordinating current screening and containment strategies, and determining whether the outbreak constitutes a public health emergency of international concern (PHEIC). We performed stochastic simulations of early outbreak trajectories that are consistent with the epidemiological findings to date. 
We found the basic reproduction number, R0, to be around 2.2 (90% high density interval 1.4\u20143.8), indicating the potential for sustained human-to-human transmission. Transmission characteristics appear to be of a similar magnitude to severe acute respiratory syndrome-related coronavirus (SARS-CoV) and the 1918 pandemic influenza. These findings underline the importance of heightened screening, surveillance and control efforts, particularly at airports and other travel hubs, in order to prevent further international spread of 2019-nCoV.", "which R0\u00a0estimates (average) ?", "2.2", 857.0, 860.0], ["Abstract Background The initial cases of novel coronavirus (2019-nCoV)\u2013infected pneumonia (NCIP) occurred in Wuhan, Hubei Province, China, in December 2019 and January 2020. We analyzed data on the first 425 confirmed cases in Wuhan to determine the epidemiologic characteristics of NCIP. Methods We collected information on demographic characteristics, exposure history, and illness timelines of laboratory-confirmed cases of NCIP that had been reported by January 22, 2020. We described characteristics of the cases and estimated the key epidemiologic time-delay distributions. In the early period of exponential growth, we estimated the epidemic doubling time and the basic reproductive number. Results Among the first 425 patients with confirmed NCIP, the median age was 59 years and 56% were male. The majority of cases (55%) with onset before January 1, 2020, were linked to the Huanan Seafood Wholesale Market, as compared with 8.6% of the subsequent cases. The mean incubation period was 5.2 days (95% confidence interval [CI], 4.1 to 7.0), with the 95th percentile of the distribution at 12.5 days. In its early stages, the epidemic doubled in size every 7.4 days. With a mean serial interval of 7.5 days (95% CI, 5.3 to 19), the basic reproductive number was estimated to be 2.2 (95% CI, 1.4 to 3.9). Conclusions On the basis of this information, there is evidence that human-to-human transmission has occurred among close contacts since the middle of December 2019. Considerable efforts to reduce transmission will be required to control outbreaks if similar dynamics apply elsewhere. Measures to prevent or reduce transmission should be implemented in populations at risk. (Funded by the Ministry of Science and Technology of China and others.)", "which R0\u00a0estimates (average) ?", "2.2", 1285.0, 1288.0], ["Abstract Backgrounds An ongoing outbreak of a novel coronavirus (2019-nCoV) pneumonia hit a major city of China, Wuhan, December 2019 and subsequently reached other provinces/regions of China and countries. We present estimates of the basic reproduction number, R0, of 2019-nCoV in the early phase of the outbreak. Methods Accounting for the impact of the variations in disease reporting rate, we modelled the epidemic curve of 2019-nCoV cases time series, in mainland China from January 10 to January 24, 2020, through the exponential growth. With the estimated intrinsic growth rate (\u03b3), we estimated R0 by using the serial intervals (SI) of two other well-known coronavirus diseases, MERS and SARS, as approximations for the true unknown SI. Findings The early outbreak data largely follows the exponential growth. We estimated that the mean R0 ranges from 2.24 (95%CI: 1.96-2.55) to 3.58 (95%CI: 2.89-4.39) associated with 8-fold to 2-fold increase in the reporting rate. We demonstrated that changes in reporting rate substantially affect estimates of R0. 
Conclusion The mean estimate of R0 for the 2019-nCoV ranges from 2.24 to 3.58, and significantly larger than 1. Our findings indicate the potential of 2019-nCoV to cause outbreaks.", "which R0\u00a0estimates (average) ?", "2.24", 866.0, 870.0], ["Since first identified, the epidemic scale of the recently emerged novel coronavirus (2019-nCoV) in Wuhan, China, has increased rapidly, with cases arising across China and other countries and regions. Using a transmission model, we estimate a basic reproductive number of 3.11 (95%CI, 2.39-4.13); 58-76% of transmissions must be prevented to stop increasing; Wuhan case ascertainment of 5.0% (3.6-7.4); 21022 (11090-33490) total infections in Wuhan 1 to 22 January.", "which R0\u00a0estimates (average) ?", "3.11", 273.0, 277.0], ["Abstract Backgrounds An ongoing outbreak of a novel coronavirus (2019-nCoV) pneumonia hit a major city of China, Wuhan, December 2019 and subsequently reached other provinces/regions of China and countries. We present estimates of the basic reproduction number, R0, of 2019-nCoV in the early phase of the outbreak. Methods Accounting for the impact of the variations in disease reporting rate, we modelled the epidemic curve of 2019-nCoV cases time series, in mainland China from January 10 to January 24, 2020, through the exponential growth. With the estimated intrinsic growth rate (\u03b3), we estimated R0 by using the serial intervals (SI) of two other well-known coronavirus diseases, MERS and SARS, as approximations for the true unknown SI. Findings The early outbreak data largely follows the exponential growth. We estimated that the mean R0 ranges from 2.24 (95%CI: 1.96-2.55) to 3.58 (95%CI: 2.89-4.39) associated with 8-fold to 2-fold increase in the reporting rate. We demonstrated that changes in reporting rate substantially affect estimates of R0. Conclusion The mean estimate of R0 for the 2019-nCoV ranges from 2.24 to 3.58, and significantly larger than 1. Our findings indicate the potential of 2019-nCoV to cause outbreaks.", "which R0\u00a0estimates (average) ?", "3.58", 893.0, 897.0], ["Background: Since the emergence of the first pneumonia cases in Wuhan, China, the novel coronavirus (2019-nCov) infection has been quickly spreading out to other provinces and neighbouring countries. Estimation of the basic reproduction number by means of mathematical modelling can be helpful for determining the potential and severity of an outbreak, and providing critical information for identifying the type of disease interventions and intensity. Methods: A deterministic compartmental model was devised based on the clinical progression of the disease, epidemiological status of the individuals, and the intervention measures. Findings: The estimation results based on likelihood and model analysis reveal that the control reproduction number may be as high as 6.47 (95% CI 5.71-7.23). Sensitivity analyses reveal that interventions, such as intensive contact tracing followed by quarantine and isolation, can effectively reduce the control reproduction number and transmission risk, with the effect of travel restriction of Wuhan on 2019-nCov infection in Beijing being almost equivalent to increasing quarantine by a 100-thousand baseline value. Interpretation: It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities can contribute to the prevention and control of the 2019-nCov infection, and how long they should be maintained. 
Under the most restrictive measures, the outbreak is expected to peak within two weeks (since January 23rd 2020) with significant low peak value. With travel restriction (no imported exposed individuals to Beijing), the number of infected individuals in 7 days will decrease by 91.14% in Beijing, compared with the scenario of no travel restriction.", "which R0\u00a0estimates (average) ?", "6.47", 786.0, 790.0], ["Since the emergence of the first cases in Wuhan, China, the novel coronavirus (2019-nCoV) infection has been quickly spreading out to other provinces and neighboring countries. Estimation of the basic reproduction number by means of mathematical modeling can be helpful for determining the potential and severity of an outbreak and providing critical information for identifying the type of disease interventions and intensity. A deterministic compartmental model was devised based on the clinical progression of the disease, epidemiological status of the individuals, and intervention measures. The estimations based on likelihood and model analysis show that the control reproduction number may be as high as 6.47 (95% CI 5.71\u20137.23). 
Sensitivity analyses show that interventions, such as intensive contact tracing followed by quarantine and isolation, can effectively reduce the control reproduction number and transmission risk, with the effect of travel restriction adopted by Wuhan on 2019-nCoV infection in Beijing being almost equivalent to increasing quarantine by a 100 thousand baseline value. It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities can contribute to the prevention and control of the 2019-nCoV infection, and how long they should be maintained. Under the most restrictive measures, the outbreak is expected to peak within two weeks (since 23 January 2020) with a significant low peak value. With travel restriction (no imported exposed individuals to Beijing), the number of infected individuals in seven days will decrease by 91.14% in Beijing, compared with the scenario of no travel restriction.", "which Rc estimates (average) ?", "6.47", 711.0, 715.0], ["Graphene (Gr) has been widely used as a transparent electrode material for photodetectors because of its high conductivity and high transmittance in recent years. However, the current low-efficiency manipulation of Gr has hindered the arraying and practical use of such detectors. We invented a multistep method of accurately tailoring graphene into interdigital electrodes for fabricating a sensitive, stable deep-ultraviolet photodetector based on Zn-doped Ga2O3 films. The fabricated photodetector exhibits a series of excellent performance, including extremely low dark current (\u223c10^-11 A), an ultrahigh photo-to-dark ratio (>10^5), satisfactory responsivity (1.05 A/W), and excellent selectivity for the deep-ultraviolet band, compared to those with ordinary metal electrodes. The raise of photocurrent and responsivity is attributed to the increase of incident photons through Gr and separated carriers caused by the built-in electric field formed at the interface of Gr and Ga2O3:Zn films. The proposed ideas and methods of tailoring Gr can not only improve the performance of devices but more importantly contribute to the practical development of graphene.", "which Photoresponsivity (A/W ) ?", "1.05", 662.0, 666.0], ["We present a counterfactual recognition (CR) task, the shared Task 5 of SemEval-2020. Counterfactuals describe potential outcomes (consequents) produced by actions or circumstances that did not happen or cannot happen and are counter to the facts (antecedent). Counterfactual thinking is an important characteristic of the human cognitive system; it connects antecedents and consequent with causal relations. Our task provides a benchmark for counterfactual recognition in natural language with two subtasks. Subtask-1 aims to determine whether a given sentence is a counterfactual statement or not. Subtask-2 requires the participating systems to extract the antecedent and consequent in a given counterfactual statement. During the SemEval-2020 official evaluation period, we received 27 submissions to Subtask-1 and 11 to Subtask-2. Our data and baseline code are made publicly available at https://zenodo.org/record/3932442. 
The task website and leaderboard can be found at https://competitions.codalab.org/competitions/21691.", "which Subtask 2 ?", "Extract the antecedent and consequent in a given counterfactual statement", 648.0, 721.0], ["Objective: We consider challenges in accurate segmentation of heart sound signals recorded under noisy clinical environments for subsequent classification of pathological events. Existing state-of-the-art solutions to heart sound segmentation use probabilistic models such as hidden Markov models (HMMs), which, however, are limited by its observation independence assumption and rely on pre-extraction of noise-robust features. Methods: We propose a Markov-switching autoregressive (MSAR) process to model the raw heart sound signals directly, which allows efficient segmentation of the cyclical heart sound states according to the distinct dependence structure in each state. To enhance robustness, we extend the MSAR model to a switching linear dynamic system (SLDS) that jointly model both the switching AR dynamics of underlying heart sound signals and the noise effects. We introduce a novel algorithm via fusion of switching Kalman filter and the duration-dependent Viterbi algorithm, which incorporates the duration of heart sound states to improve state decoding. Results: Evaluated on Physionet/CinC Challenge 2016 dataset, the proposed MSAR-SLDS approach significantly outperforms the hidden semi-Markov model (HSMM) in heart sound segmentation based on raw signals and comparable to a feature-based HSMM. The segmented labels were then used to train Gaussian-mixture HMM classifier for identification of abnormal beats, achieving high average precision of 86.1% on the same dataset including very noisy recordings. Conclusion: The proposed approach shows noticeable performance in heart sound segmentation and classification on a large noisy dataset. Significance: It is potentially useful in developing automated heart monitoring systems for pre-screening of heart pathologies.", "which Contribution description ?", "We consider challenges in accurate segmentation of heart sound signals recorded under noisy clinical environments for subsequent classification of pathological events.", NaN, NaN], ["Background Visual atypicalities in autism spectrum disorder (ASD) are a well documented phenomenon, beginning as early as 2\u20136 months of age and manifesting in a significantly decreased attention to the eyes, direct gaze and socially salient information. Early emerging neurobiological deficits in perceiving social stimuli as rewarding or its active avoidance due to the anxiety it entails have been widely purported as potential reasons for this atypicality. Parallel research evidence also points to the significant benefits of animal presence for reducing social anxiety and enhancing social interaction in children with autism. While atypicality in social attention in ASD has been widely substantiated, whether this atypicality persists equally across species types or is confined to humans has not been a key focus of research insofar. Methods We attempted a comprehensive examination of the differences in visual attention to static images of human and animal faces (40 images; 20 human faces and 20 animal faces) among children with ASD using an eye tracking paradigm. 44 children (ASD n = 21; TD n = 23) participated in the study (10,362 valid observations) across five regions of interest (left eye, right eye, eye region, face and screen). 
Results Results obtained revealed significantly greater social attention across human and animal stimuli in typical controls when compared to children with ASD. However in children with ASD, a significantly greater attention allocation was seen to animal faces and eye region and lesser attention to the animal mouth when compared to human faces, indicative of a clear attentional preference to socially salient regions of animal stimuli. The positive attentional bias toward animals was also seen in terms of a significantly greater visual attention to direct gaze in animal images. Conclusion Our results suggest the possibility that atypicalities in social attention in ASD may not be uniform across species. It adds to the current neural and biomarker evidence base of the potentially greater social reward processing and lesser social anxiety underlying animal stimuli as compared to human stimuli in children with ASD.", "which has answer to research question ?", "greater attention allocation was seen to animal faces and eye region and lesser attention to the animal mouth when compared to human faces, indicative of a clear attentional preference to socially salient regions of animal stimuli.", NaN, NaN], ["Televised public service announcements are video ads that are a key component of public health campaigns against smoking. Understanding the neurophysiological correlates of anti-tobacco ads is an important step toward novel objective methods of their evaluation and design. In the present study, we used functional magnetic resonance imaging (fMRI) to investigate the brain and behavioral effects of the interaction between content (\u201cargument strength,\u201d AS) and format (\u201cmessage sensation value,\u201d MSV) of anti-smoking ads in humans. Seventy-one nontreatment-seeking smokers viewed a sequence of 16 high or 16 low AS ads during an fMRI scan. Dependent variables were brain fMRI signal, the immediate recall of the ads, the immediate change in intentions to quit smoking, and the urine levels of a major nicotine metabolite cotinine at a 1 month follow-up. Whole-brain ANOVA revealed that AS and MSV interacted in the inferior frontal, inferior parietal, and fusiform gyri; the precuneus; and the dorsomedial prefrontal cortex (dMPFC). Regression analysis showed that the activation in the dMPFC predicted the urine cotinine levels 1 month later. These results characterize the key brain regions engaged in the processing of persuasive communications and suggest that brain fMRI response to anti-smoking ads could predict subsequent smoking severity in nontreatment-seeking smokers. Our findings demonstrate the importance of the quality of content for objective ad outcomes and suggest that fMRI investigation may aid the prerelease evaluation of televised public health ads.", "which Main_manipulation_of_interest ?", "interaction between content (\u201cargument strength,\u201d AS) and format (\u201cmessage sensation value,\u201d MSV) of anti-smoking ads", NaN, NaN], ["This paper presents a technique for adapting existing motion of a human-like character to have the desired features that are specified by a set of constraints. This problem can be typically formulated as a spacetime constraint problem. Our approach combines a hierarchical curve fitting technique with a new inverse kinematics solver. Using the kinematics solver, we can adjust the configuration of an articulated figure to meet the constraints in each frame. 
Through the fitting technique, the motion displacement of every joint at each constrained frame is interpolated and thus smoothly propagated to frames. We are able to adaptively add motion details to satisfy the constraints within a specified tolerance by adopting a multilevel B-spline representation which also provides a speedup for the interpolation. The performance of our system is further enhanced by the new inverse kinematics solver. We present a closed-form solution to compute the joint angles of a limb linkage. This analytical method greatly reduces the burden of a numerical optimization to find the solutions for full degrees of freedom of a human-like articulated figure. We demonstrate that the technique can be used for retargetting a motion to compensate for geometric variations caused by both characters and environments. Furthermore, we can also use this technique for directly manipulating a motion clip through a graphical interface. CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics\u2014Animation; G.1.2 [Numerical Analysis]: Approximation\u2014Spline and piecewise polynomial approximation", "which Algorithm and Techniques ?", "A hierarchical curve fitting technique with a new inverse kinematics solver", 258.0, 333.0], ["The effect of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids) on the microbial mineralisation of phenanthrene in soil slurry by the indigenous microbial community has been investigated. The rate and extent of 14C\u2013phenanthrene mineralisation in artificially spiked soils were monitored in the absence of hydroxycinnamic acids and presence of hydroxycinnamic acids applied at three different concentrations (50, 100 and 200 \u00b5g kg-1) either as single compounds or as a mixture of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids at a 1:1:1 ratio). The highest extent of 14C\u2013phenanthrene mineralisation (P 200 \u00b5g kg-1. Depending on its concentration in soil, hydroxycinnamic acids can either stimulate or inhibit mineralisation of phenanthrene by indigenous soil microbial community. Therefore, effective understanding of phytochemical\u2013microbe\u2013organic contaminant interactions is essential for further development of phytotechnologies for remediation of PAH\u2013contaminated soils.", "which Implication ?", "Depending on its concentration in soil, hydroxycinnamic acids can either stimulate or inhibit mineralisation of phenanthrene by indigenous soil microbial community.", NaN, NaN], ["The effect of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids) on the microbial mineralisation of phenanthrene in soil slurry by the indigenous microbial community has been investigated. The rate and extent of 14C\u2013phenanthrene mineralisation in artificially spiked soils were monitored in the absence of hydroxycinnamic acids and presence of hydroxycinnamic acids applied at three different concentrations (50, 100 and 200 \u00b5g kg-1) either as single compounds or as a mixture of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids at a 1:1:1 ratio). The highest extent of 14C\u2013phenanthrene mineralisation (P 200 \u00b5g kg-1. Depending on its concentration in soil, hydroxycinnamic acids can either stimulate or inhibit mineralisation of phenanthrene by indigenous soil microbial community. 
Therefore, effective understanding of phytochemical\u2013microbe\u2013organic contaminant interactions is essential for further development of phytotechnologies for remediation of PAH\u2013contaminated soils.", "which Implication ?", "Therefore, effective understanding of phytochemical\u2013microbe\u2013organic contaminant interactions is essential for further development of phytotechnologies for remediation of PAH\u2013contaminated soils.", NaN, NaN], ["The effect of rhizosphere soil or root tissues amendments on the microbial mineralisation of hydrocarbons in soil slurry by the indigenous microbial communities has been investigated. In this study, rhizosphere soil and root tissues of reed canary grass (Phalaris arundinacea), channel grass (Vallisneria spiralis), blackberry (Rubus fructicosus) and goat willow (Salix caprea) were collected from the former Shell and Imperial Industries (ICI) Refinery site in Lancaster, UK. The rates and extents of 14C\u2013hydrocarbons (naphthalene, phenanthrene, hexadecane or octacosane) mineralisation in artificially spiked soils were monitored in the absence and presence of 5% (wet weight) of rhizosphere soil or root tissues. Respirometric and microbial assays were monitored in fresh (0 d) and pre\u2013incubated (28 d) artificially spiked soils following amendment with rhizosphere soil or root tissues. There were significant increases (P < 0.001) in the extents of 14C\u2013naphthalene and 14C\u2013phenanthrene mineralisation in fresh artificially spiked soils amended with rhizosphere soil and root tissues compared to those measured in unamended soils. However, amendment of fresh artificially spiked soils with rhizosphere soil and root tissues did not enhance the microbial mineralisation of 14C\u2013hexadecane or 14C\u2013octacosane by indigenous microbial communities. Apart from artificially spiked soil systems containing naphthalene (amended with reed canary grass and channel grass rhizosphere) and hexadecane amended with goat willow rhizosphere, microbial mineralisation of hydrocarbons was further enhanced following 28 d soil\u2013organic contaminants pre\u2013exposure and subsequent amendment with rhizosphere soil or root tissues. This study suggests that organic chemicals in roots and/or rhizosphere can enhance the microbial degradation of petroleum hydrocarbons in freshly contaminated soil by supporting higher numbers of hydrocarbon\u2013degrading populations, promoting microbial activity and/or enhancing bioavailability of organic contaminants.", "which Implication ?", "This study suggests that organic chemicals in roots and/or rhizosphere can enhance the microbial degradation of petroleum hydrocarbons in freshly contaminated soil by supporting higher numbers of hydrocarbon\u2013degrading populations, promoting microbial activity and/or enhancing bioavailability of organic contaminants.", NaN, NaN], ["This study investigates the distribution of cadmium and lead concentrations in the outcrop rock samples collected from Abakaliki anticlinorium in the Southern Benue Trough, Nigeria. The outcrop rock samples from seven sampling locations were air\u2013dried for seventy\u2013two hours, homogenized by grinding and pass through < 63 micron mesh sieve. The ground and homogenized rock samples were pulverized and analyzed for cadmium and lead using X-Ray Fluorescence Spectrometer. The concentrations of heavy metals in the outcrop rock samples ranged from < 0.10 \u2013 7.95 mg kg\u20131 for cadmium (Cd) and < 1.00 \u2013 4966.00 mg kg\u20131 for lead (Pb). 
Apart from an anomalous concentration measured in Afikpo Shale (Middle Segment), the results obtained revealed that rock samples from all the sampling locations yielded cadmium concentrations of < 0.10 mg kg\u20131 and the measured concentrations were below the average crustal abundance of 0.50 mg kg\u20131. Although background concentration of <1.00 \u00b1 0.02 mg kg\u20131 was measured in Abakaliki Shale, rock samples from all the sampling locations revealed anomalous lead concentrations above average crustal abundance of 30 mg kg\u20131. The results obtained reveal important contributions towards understanding of heavy metal distribution patterns and provide baseline data that can be used for potential identification of areas at risk associated with natural sources of heavy metals contamination in the region. The use of outcrop rocks provides a cost\u2013effective approach for monitoring regional heavy metal contamination associated with dissolution and/or weathering of rocks or parent materials. Evaluation of heavy metals may be effectively used in large scale regional pollution monitoring of soil, groundwater, atmospheric and marine environment. Therefore, monitoring of heavy metal concentrations in soils, groundwater and atmospheric environment is imperative in order to prevent bioaccumulation in various ecological receptors.", "which Aim ?", "This study investigates the distribution of cadmium and lead concentrations in the outcrop rock samples collected from Abakaliki anticlinorium in the Southern Benue Trough, Nigeria. ", 0.0, 182.0], ["Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179\u00b122 nm were produced. Res was loaded with high EE of 73\u00b10.9% and DL content of 6.2\u00b10.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. 
Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.", "which Highlights ?", "An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells.", NaN, NaN], ["Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179\u00b122 nm were produced. Res was loaded with high EE of 73\u00b10.9% and DL content of 6.2\u00b10.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.", "which Highlights ?", "Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control.", NaN, NaN], ["Rationale: Anti-tumor necrosis factor (TNF) therapy is a very effective way to treat inflammatory bowel disease. However, systemic exposure to anti-TNF-\u03b1 antibodies through current clinical systemic administration can cause serious adverse effects in many patients. Here, we report a facile prepared self-assembled supramolecular nanoparticle based on natural polyphenol tannic acid and poly(ethylene glycol) containing polymer for oral antibody delivery. Method: This supramolecular nanoparticle was fabricated within minutes in aqueous solution and easily scaled up to gram level due to their pH-dependent reversible assembly. DSS-induced colitis model was prepared to evaluate the ability of inflammatory colon targeting ability and therapeutic efficacy of this antibody-loaded nanoparticles. Results: This polyphenol-based nanoparticle can be aqueous assembly without organic solvent and thus scaled up easily. 
The oral administration of antibody loaded nanoparticle achieved high accumulation in the inflamed colon and low systemic exposure. The novel formulation of anti-TNF-\u03b1 antibodies administrated orally achieved high efficacy in the treatment of colitis mice compared with free antibodies administered orally. The average weight, colon length, and inflammatory factors in colon and serum of colitis mice after the treatment of novel formulation of anti-TNF-\u03b1 antibodies even reached the similar level to healthy controls. Conclusion: This polyphenol-based supramolecular nanoparticle is a promising platform for oral delivery of antibodies for the treatment of inflammatory bowel diseases, which may have promising clinical translation prospects.", "which Highlights ?", "The average weight, colon length, and inflammatory factors in colon and serum of colitis mice after the treatment of novel formulation of anti-TNF-\u03b1 antibodies even reached the similar level to healthy controls", 1222.0, 1432.0], ["Breast cancer is a major form of cancer, with a high mortality rate in women. It is crucial to achieve more efficient and safe anticancer drugs. Recent developments in medical nanotechnology have resulted in novel advances in cancer drug delivery. Cisplatin, doxorubicin, and 5-fluorouracil are three important anti-cancer drugs which have poor water-solubility. In this study, we used cisplatin, doxorubicin, and 5-fluorouracil-loaded polycaprolactone-polyethylene glycol (PCL-PEG) nanoparticles to improve the stability and solubility of molecules in drug delivery systems. The nanoparticles were prepared by a double emulsion method and characterized with Fourier Transform Infrared (FTIR) spectroscopy and Hydrogen-1 nuclear magnetic resonance (1HNMR). Cells were treated with equal concentrations of cisplatin, doxorubicin and 5-fluorouracil-loaded PCL-PEG nanoparticles, and free cisplatin, doxorubicin and 5-fluorouracil. The 3-[4,5-dimethylthiazol-2yl]-2,5-diphenyl tetrazolium bromide (MTT) assay confirmed that cisplatin, doxorubicin, and 5-fluorouracil-loaded PCL-PEG nanoparticles enhanced cytotoxicity and drug delivery in T47D and MCF7 breast cancer cells. However, the IC50 value of doxorubicin was lower than the IC50 values of both cisplatin and 5-fluorouracil, where the difference was statistically considered significant (p\u02c20.05). However, the IC50 value of all drugs on T47D were lower than those on MCF7.", "which Highlights ?", "The IC50 value of all drugs on T47D were lower than those on MCF7.", NaN, NaN], ["Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. 
Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179\u00b122 nm were produced. Res was loaded with high EE of 73\u00b10.9% and DL content of 6.2\u00b10.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.", "which Highlights ?", "the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.", NaN, NaN], ["Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179\u00b122 nm were produced. Res was loaded with high EE of 73\u00b10.9% and DL content of 6.2\u00b10.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.", "which Highlights ?", "the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. ", 1619.0, 1714.0], ["Rationale: Anti-tumor necrosis factor (TNF) therapy is a very effective way to treat inflammatory bowel disease. 
However, systemic exposure to anti-TNF-\u03b1 antibodies through current clinical systemic administration can cause serious adverse effects in many patients. Here, we report a facile prepared self-assembled supramolecular nanoparticle based on natural polyphenol tannic acid and poly(ethylene glycol) containing polymer for oral antibody delivery. Method: This supramolecular nanoparticle was fabricated within minutes in aqueous solution and easily scaled up to gram level due to their pH-dependent reversible assembly. DSS-induced colitis model was prepared to evaluate the ability of inflammatory colon targeting ability and therapeutic efficacy of this antibody-loaded nanoparticles. Results: This polyphenol-based nanoparticle can be aqueous assembly without organic solvent and thus scaled up easily. The oral administration of antibody loaded nanoparticle achieved high accumulation in the inflamed colon and low systemic exposure. The novel formulation of anti-TNF-\u03b1 antibodies administrated orally achieved high efficacy in the treatment of colitis mice compared with free antibodies administered orally. The average weight, colon length, and inflammatory factors in colon and serum of colitis mice after the treatment of novel formulation of anti-TNF-\u03b1 antibodies even reached the similar level to healthy controls. Conclusion: This polyphenol-based supramolecular nanoparticle is a promising platform for oral delivery of antibodies for the treatment of inflammatory bowel diseases, which may have promising clinical translation prospects.", "which Highlights ?", "The novel formulation of anti-TNF-\u03b1 antibodies administrated orally achieved high efficacy in the treatment of colitis mice compared with free antibodies administered orally", 1047.0, 1220.0], ["The development of simple fluorescent and colorimetric assays that enable point-of-care DNA and RNA detection has been a topic of significant research because of the utility of such assays in resource limited settings. The most common motifs utilize hybridization to a complementary detection strand coupled with a sensitive reporter molecule. Here, a paper-based colorimetric assay for DNA detection based on pyrrolidinyl peptide nucleic acid (acpcPNA)-induced nanoparticle aggregation is reported as an alternative to traditional colorimetric approaches. PNA probes are an attractive alternative to DNA and RNA probes because they are chemically and biologically stable, easily synthesized, and hybridize efficiently with the complementary DNA strands. The acpcPNA probe contains a single positive charge from the lysine at C-terminus and causes aggregation of citrate anion-stabilized silver nanoparticles (AgNPs) in the absence of complementary DNA. In the presence of target DNA, formation of the anionic DNA-acpcPNA duplex results in dispersion of the AgNPs as a result of electrostatic repulsion, giving rise to a detectable color change. Factors affecting the sensitivity and selectivity of this assay were investigated, including ionic strength, AgNP concentration, PNA concentration, and DNA strand mismatches. The method was used for screening of synthetic Middle East respiratory syndrome coronavirus (MERS-CoV), Mycobacterium tuberculosis (MTB), and human papillomavirus (HPV) DNA based on a colorimetric paper-based analytical device developed using the aforementioned principle. The oligonucleotide targets were detected by measuring the color change of AgNPs, giving detection limits of 1.53 (MERS-CoV), 1.27 (MTB), and 1.03 nM (HPV). 
The acpcPNA probe exhibited high selectivity for the complementary oligonucleotides over single-base-mismatch, two-base-mismatch, and noncomplementary DNA targets. The proposed paper-based colorimetric DNA sensor has potential to be an alternative approach for simple, rapid, sensitive, and selective DNA detection.", "which has outcome ?", "paper-based colorimetric assay for DNA detection", 352.0, 400.0], ["Abstract This study investigated the process of resolving painful emotional experience during psychodrama group therapy, by examining significant therapeutic events within seven psychodrama enactments. A comprehensive process analysis of four resolved and three not-resolved cases identified five meta-processes which were linked to in-session resolution. One was a readiness to engage in the therapeutic process, which was influenced by client characteristics and the client's experience of the group; and four were therapeutic events: (1) re-experiencing with insight; (2) activating resourcefulness; (3) social atom repair with emotional release; and (4) integration. A corrective interpersonal experience (social atom repair) healed the sense of fragmentation and interpersonal disconnection associated with unresolved emotional pain, and emotional release was therapeutically helpful when located within the enactment of this new role relationship. Protagonists who experienced resolution reported important improvements in interpersonal functioning and sense of self which they attributed to this experience.", "which Qualitative findings ?", "Improvements in interpersonal functioning and sense of self ", 1013.0, 1073.0], ["A proof of retrievability (POR) is a compact proof by a file system (prover) to a client (verifier) that a target file F is intact, in the sense that the client can fully recover it. As PORs incur lower communication complexity than transmission of F itself, they are an attractive building block for high-assurance remote storage systems. In this paper, we propose a theoretical framework for the design of PORs. Our framework improves the previously proposed POR constructions of Juels-Kaliski and Shacham-Waters, and also sheds light on the conceptual limitations of previous theoretical models for PORs. It supports a fully Byzantine adversarial model, carrying only the restriction---fundamental to all PORs---that the adversary's error rate be bounded when the client seeks to extract F. We propose a new variant on the Juels-Kaliski protocol and describe a prototype implementation. We demonstrate practical encoding even for files F whose size exceeds that of client main memory.", "which Custom1 ?", "A proof of retrievability (POR) is a compact proof by a file system (prover) to a client (verifier) that a target file F is intact, in the sense that the client can fully recover it. As PORs incur lower communication complexity than transmission of F itself, they are an attractive building block for high-assurance remote storage systems.", NaN, NaN], ["Proof of retrievability is a cryptographic tool which interacts between the data user and the server, and the server proves to the data user the integrity of data which he will download. It is a crucial problem in outsourcing storage such as cloud computing. In this paper, a novel scheme called the zero knowledge proof of retrievability is proposed, which combines proof of retrievability and zero knowledge proof. 
It has lower computation and communication complexity and higher security than the previous schemes.", "which Custom1 ?", "Proof of retrievability is a cryptographic tool which interacts between the data user and the server, and the server proves to the data user the integrity of data which he will download. It is a crucial problem in outsourcing storage such as cloud computing. In this paper, a novel scheme called the zero knowledge proof of retrievability is proposed, which combines proof of retrievability and zero knowledge proof. It has lower computation and communication complexity and higher security than the previous schemes.", NaN, NaN], ["Proofs of Retrievability (PORs) permit a cloud provider to prove to the client (owner) that her files are correctly stored. Extensions of PORs, called Proofs of Retrievability and Reliability (PORRs), enable to check in a single instance that replicas of those files are correctly stored as well.In this paper, we propose a publicly verifiable PORR using Verifiable Delay Functions, which are special functions being slow to compute and easy to verify. We thus ensure that the cloud provider stores both original files and their replicas at rest, rather than computing the latter on the fly when requested to prove fair storage. Moreover, the storage verification can be done by anyone, not necessarily by the client. To our knowledge, this is the first PORR that offers public verification. Future work will include implementation and evaluation of our solution in a realistic cloud setting.", "which Custom1 ?", "Proofs of Retrievability (PORs) permit a cloud provider to prove to the client (owner) that her files are correctly stored. Extensions of PORs, called Proofs of Retrievability and Reliability (PORRs), enable to check in a single instance that replicas of those files are correctly stored as well.In this paper, we propose a publicly verifiable PORR using Verifiable Delay Functions, which are special functions being slow to compute and easy to verify. We thus ensure that the cloud provider stores both original files and their replicas at rest, rather than computing the latter on the fly when requested to prove fair storage. Moreover, the storage verification can be done by anyone, not necessarily by the client. To our knowledge, this is the first PORR that offers public verification. Future work will include implementation and evaluation of our solution in a realistic cloud setting.", NaN, NaN], ["Remote data auditing service is important for mobile clients to guarantee the intactness of their outsourced data stored at cloud side. To relieve mobile client from the nonnegligible burden incurred by performing the frequent data auditing, more and more literatures propose that the execution of such data auditing should be migrated from mobile client to third-party auditor (TPA). However, existing public auditing schemes always assume that TPA is reliable, which is the potential risk for outsourced data security. Although Outsourced Proofs of Retrievability (OPOR) have been proposed to further protect against the malicious TPA and collusion among any two entities, the original OPOR scheme applies only to the static data, which is the limitation that should be solved for enabling data dynamics. In this paper, we design a novel authenticated data structure called bv23Tree, which enables client to batch-verify the indices and values of any number of appointed leaves all at once for efficiency. 
By utilizing bv23Tree and a hierarchical storage structure, we present the first solution for Dynamic OPOR (DOPOR), which extends the OPOR model to support dynamic updates of the outsourced data. Extensive security and performance analyses show the reliability and effectiveness of our proposed scheme.", "which Custom1 ?", "Remote data auditing service is important for mobile clients to guarantee the intactness of their outsourced data stored at cloud side. To relieve mobile client from the nonnegligible burden incurred by performing the frequent data auditing, more and more literatures propose that the execution of such data auditing should be migrated from mobile client to third-party auditor (TPA). However, existing public auditing schemes always assume that TPA is reliable, which is the potential risk for outsourced data security. Although Outsourced Proofs of Retrievability (OPOR) have been proposed to further protect against the malicious TPA and collusion among any two entities, the original OPOR scheme applies only to the static data, which is the limitation that should be solved for enabling data dynamics. In this paper, we design a novel authenticated data structure called bv23Tree, which enables client to batch-verify the indices and values of any number of appointed leaves all at once for efficiency. By utilizing bv23Tree and a hierarchical storage structure, we present the first solution for Dynamic OPOR (DOPOR), which extends the OPOR model to support dynamic updates of the outsourced data. Extensive security and performance analyses show the reliability and effectiveness of our proposed scheme.", NaN, NaN], ["Mobile phone text messages can be used to disseminate information and advice to the public in disasters. We sought to identify factors influencing how adolescents would respond to receiving emergency text messages. Qualitative interviews were conducted with participants aged 12\u201318 years. Participants discussed scenarios relating to flooding and the discovery of an unexploded World War Two bomb and were shown example alerts that might be sent out in these circumstances. Intended compliance with the alerts was high. Participants noted that compliance would be more likely if: they were familiar with the system; the messages were sent by a trusted source; messages were reserved for serious incidents; multiple messages were sent; messages were kept short and formal.", "which RQ ?", "Identify factors influencing how adolescents would respond to receiving emergency text messages", 118.0, 213.0], ["Objective: The objective is to design a fully automated glycemia controller of Type-1 Diabetes (T1D) in both fasting and postprandial phases on a large number of virtual patients. Methods: A model-free intelligent proportional-integral-derivative (iPID) is used to infuse insulin. The feasibility of iPID is tested in silico on two simulators with and without measurement noise. The first simulator is derived from a long-term linear time-invariant model. The controller is also validated on the UVa/Padova metabolic simulator on 10 adults under 25 runs/subject for noise robustness test. Results: It was shown that without measurement noise, iPID mimicked the normal pancreatic secretion with a relatively fast reaction to meals as compared to a standard PID. With the UVa/Padova simulator, the robustness against CGM noise was tested. A higher percentage of time in target was obtained with iPID as compared to standard PID with reduced time spent in hyperglycemia. 
Conclusion: Two different T1D simulators tests showed that iPID detects meals and reacts faster to meal perturbations as compared to a classic PID. The intelligent part turns the controller to be more aggressive immediately after meals without neglecting safety. Further research is suggested to improve the computation of the intelligent part of iPID for such systems under actuator constraints. Any improvement can impact the overall performance of the model-free controller. Significance: The simple structure iPID is a step for PID-like controllers since it combines the classic PID nice properties with new adaptive features.", "which Future work/challenges ?", "improve the computation of the intelligent part of iPID", 1264.0, 1319.0], ["We present a counterfactual recognition (CR) task, the shared Task 5 of SemEval-2020. Counterfactuals describe potential outcomes (consequents) produced by actions or circumstances that did not happen or cannot happen and are counter to the facts (antecedent). Counterfactual thinking is an important characteristic of the human cognitive system; it connects antecedents and consequent with causal relations. Our task provides a benchmark for counterfactual recognition in natural language with two subtasks. Subtask-1 aims to determine whether a given sentence is a counterfactual statement or not. Subtask-2 requires the participating systems to extract the antecedent and consequent in a given counterfactual statement. During the SemEval-2020 official evaluation period, we received 27 submissions to Subtask-1 and 11 to Subtask-2. Our data and baseline code are made publicly available at https://zenodo.org/record/3932442. The task website and leaderboard can be found at https://competitions.codalab.org/competitions/21691.", "which Online competition ?", "https://competitions.codalab.org/competitions/21691", 978.0, 1029.0], ["In this paper, we present SemEval-2020 Task 4, Commonsense Validation and Explanation (ComVE), which includes three subtasks, aiming to evaluate whether a system can distinguish a natural language statement that makes sense to humans from one that does not, and provide the reasons. Specifically, in our first subtask, the participating systems are required to choose from two natural language statements of similar wording the one that makes sense and the one does not. The second subtask additionally asks a system to select the key reason from three options why a given statement does not make sense. In the third subtask, a participating system needs to generate the reason automatically. 39 teams submitted their valid systems to at least one subtask. For Subtask A and Subtask B, top-performing teams have achieved results closed to human performance. However, for Subtask C, there is still a considerable gap between system and human performance. The dataset used in our task can be found at https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation.", "which Data repositories ?", "https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation", NaN, NaN], ["We present a counterfactual recognition (CR) task, the shared Task 5 of SemEval-2020. Counterfactuals describe potential outcomes (consequents) produced by actions or circumstances that did not happen or cannot happen and are counter to the facts (antecedent). Counterfactual thinking is an important characteristic of the human cognitive system; it connects antecedents and consequent with causal relations. 
Our task provides a benchmark for counterfactual recognition in natural language with two subtasks. Subtask-1 aims to determine whether a given sentence is a counterfactual statement or not. Subtask-2 requires the participating systems to extract the antecedent and consequent in a given counterfactual statement. During the SemEval-2020 official evaluation period, we received 27 submissions to Subtask-1 and 11 to Subtask-2. Our data and baseline code are made publicly available at https://zenodo.org/record/3932442. The task website and leaderboard can be found at https://competitions.codalab.org/competitions/21691.", "which Data repositories ?", "https://zenodo.org/record/3932442", 894.0, 927.0], ["Abstract Background The task of recognizing and identifying species names in biomedical literature has recently been regarded as critical for a number of applications in text and data mining, including gene name recognition, species-specific document retrieval, and semantic enrichment of biomedical articles. Results In this paper we describe an open-source species name recognition and normalization software system, LINNAEUS, and evaluate its performance relative to several automatically generated biomedical corpora, as well as a novel corpus of full-text documents manually annotated for species mentions. LINNAEUS uses a dictionary-based approach (implemented as an efficient deterministic finite-state automaton) to identify species names and a set of heuristics to resolve ambiguous mentions. When compared against our manually annotated corpus, LINNAEUS performs with 94% recall and 97% precision at the mention level, and 98% recall and 90% precision at the document level. Our system successfully solves the problem of disambiguating uncertain species mentions, with 97% of all mentions in PubMed Central full-text documents resolved to unambiguous NCBI taxonomy identifiers. Conclusions LINNAEUS is an open source, stand-alone software system capable of recognizing and normalizing species name mentions with speed and accuracy, and can therefore be integrated into a range of bioinformatics and text-mining applications. The software and manually annotated corpus can be downloaded freely at http://linnaeus.sourceforge.net/.", "which url ?", "http://linnaeus.sourceforge.net/", NaN, NaN], ["SUMMARY We present a part-of-speech tagger that achieves over 97% accuracy on MEDLINE citations. AVAILABILITY Software, documentation and a corpus of 5700 manually tagged sentences are available at ftp://ftp.ncbi.nlm.nih.gov/pub/lsmith/MedPost/medpost.tar.gz", "which url ?", "ftp://ftp.ncbi.nlm.nih.gov/pub/lsmith/MedPost/medpost.tar.gz", 198.0, 258.0], ["Abstract Background Named entity recognition (NER) is an important first step for text mining the biomedical literature. Evaluating the performance of biomedical NER systems is impossible without a standardized test corpus. The annotation of such a corpus for gene/protein name NER is a difficult process due to the complexity of gene/protein names. We describe the construction and annotation of GENETAG, a corpus of 20K MEDLINE \u00ae sentences for gene/protein NER. 15K GENETAG sentences were used for the BioCreAtIvE Task 1A Competition. Results To ensure heterogeneity of the corpus, MEDLINE sentences were first scored for term similarity to documents with known gene names, and 10K high- and 10K low-scoring sentences were chosen at random. 
The original 20K sentences were run through a gene/protein name tagger, and the results were modified manually to reflect a wide definition of gene/protein names subject to a specificity constraint, a rule that required the tagged entities to refer to specific entities. Each sentence in GENETAG was annotated with acceptable alternatives to the gene/protein names it contained, allowing for partial matching with semantic constraints. Semantic constraints are rules requiring the tagged entity to contain its true meaning in the sentence context. Application of these constraints results in a more meaningful measure of the performance of an NER system than unrestricted partial matching. Conclusion The annotation of GENETAG required intricate manual judgments by annotators which hindered tagging consistency. The data were pre-segmented into words, to provide indices supporting comparison of system responses to the \"gold standard\". However, character-based indices would have been more robust than word-based indices. GENETAG Train, Test and Round1 data and ancillary programs are freely available at ftp://ftp.ncbi.nlm.nih.gov/pub/tanabe/GENETAG.tar.gz. A newer version of GENETAG-05, will be released later this year.", "which url ?", "ftp://ftp.ncbi.nlm.nih.gov/pub/tanabe/GENETAG.tar.gz", 1920.0, 1972.0], ["Extracting information from full documents is an important problem in many domains, but most previous work focus on identifying relationships within a sentence or a paragraph. It is challenging to create a large-scale information extraction (IE) dataset at the document level since it requires an understanding of the whole document to annotate entities and their document-level relationships that usually span beyond sentences or even sections. In this paper, we introduce SciREX, a document level IE dataset that encompasses multiple IE tasks, including salient entity identification and document level N-ary relation identification from scientific articles. We annotate our dataset by integrating automatic and human annotations, leveraging existing scientific knowledge resources. We develop a neural model as a strong baseline that extends previous state-of-the-art IE models to document-level IE. Analyzing the model performance shows a significant gap between human performance and current baselines, inviting the community to use our dataset as a challenge to develop document-level IE models. Our data and code are publicly available at https://github.com/allenai/SciREX .", "which url ?", "https://github.com/allenai/SciREX", 1146.0, 1179.0], ["Sentence embeddings have become an essential part of today\u2019s natural language processing (NLP) systems, especially together advanced deep learning methods. Although pre-trained sentence encoders are available in the general domain, none exists for biomedical texts to date. In this work, we introduce BioSentVec: the first open set of sentence embeddings trained with over 30 million documents from both scholarly articles in PubMed and clinical notes in the MIMICIII Clinical Database. We evaluate BioSentVec embeddings in two sentence pair similarity tasks in different biomedical text genres. Our benchmarking results demonstrate that the BioSentVec embeddings can better capture sentence semantics compared to the other competitive alternatives and achieve state-of-the-art performance in both tasks. We expect BioSentVec to facilitate the research and development in biomedical text mining and to complement the existing resources in biomedical word embeddings. 
The embeddings are publicly available at https://github.com/ncbi-nlp/BioSentVec.", "which has source code ?", "https://github.com/ncbi-nlp/BioSentVec", 1008.0, 1046.0], ["Neural architecture search (NAS) has emerged as a promising avenue for automatically designing task-specific neural networks. Existing NAS approaches require one complete search for each deployment specification of hardware or objective. This is a computationally impractical endeavor given the potentially large number of application scenarios. In this paper, we propose Neural Architecture Transfer (NAT) to overcome this limitation. NAT is designed to efficiently generate task-specific custom models that are competitive under multiple conflicting objectives. To realize this goal we learn task-specific supernets from which specialized subnets can be sampled without any additional training. The key to our approach is an integrated online transfer learning and many-objective evolutionary search procedure. A pre-trained supernet is iteratively adapted while simultaneously searching for task-specific subnets. We demonstrate the efficacy of NAT on 11 benchmark image classification tasks ranging from large-scale multi-class to small-scale fine-grained datasets. In all cases, including ImageNet, NATNets improve upon the state-of-the-art under mobile settings ($\\leq$\u2264 600M Multiply-Adds). Surprisingly, small-scale fine-grained datasets benefit the most from NAT. At the same time, the architecture search and transfer is orders of magnitude more efficient than existing NAS methods. Overall, experimental evaluation indicates that, across diverse image classification tasks and computational objectives, NAT is an appreciably more effective alternative to conventional transfer learning of fine-tuning weights of an existing network architecture learned on standard datasets. Code is available at https://github.com/human-analysis/neural-architecture-transfer.", "which has source code ?", "https://github.com/human-analysis/neural-architecture-transfer", 1922.0, 1984.0], ["Can a computer determine a piano player\u2019s skill level? Is it preferable to base this assessment on visual analysis of the player\u2019s performance or should we trust our ears over our eyes? Since current convolutional neural networks (CNNs) have difficulty processing long video videos, how can shorter clips be sampled to best reflect the players skill level? In this work, we collect and release a first-of-its-kind dataset for multimodal skill assessment focusing on assessing piano player\u2019s skill level, answer the asked questions, initiate work in automated evaluation of piano playing skills and provide baselines for future work. Dataset can be accessed from: https://github.com/ParitoshParmar/Piano-Skills-Assessment.", "which has source code ?", "https://github.com/ParitoshParmar/Piano-Skills-Assessment", 663.0, 720.0], ["We present a new approach to modeling sequential data: the deep equilibrium model (DEQ). Motivated by an observation that the hidden layers of many existing deep sequence models converge towards some fixed point, we propose the DEQ approach that directly finds these equilibrium points via root-finding. Such a method is equivalent to running an infinite depth (weight-tied) feedforward network, but has the notable advantage that we can analytically backpropagate through the equilibrium point using implicit differentiation. 
Using this approach, training and prediction in these networks require only constant memory, regardless of the effective \u201cdepth\u201d of the network. We demonstrate how DEQs can be applied to two state-of-the-art deep sequence models: self-attention transformers and trellis networks. On large-scale language modeling tasks, such as the WikiText-103 benchmark, we show that DEQs 1) often improve performance over these state-of-the-art models (for similar parameter counts); 2) have similar computational requirements to existing models; and 3) vastly reduce memory consumption (often the bottleneck for training large sequence models), demonstrating an up-to 88% memory reduction in our experiments. The code is available at https://github.com/locuslab/deq.", "which has source code ?", "https://github.com/locuslab/deq", 1248.0, 1279.0], ["Graph neural networks have recently emerged as a very effective framework for processing graph-structured data. These models have achieved state-of-the-art performance in many tasks. Most graph neural networks can be described in terms of message passing, vertex update, and readout functions. In this paper, we represent documents as word co-occurrence networks and propose an application of the message passing framework to NLP, the Message Passing Attention network for Document understanding (MPAD). We also propose several hierarchical variants of MPAD. Experiments conducted on 10 standard text classification datasets show that our architectures are competitive with the state-of-the-art. Ablation studies reveal further insights about the impact of the different components on performance. Code is publicly available at: https://github.com/giannisnik/mpad.", "which has source code ?", "https://github.com/giannisnik/mpad", 837.0, 871.0], ["Abstract We introduce an architecture to learn joint multilingual sentence representations for 93 languages, belonging to more than 30 different families and written in 28 different scripts. Our system uses a single BiLSTM encoder with a shared byte-pair encoding vocabulary for all languages, which is coupled with an auxiliary decoder and trained on publicly available parallel corpora. This enables us to learn a classifier on top of the resulting embeddings using English annotated data only, and transfer it to any of the 93 languages without any modification. Our experiments in cross-lingual natural language inference (XNLI data set), cross-lingual document classification (MLDoc data set), and parallel corpus mining (BUCC data set) show the effectiveness of our approach. We also introduce a new test set of aligned sentences in 112 languages, and show that our sentence embeddings obtain strong results in multilingual similarity search even for low- resource languages. Our implementation, the pre-trained encoder, and the multilingual test set are available at https://github.com/facebookresearch/LASER.", "which has source code ?", "https://github.com/facebookresearch/LASER", 1074.0, 1115.0], ["Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SciBERT, a pretrained language model based on BERT (Devlin et. al., 2018) to address the lack of high-quality, large-scale labeled scientific data. SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. 
We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks. The code and pretrained models are available at https://github.com/allenai/scibert/.", "which has source code ?", "https://github.com/allenai/scibert", 765.0, 799.0], ["The BioCreative NLM-Chem track calls for a community effort to fine-tune automated recognition of chemical names in biomedical literature. Chemical names are one of the most searched biomedical entities in PubMed and \u2013 as highlighted during the COVID-19 pandemic \u2013 their identification may significantly advance research in multiple biomedical subfields. While previous community challenges focused on identifying chemical names mentioned in titles and abstracts, the full text contains valuable additional detail. We organized the BioCreative NLM-Chem track to call for a community effort to address automated chemical entity recognition in full-text articles. The track consisted of two tasks: 1) Chemical Identification task, and 2) Chemical Indexing prediction task. For the Chemical Identification task, participants were expected to predict with high accuracy all chemicals mentioned in recently published full-text articles, both span (i.e., named entity recognition) and normalization (i.e., entity linking) using MeSH. For the Chemical Indexing task, participants identified which chemicals should be indexed as topics for the article's topic terms in the NLM article and indexing, i.e., appear in the listing of MeSH terms for the document. This manuscript summarizes the BioCreative NLM-Chem track. We received a total of 88 submissions in total from 17 teams worldwide. The highest performance achieved for the Chemical Identification task was 0.8672 f-score (0.8759 precision, 0.8587 recall) for strict NER performance and 0.8136 f-score (0.8621 precision, 0.7702 recall) for strict normalization performance. The highest performance achieved for the Chemical Indexing task was 0.4825 f-score (0.4397 precision, 0.5344 recall). The NLM-Chem track dataset and other challenge materials are publicly available at https://ftp.ncbi.nlm.nih.gov/pub/lu/BC7-NLM-Chem-track/. This community challenge demonstrated 1) the current substantial achievements in deep learning technologies can be utilized to further improve automated prediction accuracy, and 2) the Chemical Indexing task is substantially more challenging. We look forward to further development of biomedical text mining methods to respond to the rapid growth of biomedical literature. Keywords\u2014 biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; text mining; chemical entity recognition; chemical indexing", "which Dataset download url ?", "https://ftp.ncbi.nlm.nih.gov/pub/lu/BC7-NLM-Chem-track/", NaN, NaN], ["We propose a new dataset for the evaluation of food recognition algorithms that can be used in dietary monitoring applications. Each image depicts a real canteen tray with dishes and foods arranged in different ways. Each tray contains multiple instances of food classes. The dataset contains 1027 canteen trays for a total of 3616 food instances belonging to 73 food classes. The food on the tray images has been manually segmented using carefully drawn polygonal boundaries. 
We have benchmarked the dataset by designing an automatic tray analysis pipeline that takes a tray image as input, finds the regions of interest, and predicts for each region the corresponding food class. We have experimented with three different classification strategies using also several visual descriptors. We achieve about 79% of food and tray recognition accuracy using convolutional-neural-networks-based features. The dataset, as well as the benchmark framework, are available to the research community.", "which Number of images ?", "1027", 293.0, 297.0], ["This practice paper describes an ongoing research project to test the effectiveness and relevance of the FAIR Data Principles. Simultaneously, it will analyse how easy it is for data archives to adhere to the principles. The research took place from November 2016 to January 2017, and will be underpinned with feedback from the repositories. The FAIR Data Principles feature 15 facets corresponding to the four letters of FAIR - Findable, Accessible, Interoperable, Reusable. These principles have already gained traction within the research world. The European Commission has recently expanded its demand for research to produce open data. The relevant guidelines1 are explicitly written in the context of the FAIR Data Principles. Given an increasing number of researchers will have exposure to the guidelines, understanding their viability and suggesting where there may be room for modification and adjustment is of vital importance. This practice paper is connected to a dataset (Dunning et al., 2017) containing the original overview of the sample group statistics and graphs, in an Excel spreadsheet. Over the course of two months, the web-interfaces, help-pages and metadata-records of over 40 data repositories have been examined, to score the individual data repository against the FAIR principles and facets. The traffic-light rating system enables colour-coding according to compliance and vagueness. The statistical analysis provides overall, categorised, on the principles focussing, and on the facet focussing results. The analysis includes the statistical and descriptive evaluation, followed by elaborations on Elements of the FAIR Data Principles, the subject specific or repository specific differences, and subsequently what repositories can do to improve their information architecture.", "which has publication year ?", "2017", 275.0, 279.0], ["Objectives: To estimate the prevalence and incidence of epilepsy in Italy using a national database of general practitioners (GPs). Methods: The Health Search CSD Longitudinal Patient Database (HSD) has been established in 1998 by the Italian College of GPs. Participants were 700 GPs, representing a population of 912,458. For each patient, information on age and sex, EEG, CT scan, and MRI was included. Prevalent cases with a diagnosis of \u2018epilepsy\u2019 (ICD9CM: 345*) were selected in the 2011 population. Incident cases of epilepsy were identified in 2011 by excluding patients diagnosed for epilepsy and convulsions and those with EEG, CT scan, MRI prescribed for epilepsy and/or convulsions in the previous years. Crude and standardized (Italian population) prevalence and incidence were calculated. Results: Crude prevalence of epilepsy was 7.9 per 1,000 (men 8.1; women 7.7). The highest prevalence was in patients <25 years and \u226575 years. The incidence of epilepsy was 33.5 per 100,000 (women 35.3; men 31.5). The highest incidence was in women <25 years and in men 75 years or older. 
Conclusions: Prevalence and incidence of epilepsy in this study were similar to those of other industrialized countries. HSD appears as a reliable data source for the surveillance of epilepsy in Italy. \u00a9 2014 S. Karger AG, Basel", "which has publication year ?", "2014", 1295.0, 1299.0], ["Human motion is difficult to create and manipulate because of the high dimensionality and spatiotemporal nature of human motion data. Recently, the use of large collections of captured motion data has added increased realism in character animation. In order to make the synthesis and analysis of motion data tractable, we present a low\u2010dimensional motion space in which high\u2010dimensional human motion can be effectively visualized, synthesized, edited, parameterized, and interpolated in both spatial and temporal domains. Our system allows users to create and edit the motion of animated characters in several ways: The user can sketch and edit a curve on low\u2010dimensional motion space, directly manipulate the character's pose in three\u2010dimensional object space, or specify key poses to create in\u2010between motions. Copyright \u00a9 2006 John Wiley & Sons, Ltd.", "which paper: publication_year ?", "2006", 825.0, 829.0], ["The notion of communities getting together during a disaster to help each other is common. However, how does this communal activity happen within the online world? Here we examine this issue using the Communities of Practice (CoP) approach. We extend CoP to multiple CoP (MCoPs) and examine the role of social media applications in disaster management, extending work done by Ahmed (2011). Secondary data in the form of newspaper reports during 2010 to 2011 were analysed to understand how social media, particularly Facebook and Twitter, facilitated the process of communication among various communities during the Queensland floods in 2010. The results of media-content analysis along with the findings of relevant literature were used to extend our existing understanding on various communities of practice involved in disaster management, their communication tasks and the role of Twitter and Facebook as common conducive platforms of communication during disaster management alongside traditional communication channels.", "which paper: publication_year ?", "2011", 383.0, 387.0], ["Social media have played integral roles in many crises around the world. Thailand faced severe floods between July 2011 and January 2012, when more than 13.6 million people were affected. This 7-month disaster provides a great opportunity to understand the use of social media for managing a crisis project before, during, and after its occurrence. However, current literature lacks a theoretical framework on investigating the relationship between social media and crisis management from the project management perspective. The paper adopts a social media-based crisis management framework and the structuration theory in investigating and analyzing social media. The results suggest that social media should be utilized to meet different information needs in order to achieve the success of managing a future crisis project.", "which paper: publication_year ?", "2012", 132.0, 136.0], ["ABSTRACT Contested heritage has increasingly been studied by scholars over the last two decades in multiple disciplines, however, there is still limited knowledge about what contested heritage is and how it is realized in society. 
Therefore, the purpose of this paper is to produce a systematic literature review on this topic to provide a holistic understanding of contested heritage, and delineate its current state, trends and gaps. Methodologically, four electronic databases were searched, and 102 journal articles published before 2020 were extracted. A content analysis of each article was then conducted to identify key themes and variables for classification. Findings show that while its research often lacks theoretical underpinnings, contested heritage is marked by its diversity and complexity as it becomes a global issue for both tourism and urbanization. By presenting a holistic understanding of contested heritage, this review offers an extensive investigation of the topic area to help move literature pertaining contested heritage forward.", "which Has endpoint ?", "2020", 537.0, 541.0], ["In Spring 2013 San Jos\u00e9 State University (SJSU) launched SJSU Plus: three college courses required for most students to graduate, which used massive open online course provider Udacity\u2019s platform, attracting over 15,000 students. Retention and success (pass/fail) and online support were tested using an augmented online learning environment (AOLE) on a subset of 213 students; about one-half matriculated. SJSU faculty created the course content, collaborating with Udacity to develop video instruction, quizzes, and interactive elements. Course log-ins and progression data were combined with surveys and focus groups, with students, faculty, support staff, coordinators, and program leaders as subjects. Logit models used contingency table-tested potential success predictors on all students and five subgroups. Student effort was the strongest success indicator, suggesting criticality of early and consistent student engagement. No statistically significant relationships with student characteristics were found. AOLE support effectiveness was compromised with staff time consumed by the least prepared students.", "which MOOC Period ?", "2013", 10.0, 14.0], ["Raman spectroscopy, complemented by infrared spectroscopy has been used to characterise the ferroaxinite minerals of theoretical formula Ca2Fe2+Al2BSi4O15(OH), a ferrous aluminium borosilicate. The Raman spectra are complex but are subdivided into sections based upon the vibrating units. The Raman spectra are interpreted in terms of the addition of borate and silicate spectra. Three characteristic bands of ferroaxinite are observed at 1082, 1056 and 1025 cm-1 and are attributed to BO4 stretching vibrations. Bands at 1003, 991, 980 and 963 cm-1 are assigned to SiO4 stretching vibrations. Bands are found in these positions for each of the ferroaxinites studied. No Raman bands were found above 1100 cm-1 showing that ferroaxinites contain only tetrahedral boron. The hydroxyl stretching region of ferroaxinites is characterised by a single Raman band between 3368 and 3376 cm-1, the position of which is sample dependent. Bands for ferroaxinite at 678, 643, 618, 609, 588, 572, 546 cm-1 may be attributed to the \u03bd4 bending modes and the three bands at 484, 444 and 428 cm-1 may be attributed to the \u03bd2 bending modes of the (SiO4)2-.", "which BO stretching vibrations ?", "1025", 454.0, 458.0], ["Raman spectroscopy, complemented by infrared spectroscopy has been used to characterise the ferroaxinite minerals of theoretical formula Ca2Fe2+Al2BSi4O15(OH), a ferrous aluminium borosilicate. 
The Raman spectra are complex but are subdivided into sections based upon the vibrating units. The Raman spectra are interpreted in terms of the addition of borate and silicate spectra. Three characteristic bands of ferroaxinite are observed at 1082, 1056 and 1025 cm-1 and are attributed to BO4 stretching vibrations. Bands at 1003, 991, 980 and 963 cm-1 are assigned to SiO4 stretching vibrations. Bands are found in these positions for each of the ferroaxinites studied. No Raman bands were found above 1100 cm-1 showing that ferroaxinites contain only tetrahedral boron. The hydroxyl stretching region of ferroaxinites is characterised by a single Raman band between 3368 and 3376 cm-1, the position of which is sample dependent. Bands for ferroaxinite at 678, 643, 618, 609, 588, 572, 546 cm-1 may be attributed to the \u03bd4 bending modes and the three bands at 484, 444 and 428 cm-1 may be attributed to the \u03bd2 bending modes of the (SiO4)2-.", "which BO stretching vibrations ?", "1056", 445.0, 449.0], ["Raman spectroscopy, complemented by infrared spectroscopy has been used to characterise the ferroaxinite minerals of theoretical formula Ca2Fe2+Al2BSi4O15(OH), a ferrous aluminium borosilicate. The Raman spectra are complex but are subdivided into sections based upon the vibrating units. The Raman spectra are interpreted in terms of the addition of borate and silicate spectra. Three characteristic bands of ferroaxinite are observed at 1082, 1056 and 1025 cm-1 and are attributed to BO4 stretching vibrations. Bands at 1003, 991, 980 and 963 cm-1 are assigned to SiO4 stretching vibrations. Bands are found in these positions for each of the ferroaxinites studied. No Raman bands were found above 1100 cm-1 showing that ferroaxinites contain only tetrahedral boron. The hydroxyl stretching region of ferroaxinites is characterised by a single Raman band between 3368 and 3376 cm-1, the position of which is sample dependent. Bands for ferroaxinite at 678, 643, 618, 609, 588, 572, 546 cm-1 may be attributed to the \u03bd4 bending modes and the three bands at 484, 444 and 428 cm-1 may be attributed to the \u03bd2 bending modes of the (SiO4)2-.", "which BO stretching vibrations ?", "1082", 439.0, 443.0], ["Abstract Background Various methods have been proposed to assign unknown specimens to known species using their DNA barcodes, while others have focused on using genetic divergence thresholds to estimate \u201cspecies\u201d diversity for a taxon, without a well-developed taxonomy and/or an extensive reference library of DNA barcodes. The major goals of the present work were to: a) conduct the largest species-level barcoding study of the Muscidae to date and characterize the range of genetic divergence values in the northern Nearctic fauna; b) evaluate the correspondence between morphospecies and barcode groupings defined using both clustering-based and threshold-based approaches; and c) use the reference library produced to address taxonomic issues. Results Our data set included 1114 individuals and their COI sequences (951 from Churchill, Manitoba), representing 160 morphologically-determined species from 25 genera, covering 89% of the known fauna of Churchill and 23% of the Nearctic fauna. Following an iterative process through which all specimens belonging to taxa with anomalous divergence values and/or monophyly issues were re-examined, identity was modified for 9 taxa, including the reinstatement of Phaonia luteva (Walker) stat. nov. as a species distinct from Phaonia errans (Meigen). 
In the post-reassessment data set, no distinct gap was found between maximum pairwise intraspecific distances (range 0.00-3.01%) and minimum interspecific distances (range: 0.77-11.33%). Nevertheless, using a clustering-based approach, all individuals within 98% of species grouped with their conspecifics with high (>95%) bootstrap support; in contrast, a maximum species discrimination rate of 90% was obtained at the optimal threshold of 1.2%. DNA barcoding enabled the determination of females from 5 ambiguous species pairs and confirmed that 16 morphospecies were genetically distinct from named taxa. There were morphological differences among all distinct genetic clusters; thus, no cases of cryptic species were detected. Conclusions Our findings reveal the great utility of building a well-populated, species-level reference barcode database against which to compare unknowns. When such a library is unavailable, it is still possible to obtain a fairly accurate (within ~10%) rapid assessment of species richness based upon a barcode divergence threshold alone, but this approach is most accurate when the threshold is tuned to a particular taxon.", "which No. of samples (sequences) ?", "1114", 779.0, 783.0], ["Background Although they are important disease vectors mosquito biodiversity in Pakistan is poorly known. Recent epidemics of dengue fever have revealed the need for more detailed understanding of the diversity and distributions of mosquito species in this region. DNA barcoding improves the accuracy of mosquito inventories because morphological differences between many species are subtle, leading to misidentifications. Methodology/Principal Findings Sequence variation in the barcode region of the mitochondrial COI gene was used to identify mosquito species, reveal genetic diversity, and map the distribution of the dengue-vector species in Pakistan. Analysis of 1684 mosquitoes from 491 sites in Punjab and Khyber Pakhtunkhwa during 2010\u20132013 revealed 32 species with the assemblage dominated by Culex quinquefasciatus (61% of the collection). The genus Aedes (Stegomyia) comprised 15% of the specimens, and was represented by six taxa with the two dengue vector species, Ae. albopictus and Ae. aegypti, dominant and broadly distributed. Anopheles made up another 6% of the catch with An. subpictus dominating. Barcode sequence divergence in conspecific specimens ranged from 0\u20132.4%, while congeneric species showed from 2.3\u201317.8% divergence. A global haplotype analysis of disease-vectors showed the presence of multiple haplotypes, although a single haplotype of each dengue-vector species was dominant in most countries. Geographic distribution of Ae. aegypti and Ae. albopictus showed the later species was dominant and found in both rural and urban environments. Conclusions As the first DNA-based analysis of mosquitoes in Pakistan, this study has begun the construction of a barcode reference library for the mosquitoes of this region. Levels of genetic diversity varied among species. Because of its capacity to differentiate species, even those with subtle morphological differences, DNA barcoding aids accurate tracking of vector populations.", "which No. of samples (sequences) ?", "1684", 669.0, 673.0], ["Although some plant traits have been linked to invasion success, the possible effects of regional factors, such as diversity, habitat suitability, and human activity are not well understood. Each of these mechanisms predicts a different pattern of distribution at the regional scale. 
Thus, where climate and soils are similar, predictions based on regional hypotheses for invasion success can be tested by comparisons of distributions in the source and receiving regions. Here, we analyse the native and alien geographic ranges of all 1567 plant species that have been introduced between eastern Asia and North America or have been introduced to both regions from elsewhere. The results reveal correlations between the spread of exotics and both the native species richness and transportation networks of recipient regions. This suggests that both species interactions and human-aided dispersal influence exotic distributions, although further work on the relative importance of these processes is needed.", "which Number of species ?", "1567", 535.0, 539.0], ["Horticulture is an important source of naturalized plants, but our knowledge about naturalization frequencies and potential patterns of naturalization in horticultural plants is limited. We analyzed a unique set of data derived from the detailed sales catalogs (1887-1930) of the most important early Florida, USA, plant nursery (Royal Palm Nursery) to detect naturalization patterns of these horticultural plants in the state. Of the 1903 nonnative species sold by the nursery, 15% naturalized. The probability of plants becoming naturalized increases significantly with the number of years the plants were marketed. Plants that became invasive and naturalized were sold for an average of 19.6 and 14.8 years, respectively, compared to 6.8 years for non-naturalized plants, and the naturalization of plants sold for 30 years or more is 70%. Unexpectedly, plants that were sold earlier were less likely to naturalize than those sold later. The nursery's inexperience, which caused them to grow and market many plants unsuited to Florida during their early period, may account for this pattern. Plants with pantropical distributions and those native to both Africa and Asia were more likely to naturalize (42%), than were plants native to other smaller regions, suggesting that plants with large native ranges were more likely to naturalize. Naturalization percentages also differed according to plant life form, with the most naturalization occurring in aquatic herbs (36.8%) and vines (30.8%). Plants belonging to the families Araceae, Apocynaceae, Convolvulaceae, Moraceae, Oleaceae, and Verbenaceae had higher than expected naturalization. Information theoretic model selection indicated that the number of years a plant was sold, alone or together with the first year a plant was sold, was the strongest predictor of naturalization. Because continued importation and marketing of nonnative horticultural plants will lead to additional plant naturalization and invasion, a comprehensive approach to address this problem, including research to identify and select noninvasive forms and types of horticultural plants is urgently needed.", "which Number of species ?", "1903", 435.0, 439.0], ["The paper provides the first estimate of the composition and structure of alien plants occurring in the wild in the European continent, based on the results of the DAISIE project (2004\u20132008), funded by the 6th Framework Programme of the European Union and aimed at \u201ccreating an inventory of invasive species that threaten European terrestrial, freshwater and marine environments\u201d. 
The plant section of the DAISIE database is based on national checklists from 48 European countries/regions and Israel; for many of them the data were compiled during the project and for some countries DAISIE collected the first comprehensive checklists of alien species, based on primary data (e.g., Cyprus, Greece, F. Y. R. O. Macedonia, Slovenia, Ukraine). In total, the database contains records of 5789 alien plant species in Europe (including those native to a part of Europe but alien to another part), of which 2843 are alien to Europe (of extra-European origin). The research focus was on naturalized species; there are in total 3749 naturalized aliens in Europe, of which 1780 are alien to Europe. This represents a marked increase compared to 1568 alien species reported by a previous analysis of data in Flora Europaea (1964\u20131980). Casual aliens were marginally considered and are represented by 1507 species with European origins and 872 species whose native range falls outside Europe. The highest diversity of alien species is concentrated in industrialized countries with a tradition of good botanical recording or intensive recent research. The highest number of all alien species, regardless of status, is reported from Belgium (1969), the United Kingdom (1779) and Czech Republic (1378). The United Kingdom (857), Germany (450), Belgium (447) and Italy (440) are countries with the most naturalized neophytes. The number of naturalized neophytes in European countries is determined mainly by the interaction of temperature and precipitation; it increases with increasing precipitation but only in climatically warm and moderately warm regions. Of the nowadays naturalized neophytes alien to Europe, 50% arrived after 1899, 25% after 1962 and 10% after 1989. At present, approximately 6.2 new species, that are capable of naturalization, are arriving each year. Most alien species have relatively restricted European distributions; half of all naturalized species occur in four or fewer countries/regions, whereas 70% of non-naturalized species occur in only one region. Alien species are drawn from 213 families, dominated by large global plant families which have a weedy tendency and have undergone major radiations in temperate regions (Asteraceae, Poaceae, Rosaceae, Fabaceae, Brassicaceae). There are 1567 genera, which have alien members in European countries, the commonest being globally-diverse genera comprising mainly urban and agricultural weeds (e.g., Amaranthus, Chenopodium and Solanum) or cultivated for ornamental purposes (Cotoneaster, the genus richest in alien species). Only a few large genera which have successfully invaded (e.g., Oenothera, Oxalis, Panicum, Helianthus) are predominantly of non-European origin. Conyza canadensis, Helianthus tuberosus and Robinia pseudoacacia are most widely distributed alien species. Of all naturalized aliens present in Europe, 64.1% occur in industrial habitats and 58.5% on arable land and in parks and gardens. Grasslands and woodlands are also highly invaded, with 37.4 and 31.5%, respectively, of all naturalized aliens in Europe present in these habitats. Mires, bogs and fens are least invaded; only approximately 10% of aliens in Europe occur there. Intentional introductions to Europe (62.8% of the total number of naturalized aliens) prevail over unintentional (37.2%). Ornamental and horticultural introductions escaped from cultivation account for the highest number of species, 52.2% of the total. 
Among unintentional introductions, contaminants of seed, mineral materials and other commodities are responsible for 1091 alien species introductions to Europe (76.6% of all species introduced unintentionally) and 363 species are assumed to have arrived as stowaways (directly associated with human transport but arriving independently of commodity). Most aliens in Europe have a native range in the same continent (28.6% of all donor region records are from another part of Europe where the plant is native); in terms of species numbers the contribution of Europe as a region of origin is 53.2%. Considering aliens to Europe separately, 45.8% of species have their native distribution in North and South America, 45.9% in Asia, 20.7% in Africa and 5.3% in Australasia. Based on species composition, European alien flora can be classified into five major groups: (1) north-western, comprising Scandinavia and the UK; (2) west-central, extending from Belgium and the Netherlands to Germany and Switzerland; (3) Baltic, including only the former Soviet Baltic states; (4) east-central, comprizing the remainder of central and eastern Europe; (5) southern, covering the entire Mediterranean region. The clustering patterns cut across some European bioclimatic zones; cultural factors such as regional trade links and traditional local preferences for crop, forestry and ornamental species are also important by influencing the introduced species pool. Finally, the paper evaluates a state of the art in the field of plant invasions in Europe, points to research gaps and outlines avenues of further research towards documenting alien plant invasions in Europe. The data are of varying quality and need to be further assessed with respect to the invasion status and residence time of the species included. This concerns especially the naturalized/casual status; so far, this information is available comprehensively for only 19 countries/regions of the 49 considered. Collating an integrated database on the alien flora of Europe can form a principal contribution to developing a European-wide management strategy of alien species.", "which Number of species ?", "1969", 1628.0, 1632.0], ["Abstract Huisinga, K. D., D. C. Laughlin, P. Z. Ful\u00e9, J. D. Springer, and C. M. McGlone (Ecological Restoration Institute and School of Forestry, Northern Arizona University, Box 15017, Flagstaff, AZ 86011). Effects of an intense prescribed fire on understory vegetation in a mixed conifer forest. J. Torrey Bot. Soc. 132: 590\u2013601. 2005.\u2014Intense prescribed fire has been suggested as a possible method for forest restoration in mixed conifer forests. In 1993, a prescribed fire in a dense, never-harvested forest on the North Rim of Grand Canyon National Park escaped prescription and burned with greater intensity and severity than expected. We sampled this burned area and an adjacent unburned area to assess fire effects on understory species composition, diversity, and plant cover. The unburned area was sampled in 1998 and the burned area in 1999; 25% of the plots were resampled in 2001 to ensure that differences between sites were consistent and persistent, and not due to inter-annual climatic differences. Species composition differed significantly between unburned and burned sites; eight species were identified as indicators of the unburned site and thirteen as indicators of the burned site. 
Plant cover was nearly twice as great in the burned site than in the unburned site in the first years of measurement and was 4.6 times greater in the burned site in 2001. Average and total species richness was greater in the burned site, explained mostly by higher numbers of native annual and biennial forbs. Overstory canopy cover and duff depth were significantly lower in the burned site, and there were significant inverse relationships between these variables and plant species richness and plant cover. Greater than 95% of the species in the post-fire community were native and exotic plant cover never exceeded 1%, in contrast with other northern Arizona forests that were dominated by exotic species following high-severity fires. This difference is attributed to the minimal anthropogenic disturbance history (no logging, minimal grazing) of forests in the national park, and suggests that park managers may have more options than non-park managers to use intense fire as a tool for forest conservation and restoration.", "which Study date ?", "2005", 332.0, 336.0], ["Invasive plant species are a considerable threat to ecosystems globally and on islands in particular where species diversity can be relatively low. In this study, we examined the phylogenetic basis of invasion success on Robben Island in South Africa. The flora of the island was sampled extensively and the phylogeny of the local community was reconstructed using the two core DNA barcode regions, rbcLa and matK. By analysing the phylogenetic patterns of native and invasive floras at two different scales, we found that invasive alien species are more distantly related to native species, a confirmation of Darwin's naturalization hypothesis. However, this pattern also holds even for randomly generated communities, therefore discounting the explanatory power of Darwin's naturalization hypothesis as the unique driver of invasion success on the island. These findings suggest that the drivers of invasion success on the island may be linked to species traits rather than their evolutionary history alone, or to the combination thereof. This result also has implications for the invasion management programmes currently being implemented to rehabilitate the native diversity on Robben Island. \u00a9 2013 The Linnean Society of London, Botanical Journal of the Linnean Society, 2013, 172, 142\u2013152.", "which Study date ?", "2013", 1199.0, 1203.0], ["Hussner A (2012). Alien aquatic plant species in European countries. Weed Research 52, 297\u2013306. Summary Alien aquatic plant species cause serious ecological and economic impacts to European freshwater ecosystems. This study presents a comprehensive overview of all alien aquatic plants in Europe, their places of origin and their distribution within the 46 European countries. In total, 96 aquatic species from 30 families have been reported as aliens from at least one European country. Most alien aquatic plants are native to Northern America, followed by Asia and Southern America. Elodea canadensis is the most widespread alien aquatic plant in Europe, reported from 41 European countries. Azolla filiculoides ranks second (25), followed by Vallisneria spiralis (22) and Elodea nuttallii (20). The highest number of alien aquatic plant species has been found in Italy and France (34 species), followed by Germany (27), Belgium and Hungary (both 26) and the Netherlands (24). 
Even though the number of alien aquatic plants seems relatively small, the European and Mediterranean Plant Protection Organization (EPPO, http://www.eppo.org) has listed 18 of these species as invasive or potentially invasive within the EPPO region. As ornamental trade has been regarded as the major pathway for the introduction of alien aquatic plants, trading bans seem to be the most effective option to reduce the risk of further unintended entry of alien aquatic plants into Europe.", "which Study date ?", "2012", 11.0, 15.0], ["The Enemy Release Hypothesis (ERH) predicts that when plant species are introduced outside their native range there is a release from natural enemies resulting in the plants becoming problematic invasive alien species (Lake & Leishman 2004; Puliafico et al. 2008). The release from natural enemies may benefit alien plants more than simply reducing herbivory because, according to the Evolution of Increased Competitive Ability (EICA) hypothesis, without pressure from herbivores more resources that were previously allocated to defence can be allocated to reproduction (Blossey & Notzold 1995). Alien invasive plants are therefore expected to have simpler herbivore communities with fewer specialist herbivores (Frenzel & Brandl 2003; Heleno et al. 2008; Heger & Jeschke 2014).", "which Study date ?", "2014", 772.0, 776.0], ["Hanley ME (2012). Seedling defoliation, plant growth and flowering potential in native- and invasive-range Plantago lanceolata populations. Weed Research 52, 252\u2013259. Summary The plastic response of weeds to new environmental conditions, in particular the likely relaxation of herbivore pressure, is considered vital for successful colonisation and spread. However, while variation in plant anti-herbivore resistance between native- and introduced-range populations is well studied, few authors have considered herbivore tolerance, especially at the seedling stage. This study examines variation in seedling tolerance in native (European) and introduced (North American) Plantago lanceolata populations following cotyledon removal at 14 days old. Subsequent effects on plant growth were quantified at 35 days, along with effects on flowering potential at maturity. Cotyledon removal reduced early growth for all populations, with no variation between introduced- or native-range plants. Although more variable, the effects of cotyledon loss on flowering potential were also unrelated to range. The likelihood that generalist seedling herbivores are common throughout North America may explain why no difference in seedling tolerance was apparent. However, increased flowering potential in plants from North American P. lanceolata populations was observed. As increased flowering potential was not lost, even after severe cotyledon damage, the manifestation of phenotypic plasticity in weeds at maturity may nonetheless still be shaped by plasticity in the ability to tolerate herbivory during seedling establishment.", "which Study date ?", "2012", 11.0, 15.0], ["Introduced species must adapt their ecology, behaviour, and morphological traits to new conditions. The successful introduction and invasive potential of a species are related to its levels of phenotypic plasticity and genetic polymorphism. We analysed changes in the body mass and length of American mink (Neovison vison) since its introduction into the Warta Mouth National Park, western Poland, in relation to diet composition and colonization progress from 1996 to 2004. 
Mink body mass decreased significantly during the period of population establishment within the study area, with an average decrease of 13% from 1.36 to 1.18 kg in males and of 16% from 0.83 to 0.70 kg in females. Diet composition varied seasonally and between consecutive years. The main prey items were mammals and fish in the cold season and birds and fish in the warm season. During the study period the proportion of mammals preyed upon increased in the cold season and decreased in the warm season. The proportion of birds preyed upon decreased over the study period, whereas the proportion of fish increased. Following introduction, the strictly aquatic portion of mink diet (fish and frogs) increased over time, whereas the proportion of large prey (large birds, muskrats, and water voles) decreased. The average yearly proportion of large prey and average-sized prey in the mink diet was significantly correlated with the mean body masses of males and females. Biogeographical variation in the body mass and length of mink was best explained by the percentage of large prey in the mink diet in both sexes, and by latitude for females. Together these results demonstrate that American mink rapidly changed their body mass in relation to local conditions. This phenotypic variability may be underpinned by phenotypic plasticity and/or by adaptation of quantitative genetic variation. The potential to rapidly change phenotypic variation in this manner is an important factor determining the negative ecological impacts of invasive species. \u00a9 2012 The Linnean Society of London, Biological Journal of the Linnean Society, 2012, 105, 681\u2013693.", "which Study date ?", "2012", 2024.0, 2028.0], ["1. When multiple invasive species coexist in the same ecosystem and their diets change as they grow, determining whether to eradicate any particular invader is difficult because of complex predator\u2013prey interactions. 2. A stable isotope food-web analysis was conducted to explore an appropriate management strategy for three potential alien predators (snakehead Channa argus, bullfrog Rana catesbeiana, red-eared slider turtle Trachemys scripta elegans) of invasive crayfish Procambarus clarkii that had severely reduced the densities of endangered odonates in a pond in Japan. 3. The stable isotope analysis demonstrated that medium- and small-sized snakeheads primarily depended on crayfish and stone moroko Pseudorasbora parva. Both adult and juvenile bullfrogs depended on terrestrial arthropods, and juveniles exhibited a moderate dependence on crayfish. The turtle showed little dependence on crayfish. 4. These results suggest that eradication of snakeheads risks the possibility of mesopredator release, while such risk appears to be low in other alien predators. Copyright \u00a9 2011 John Wiley & Sons, Ltd.", "which Study date ?", "2011", 1084.0, 1088.0], ["Brown, R. L. and Fridley, J. D. 2003. Control of plant species diversity and community invasibility by species immigration: seed richness versus seed density. \u2013 Oikos 102: 15\u201324. Immigration rates of species into communities are widely understood to influence community diversity, which in turn is widely expected to influence the susceptibility of ecosystems to species invasion. For a given community, however, immigration processes may impact diversity by means of two separable components: the number of species represented in seed inputs and the density of seed per species. 
The independent effects of these components on plant species diversity and consequent rates of invasion are poorly understood. We constructed experimental plant communities through repeated seed additions to independently measure the effects of seed richness and seed density on the trajectory of species diversity during the development of annual plant communities. Because we sowed species not found in the immediate study area, we were able to assess the invasibility of the resulting communities by recording the rate of establishment of species from adjacent vegetation. Early in community development when species only weakly interacted, seed richness had a strong effect on community diversity whereas seed density had little effect. After the plants became established, the effect of seed richness on measured diversity strongly depended on seed density, and disappeared at the highest level of seed density. The ability of surrounding vegetation to invade the experimental communities was decreased by seed density but not by seed richness, primarily because the individual effects of a few sown species could explain the observed invasion rates. These results suggest that seed density is just as important as seed richness in the control of species diversity, and perhaps a more important determinant of community invasibility than seed richness in dynamic plant assemblages.", "which Study date ?", "2003", 32.0, 36.0], ["1. Comparison of the pre-1960 faunal survey data for the Indian Seas with that for the post-1960 period showed that 205 non-indigenous taxa were introduced in the post-1960 period; shipping activity is considered a plausible major vector for many of these introductions. 2. Of the non-indigenous taxa, 21% were fish, followed by Polychaeta (<11%), Algae (10%), Crustacea (10%), Mollusca (10%), Ciliata (8%), Fungi (7%), Ascidians (6%) and minor invertebrates (17%). 3. An analysis of the data suggests a correspondence between the shipping routes between India and various regions. There were 75 species common to the Indian Seas and the coastal seas of China and Japan, 63 to the Indo-Malaysian region, 42 to the Mediterranean, 40 and 34 to western and eastern Atlantic respectively, and 41 to Australia and New Zealand. A further 33 species were common to the Caribbean region, 32 to the eastern Pacific, 14 and 24 to the west and east coasts of Africa respectively, 18 to the Baltic, 15 to the middle Arabian Gulf and Red Sea, and 10 to the Brazilian coast. 4. The Indo-Malaysian region can be identified as a centre of xenodiversity for biota from Southeast Asia, China, Japan, Philippines and Australian regions. 5. Of the introduced species, the bivalve Mytilopsis sallei and the serpulid Ficopomatus enigmaticus have become pests in the Indian Seas, consistent with the Williamson and Fitter \u2018tens rule\u2019. Included amongst the biota with economic impact are nine fouling and six wood-destroying organisms. 6. Novel occurrences of the human pathogenic vibrios, e.g. Vibrio parahaemolyticus, non-01 Vibrio cholerae, Vibrio vulnificus and Vibrio mimicus and the harmful algal bloom species Alexandrium spp. and Gymnodinium nagasakiense in the Indian coastal waters could be attributed to ballast water introductions. 7. Introductions of alien biota could pose a threat to the highly productive tropical coastal waters, estuaries and mariculture sites and could cause economic impacts and ecological surprises. 8. 
In addition to strict enforcement of a national quarantine policy on ballast water discharges, long-term multidisciplinary research on ballast water invaders is crucial to enhance our understanding of the biodiversity and functioning of the ecosystem. Copyright \u00a9 2005 John Wiley & Sons, Ltd.", "which Study date ?", "2005", 2280.0, 2284.0], ["The paper provides the first estimate of the composition and structure of alien plants occurring in the wild in the European continent, based on the results of the DAISIE project (2004\u20132008), funded by the 6th Framework Programme of the European Union and aimed at \u201ccreating an inventory of invasive species that threaten European terrestrial, freshwater and marine environments\u201d. The plant section of the DAISIE database is based on national checklists from 48 European countries/regions and Israel; for many of them the data were compiled during the project and for some countries DAISIE collected the first comprehensive checklists of alien species, based on primary data (e.g., Cyprus, Greece, F. Y. R. O. Macedonia, Slovenia, Ukraine). In total, the database contains records of 5789 alien plant species in Europe (including those native to a part of Europe but alien to another part), of which 2843 are alien to Europe (of extra-European origin). The research focus was on naturalized species; there are in total 3749 naturalized aliens in Europe, of which 1780 are alien to Europe. This represents a marked increase compared to 1568 alien species reported by a previous analysis of data in Flora Europaea (1964\u20131980). Casual aliens were marginally considered and are represented by 1507 species with European origins and 872 species whose native range falls outside Europe. The highest diversity of alien species is concentrated in industrialized countries with a tradition of good botanical recording or intensive recent research. The highest number of all alien species, regardless of status, is reported from Belgium (1969), the United Kingdom (1779) and Czech Republic (1378). The United Kingdom (857), Germany (450), Belgium (447) and Italy (440) are countries with the most naturalized neophytes. The number of naturalized neophytes in European countries is determined mainly by the interaction of temperature and precipitation; it increases with increasing precipitation but only in climatically warm and moderately warm regions. Of the nowadays naturalized neophytes alien to Europe, 50% arrived after 1899, 25% after 1962 and 10% after 1989. At present, approximately 6.2 new species, that are capable of naturalization, are arriving each year. Most alien species have relatively restricted European distributions; half of all naturalized species occur in four or fewer countries/regions, whereas 70% of non-naturalized species occur in only one region. Alien species are drawn from 213 families, dominated by large global plant families which have a weedy tendency and have undergone major radiations in temperate regions (Asteraceae, Poaceae, Rosaceae, Fabaceae, Brassicaceae). There are 1567 genera, which have alien members in European countries, the commonest being globally-diverse genera comprising mainly urban and agricultural weeds (e.g., Amaranthus, Chenopodium and Solanum) or cultivated for ornamental purposes (Cotoneaster, the genus richest in alien species). Only a few large genera which have successfully invaded (e.g., Oenothera, Oxalis, Panicum, Helianthus) are predominantly of non-European origin. 
Conyza canadensis, Helianthus tuberosus and Robinia pseudoacacia are most widely distributed alien species. Of all naturalized aliens present in Europe, 64.1% occur in industrial habitats and 58.5% on arable land and in parks and gardens. Grasslands and woodlands are also highly invaded, with 37.4 and 31.5%, respectively, of all naturalized aliens in Europe present in these habitats. Mires, bogs and fens are least invaded; only approximately 10% of aliens in Europe occur there. Intentional introductions to Europe (62.8% of the total number of naturalized aliens) prevail over unintentional (37.2%). Ornamental and horticultural introductions escaped from cultivation account for the highest number of species, 52.2% of the total. Among unintentional introductions, contaminants of seed, mineral materials and other commodities are responsible for 1091 alien species introductions to Europe (76.6% of all species introduced unintentionally) and 363 species are assumed to have arrived as stowaways (directly associated with human transport but arriving independently of commodity). Most aliens in Europe have a native range in the same continent (28.6% of all donor region records are from another part of Europe where the plant is native); in terms of species numbers the contribution of Europe as a region of origin is 53.2%. Considering aliens to Europe separately, 45.8% of species have their native distribution in North and South America, 45.9% in Asia, 20.7% in Africa and 5.3% in Australasia. Based on species composition, European alien flora can be classified into five major groups: (1) north-western, comprising Scandinavia and the UK; (2) west-central, extending from Belgium and the Netherlands to Germany and Switzerland; (3) Baltic, including only the former Soviet Baltic states; (4) east-central, comprizing the remainder of central and eastern Europe; (5) southern, covering the entire Mediterranean region. The clustering patterns cut across some European bioclimatic zones; cultural factors such as regional trade links and traditional local preferences for crop, forestry and ornamental species are also important by influencing the introduced species pool. Finally, the paper evaluates a state of the art in the field of plant invasions in Europe, points to research gaps and outlines avenues of further research towards documenting alien plant invasions in Europe. The data are of varying quality and need to be further assessed with respect to the invasion status and residence time of the species included. This concerns especially the naturalized/casual status; so far, this information is available comprehensively for only 19 countries/regions of the 49 considered. Collating an integrated database on the alien flora of Europe can form a principal contribution to developing a European-wide management strategy of alien species.", "which Study date ?", "2008", 185.0, 189.0], ["", "which Film thickness (nm) ?", "2000", NaN, NaN], ["In this paper, a liquid-based micro thermal convective accelerometer (MTCA) is optimized by the Rayleigh number (Ra) based compact model and fabricated using the $0.35\\mu $ m CMOS MEMS technology. To achieve water-proof performance, the conformal Parylene C coating was adopted as the isolation layer with the accelerated life-testing results of a 9-year-lifetime for liquid-based MTCA. Then, the device performance was characterized considering sensitivity, response time, and noise. 
Both the theoretical and experimental results demonstrated that fluid with a larger Ra number can provide better performance for the MTCA. More significantly, Ra based model showed its advantage to make a more accurate prediction than the simple linear model to select suitable fluid to enhance the sensitivity and balance the linear range of the device. Accordingly, an alcohol-based MTCA was achieved with a two-order-of magnitude increase in sensitivity (43.8 mV/g) and one-order-of-magnitude decrease in the limit of detection (LOD) ( $61.9~\\mu \\text{g}$ ) compared with the air-based MTCA. [2021-0092]", "which Year ?", "2021", 1276.0, 1280.0], ["The significant development in global information technologies and the ever-intensifying competitive market climate have both pushed many companies to transform their businesses. Enterprise resource planning (ERP) is seen as one of the most recently emerging process-orientation tools that can enable such a transformation. Its development has presented both researchers and practitioners with new challenges and opportunities. This paper provides a comprehensive review of the state of research in the ERP field relating to process management, organizational change and knowledge management. It surveys current practices, research and development, and suggests several directions for future investigation. Copyright \u00a9 2001 John Wiley & Sons, Ltd.", "which Year ?", "2001", 719.0, 723.0], ["We present a ship scheduling problem concerned with the pickup and delivery of bulk cargoes within given time windows. As the ports are closed for service at night and during weekends, the wide time windows can be regarded as multiple time windows. Another issue is that the loading/discharging times of cargoes may take several days. This means that a ship will stay idle much of the time in port, and the total time at port will depend on the ship's arrival time. Ship scheduling is associated with uncertainty due to bad weather at sea and unpredictable service times in ports. Our objective is to make robust schedules that are less likely to result in ships staying idle in ports during the weekend, and impose penalty costs for arrivals at risky times (i.e., close to weekends). A set partitioning approach is proposed to solve the problem. The columns correspond to feasible ship schedules that are found a priori. They are generated taking the uncertainty and multiple time windows into account. The computational results show that we can increase the robustness of the schedules at the sacrifice of increased transportation costs. \u00a9 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 611\u2013625, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/nav.10033", "which Year ?", "2002", 1142.0, 1146.0], ["In this article, we examine the design of an evacuation tree, in which evacuation is subject to capacity restrictions on arcs. The cost of evacuating people in the network is determined by the sum of penalties incurred on arcs on which they travel, where penalties are determined according to a nondecreasing function of time. Given a discrete set of disaster scenarios affecting network population, arc capacities, transit times, and penalty functions, we seek to establish an optimal a priori evacuation tree that minimizes the expected evacuation penalty. The solution strategy is based on Benders decomposition, in which the master problem is a mixed\u2010integer program and each subproblem is a time\u2010expanded network flow problem. 
We provide efficient methods for obtaining primal and dual subproblem solutions, and analyze techniques for improving the strength of the master problem formulation, thus reducing the number of master problem solutions required for the algorithm's convergence. We provide computational results to compare the efficiency of our methods on a set of randomly generated test instances. \u00a9 2008 Wiley Periodicals, Inc. NETWORKS, 2009", "which Year ?", "2009", 1155.0, 1159.0], ["High\u2010throughput single nucleotide polymorphism (SNP)\u2010array technologies allow to investigate copy number variants (CNVs) in genome\u2010wide scans and specific calling algorithms have been developed to determine CNV location and copy number. We report the results of a reliability analysis comparing data from 96 pairs of samples processed with CNVpartition, PennCNV, and QuantiSNP for Infinium Illumina Human 1Million probe chip data. We also performed a validity assessment with multiplex ligation\u2010dependent probe amplification (MLPA) as a reference standard. The number of CNVs per individual varied according to the calling algorithm. Higher numbers of CNVs were detected in saliva than in blood DNA samples regardless of the algorithm used. All algorithms presented low agreement with mean Kappa Index (KI) <66. PennCNV was the most reliable algorithm (KIw=98.96) when assessing the number of copies. The agreement observed in detecting CNV was higher in blood than in saliva samples. When comparing to MLPA, all algorithms identified poorly known copy aberrations (sensitivity = 0.19\u20130.28). In contrast, specificity was very high (0.97\u20130.99). Once a CNV was detected, the number of copies was truly assessed (sensitivity >0.62). Our results indicate that the current calling algorithms should be improved for high performance CNV analysis in genome\u2010wide scans. Further refinement is required to assess CNVs as risk factors in complex diseases.Hum Mutat 32:1\u201310, 2011. \u00a9 2011 Wiley\u2010Liss, Inc.", "which Year ?", "2011", 1463.0, 1467.0], ["In this paper, we examine the emerging use of ICT in social phenomena such as natural disasters. Researchers have acknowledged that a community possesses the capacity to manage the challenges in crisis response on its own. However, extant IS studies focus predominantly on IS use from the crisis response agency\u2019s perspective, which undermines communities\u2019 role. By adopting an empowerment perspective, we focus on understanding how social media empowers communities during crisis response. As such, we present a qualitative case study of the 2011 Thailand flooding. Using an interpretive approach, we show how social media can empower the community from three dimensions of empowerment process (structural, psychological, and resource empowerment) to achieve collective participation, shared identification, and collaborative control in the community. We make two contributions: 1) we explore an emerging social consequence of ICT by illustrating the roles of social media in empowering communities when responding to crises, and 2) we address the literature gap in empowerment by elucidating the actualization process of empowerment that social media as a mediating structure enables.", "which Year ?", "2011", 543.0, 547.0], ["This study explores the role of social media in social change by analyzing Twitter data collected during the 2011 Egypt Revolution. 
Particular attention is paid to the notion of collective sense making, which is considered a critical aspect for the emergence of collective action for social change. We suggest that collective sense making through social media can be conceptualized as human-machine collaborative information processing that involves an interplay of signs, Twitter grammar, humans, and social technologies. We focus on the occurrences of hashtags among a high volume of tweets to study the collective sense-making phenomena of milling and keynoting. A quantitative Markov switching analysis is performed to understand how the hashtag frequencies vary over time, suggesting structural changes that depict the two phenomena. We further explore different hashtags through a qualitative content analysis and find that, although many hashtags were used as symbolic anchors to funnel online users' attention to the Egypt Revolution, other hashtags were used as part of tweet sentences to share changing situational information. We suggest that hashtags functioned as a means to collect information and maintain situational awareness during the unstable political situation of the Egypt Revolution.", "which Year ?", "2011", 109.0, 113.0], ["BACKGROUND On-farm crop species richness (CSR) may be important for maintaining the diversity and quality of diets of smallholder farming households. OBJECTIVES The objectives of this study were to 1) determine the association of CSR with the diversity and quality of household diets in Malawi and 2) assess hypothesized mechanisms for this association via both subsistence- and market-oriented pathways. METHODS Longitudinal data were assessed from nationally representative household surveys in Malawi between 2010 and 2013 (n = 3000 households). A household diet diversity score (DDS) and daily intake per adult equivalent of energy, protein, iron, vitamin A, and zinc were calculated from 7-d household consumption data. CSR was calculated from plot-level data on all crops cultivated during the 2009-2010 and 2012-2013 agricultural seasons in Malawi. Adjusted generalized estimating equations were used to assess the longitudinal relation of CSR with household diet quality and diversity. 
RESULTS CSR was positively associated with DDS (\u03b2: 0.08; 95% CI: 0.06, 0.12; P < 0.001), as well as daily intake per adult equivalent of energy (kilocalories) (\u03b2: 41.6; 95% CI: 20.9, 62.2; P < 0.001), protein (grams) (\u03b2: 1.78; 95% CI: 0.80, 2.75; P < 0.001), iron (milligrams) (\u03b2: 0.30; 95% CI: 0.16, 0.44; P < 0.001), vitamin A (micrograms of retinol activity equivalent) (\u03b2: 25.8; 95% CI: 12.7, 38.9; P < 0.001), and zinc (milligrams) (\u03b2: 0.26; 95% CI: 0.13, 0.38; P < 0.001). Neither proportion of harvest sold nor distance to nearest population center modified the relation between CSR and household diet diversity or quality (P \u2265 0.05). Households with greater CSR were more commercially oriented (least-squares mean proportion of harvest sold \u00b1 SE, highest tertile of CSR: 17.1 \u00b1 0.52; lowest tertile of CSR: 8.92 \u00b1 1.09) (P < 0.05). CONCLUSION Promoting on-farm CSR may be a beneficial strategy for simultaneously supporting enhanced diet quality and diversity while also creating opportunities for smallholder farmers to engage with markets in subsistence agricultural contexts.", "which has beginning ?", "2010", 512.0, 516.0], ["ABSTRACT In this study, we investigated the relationship between agricultural biodiversity and dietary diversity of children and whether factors such as economic access may affect this relationship. This paper is based on data collected in a baseline cross-sectional survey in November 2013. The study population comprising 1200 mother-child pairs was selected using a two-stage cluster sampling. Dietary diversity was defined as the number of food groups consumed 24 h prior to the assessment. The number of crop and livestock species produced on a farm was used as the measure of production diversity. Hierarchical regression analysis was used to identify predictors and test for interactions. Whereas the average production diversity score was 4.7 \u00b1 1.6, only 42.4% of households consumed at least four food groups out of seven over the preceding 24-h recall period. Agricultural biodiversity (i.e. variety of animals kept and food groups produced) associated positively with dietary diversity of children aged 6\u201336 months but the relationship was moderated by household socioeconomic status. The interaction term was also statistically significant [\u03b2 = \u22120.08 (95% CI: \u22120.05, \u22120.01, p = 0.001)]. Spearman correlation (rho) analysis showed that agricultural biodiversity was positively associated with individual dietary diversity of the child more among children of low socioeconomic status in rural households compared to children of high socioeconomic status (r = 0.93, p < 0.001 versus r = 0.08, p = 0.007). Socioeconomic status of the household also partially mediated the link between agricultural biodiversity and dietary diversity of a child\u2019s diet. The effect of increased agricultural biodiversity on dietary diversity was significantly higher in households of lower socioeconomic status. Therefore, improvement of agricultural biodiversity could be one of the best approaches for ensuring diverse diets especially for households of lower socioeconomic status in rural areas of Northern Ghana.", "which has beginning ?", "2013", 286.0, 290.0], ["Abstract Objective The association between farm production diversity and dietary diversity in rural smallholder households was recently analysed. Most existing studies build on household-level dietary diversity indicators calculated from 7d food consumption recalls. 
Herein, this association is revisited with individual-level 24 h recall data. The robustness of the results is tested by comparing household- and individual-level estimates. The role of other factors that may influence dietary diversity, such as market access and agricultural technology, is also analysed. Design A survey of smallholder farm households was carried out in Malawi in 2014. Dietary diversity scores are calculated from 24 h recall data. Production diversity scores are calculated from farm production data covering a period of 12 months. Individual- and household-level regression models are developed and estimated. Setting Data were collected in sixteen districts of central and southern Malawi. Subjects Smallholder farm households (n 408), young children (n 519) and mothers (n 408). Results Farm production diversity is positively associated with dietary diversity. However, the estimated effects are small. Access to markets for buying food and selling farm produce and use of chemical fertilizers are shown to be more important for dietary diversity than diverse farm production. Results with household- and individual-level dietary data are very similar. Conclusions Further increasing production diversity may not be the most effective strategy to improve diets in smallholder farm households. Improving access to markets, productivity-enhancing inputs and technologies seems to be more promising.", "which has beginning ?", "2014", 650.0, 654.0], ["Households in low-income settings are vulnerable to seasonal changes in dietary diversity because of fluctuations in food availability and access. We assessed seasonal differences in household dietary diversity in Burkina Faso, and determined the extent to which household socioeconomic status and crop production diversity modify changes in dietary diversity across seasons, using data from the nationally representative 2014 Burkina Faso Continuous Multisectoral Survey (EMC). A household dietary diversity score based on nine food groups was created from household food consumption data collected during four rounds of the 2014 EMC. Plot-level crop production data, and data on household assets and education were used to create variables on crop diversity and household socioeconomic status, respectively. Analyses included data for 10,790 households for which food consumption data were available for at least one round. Accounting for repeated measurements and controlling for the complex survey design and confounding covariates using a weighted multi-level model, household dietary diversity was significantly higher during both lean seasons periods, and higher still during the harvest season as compared to the post-harvest season (mean: post-harvest: 4.76 (SE 0.04); beginning of lean: 5.13 (SE 0.05); end of lean: 5.21 (SE 0.05); harvest: 5.72 (SE 0.04)), but was not different between the beginning and the end of lean season. Seasonal differences in household dietary diversity were greater among households with higher food expenditures, greater crop production, and greater monetary value of crops sale (P<0.05). Seasonal changes in household dietary diversity in Burkina Faso may reflect nutritional differences among agricultural households, and may be modified both by households\u2019 socioeconomic status and agricultural characteristics.", "which has beginning ?", "2014", 422.0, 426.0], ["Background: Recent literature, largely from Africa, shows mixed effects of own-production on diet diversity. 
However, the role of own-production, relative to markets, in influencing food consumption becomes more pronounced as market integration increases. Objective: This paper investigates the relative importance of two factors - production diversity and household market integration - for the intake of a nutritious diet by women and households in rural India. Methods: Data analysis is based on primary data from an extensive agriculture-nutrition survey of 3600 Indian households that was collected in 2017. Dietary diversity scores are constructed for women and households is based on 24-hour and 7-day recall periods. Household market integration is measured as monthly household expenditure on key non-staple food groups. We measure production diversity in two ways - field-level and on-farm production diversity - in order to account for the cereal centric rice-wheat cropping system found in our study locations. The analysis is based on Ordinary Least Squares regressions where we control for a variety of village, household, and individual level covariates that affect food consumption, and village fixed effects. Robustness checks are done by way of using a Poisson regression specifications and 7-day recall period. Results: Conventional measures of field-level production diversity, like the number of crops or food groups grown, have no significant association with diet diversity. In contrast, it is on-farm production diversity (the field-level cultivation of pulses and on-farm livestock management, and kitchen gardens in the longer run) that is significantly associated with improved dietary diversity scores, thus suggesting the importance of non-staples in improving both individual and household dietary diversity. Furthermore, market purchases of non-staples like pulses and dairy products are associated with a significantly higher dietary diversity. Other significant determinants of dietary diversity include women\u2019s literacy and awareness of nutrition. These results mostly remain robust to changes in the recall period of the diet diversity measure and the nature of the empirical specification. Conclusions: This study contributes to the scarce empirical evidence related to diets in India. Additionally, our results indicate some key intervention areas - promoting livestock rearing, strengthening households\u2019 market integration (for purchase of non-staples) and increasing women\u2019s awareness about nutrition. These are more impactful than raising production diversity. ", "which has beginning ?", "2017", 776.0, 780.0], ["BACKGROUND On-farm crop species richness (CSR) may be important for maintaining the diversity and quality of diets of smallholder farming households. OBJECTIVES The objectives of this study were to 1) determine the association of CSR with the diversity and quality of household diets in Malawi and 2) assess hypothesized mechanisms for this association via both subsistence- and market-oriented pathways. METHODS Longitudinal data were assessed from nationally representative household surveys in Malawi between 2010 and 2013 (n = 3000 households). A household diet diversity score (DDS) and daily intake per adult equivalent of energy, protein, iron, vitamin A, and zinc were calculated from 7-d household consumption data. CSR was calculated from plot-level data on all crops cultivated during the 2009-2010 and 2012-2013 agricultural seasons in Malawi. Adjusted generalized estimating equations were used to assess the longitudinal relation of CSR with household diet quality and diversity. 
RESULTS CSR was positively associated with DDS (\u03b2: 0.08; 95% CI: 0.06, 0.12; P < 0.001), as well as daily intake per adult equivalent of energy (kilocalories) (\u03b2: 41.6; 95% CI: 20.9, 62.2; P < 0.001), protein (grams) (\u03b2: 1.78; 95% CI: 0.80, 2.75; P < 0.001), iron (milligrams) (\u03b2: 0.30; 95% CI: 0.16, 0.44; P < 0.001), vitamin A (micrograms of retinol activity equivalent) (\u03b2: 25.8; 95% CI: 12.7, 38.9; P < 0.001), and zinc (milligrams) (\u03b2: 0.26; 95% CI: 0.13, 0.38; P < 0.001). Neither proportion of harvest sold nor distance to nearest population center modified the relation between CSR and household diet diversity or quality (P \u2265 0.05). Households with greater CSR were more commercially oriented (least-squares mean proportion of harvest sold \u00b1 SE, highest tertile of CSR: 17.1 \u00b1 0.52; lowest tertile of CSR: 8.92 \u00b1 1.09) (P < 0.05). CONCLUSION Promoting on-farm CSR may be a beneficial strategy for simultaneously supporting enhanced diet quality and diversity while also creating opportunities for smallholder farmers to engage with markets in subsistence agricultural contexts.", "which Has end ?", "2013", 521.0, 525.0], ["ABSTRACT In this study, we investigated the relationship between agricultural biodiversity and dietary diversity of children and whether factors such as economic access may affect this relationship. This paper is based on data collected in a baseline cross-sectional survey in November 2013. The study population comprising 1200 mother-child pairs was selected using a two-stage cluster sampling. Dietary diversity was defined as the number of food groups consumed 24 h prior to the assessment. The number of crop and livestock species produced on a farm was used as the measure of production diversity. Hierarchical regression analysis was used to identify predictors and test for interactions. Whereas the average production diversity score was 4.7 \u00b1 1.6, only 42.4% of households consumed at least four food groups out of seven over the preceding 24-h recall period. Agricultural biodiversity (i.e. variety of animals kept and food groups produced) associated positively with dietary diversity of children aged 6\u201336 months but the relationship was moderated by household socioeconomic status. The interaction term was also statistically significant [\u03b2 = \u22120.08 (95% CI: \u22120.05, \u22120.01, p = 0.001)]. Spearman correlation (rho) analysis showed that agricultural biodiversity was positively associated with individual dietary diversity of the child more among children of low socioeconomic status in rural households compared to children of high socioeconomic status (r = 0.93, p < 0.001 versus r = 0.08, p = 0.007). Socioeconomic status of the household also partially mediated the link between agricultural biodiversity and dietary diversity of a child\u2019s diet. The effect of increased agricultural biodiversity on dietary diversity was significantly higher in households of lower socioeconomic status. Therefore, improvement of agricultural biodiversity could be one of the best approaches for ensuring diverse diets especially for households of lower socioeconomic status in rural areas of Northern Ghana.", "which Has end ?", "2013", 286.0, 290.0], ["Abstract Objective The association between farm production diversity and dietary diversity in rural smallholder households was recently analysed. Most existing studies build on household-level dietary diversity indicators calculated from 7d food consumption recalls. 
Herein, this association is revisited with individual-level 24 h recall data. The robustness of the results is tested by comparing household- and individual-level estimates. The role of other factors that may influence dietary diversity, such as market access and agricultural technology, is also analysed. Design A survey of smallholder farm households was carried out in Malawi in 2014. Dietary diversity scores are calculated from 24 h recall data. Production diversity scores are calculated from farm production data covering a period of 12 months. Individual- and household-level regression models are developed and estimated. Setting Data were collected in sixteen districts of central and southern Malawi. Subjects Smallholder farm households (n 408), young children (n 519) and mothers (n 408). Results Farm production diversity is positively associated with dietary diversity. However, the estimated effects are small. Access to markets for buying food and selling farm produce and use of chemical fertilizers are shown to be more important for dietary diversity than diverse farm production. Results with household- and individual-level dietary data are very similar. Conclusions Further increasing production diversity may not be the most effective strategy to improve diets in smallholder farm households. Improving access to markets, productivity-enhancing inputs and technologies seems to be more promising.", "which Has end ?", "2014", 650.0, 654.0], ["Households in low-income settings are vulnerable to seasonal changes in dietary diversity because of fluctuations in food availability and access. We assessed seasonal differences in household dietary diversity in Burkina Faso, and determined the extent to which household socioeconomic status and crop production diversity modify changes in dietary diversity across seasons, using data from the nationally representative 2014 Burkina Faso Continuous Multisectoral Survey (EMC). A household dietary diversity score based on nine food groups was created from household food consumption data collected during four rounds of the 2014 EMC. Plot-level crop production data, and data on household assets and education were used to create variables on crop diversity and household socioeconomic status, respectively. Analyses included data for 10,790 households for which food consumption data were available for at least one round. Accounting for repeated measurements and controlling for the complex survey design and confounding covariates using a weighted multi-level model, household dietary diversity was significantly higher during both lean season periods, and higher still during the harvest season as compared to the post-harvest season (mean: post-harvest: 4.76 (SE 0.04); beginning of lean: 5.13 (SE 0.05); end of lean: 5.21 (SE 0.05); harvest: 5.72 (SE 0.04)), but was not different between the beginning and the end of lean season. Seasonal differences in household dietary diversity were greater among households with higher food expenditures, greater crop production, and greater monetary value of crops sale (P<0.05). Seasonal changes in household dietary diversity in Burkina Faso may reflect nutritional differences among agricultural households, and may be modified both by households\u2019 socioeconomic status and agricultural characteristics.", "which Has end ?", "2014", 422.0, 426.0], ["Background: Recent literature, largely from Africa, shows mixed effects of own-production on diet diversity. 
However, the role of own-production, relative to markets, in influencing food consumption becomes more pronounced as market integration increases. Objective: This paper investigates the relative importance of two factors - production diversity and household market integration - for the intake of a nutritious diet by women and households in rural India. Methods: Data analysis is based on primary data from an extensive agriculture-nutrition survey of 3600 Indian households that was collected in 2017. Dietary diversity scores are constructed for women and households based on 24-hour and 7-day recall periods. Household market integration is measured as monthly household expenditure on key non-staple food groups. We measure production diversity in two ways - field-level and on-farm production diversity - in order to account for the cereal-centric rice-wheat cropping system found in our study locations. The analysis is based on Ordinary Least Squares regressions where we control for a variety of village, household, and individual level covariates that affect food consumption, and village fixed effects. Robustness checks are done by way of using Poisson regression specifications and 7-day recall period. Results: Conventional measures of field-level production diversity, like the number of crops or food groups grown, have no significant association with diet diversity. In contrast, it is on-farm production diversity (the field-level cultivation of pulses and on-farm livestock management, and kitchen gardens in the longer run) that is significantly associated with improved dietary diversity scores, thus suggesting the importance of non-staples in improving both individual and household dietary diversity. Furthermore, market purchases of non-staples like pulses and dairy products are associated with a significantly higher dietary diversity. Other significant determinants of dietary diversity include women\u2019s literacy and awareness of nutrition. These results mostly remain robust to changes in the recall period of the diet diversity measure and the nature of the empirical specification. Conclusions: This study contributes to the scarce empirical evidence related to diets in India. Additionally, our results indicate some key intervention areas - promoting livestock rearing, strengthening households\u2019 market integration (for purchase of non-staples) and increasing women\u2019s awareness about nutrition. These are more impactful than raising production diversity. ", "which Has end ?", "2017", 776.0, 780.0], ["Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high-quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. 
Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.", "which Number of documents ?", "1367", 647.0, 651.0], ["Die vorliegende Studie vergleicht das Niveau der selbstberichteten psychischen Gesundheit und des Wohlbefindens in Deutschland zu Beginn der Corona-Krise (April 2020) mit dem der Vorjahre. Die Zufriedenheit mit der Gesundheit steigt \u00fcber alle Bev\u00f6lkerungsgruppen hinweg deutlich an, w\u00e4hrend Sorgen um die Gesundheit \u00fcber alle Gruppen deutlich sinken. Dies deutet darauf hin, dass die aktuelle Einsch\u00e4tzung stark im Kontext des Bedrohungsszenarios der Pandemie erfolgt. Die subjektive Einsamkeit steigt \u00fcber alle betrachteten Gruppen sehr stark an. Der Anstieg f\u00e4llt unter j\u00fcngeren Menschen und Frauen etwas gr\u00f6\u00dfer aus. Depressions- und Angstsymptome steigen ebenfalls an im Vergleich zu 2019, sind jedoch vergleichbar zum Niveau in 2016. Das Wohlbefinden ver\u00e4ndert sich insgesamt kaum, es zeigen sich jedoch kleine Geschlechterunterschiede. W\u00e4hrend Frauen im Durchschnitt ein etwas geringeres Wohlbefinden berichten, ist das Wohlbefinden bei M\u00e4nnern leicht angestiegen. Die allgemeine Lebenszufriedenheit ver\u00e4ndert sich im Vergleich zu den Vorjahren im April 2020 noch nicht signifikant. Allerdings findet man eine Angleichung der sozio\u00f6konomischen Unterschiede in Bildung und Einkommen. Personen mit niedriger Bildung und Personen mit niedrigem Einkommen berichten einen leichten Anstieg ihrer Lebenszufriedenheit, w\u00e4hrend Personen mit hoher Bildung und Personen mit hohem Einkommen eine leichte Reduktion ihrer Lebenszufriedenheit berichten. Insgesamt ergibt sich, dass in der ersten Phase der Corona-Pandemie sozio\u00f6konomische Unterschiede f\u00fcr die psychische Gesundheit noch keine gro\u00dfe modifizierende Rolle spielen. Bestehende soziale Ungleichheiten in gesundheitsbezogenen Indikatoren bleiben weitgehend bestehen, einige verringern sich sogar.", "which Compared time before the COVID-19 pandemic ?", "2019", 687.0, 691.0], ["In recent times, polymer-based flexible pressure sensors have been attracting a lot of attention because of their various applications. A highly sensitive and flexible sensor is suggested, capable of being attached to the human body, based on a three-dimensional dielectric elastomeric structure of polydimethylsiloxane (PDMS) and microsphere composite. This sensor has maximal porosity due to macropores created by sacrificial layer grains and micropores generated by microspheres pre-mixed with PDMS, allowing it to operate at a wider pressure range (~150 kPa) while maintaining a sensitivity (of 0.124 kPa\u22121 in a range of 0~15 kPa) better than in previous studies. The maximized pores can cause deformation in the structure, allowing for the detection of small changes in pressure. In addition to exhibiting a fast rise time (~167 ms) and fall time (~117 ms), as well as excellent reproducibility, the fabricated pressure sensor exhibits reliability in its response to repeated mechanical stimuli (2.5 kPa, 1000 cycles). 
As an application, we develop a wearable device for monitoring repeated tiny motions, such as the pulse on the human neck and swallowing at the Adam\u2019s apple. This sensory device is also used to detect movements in the index finger and to monitor an insole system in real-time.", "which Loading-unloading cycles ?", "1000", 1018.0, 1022.0], ["SnO2 nanowire gas sensors have been fabricated on Cd\u2212Au comb-shaped interdigitating electrodes using thermal evaporation of the mixed powders of SnO2 and active carbon. The self-assembly grown sensors have excellent performance in sensor response to hydrogen concentration in the range of 10 to 1000 ppm. This high response is attributed to the large portion of undercoordinated atoms on the surface of the SnO2 nanowires. The influence of the Debye length of the nanowires and the gap between electrodes in the gas sensor response is examined and discussed.", "which Measuring concentration (ppm) ?", "1000", 295.0, 299.0], ["Objectives: To estimate the prevalence and incidence of epilepsy in Italy using a national database of general practitioners (GPs). Methods: The Health Search CSD Longitudinal Patient Database (HSD) has been established in 1998 by the Italian College of GPs. Participants were 700 GPs, representing a population of 912,458. For each patient, information on age and sex, EEG, CT scan, and MRI was included. Prevalent cases with a diagnosis of \u2018epilepsy\u2019 (ICD9CM: 345*) were selected in the 2011 population. Incident cases of epilepsy were identified in 2011 by excluding patients diagnosed for epilepsy and convulsions and those with EEG, CT scan, MRI prescribed for epilepsy and/or convulsions in the previous years. Crude and standardized (Italian population) prevalence and incidence were calculated. Results: Crude prevalence of epilepsy was 7.9 per 1,000 (men 8.1; women 7.7). The highest prevalence was in patients <25 years and \u226575 years. The incidence of epilepsy was 33.5 per 100,000 (women 35.3; men 31.5). The highest incidence was in women <25 years and in men 75 years or older. Conclusions: Prevalence and incidence of epilepsy in this study were similar to those of other industrialized countries. HSD appears as a reliable data source for the surveillance of epilepsy in Italy. \u00a9 2014 S. Karger AG, Basel", "which Year of study ?", "2011", 489.0, 493.0], ["Despite its importance for the global oceanic nitrogen (N) cycle, considerable uncertainties exist about the N fluxes of the Arabian Sea. On the basis of our recent measurements during the German Arabian Sea Process Study as part of the Joint Global Ocean Flux Study (JGOFS) in 1995 and 1997, we present estimates of various N sources and sinks such as atmospheric dry and wet depositions of N aerosols, pelagic denitrification, nitrous oxide (N2O) emissions, and advective N input from the south. Additionally, we estimated the N burial in the deep sea and the sedimentary shelf denitrification. On the basis of our measurements and literature data, the N budget for the Arabian Sea was reassessed. It is dominated by the N loss due to denitrification, which is balanced by the advective input of N from the south. 
Emissions of N2O and ammonia, deep\u2010sea N burial, and N inputs by rivers and marginal seas (i.e., Persian Gulf and Red Sea) are of minor importance. We found that the magnitude of the sedimentary denitrification at the shelf might be \u223c17% of the total denitrification in the Arabian Sea, indicating that the shelf sediments might be of considerably greater importance for the N cycling in the Arabian Sea than previously thought. Sedimentary and pelagic denitrification together demand \u223c6% of the estimated particulate organic nitrogen export flux from the photic zone. The main northward transport of N into the Arabian Sea occurs in the intermediate layers, indicating that the N cycle of the Arabian Sea might be sensitive to variations of the intermediate water circulation of the Indian Ocean.", "which Sampling year ?", "1997", 287.0, 291.0], ["Abstract. Coastal upwelling ecosystems with marked oxyclines (redoxclines) present high availability of electron donors that favour chemoautotrophy, leading in turn to high N2O and CH4 cycling associated with aerobic NH4+ (AAO) and CH4 oxidation (AMO). This is the case of the highly productive coastal upwelling area off Central Chile (36\u00b0 S), where we evaluated the importance of total chemolithoautotrophic vs. photoautotrophic production, the specific contributions of AAO and AMO to chemosynthesis and their role in gas cycling. Chemoautotrophy (involving bacteria and archaea) was studied at a time-series station during monthly (2002\u20132009) and seasonal cruises (January 2008, September 2008, January 2009) and was assessed in terms of dark carbon assimilation (CA), N2O and CH4 cycling, and the natural C isotopic ratio of particulate organic carbon (\u03b413POC). Total integrated dark CA fluctuated between 19.4 and 2,924 mg C m\u22122 d\u22121. It was higher during active upwelling and represented on average 27% of the integrated photoautotrophic production (from 135 to 7,626 mg C m\u22122 d\u22121). At the oxycline, \u03b413POC averaged -22.209\u2030; this was significantly lighter compared to the surface (-19.674\u2030) and bottom layers (-20.716\u2030). This pattern, along with low NH4+ content and high accumulations of N2O, NO2- and NO3- within the oxycline indicates that chemolithoautotrophs and specifically AA oxidisers were active. Dark CA was reduced from 27 to 48% after addition of a specific AAO inhibitor (ATU) and from 24 to 76% with GC7, a specific archaea inhibitor, indicating that AAO and maybe AMO microbes (most of them archaea) were performing dark CA through oxidation of NH4+ and CH4. AAO produced N2O at rates from 8.88 to 43 nM d\u22121 and a fraction of it was effluxed into the atmosphere (up to 42.85 \u03bcmol m\u22122 d\u22121). AMO on the other hand consumed CH4 at rates between 0.41 and 26.8 nM d\u22121, therefore preventing its efflux to the atmosphere (up to 18.69 \u03bcmol m\u22122 d\u22121). These findings show that chemically driven chemoautotrophy (with NH4+ and CH4 acting as electron donors) could be more important than previously thought in upwelling ecosystems and open new questions concerning its future relevance.", "which Sampling year ?", "2008", 677.0, 681.0], ["We encountered an extensive surface bloom of the N2-fixing cyanobacterium Trichodesmium erythraeum in the central basin of the Arabian Sea during the spring intermonsoon of 1995. The bloom, which occurred during a period of calm winds and relatively high atmospheric iron content, was metabolically active. 
Carbon fixation by the bloom represented about one-quarter of water column primary productivity while input by N2 fixation could account for a major fraction of the estimated 'new' N demand of primary production. Isotopic measurements of the N in surface suspended material confirmed a direct contribution of N2 fixation to the organic nitrogen pools of the upper water column. Retrospective analysis of NOAA-12 AVHRR imagery indicated that blooms covered up to 2 \u00d7 10^6 km2, or 20% of the Arabian Sea surface, during the period from 22 to 27 May 1995. In addition to their biogeochemical impact, surface blooms of this extent may have secondary effects on sea surface albedo and light penetration as well as heat and gas exchange across the air-sea interface. A preliminary extrapolation based on our observed, non-bloom rates of N2 fixation from our limited sampling in the spring intermonsoon, including a conservative estimate of the input by blooms, suggests N2 fixation may account for an input of about 1 Tg N yr-1. This is substantial, but relatively minor compared to current estimates of the removal of N through denitrification in the basin. However, N2 fixation may also occur in the central basin through the mild winter monsoon, be considerably greater during the fall intermonsoon than we observed during the spring intermonsoon, and may also occur at higher levels in the chronically oligotrophic southern basin. Ongoing satellite observations will help to determine more accurately the distribution and density of Trichodesmium in this and other tropical oceanic basins, as well as resolving the actual frequency and duration of bloom occurrence.", "which Sampling year ?", "1995", 173.0, 177.0], ["Biogeochemical implications of global imbalance between the rates of marine dinitrogen (N2) fixation and denitrification have spurred us to understand the former process in the Arabian Sea, which contributes considerably to the global nitrogen budget. Heterotrophic bacteria have gained recent appreciation for their major role in marine N budget by fixing a significant amount of N2. Accordingly, we hypothesize a probable role of heterotrophic diazotrophs from the 15N2 enriched isotope labelling dark incubations that witnessed rates comparable to the light incubations in the eastern Arabian Sea during spring 2010. Maximum areal rates (8 mmol N m-2 d-1) were the highest ever observed anywhere in world oceans. Our results suggest that the eastern Arabian Sea gains ~92% of its new nitrogen through N2 fixation. Our results are consistent with the observations made in the same region in the preceding year, i.e., during the spring of 2009.", "which Sampling year ?", "2010", 614.0, 618.0], ["Extensive measurements of nitrous oxide (N2O) have been made during April\u2013May 1994 (intermonsoon), February\u2013March 1995 (northeast monsoon), July\u2013August 1995 and August 1996 (southwest monsoon) in the Arabian Sea. Low N2O supersaturations in the surface waters are observed during intermonsoon compared to those in northeast and southwest monsoons. Spatial distributions of supersaturations manifest the effects of larger mixing during winter cooling and wind\u2010driven upwelling during monsoon period off the Indian west coast. A net positive flux is observable during all the seasons, with no discernible differences from the open ocean to coastal regions. 
The average ocean\u2010to\u2010atmosphere fluxes of N2O are estimated, using wind speed dependent gas transfer velocity, to be of the order of 0.26, 0.003, and 0.51, and 0.78 pg (picograms) cm\u22122 s\u22121 during northeast monsoon, intermonsoon, and southwest monsoon in 1995 and 1996, respectively. The lower range of annual emission of N2O is estimated to be 0.56\u20130.76 Tg N2O per year which constitutes 13\u201317% of the net global oceanic source. However, N2O emission from the Arabian Sea can be as high as 1.0 Tg N2O per year using different gas transfer models.", "which Sampling year ?", "1995", 114.0, 118.0], ["Abstract. One of the major objectives of the BIOSOPE cruise, carried out on the R/V Atalante from October-November 2004 in the South Pacific Ocean, was to establish productivity rates along a zonal section traversing the oligotrophic South Pacific Gyre (SPG). These results were then compared to measurements obtained from the nutrient-replete waters in the Chilean upwelling and around the Marquesas Islands. A dual 13C/15N isotope technique was used to estimate the carbon fixation rates, inorganic nitrogen uptake (including dinitrogen fixation), ammonium (NH4) and nitrate (NO3) regeneration and release of dissolved organic nitrogen (DON). The SPG exhibited the lowest primary production rates (0.15 g C m\u22122 d\u22121), while rates were 7 to 20 times higher around the Marquesas Islands and in the Chilean upwelling, respectively. In the very low productive area of the SPG, most of the primary production was sustained by active regeneration processes that fuelled up to 95% of the biological nitrogen demand. Nitrification was active in the surface layer and often balanced the biological demand for nitrate, especially in the SPG. The percentage of nitrogen released as DON represented a large proportion of the inorganic nitrogen uptake (13\u201315% in average), reaching 26\u201341% in the SPG, where DON production played a major role in nitrogen cycling. Dinitrogen fixation was detectable over the whole study area; even in the Chilean upwelling, where rates as high as 3 nmoles l\u22121 d\u22121 were measured. In these nutrient-replete waters new production was very high (0.69\u00b10.49 g C m\u22122 d\u22121) and essentially sustained by nitrate levels. In the SPG, dinitrogen fixation, although occurring at much lower daily rates (\u22481\u20132 nmoles l\u22121 d\u22121), sustained up to 100% of the new production (0.008\u00b10.007 g C m\u22122 d\u22121) which was two orders of magnitude lower than that measured in the upwelling. The annual N2-fixation of the South Pacific is estimated to 21\u00d710^12 g, of which 1.34\u00d710^12 g is for the SPG only. Even if our "snapshot" estimates of N2-fixation rates were lower than that expected from a recent ocean circulation model, these data confirm that the N-deficient South Pacific Ocean would provide an ideal ecological niche for the proliferation of N2-fixers which are not yet identified.", "which Sampling year ?", "2004", 115.0, 119.0], ["Nitrogen fixation is an essential process that biologically transforms atmospheric dinitrogen gas to ammonia, therefore compensating for nitrogen losses occurring via denitrification and anammox. 
Currently, inputs and losses of nitrogen to the ocean resulting from these processes are thought to be spatially separated: nitrogen fixation takes place primarily in open ocean environments (mainly through diazotrophic cyanobacteria), whereas nitrogen losses occur in oxygen-depleted intermediate waters and sediments (mostly via denitrifying and anammox bacteria). Here we report on rates of nitrogen fixation obtained during two oceanographic cruises in 2005 and 2007 in the eastern tropical South Pacific (ETSP), a region characterized by the presence of coastal upwelling and a major permanent oxygen minimum zone (OMZ). Our results show significant rates of nitrogen fixation in the water column; however, integrated rates from the surface down to 120 m varied by \u223c30 fold between cruises (7.5\u00b14.6 versus 190\u00b182.3 \u00b5mol m\u22122 d\u22121). Moreover, rates were measured down to 400 m depth in 2007, indicating that the contribution to the integrated rates of the subsurface oxygen-deficient layer was \u223c5 times higher (574\u00b1294 \u00b5mol m\u22122 d\u22121) than the oxic euphotic layer (48\u00b168 \u00b5mol m\u22122 d\u22121). Concurrent molecular measurements detected the dinitrogenase reductase gene nifH in surface and subsurface waters. Phylogenetic analysis of the nifH sequences showed the presence of a diverse diazotrophic community at the time of the highest measured nitrogen fixation rates. Our results thus demonstrate the occurrence of nitrogen fixation in nutrient-rich coastal upwelling systems and, importantly, within the underlying OMZ. They also suggest that nitrogen fixation is a widespread process that can sporadically provide a supplementary source of fixed nitrogen in these regions.", "which Sampling year ?", "2005", 653.0, 657.0], ["Two blooms of Trichodesmium erythraeum were observed during April 2001, in the open waters of Bay of Bengal and this is the first report from this region. The locations of the bloom were off Karaikkal (10\u00b058'N, 81\u00b050'E) and off south of Calcutta (19\u00b0 44'N, 89\u00b0 04'E), both along east coast of India. Nutrients (nitrate, phosphate, silicate) concentration in the upper 30 m of the water column showed very low values. High-integrated primary production (Bloom 1: 2160 mg C m\u22122 d\u22121, Bloom 2: 1740 mg C m\u22122 d\u22121) was obtained in these regions, which indicated the enhancement of primary production in the earlier stages of the bloom. Very low NO3-N concentrations, brownish yellow bloom colour, undisturbed patches and high primary production strongly suggested that the blooms were in the growth phase. Low mesozooplankton biomass was found in both locations and was dominated by copepods followed by chaetognaths.", "which Sampling year ?", "2001", 66.0, 70.0], ["Dissolved and atmospheric nitrous oxide (N2O) were measured on the legs 3 and 5 of the R/V Meteor cruise 32 in the Arabian Sea. A cruise track along 65\u00b0E was followed during both the intermonsoon (May 1995) and the southwest (SW) monsoon (July/August 1995) periods. During the second leg the coastal and open ocean upwelling regions off the Arabian Peninsula were also investigated. Mean N2O saturations for the oceanic regions of the Arabian Sea were in the range of 99\u2013103% during the intermonsoon and 103\u2013230% during the SW monsoon. 
Computed annual emissions of 0.8\u20131.5 Tg N2O for the Arabian Sea are considerably higher than previous estimates, indicating that the role of upwelling regions, such as the Arabian Sea, may be more important than previously assumed in global budgets of oceanic N2O emissions.", "which Sampling year ?", "1995", 201.0, 205.0], ["Productivity measurements were carried out during spring 2007 in the northeastern (NE) Indian Ocean, where light availability is controlled by clouds and surface productivity by nutrient and light availability. New productivity is found to be higher than regenerated productivity at most locations, consistent with the earlier findings from the region. A comparison of the present results with the earlier findings reveals that the region contributes significantly in the sequestration of CO2 from the atmosphere, particularly during spring. Diatom-dominated plankton community is more efficient than those dominated by other organisms in the uptake of CO2 and its export to the deep. Earlier studies on plankton composition suggest that higher new productivity at most locations could also be due to the dominance of diatoms in the region.", "which Sampling year ?", "2007", 57.0, 61.0], ["Abstract Biological dinitrogen (N2) fixation exerts an important control on oceanic primary production by providing bioavailable form of nitrogen (such as ammonium) to photosynthetic microorganisms. N2 fixation is dominant in nutrient-poor and warm surface waters. The Bay of Bengal is one such region where no measurements of phototrophic N2 fixation rates exist. The surface water of the Bay of Bengal is generally nitrate-poor and warm due to prevailing stratification and thus, could favour N2 fixation. We commenced the first N2 fixation study in the photic zone of the Bay of Bengal using 15N2 gas tracer incubation experiment during summer monsoon 2018. We collected seawater samples from four depths (covering the mixed layer depth of up to 75 m) at eight stations. N2 fixation rates varied from 4 to 75 \u03bcmol N m\u22122 d\u22121. The contribution of N2 fixation to primary production was negligible (<1%). However, the upper bound of observed N2 fixation rates is higher than the rates measured in other oceanic regimes, such as the Eastern Tropical South Pacific, the Tropical Northwest Atlantic, and the Equatorial and Southern Indian Ocean.", "which Sampling year ?", "2018", 655.0, 659.0], ["The partial pressure of CO2 (pCO2) was measured during the 1995 South\u2010West Monsoon in the Arabian Sea. The Arabian Sea was characterized throughout by a moderate supersaturation of 12\u201330 \u00b5atm. The stable atmospheric pCO2 level was around 345 \u00b5atm. An extreme supersaturation was found in areas of coastal upwelling off the Omani coast with pCO2 peak values in surface waters of 750 \u00b5atm. Such two\u2010fold saturation (218%) is rarely found elsewhere in open ocean environments. We also encountered cold upwelled water 300 nm off the Omani coast in the region of Ekman pumping, which was also characterized by a strongly elevated seawater pCO2 of up to 525 \u00b5atm. Due to the strong monsoonal wind forcing the Arabian Sea as a whole and the areas of upwelling in particular represent a significant source of atmospheric CO2 with flux densities from around 2 mmol m\u22122 d\u22121 in the open ocean to 119 mmol m\u22122 d\u22121 in coastal upwelling. 
Local air masses passing the area of coastal upwelling showed increasing CO2 concentrations, which are consistent with such strong emissions.", "which Sampling year ?", "1995", 59.0, 63.0], ["Abstract. We performed N budgets at three stations in the western tropical South Pacific (WTSP) Ocean during austral summer conditions (Feb.\u2013Mar. 2015) and quantified all major N fluxes both entering the system (N2 fixation, nitrate eddy diffusion, atmospheric deposition) and leaving the system (PN export). Thanks to a Lagrangian strategy, we sampled the same water mass for the entire duration of each long duration (5 days) station, allowing us to consider only vertical exchanges. Two stations located at the western end of the transect (Melanesian archipelago (MA) waters, LD A and LD B) were oligotrophic and characterized by a deep chlorophyll maximum (DCM) located at 51\u2009\u00b1\u200918\u2009m and 81\u2009\u00b1\u20099\u2009m at LD A and LD B. Station LD C was characterized by a DCM located at 132\u2009\u00b1\u20097\u2009m, representative of the ultra-oligotrophic waters of the South Pacific gyre (SPG water). N2 fixation rates were extremely high at both LD A (593\u2009\u00b1\u200951\u2009\u00b5mol\u2009N\u2009m\u22122\u2009d\u22121) and LD B (706\u2009\u00b1\u2009302\u2009\u00b5mol\u2009N\u2009m\u22122\u2009d\u22121), and the diazotroph community was dominated by Trichodesmium. N2 fixation rates were lower (59\u2009\u00b1\u200916\u2009\u00b5mol\u2009N\u2009m\u22122\u2009d\u22121) at LD C and the diazotroph community was dominated by unicellular N2-fixing cyanobacteria (UCYN). At all stations, N2 fixation was the major source of new N (>\u200990\u2009%) before atmospheric deposition and upward nitrate fluxes induced by turbulence. N2 fixation contributed circa 8\u201312\u2009% of primary production in the MA region and 3\u2009% in the SPG water and sustained nearly all new primary production at all stations. The e-ratio (e-ratio\u2009=\u2009PC export/PP) was maximum at LD A (9.7\u2009%) and was higher than the e-ratio in most studied oligotrophic regions (~\u20091\u2009%), indicating a high efficiency of the WTSP to export carbon relative to primary production. The direct export of diazotrophs assessed by qPCR of the nifH gene in sediment traps represented up to 30.6\u2009% of the PC export at LD A, while their contribution was 5 and \n ", "which Sampling year ?", "2015", 154.0, 158.0], ["Depth profiles of dissolved nitrous oxide (N2O) were measured in the central and western Arabian Sea during four cruises in May and July\u2013August 1995 and May\u2013July 1997 as part of the German contribution to the Arabian Sea Process Study of the Joint Global Ocean Flux Study. The vertical distribution of N2O in the water column on a transect along 65\u00b0E showed a characteristic double-peak structure, indicating production of N2O associated with steep oxygen gradients at the top and bottom of the oxygen minimum zone. We propose a general scheme consisting of four ocean compartments to explain the N2O cycling as a result of nitrification and denitrification processes in the water column of the Arabian Sea. We observed a seasonal N2O accumulation at 600\u2013800 m near the shelf break in the western Arabian Sea. We propose that, in the western Arabian Sea, N2O might also be formed during bacterial oxidation of organic matter by the reduction of IO3\u2212 to I\u2212, indicating that the biogeochemical cycling of N2O in the Arabian Sea during the SW monsoon might be more complex than previously thought. 
A compilation of sources and sinks of N2O in the Arabian Sea suggested that the N2O budget is reasonably balanced.", "which Sampling year ?", "1997", 162.0, 166.0], ["Abstract Picophytoplankton were investigated during spring 2015 and 2016 extending from near\u2010shore coastal waters to oligotrophic open waters in the eastern Indian Ocean (EIO). They were typically composed of Prochlorococcus (Pro), Synechococcus (Syn), and picoeukaryotes (PEuks). Pro dominated most regions of the entire EIO and were approximately 1\u20132 orders of magnitude more abundant than Syn and PEuks. Under the influence of physicochemical conditions induced by annual variations of circulations and water masses, no coherent abundance and horizontal distributions of picophytoplankton were observed between spring 2015 and 2016. Although previous studies reported that the limited effects of nutrients and heavy metals around coastal waters or upwelling zones could constrain Pro growth, Pro abundance showed strong positive correlation with nutrients, indicating that the increase in nutrient availability particularly in the oligotrophic EIO could appreciably elevate their abundance. The exceptional appearance of picophytoplankton with high abundance along the equator appeared to be associated with the advection processes supported by the Wyrtki jets. For vertical patterns of picophytoplankton, a simple conceptual model was built based upon physicochemical parameters. However, Pro and PEuks simultaneously formed a subsurface maximum, while Syn generally restricted to the upper waters, significantly correlating with the combined effects of temperature, light, and nutrient availability. The average chlorophyll a concentrations (Chl a) of picophytoplankton accounted for above 49.6% and 44.9% of the total Chl a during both years, respectively, suggesting that picophytoplankton contributed a significant proportion of the phytoplankton community in the whole EIO.", "which Sampling year ?", "2015", 59.0, 63.0], ["Extensive measurements of nitrous oxide (N2O) have been made during April\u2013May 1994 (intermonsoon), February\u2013March 1995 (northeast monsoon), July\u2013August 1995 and August 1996 (southwest monsoon) in the Arabian Sea. Low N2O supersaturations in the surface waters are observed during intermonsoon compared to those in northeast and southwest monsoons. Spatial distributions of supersaturations manifest the effects of larger mixing during winter cooling and wind\u2010driven upwelling during monsoon period off the Indian west coast. A net positive flux is observable during all the seasons, with no discernible differences from the open ocean to coastal regions. The average ocean\u2010to\u2010atmosphere fluxes of N2O are estimated, using wind speed dependent gas transfer velocity, to be of the order of 0.26, 0.003, and 0.51, and 0.78 pg (picograms) cm\u22122 s\u22121 during northeast monsoon, intermonsoon, and southwest monsoon in 1995 and 1996, respectively. The lower range of annual emission of N2O is estimated to be 0.56\u20130.76 Tg N2O per year which constitutes 13\u201317% of the net global oceanic source. However, N2O emission from the Arabian Sea can be as high as 1.0 Tg N2O per year using different gas transfer models.", "which Sampling year ?", "1994", 78.0, 82.0], ["Nitrogen (N) is an essential element for life and controls the magnitude of primary productivity in the ocean. 
In order to describe the microorganisms that catalyze N transformations in surface waters in the South Pacific Ocean, we collected high-resolution biotic and abiotic data along a 7000 km transect, from the Antarctic ice edge to the equator. The transect, conducted between late Austral autumn and early winter 2016, covered major oceanographic features such as the polar front (PF), the subtropical front (STF) and the Pacific equatorial divergence (PED). We measured N2 fixation and nitrification rates and quantified the relative abundances of diazotrophs and nitrifiers in a region where few to no rate measurements are available. Even though N2 fixation rates are usually below detection limits in cold environments, we were able to measure this N pathway at 7/10 stations in the cold and nutrient-rich waters near the PF. This result highlights that N2 fixation rates continue to be measured outside the well-known subtropical regions. The majority of the mid to high N2 fixation rates (>\u223c20 nmol L\u20131 d\u20131), however, still occurred in the expected tropical and subtropical regions. High throughput sequence analyses of the dinitrogenase reductase gene (nifH) revealed that the nifH Cluster I dominated the diazotroph diversity throughout the transect. nifH gene richness did not show a latitudinal trend, nor was it significantly correlated with N2 fixation rates. Nitrification rates above the mixed layer in the Southern Ocean ranged between 56 and 1440 nmol L\u20131 d\u22121. Our data showed a decoupling between carbon and N assimilation (NO3\u2013 and NH4+ assimilation rates) in winter in the South Pacific Ocean. Phytoplankton community structure showed clear changes across the PF, the STF and the PED, defining clear biomes. Overall, these findings provide a better understanding of the ecosystem functionality in the South Pacific Ocean across key oceanographic biomes.", "which Sampling year ?", "2016", 421.0, 425.0], ["We encountered an extensive surface bloom of the N2-fixing cyanobacterium Trichodesmium erythraeum in the central basin of the Arabian Sea during the spring intermonsoon of 1995. The bloom, which occurred during a period of calm winds and relatively high atmospheric iron content, was metabolically active. Carbon fixation by the bloom represented about one-quarter of water column primary productivity while input by N2 fixation could account for a major fraction of the estimated 'new' N demand of primary production. Isotopic measurements of the N in surface suspended material confirmed a direct contribution of N2 fixation to the organic nitrogen pools of the upper water column. Retrospective analysis of NOAA-12 AVHRR imagery indicated that blooms covered up to 2 \u00d7 10^6 km2, or 20% of the Arabian Sea surface, during the period from 22 to 27 May 1995. In addition to their biogeochemical impact, surface blooms of this extent may have secondary effects on sea surface albedo and light penetration as well as heat and gas exchange across the air-sea interface. A preliminary extrapolation based on our observed, non-bloom rates of N2 fixation from our limited sampling in the spring intermonsoon, including a conservative estimate of the input by blooms, suggests N2 fixation may account for an input of about 1 Tg N yr-1. This is substantial, but relatively minor compared to current estimates of the removal of N through denitrification in the basin. 
However, N2 fixation may also occur in the central basin through the mild winter monsoon, be considerably greater during the fall intermonsoon than we observed during the spring intermonsoon, and may also occur at higher levels in the chronically oligotrophic southern basin. Ongoing satellite observations will help to determine more accurately the distribution and density of Trichodesmium in this and other tropical oceanic basins, as well as resolving the actual frequency and duration of bloom occurrence.", "which Sampling year ?", "1995", 173.0, 177.0], ["Abstract. We have determined the latitudinal distribution of Trichodesmium spp. abundance and community N2 fixation in the Atlantic Ocean along a meridional transect from ca. 30\u00b0 N to 30\u00b0 S in November\u2013December 2007 and April\u2013May 2008. The observations from both cruises were highly consistent in terms of absolute magnitude and latitudinal distribution, showing a strong association between Trichodesmium abundance and community N2 fixation. The highest Trichodesmium abundances (mean = 220 trichomes L\u22121) and community N2 fixation rates (mean = 60 \u03bcmol m\u22122 d\u22121) occurred in the Equatorial region between 5\u00b0 S\u201315\u00b0 N. In the South Atlantic gyre, Trichodesmium abundance was very low (ca. 1 trichome L\u22121) but N2 fixation was always measurable, averaging 3 and 10 \u03bcmol m\u22122 d\u22121 in 2007 and 2008, respectively. We suggest that N2 fixation in the South Atlantic was sustained by other, presumably unicellular, diazotrophs. Comparing these distributions with the geographical pattern in atmospheric dust deposition points to iron supply as the main factor determining the large scale latitudinal variability of Trichodesmium spp. abundance and N2 fixation in the Atlantic Ocean. We observed a marked South to North decrease in surface phosphate concentration, which argues against a role for phosphorus availability in controlling the large scale distribution of N2 fixation. Scaling up from all our measurements (42 stations) results in conservative estimates for total N2 fixation of \u223c6 TgN yr\u22121 in the North Atlantic (0\u201340\u00b0 N) and ~1.2 TgN yr\u22121 in the South Atlantic (0\u201340\u00b0 S).", "which Sampling year ?", "2007", 211.0, 215.0], ["The uptake of dissolved inorganic nitrogen by phytoplankton is an important aspect of the nitrogen cycle of oceans. Here, we present nitrate () and ammonium () uptake rates in the northeastern Arabian Sea using tracer technique. In this relatively underexplored region, productivity is high during winter due to supply of nutrients by convective mixing caused by the cooling of the surface by the northeast monsoon winds. Studies done during different months (January and late February-early March) of the northeast monsoon 2003 revealed a fivefold increase in the average euphotic zone integrated uptake from January (2.3 mmol N m\u22122 d\u22121) to late February-early March (12.7 mmol N m\u22122 d\u22121). The f-ratio during January appeared to be affected by the winter cooling effect and increased by more than 50% from the southernmost station to the northern open ocean stations, indicating hydrographic and meteorological control. 
Estimates of residence time suggested that nitrate entrained in the water column during January contributed to the development of blooms during late February-early March.", "which Sampling year ?", "2003", 524.0, 528.0], ["We examined rates of N2 fixation from the surface to 2000 m depth in the Eastern Tropical South Pacific (ETSP) during El Ni\u00f1o (2010) and La Ni\u00f1a (2011). Replicated vertical profiles performed under oxygen-free conditions show that N2 fixation takes place both in euphotic and aphotic waters, with rates reaching 155 to 509 \u00b5mol N m\u22122 d\u22121 in 2010 and 24\u00b114 to 118\u00b187 \u00b5mol N m\u22122 d\u22121 in 2011. In the aphotic layers, volumetric N2 fixation rates were relatively low (<1.00 nmol N L\u22121 d\u22121), but when integrated over the whole aphotic layer, they accounted for 87\u201390% of total rates (euphotic+aphotic) for the two cruises. Phylogenetic studies performed in microcosm experiments confirm the presence of diazotrophs in the deep waters of the Oxygen Minimum Zone (OMZ), which were comprised of non-cyanobacterial diazotrophs affiliated with nifH clusters 1K (predominantly comprised of \u03b1-proteobacteria), 1G (predominantly comprised of \u03b3-proteobacteria), and 3 (sulfate reducing genera of the \u03b4-proteobacteria and Clostridium spp., Vibrio spp.). Organic and inorganic nutrient addition bioassays revealed that amino acids significantly stimulated N2 fixation in the core of the OMZ at all stations tested and as did simple carbohydrates at stations located nearest the coast of Peru/Chile. The episodic supply of these substrates from upper layers is hypothesized to explain the observed variability of N2 fixation in the ETSP.", "which Sampling year ?", "2011", 146.0, 150.0], ["Abstract. Diazotrophic activity and primary production (PP) were investigated along two transects (Belgica BG2014/14 and GEOVIDE cruises) off the western Iberian Margin and the Bay of Biscay in May 2014. Substantial N2 fixation activity was observed at 8 of the 10 stations sampled, ranging overall from 81 to 384 \u00b5mol N m\u22122 d\u22121 (0.7 to 8.2 nmol N L\u22121 d\u22121), with two sites close to the Iberian Margin situated between 38.8 and 40.7\u00b0 N yielding rates reaching up to 1355 and 1533 \u00b5mol N m\u22122 d\u22121. Primary production was relatively lower along the Iberian Margin, with rates ranging from 33 to 59 mmol C m\u22122 d\u22121, while it increased towards the northwest away from the peninsula, reaching as high as 135 mmol C m\u22122 d\u22121. In agreement with the area-averaged Chl a satellite data contemporaneous with our study period, our results revealed that post-bloom conditions prevailed at most sites, while at the northwesternmost station the bloom was still ongoing. When converted to carbon uptake using Redfield stoichiometry, N2 fixation could support 1 % to 3 % of daily PP in the euphotic layer at most sites, except at the two most active sites where this contribution to daily PP could reach up to 25 %. At the two sites where N2 fixation activity was the highest, the prymnesiophyte\u2013symbiont Candidatus Atelocyanobacterium thalassa (UCYN-A) dominated the nifH sequence pool, while the remaining recovered sequences belonged to non-cyanobacterial phylotypes. At all the other sites, however, the recovered nifH sequences were exclusively assigned phylogenetically to non-cyanobacterial phylotypes. 
The intense N2 fixation activities recorded at the time of our study were likely promoted by the availability of phytoplankton-derived organic matter produced during the spring bloom, as evidenced by the significant surface particulate organic carbon concentrations. Also, the presence of an excess phosphorus signature in surface waters seemed to contribute to sustaining N2 fixation, particularly at the sites with extreme activities. These results provide a mechanistic understanding of the unexpectedly high N2 fixation in productive waters of the temperate North Atlantic and highlight the importance of N2 fixation for future assessment of the global N inventory.", "which Sampling year ?", "2014", 198.0, 202.0], ["Abstract The surface waters of the northeastern Arabian Sea sustained relatively high chlorophyll a (average 0.81\u00b10.80 mg m\u20133) and primary production (average 29.5\u00b123.6 mgC m\u20133 d\u20131) during the early spring intermonsoon 2000. This was caused primarily by a thick algal bloom spread over a vast area between 17\u201321\u00b0N and 66\u201370\u00b0E. Satellite images showed exceptionally high concentration of chlorophyll a in the bloom area, representing the annually occurring \u2018spring blooms\u2019 during February\u2013March. The causative organism of the bloom was the dinoflagellate, Noctiluca scintillans (Dinophyceae: Noctilucidea), symbiotically associated with an autotrophic prasinophyte Pedinomonas noctilucae. The symbiosis between N. scintillans and P. noctilucae is most likely responsible for their explosive growth (average 3 million cells l\u20131) over an extensive area, making the northeastern Arabian Sea highly productive (average 607\u00b1338 mgC m\u20132 d\u20131) even during an oligotrophic period such as spring intermonsoon.", "which Sampling year ?", "2000", 219.0, 223.0], ["We report the first direct estimates of N2 fixation rates measured during the spring, 2009 using the 15N2 gas tracer technique in the eastern Arabian Sea, which is well known for significant loss of nitrogen due to intense denitrification. Carbon uptake rates are also concurrently estimated using the 13C tracer technique. The N2 fixation rates vary from \u223c0.1 to 34 mmol N m\u22122 d\u22121 after correcting for the isotopic under\u2010equilibrium with dissolved air in the samples. These higher N2 fixation rates are consistent with higher chlorophyll a and low \u03b415N of natural particulate organic nitrogen. Our estimates of N2 fixation are a useful step toward reducing the uncertainty in the nitrogen budget.", "which Sampling year ?", "2009", 86.0, 90.0], ["Despite its importance for the global oceanic nitrogen (N) cycle, considerable uncertainties exist about the N fluxes of the Arabian Sea. On the basis of our recent measurements during the German Arabian Sea Process Study as part of the Joint Global Ocean Flux Study (JGOFS) in 1995 and 1997, we present estimates of various N sources and sinks such as atmospheric dry and wet depositions of N aerosols, pelagic denitrification, nitrous oxide (N2O) emissions, and advective N input from the south. Additionally, we estimated the N burial in the deep sea and the sedimentary shelf denitrification. On the basis of our measurements and literature data, the N budget for the Arabian Sea was reassessed. It is dominated by the N loss due to denitrification, which is balanced by the advective input of N from the south. 
The role of N fixation in the Arabian Sea is still difficult to assess owing to the small database available; however, there are hints that it might be more important than previously thought. Atmospheric N depositions are important on a regional scale during the intermonsoon in the central Arabian Sea; however, they play only a minor role for the overall N cycling. Emissions of N2O and ammonia, deep\u2010sea N burial, and N inputs by rivers and marginal seas (i.e., Persian Gulf and Red Sea) are of minor importance. We found that the magnitude of the sedimentary denitrification at the shelf might be \u223c17% of the total denitrification in the Arabian Sea, indicating that the shelf sediments might be of considerably greater importance for the N cycling in the Arabian Sea than previously thought. Sedimentary and pelagic denitrification together demand \u223c6% of the estimated particulate organic nitrogen export flux from the photic zone. The main northward transport of N into the Arabian Sea occurs in the intermediate layers, indicating that the N cycle of the Arabian Sea might be sensitive to variations of the intermediate water circulation of the Indian Ocean.", "which Sampling year ?", "1995", 278.0, 282.0], ["We report the first direct estimates of N2 fixation rates measured during the spring, 2009 using the 15N2 gas tracer technique in the eastern Arabian Sea, which is well known for significant loss of nitrogen due to intense denitrification. Carbon uptake rates are also concurrently estimated using the 13C tracer technique. The N2 fixation rates vary from \u223c0.1 to 34 mmol N m\u22122 d\u22121 after correcting for the isotopic under\u2010equilibrium with dissolved air in the samples. These higher N2 fixation rates are consistent with higher chlorophyll a and low \u03b415N of natural particulate organic nitrogen. Our estimates of N2 fixation are a useful step toward reducing the uncertainty in the nitrogen budget.", "which Sampling year ?", "2009", 86.0, 90.0], ["Biological N2 fixation rates were quantified in the Eastern Tropical South Pacific (ETSP) during both El Ni\u00f1o (February 2010) and La Ni\u00f1a (March\u2013April 2011) conditions, and from Low\u2010Nutrient, Low\u2010Chlorophyll (20\u00b0S) to High\u2010Nutrient, Low\u2010Chlorophyll (HNLC) (10\u00b0S) conditions. N2 fixation was detected at all stations with rates ranging from 0.01 to 0.88 nmol N L\u22121 d\u22121, with higher rates measured during El Ni\u00f1o conditions compared to La Ni\u00f1a. High N2 fixation rates were reported at northern stations (HNLC conditions) at the oxycline and in the oxygen minimum zone (OMZ), despite nitrate concentrations up to 30 \u00b5mol L\u22121, indicating that inputs of new N can occur in parallel with N loss processes in OMZs. Water\u2010column integrated N2 fixation rates ranged from 4 to 53 \u00b5mol N m\u22122 d\u22121 at northern stations, and from 0 to 148 \u00b5mol m\u22122 d\u22121 at southern stations, which are of the same order of magnitude as N2 fixation rates measured in the oligotrophic ocean. N2 fixation rates responded significantly to Fe and organic carbon additions in the surface HNLC waters, and surprisingly by concomitant Fe and N additions in surface waters at the edge of the subtropical gyre. 
Recent studies have highlighted the predominance of heterotrophic diazotrophs in this area, and we hypothesize that N2 fixation could be directly limited by inorganic nutrient availability, or indirectly through the stimulation of primary production and the subsequent excretion of dissolved organic matter and/or the formation of micro\u2010environments favorable for heterotrophic N2 fixation.", "which Sampling year ?", "2011", 151.0, 155.0], ["Abstract. We have determined the latitudinal distribution of Trichodesmium spp. abundance and community N2 fixation in the Atlantic Ocean along a meridional transect from ca. 30\u00b0 N to 30\u00b0 S in November\u2013December 2007 and April\u2013May 2008. The observations from both cruises were highly consistent in terms of absolute magnitude and latitudinal distribution, showing a strong association between Trichodesmium abundance and community N2 fixation. The highest Trichodesmium abundances (mean = 220 trichomes L\u22121,) and community N2 fixation rates (mean = 60 \u03bcmol m\u22122 d\u22121) occurred in the Equatorial region between 5\u00b0 S\u201315\u00b0 N. In the South Atlantic gyre, Trichodesmium abundance was very low (ca. 1 trichome L\u22121) but N2 fixation was always measurable, averaging 3 and 10 \u03bcmol m2 d\u22121 in 2007 and 2008, respectively. We suggest that N2 fixation in the South Atlantic was sustained by other, presumably unicellular, diazotrophs. Comparing these distributions with the geographical pattern in atmospheric dust deposition points to iron supply as the main factor determining the large scale latitudinal variability of Trichodesmium spp. abundance and N2 fixation in the Atlantic Ocean. We observed a marked South to North decrease in surface phosphate concentration, which argues against a role for phosphorus availability in controlling the large scale distribution of N2 fixation. Scaling up from all our measurements (42 stations) results in conservative estimates for total N2 fixation of \u223c6 TgN yr\u22121 in the North Atlantic (0\u201340\u00b0 N) and ~1.2 TgN yr\u22121 in the South Atlantic (0\u201340\u00b0 S).", "which Sampling year ?", "2008", 230.0, 234.0], ["We encountered an extensive surface bloom of the N2 fixing cyanobacterium Trichodesmium erythraeum in the central basin of the Arabian Sea during the spring intermonsoon of 1995. The bloom, which occurred during a period of calm winds and relatively high atmospheric iron content, was metabolically active. Carbon fixation by the bloom represented about one-quarter of water column primary productivity while input by N2 fixation could account for a major fraction of the estimated 'new' N demand of primary production. Isotopic measurements of the N in surface suspended material confirmed a direct contribution of N2 fixation to the organic nitrogen pools of the upper water column. Retrospective analysis of NOAA-12 AVHRR imagery indicated that blooms covered up to 2 \u00d7 106 km2, or 20% of the Arabian Sea surface, during the period from 22 to 27 May 1995. In addition to their biogeochemical impact, surface blooms of this extent may have secondary effects on sea surface albedo and light penetration as well as heat and gas exchange across the air-sea interface. 
A preliminary extrapolation based on our observed, non-bloom rates of N2 fixation from our limited sampling in the spring intermonsoon, including a conservative estimate of the input by blooms, suggests N2 fixation may account for an input of about 1 Tg N yr-1. This is substantial, but relatively minor compared to current estimates of the removal of N through denitrification in the basin. However, N2 fixation may also occur in the central basin through the mild winter monsoon, be considerably greater during the fall intermonsoon than we observed during the spring intermonsoon, and may also occur at higher levels in the chronically oligotrophic southern basin. Ongoing satellite observations will help to determine more accurately the distribution and density of Trichodesmium in this and other tropical oceanic basins, as well as resolving the actual frequency and duration of bloom occurrence.", "which Sampling period ?", "May", 851.0, 854.0], ["Dissolved and atmospheric nitrous oxide (N2O) were measured on the legs 3 and 5 of the R/V Meteor cruise 32 in the Arabian Sea. A cruise track along 65\u00b0E was followed during both the intermonsoon (May 1995) and the southwest (SW) monsoon (July/August 1995) periods. During the second leg the coastal and open ocean upwelling regions off the Arabian Peninsula were also investigated. Mean N2O saturations for the oceanic regions of the Arabian Sea were in the range of 99\u2013103% during the intermonsoon and 103\u2013230% during the SW monsoon. Computed annual emissions of 0.8\u20131.5 Tg N2O for the Arabian Sea are considerably higher than previous estimates, indicating that the role of upwelling regions, such as the Arabian Sea, may be more important than previously assumed in global budgets of oceanic N2O emissions.", "which Sampling period ?", "May", 197.0, 200.0], ["The Orientale basin is a multiring impact structure on the western limb of the Moon that provides a clear view of the primary lunar crust exposed during basin formation. Previously, near\u2010infrared reflectance spectra suggested that Orientale's Inner Rook Ring (IRR) is very poor in mafic minerals and may represent anorthosite excavated from the Moon's upper crust. However, detailed assessment of the mineralogy of these anorthosites was prohibited because the available spectroscopic data sets did not identify the diagnostic plagioclase absorption feature near 1250 nm. Recently, however, this absorption has been identified in several spectroscopic data sets, including the Moon Mineralogy Mapper (M3), enabling the unique identification of a plagioclase\u2010dominated lithology at Orientale for the first time. Here we present the first in\u2010depth characterization of the Orientale anorthosites based on direct measurement of their plagioclase component. In addition, detailed geologic context of the exposures is discussed based on analysis of Lunar Reconnaissance Orbiter Narrow Angle Camera images for selected anorthosite identifications. The results confirm that anorthosite is overwhelmingly concentrated in the IRR. Comparison with nonlinear spectral mixing models suggests that the anorthosite is exceedingly pure, containing >95 vol % plagioclase in most areas and commonly ~99\u2013100 vol %. 
These new data place important constraints on magma ocean crystallization scenarios, which must produce a zone of highly pure anorthosite spanning the entire lateral extent of the 430 km diameter IRR.", "which Plagioclase (nm) ?", "1250", 563.0, 567.0], ["This paper examines the impact of overeducation (or surplus schooling) on earnings. Overeducated workers are defined as those with educational attainments substantially above the mean for their specific occupations. Two models are estimated using data from the 1980 census. Though our models, data, and measure of overeducation are different from those used by Rumberger (1987), our results are similar. Our results show that overeducated workers often earn less than their adequately educated and undereducated counterparts.", "which Data collection ?", "1980", 261.0, 265.0], ["The 1991 wave of the British Household Panel Survey is used to examine the extent of, and the returns to overeducation in the UK. About 11% of the workers are overeducated, while another 9% are undereducated for their job. The results show that the allocation of female workers is more efficient than the allocation of males. The probability of being overeducated decreases with work experience, but increases with tenure. Overeducated workers earn less, while undereducated workers earn more than correctly allocated workers. Both the hypothesis that productivity is fully embodied and the hypothesis that productivity is completely job determined are rejected by the data. It is found that there are substantial wage gains obtainable from a more efficient allocation of skills over jobs.", "which Data collection ?", "1991", 4.0, 8.0], ["The objective of this paper is to address the issue of the production cost of second generation biofuels via the thermo-chemical route. The last decade has seen a large number of technical\u2013economic studies of second generation biofuels. As there is a large variation in the announced production costs of second generation biofuels in the literature, this paper clarifies some of the reasons for these variations and helps obtain a clearer picture. This paper presents simulations for two pathways and comparative production pathways previously published in the literature in the years between 2000 and 2011. It also includes a critical comparison and analysis of previously published studies. This paper does not include studies where the production is boosted with a hydrogen injection to improve the carbon yield. The only optimisation included is the recycle of tail gas. It is shown that the fuel can be produced on a large scale at prices of around 1.0\u20131.4 \u20ac per l. Large uncertainties remain however with regard to the precision of the economic predictions, the technology choices, the investment cost estimation and even the financial models to calculate the production costs. The benefit of a tail gas recycle is also examined; its benefit largely depends on the selling price of the produced electricity.", "which Year data ?", "2011", 602.0, 606.0], ["We have generated a large, unique database that includes morphologic, clinical, cytogenetic, and follow-up data from 2124 patients with myelodysplastic syndromes (MDSs) at 4 institutions in Austria and 4 in Germany. Cytogenetic analyses were successfully performed in 2072 (97.6%) patients, revealing clonal abnormalities in 1084 (52.3%) patients. 
Numeric and structural chromosomal abnormalities were documented for each patient and subdivided further according to the number of additional abnormalities. Thus, 684 different cytogenetic categories were identified. The impact of the karyotype on the natural course of the disease was studied in 1286 patients treated with supportive care only. Median survival was 53.4 months for patients with normal karyotypes (n = 612) and 8.7 months for those with complex anomalies (n = 166). A total of 13 rare abnormalities were identified with good (+1/+1q, t(1q), t(7q), del(9q), del(12p), chromosome 15 anomalies, t(17q), monosomy 21, trisomy 21, and -X), intermediate (del(11q), chromosome 19 anomalies), or poor (t(5q)) prognostic impact, respectively. The prognostic relevance of additional abnormalities varied considerably depending on the chromosomes affected. For all World Health Organization (WHO) and French-American-British (FAB) classification system subtypes, the karyotype provided additional prognostic information. Our analyses offer new insights into the prognostic significance of rare chromosomal abnormalities and specific karyotypic combinations in MDS.", "which Number of patients studied ?", "2072", 268.0, 272.0]], "evaluation_set": [["In this paper, we present a QA system enabling NL questions against Linked Data, designed and adopted by the Tor Vergata University AI group in the QALD-3 evaluation. The system integrates lexical semantic modeling and statistical inference within a complex architecture that decomposes the NL interpretation task into a cascade of three different stages: (1) The selection of key ontological information from the question (i.e. predicate, arguments and properties), (2) the location of such salient information in the ontology through the joint disambiguation of the different candidates and (3) the compilation of the final SPARQL query. This architecture characterizes a novel approach for the task and exploits a graphical model (i.e. an Hidden Markov Model) to select the proper ontological triples according to the graph nature of RDF. In particular, for each query an HMM model is produced whose Viterbi solution is the comprehensive joint disambiguation across the sentence elements. The combination of these approaches achieved interesting results in the QALD competition. The RTV is in fact within the group of participants performing slightly below the best system, but with smaller requirements and on significantly poorer input information.", "which implementation ?", "RTV", 1086.0, 1089.0], ["Visualizing Resource Description Framework (RDF) data to support decision-making processes is an important and challenging aspect of consuming Linked Data. With the recent development of JavaScript libraries for data visualization, new opportunities for Web-based visualization of Linked Data arise. This paper presents an extensive evaluation of JavaScript-based libraries for visualizing RDF data. A set of criteria has been devised for the evaluation and 15 major JavaScript libraries have been analyzed against the criteria. The two JavaScript libraries with the highest score in the evaluation acted as the basis for developing LODWheel (Linked Open Data Wheel) - a prototype for visualizing Linked Open Data in graphs and charts - introduced in this paper. 
This way of visualizing RDF data leads to a great deal of challenges related to data-categorization and connecting data resources together in new ways, which are discussed in this paper.", "which implementation ?", "LODWheel", 633.0, 641.0], ["Querying the Semantic Web and analyzing the query results are often complex tasks that can be greatly facilitated by visual interfaces. A major challenge in the design of these interfaces is to provide intuitive and efficient interaction support without limiting too much the analytical degrees of freedom. This paper introduces SemLens, a visual tool that combines scatter plots and semantic lenses to overcome this challenge and to allow for a simple yet powerful analysis of RDF data. The scatter plots provide a global overview on an object collection and support the visual discovery of correlations and patterns in the data. The semantic lenses add dimensions for local analysis of subsets of the objects. A demo accessing DBpedia data is used for illustration.", "which implementation ?", "SemLens", 329.0, 336.0], ["Clustered graph visualization techniques are an easy to understand way of hiding complex parts of a visualized graph when they are not needed by the user. When visualizing RDF, there are several situations where such clusters are defined in a very natural way. Using this techniques, we can give the user optional access to some detailed information without unnecessarily occupying space in the basic view of the data. This paper describes algorithms for clustered visualization used in the Trisolda RDF visualizer. Most notable is the newly added clustered navigation technique.", "which implementation ?", "Trisolda", 491.0, 499.0], ["In an effort to optimize visualization and editing of OWL ontologies we have developed GrOWL: a browser and visual editor for OWL that accurately visualizes the underlying DL semantics of OWL ontologies while avoiding the difficulties of the verbose OWL syntax. In this paper, we discuss GrOWL visualization model and the essential visualization techniques implemented in GrOWL.", "which implementation ?", "GrOWL", 87.0, 92.0], ["QAnswer is a question answering system that uses DBpedia as a knowledge base and converts natural language questions into a SPARQL query. In order to improve the match between entities and relations and natural language text, we make use of Wikipedia to extract lexicalizations of the DBpedia entities and then match them with the question. These entities are validated on the ontology, while missing ones can be inferred. The proposed system was tested in the QALD-5 challenge and it obtained a F1 score of 0.30, which placed QAnswer in the second position in the challenge, despite the fact that the system used only a small subset of the properties in DBpedia, due to the long extraction process.", "which implementation ?", "QAnswer", 0.0, 7.0], ["With the development of Semantic Web in recent years, an increasing amount of semantic data has been created in form of Resource Description Framework (RDF). Current visualization techniques help users quickly understand the underlying RDF data by displaying its structure in an overview. However, detailed information can only be accessed by further navigation. An alternative approach is to display the global context as well as the local details simultaneously in a unified view. This view supports the visualization and navigation on RDF data in an integrated way. 
In this demonstration, we present ZoomRDF, a framework that: i) adapts a space-optimized visualization algorithm for RDF, which allows more resources to be displayed, thus maximizes the utilization of display space, ii) combines the visualization with a fisheye zooming concept, which assigns more space to some individual nodes while still preserving the overview structure of the data, iii) considers both the importance of resources and the user interaction on them, which offers more display space to those elements the user may be interested in. We implement the framework based on the Gene Ontology and demonstrate that it facilitates tasks like RDF data exploration and editing.", "which implementation ?", "ZoomRDF", 603.0, 610.0], ["This paper consists of three parts: a preliminary typology of summaries in general; a description of the current and planned modules and performance of the SUMMARIST automated multilingual text summarization system being built at ISI, and a discussion of three methods to evaluate summaries.", "which implementation ?", "SUMMARIST", 156.0, 165.0], ["Constructing focused, context-based multi-document summaries requires an analysis of the context questions, as well as their corresponding document sets. We present a fuzzy cluster graph algorithm that finds entities and their connections between context and documents based on fuzzy coreference chains and describe the design and implementation of the ERSS summarizer implementing these ideas.", "which implementation ?", "ERSS summarizer", 353.0, 368.0], ["The multi-document summarizer using genetic algorithm-based sentence extraction (MSBGA) regards summarization process as an optimization problem where the optimal summary is chosen among a set of summaries formed by the conjunction of the original articles sentences. To solve the NP hard optimization problem, MSBGA adopts genetic algorithm, which can choose the optimal summary on global aspect. The evaluation function employs four features according to the criteria of a good summary: satisfied length, high coverage, high informativeness and low redundancy. To improve the accuracy of term frequency, MSBGA employs a novel method TFS, which takes word sense into account while calculating term frequency. The experiments on DUC04 data show that our strategy is effective and the ROUGE-1 score is only 0.55% lower than the best participant in DUC04", "which implementation ?", "MSBGA", 81.0, 86.0], ["In this paper we propose LODeX, a tool that produces a representative summary of a Linked open Data (LOD) source starting from scratch, thus supporting users in exploring and understanding the contents of a dataset. The tool takes in input the URL of a SPARQL endpoint and launches a set of predefined SPARQL queries, from the results of the queries it generates a visual summary of the source. The summary reports statistical and structural information of the LOD dataset and it can be browsed to focus on particular classes or to explore their properties and their use. LODeX was tested on the 137 public SPARQL endpoints contained in Data Hub (formerly CKAN), one of the main Open Data catalogues. The statistical and structural information extraction was successfully performed on 107 sources, among these the most significant ones are included in the online version of the tool.", "which implementation ?", "LODeX", 25.0, 30.0], ["This paper presents a flexible method to enrich and populate an existing OWL ontology from XML data. 
Basic mapping rules are defined in order to specify the conversion rules on properties. Advanced mapping rules are defined on XML schemas and OWL XML schema elements in order to define rules for the population process. In addition, this flexible method allows users to reuse rules for other conversions and populations.", "which Input format ?", "XML schema", 248.0, 258.0], ["One of the promises of the Semantic Web is to support applications that easily and seamlessly deal with heterogeneous data. Most data on the Web, however, is in the Extensible Markup Language (XML) format, but using XML requires applications to understand the format of each data source that they access. To achieve the benefits of the Semantic Web involves transforming XML into the Semantic Web language, OWL (Ontology Web Language), a process that generally has manual or only semi-automatic components. In this paper we present a set of patterns that enable the direct, automatic transformation from XML Schema into OWL allowing the integration of much XML data in the Semantic Web. We focus on an advanced logical representation of XML Schema components and present an implementation, including a comparison with related work.", "which Input format ?", "XML schema", 604.0, 614.0], ["DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.", "which Input format ?", "DTD", 0.0, 3.0], ["DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. 
Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.", "which Input format ?", "XML instances", 1291.0, 1304.0], ["With the increase in smart devices and abundance of video contents, efficient techniques for the indexing, analysis and retrieval of videos are becoming more and more desirable. Improved indexing and automated analysis of millions of videos could be accomplished by getting videos tagged automatically. A lot of existing methods fail to precisely tag videos because of their lack of ability to capture the video context. The context in a video represents the interactions of objects in a scene and their overall meaning. In this work, we propose a novel approach that integrates the video scene ontology with CNN (Convolutional Neural Network) for improved video tagging. Our method captures the content of a video by extracting the information from individual key frames. The key frames are then fed to a CNN based deep learning model to train its parameters. The trained parameters are used to generate the most frequent tags. Highly frequent tags are used to summarize the input video. The proposed technique is benchmarked on the most widely used dataset of video activities, namely, UCF-101. Our method managed to achieve an overall accuracy of 99.8% with an F1-score of 96.2%.", "which Input format ?", "Video", 52.0, 57.0], ["Significant amounts of knowledge in science and technology have so far not been published as Linked Open Data but are contained in the text and tables of legacy PDF publications. Making such information available as RDF would, for example, provide direct access to claims and facilitate surveys of related work. A lot of valuable tabular information that till now only existed in PDF documents would also finally become machine understandable. Instead of studying scientific literature or engineering patents for months, it would be possible to collect such input by simple SPARQL queries. The SemAnn approach enables collaborative annotation of text and tables in PDF documents, a format that is still the common denominator of publishing, thus maximising the potential user base. The resulting annotations in RDF format are available for querying through a SPARQL endpoint. To incentivise users with an immediate benefit for making the effort of annotation, SemAnn recommends related papers, taking into account the hierarchical context of annotations in a novel way. We evaluated the usability of SemAnn and the usefulness of its recommendations by analysing annotations resulting from tasks assigned to test users and by interviewing them. 
While the evaluation shows that even few annotations lead to a good recall, we also observed unexpected, serendipitous recommendations, which confirms the merit of our low-threshold annotation support for the crowd.", "which Output format ?", "RDF", 216.0, 219.0], ["In this work, we offer an approach to combine standard multimedia analysis techniques with knowledge drawn from conceptual metadata provided by domain experts of a specialized scholarly domain, to learn a domain-specific multimedia ontology from a set of annotated examples. A standard Bayesian network learning algorithm that learns structure and parameters of a Bayesian network is extended to include media observables in the learning. An expert group provides domain knowledge to construct a basic ontology of the domain as well as to annotate a set of training videos. These annotations help derive the associations between high-level semantic concepts of the domain and low-level MPEG-7 based features representing audio-visual content of the videos. We construct a more robust and refined version of this ontology by learning from this set of conceptually annotated videos. To encode this knowledge, we use MOWL, a multimedia extension of Web Ontology Language (OWL) which is capable of describing domain concepts in terms of their media properties and of capturing the inherent uncertainties involved. We use the ontology specified knowledge for recognizing concepts relevant to a video to annotate fresh addition to the video database with relevant concepts in the ontology. These conceptual annotations are used to create hyperlinks in the video collection, to provide an effective video browsing interface to the user.", "which Output format ?", "OWL", 969.0, 972.0], ["The aims of XML data conversion to ontologies are the indexing, integration and enrichment of existing ontologies with knowledge acquired from these sources. The contribution of this paper consists in providing a classification of the approaches used for the conversion of XML documents into OWL ontologies. This classification underlines the usage profile of each conversion method, providing a clear description of the advantages and drawbacks belonging to each method. Hence, this paper focuses on two main processes, which are ontology enrichment and ontology population using XML data. Ontology enrichment is related to the schema of the ontology (TBox), and ontology population is related to an individual (Abox). In addition, the ontologies described in these methods are based on formal languages of the Semantic Web such as OWL (Ontology Web Language) or RDF (Resource Description Framework). These languages are formal because the semantics are formally defined and take advantage of the Description Logics. In contrast, XML data sources are without formal semantics. The XML language is used to store, export and share data between processes able to process the specific data structure. However, even if the semantics is not explicitly expressed, data structure contains the universe of discourse by using a qualified vocabulary regarding a consensual agreement. In order to formalize this semantics, the OWL language provides rich logical constraints. Therefore, these logical constraints are evolved in the transformation of XML documents into OWL documents, allowing the enrichment and the population of the target ontology. To design such a transformation, the current research field establishes connections between OWL constructs (classes, predicates, simple or complex data types, etc.) 
and XML constructs (elements, attributes, element lists, etc.). Two different approaches for the transformation process are exposed. The instance approaches are based on XML documents without any schema associated. The validation approaches are based on the XML schema and document validated by the associated schema. The second approaches benefit from the schema definition to provide automated transformations with logic constraints. Both approaches are discussed in the text.", "which Output format ?", "OWL", 292.0, 295.0], ["As the Semantic Web initiative gains momentum, a fundamental problem of integrating existing data-intensive WWW applications into the Semantic Web emerges. In order for today\u2019s relational database supported Web applications to transparently participate in the Semantic Web, their associated database schemas need to be converted into semantically equivalent ontologies. In this paper we present a solution to an important special case of the automatic mapping problem with wide applicability: mapping well-formed Entity-Relationship (ER) schemas to semantically equivalent OWL Lite ontologies. We present a set of mapping rules that fully capture the ER schema semantics, along with an overview of an implementation of the complete mapping algorithm integrated into the current SFSU ER Design Tools software.", "which Output format ?", "OWL", 573.0, 576.0], ["Name ambiguity has long been viewed as a challenging problem in many applications, such as scientific literature management, people search, and social network analysis. When we search a person name in these systems, many documents (e.g., papers, web pages) containing that person's name may be returned. It is hard to determine which documents are about the person we care about. Although much research has been conducted, the problem remains largely unsolved, especially with the rapid growth of the people information available on the Web. In this paper, we try to study this problem from a new perspective and propose an ADANA method for disambiguating person names via active user interactions. In ADANA, we first introduce a pairwise factor graph (PFG) model for person name disambiguation. The model is flexible and can be easily extended by incorporating various features. Based on the PFG model, we propose an active name disambiguation algorithm, aiming to improve the disambiguation performance by maximizing the utility of the user's correction. Experimental results on three different genres of data sets show that with only a few user corrections, the error rate of name disambiguation can be reduced to 3.1%. A real system has been developed based on the proposed method and is available online.", "which dataset ?", "Web page", NaN, NaN], ["Extraction of relevant features from high-dimensional multi-way functional MRI (fMRI) data is essential for the classification of a cognitive task. In general, fMRI records a combination of neural activation signals and several other noisy components. Alternatively, fMRI data is represented as a high dimensional array using a number of voxels, time instants, and snapshots. The organisation of fMRI data includes a number of Region Of Interests (ROI), snapshots, and thousand of voxels. The crucial step in cognitive task classification is a reduction of feature size through feature selection. Extraction of a specific pattern of interest within the noisy components is a challenging task. Tensor decomposition techniques have found several applications in the scientific fields. 
In this paper, a novel tensor gradient-based feature extraction technique for cognitive task classification is proposed. The technique has efficiently been applied on StarPlus fMRI data. Also, the technique has been used to discriminate the ROIs in fMRI data in terms of cognitive state classification. The method has been achieved a better average accuracy when compared to other existing feature extraction methods.", "which dataset ?", "StarPlus fMRI data", 950.0, 968.0], ["This paper presents a new challenging information extraction task in the domain of materials science. We develop an annotation scheme for marking information on experiments related to solid oxide fuel cells in scientific publications, such as involved materials and measurement conditions. With this paper, we publish our annotation guidelines, as well as our SOFC-Exp corpus consisting of 45 open-access scholarly articles annotated by domain experts. A corpus and an inter-annotator agreement study demonstrate the complexity of the suggested named entity recognition and slot filling tasks as well as high annotation quality. We also present strong neural-network based models for a variety of tasks that can be addressed on the basis of our new data set. On all tasks, using BERT embeddings leads to large performance gains, but with increasing task complexity, adding a recurrent neural network on top seems beneficial. Our models will serve as competitive baselines in future work, and analysis of their performance highlights difficult cases when modeling the data and suggests promising research directions.", "which dataset ?", "SOFC-Exp", 360.0, 368.0], ["Microorganisms are well adapted to their habitat but are partially sensitive to toxic metabolites or abiotic compounds secreted by other organisms or chemically formed under the respective environmental conditions. Thermoacidophiles are challenged by pyroglutamate, a lactam that is spontaneously formed by cyclization of glutamate under aerobic thermoacidophilic conditions. It is known that growth of the thermoacidophilic crenarchaeon Saccharolobus solfataricus (formerly Sulfolobus solfataricus) is completely inhibited by pyroglutamate. In the present study, we investigated the effect of pyroglutamate on the growth of S. solfataricus and the closely related crenarchaeon Sulfolobus acidocaldarius. In contrast to S. solfataricus, S. acidocaldarius was successfully cultivated with pyroglutamate as a sole carbon source. Bioinformatical analyses showed that both members of the Sulfolobaceae have at least one candidate for a 5-oxoprolinase, which catalyses the ATP-dependent conversion of pyroglutamate to glutamate. In S. solfataricus, we observed the intracellular accumulation of pyroglutamate and crude cell extract assays showed a less effective degradation of pyroglutamate. Apparently, S. acidocaldarius seems to be less versatile regarding carbohydrates and prefers peptidolytic growth compared to S. solfataricus. Concludingly, S. acidocaldarius exhibits a more efficient utilization of pyroglutamate and is not inhibited by this compound, making it a better candidate for applications with glutamate-containing media at high temperatures.", "which has research problem ?", "Utilization of pyroglutamate", 1388.0, 1416.0], ["Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. 
Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179\u00b122 nm were produced. Res was loaded with high EE of 73\u00b10.9% and DL content of 6.2\u00b10.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.", "which has research problem ?", "Breast cancer", 13.0, 26.0], ["Purpose \u2013 To develop an analytical framework through which the organizational cultural dimension of enterprise resource planning (ERP) implementations can be analyzed.Design/methodology/approach \u2013 This paper is primarily based on a review of the literature.Findings \u2013 ERP is an enterprise system that offers, to a certain extent, standard business solutions. This standardization is reinforced by two processes: ERP systems are generally implemented by intermediary IT organizations, mediating between the development of ERP\u2010standard software packages and specific business domains of application; and ERP systems integrate complex networks of production divisions, suppliers and customers.Originality/value \u2013 In this paper, ERP itself is presented as problematic, laying heavy burdens on organizations \u2013 ERP is a demanding technology. While in some cases recognizing the mutual shaping of technology and organization, research into ERP mainly addresses the economic\u2010technological rationality of ERP (i.e. matters of eff...", "which has research problem ?", "Enterprise resource planning", 100.0, 128.0], ["We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. 
We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.", "which has research problem ?", "Sentiment Analysis", 601.0, 619.0], ["A self-referencing dual fluorescing carbon dot-based nanothermometer can ratiometrically sense thermal events in HeLa cells with very high sensitivity.", "which has research problem ?", "Nanothermometer", 53.0, 68.0], ["This study is the first attempt that assembled published academic work on critical success factors (CSFs) in supply chain management (SCM) fields. The purpose of this study are to review the CSFs in SCM and to uncover the major CSFs that are apparent in SCM literatures. This study apply literature survey techniques from published CSFs studies in SCM. A collection of 42 CSFs studies in various SCM fields are obtained from major databases. The search uses keywords such as supply chain management, critical success factors, logistics management and supply chain drivers and barriers. From the literature survey, four major CSFs are proposed. The factors are collaborative partnership, information technology, top management support and human resource. It is hoped that this review will serve as a platform for future research in SCM and CSFs studies. Plus, this study contribute to existing SCM knowledge and further appraise the concept of CSFs.", "which has research problem ?", "Supply chain management", 109.0, 132.0], ["Unsupervised neural machine translation (NMT) is a recently proposed approach for machine translation which aims to train the model without using any labeled data. The models proposed for unsupervised NMT often use only one shared encoder to map the pairs of sentences from different languages to a shared-latent space, which is weak in keeping the unique and internal characteristics of each language, such as the style, terminology, and sentence structure. To address this issue, we introduce an extension by utilizing two independent encoders but sharing some partial weights which are responsible for extracting high-level representations of the input sentences. Besides, two different generative adversarial networks (GANs), namely the local GAN and global GAN, are proposed to enhance the cross-language translation. With this new approach, we achieve significant improvements on English-German, English-French and Chinese-to-English translation tasks.", "which has research problem ?", "Machine Translation", 20.0, 39.0], ["This paper examines the impact of exchange rate volatility on trade flows in the U.K. over the period 1990\u20132000. According to the conventional approach, exchange rate volatility clamps down trade volumes. This paper, however, identifies the existence of a positive relationship between exchange rate volatility and imports in the U.K. in the 1990s by using a bivariate GARCH-in-mean model. It highlights a possible emergence of a polarized version with conventional proposition that ERV works as an impediment factor on trade flows.", "which has research problem ?", "Exchange rate volatility", 34.0, 58.0], ["Recently, substantial progress has been made in language modeling by using deep neural networks. 
However, in practice, large scale neural language models have been shown to be prone to overfitting. In this paper, we present a simple yet highly effective adversarial training mechanism for regularizing neural language models. The idea is to introduce adversarial noise to the output embedding layer while training the models. We show that the optimal adversarial noise yields a simple closed-form solution, thus allowing us to develop a simple and time efficient algorithm. Theoretically, we show that our adversarial mechanism effectively encourages the diversity of the embedding vectors, helping to increase the robustness of models. Empirically, we show that our method improves on the single model state-of-the-art results for language modeling on Penn Treebank (PTB) and Wikitext-2, achieving test perplexity scores of 46.01 and 38.07, respectively. When applied to machine translation, our method improves over various transformer-based translation baselines in BLEU scores on the WMT14 English-German and IWSLT14 German-English tasks.", "which has research problem ?", "Machine Translation", 972.0, 991.0], ["ABSTRACT The research related to Enterprise Resource Planning (ERP) has grown over the past several years. This growing body of ERP research results in an increased need to review this extant literature with the intent of identifying gaps and thus motivate researchers to close this breach. Therefore, this research was intended to critique, synthesize and analyze both the content (e.g., topics, focus) and processes (i.e., methods) of the ERP literature, and then enumerates and discusses an agenda for future research efforts. To accomplish this, we analyzed 49 ERP articles published (1999-2004) in top Information Systems (IS) and Operations Management (OM) journals. We found an increasing level of activity during the 5-year period and a slightly biased distribution of ERP articles targeted at IS journals compared to OM. We also found several research methods either underrepresented or absent from the pool of ERP research. We identified several areas of need within the ERP literature, none more prevalent than the need to analyze ERP within the context of the supply chain. INTRODUCTION Davenport (1998) described the strengths and weaknesses of using Enterprise Resource Planning (ERP). He called attention to the growth of vendors like SAP, Baan, Oracle, and People-Soft, and defined this software as, \"...the seamless integration of all the information flowing through a company - financial and accounting information, human resource information, supply chain information, and customer information.\" (Davenport, 1998). Since the time of that article, there has been a growing interest among researchers and practitioners in how organizations implement and use ERP systems (Amoako-Gyampah and Salam, 2004; Bendoly and Jacobs, 2004; Gattiker and Goodhue, 2004; Lander, Purvis, McCray and Leigh, 2004; Luo and Strong, 2004; Somers and Nelson, 2004; Zoryk-Schalla, Fransoo and de Kok, 2004). This interest is a natural continuation of trends in Information Technology (IT), such as MRP II, (Olson, 2004; Teltumbde, 2000; Toh and Harding, 1999) and in business practice improvement research, such as continuous process improvement and business process reengineering (Markus and Tanis, 2000; Ng, Ip and Lee, 1999; Reijers, Limam and van der Aalst, 2003; Toh and Harding, 1999). 
This growing body of ERP research results in an increased need to review this extant literature with the intent of \"identifying critical knowledge gaps and thus motivate researchers to close this breach\" (Webster and Watson, 2002). Also, as noted by Scandura & Williams (2000), in order for research to advance, the methods used by researchers must periodically be evaluated to provide insights into the methods utilized and thus the areas of need. These two interrelated needs provide the motivation for this paper. In essence, this research critiques, synthesizes and analyzes both the content (e.g., topics, focus) and processes (i.e., methods) of the ERP literature and then enumerates and discusses an agenda for future research efforts. The remainder of the paper is organized as follows: Section 2 describes the approach to the analysis of the ERP research. Section 3 contains the results and a review of the literature. Section 4 discusses our findings and the needs relative to future ERP research efforts. Finally, section 5 summarizes the research. RESEARCH STUDY We captured the trends pertaining to (1) the number and distribution of ERP articles published in the leading journals, (2) methodologies employed in ERP research, and (3) emphasis relative to topic of ERP research. During the analysis of the ERP literature, we identified gaps and needs in the research and therefore enumerate and discuss a research agenda which allows the progression of research (Webster and Watson, 2002). In short, we sought to paint a representative landscape of the current ERP literature base in order to influence the direction of future research efforts relative to ERP. \u2026", "which has research problem ?", "Enterprise resource planning", 33.0, 61.0], ["This paper studies the importance of identifying and categorizing scientific concepts as a way to achieve a deeper understanding of the research literature of a scientific community. To reach this goal, we propose an unsupervised bootstrapping algorithm for identifying and categorizing mentions of concepts. We then propose a new clustering algorithm that uses citations' context as a way to cluster the extracted mentions into coherent concepts. Our evaluation of the algorithms against gold standards shows significant improvement over state-of-the-art results. More importantly, we analyze the computational linguistic literature using the proposed algorithms and show four different ways to summarize and understand the research community which are difficult to obtain using existing techniques.", "which has research problem ?", "Identifying and categorizing mentions of concepts", 258.0, 307.0], ["Smart cities offer services to their inhabitants which make everyday life easier beyond providing a feedback channel to the city administration. For instance, a live timetable service for public transportation or real-time traffic jam notification can increase the efficiency of travel planning substantially. Traditionally, the implementation of these smart city services require the deployment of some costly sensing and tracking infrastructure. As an alternative, the crowd of inhabitants can be involved in data collection via their mobile devices. This emerging paradigm is called mobile crowd-sensing or participatory sensing. In this paper, we present our generic framework built upon XMPP (Extensible Messaging and Presence Protocol) for mobile participatory sensing based smart city applications. 
After giving a short description of this framework we show three use-case smart city application scenarios, namely a live transit feed service, a soccer intelligence agency service and a smart campus application, which are currently under development on top of our framework.", "which has research problem ?", "Participatory Sensing", 610.0, 631.0], ["Software testing is a crucial measure used to assure the quality of software. Path testing can detect bugs earlier because of it performs higher error coverage. This paper presents a model of generating test data based on an improved ant colony optimization and path coverage criteria. Experiments show that the algorithm has a better performance than other two algorithms and improve the efficiency of test data generation notably.", "which has research problem ?", "Ant Colony Optimization", 234.0, 257.0], ["While modern machine translation has relied on large parallel corpora, a recent line of work has managed to train Neural Machine Translation (NMT) systems from monolingual corpora only (Artetxe et al., 2018c; Lample et al., 2018). Despite the potential of this approach for low-resource settings, existing systems are far behind their supervised counterparts, limiting their practical interest. In this paper, we propose an alternative approach based on phrase-based Statistical Machine Translation (SMT) that significantly closes the gap with supervised systems. Our method profits from the modular architecture of SMT: we first induce a phrase table from monolingual corpora through cross-lingual embedding mappings, combine it with an n-gram language model, and fine-tune hyperparameters through an unsupervised MERT variant. In addition, iterative backtranslation improves results further, yielding, for instance, 14.08 and 26.22 BLEU points in WMT 2014 English-German and English-French, respectively, an improvement of more than 7-10 BLEU points over previous unsupervised systems, and closing the gap with supervised SMT (Moses trained on Europarl) down to 2-5 BLEU points. Our implementation is available at https://github.com/artetxem/monoses.", "which has research problem ?", "Machine Translation", 13.0, 32.0], ["Trace chemical detection is important for a wide range of practical applications. Recently emerged two-dimensional (2D) crystals offer unique advantages as potential sensing materials with high sensitivity, owing to their very high surface-to-bulk atom ratios and semiconducting properties. Here, we report the first use of Schottky-contacted chemical vapor deposition grown monolayer MoS2 as high-performance room temperature chemical sensors. The Schottky-contacted MoS2 transistors show current changes by 2-3 orders of magnitude upon exposure to very low concentrations of NO2 and NH3. Specifically, the MoS2 sensors show clear detection of NO2 and NH3 down to 20 ppb and 1 ppm, respectively. We attribute the observed high sensitivity to both well-known charge transfer mechanism and, more importantly, the Schottky barrier modulation upon analyte molecule adsorption, the latter of which is made possible by the Schottky contacts in the transistors and is not reported previously for MoS2 sensors. 
This study shows the potential of 2D semiconductors as high-performance sensors and also benefits the fundamental studies of interfacial phenomena and interactions between chemical species and monolayer 2D semiconductors.", "which has research problem ?", "Chemical sensors", 427.0, 443.0], ["The growth of the Web in recent years has resulted in the development of various online platforms that provide healthcare information services. These platforms contain an enormous amount of information, which could be beneficial for a large number of people. However, navigating through such knowledgebases to answer specific queries of healthcare consumers is a challenging task. A majority of such queries might be non-factoid in nature, and hence, traditional keyword-based retrieval models do not work well for such cases. Furthermore, in many scenarios, it might be desirable to get a short answer that sufficiently answers the query, instead of a long document with only a small amount of useful information. In this paper, we propose a neural network model for ranking documents for question answering in the healthcare domain. The proposed model uses a deep attention mechanism at word, sentence, and document levels, for efficient retrieval for both factoid and non-factoid queries, on documents of varied lengths. Specifically, the word-level cross-attention allows the model to identify words that might be most relevant for a query, and the hierarchical attention at sentence and document levels allows it to do effective retrieval on both long and short documents. We also construct a new large-scale healthcare question-answering dataset, which we use to evaluate our model. Experimental evaluation results against several state-of-the-art baselines show that our model outperforms the existing retrieval techniques.", "which has research problem ?", "Question Answering ", 790.0, 809.0], ["This paper provides a working definition of what the middle-income trap is. We start by defining four income groups of GDP per capita in 1990 PPP dollars: low-income below $2,000; lower-middle-income between $2,000 and $7,250; upper-middle-income between $7,250 and $11,750; and high-income above $11,750. We then classify 124 countries for which we have consistent data for 1950\u20132010. In 2010, there were 40 low-income countries in the world, 38 lower-middle-income, 14 upper-middle-income, and 32 high-income countries. Then we calculate the threshold number of years for a country to be in the middle-income trap: a country that becomes lower-middle-income (i.e., that reaches $2,000 per capita income) has to attain an average growth rate of per capita income of at least 4.7 percent per annum to avoid falling into the lower-middle-income trap (i.e., to reach $7,250, the upper-middle-income threshold); and a country that becomes upper-middle-income (i.e., that reaches $7,250 per capita income) has to attain an average growth rate of per capita income of at least 3.5 percent per annum to avoid falling into the upper-middle-income trap (i.e., to reach $11,750, the high-income level threshold). Avoiding the middle-income trap is, therefore, a question of how to grow fast enough so as to cross the lower-middle-income segment in at most 28 years, and the upper-middle-income segment in at most 14 years. 
Finally, the paper proposes and analyzes one possible reason why some countries get stuck in the middle-income trap: the role played by the changing structure of the economy (from low-productivity activities into high-productivity activities), the types of products exported (not all products have the same consequences for growth and development), and the diversification of the economy. We compare the exports of countries in the middle-income trap with those of countries that graduated from it, across eight dimensions that capture different aspects of a country\u2019s capabilities to undergo structural transformation, and test whether they are different. Results indicate that, in general, they are different. We also compare Korea, Malaysia, and the Philippines according to the number of products that each exports with revealed comparative advantage. We find that while Korea was able to gain comparative advantage in a significant number of sophisticated products and was well connected, Malaysia and the Philippines were able to gain comparative advantage in electronics only.", "which has research problem ?", "Middle-Income Trap", 53.0, 71.0], ["Cities form the heart of a dynamic society. In an open space-economy cities have to mobilize all of their resources to remain attractive and competitive. Smart cities depend on creative and knowledge resources to maximize their innovation potential. This study offers a comparative analysis of nine European smart cities on the basis of an extensive database covering two time periods. After conducting a principal component analysis, a new approach, based on a self-organizing map analysis, is adopted to position the various cities under consideration according to their selected \u201csmartness\u201d performance indicators.", "which has research problem ?", "Smart cities", 154.0, 166.0], ["Debates about the future of urban development in many Western countries have been increasingly influenced by discussions of smart cities. Yet despite numerous examples of this \u2018urban labelling\u2019 phenomenon, we know surprisingly little about so\u2010called smart cities, particularly in terms of what the label ideologically reveals as well as hides. Due to its lack of definitional precision, not to mention an underlying self\u2010congratulatory tendency, the main thrust of this article is to provide a preliminary critical polemic against some of the more rhetorical aspects of smart cities. The primary focus is on the labelling process adopted by some designated smart cities, with a view to problematizing a range of elements that supposedly characterize this new urban form, as well as question some of the underlying assumptions/contradictions hidden within the concept. To aid this critique, the article explores to what extent labelled smart cities can be understood as a high\u2010tech variation of the \u2018entrepreneurial city\u2019, as well as speculates on some general principles which would make them more progressive and inclusive.", "which has research problem ?", "Smart cities", 124.0, 136.0], ["A consumer-dependent (business-to-consumer) organization tends to present itself as possessing a set of human qualities, which is termed the brand personality of the company. The perception is impressed upon the consumer through the content, be it in the form of advertisement, blogs, or magazines, produced by the organization. A consistent brand will generate trust and retain customers over time as they develop an affinity toward regularity and common patterns. 
However, maintaining a consistent messaging tone for a brand has become more challenging with the virtual explosion in the amount of content that needs to be authored and pushed to the Internet to maintain an edge in the era of digital marketing. To understand the depth of the problem, we collect around 300K web pages from around 650 companies. We develop trait-specific classification models by considering the linguistic features of the content. The classifier automatically identifies the web articles that are not consistent with the mission and vision of a company and further helps us to discover the conditions under which the consistency cannot be maintained. To address the brand inconsistency issue, we then develop a sentence ranking system that outputs the top three sentences that need to be changed to make a web article more consistent with the company\u2019s brand personality.", "which has research problem ?", "Sentence Ranking", 1203.0, 1219.0], ["Instruments play an essential role in creating research data. Given the importance of instruments and associated metadata to the assessment of data quality and data reuse, globally unique, persistent and resolvable identification of instruments is crucial. The Research Data Alliance Working Group Persistent Identification of Instruments (PIDINST) developed a community-driven solution for persistent identification of instruments, which we present and discuss in this paper. Based on an analysis of 10 use cases, PIDINST developed a metadata schema and prototyped schema implementation with DataCite and ePIC as representative persistent identifier infrastructures and with HZB (Helmholtz-Zentrum Berlin f\u00fcr Materialien und Energie) and BODC (British Oceanographic Data Centre) as representative institutional instrument providers. These implementations demonstrate the viability of the proposed solution in practice. Moving forward, PIDINST will further catalyse adoption and consolidate the schema by addressing new stakeholder requirements.", "which has research problem ?", "Persistent Identification", 298.0, 323.0], ["Goal-oriented dialogue in complex domains is an extremely challenging problem and there are relatively few datasets. This task provided two new resources that presented different challenges: one was focused but small, while the other was large but diverse. We also considered several new variations on the next utterance selection problem: (1) increasing the number of candidates, (2) including paraphrases, and (3) not including a correct option in the candidate set. Twenty teams participated, developing a range of neural network models, including some that successfully incorporated external data to boost performance. Both datasets have been publicly released, enabling future work to build on these results, working towards robust goal-oriented dialogue systems.", "which has research problem ?", "Goal-Oriented Dialogue Systems", 737.0, 767.0], ["General purpose relation extractors, which can model arbitrary relations, are a core aspiration in information extraction. Efforts have been made to build general purpose extractors that represent relations with their surface forms, or which jointly embed surface forms with relations from an existing knowledge graph. However, both of these approaches are limited in their ability to generalize. 
In this paper, we build on extensions of Harris\u2019 distributional hypothesis to relations, as well as recent advances in learning text representations (specifically, BERT), to build task agnostic relation representations solely from entity-linked text. We show that these representations significantly outperform previous work on exemplar based relation extraction (FewRel) even without using any of that task\u2019s training data. We also show that models initialized with our task agnostic representations, and then tuned on supervised relation extraction datasets, significantly outperform the previous methods on SemEval 2010 Task 8, KBP37, and TACRED.", "which has research problem ?", "build task agnostic relation representations solely from entity-linked text", 571.0, 646.0], ["Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meals. However, some of them need human help for calorie estimation, and even if it is automatic, food categories are often limited or images from multiple viewpoints are required. Thus, practical accuracy has not yet been achieved, and estimating food calories from a food photo remains an unsolved problem. Therefore, in this paper, we propose estimating food calories from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning. Since there exists a strong correlation between food calories and food categories, ingredients and cooking directions in general, we expect that simultaneous training of them brings a performance boost compared to independent single training. To this end, we use a multi-task CNN [1]. In addition, in this research, we construct two datasets: a dataset of calorie-annotated recipes collected from Japanese recipe sites on the Web and a dataset collected from an American recipe site. In this experiment, we trained multi-task and single-task CNNs. As a result, the multi-task CNN achieved better performance on both food category estimation and food calorie estimation than the single-task CNNs. For the Japanese recipe dataset, introducing the multi-task CNN improved the correlation coefficient by 0.039, while for the American recipe dataset it improved by 0.090 compared to the result of the single-task CNN.", "which has research problem ?", "Food calorie estimation", 12.0, 35.0], ["The Environmental Kuznets Curve (EKC) hypothesises that emissions first increase at low stages of development, then decrease once a certain threshold has been reached. The EKC concept is usually used with per capita Gross Domestic Product as the explanatory variable. Like others, we find mixed evidence, at best, of such a pattern for CO2 emissions with respect to per capita GDP. We also show that the share of manufacturing in GDP and governance/institutions play a significant role in the CO2 emissions\u2013income relationship. As GDP presents shortcomings in representing income, development in a broad perspective or human well-being, it is then replaced by the World Bank's Adjusted Net Savings (ANS, also known as Genuine Savings). Using the ANS as an explanatory variable, we show that the EKC is generally empirically supported for CO2 emissions. 
We also show that human capital and natural capital are the main drivers of the downward sloping part of the EKC.", "which has research problem ?", "CO2 emissions", 333.0, 346.0], ["Abstract Background The goal of the first BioCreAtIvE challenge (Critical Assessment of Information Extraction in Biology) was to provide a set of common evaluation tasks to assess the state of the art for text mining applied to biological problems. The results were presented in a workshop held in Granada, Spain March 28\u201331, 2004. The articles collected in this BMC Bioinformatics supplement entitled \"A critical assessment of text mining methods in molecular biology\" describe the BioCreAtIvE tasks, systems, results and their independent evaluation. Results BioCreAtIvE focused on two tasks. The first dealt with extraction of gene or protein names from text, and their mapping into standardized gene identifiers for three model organism databases (fly, mouse, yeast). The second task addressed issues of functional annotation, requiring systems to identify specific text passages that supported Gene Ontology annotations for specific proteins, given full text articles. Conclusion The first BioCreAtIvE assessment achieved a high level of international participation (27 groups from 10 countries). The assessment provided state-of-the-art performance results for a basic task (gene name finding and normalization), where the best systems achieved a balanced 80% precision / recall or better, which potentially makes them suitable for real applications in biology. The results for the advanced task (functional annotation from free text) were significantly lower, demonstrating the current limitations of text-mining approaches where knowledge extrapolation and interpretation are required. In addition, an important contribution of BioCreAtIvE has been the creation and release of training and test data sets for both tasks. There are 22 articles in this special issue, including six that provide analyses of results or data quality for the data sets, including a novel inter-annotator consistency assessment for the test set used in task 2.", "which has research problem ?", "gene name finding and normalization", 1182.0, 1217.0], ["Entity linking (EL) is the task of disambiguating mentions appearing in text by linking them to entities in a knowledge graph, a crucial task for text understanding, question answering or conversational systems. In the special case of short-text EL, which poses additional challenges due to limited context, prior approaches have reached good performance by employing heuristics-based methods or purely neural approaches. Here, we take a different, neuro-symbolic approach that combines the advantages of using interpretable rules based on first-order logic with the performance of neural learning. Even though constrained to use rules, we show that we reach competitive or better performance with SoTA black-box neural approaches. Furthermore, our framework has the benefits of extensibility and transferability. We show that we can easily blend existing rule templates given by a human expert, with multiple types of features (priors, BERT encodings, box embeddings, etc), and even with scores resulting from previous EL methods, thus improving on such methods. As an example of improvement, on the LC-QuAD-1.0 dataset, we show more than 3% increase in F1 score relative to previous SoTA. 
Finally, we show that the inductive bias offered by using logic results in a set of learned rules that transfers from one dataset to another, sometimes without finetuning, while still having high accuracy.", "which has research problem ?", "Entity Linking", 0.0, 14.0], ["Wikidata is becoming an increasingly important knowledge base whose usage is spreading in the research community. However, most evaluation datasets for question answering systems rely on Freebase or DBpedia. We present two new datasets in order to train and benchmark QA systems over Wikidata. The first is a translation of the popular SimpleQuestions dataset to Wikidata; the second is a dataset created by collecting user feedback.", "which has research problem ?", "Question Answering ", 128.0, 147.0], ["We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network predicting features of the observations given by a fixed randomly initialized neural network. We also introduce a method to flexibly combine intrinsic and extrinsic rewards. We find that the random network distillation (RND) bonus combined with this increased flexibility enables significant progress on several hard exploration Atari games. In particular we establish state of the art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods. To the best of our knowledge, this is the first method that achieves better than average human performance on this game without using demonstrations or having access to the underlying state of the game, and occasionally completes the first level.", "which has research problem ?", "Atari Games", 528.0, 539.0], ["Background: The surge in patients during the COVID-19 pandemic has exacerbated the looming problem of staff shortage in German ICUs, possibly leading to worse outcomes for patients. Methods: Within the German Evidence Ecosystem CEOsys network, we conducted an online national mixed-methods survey assessing the standard of care in German ICUs treating patients with COVID-19. Results: A total of 171 German ICUs reported a median ideal number of patients per intensivist of 8 (interquartile range, IQR = 3rd quartile - 1st quartile = 4.0) and per nurse of 2.0 (IQR = 1.0). For COVID-19 patients, the median target was a maximum of 6.0 (IQR = 2.0) patients per intensivist or 2.0 (IQR = 0.0) patients per nurse. Targets for intensivists were rarely met by 15.2% and never met by 3.5% of responding institutions. Targets for nursing staffing could rarely be met in 32.2% and never in 5.3% of responding institutions. Conclusions: Shortages of staffing in the critical care setting are eminent during the COVID-19 pandemic and might not only negatively affect patient outcomes, but also staff wellbeing and healthcare costs. A joint effort that scrutinizes the demands and structures of our health care system seems fundamental to be prepared for the future.", "which has research problem ?", "COVID-19", 45.0, 53.0], ["In recent years, the development of recommender systems has attracted increased interest in several domains, especially in e-learning. Massive Open Online Courses have brought a revolution. However, deficiency in support and personalization in this context drives learners to lose their motivation and leave the learning process. 
To overcome this problem we focus on adapting learning activities to learners' needs using a recommender system. This paper attempts to provide an introduction to different recommender systems for e-learning settings, as well as to present our proposed recommender system for massive learning activities in order to provide learners with suitable learning activities to follow the learning process and maintain their motivation. We propose a hybrid knowledge-based recommender system based on ontology for recommendation of e-learning activities to learners in the context of MOOCs. In the proposed recommendation approach, ontology is used to model and represent the knowledge about the domain model, learners and learning activities.", "which has research problem ?", "Recommender Systems", 36.0, 55.0], ["BioNLP Open Shared Tasks (BioNLP-OST) is an international competition organized to facilitate development and sharing of computational tasks of biomedical text mining and solutions to them. For BioNLP-OST 2019, we introduced a new mental health informatics task called \u201cRDoC Task\u201d, which is composed of two subtasks: information retrieval and sentence extraction through National Institutes of Mental Health\u2019s Research Domain Criteria framework. Five and four teams around the world participated in the two tasks, respectively. According to the performance on the two tasks, we observe that there is room for improvement for text mining on brain research and mental illness.", "which has research problem ?", "Mental Health Informatics", 231.0, 256.0], ["A new electron\u2010rich central building block, 5,5,12,12\u2010tetrakis(4\u2010hexylphenyl)\u2010indacenobis\u2010(dithieno[3,2\u2010b:2\u2032,3\u2032\u2010d]pyrrol) (INP), and two derivative nonfullerene acceptors (INPIC and INPIC\u20104F) are designed and synthesized. The two molecules reveal broad (600\u2013900 nm) and strong absorption due to the satisfactory electron\u2010donating ability of INP. Compared with its counterpart INPIC, fluorinated nonfullerene acceptor INPIC\u20104F exhibits a stronger near\u2010infrared absorption with a narrower optical bandgap of 1.39 eV, an improved crystallinity with higher electron mobility, and down\u2010shifted highest occupied molecular orbital and lowest unoccupied molecular orbital energy levels. Organic solar cells (OSCs) based on INPIC\u20104F exhibit a high power conversion efficiency (PCE) of 13.13% and a relatively low energy loss of 0.54 eV, which is among the highest efficiencies reported for binary OSCs in the literature. The results demonstrate the great potential of the new INP as an electron\u2010donating building block for constructing high\u2010performance nonfullerene acceptors for OSCs.", "which has research problem ?", "Organic solar cells", 679.0, 698.0], ["The fluorescent N-doped carbon dots (N-CDs) obtained from C3N4 emit strong blue fluorescence, which is stable with different ionic strengths and time. The fluorescence intensity of N-CDs decreases as the temperature increases, and recovers to its initial value as the temperature decreases. The fluorescence intensity shows an accurate linear response to temperature, which may be attributed to the synergistic effect of abundant oxygen-containing functional groups and hydrogen bonds. 
Further experiments also demonstrate that N-CDs can serve as an effective in vitro and in vivo fluorescence-based nanothermometer.", "which has research problem ?", "Nanothermometer", 608.0, 623.0], ["Facial expression recognition is an active research field which accommodates the need for interaction between humans and machines in a broad field of subjects. This work investigates the performance of a multi-scale and multi-orientation Gabor Filter Bank constructed in such a way as to avoid redundant information. A region based approach is employed using different neighbourhood sizes at the locations of 34 fiducial points. Furthermore, a reduced set of 19 fiducial points is used to model the face geometry. The use of Principal Component Analysis (PCA) is evaluated. The proposed methodology is evaluated for the classification of the 6 basic emotions proposed by Ekman, considering neutral expression as the seventh emotion.", "which has research problem ?", "Facial Expression Recognition", 0.0, 29.0], ["Cross-lingual document classification aims at training a document classifier on resources in one language and transferring it to a different language without any additional resources. Several approaches have been proposed in the literature and the current best practice is to evaluate them on a subset of the Reuters Corpus Volume 2. However, this subset covers only a few languages (English, German, French and Spanish) and almost all published works focus on the transfer between English and German. In addition, we have observed that the class prior distributions differ significantly between the languages. We argue that this complicates the evaluation of multilinguality. In this paper, we propose a new subset of the Reuters corpus with balanced class priors for eight languages. By adding Italian, Russian, Japanese and Chinese, we cover languages which are very different with respect to syntax, morphology, etc. We provide strong baselines for all language transfer directions using multilingual word and sentence embeddings, respectively. Our goal is to offer a freely available framework to evaluate cross-lingual document classification, and we hope to foster, by these means, research in this important area.", "which has research problem ?", "Cross-Lingual Document Classification", 0.0, 37.0], ["A concept guided by the ISO 37120 standard for city services and quality of life is suggested as a unified framework for smart city dashboards. The slow (annual, quarterly, or monthly) ISO 37120 indicators are enhanced and complemented with more detailed and person-centric indicators that can further accelerate the transition toward smart cities. The architecture supports three tasks: acquire and manage data from heterogeneous sensors; process data originating from heterogeneous sources (sensors, OpenData, social data, blogs, news, and so on); and implement such collection and processing on the cloud. A prototype application based on the proposed architecture concept is developed for the city of Skopje, Macedonia. This article is part of a special issue on smart cities.", "which has research problem ?", "City dashboards", 125.0, 140.0], ["A series of halogenated conjugated molecules, containing F, Cl, Br and I, were easily prepared via Knoevenagel condensation and applied in field-effect transistors and organic solar cells. 
Halogenated conjugated materials were found to possess deep frontier energy levels and high crystallinity compared to their non-halogenated analogues, which is due to the strong electronegativity and heavy atom effect of halogens. As a result, halogenated semiconductors provide high electron mobilities up to 1.3 cm2 V\u22121 s\u22121 in transistors and high efficiencies over 9% in non-fullerene solar cells.", "which has research problem ?", "Organic solar cells", 168.0, 187.0], ["The intraspecific chemical variability of essential oils (50 samples) isolated from the aerial parts of Artemisia herba\u2010alba Asso growing wild in the arid zone of Southeastern Tunisia was investigated. Analysis by GC (RI) and GC/MS allowed the identification of 54 essential oil components. The main compounds were \u03b2\u2010thujone and \u03b1\u2010thujone, followed by 1,8\u2010cineole, camphor, chrysanthenone, trans\u2010sabinyl acetate, trans\u2010pinocarveol, and borneol. Chemometric analysis (k\u2010means clustering and PCA) led to the partitioning into three groups. The composition of two thirds of the samples was dominated by \u03b1\u2010thujone or \u03b2\u2010thujone. Therefore, it could be expected that wild plants of A. herba\u2010alba randomly harvested in the area of Kirchaou and transplanted by local farmers for the cultivation in arid zones of Southern Tunisia produce an essential oil belonging to the \u03b1\u2010thujone/\u03b2\u2010thujone chemotype and also containing 1,8\u2010cineole, camphor, and trans\u2010sabinyl acetate in appreciable amounts.", "which has research problem ?", "Oil", 275.0, 278.0], ["Maritime situation awareness is supported by a combination of satellite, airborne, and terrestrial sensor systems. This paper presents several solutions to process that sensor data into information that supports operator decisions. Examples are vessel detection algorithms based on multispectral image techniques in combination with background subtraction, feature extraction techniques that estimate the vessel length to support vessel classification, and data fusion techniques to combine image based information, detections from coastal radar, and reports from cooperative systems such as (satellite) AIS. Other processing solutions include persistent tracking techniques that go beyond kinematic tracking, and include environmental information from navigation charts, and if available, ELINT reports. Finally, there are rule-based and statistical solutions for the behavioural analysis of anomalous vessels. With that, trends and future work will be presented.", "which has research problem ?", "Vessel detection", 245.0, 261.0], ["Fault-based testing is often advocated to overcome limitations of other testing approaches; however it is also recognized as being expensive. On the other hand, evolutionary algorithms have been proved suitable for reducing the cost of data generation in the context of coverage based testing. In this paper, we propose a new evolutionary approach based on ant colony optimization for automatic test input data generation in the context of mutation testing to reduce the cost of such a test strategy. In our approach the ant colony optimization algorithm is enhanced by a probability density estimation technique. We compare our proposal with other evolutionary algorithms, e.g., Genetic Algorithm. 
Our preliminary results on JAVA testbeds show that our approach performed significantly better than other alternatives.", "which has research problem ?", "Ant Colony Optimization", 352.0, 375.0], ["In this paper, we present the virtual knowledge graph (VKG) paradigm for data integration and access, also known in the literature as Ontology-based Data Access. Instead of structuring the integration layer as a collection of relational tables, the VKG paradigm replaces the rigid structure of tables with the flexibility of graphs that are kept virtual and embed domain knowledge. We explain the main notions of this paradigm, its tooling ecosystem and significant use cases in a wide range of applications. Finally, we discuss future research directions.", "which has research problem ?", "virtual knowledge graph (VKG) paradigm for data integration", NaN, NaN], ["While the fast-paced inception of novel tasks and new datasets helps foster active research in a community towards interesting directions, keeping track of the abundance of research activity in different areas on different datasets is likely to become increasingly difficult. The community could greatly benefit from an automatic system able to summarize scientific results, e.g., in the form of a leaderboard. In this paper we build two datasets and develop a framework (TDMS-IE) aimed at automatically extracting task, dataset, metric and score from NLP papers, towards the automatic construction of leaderboards. Experiments show that our model outperforms several baselines by a large margin. Our model is a first step towards automatic leaderboard construction, e.g., in the NLP domain.", "which has research problem ?", "Automatic leaderboard construction", 731.0, 765.0], ["Open educational resources are currently becoming increasingly available from a multitude of sources and are consequently annotated in many diverse ways. Interoperability concerns that naturally arise can often be resolved through the semantification of metadata descriptions, while at the same time strengthening the knowledge value of resources. SKOS can be a solid linking point offering a standard vocabulary for thematic descriptions, by referencing semantic thesauri. We propose the enhancement and maintenance of educational resources\u2019 metadata in the form of learning object ontologies and introduce the notion of a learning object ontology repository that can help towards their publication, discovery and reuse. At the same time, linking to thesauri datasets and contextualized sources interrelates learning objects with linked data and exposes them to the Web of Data. We build a set of extensions and workflows on top of contemporary ontology management tools, such as WebProt\u00e9g\u00e9, that can make it suitable as a learning object ontology repository. The proposed approach and implementation can help libraries and universities in discovering, managing and incorporating open educational resources and enhancing current curricula.", "which has research problem ?", "educational resources", 14.0, 35.0], ["Multi-factory production networks have increased in recent years. With the factories located in different geographic areas, companies can benefit from various advantages, such as closeness to their customers, and can respond faster to market changes. Products (jobs) in the network can usually be produced in more than one factory. However, each factory has its own operations efficiency, capacity, and utilization level. 
Allocating jobs inappropriately to a factory will produce high costs, long lead times, overloading or idling of resources, etc. This makes distributed scheduling more complicated than classical production scheduling problems because it has to determine how to allocate the jobs into suitable factories, and simultaneously determine the production scheduling in each factory as well. The problem is even more complicated when alternative production routing is allowed in the factories. This paper proposes a genetic algorithm with dominant genes to deal with distributed scheduling problems, especially in a flexible manufacturing system (FMS) environment. The idea of dominant genes is to identify and record the critical genes in the chromosome and to enhance the performance of genetic search. To verify and benchmark the optimization reliability, the proposed algorithm has been compared with other approaches on several distributed scheduling problems. These comparisons demonstrate the importance of distributed scheduling and indicate the optimization reliability of the proposed algorithm.", "which has research problem ?", "Scheduling problems", 621.0, 640.0], ["Purpose \u2013 The purpose of this paper is to identify, assess and explore potential risks that Chinese companies may encounter when using, maintaining and enhancing their enterprise resource planning (ERP) systems in the post\u2010implementation phase. Design/methodology/approach \u2013 The study adopts a deductive research design based on a cross\u2010sectional questionnaire survey. This survey is preceded by a political, economic, social and technological analysis and a set of strength, weakness, opportunity and threat analyses, from which the researchers refine the research context and select state\u2010owned enterprises (SOEs) in the electronic and telecommunications industry in Guangdong province as target companies to carry out the research. The questionnaire design is based on a theoretical risk ontology drawn from a critical literature review process. The questionnaire is sent to 118 selected Chinese SOEs, from which 42 (84 questionnaires) valid and usable responses are received and analysed. Findings \u2013 The findings ident...", "which has research problem ?", "Enterprise resource planning", 168.0, 196.0], ["In today's fierce business competition, companies face the tremendous challenge of expanding markets, improving their products, services and processes and exploiting their intellectual capital in a dynamic network of knowledge-intensive relations inside and outside their borders. In order to accomplish these objectives, more and more companies are turning to the Enterprise Resource Planning systems (ERP). On the other hand, Knowledge Management (KM) has received considerable attention in the last decade and is continuously gaining interest from industry, enterprises and academia. As we are moving into an era of \u201cknowledge capitalism\u201d, knowledge management will play a fundamental role in the success of today's businesses. This paper aims at throwing light on the role of KM in ERP success first and on their possible integration second. A wide range of academic and practitioner literature related to KM and ERP is reviewed. On the basis of this review, the paper gives answers to specific research questions and analyses future research directions.", "which has research problem ?", "Enterprise resource planning", 365.0, 393.0], ["In a commonly-used version of the Simple Assembly Line Balancing Problem (SALBP-1), tasks are assigned to stations along an assembly line with a fixed cycle time in order to minimize the required number of stations. It has traditionally been assumed that the total work needed for each product unit has been partitioned into economically indivisible tasks. However, in practice, it is sometimes possible to divide particular tasks in limited ways at additional time penalty cost. Despite the penalties, task division, where possible, now and then leads to a reduction in the minimum number of stations. Deciding which allowable tasks to divide creates a new assembly line balancing problem, TDALBP (Task Division Assembly Line Balancing Problem). We propose a mathematical model of the TDALBP and an exact solution procedure for it, and present promising computational results for the adaptation of some classical SALBP instances from the research literature. 
The results demonstrate that the TDALBP sometimes has the potential to significantly improve assembly line performance.", "which has research problem ?", "Task division assembly line balancing problem", 697.0, 742.0], ["In this paper we describe the SemEval-2010 Cross-Lingual Lexical Substitution task, where given an English target word in context, participating systems had to find an alternative substitute word or phrase in Spanish. The task is based on the English Lexical Substitution task run at SemEval-2007. In this paper we provide background and motivation for the task, we describe the data annotation process and the scoring system, and present the results of the participating systems.", "which has research problem ?", "Cross-Lingual Lexical Substitution", 43.0, 77.0], ["Errors are prevalent in time series data, which is particularly common in the industrial field. Data with errors cannot be stored in the database, which results in the loss of data assets. At present, to deal with these time series containing errors, besides keeping original erroneous data, discarding erroneous data and manually checking erroneous data, we can also use the cleaning algorithm widely used in the database to automatically clean the time series data. This survey provides a classification of time series data cleaning techniques and comprehensively reviews the state-of-the-art methods of each type. Besides, we summarize data cleaning tools, systems and evaluation criteria from research and industry. Finally, we highlight possible directions for time series data cleaning.", "which has research problem ?", "Time Series Data Cleaning", 512.0, 537.0], ["We present a novel end-to-end neural model to extract entities and relations between them. Our recurrent neural network based model captures both word sequence and dependency tree substructure information by stacking bidirectional tree-structured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows our model to jointly represent both entities and relations with shared parameters in a single model. We further encourage detection of entities during training and use of entity information in relation extraction via entity pretraining and scheduled sampling. Our model improves over the state-of-the-art feature-based model on end-to-end relation extraction, achieving 12.1% and 5.7% relative error reductions in F1-score on ACE2005 and ACE2004, respectively. We also show that our LSTM-RNN based model compares favorably to the state-of-the-art CNN based model (in F1-score) on nominal relation classification (SemEval-2010 Task 8). Finally, we present an extensive ablation analysis of several model components.", "which has research problem ?", "Relation Extraction", 501.0, 520.0], ["Extraction of relevant features from high-dimensional multi-way functional MRI (fMRI) data is essential for the classification of a cognitive task. In general, fMRI records a combination of neural activation signals and several other noisy components. Alternatively, fMRI data is represented as a high dimensional array using a number of voxels, time instants, and snapshots. The organisation of fMRI data includes a number of Regions Of Interest (ROIs), snapshots, and thousands of voxels. The crucial step in cognitive task classification is a reduction of feature size through feature selection. Extraction of a specific pattern of interest within the noisy components is a challenging task. Tensor decomposition techniques have found several applications in scientific fields. 
In this paper, a novel tensor gradient-based feature extraction technique for cognitive task classification is proposed. The technique has been applied efficiently to StarPlus fMRI data. Also, the technique has been used to discriminate the ROIs in fMRI data in terms of cognitive state classification. The method has achieved a better average accuracy when compared to other existing feature extraction methods.", "which has research problem ?", "Cognitive state classification", 1054.0, 1084.0], ["Knowledge graph embedding is an important task and it benefits many downstream applications. Currently, deep neural network based methods achieve state-of-the-art performance. However, most of these existing methods are very complex and need much time for training and inference. To address this issue, we propose a simple but effective atrous convolution based knowledge graph embedding method. Compared with existing state-of-the-art methods, our method has the following main characteristics. First, it effectively increases feature interactions by using atrous convolutions. Second, to address the issue of forgetting original information and the vanishing/exploding gradient issue, it uses the residual learning method. Third, it has a simpler structure but much higher parameter efficiency. We evaluate our method on six benchmark datasets with different evaluation metrics. Extensive experiments show that our model is very effective. On these diverse datasets, it achieves better results than the compared state-of-the-art methods on most of the evaluation metrics. The source code of our model can be found at https://github.com/neukg/AcrE.", "which has research problem ?", "Knowledge Graph Embedding", 0.0, 25.0], ["This paper discusses smart cities and raises critical questions about the faith being placed in technology to reduce carbon dioxide emissions. Given increasingly challenging carbon reduction targets, the role of information and communication technology and the digital economy are increasingly championed as offering potential to contribute to meeting these targets within cities and buildings. This paper questions the faith being placed in smart or intelligent solutions through asking, what role then for the ordinary citizen? The smart approach often appears to have a narrow view of how technology and user-engagement can sit together, viewing the behaviour of users as a hurdle to overcome rather than a resource to be utilised. This paper suggests lessons can be learnt from other disciplines and wider sustainable development policy that champions the role of citizens and user-engagement to harness the co-creation of knowledge, collaboration and empowerment. Specifically, empirical findings and observations a...", "which has research problem ?", "Smart cities", 21.0, 33.0], ["We present a novel 3\u2010step self\u2010training method for author name disambiguation\u2014SAND (self\u2010training associative name disambiguator)\u2014which requires no manual labeling, no parameterization (in real\u2010world scenarios) and is particularly suitable for the common situation in which only the most basic information about a citation record is available (i.e., author names, and work and venue titles). During the first step, real\u2010world heuristics on coauthors are able to produce highly pure (although fragmented) clusters. The most representative of these clusters are then selected to serve as training data for the third supervised author assignment step. 
The third step exploits a state\u2010of\u2010the\u2010art transductive disambiguation method capable of detecting unseen authors not included in any training example and incorporating reliable predictions into the training data. Experiments conducted with standard public collections, using the minimum set of attributes present in a citation, demonstrate that our proposed method outperforms all representative unsupervised author grouping disambiguation methods and is very competitive with fully supervised author assignment methods. Thus, different from other bootstrapping methods that explore privileged, hard-to-obtain information such as self\u2010citations and personal information, our proposed method produces topnotch performance with no (manual) training data or parameterization and in the presence of scarce information.", "which has research problem ?", "Author name disambiguation", 51.0, 77.0], ["The research question addressed in this article concerns whether unemployment persistency can be regarded as a phenomenon that increases employment difficulties for the less educated and, if so, whether their employment chances are reduced by an overly rapid reduction in the number of jobs with low educational requirements. The empirical case is Sweden and the data covers the period 1976-2000. The empirical analyses point towards a negative response to both questions. First, it is shown that jobs with low educational requirements have declined but still constitute a substantial share of all jobs. Secondly, educational attainment has changed at a faster rate than the job structure with increasing over-education in jobs with low educational requirements as a result. This, together with changed selection patterns into the low education group, is among the main reasons for the poor employment chances of the less educated in periods with low general demand for labour.", "which has research problem ?", "Over-education", 705.0, 719.0], ["Companies declare that quality or customer satisfaction is their top priority in order to keep and attract more business in an increasingly competitive marketplace. The cost of quality (COQ) is a tool which can help determine the optimal level of quality investment. COQ analysis enables organizations to identify, measure and control the consequences of poor quality. This study attempts to identify the COQ elements across the enterprise resource planning (ERP) implementation phases for the ERP implementation services of consultancy companies. The findings provide guidance to project managers on how best to utilize their limited resources. In summary, we suggest that project teams should focus on \u201cvalue-added\u201d activities and minimize the cost of \u201cnon-value-added\u201d activities at each phase of the ERP implementation project. Key words: Services, ERP implementation services, quality standard, service quality standard, cost of quality, project management, project quality management, project financial management.", "which has research problem ?", "Enterprise resource planning", 428.0, 456.0], ["paper attempts to explore and identify issues affecting Enterprise Resource Planning (ERP) implementation in the context of Indian Small and Medium Enterprises (SMEs) and large enterprises. 
Issues which are considered more important for large scale enterprises may not be of equal importance for a small and medium scale enterprise and hence replicating the implementation experience which holds for large organizations will not be a wise approach on the part of the implementation vendors targeting small scale enterprises. This paper attempts to highlight those specific issues where a different approach needs to be adopted. Pareto analysis has been applied to identify the issues for Indian SMEs and large scale enterprises as available from the published literature. Also by doing comparative analysis between the identified issues for Indian large enterprises and SMEs, four issues are proved to be crucial for SMEs in India but not for large enterprises, such as proper system implementation strategy, clearly defined scope of implementation procedure, proper project planning and minimal customization of the system selected for implementation, because of some limitations faced by the Indian SMEs compared to large enterprises.", "which has research problem ?", "Enterprise resource planning", 56.0, 84.0], ["There are three issues that are crucial to advancing our academic understanding of smart cities: (1) contextual conditions, (2) governance models, and (3) the assessment of public value. A brief review of recent literature and the analysis of the included papers provide support for the assumption that cities cannot simply copy good practices but must develop approaches that fit their own situation (contingency) and concord with their own organization in terms of broader strategies, human resource policies, information policies, and so on (configuration). A variety of insights into the mechanisms and building blocks of smart city practices are presented, and issues for further research are identified.", "which has research problem ?", "Smart cities", 83.0, 95.0], ["Abstract. We describe here the development and evaluation of an Earth system model suitable for centennial-scale climate prediction. The principal new components added to the physical climate model are the terrestrial and ocean ecosystems and gas-phase tropospheric chemistry, along with their coupled interactions. The individual Earth system components are described briefly and the relevant interactions between the components are explained. Because the multiple interactions could lead to unstable feedbacks, we go through a careful process of model spin-up to ensure that all components are stable and the interactions balanced. This spun-up configuration is evaluated against observed data for the Earth system components and is generally found to perform very satisfactorily. The reason for the evaluation phase is that the model is to be used for the core climate simulations carried out by the Met Office Hadley Centre for the Coupled Model Intercomparison Project (CMIP5), so it is essential that the addition of the extra complexity does not detract substantially from its climate performance. Localised changes in some specific meteorological variables can be identified, but the impacts on the overall simulation of present day climate are slight. This model is proving valuable both for climate predictions, and for investigating the strengths of biogeochemical feedbacks.", "which has research problem ?", "CMIP5", 975.0, 980.0], ["We propose a static relation extraction task to complement biomedical information extraction approaches. 
We argue that static relations such as part-whole are implicitly involved in many common extraction settings, define a task setting making them explicit, and discuss their integration into previously proposed tasks and extraction methods. We further identify a specific static relation extraction task motivated by the BioNLP'09 shared task on event extraction, introduce an annotated corpus for the task, and demonstrate the feasibility of the task by experiments showing that the defined relations can be reliably extracted. The task setting and corpus can serve to support several forms of domain information extraction.", "which has research problem ?", "Relation Extraction", 20.0, 39.0], ["Abstract Dual functional fluorescence nanosensors have many potential applications in biology and medicine. Monitoring temperature with higher precision at localized small length scales or in a nanocavity is a necessity in various applications. Likewise, the detection of biologically interesting metal ions using a low-cost and sensitive approach is of great importance in bioanalysis. In this paper, we describe the preparation of dual-function highly fluorescent B, N-co-doped carbon nanodots (CDs) that work as chemical and thermal sensors. The CDs emit blue fluorescence peaked at 450 nm and exhibit up to 70% photoluminescence quantum yield while showing excitation-independent fluorescence. We also show that water-soluble CDs display temperature-dependent fluorescence and can serve as highly sensitive and reliable nanothermometers with a thermo-sensitivity of 1.8% \u00b0C\u22121, and wide-range thermo-sensing between 0\u201390 \u00b0C with excellent recovery. Moreover, the fluorescence emission of CDs is selectively quenched after the addition of Fe2+ and Fe3+ ions, while showing no quenching upon the addition of other common metal cations and anions. The fluorescence emission shows a good linear correlation with the concentration of Fe2+ and Fe3+ (R2 = 0.9908 for Fe2+ and R2 = 0.9892 for Fe3+) with a detection limit of 80.0 \u00b1 0.5 nM for Fe2+ and 110.0 \u00b1 0.5 nM for Fe3+. Considering the high quantum yield and selectivity, CDs are exploited to design a nanoprobe for iron detection in a biological sample. The fluorimetric assay is successfully used to detect Fe2+ in iron capsules and total iron in serum samples.", "which has research problem ?", "Nanothermometer", NaN, NaN], ["Bellemare et al. (2016) introduced the notion of a pseudo-count, derived from a density model, to generalize count-based exploration to non-tabular reinforcement learning. This pseudo-count was used to generate an exploration bonus for a DQN agent and combined with a mixed Monte Carlo update was sufficient to achieve state of the art on the Atari 2600 game Montezuma's Revenge. We consider two questions left open by their work: First, how important is the quality of the density model for exploration? Second, what role does the Monte Carlo update play in exploration? We answer the first question by demonstrating the use of PixelCNN, an advanced neural density model for images, to supply a pseudo-count. In particular, we examine the intrinsic difficulties in adapting Bellemare et al.'s approach when assumptions about the model are violated. The result is a more practical and general algorithm requiring no special apparatus. We combine PixelCNN pseudo-counts with different agent architectures to dramatically improve the state of the art on several hard Atari games. 
One surprising finding is that the mixed Monte Carlo update is a powerful facilitator of exploration in the sparsest of settings, including Montezuma's Revenge.", "which has research problem ?", "Atari Games", 1065.0, 1076.0], ["The purpose of this paper is to shed light on the critical success factors that lead to high supply chain performance outcomes in a Malaysian manufacturing company. The critical success factors consist of relationships with customers and suppliers, information and communication technology (ICT), material flow management, corporate culture and performance measurement. A questionnaire was the main instrument for the study and it was distributed to 84 staff from the departments of purchasing, planning, logistics and operation. Data analysis was conducted by employing descriptive analysis (mean and standard deviation), reliability analysis, Pearson correlation analysis and multiple regression. The findings show that relationships exist between the customer and supplier relationship, ICT, material flow management, performance measurement and supply chain management (SCM) performance, but not corporate culture. Forming a good customer and supplier relationship is the main predictor of SCM performance, followed by performance measurement, material flow management and ICT. It is recommended that future studies determine additional success factors that are pertinent to firms\u2019 current SCM strategies and directions, competitive advantages and missions. Logic suggests that further studies include broader geographical data coverage, other types of businesses and additional research instruments. Key words: Supply chain management, critical success factor.", "which has research problem ?", "Supply chain management", 855.0, 878.0], ["Abstract One determining characteristic of contemporary sociopolitical systems is their power over increasingly large and diverse populations. This raises questions about power relations between heterogeneous individuals and increasingly dominant and homogenizing system objectives. This article crosses epistemic boundaries by integrating computer engineering and a historical-philosophical approach, making the general organization of individuals within large-scale systems and corresponding individual homogenization intelligible. From a versatile archeological-genealogical perspective, an analysis of computer and social architectures is conducted that reinterprets Foucault\u2019s disciplines and political anatomy to establish the notion of politics for a purely technical system. This permits an understanding of system organization as modern technology with application to technical and social systems alike. Connecting to Heidegger\u2019s notions of the enframing (Gestell) and a more primal truth (anf\u00e4nglicheren Wahrheit), the recognition of politics in differently developing systems then challenges the immutability of contemporary organization. Following this critique of modernity and within the conceptualization of system organization, Derrida\u2019s democracy to come (\u00e0 venir) is then reformulated more abstractly as organizations to come. Through the integration of the discussed concepts, the framework of Large-Scale Systems Composed of Homogeneous Individuals (LSSCHI) is proposed, problematizing the relationships between individuals, structure, activity, and power within large-scale systems. 
The LSSCHI framework highlights the conflict between homogenizing system-level objectives and individual heterogeneity, and outlines power relations and mechanisms of control shared across different social and technical systems.", "which has research problem ?", "Heterogeneity", 1707.0, 1720.0], ["Representation learning is a critical ingredient for natural language processing systems. Recent Transformer language models like BERT learn powerful textual representations, but these models are targeted towards token- and sentence-level training objectives and do not leverage information on inter-document relatedness, which limits their document-level representation power. For applications on scientific documents, such as classification and recommendation, accurate embeddings of documents are a necessity. We propose SPECTER, a new method to generate document-level embedding of scientific papers based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph. Unlike existing pretrained language models, Specter can be easily applied to downstream applications without task-specific fine-tuning. Additionally, to encourage further research on document-level models, we introduce SciDocs, a new evaluation benchmark consisting of seven document-level tasks ranging from citation prediction to document classification and recommendation. We show that Specter outperforms a variety of competitive baselines on the benchmark.", "which has research problem ?", "Representation Learning", 0.0, 23.0], ["Artemisia herba-alba Asso., Asteraceae, is widely used in Moroccan folk medicine for the treatment of different health disorders. However, no scientific or medical studies were carried out to assess the cytotoxicity of A. herba-alba essential oil against cancer cell lines. In this study, eighteen volatile compounds were identified by GC-MS analysis of the essential oil obtained from the plant's aerial parts. The main volatile constituent in A. herba-alba was found to be a monoterpene, Verbenol, contributing to about 22% of the total volatile components. The essential oil showed significant antiproliferative activity against the acute lymphoblastic leukaemia (CEM) cell line, with 3 \u00b5g/mL as the IC50 value. The anticancer bioactivity of Moroccan A. 
herba-alba essential oil is described here for the first time.", "which has research problem ?", "Oil", 243.0, 246.0], ["Abstract The composition of the essential oil hydrodistilled from the aerial parts of Artemisia herba-alba Asso. growing in Jordan was determined by GC and GC/MS. The oil yield was 1.3% (v/w) from dried tops (leaves, stems and flowers). Forty components corresponding to 95.3% of the oil were identified, of which oxygenated monoterpenes were the main oil fraction (39.3% of the oil), with \u03b1- and \u03b2-thujones as the principal components (24.7%). The other major identified components were: santolina alcohol (13.0%), artemisia ketone (12.4%), trans-sabinyl acetate (5.4%), germacrene D (4.6%), \u03b1-eudesmol (4.2%) and caryophyllene acetate (5.7%). The high oil yield and the substantial levels of potentially active components, in particular thujones and santolina alcohol, in the oil of this Jordanian species make the plant and the oil thereof promising candidates as natural herbal constituents of antimicrobial drug combinations.", "which has research problem ?", "Oil", 42.0, 45.0], ["Abstract A retrospective study of 30 patients hospitalized with a diagnosis of uncomplicated retrobulbar neuritis was carried out. The follow\u2010up period was 2\u201311 years; 57% developed multiple sclerosis. When the initial examination revealed oligoclonal bands in the cerebrospinal fluid, the risk of developing multiple sclerosis increased to 79%. With normal cerebrospinal fluid the risk decreased to only 10%. In the majority of cases, the diagnosis of MS was made during the first 3 years after retrobulbar neuritis.", "which has research problem ?", "Multiple sclerosis", 182.0, 200.0], ["We present the design, preparation, results and analysis of the Cancer Genetics (CG) event extraction task, a main task of the BioNLP Shared Task (ST) 2013. The CG task is an information extraction task targeting the recognition of events in text, represented as structured n-ary associations of given physical entities. In addition to addressing the cancer domain, the CG task is differentiated from previous event extraction tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple levels of biological organization, ranging from the molecular through the cellular and organ levels up to whole organisms. Final test set submissions were accepted from six teams. The highest-performing system achieved an F-score of 55.4%. This level of performance is broadly comparable with the state of the art for established molecular-level extraction tasks, demonstrating that event extraction resources and methods generalize well to higher levels of biological organization and are applicable to the analysis of scientific texts on cancer. The CG task continues as an open challenge to all interested parties, with tools and resources available from http://2013.bionlp-st.org/.", "which has research problem ?", "Cancer genetics (CG) event extraction", NaN, NaN], ["Abstract Seedlings of Artemisia herba-alba Asso collected from Kirchaou area were transplanted in an experimental garden near the Institut des R\u00e9gions Arides of M\u00e9denine (Tunisia). During three years, the aerial parts were harvested (three levels of cutting, 25%, 50% and 75% of the plant), at full blossom and during the vegetative stage. The essential oil was isolated by hydrodistillation and its chemical composition was determined by GC(RI) and 13C-NMR. 
With respect to the quantity of vegetable material and the yield of hydrodistillation, it appears that the best results were obtained for plants cut at 50% of their height and during the full blossom. The chemical composition of the essential oil was dominated by \u03b2-thujone, \u03b1-thujone, 1,8-cineole, camphor and trans-sabinyl acetate, irrespective of the level of cutting and the period of harvest. It remains similar to that of plants growing wild in the same area.", "which has research problem ?", "Oil", 355.0, 358.0], ["Abstract Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from a 12- to 17-month cycle), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered; its new primary focus is on productivity and agility, to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. monthly) releases with over 21 billion triples with minimal publishing effort.", "which has research problem ?", "DBPedia", 38.0, 45.0], ["Abstract Despite rapid progress, most of the educational technologies today lack a strong instructional design knowledge basis leading to questionable quality of instruction. In addition, a major challenge is to customize these educational technologies for a wide range of customizable instructional designs. Ontologies are one of the pertinent mechanisms to represent instructional design in the literature. However, existing approaches do not support modeling of flexible instructional designs. To address this problem, in this paper, we propose an ontology based framework for systematic modeling of different aspects of instructional design knowledge based on domain patterns. As part of the framework, we present ontologies for modeling goals, instructional processes and instructional material. We demonstrate the ontology framework by presenting instances of the ontology for the large scale case study of adult literacy in India (287 million learners spread across 22 Indian Languages), which requires creation of hundreds of similar but varied eLearning Systems based on flexible instructional designs. The implemented framework is available at http://rice.iiit.ac.in and is transferred to National Literacy Mission Authority of Government of India. 
The proposed framework could be potentially used for modeling instructional design knowledge for school education, vocational skills and beyond.", "which has research problem ?", "goals", 742.0, 747.0], ["The ability of deep convolutional neural networks (CNNs) to learn discriminative spectro-temporal patterns makes them well suited to environmental sound classification. However, the relative scarcity of labeled data has impeded the exploitation of this family of high-capacity models. This study has two primary contributions: first, we propose a deep CNN architecture for environmental sound classification. Second, we propose the use of audio data augmentation for overcoming the problem of data scarcity and explore the influence of different augmentations on the performance of the proposed CNN architecture. Combined with data augmentation, the proposed model produces state-of-the-art results for environmental sound classification. We show that the improved performance stems from the combination of a deep, high-capacity model and an augmented training set: this combination outperforms both the proposed CNN without augmentation and a \u201cshallow\u201d dictionary learning model with augmentation. Finally, we examine the influence of each augmentation on the model's classification accuracy for each class, and observe that the accuracy for each class is influenced differently by each augmentation, suggesting that the performance of the model could be improved further by applying class-conditional data augmentation.", "which has research problem ?", "Environmental Sound Classification", 133.0, 167.0], ["Abstract. Vessel monitoring and surveillance is important for maritime safety and security, environment protection and border control. Ship monitoring systems based on Synthetic-aperture Radar (SAR) satellite images are operational. On SAR images the ships made of metal with sharp edges appear as bright dots and edges, therefore they can be well distinguished from the water. Since the radar is independent of sunlight and can acquire images even in cloudy weather and rain, it provides a reliable service. Vessel detection from spaceborne optical images (VDSOI) can extend the SAR-based systems by providing more frequent revisit times and overcoming some drawbacks of the SAR images (e.g. lower spatial resolution, difficult human interpretation). Optical satellite images (OSI) can have a higher spatial resolution thus enabling the detection of smaller vessels and enhancing the vessel type classification. The human interpretation of an optical image is also easier than that of a SAR image. In this paper I present a rapid automatic vessel detection method which uses pattern recognition methods, originally developed in the computer vision field. In the first step I train a binary classifier from image samples of vessels and background. The classifier uses simple features which can be calculated very fast. For the detection the classifier is slid along the image in various directions and scales. The detector has a cascade structure which rejects most of the background in the early stages which leads to faster execution. The detections are grouped together to avoid multiple detections. Finally the position, size (i.e. length and width) and heading of the vessels is extracted from the contours of the vessel. The presented method is parallelized, thus it runs fast (in minutes for a 16000 \u00d7 16000 pixel image) on a multicore computer, enabling near real-time applications, e.g. 
one hour from image acquisition to end user.", "which has research problem ?", "Vessel detection", 516.0, 532.0], ["We introduce SpERT, an attention model for span-based joint entity and relation extraction. Our key contribution is a light-weight reasoning on BERT embeddings, which features entity recognition and filtering, as well as relation classification with a localized, marker-free context representation. The model is trained using strong within-sentence negative samples, which are efficiently extracted in a single BERT pass. These aspects facilitate a search over all spans in the sentence. In ablation studies, we demonstrate the benefits of pre-training, strong negative sampling and localized context. Our model outperforms prior work by up to 2.6% F1 score on several datasets for joint entity and relation extraction.", "which has research problem ?", "Relation Extraction", 71.0, 90.0], ["We present the findings and results of the Second Nuanced Arabic Dialect Identification Shared Task (NADI 2021). This Shared Task includes four subtasks: country-level Modern Standard Arabic (MSA) identification (Subtask 1.1), country-level dialect identification (Subtask 1.2), province-level MSA identification (Subtask 2.1), and province-level sub-dialect identification (Subtask 2.2). The shared task dataset covers a total of 100 provinces from 21 Arab countries, collected from the Twitter domain. A total of 53 teams from 23 countries registered to participate in the tasks, thus reflecting the interest of the community in this area. We received 16 submissions for Subtask 1.1 from five teams, 27 submissions for Subtask 1.2 from eight teams, 12 submissions for Subtask 2.1 from four teams, and 13 submissions for Subtask 2.2 from four teams.", "which has research problem ?", "Arabic Dialect Identification", NaN, NaN], ["Abstract Background The task of recognizing and identifying species names in biomedical literature has recently been regarded as critical for a number of applications in text and data mining, including gene name recognition, species-specific document retrieval, and semantic enrichment of biomedical articles. Results In this paper we describe an open-source species name recognition and normalization software system, LINNAEUS, and evaluate its performance relative to several automatically generated biomedical corpora, as well as a novel corpus of full-text documents manually annotated for species mentions. LINNAEUS uses a dictionary-based approach (implemented as an efficient deterministic finite-state automaton) to identify species names and a set of heuristics to resolve ambiguous mentions. When compared against our manually annotated corpus, LINNAEUS performs with 94% recall and 97% precision at the mention level, and 98% recall and 90% precision at the document level. Our system successfully solves the problem of disambiguating uncertain species mentions, with 97% of all mentions in PubMed Central full-text documents resolved to unambiguous NCBI taxonomy identifiers. Conclusions LINNAEUS is an open source, stand-alone software system capable of recognizing and normalizing species name mentions with speed and accuracy, and can therefore be integrated into a range of bioinformatics and text-mining applications. 
The software and manually annotated corpus can be downloaded freely at http://linnaeus.sourceforge.net/.", "which has research problem ?", "Species name recognition and normalization", 359.0, 401.0], ["Many NLP applications require information about locations of objects referenced in text, or relations between them in space. For example, the phrase a book on the desk contains information about the location of the object book, as trajector, with respect to another object desk, as landmark. Spatial Role Labeling (SpRL) is an evaluation task in the information extraction domain which sets a goal to automatically process text and identify objects of spatial scenes and relations between them. This paper describes the task in Semantic Evaluations 2013, annotation schema, corpora, participants, methods and results obtained by the participants.", "which has research problem ?", "Spatial Role Labeling", 292.0, 313.0], ["As analysts still grapple with understanding core damage accident progression at Three Mile Island and Fukushima that caught the nuclear industry off-guard once too many times, one notices the very limited detail with which the large reactor cores of these subject reactors have been modelled in their severe accident simulation code packages. At the same time, modelling of CANDU severe accidents has largely borrowed from and suffered from the limitations of the same LWR codes (see IAEA TECDOC 1727) whose applications to PHWRs have poorly caught critical PHWR design specifics and vulnerabilities. As a result, accident management measures that have been instituted at CANDU PHWRs, while meeting the important industry objective of publicly seeming to be doing something about lessons learnt from say Fukushima and showing that the reactor designs are oh so close to perfect and the off-site consequences of severe accidents happily benign. Integrated PHWR severe accident progression and consequence assessment code ROSHNI can make a significant contribution to actual, practical understanding of severe accident progression in CANDU PHWRs, improving significantly on the other PHWR specific computer codes developed three decades ago when modeling decisions were constrained by limited computing power and poor understanding of and interest in severe core damage accidents. These codes force gross simplifications in reactor core modelling and do not adequately represent all the right CANDU core details, materials, fluids, vessels or phenomena. But they produce results that are familiar and palatable. They do, however to their credit, also excel in their computational speed, largely because they model and compute so little and with such unnecessary simplifications. ROSHNI sheds most previous modelling simplifications and represents each of the 380 channels and 4560 bundles, with 37 elements per bundle in four concentric rings, Zircaloy-clad fuel geometry, materials and fluids more faithfully in a 2000 MW(Th) CANDU6 reactor. It can be used easily for other PHWRs with a different number of fuel channels and bundles per channel. Each of the horizontal PHWR reactor channels with all their bundles, fuel rings, sheaths, appendages, end fittings and feeders is modelled in detail that reflects large across-core differences. While other codes model at best a few hundred core fuel entities, thermo-chemical transient behaviour of about 73,000 different fuel channel entities within the core is considered by ROSHNI simultaneously along with 15,000 or so other flow path segments. 
At each location all known thermo-chemical and hydraulic phenomena are computed. With such detail, ROSHNI is able to provide information on their progressive and parallel thermo-chemical contribution to accident progression and a more realistic fission product release source term that would belie the minuscule one (100 TBq of Cs-137 or 0.15% of core inventory) used by EMOs now in Canada on recommendation of our national regulator CNSC. ROSHNI has an advanced, more CANDU specific consideration of each bundle transitioning to a solid debris behaviour in the Calandria vessel without reverting to a simplified molten corium formulation that happily ignores interaction of debris with vessel welds, further vessel failures and energetic interactions. The code is able to follow behaviour of each fuel bundle following its disassembly from the fuel channel and thus demonstrate that the gross assumption of a core collapse made in some analyses is wrong and misleading. It is able to thus demonstrate that PHWR core disassembly is not only gradual, it will also be incomplete, with a large number of low-power, peripheral fuel channels never disassembling under most credible scenarios. The code is designed to grow into and use its voluminous results in a severe accident simulator for operator training. Its phenomenological models are able to examine design inadequacies/issues that affect accident progression and several simple-to-implement design improvements that have a profound effect on results. For example, an early pressure boundary failure due to inadequacy of heat sinks in a station blackout scenario can be examined along with the effect of improved and adequate overpressure protection. A best effort code such as ROSHNI can be instrumental in identifying the risk reduction benefits of undertaking certain design, operational and accident management improvements for PHWRs, with some of the multi-unit ones handicapped by poor pressurizer placement and leaky containments with vulnerable materials, poor overpressure protection, ad-hoc mitigation measures and limited instrumentation common to all CANDUs. A case in point is the PSA-supported design and installed number of hydrogen recombiners that are suited neither to the right gas (designed mysteriously for H2 instead of D2) nor to its potential release quantity (they are sparse and will cause explosions). The paper presents ROSHNI results of simulations of a postulated station blackout scenario and sheds light on the challenges ahead in minimizing risk from operation of these otherwise unique power reactors.", "which has research problem ?", "Fission product release", 2833.0, 2856.0], ["Migrating existing enterprise software to cloud platforms involves the comparison of competing cloud deployment options (CDOs). A CDO comprises a combination of a specific cloud environment, deployment architecture, and runtime reconfiguration rules for dynamic resource scaling. Our simulator CDOSim can evaluate CDOs, e.g., regarding response times and costs. However, the design space to be searched for well-suited solutions is extremely huge. In this paper, we approach this optimization problem with the novel genetic algorithm CDOXplorer. It uses techniques of the search-based software engineering field and CDOSim to assess the fitness of CDOs. 
An experimental evaluation that employs, among others, the cloud environments Amazon EC2 and Microsoft Windows Azure, shows that CDOXplorer can find solutions that surpass those of other state-of-the-art techniques by up to 60%. Our experiment code and data and an implementation of CDOXplorer are available as open source software.", "which has research problem ?", "Search-Based Software Engineering", 572.0, 605.0], ["Building predictive models for information extraction from text, such as named entity recognition or the extraction of semantic relationships between named entities in text, requires a large corpus of annotated text. Wikipedia is often used as a corpus for these tasks where the annotation is a named entity linked by a hyperlink to its article. However, editors on Wikipedia are only expected to link these mentions in order to help the reader to understand the content, but are discouraged from adding links that do not add any benefit for understanding an article. Therefore, many mentions of popular entities (such as countries or popular events in history), or previously linked articles, as well as the article\u2019s entity itself, are not linked. In this paper, we discuss WEXEA, a Wikipedia EXhaustive Entity Annotation system, to create a text corpus based on Wikipedia with exhaustive annotations of entity mentions, i.e. linking all mentions of entities to their corresponding articles. This results in a huge potential for additional annotations that can be used for downstream NLP tasks, such as Relation Extraction. We show that our annotations are useful for creating distantly supervised datasets for this task. Furthermore, we publish all code necessary to derive a corpus from a raw Wikipedia dump, so that it can be reproduced by everyone.", "which has research problem ?", "Named Entity Recognition", 73.0, 97.0], ["1. Introduction Economy is the main determinant of smart city proposals, and a city with a significant level of economic competitiveness (Popescu, 2015a, b, c, d, e) has one of the features of a smart city. The economic consequences of the smart city proposals are business production, job generation, personnel development, and enhancement in the productivity. The enforcement of an ICT infrastructure is essential to a smart city's advancement and is contingent on several elements associated with its attainability and operation. (Chourabi et al., 2012) Smart city involvements are the end results of, and uncomfortably incorporated into, present social and spatial configurations of urban governance (Bratu, 2015) and the built setting: the smart city is put together gradually, integrated awkwardly into current arrangements of city administration and the reinforced environment. Smart cities are intrinsically distinguished, being geographically asymmetrical at a diversity of scales. Not all places of the city will be similarly smart: smart cities will favor some spaces, individuals, and undertakings over others. An essential component of the smart city is its capacity to further economic growth. (Shelton et al., 2015) 2. The Assemblage of Participants, Tenets and Technologies Related to Smart City Interventions The \"smart city\" notion has arisen from long-persisting opinions regarding urban technological idealistic schemes (Lazaroiu, 2013) and the absolutely competitive city. Smart cities are where novel technologies may be produced and the receptacles for technology, i.e. the goal of its utilizations. 
The contest to join this movement and become a smart city has stimulated city policymakers to endogenize the performance of technology-led growth (Lazaroiu, 2014a, b, c), leading municipal budgets toward financings that present smart city standing. The boundaries of the smart city are generated both by the lack of data utilizations that can handle shared and not separate solutions and by the incapacity to aim at indefinite features of cities that both enhance and blemish from the standard of urban existence for city inhabitants. Smart city technology financings are chiefly composed of ameliorations instead of genuine innovations, on the citizen consumer side. (Glasmeier and Christopherson, 2015) The notion of smart city as a method to improve the life standard of individuals has been achieving rising relevance in the calendars of policymakers. The amount of \"smart\" proposals initiated by a municipality can lead to an intermediate final product that indicates the endeavors made to augment the quality of existence of the citizens. The probabilities of a city raising its degree of smartness are contingent on several country-specific variables that outweigh its economic, technological and green advancement rate. Public administrations demand backing to organize the notion of the smartness of a city (Nica, 2015a, b, c, d), to encapsulate its ramifications, to establish standards at the global level, and to observe enhancement chances. (Neirotti et al., 2014) The growth of smart cities is assisting the rise of government employment of ICTs to enhance political involvement, enforce public schemes or supply public sphere services. There is no one way to becoming smart, and diverse cities have embraced distinct advances that indicate their specific circumstances. The administration of smart cities is dependent on elaborate arrangements of interdependent entities. (Rodriguez Bolivar, 2015) The association of smart (technology-enabled) solutions to satisfy the leading societal difficult tasks and the concentration on the city as the chief determinant of alteration bring about the notion of the \"smart city.\" The rise of novel technologies to assess and interlink various facets of ordinary existence (\"the internet of things\") is relevant in the progression towards a smart city. The latter is attempting to encourage and adjust innovations to the demands of their citizens (Pera, 2015a, b) by urging synergetic advancement of inventions with various stakeholders. \u2026", "which has research problem ?", "Smart cities", 890.0, 902.0], ["Today when many practitioners run basic NLP on the entire web and large-volume traffic, faster methods are paramount to saving time and energy costs. Recent advances in GPU hardware have led to the emergence of bi-directional LSTMs as a standard method for obtaining per-token vector representations serving as input to labeling tasks such as NER (often followed by prediction in a linear-chain CRF). Though expressive and accurate, these models fail to fully exploit GPU parallelism, limiting their computational efficiency. This paper proposes a faster alternative to Bi-LSTMs for NER: Iterated Dilated Convolutional Neural Networks (ID-CNNs), which have better capacity than traditional CNNs for large context and structured prediction. Unlike LSTMs whose sequential processing on sentences of length N requires O(N) time even in the face of parallelism, ID-CNNs permit fixed-depth convolutions to run in parallel across entire documents. 
We describe a distinct combination of network structure, parameter sharing and training procedures that enable dramatic 14-20x test-time speedups while retaining accuracy comparable to the Bi-LSTM-CRF. Moreover, ID-CNNs trained to aggregate context from the entire document are more accurate than Bi-LSTM-CRFs while attaining 8x faster test time speeds.", "which has research problem ?", "NER", 343.0, 346.0], ["A novel low-computation discriminative feature space is introduced for facial expression recognition capable of robust performance over a range of image resolutions. Our approach is based on the simple local binary patterns (LBP) for representing salient micro-patterns of face images. Compared to Gabor wavelets, the LBP features can be extracted faster in a single scan through the raw image and lie in a lower dimensional space, whilst still retaining facial information efficiently. Template matching with weighted Chi square statistic and support vector machine are adopted to classify facial expressions. Extensive experiments on the Cohn-Kanade Database illustrate that the LBP features are effective and efficient for facial expression discrimination. Additionally, experiments on face images with different resolutions show that the LBP features are robust to low-resolution images, which is critical in real-world applications where only low-resolution video input is available.", "which has research problem ?", "Facial Expression Recognition", 71.0, 100.0], ["Abstract Semantic embedding of knowledge graphs has been widely studied and used for prediction and statistical analysis tasks across various domains such as Natural Language Processing and the Semantic Web. However, less attention has been paid to developing robust methods for embedding OWL (Web Ontology Language) ontologies, which contain richer semantic information than plain knowledge graphs, and have been widely adopted in domains such as bioinformatics. In this paper, we propose a random walk and word embedding based ontology embedding method named OWL2Vec*, which encodes the semantics of an OWL ontology by taking into account its graph structure, lexical information and logical constructors. Our empirical evaluation with three real world datasets suggests that OWL2Vec* benefits from these three different aspects of an ontology in class membership prediction and class subsumption prediction tasks. Furthermore, OWL2Vec* often significantly outperforms the state-of-the-art methods in our experiments.", "which has research problem ?", "Ontology Embedding", 529.0, 547.0], ["Today game engines are popular in commercial game development, as they lower the threshold of game production by providing common technologies and convenient content-creation tools. Game engine based development is therefore the mainstream methodology in the game industry. Model-Driven Game Development (MDGD) is an emerging game development methodology, which applies the Model-Driven Software Development (MDSD) method in the game development domain. This simplifies game development by reducing the gap between game design and implementation. MDGD has to take advantage of the existing game engines in order to be useful in commercial game development practice. However, none of the existing MDGD approaches in the literature has convincingly demonstrated good integration of its tools with the game engine tool-chain. In this paper, we propose a hybrid approach named ECGM to address the integration challenges of the two methodologies with a focus on the technical aspects. 
The approach makes a run-time engine the base of the domain framework, and uses the game engine tool-chain together with the MDGD tool-chain. ECGM minimizes the change to the existing workflow and technology, thus reducing the cost and risk of adopting MDGD in commercial game development. Our contribution is one important step towards MDGD industrialization.", "which has research problem ?", "Model-driven Game Development", 274.0, 303.0], ["Recent studies have reported that organizations are often unable to identify the key success factors of Sustainable Supply Chain Management (SSCM) and to understand their implications for management practice. For this reason, the implementation of SSCM often does not result in noticeable benefits. So far, research has failed to offer any explanations for this discrepancy. In view of this fact, our study aims at identifying and analyzing the factors that underlie successful SSCM. Success factors are identified by means of a systematic literature review and are then integrated into an explanatory model. Consequently, the proposed success factor model is tested on the basis of an empirical study focusing on recycling networks of the electrics and electronics industry. We found that signaling, information provision and the adoption of standards are crucial preconditions for strategy commitment, mutual learning, the establishment of ecological cycles and hence for the overall success of SSCM. Copyright \u00a9 2011 John Wiley & Sons, Ltd and ERP Environment.", "which has research problem ?", "Supply chain management", 116.0, 139.0], ["Increasingly large document collections require improved information processing methods for searching, retrieving, and organizing text. Central to these information processing methods is document classification, which has become an important application for supervised learning. Recently the performance of traditional supervised classifiers has degraded as the number of documents has increased. This is because along with growth in the number of documents has come an increase in the number of categories. This paper approaches this problem differently from current document classification methods that view the problem as multi-class classification. Instead we perform hierarchical classification using an approach we call Hierarchical Deep Learning for Text classification (HDLTex). HDLTex employs stacks of deep learning architectures to provide specialized understanding at each level of the document hierarchy.", "which has research problem ?", "Document Classification", 187.0, 210.0], ["Abstract Smart cities are a modern administrative/developmental concept that tries to combine the development of urban areas with a higher level of citizens\u2019 participation. However, there is a lack of understanding of the concept\u2019s potential, due possibly to an unwillingness to accept a new form of relationship with the citizens. In this article, the willingness to introduce the elements of smart cities into two Central and Eastern European cities is tested. 
The results show that people are reluctant to use technology above the level of their needs and show little interest in participating in matters of governance, which prevents smart cities from developing in reality.", "which has research problem ?", "Smart cities", 9.0, 21.0], ["Purpose \u2013 The purpose of this paper is to further build up the knowledge about reasons for small and mid\u2010sized enterprises (SMEs) to adopt open source enterprise resource planning (ERP) systems. Design/methodology/approach \u2013 The paper presents and analyses findings in articles about proprietary ERPs and open source ERPs. In addition, a limited investigation of the distribution channel SourceForge for open source is made. Findings \u2013 The cost perspective seems to receive high attention regarding adoption of open source ERPs. This can be questioned and the main conclusion is that costs seem to have a secondary role in adoption or non-adoption of open source ERPs. Research limitations/implications \u2013 The paper is mainly a conceptual paper written from a literature review. The ambition is to seek support for the findings by doing more research in the area. Practical implications \u2013 The findings presented are of interest both for developers of proprietary ERPs as well as SMEs since it is shown that there are defi...", "which has research problem ?", "Enterprise resource planning", 151.0, 179.0], ["Open Educational Resources (OER) are a direct reaction to knowledge privatization; they foment their exchange to the entire world with the aim of increasing the human intellectual capacity. In this document, we describe the commitment of Universidad T\u00e9cnica Particular de Loja (UTPL), Ecuador, in the promotion of open educational practices and resources and their impact in society and knowledge economy through the use of Social Software.", "which has research problem ?", "Open Education", NaN, NaN], ["The paper is concerned with two-class active learning. While the common approach for collecting data in active learning is to select samples close to the classification boundary, better performance can be achieved by taking into account the prior data distribution. The main contribution of the paper is a formal framework that incorporates clustering into active learning. The algorithm first constructs a classifier on the set of the cluster representatives, and then propagates the classification decision to the other samples via a local noise model. The proposed model makes it possible to select the most representative samples as well as to avoid repeatedly labeling samples in the same cluster. During the active learning process, the clustering is adjusted using the coarse-to-fine strategy in order to balance between the advantage of large clusters and the accuracy of the data representation. The results of experiments in image databases show a better performance of our algorithm compared to the current methods.", "which has research problem ?", "Active learning", 38.0, 53.0], ["We design a family of image classification architectures that optimize the trade-off between accuracy and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures, which are competitive on highly parallel processing hardware. We revisit principles from the extensive literature on convolutional neural networks to apply them to transformers, in particular activation maps with decreasing resolutions. 
We also introduce the attention bias, a new way to integrate positional information in vision transformers. As a result, we propose LeViT: a hybrid neural network for fast inference image classification. We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of application scenarios. Our extensive experiments empirically validate our technical choices and show they are suitable to most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 5 times faster than EfficientNet on CPU. We release the code at https://github.com/facebookresearch/LeViT.", "which has research problem ?", "Image Classification", 22.0, 42.0], ["Links build the backbone of the Linked Data Cloud. With the steady growth in size of datasets comes an increased need for end users to know which frameworks to use for deriving links between datasets. In this survey, we comparatively evaluate current Link Discovery tools and frameworks. For this purpose, we outline general requirements and derive a generic architecture of Link Discovery frameworks. Based on this generic architecture, we study and compare the features of state-of-the-art linking frameworks. We also analyze reported performance evaluations for the different frameworks. Finally, we derive insights pertaining to possible future developments in the domain of Link Discovery.", "which has research problem ?", "Link Discovery", 251.0, 265.0], ["A highly efficient C-C bond cleavage of unstrained aliphatic ketones bearing \u03b2-hydrogens with olefins was achieved using a chelation-assisted catalytic system consisting of (Ph3P)3RhCl and 2-amino-3-picoline by microwave irradiation under solvent-free conditions. The addition of cyclohexylamine catalyst accelerated the reaction rate dramatically under microwave irradiation compared with the classical heating method.", "which has research problem ?", "C-C bond cleavage", 19.0, 36.0], ["In recent years, ``smart cities'' have rapidly increased in discourses as well as in their real number, and raise various issues. While citizen engagement is a key element of most definitions of smart cities, information and communication technologies (ICTs) would also have great potential for facilitating public participation. However, scholars have highlighted that little research has focused on actual practices of citizen involvement in smart cities so far. In this respect, the authors analyse public participation in Japanese ``Smart Communities'', paying attention to both official discourses and actual practices. Smart Communities were selected in 2010 by the Japanese government which defines them as ``smart city'' projects and imposed criteria such as focus on energy issues, participation and lifestyle innovation. Drawing on analysis of official documents as well as on interviews with each of the four Smart Communities' stakeholders, the paper explains that very little input is expected from Japanese citizens. Instead, ICTs are used by municipalities and electric utilities to steer project participants and to change their behaviour. 
The objective of Smart Communities would not be to involve citizens in city governance, but rather to make them participate in the co-production of public services, mainly energy production and distribution.", "which has research problem ?", "Smart cities", 19.0, 31.0], ["Hyperspectral sensors are devices that acquire images over hundreds of spectral bands, thereby enabling the extraction of spectral signatures for objects or materials observed. Hyperspectral remote sensing has been used over a wide range of applications, such as agriculture, forestry, geology, ecological monitoring and disaster monitoring. In this paper, the specific application of hyperspectral remote sensing to agriculture is examined. The technological development of agricultural methods is of critical importance as the world's population is anticipated to continuously rise much beyond the current number of 7 billion. One area upon which hyperspectral sensing can yield considerable impact is that of precision agriculture - the use of observations to optimize the use of resources and management of farming practices. For example, hyperspectral image processing is used in the monitoring of plant diseases, insect pests and invasive plant species; the estimation of crop yield; and the fine classification of crop distributions. This paper also presents a detailed overview of hyperspectral data processing techniques and suggestions for advancing the agricultural applications of hyperspectral technologies in Turkey.", "which has research problem ?", "Application of Hyperspectral remote sensing to Agriculture", 370.0, 428.0], ["The business value generated by information and communication technologies (ICT) has for a long time been a major research topic. Recently there has been growing research interest in the business value generated by particular types of information systems (IS). One of them is enterprise resource planning (ERP) systems, which are increasingly adopted by organizations for supporting and integrating key business and management processes. The current paper initially presents a critical review of the existing empirical literature concerning the business value of ERP systems, which investigates the impact of ERP systems adoption on various measures of organizational performance. The paper then critically reviews the literature concerning the related topic of critical success factors (CSFs) in ERP systems implementation, which aims at identifying and investigating factors that result in more successful ERP systems implementations that generate higher levels of value for organizations. Finally, future directions of research concerning ERP systems business value are proposed.", "which has research problem ?", "Enterprise resource planning", 274.0, 302.0], ["Diabetes Mellitus (DM) is a chronic, progressive and life-threatening disease. The ocular manifestations of DM, Diabetic Retinopathy (DR) and Diabetic Macular Edema (DME), are the leading causes of blindness in the adult population throughout the world. Early diagnosis of DR and DM through screening tests and successive treatments can reduce the threat to visual acuity. In this context, we propose an encoder-decoder based semantic segmentation network SOP-Net (Segmentation of Ocular Pathologies Using Deep Convolutional Neural Network) for simultaneous delineation of retinal pathologies (hard exudates, soft exudates, hemorrhages, microaneurysms). 
The proposed semantic segmentation framework is capable of providing segmentation results at pixel level with good localization of object boundaries. SOP-Net has been trained and tested on the IDRiD dataset, which is publicly available with pixel-level annotations of retinal pathologies. The network achieved average accuracies of 98.98%, 90.46%, 96.79%, and 96.70% for segmentation of hard exudates, soft exudates, hemorrhages, and microaneurysms. The proposed methodology has the capability to be used in developing a diagnostic system for organizing large-scale ophthalmic screening programs.", "which has research problem ?", "simultaneous delineation of retinal pathologies (hard exudates, soft exudates, hemorrhages, microaneurysms)", NaN, NaN], ["Purpose \u2013 The purpose of this paper is to review the literature on supply chain management (SCM) practices in small and medium scale enterprises (SMEs) and outline the key insights. Design/methodology/approach \u2013 The paper describes a literature\u2010based research that has sought to understand the issues of SCM for SMEs. The methodology is based on a critical review of 77 research papers from high\u2010quality, international refereed journals. Mainly, issues are explored under three categories \u2013 supply chain integration, strategy and planning, and implementation. This has supported the development of key constructs and propositions. Findings \u2013 The research outcomes are threefold. Firstly, the paper summarizes the reported literature and classifies it based on its nature of work and contributions. Second, the paper demonstrates the overall approach towards the development of constructs, research questions, and investigative questions leading to key propositions for further research. Lastly, the paper outlines the key findings an...", "which has research problem ?", "Supply chain management", 67.0, 90.0], ["A side\u2010chain conjugation strategy in the design of nonfullerene electron acceptors is proposed, with the design and synthesis of a side\u2010chain\u2010conjugated acceptor (ITIC2) based on a 4,8\u2010bis(5\u2010(2\u2010ethylhexyl)thiophen\u20102\u2010yl)benzo[1,2\u2010b:4,5\u2010b\u2032]di(cyclopenta\u2010dithiophene) electron\u2010donating core and 1,1\u2010dicyanomethylene\u20103\u2010indanone electron\u2010withdrawing end groups. ITIC2 with the conjugated side chains exhibits an absorption peak at 714 nm, which redshifts 12 nm relative to ITIC1. The absorption extinction coefficient of ITIC2 is 2.7 \u00d7 105 M\u22121 cm\u22121, higher than that of ITIC1 (1.5 \u00d7 105 M\u22121 cm\u22121). ITIC2 exhibits slightly higher highest occupied molecular orbital (HOMO) (\u22125.43 eV) and lowest unoccupied molecular orbital (LUMO) (\u22123.80 eV) energy levels relative to ITIC1 (HOMO: \u22125.48 eV; LUMO: \u22123.84 eV), and higher electron mobility (1.3 \u00d7 10\u22123 cm2 V\u22121 s\u22121) than that of ITIC1 (9.6 \u00d7 10\u22124 cm2 V\u22121 s\u22121). The power conversion efficiency of ITIC2\u2010based organic solar cells is 11.0%, much higher than that of ITIC1\u2010based control devices (8.54%). Our results demonstrate that side\u2010chain conjugation can tune energy levels, enhance absorption and electron mobility, and finally enhance photovoltaic performance of nonfullerene acceptors.", "which has research problem ?", "Organic solar cells", 945.0, 964.0], ["Multilingual Named Entity Recognition (NER) is a key intermediate task which is needed in many areas of NLP. 
In this paper, we address the well-known issue of data scarcity in NER, especially relevant when moving to a multilingual scenario, and go beyond current approaches to the creation of multilingual silver data for the task. We exploit the texts of Wikipedia and introduce a new methodology based on the effective combination of knowledge-based approaches and neural models, together with a novel domain adaptation technique, to produce high-quality training corpora for NER. We evaluate our datasets extensively on standard benchmarks for NER, yielding substantial improvements of up to 6 span-based F1-score points over previous state-of-the-art systems for data creation.", "which has research problem ?", "Multilingual named entity recognition", 0.0, 37.0], ["In this paper, we present a novel approach to estimate the relative depth of regions in monocular images. There are several contributions. First, the task of monocular depth estimation is considered as a learning-to-rank problem which offers several advantages compared to regression approaches. Second, monocular depth clues of human perception are modeled in a systematic manner. Third, we show that these depth clues can be modeled and integrated appropriately in a Rankboost framework. For this purpose, a space-efficient version of Rankboost is derived that makes it applicable to rank a large number of objects, as posed by the given problem. Finally, the monocular depth clues are combined with results from a deep learning approach. Experimental results show that the error rate is reduced by adding the monocular features while outperforming state-of-the-art systems.", "which has research problem ?", "Depth Estimation", 168.0, 184.0], ["We demonstrate that an unlexicalized PCFG can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar. Indeed, its performance of 86.36% (LP/LR F1) is better than that of early lexicalized PCFG models, and surprisingly close to the current state-of-the-art. This result has potential uses beyond establishing a strong lower bound on the maximum possible accuracy of unlexicalized models: an unlexicalized PCFG is much more compact, easier to replicate, and easier to interpret than more complex lexical models, and the parsing algorithms are simpler, more widely understood, of lower asymptotic complexity, and easier to optimize.", "which has research problem ?", "Parsing", 662.0, 669.0], ["Enterprise resource planning (ERP) systems have emerged as the core of successful information management and the enterprise backbone of organizations. The difficulties of ERP implementations have been widely cited in the literature but research on the critical factors for initial and ongoing ERP implementation success is rare and fragmented. Through a comprehensive review of the literature, 11 factors were found to be critical to ERP implementation success \u2013 ERP teamwork and composition; change management program and culture; top management support; business plan and vision; business process reengineering with minimum customization; project management; monitoring and evaluation of performance; effective communication; software development, testing and troubleshooting; project champion; appropriate business and IT legacy systems. 
The classification of these factors into the respective phases (chartering, project, shakedown, onward and upward) in Markus and Tanis\u2019 ERP life cycle model is presented and the importance of each factor is discussed.", "which has research problem ?", "Enterprise resource planning", 0.0, 28.0], ["Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.", "which has research problem ?", "Named Entity Recognition", 0.0, 24.0], ["Evolutionary algorithms are methods that imitate the natural evolution process. An artificial evolution process evaluates the fitness of each individual, where the individuals are solution candidates. The next population of candidate solutions is formed by using the good properties of the current population by applying different mutation and crossover operations. Different kinds of evolutionary algorithm applications related to software engineering were searched in the literature. Because the entire book presents some interesting information on evolutionary computation techniques applied to software engineering, we consider it necessary to present in this chapter a short survey of some techniques which are very useful for future research in this field. The majority of evolutionary algorithm applications related to software engineering were about software design or testing. Software Engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software (Abran and Moore, 2004). The purpose of this book is to open a door in order to find out the optimization problems in different software engineering problems. The idea of putting together the application of evolutionary computation and evolutionary optimization techniques in software engineering problems provided to the researchers the possibility to study some existing", "which has research problem ?", "Evolutionary Computation Techniques", 546.0, 581.0], ["Abstract The text-mining services for kinome curation track, part of BioCreative VI, proposed a competition to assess the effectiveness of text mining to perform literature triage. The track has exploited an unpublished curated data set from the neXtProt database. This data set contained comprehensive annotations for 300 human protein kinases. 
For a given protein and a given curation axis [diseases or gene ontology (GO) biological processes], participants\u2019 systems had to identify and rank relevant articles in a collection of 5.2 M MEDLINE citations (task 1) or 530 000 full-text articles (task 2). Explored strategies comprised named-entity recognition and machine-learning frameworks. For the latter approach, participants developed methods to derive a set of negative instances, as the databases typically do not store articles that were judged as irrelevant by curators. The supervised approaches proposed by the participating groups achieved significant improvements compared to the baseline established in a previous study and compared to a basic PubMed search.", "which has research problem ?", "text-mining services for kinome curation", 13.0, 53.0], ["Over the last ten years, there has been a dramatic growth in the acquisition of Enterprise Resource Planning (ERP) systems, where the market leader is the German company, SAP AG. However, more recently, there has been an increase in reported ERP failures, suggesting that the implementation issues are not just technical, but encompass wider behavioural factors.", "which has research problem ?", "Enterprise resource planning", 80.0, 108.0], ["We present SemEval-2019 Task 8 on Fact Checking in Community Question Answering Forums, which features two subtasks. Subtask A is about deciding whether a question asks for factual information vs. an opinion/advice vs. just socializing. Subtask B asks to predict whether an answer to a factual question is true, false or not a proper answer. We received 17 official submissions for Subtask A and 11 official submissions for Subtask B. For Subtask A, all systems improved over the majority class baseline. For Subtask B, all systems were below a majority class baseline, but several systems were very close to it. The leaderboard and the data from the competition can be found at http://competitions.codalab.org/competitions/20022.", "which has research problem ?", "Fact Checking in Community Question Answering Forums", 34.0, 86.0], ["The successful implementation of various enterprise resource planning (ERP) systems has provoked considerable interest over the last few years. Management has recently been enticed to look toward these new information technologies and philosophies of manufacturing for the key to survival or competitive edges. Although there is no shortage of glowing reports on the success of ERP installations, many companies have tossed millions of dollars in this direction with little to show for it. Since many of the ERP failures today can be attributed to inadequate planning prior to installation, we choose to analyze several critical planning issues including needs assessment and choosing the right ERP system, matching business processes with the ERP system, understanding the organizational requirements, and economic and strategic justification. In addition, this study also identifies new windows of opportunity as well as challenges facing companies today as enterprise systems continue to evolve and expand.", "which has research problem ?", "Enterprise resource planning", 41.0, 69.0], ["ABSTRACT The concept of the digital twin calls for virtual replicas of real world products. Achieving this requires a sophisticated network of models that have a level of interconnectivity. The authors attempted to improve model interconnectivity by enhancing the computer-aided design model with spatially related non-geometric data. 
A tool was created to store, visualize, and search for spatial data within the computer-aided design tool. This enables both model authors and consumers to utilize information inside the CAD tool which traditionally would have existed in separate software.", "which has research problem ?", "digital twin", 27.0, 39.0], ["Existing literature on Question Answering (QA) mostly focuses on algorithmic novelty, data augmentation, or increasingly large pre-trained language models like XLNet and RoBERTa. Additionally, a lot of systems on the QA leaderboards do not have associated research documentation in order to successfully replicate their experiments. In this paper, we outline these algorithmic components such as Attention-over-Attention, coupled with data augmentation and ensembling strategies that have shown to yield state-of-the-art results on benchmark datasets like SQuAD, even achieving super-human performance. Contrary to these prior results, when we evaluate on the recently proposed Natural Questions benchmark dataset, we find that an incredibly simple approach of transfer learning from BERT outperforms the previous state-of-the-art system trained on 4 million more examples than ours by 1.9 F1 points. Adding ensembling strategies further improves that number by 2.3 F1 points.", "which has research problem ?", "Question Answering", 23.0, 41.0], ["E-learning recommender systems are gaining significance nowadays due to their ability to enhance the learning experience by providing tailor-made services based on learner preferences. A Personalized Learning Environment (PLE) that automatically adapts to learner characteristics such as learning styles and knowledge level can recommend appropriate learning resources that would favor the learning process and improve learning outcomes. The pure cold-start problem is a relevant issue in PLEs, which arises due to the lack of prior information about the new learner in the PLE to create appropriate recommendations. This article introduces a semantic framework based on ontology to address the pure cold-start problem in content recommenders. The ontology encapsulates the domain knowledge about the learners as well as Learning Objects (LOs). The semantic model that we built has been experimented with different combinations of the key learner parameters such as learning style, knowledge level, and background knowledge. The proposed framework utilizes these parameters to build natural learner groups from the learner ontology using SPARQL queries. The ontology holds 480 learners\u2019 data, 468 annotated learning objects with 5,600 learner ratings. A multivariate k-means clustering algorithm, an unsupervised machine learning technique for grouping similar data, is used to evaluate the learner similarity computation accuracy. The learner satisfaction achieved with the proposed model is measured based on the ratings given by the 40 participants of the experiments. From the evaluation perspective, it is evident that 79% of the learners are satisfied with the recommendations generated by the proposed model in pure cold-start condition.", "which has research problem ?", "Recommender Systems", 11.0, 30.0], ["This paper presents the Graded Word Similarity in Context (GWSC) task which asked participants to predict the effects of context on human perception of similarity in English, Croatian, Slovene and Finnish. We received 15 submissions and 11 system description papers.
A new dataset (CoSimLex) was created for evaluation in this task: it contains pairs of words, each annotated within two different contexts. Systems beat the baselines by significant margins, but few did well in more than one language or subtask. Almost every system employed a Transformer model, but with many variations in the details: WordNet sense embeddings, translation of contexts, TF-IDF weightings, and the automatic creation of datasets for fine-tuning were all used to good effect.", "which has research problem ?", "Graded Word Similarity in Context", 24.0, 57.0], ["Business architecture became a well-known tool for business transformations. According to a recent study by Forrester, 50 percent of the companies polled claimed to have an active business architecture initiative, whereas 20 percent were planning to engage in business architecture work in the near future. However, despite the high interest in BA, there is not yet a common understanding of the main concepts. There is a lack of a business architecture framework which provides a complete metamodel, suggests methodology for business architecture development and enables tool support for it. The ORG-Master framework is designed to solve this problem using the ontology as a core of the metamodel. This paper describes the ORG-Master framework, its implementation and dissemination.", "which has research problem ?", "Business architecture development", 529.0, 562.0], ["The technological development of quantum dots has ushered in a new era in fluorescence bioimaging, which was propelled with the advent of novel multiphoton fluorescence microscopes. Here, the potential use of CdSe quantum dots has been evaluated as fluorescent nanothermometers for two-photon fluorescence microscopy. In addition to the enhancement in spatial resolution inherent to any multiphoton excitation processes, two-photon (near-infrared) excitation leads to a temperature sensitivity of the emission intensity much higher than that achieved under one-photon (visible) excitation. The peak emission wavelength is also temperature sensitive, providing an additional approach for thermal imaging, which is particularly interesting for systems where nanoparticles are not homogeneously dispersed. On the basis of these superior thermal sensitivity properties of the two-photon excited fluorescence, we have demonstrated the ability of CdSe quantum dots to image a temperature gradient artificially created in a biocompatible fluid (phosphate-buffered saline) and also their ability to measure an intracellular temperature increase externally induced in a single living cell.", "which has research problem ?", "Nanothermometer", NaN, NaN], ["This paper presents a method that combines a set of unsupervised algorithms in order to accurately build large taxonomies from any machine-readable dictionary (MRD). Our aim is to profit from conventional MRDs, with no explicit semantic coding. We propose a system that 1) performs fully automatic extraction of taxonomic links from MRD entries and 2) ranks the extracted relations in a way that selective manual refinement is allowed. Tested accuracy can reach around 100% depending on the degree of coverage selected, showing that taxonomy building is not limited to structured dictionaries such as LDOCE.", "which has research problem ?", "automatic extraction of taxonomic links from MRD entries", 288.0, 344.0], ["\u2013 Critical success factors (CSFs) have been widely used in the context of commercial supply chains.
However, in the context of humanitarian aid (HA) this is a poorly addressed area and this paper therefore aims to set out the key areas for research. \u2013 This paper is based on a conceptual discussion of CSFs as applied to the HA sector. A detailed literature review is undertaken to identify CSFs in a commercial context and to consider their applicability to the HA sector. \u2013 CSFs have not previously been identified for the HA sector, an issue addressed in this paper. \u2013 The main constraint on this paper is that CSFs have not been previously considered in the literature as applied to HA. The relevance of CSFs will therefore need to be tested in the HA environment and qualitative research is needed to inform further work. \u2013 This paper informs the HA community of key areas of activity which have not been fully addressed and offers. \u2013 This paper contributes to the understanding of supply chain management in an HA context.", "which has research problem ?", "Supply chain management", 992.0, 1015.0], ["Eighty\u2010six patients with monosymptomatic optic neuritis of unknown cause were followed prospectively for a median period of 12.9 years. At onset, cerebrospinal fluid (CSF) pleocytosis was present in 46 patients (53%) but oligoclonal immunoglobulin in only 40 (47%) of the patients. The human leukocyte antigen (HLA)\u2010DR2 was present in 45 (52%). Clinically definite multiple sclerosis (MS) was established in 33 patients. Actuarial analysis showed that the cumulative probability of developing MS within 15 years was 45%. Three risk factors were identified: low age and abnormal CSF at onset, and early recurrence of optic neuritis. Female gender, onset in the winter season, and the presence of HLA\u2010DR2 antigen increased the risk for MS, but not significantly. Magnetic resonance imaging detected bilateral discrete white matter lesions, similar to those in MS, in 11 of 25 patients, 7 to 18 years after the isolated attack of optic neuritis. Nine were among the 13 with abnormal CSF and only 2 belonged to the group of 12 with normal CSF (p = 0.01). Normal CSF at the onset of optic neuritis conferred better prognosis but did not preclude the development of MS.", "which has research problem ?", "Multiple sclerosis", 365.0, 383.0], ["We present two related tasks of the BioNLP Shared Tasks 2011: Bacteria Gene Renaming (Rename) and Bacteria Gene Interactions (GI). We detail the objectives, the corpus specification, the evaluation metrics, and we summarize the participants' results. Both issued from PubMed scientific literature abstracts, the Rename task aims at extracting gene name synonyms, and the GI task aims at extracting genic interaction events, mainly about gene transcriptional regulations in bacteria.", "which has research problem ?", "Bacteria gene renaming", 62.0, 84.0], ["Enterprise resource planning (ERP) is one of the most common applications implemented by firms around the world. These systems cannot remain static after their implementation; they need maintenance. This process is required by the rapidly-changing business environment and the usual software maintenance needs. However, these projects are highly complex and risky. So, the risk management associated with ERP maintenance projects is crucial to attain a satisfactory performance. Unfortunately, ERP maintenance risks have not been studied in depth.
For this reason, this paper presents a framework, which gathers together the risks affecting the performance of ERP maintenance.", "which has research problem ?", "Enterprise resource planning", 0.0, 28.0], ["Abstract Despite rapid progress, most of the educational technologies today lack a strong instructional design knowledge basis leading to questionable quality of instruction. In addition, a major challenge is to customize these educational technologies for a wide range of customizable instructional designs. Ontologies are one of the pertinent mechanisms to represent instructional design in the literature. However, existing approaches do not support modeling of flexible instructional designs. To address this problem, in this paper, we propose an ontology based framework for systematic modeling of different aspects of instructional design knowledge based on domain patterns. As part of the framework, we present ontologies for modeling goals, instructional processes and instructional material. We demonstrate the ontology framework by presenting instances of the ontology for the large scale case study of adult literacy in India (287 million learners spread across 22 Indian Languages), which requires creation of hundreds of similar but varied eLearning Systems based on flexible instructional designs. The implemented framework is available at http://rice.iiit.ac.in and is transferred to National Literacy Mission Authority of Government of India. The proposed framework could be potentially used for modeling instructional design knowledge for school education, vocational skills and beyond.", "which has research problem ?", "framework", 566.0, 575.0], ["Acute lymphoblastic leukemia (ALL) is the most common childhood malignancy, and implementation of risk-adapted therapy has been instrumental in the dramatic improvements in clinical outcomes. A key to risk-adapted therapies includes the identification of genomic features of individual tumors, including chromosome number (for hyper- and hypodiploidy) and gene fusions, notably ETV6-RUNX1, TCF3-PBX1, and BCR-ABL1 in B-cell ALL (B-ALL). RNA-sequencing (RNA-seq) of large ALL cohorts has expanded the number of recurrent gene fusions recognized as drivers in ALL, and identification of these new entities will contribute to refining ALL risk stratification. We used RNA-seq on 126 ALL patients from our clinical service to test the utility of including RNA-seq in standard-of-care diagnostic pipelines to detect gene rearrangements and IKZF1 deletions. RNA-seq identified 86% of rearrangements detected by standard-of-care diagnostics. KMT2A (MLL) rearrangements, although usually identified, were the most commonly missed by RNA-seq as a result of low expression. RNA-seq identified rearrangements that were not detected by standard-of-care testing in 9 patients. These were found in patients who were not classifiable using standard molecular assessment. We developed an approach to detect the most common IKZF1 deletion from RNA-seq data and validated this using an RQ-PCR assay. We applied an expression classifier to identify Philadelphia chromosome-like B-ALL patients. T-ALL proved a rich source of novel gene fusions, which have clinical implications or provide insights into disease biology.
Our experience shows that RNA-seq can be implemented within an individual clinical service to enhance the current molecular diagnostic risk classification of ALL.", "which has research problem ?", "acute lymphoblastic leukemia (ALL)", NaN, NaN], ["Understanding unstructured text is a major goal within natural language processing. Comprehension tests pose questions based on short text passages to evaluate such understanding. In this work, we investigate machine comprehension on the challenging MCTest benchmark. Partly because of its limited size, prior work on MCTest has focused mainly on engineering better features. We tackle the dataset with a neural approach, harnessing simple neural networks arranged in a parallel hierarchy. The parallel hierarchy enables our model to compare the passage, question, and answer from a variety of trainable perspectives, as opposed to using a manually designed, rigid feature set. Perspectives range from the word level to sentence fragments to sequences of sentences; the networks operate only on word-embedding representations of text. When trained with a methodology designed to help cope with limited training data, our Parallel-Hierarchical model sets a new state of the art for MCTest, outperforming previous feature-engineered approaches slightly and previous neural approaches by a significant margin (over 15% absolute).", "which has research problem ?", "Understanding unstructured text", 0.0, 31.0], ["Coastal safety may be influenced by climate change, as changes in extreme surge levels and wave extremes may increase the vulnerability of dunes and other coastal defenses. In the North Sea, an area already prone to severe flooding, these high surge levels and waves are generated by low atmospheric pressure and severe wind speeds during storm events. As a result of the geometry of the North Sea, not only the maximum wind speed is relevant, but also wind direction. Climate change could change maximum wind conditions, with potentially negative effects for coastal safety. Here, we use an ensemble of 12 Coupled Model Intercomparison Project Phase 5 (CMIP5) General Circulation Models (GCMs) and diagnose the effect of two climate scenarios (rcp4.5 and rcp8.5) on annual maximum wind speed, wind speeds with lower return frequencies, and the direction of these annual maximum wind speeds. The 12 selected CMIP5 models do not project changes in annual maximum wind speed and in wind speeds with lower return frequencies; however, we do find an indication that the annual extreme wind events are coming more often from western directions. Our results are in line with the studies based on CMIP3 models and do not confirm the statement based on some reanalysis studies that there is a climate\u2010change\u2010related upward trend in storminess in the North Sea area.", "which has research problem ?", "CMIP5", 654.0, 659.0], ["Purpose \u2013 The purpose of this paper is to explore critical factors for implementing green supply chain management (GSCM) practice in the Taiwanese electrical and electronics industries relative to European Union directives. Design/methodology/approach \u2013 A tentative list of critical factors of GSCM was developed based on a thorough and detailed analysis of the pertinent literature. The survey questionnaire contained 25 items, developed based on the literature and interviews with three industry experts, specifically quality and product assurance representatives.
A total of 300 questionnaires were mailed out, and 87 were returned, of which 84 were valid, representing a response rate of 28 percent. Using the data collected, the identified critical factors were performed via factor analysis to establish reliability and validity. Findings \u2013 The results show that 20 critical factors were extracted into four dimensions, which denominated supplier management, product recycling, organization involvement and life cycl...", "which has research problem ?", "Supply chain management", 90.0, 113.0], ["Most conventional sentence similarity methods only focus on similar parts of two input sentences, and simply ignore the dissimilar parts, which usually give us some clues and semantic meanings about the sentences. In this work, we propose a model to take into account both the similarities and dissimilarities by decomposing and composing lexical semantics over sentences. The model represents each word as a vector, and calculates a semantic matching vector for each word based on all words in the other sentence. Then, each word vector is decomposed into a similar component and a dissimilar component based on the semantic matching vector. After this, a two-channel CNN model is employed to capture features by composing the similar and dissimilar components. Finally, a similarity score is estimated over the composed feature vectors. Experimental results show that our model gets the state-of-the-art performance on the answer sentence selection task, and achieves a comparable result on the paraphrase identification task.", "which has research problem ?", "sentence similarity", 18.0, 37.0], ["A simple small molecule acceptor named DICTF, with fluorene as the central block and 2-(2,3-dihydro-3-oxo-1H-inden-1-ylidene)propanedinitrile as the end-capping groups, has been designed for fullerene-free organic solar cells. The new molecule was synthesized from widely available and inexpensive commercial materials in only three steps with a high overall yield of \u223c60%. Fullerene-free organic solar cells with DICTF as the acceptor material provide a high PCE of 7.93%.", "which has research problem ?", "Organic solar cells", 206.0, 225.0], ["Our purpose is to identify the relevance of participative governance in urban areas characterized by smart cities projects, especially those implementing Living Labs initiatives as real-life settings to develop services innovation and enhance engagement of all urban stakeholders. A research on the three top smart cities in Europe \u2013 i.e. Amsterdam, Barcelona and Helsinki \u2013 is proposed through a content analysis with NVivo on the official documents issued by the project partners (2012-2015) to investigate their Living Lab initiatives. The results show the increasing usefulness of Living Labs for the development of more inclusive smart cities projects in which public and private actors, and people, collaborate in innovation processes and governance for the co-creation of new services, underlining the importance of the open and ecosystem-oriented approach for smart cities.", "which has research problem ?", "Smart cities", 101.0, 113.0], ["This paper presents the preparation, resources, results and analysis of the Epigenetics and Post-translational Modifications (EPI) task, a main task of the BioNLP Shared Task 2011.
The task concerns the extraction of detailed representations of 14 protein and DNA modification events, the catalysis of these reactions, and the identification of instances of negated or speculatively stated event instances. Seven teams submitted final results to the EPI task in the shared task, with the highest-performing system achieving 53% F-score in the full task and 69% F-score in the extraction of a simplified set of core event arguments.", "which has research problem ?", "The Epigenetics and Post-translational Modifications (EPI) task", NaN, NaN], ["Current state-of-the-art relation extraction methods typically rely on a set of lexical, syntactic, and semantic features, explicitly computed in a pre-processing step. Training feature extraction models requires additional annotated language resources, which severely restricts the applicability and portability of relation extraction to novel languages. Similarly, pre-processing introduces an additional source of error. To address these limitations, we introduce TRE, a Transformer for Relation Extraction, extending the OpenAI Generative Pre-trained Transformer [Radford et al., 2018]. Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform the relation classification and combines it with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions. TRE allows us to learn implicit linguistic features solely from plain text corpora by unsupervised pre-training, before fine-tuning the learned language representations on the relation extraction task. TRE obtains a new state-of-the-art result on the TACRED and SemEval 2010 Task 8 datasets, achieving a test F1 of 67.4 and 87.1, respectively. Furthermore, we observe a significant increase in sample efficiency. With only 20% of the training examples, TRE matches the performance of our baselines and our model trained from scratch on 100% of the TACRED dataset. We open-source our trained models, experiments, and source code.", "which has research problem ?", "learn implicit linguistic features solely from plain text corpora by unsupervised pre-training, before fine-tuning the learned language representations on the relation extraction task", 916.0, 1099.0], ["Academic attention to smart cities and their governance is growing rapidly, but the fragmentation in approaches makes for a confusing debate. This article brings some structure to the debate by analyzing a corpus of 51 publications and mapping their variation. The analysis shows that publications differ in their emphasis on (1) smart technology, smart people or smart collaboration as the defining features of smart cities, (2) a transformative or incremental perspective on changes in urban governance, (3) better outcomes or a more open process as the legitimacy claim for smart city governance. We argue for a comprehensive perspective: smart city governance is about crafting new forms of human collaboration through the use of ICTs to obtain better outcomes and more open governance processes. Research into smart city governance could benefit from previous studies into success and failure factors for e-government and build upon sophisticated theories of socio-technical change. 
This article highlights that smart city governance is not a technological issue: we should study smart city governance as a complex process of institutional change and acknowledge the political nature of appealing visions of socio-technical governance. Points for practitioners: The study provides practitioners with an in-depth understanding of current debates about smart city governance. The article highlights that governing a smart city is about crafting new forms of human collaboration through the use of information and communication technologies. City managers should realize that technology by itself will not make a city smarter: building a smart city requires a political understanding of technology, a process approach to manage the emerging smart city and a focus on both economic gains and other public values.", "which has research problem ?", "Smart cities", 22.0, 34.0], ["(1) Background: Although bullying victimization is a phenomenon that is increasingly being recognized as a public health and mental health concern in many countries, research attention on this aspect of youth violence in low- and middle-income countries, especially sub-Saharan Africa, is minimal. The current study examined the national prevalence of bullying victimization and its correlates among in-school adolescents in Ghana. (2) Methods: A sample of 1342 in-school adolescents in Ghana (55.2% males; 44.8% females) aged 12\u201318 was drawn from the 2012 Global School-based Health Survey (GSHS) for the analysis. Self-reported bullying victimization \u201cduring the last 30 days, on how many days were you bullied?\u201d was used as the central criterion variable. Three-level analyses using descriptive, Pearson chi-square, and binary logistic regression were performed. Results of the regression analysis were presented as adjusted odds ratios (aOR) at 95% confidence intervals (CIs), with a statistical significance pegged at p < 0.05. (3) Results: Bullying victimization was prevalent among 41.3% of the in-school adolescents. Pattern of results indicates that adolescents in SHS 3 [aOR = 0.34, 95% CI = 0.25, 0.47] and SHS 4 [aOR = 0.30, 95% CI = 0.21, 0.44] were less likely to be victims of bullying. Adolescents who had sustained injury [aOR = 2.11, 95% CI = 1.63, 2.73] were more likely to be bullied compared to those who had not sustained any injury. The odds of bullying victimization were higher among adolescents who had engaged in physical fight [aOR = 1.90, 95% CI = 1.42, 2.25] and those who had been physically attacked [aOR = 1.73, 95% CI = 1.32, 2.27]. Similarly, adolescents who felt lonely were more likely to report being bullied [aOR = 1.50, 95% CI = 1.08, 2.08] as against those who did not feel lonely. Additionally, adolescents with a history of suicide attempts were more likely to be bullied [aOR = 1.63, 95% CI = 1.11, 2.38] and those who used marijuana had higher odds of bullying victimization [aOR = 3.36, 95% CI = 1.10, 10.24]. (4) Conclusions: Current findings require the need for policy makers and school authorities in Ghana to design and implement policies and anti-bullying interventions (e.g., Social Emotional Learning (SEL), Rational Emotive Behavioral Education (REBE), Marijuana Cessation Therapy (MCT)) focused on addressing behavioral issues, mental health and substance abuse among in-school adolescents.", "which has research problem ?", "bullying", 25.0, 33.0], ["The FAIR principles were received with broad acceptance in several scientific communities.
However, there is still some degree of uncertainty on how they should be implemented. Several self-report questionnaires have been proposed to assess the implementation of the FAIR principles. Moreover, the FAIRmetrics group released 14, general-purpose maturity indicators for representing FAIRness. Initially, these metrics were conducted as open-answer questionnaires. Recently, these metrics have been implemented into a software that can automatically harvest metadata from metadata providers and generate a principle-specific FAIRness evaluation. With so many different approaches for FAIRness evaluations, we believe that further clarification on their limitations and advantages, as well as on their interpretation and interplay should be considered.", "which has research problem ?", "Fairness", 371.0, 379.0], ["The widespread adoption of agile methodologies raises the question of their continued and effective usage in organizations. An agile usage model consisting of innovation, sociological, technological, team, and organizational factors is used to inform an analysis of post-adoptive usage of agile practices in two major organizations. Analysis of the two case studies found that a methodology champion and top management support were the most important factors influencing continued usage, while innovation factors such as compatibility seemed less influential. Both horizontal and vertical usage was found to have significant impact on the effectiveness of agile usage.", "which has research problem ?", "Agile Usage", 127.0, 138.0], ["While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.", "which has research problem ?", "Image Classification", 501.0, 521.0], ["Universities and higher education institutes continually create knowledge. Hence it is necessary to keep a record of the academic and administrative information generated. Considering the vast amount of information managed by higher education institutions and the diversity of heterogeneous systems that can coexist within the same institution, it becomes necessary to use technologies for knowledge representation. Ontologies facilitate access to knowledge allowing the adequate exchange of information between people and between heterogeneous systems. This paper aims to identify existing research on the use and application of ontologies in higher education. From a set of 2792 papers, a study based on systematic mapping was conducted. A total of 52 research papers were reviewed and analyzed.
Our results contribute key findings regarding how ontologies are used in higher education institutes, what technologies and tools are applied for the development of ontologies and what are the main vocabularies reused in the application of ontologies.", "which has research problem ?", "Knowledge Representation", 390.0, 414.0], ["End-to-end relation extraction aims to identify named entities and extract relations between them. Most recent work models these two subtasks jointly, either by casting them in one structured prediction framework, or performing multi-task learning through shared representations. In this work, we present a simple pipelined approach for entity and relation extraction, and establish the new state-of-the-art on standard benchmarks (ACE04, ACE05 and SciERC), obtaining a 1.7%-2.8% absolute improvement in relation F1 over previous joint models with the same pre-trained encoders. Our approach essentially builds on two independent encoders and merely uses the entity model to construct the input for the relation model. Through a series of careful examinations, we validate the importance of learning distinct contextual representations for entities and relations, fusing entity information early in the relation model, and incorporating global context. Finally, we also present an efficient approximation to our approach which requires only one pass of both entity and relation encoders at inference time, achieving an 8-16\u00d7 speedup with a slight reduction in accuracy.", "which has research problem ?", "Relation Extraction", 11.0, 30.0], ["Many of the challenges to be faced by smart cities surpass the capacities, capabilities, and reaches of their traditional institutions and their classical processes of governing, and therefore new and innovative forms of governance are needed to meet these challenges. According to the network governance literature, governance models in public administrations can be categorized through the identification and analysis of some main dimensions that govern in the way of managing the city by governments. Based on prior research and on the perception of city practitioners in European smart cities, this paper seeks to analyze the relevance of main dimensions of governance models in smart cities. Results could shed some light regarding new future research on efficient patterns of governance models within smart cities.", "which has research problem ?", "Smart cities", 38.0, 50.0], ["Game developers are facing an increasing demand for new games every year. Game development tools can be of great help, but require highly specialized professionals. Also, just as any software development effort, game development has some challenges. Model-Driven Game Development (MDGD) is suggested as a means to solve some of these challenges, but with a loss in flexibility. We propose a MDGD approach that combines multiple domain-specific languages (DSLs) with design patterns to provide flexibility and allow generated code to be integrated with manual code. After experimentation, we observed that, with the approach, less experienced developers can create games faster and more easily, and the product of code generation can be customized with manually written code, providing flexibility. 
However, with MDGD, developers become less familiar with the code, making manual codification more difficult.", "which has research problem ?", "Model-driven Game Development", 250.0, 279.0], ["Environmental sustainability is a critical global issue that requires comprehensive intervention policies. Viewed as localized intervention policy implementations, smart cities leverage information infrastructures and distributed renewable energy smart micro-grids, smart meters, and home/building energy management systems to reduce city-wide carbon emissions. However, theory-driven smart city implementation research is critically lacking. This theory-building case study identifies antecedent conditions necessary for implementing smart cities. We integrated resource dependence, social embeddedness, and citizen-centric e-governance theories to develop a citizen-centric social governance framework. We apply the framework to a field-based case study of Japan\u2019s Kitakyushu smart community project to examine the validity and utility of the framework\u2019s antecedent conditions: resource-dependent leadership network, cross-sector collaboration based on social ties, and citizen-centric e-governance. We conclude that complex smart community implementation processes require shared vision of social innovation owned by diverse stakeholders with conflicting values and adaptive use of informal social governance mechanisms for effective smart city implementation.", "which has research problem ?", "Smart cities", 164.0, 176.0], ["The accuracy of face alignment affects the performance of a face recognition system. Since face alignment is usually conducted using eye positions, an accurate eye localization algorithm is therefore essential for accurate face recognition. In this paper, we first study the impact of eye locations on face recognition accuracy, and then introduce an automatic technique for eye detection. The performance of our automatic eye detection technique is subsequently validated using FRGC 1.0 database. The validation shows that our eye detector has an overall 94.5% eye detection rate, with the detected eyes very close to the manually provided eye positions. In addition, the face recognition performance based on the automatic eye detection is shown to be comparable to that of using manually given eye positions.", "which has research problem ?", "Eye localization", 160.0, 176.0], ["As part of the BioNLP Open Shared Tasks 2019, the CRAFT Shared Tasks 2019 provides a platform to gauge the state of the art for three fundamental language processing tasks \u2014 dependency parse construction, coreference resolution, and ontology concept identification \u2014 over full-text biomedical articles. The structural annotation task requires the automatic generation of dependency parses for each sentence of an article given only the article text. The coreference resolution task focuses on linking coreferring base noun phrase mentions into chains using the symmetrical and transitive identity relation. The ontology concept annotation task involves the identification of concept mentions within text using the classes of ten distinct ontologies in the biomedical domain, both unmodified and augmented with extension classes. 
This paper provides an overview of each task, including descriptions of the data provided to participants and the evaluation metrics used, and discusses participant results relative to baseline performances for each of the three tasks.", "which has research problem ?", "Coreference Resolution", 205.0, 227.0], ["Neural machine translation has recently achieved impressive results, while using little in the way of external linguistic information. In this paper we show that the strong learning capability of neural MT models does not make linguistic features redundant; they can be easily incorporated to provide further improvements in performance. We generalize the embedding layer of the encoder in the attentional encoder-decoder architecture to support the inclusion of arbitrary features, in addition to the baseline word feature. We add morphological features, part-of-speech tags, and syntactic dependency labels as input features to English->German, and English->Romanian neural machine translation systems. In experiments on WMT16 training and test sets, we find that linguistic input features improve model quality according to three metrics: perplexity, BLEU and CHRF3. An open-source implementation of our neural MT system is available, as are sample files and configurations.", "which has research problem ?", "Machine Translation", 7.0, 26.0], ["This study examines the dynamic relationship among carbon dioxide (CO2) emissions, economic growth, energy consumption and foreign trade based on the environmental Kuznets curve (EKC) hypothesis in Indonesia for the period 1971\u20132007, using the Auto Regressive Distributed Lag (ARDL) methodology. The results do not support the EKC hypothesis, which assumes an inverted U-shaped relationship between income and environmental degradation. The long-run results indicate that foreign trade is the most significant variable in explaining CO2 emissions in Indonesia followed by energy consumption and economic growth. The stability of the variables in the estimated model is also examined. The result suggests that the estimated model is stable over the study period.", "which has research problem ?", "CO2 emissions", 533.0, 546.0], ["The aim of this study was to identify and analyse the key success factors behind successful achievement of environment sustainability in Indian automobile industry supply chains. Here, critical success factors (CSFs) and performance measures of green supply chain management (GSCM) have been identified through extensive literature review and discussions with experts from Indian automobile industry. Based on the literature review, a questionnaire was designed and 123 final responses were considered. Six CSFs to implement GSCM for achieving sustainability and four expected performance measures of GSCM practices implementation were extracted using factor analysis. Interpretive ranking process (IRP) modelling approach is employed to examine the contextual relationships among CSFs and to rank them with respect to performance measures. The developed IRP model shows that the CSF \u2018Competitiveness\u2019 is the most important CSF for achieving sustainability in Indian automobile industry through GSCM practices. This study is one of the few that have considered the environmental sustainability practices in the automobile industry in India and their implications on sectoral economy.
The results of this study may help the managers/SC practitioners/Governments/Customers in making strategic and tactical decisions regarding successful implementation of GSCM practices in Indian automobile industry with a sustainability focus. The developed framework provides a comprehensive perspective for assessing the synergistic impact of CSFs on GSCM performances and can act as a ready reckoner for the practitioners. As there is very limited work presented in the literature using IRP, this piece of work would provide a better understanding of this relatively new ranking methodology.", "which has research problem ?", "Supply chain management", 251.0, 274.0], ["This paper describes the first task on semantic relation extraction and classification in scientific paper abstracts at SemEval 2018. The challenge focuses on domain-specific semantic relations and includes three different subtasks. The subtasks were designed so as to compare and quantify the effect of different pre-processing steps on the relation classification results. We expect the task to be relevant for a broad range of researchers working on extracting specialized knowledge from domain corpora, for example but not limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants, with 158 submissions across different scenarios.", "which has research problem ?", "Semantic Relation Extraction and Classification", 39.0, 86.0], ["Various new national advanced manufacturing strategies, such as Industry 4.0, Industrial Internet, and Made in China 2025, are issued to achieve smart manufacturing, resulting in the increasing number of newly designed production lines in both developed and developing countries. Under the individualized designing demands, more realistic virtual models mirroring the real worlds of production lines are essential to bridge the gap between design and operation. This paper presents a digital twin-based approach for rapid individualized designing of the hollow glass production line. The digital twin merges physics-based system modeling and distributed real-time process data to generate an authoritative digital design of the system at pre-production phase. A digital twin-based analytical decoupling framework is also developed to provide engineering analysis capabilities and support the decision-making over the system designing and solution evaluation. Three key enabling techniques as well as a case study in hollow glass production line are addressed to validate the proposed approach.", "which has research problem ?", "digital twin", 484.0, 496.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks.
In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GloVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Material ?", "words, emoticons, and exclamation marks", 735.0, 774.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.", "which Material ?", "our relational process model", 929.0, 957.0], ["The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis.
This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.", "which Material ?", "Several novel putative Hsp90 clients", 1274.0, 1310.0], ["Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.", "which Material ?", "CoNLL-2003 dataset", 671.0, 689.0], ["OBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats. METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27 adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week. RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean\u00b1SD of 2.59\u00b10.28mg/L, 2.63\u00b10.32mg/L and 2.67\u00b10.23mg/L at 0 week to 3.55\u00b10.44mg/L, 3.59\u00b10.34mg/L and 3.6\u00b10.32mg/L at 4th week in groups A, B and C respectively. Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean\u00b1SD of 2.93\u00b10.33mg/L compared to control group\u2019s 4.42\u00b10.30mg/L and gemfibrozil group\u2019s 4.28\u00b10.39mg/L.
The p-value in each case was <0.001, while the difference between control and gemfibrozil was not statistically significant. CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective. KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).", "which Material ?", "high fat fed non-diabetic rats", 146.0, 176.0], ["To improve designs of e-learning materials, it is necessary to know which word or figure a learner felt \"difficult\" in the materials. In this pilot study, we measured electroencephalography (EEG) and eye gaze data of learners and analyzed to estimate which area they had difficulty to learn. The developed system realized simultaneous measurements of physiological data and subjective evaluations during learning. Using this system, we observed specific EEG activity in difficult pages. Integrating of eye gaze and EEG measurements raised a possibility to determine where a learner felt \"difficult\" in a page of learning materials. From these results, we could suggest that the multimodal measurements of EEG and eye gaze would lead to effective improvement of learning materials. For future study, more data collection using various materials and learners with different backgrounds is necessary. This study could lead to establishing a method to improve e-learning materials based on learners' mental states.", "which Material ?", "various materials and learners with different backgrounds", 826.0, 883.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively.
Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "materials", 443.0, 452.0], ["Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laundromat, SPARQL endpoints provide access to the hundreds of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amounts of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amounts of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laundromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service \"Where is My URI\" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.", "which Material ?", "state-of-the-art real-data benchmarks", 1079.0, 1116.0], ["Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laundromat, SPARQL endpoints provide access to the hundreds of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amounts of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amounts of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laundromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service \"Where is My URI\" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.", "which Material ?", "These data sources", 889.0, 907.0], ["The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems.
In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.", "which Material ?", "data", 22.0, 26.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "these technologies", 354.0, 372.0], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Material ?", "a multidimensional statistical model", 412.0, 448.0], ["Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. 
Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19\u2013related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.", "which Material ?", "frontline clinicians and researchers", 657.0, 693.0], ["Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. 
Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.", "which Material ?", "Data papers", 20.0, 31.0], ["Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service \"Where is My URI\" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.", "which Material ?", "SPARQL endpoints", 115.0, 131.0], ["Abstract Presently, analytics degree programs exhibit a growing trend to meet a strong market demand. To explore the skill sets required for analytics positions, the authors examined a sample of online job postings related to professions such as business analyst (BA), business intelligence analyst (BIA), data analyst (DA), and data scientist (DS) using content analysis. They present a ranked list of relevant skills belonging to specific skills categories for the studied positions. Also, they conducted a pairwise comparison between DA and DS as well as BA and BIA. Overall, the authors observed that decision making, organization, communication, and structured data management are key to all job categories. The analysis shows that technical skills like statistics and programming skills are in most demand for DAs. The analysis is useful for creating clear definitions with respect to required skills for job categories in the business and data analytics domain and for designing course curricula for this domain.", "which Material ?", "sample of online job postings", 185.0, 214.0], ["Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. 
The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.", "which Material ?", "taxonomic treatments", 890.0, 910.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "modern information and communication technologies", 269.0, 318.0], ["Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service \"Where is My URI\" (WIMU). 
Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.", "which Material ?", "the SPARQL query language", 492.0, 517.0], ["ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean\u00b1SD of 2.59\u00b10.28mg/L, 2.63\u00b10.32mg/L and 2.67\u00b10.23mg/L at 0 week to 3.55\u00b10.44mg/L, 3.59\u00b10.34mg/L and 3.6\u00b10.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean\u00b1SD of 2.93\u00b10.33mg/L compared to control group\u2019s 4.42\u00b10.30mg/L and gemfibrozil group\u2019s 4.28\u00b10.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).", "which Material ?", "rats", 172.0, 176.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. 
Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Material ?", "necessary components", 1236.0, 1256.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Material ?", "text", 559.0, 563.0], ["This Editorial describes the rationale, focus, scope and technology behind the newly launched, open access, innovative Food Modelling Journal (FMJ). The Journal is designed to publish those outputs of the research cycle that usually precede the publication of the research article, but have their own value and re-usability potential. Such outputs are methods, models, software and data. The Food Modelling Journal is launched by the AGINFRA+ community and is integrated with the AGINFRA+ Virtual Research Environment (VRE) to facilitate and streamline the authoring, peer review and publication of the manuscripts via the ARPHA Publishing Platform.", "which Material ?", "software", 377.0, 385.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. 
So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites", NaN, NaN], ["Despite being a challenging research field with many unresolved problems, recommender systems are getting more popular in recent years. These systems rely on the personal preferences of users on items given in the form of ratings and return the preferable items based on choices of like-minded users. In this study, a graph-based recommender system using link prediction techniques incorporating similarity metrics is proposed. A graph-based recommender system that has ratings of users on items can be represented as a bipartite graph, where vertices correspond to users and items and edges to ratings. Recommendation generation in a bipartite graph is a link prediction problem. In current literature, modified link prediction approaches are used to distinguish between fundamental relational dualities of like vs. dislike and similar vs. dissimilar. However, the similarity relationship between users/items is mostly disregarded in the complex domain. The proposed model utilizes user-user and item-item cosine similarity value with the relational dualities in order to improve coverage and hits rate of the system by carefully incorporating similarities. On the standard MovieLens Hetrec and MovieLens datasets, the proposed similarity-inclusive link prediction method performed empirically well compared to other methods operating in the complex domain. The experimental results show that the proposed recommender system can be a plausible alternative to overcome the deficiencies in recommender systems.", "which Material ?", "MovieLens", 1183.0, 1192.0], ["
We introduce a solution-processed copper tin sulfide (CTS) thin film to realize high-performance of thin-film transistors (TFT) by optimizing the CTS precursor solution concentration.
", "which Material ?", "Copper tin sulfide (CTS) thin film", NaN, NaN], ["Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.", "which Material ?", "communal knowledge", 102.0, 120.0], ["The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.", "which Material ?", "RDF data", 18.0, 26.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. 
In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "light-emitting diodes", 330.0, 351.0], ["Most question answering (QA) systems over Linked Data, i.e. Knowledge Graphs, approach the question answering task as a conversion from a natural language question to its corresponding SPARQL query. A common approach is to use query templates to generate SPARQL queries with slots that need to be filled. Using templates instead of running an extensive NLP pipeline or end-to-end model shifts the QA problem into a classification task, where the system needs to match the input question to the appropriate template. This paper presents an approach to automatically learn and classify natural language questions into corresponding templates using recursive neural networks. Our model was trained on 5000 questions and their respective SPARQL queries from the preexisting LC-QuAD dataset grounded in DBpedia, spanning 5042 entities and 615 predicates. The resulting model was evaluated using the FAIR GERBIL QA framework resulting in 0.419 macro f-measure on LC-QuAD and 0.417 macro f-measure on QALD-7.", "which Material ?", "LC-QuAD dataset", 770.0, 785.0], ["Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages. MAG on the other hand achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages.", "which Material ?", "7 languages", 604.0, 615.0], ["The integration of different datasets in the Linked Data Cloud is a key aspect to the success of the Web of Data. To tackle this problem most of existent solutions have been supported by the task of entity resolution. 
However, many challenges still prevail specially when considering different types, structures and vocabularies used in the Web. Another common problem is that data usually are incomplete, inconsistent and contain outliers. To overcome these limitations, some works have applied machine learning algorithms since they are typically robust to both noise and data inconsistencies and are able to efficiently utilize nondeterministic dependencies in the data. In this paper we propose an approach based in a relational learning algorithm that addresses the problem by statistical approximation method. Modeling the problem as a relational machine learning task allows exploit contextual information that might be too distant in the relational graph. The joint application of relationship patterns between entities and evidences of similarity between their descriptions can improve the effectiveness of results. Furthermore, it is based on a sparse structure that scales well to large datasets. We present initial experiments based on BTC2012 datasets.", "which Material ?", "Linked Data Cloud", 45.0, 62.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u20132 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "which Material ?", "780 nm, FAPbI3 NCs", 1395.0, 1413.0], ["Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. 
Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information \u2013 meaningful secondary or derived data \u2013 about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known \u201c(elaborated) data products,\u201d for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation \u2013 as machine readable data and their machine-readable meaning \u2013 is not common practice among environmental research infrastructures. For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.", "which Material ?", "Research infrastructures and research communities", 609.0, 658.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "our relational process model", 929.0, 957.0], ["Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering and dialogue. Rooted in a long tradition in knowledge representation, all popular KBs only store positive information, but abstain from taking any stance towards statements not contained in them. In this paper, we make the case for explicitly stating interesting statements which are not true. Negative statements would be important to overcome current limitations of question answering, yet due to their potential abundance, any effort towards compiling them needs a tight coupling with ranking. We introduce two approaches towards automatically compiling negative statements. 
(i) In peer-based statistical inferences, we compare entities with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In pattern-based query log extraction, we use a pattern-based approach for harvesting search engine query logs. Experimental results show that both approaches hold promising and complementary potential. Along with this paper, we publish the first datasets on interesting negative information, containing over 1.4M statements for 130K popular Wikidata entities.", "which Material ?", "Knowledge bases (KBs)", NaN, NaN], ["ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean\u00b1SD of 2.59\u00b10.28mg/L, 2.63\u00b10.32mg/L and 2.67\u00b10.23mg/L at 0 week to 3.55\u00b10.44mg/L, 3.59\u00b10.34mg/L and 3.6\u00b10.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean\u00b1SD of 2.93\u00b10.33mg/L compared to control group\u2019s 4.42\u00b10.30mg/L and gemfibrozil group\u2019s 4.28\u00b10.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).", "which Material ?", "Group A (control)", NaN, NaN], ["Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. 
By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.", "which Material ?", "OntoNotes 5.0 dataset", 764.0, 785.0], ["While Wikipedia exists in 287 languages, its content is unevenly distributed among them. In this work, we investigate the generation of open domain Wikipedia summaries in underserved languages using structured data from Wikidata. To this end, we propose a neural network architecture equipped with copy actions that learns to generate single-sentence and comprehensible textual summaries from Wikidata triples. We demonstrate the effectiveness of the proposed approach by evaluating it against a set of baselines on two languages of different natures: Arabic, a morphological rich language with a larger vocabulary than English, and Esperanto, a constructed language known for its easy acquisition.", "which Material ?", "a neural network architecture", 254.0, 283.0], ["Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and \u201cfeeling left out\u201d can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.", "which Material ?", "underserved audiences", 72.0, 93.0], ["This paper discusses the potential of current advancements in Information Communication Technologies (ICT) for cultural heritage preservation, valorization and management within contemporary cities. The paper highlights the potential of virtual environments to assess the impacts of heritage policies on urban development. It does so by discussing the implications of virtual globes and crowdsourcing to support the participatory valuation and management of cultural heritage assets. To this purpose, a review of available valuation techniques is here presented together with a discussion on how these techniques might be coupled with ICT tools to promote inclusive governance.\u00a0", "which Material ?", "contemporary cities", 186.0, 205.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. 
To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "a building", 24.0, 34.0], ["Background Visual atypicalities in autism spectrum disorder (ASD) are a well documented phenomenon, beginning as early as 2\u20136 months of age and manifesting in a significantly decreased attention to the eyes, direct gaze and socially salient information. Early emerging neurobiological deficits in perceiving social stimuli as rewarding or its active avoidance due to the anxiety it entails have been widely purported as potential reasons for this atypicality. Parallel research evidence also points to the significant benefits of animal presence for reducing social anxiety and enhancing social interaction in children with autism. While atypicality in social attention in ASD has been widely substantiated, whether this atypicality persists equally across species types or is confined to humans has not been a key focus of research insofar. Methods We attempted a comprehensive examination of the differences in visual attention to static images of human and animal faces (40 images; 20 human faces and 20 animal faces) among children with ASD using an eye tracking paradigm. 44 children (ASD n = 21; TD n = 23) participated in the study (10,362 valid observations) across five regions of interest (left eye, right eye, eye region, face and screen). Results Results obtained revealed significantly greater social attention across human and animal stimuli in typical controls when compared to children with ASD. However in children with ASD, a significantly greater attention allocation was seen to animal faces and eye region and lesser attention to the animal mouth when compared to human faces, indicative of a clear attentional preference to socially salient regions of animal stimuli. The positive attentional bias toward animals was also seen in terms of a significantly greater visual attention to direct gaze in animal images. Conclusion Our results suggest the possibility that atypicalities in social attention in ASD may not be uniform across species. It adds to the current neural and biomarker evidence base of the potentially greater social reward processing and lesser social anxiety underlying animal stimuli as compared to human stimuli in children with ASD.", "which Material ?", "five regions of interest (left eye, right eye, eye region, face and screen)", NaN, NaN], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. 
For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "different technical disciplines", 75.0, 106.0], ["ABSTRACT \u2018Heritage Interpretation\u2019 has always been considered as an effective learning, communication and management tool that increases visitors\u2019 awareness of and empathy to heritage sites or artefacts. Yet the definition of \u2018digital heritage interpretation\u2019 is still wide and so far, no significant method and objective are evident within the domain of \u2018digital heritage\u2019 theory and discourse. Considering \u2018digital heritage interpretation\u2019 as a process rather than as a tool to present or communicate with end-users, this paper presents a critical application of a theoretical construct ascertained from multiple disciplines and explicates four objectives for a comprehensive interpretive process. A conceptual model is proposed and further developed into a conceptual framework with fifteen considerations. This framework is then implemented and tested on an online platform to assess its impact on end-users\u2019 interpretation level. We believe the presented interpretive framework (PrEDiC) will help heritage professionals and media designers to develop interpretive heritage project.", "which Material ?", "tool", 117.0, 121.0], ["While Wikipedia exists in 287 languages, its content is unevenly distributed among them. In this work, we investigate the generation of open domain Wikipedia summaries in underserved languages using structured data from Wikidata. To this end, we propose a neural network architecture equipped with copy actions that learns to generate single-sentence and comprehensible textual summaries from Wikidata triples. We demonstrate the effectiveness of the proposed approach by evaluating it against a set of baselines on two languages of different natures: Arabic, a morphological rich language with a larger vocabulary than English, and Esperanto, a constructed language known for its easy acquisition.", "which Material ?", "underserved languages", 171.0, 192.0], ["Abstract Objective To evaluate viral loads at different stages of disease progression in patients infected with the 2019 severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) during the first four months of the epidemic in Zhejiang province, China. Design Retrospective cohort study. Setting A designated hospital for patients with covid-19 in Zhejiang province, China. Participants 96 consecutively admitted patients with laboratory confirmed SARS-CoV-2 infection: 22 with mild disease and 74 with severe disease. Data were collected from 19 January 2020 to 20 March 2020. Main outcome measures Ribonucleic acid (RNA) viral load measured in respiratory, stool, serum, and urine samples. 
Cycle threshold values, a measure of nucleic acid concentration, were plotted onto the standard curve constructed on the basis of the standard product. Epidemiological, clinical, and laboratory characteristics and treatment and outcomes data were obtained through data collection forms from electronic medical records, and the relation between clinical data and disease severity was analysed. Results 3497 respiratory, stool, serum, and urine samples were collected from patients after admission and evaluated for SARS-CoV-2 RNA viral load. Infection was confirmed in all patients by testing sputum and saliva samples. RNA was detected in the stool of 55 (59%) patients and in the serum of 39 (41%) patients. The urine sample from one patient was positive for SARS-CoV-2. The median duration of virus in stool (22 days, interquartile range 17-31 days) was significantly longer than in respiratory (18 days, 13-29 days; P=0.02) and serum samples (16 days, 11-21 days; P<0.001). The median duration of virus in the respiratory samples of patients with severe disease (21 days, 14-30 days) was significantly longer than in patients with mild disease (14 days, 10-21 days; P=0.04). In the mild group, the viral loads peaked in respiratory samples in the second week from disease onset, whereas viral load continued to be high during the third week in the severe group. Virus duration was longer in patients older than 60 years and in male patients. Conclusion The duration of SARS-CoV-2 is significantly longer in stool samples than in respiratory and serum samples, highlighting the need to strengthen the management of stool samples in the prevention and control of the epidemic, and the virus persists longer with higher load and peaks later in the respiratory tissue of patients with severe disease.", "which Material ?", "serum sample", NaN, NaN], ["This Editorial describes the rationale, focus, scope and technology behind the newly launched, open access, innovative Food Modelling Journal (FMJ). The Journal is designed to publish those outputs of the research cycle that usually precede the publication of the research article, but have their own value and re-usability potential. Such outputs are methods, models, software and data. The Food Modelling Journal is launched by the AGINFRA+ community and is integrated with the AGINFRA+ Virtual Research Environment (VRE) to facilitate and streamline the authoring, peer review and publication of the manuscripts via the ARPHA Publishing Platform.", "which Material ?", "data", 390.0, 394.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). 
In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "a consistent mathematical process model", 540.0, 579.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Material ?", "these technologies", 354.0, 372.0], ["Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W\u00b7sr(-1)\u00b7m(-2) at a current density of 232 mA\u00b7cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 \u00b1 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.", "which Material ?", "non-radiative recombination pathways", 1531.0, 1567.0], ["The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. 
Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.", "which Material ?", "protein families", 1346.0, 1362.0], ["To improve designs of e-learning materials, it is necessary to know which word or figure a learner felt \"difficult\" in the materials. In this pilot study, we measured electroencephalography (EEG) and eye gaze data of learners and analyzed to estimate which area they had difficulty to learn. The developed system realized simultaneous measurements of physiological data and subjective evaluations during learning. Using this system, we observed specific EEG activity in difficult pages. Integrating of eye gaze and EEG measurements raised a possibility to determine where a learner felt \"difficult\" in a page of learning materials. From these results, we could suggest that the multimodal measurements of EEG and eye gaze would lead to effective improvement of learning materials. For future study, more data collection using various materials and learners with different backgrounds is necessary. This study could lead to establishing a method to improve e-learning materials based on learners' mental states.", "which Material ?", "which word or figure a learner", 68.0, 98.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. 
The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Material ?", "social media and social networks", 76.0, 108.0], ["The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.", "which Material ?", "popular multi-node RDF data management systems", 561.0, 607.0], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. 
As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Material ?", "similar venues", 866.0, 880.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.", "which Material ?", "the project leader", 178.0, 196.0], ["Neural-symbolic systems combine the strengths of neural networks and symbolic formalisms. In this paper, we introduce a neural-symbolic system which combines restricted Boltzmann machines and probabilistic semi-abstract argumentation. We propose to train networks on argument labellings explaining the data, so that any sampled data outcome is associated with an argument labelling. Argument labellings are integrated as constraints within restricted Boltzmann machines, so that the neural networks are used to learn probabilistic dependencies amongst argument labels. Given a dataset and an argumentation graph as prior knowledge, for every example/case K in the dataset, we use a so-called K-maxconsistent labelling of the graph, and an explanation of case K refers to a K-maxconsistent labelling of the given argumentation graph. The abilities of the proposed system to predict correct labellings were evaluated and compared with standard machine learning techniques. Experiments revealed that such argumentation Boltzmann machines can outperform other classification models, especially in noisy settings.", "which Material ?", "restricted boltzmann machines", 166.0, 195.0], ["Two-dimensional (2D) layered materials, such as MoS2, are greatly attractive for flexible devices due to their unique layered structures, novel physical and electronic properties, and high mechanical strength. However, their limited mechanical strains (<2%) can hardly meet the demands of loading conditions for most flexible and stretchable device applications. In this Article, inspired from Kirigami, the ancient Japanese art of paper cutting, we design and fabricate nanoscale Kirigami architectures of 2D layered MoS2 on a soft substrate of polydimethylsiloxane (PDMS) using a top-down fabrication process. Results show that the Kirigami structures significantly improve the reversible stretchability of flexible 2D MoS2 electronic devices, which is increased from 0.75% to \u223c15%. 
This increase in flexibility is originated from a combination of multidimensional deformation capabilities from the nanoscale Kirigami architectures consisting of in-plane stretching and out-of-plane deformation. We further discover a ...", "which Material ?", "MoS2", 48.0, 52.0], ["Purpose \u2013 The purpose of this paper is to investigate what employers seek when recruiting library and information professionals in the UK and whether professional skills, generic skills or personal qualities are most in demand.Design/methodology/approach \u2013 A content analysis of a sample of 180 advertisements requiring a professional library or information qualification from Chartered Institute of Library and Information Professional's Library + Information Gazette over the period May 2006\u20102007.Findings \u2013 The findings reveal that a multitude of skills and qualities are required in the profession. When the results were compared with Information National Training Organisation and Library and Information Management Employability Skills research, customer service, interpersonal and communication skills, and general computing skills emerged as the requirements most frequently sought by employers. Overall, requirements from the generic skills area were most important to employers, but the research also demonstra...", "which Material ?", "sample of 180 advertisements", 281.0, 309.0], ["Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information \u2013 meaningful secondary or derived data \u2013 about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known \u201c(elaborated) data products,\u201d for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation \u2013 as machine readable data and their machine-readable meaning \u2013 is not common practice among environmental research infrastructures. For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.", "which Material ?", "(elaborated) data products", NaN, NaN],
["Two-dimensional (2D) layered materials are ideal for micro- and nanoelectromechanical systems (MEMS/NEMS) due to their ultimate thinness. Platinum diselenide (PtSe2), an exciting and unexplored 2D transition metal dichalcogenide material, is particularly interesting because its low temperature growth process is scalable and compatible with silicon technology. Here, we report the potential of thin PtSe2 films as electromechanical piezoresistive sensors. All experiments have been conducted with semimetallic PtSe2 films grown by thermally assisted conversion of platinum at a complementary metal\u2013oxide\u2013semiconductor (CMOS)-compatible temperature of 400 \u00b0C. We report high negative gauge factors of up to \u221285 obtained experimentally from PtSe2 strain gauges in a bending cantilever beam setup. Integrated NEMS piezoresistive pressure sensors with freestanding PMMA/PtSe2 membranes confirm the negative gauge factor and exhibit very high sensitivity, outperforming previously reported values by orders of magnitude. We employ density functional theory calculations to understand the origin of the measured negative gauge factor. Our results suggest PtSe2 as a very promising candidate for future NEMS applications, including integration into CMOS production lines.", "which Material ?", "PtSe2", 159.0, 164.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. 
Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Material ?", "added components", 1326.0, 1342.0], ["The method of phylogenetic ancestral sequence reconstruction is a powerful approach for studying evolutionary relationships among protein sequence, structure, and function. In particular, this approach allows investigators to (1) reconstruct and \u201cresurrect\u201d (that is, synthesize in vivo or in vitro) extinct proteins to study how they differ from modern proteins, (2) identify key amino acid changes that, over evolutionary timescales, have altered the function of the protein, and (3) order historical events in the evolution of protein function. Widespread use of this approach has been slow among molecular biologists, in part because the methods require significant computational expertise. Here we present PhyloBot, a web-based software tool that makes ancestral sequence reconstruction easy. Designed for non-experts, it integrates all the necessary software into a single user interface. Additionally, PhyloBot provides interactive tools to explore evolutionary trajectories between ancestors, enabling the rapid generation of hypotheses that can be tested using genetic or biochemical approaches. Early versions of this software were used in previous studies to discover genetic mechanisms underlying the functions of diverse protein families, including V-ATPase ion pumps, DNA-binding transcription regulators, and serine/threonine protein kinases. PhyloBot runs in a web browser, and is available at the following URL: http://www.phylobot.com. The software is implemented in Python using the Django web framework, and runs on elastic cloud computing resources from Amazon Web Services. Users can create and submit jobs on our free server (at the URL listed above), or use our open-source code to launch their own PhyloBot server.", "which creates ?", "PhyloBot", 711.0, 719.0], ["The rapidly expanding body of available genomic and protein structural data provides a rich resource for understanding protein dynamics with biomolecular simulation. While computational infrastructure has grown rapidly, simulations on an omics scale are not yet widespread, primarily because software infrastructure to enable simulations at this scale has not kept pace. It should now be possible to study protein dynamics across entire (super)families, exploiting both available structural biology data and conformational similarities across homologous proteins. Here, we present a new tool for enabling high-throughput simulation in the genomics era. Ensembler takes any set of sequences - from a single sequence to an entire superfamily - and shepherds them through various stages of modeling and refinement to produce simulation-ready structures. This includes comparative modeling to all relevant PDB structures (which may span multiple conformational states of interest), reconstruction of missing loops, addition of missing atoms, culling of nearly identical structures, assignment of appropriate protonation states, solvation in explicit solvent, and refinement and filtering with molecular simulation to ensure stable simulation. 
The output of this pipeline is an ensemble of structures ready for subsequent molecular simulations using computer clusters, supercomputers, or distributed computing projects like Folding@home. Ensembler thus automates much of the time-consuming process of preparing protein models suitable for simulation, while allowing scalability up to entire superfamilies. A particular advantage of this approach can be found in the construction of kinetic models of conformational dynamics - such as Markov state models (MSMs) - which benefit from a diverse array of initial configurations that span the accessible conformational states to aid sampling. We demonstrate the power of this approach by constructing models for all catalytic domains in the human tyrosine kinase family, using all available kinase catalytic domain structures from any organism as structural templates. Ensembler is free and open source software licensed under the GNU General Public License (GPL) v2. It is compatible with Linux and OS X. The latest release can be installed via the conda package manager, and the latest source can be downloaded from https://github.com/choderalab/ensembler.", "which creates ?", "Ensembler", 653.0, 662.0], ["The quantification of cell shape, cell migration, and cell rearrangements is important for addressing classical questions in developmental biology such as patterning and tissue morphogenesis. Time-lapse microscopic imaging of transgenic embryos expressing fluorescent reporters is the method of choice for tracking morphogenetic changes and establishing cell lineages and fate maps in vivo. However, the manual steps involved in curating thousands of putative cell segmentations have been a major bottleneck in the application of these technologies especially for cell membranes. Segmentation of cell membranes while more difficult than nuclear segmentation is necessary for quantifying the relations between changes in cell morphology and morphogenesis. We present a novel and fully automated method to first reconstruct membrane signals and then segment out cells from 3D membrane images even in dense tissues. The approach has three stages: 1) detection of local membrane planes, 2) voting to fill structural gaps, and 3) region segmentation. We demonstrate the superior performance of the algorithms quantitatively on time-lapse confocal and two-photon images of zebrafish neuroectoderm and paraxial mesoderm by comparing its results with those derived from human inspection. We also compared with synthetic microscopic images generated by simulating the process of imaging with fluorescent reporters under varying conditions of noise. Both the over-segmentation and under-segmentation percentages of our method are around 5%. The volume overlap of individual cells, compared to expert manual segmentation, is consistently over 84%. By using our software (ACME) to study somite formation, we were able to segment touching cells with high accuracy and reliably quantify changes in morphogenetic parameters such as cell shape and size, and the arrangement of epithelial and mesenchymal cells. Our software has been developed and tested on Windows, Mac, and Linux platforms and is available publicly under an open source BSD license (https://github.com/krm15/ACME).", "which creates ?", "ACME", 1660.0, 1664.0], ["Somatic copy number variations (CNVs) play a crucial role in development of many human cancers. 
The broad availability of next-generation sequencing data has enabled the development of algorithms to computationally infer CNV profiles from a variety of data types including exome and targeted sequence data; currently the most prevalent types of cancer genomics data. However, systemic evaluation and comparison of these tools remains challenging due to a lack of ground truth reference sets. To address this need, we have developed Bamgineer, a tool written in Python to introduce user-defined haplotype-phased allele-specific copy number events into an existing Binary Alignment Mapping (BAM) file, with a focus on targeted and exome sequencing experiments. As input, this tool requires a read alignment file (BAM format), lists of non-overlapping genome coordinates for introduction of gains and losses (bed file), and an optional file defining known haplotypes (vcf format). To improve runtime performance, Bamgineer introduces the desired CNVs in parallel using queuing and parallel processing on a local machine or on a high-performance computing cluster. As proof-of-principle, we applied Bamgineer to a single high-coverage (mean: 220X) exome sequence file from a blood sample to simulate copy number profiles of 3 exemplar tumors from each of 10 tumor types at 5 tumor cellularity levels (20-100%, 150 BAM files in total). To demonstrate feasibility beyond exome data, we introduced read alignments to a targeted 5-gene cell-free DNA sequencing library to simulate EGFR amplifications at frequencies consistent with circulating tumor DNA (10, 1, 0.1 and 0.01%) while retaining the multimodal insert size distribution of the original data. We expect Bamgineer to be of use for development and systematic benchmarking of CNV calling algorithms by users using locally-generated data for a variety of applications. The source code is freely available at http://github.com/pughlab/bamgineer. Author summary: We present Bamgineer, a software program to introduce user-defined, haplotype-specific copy number variants (CNVs) at any frequency into standard Binary Alignment Mapping (BAM) files. Copy number gains are simulated by introducing new DNA sequencing read pairs sampled from existing reads and modified to contain SNPs of the haplotype of interest. This approach retains biases of the original data such as local coverage, strand bias, and insert size. Deletions are simulated by removing reads corresponding to one or both haplotypes. In our proof-of-principle study, we simulated copy number profiles from 10 cancer types at varying cellularity levels typically encountered in clinical samples. We also demonstrated introduction of low frequency CNVs into cell-free DNA sequencing data that retained the bimodal fragment size distribution characteristic of these data. Bamgineer is flexible and enables users to simulate CNVs that reflect characteristics of locally-generated sequence files and can be used for many applications including development and benchmarking of CNV inference tools for a variety of data types.", "which creates ?", "Bamgineer", 573.0, 582.0], ["Characterization of Human Endogenous Retrovirus (HERV) expression within the transcriptomic landscape using RNA-seq is complicated by uncertainty in fragment assignment because of sequence similarity. We present Telescope, a computational software tool that provides accurate estimation of transposable element expression (retrotranscriptome) resolved to specific genomic locations. 
Telescope directly addresses uncertainty in fragment assignment by reassigning ambiguously mapped fragments to the most probable source transcript as determined within a Bayesian statistical model. We demonstrate the utility of our approach through single locus analysis of HERV expression in 13 ENCODE cell types. When examined at this resolution, we find that the magnitude and breadth of the retrotranscriptome can be vastly different among cell types. Furthermore, our approach is robust to differences in sequencing technology and demonstrates that the retrotranscriptome has potential to be used for cell type identification. We compared our tool with other approaches for quantifying transposable element (TE) expression, and found that Telescope has the greatest resolution, as it estimates expression at specific TE insertions rather than at the TE subfamily level. Telescope performs highly accurate quantification of the retrotranscriptomic landscape in RNA-seq experiments, revealing a differential complexity in the transposable element biology of complex systems not previously observed. Telescope is available at https://github.com/mlbendall/telescope.", "which creates ?", "Telescope", 212.0, 221.0], ["PathVisio is a commonly used pathway editor, visualization and analysis software. Biological pathways have been used by biologists for many years to describe the detailed steps in biological processes. Those powerful, visual representations help researchers to better understand, share and discuss knowledge. Since the first publication of PathVisio in 2008, the original paper was cited more than 170 times and PathVisio was used in many different biological studies. As an online editor PathVisio is also integrated in the community curated pathway database WikiPathways. Here we present the third version of PathVisio with the newest additions and improvements of the application. The core features of PathVisio are pathway drawing, advanced data visualization and pathway statistics. Additionally, PathVisio 3 introduces a new powerful extension systems that allows other developers to contribute additional functionality in form of plugins without changing the core application. PathVisio can be downloaded from http://www.pathvisio.org and in 2014 PathVisio 3 has been downloaded over 5,500 times. There are already more than 15 plugins available in the central plugin repository. PathVisio is a freely available, open-source tool published under the Apache 2.0 license (http://www.apache.org/licenses/LICENSE-2.0). It is implemented in Java and thus runs on all major operating systems. The code repository is available at http://svn.bigcat.unimaas.nl/pathvisio. The support mailing list for users is available on https://groups.google.com/forum/#!forum/wikipathways-discuss and for developers on https://groups.google.com/forum/#!forum/wikipathways-devel.", "which creates ?", "PathVisio", 0.0, 9.0], ["The analysis of the mutational landscape of cancer, including mutual exclusivity and co-occurrence of mutations, has been instrumental in studying the disease. We hypothesized that exploring the interplay between co-occurrence, mutual exclusivity, and functional interactions between genes will further improve our understanding of the disease and help to uncover new relations between cancer driving genes and pathways. To this end, we designed a general framework, BeWith, for identifying modules with different combinations of mutation and interaction patterns. 
We focused on three different settings of the BeWith schema: (i) BeME-WithFun, in which the relations between modules are enriched with mutual exclusivity, while genes within each module are functionally related; (ii) BeME-WithCo, which combines mutual exclusivity between modules with co-occurrence within modules; and (iii) BeCo-WithMEFun, which ensures co-occurrence between modules, while the within module relations combine mutual exclusivity and functional interactions. We formulated the BeWith framework using Integer Linear Programming (ILP), enabling us to find optimally scoring sets of modules. Our results demonstrate the utility of BeWith in providing novel information about mutational patterns, driver genes, and pathways. In particular, BeME-WithFun helped identify functionally coherent modules that might be relevant for cancer progression. In addition to finding previously well-known drivers, the identified modules pointed to other novel findings such as the interaction between NCOR2 and NCOA3 in breast cancer. Additionally, an application of the BeME-WithCo setting revealed that gene groups differ with respect to their vulnerability to different mutagenic processes, and helped us to uncover pairs of genes with potentially synergistic effects, including a potential synergy between mutations in TP53 and the metastasis related DCC gene. Overall, BeWith not only helped us uncover relations between potential driver genes and pathways, but also provided additional insights on patterns of the mutational landscape, going beyond cancer driving mutations. Implementation is available at https://www.ncbi.nlm.nih.gov/CBBresearch/Przytycka/software/bewith.html", "which creates ?", "BeWith", 467.0, 473.0], ["Nonribosomally and ribosomally synthesized bioactive peptides constitute a source of molecules of great biomedical importance, including antibiotics such as penicillin, immunosuppressants such as cyclosporine, and cytostatics such as bleomycin. Recently, an innovative mass-spectrometry-based strategy, peptidogenomics, has been pioneered to effectively mine microbial strains for novel peptidic metabolites. Even though mass-spectrometric peptide detection can be performed quite fast, true high-throughput natural product discovery approaches have still been limited by the inability to rapidly match the identified tandem mass spectra to the gene clusters responsible for the biosynthesis of the corresponding compounds. With Pep2Path, we introduce a software package to fully automate the peptidogenomics approach through the rapid Bayesian probabilistic matching of mass spectra to their corresponding biosynthetic gene clusters. Detailed benchmarking of the method shows that the approach is powerful enough to correctly identify gene clusters even in data sets that consist of hundreds of genomes, which also makes it possible to match compounds from unsequenced organisms to closely related biosynthetic gene clusters in other genomes. Applying Pep2Path to a data set of compounds without known biosynthesis routes, we were able to identify candidate gene clusters for the biosynthesis of five important compounds. Notably, one of these clusters was detected in a genome from a different subphylum of Proteobacteria than that in which the molecule had first been identified. All in all, our approach paves the way towards high-throughput discovery of novel peptidic natural products. 
Pep2Path is freely available from http://pep2path.sourceforge.net/, implemented in Python, licensed under the GNU General Public License v3 and supported on MS Windows, Linux and Mac OS X.", "which creates ?", "Pep2Path", 729.0, 737.0], ["We present ggsashimi, a command-line tool for the visualization of splicing events across multiple samples. Given a specified genomic region, ggsashimi creates sashimi plots for individual RNA-seq experiments as well as aggregated plots for groups of experiments, a feature unique to this software. Compared to the existing versions of programs generating sashimi plots, it uses popular bioinformatics file formats, it is annotation-independent, and allows the visualization of splicing events even for large genomic regions by scaling down the genomic segments between splice sites. ggsashimi is freely available at https://github.com/guigolab/ggsashimi. It is implemented in python, and internally generates R code for plotting.", "which creates ?", "ggsashimi", 11.0, 20.0], ["Detecting similarities between ligand binding sites in the absence of global homology between target proteins has been recognized as one of the critical components of modern drug discovery. Local binding site alignments can be constructed using sequence order-independent techniques, however, to achieve a high accuracy, many current algorithms for binding site comparison require high-quality experimental protein structures, preferably in the bound conformational state. This, in turn, complicates proteome scale applications, where only various quality structure models are available for the majority of gene products. To improve the state-of-the-art, we developed eMatchSite, a new method for constructing sequence order-independent alignments of ligand binding sites in protein models. Large-scale benchmarking calculations using adenine-binding pockets in crystal structures demonstrate that eMatchSite generates accurate alignments for almost three times more protein pairs than SOIPPA. More importantly, eMatchSite offers a high tolerance to structural distortions in ligand binding regions in protein models. For example, the percentage of correctly aligned pairs of adenine-binding sites in weakly homologous protein models is only 4\u20139% lower than those aligned using crystal structures. This represents a significant improvement over other algorithms, e.g. the performance of eMatchSite in recognizing similar binding sites is 6% and 13% higher than that of SiteEngine using high- and moderate-quality protein models, respectively. Constructing biologically correct alignments using predicted ligand binding sites in protein models opens up the possibility to investigate drug-protein interaction networks for complete proteomes with prospective systems-level applications in polypharmacology and rational drug repositioning. eMatchSite is freely available to the academic community as a web-server and a stand-alone software distribution at http://www.brylinski.org/ematchsite.", "which creates ?", "eMatchSite", 668.0, 678.0], ["Imaging and analyzing the locomotion behavior of small animals such as Drosophila larvae or C. elegans worms has become an integral subject of biological research. In the past we have introduced FIM, a novel imaging system feasible to extract high contrast images. This system in combination with the associated tracking software FIMTrack is already used by many groups all over the world. However, so far there has not been an in-depth discussion of the technical aspects. 
Here we elaborate on the implementation details of FIMTrack and give an in-depth explanation of the used algorithms. Among others, the software offers several tracking strategies to cover a wide range of different model organisms, locomotion types, and camera properties. Furthermore, the software facilitates stimuli-based analysis in combination with built-in manual tracking and correction functionalities. All features are integrated in an easy-to-use graphical user interface. To demonstrate the potential of FIMTrack we provide an evaluation of its accuracy using manually labeled data. The source code is available under the GNU GPLv3 at https://github.com/i-git/FIMTrack and pre-compiled binaries for Windows and Mac are available at http://fim.uni-muenster.de.", "which creates ?", "FIMTrack", 330.0, 338.0], ["Advances in computational metabolic optimization are required to realize the full potential of new in vivo metabolic engineering technologies by bridging the gap between computational design and strain development. We present Redirector, a new Flux Balance Analysis-based framework for identifying engineering targets to optimize metabolite production in complex pathways. Previous optimization frameworks have modeled metabolic alterations as directly controlling fluxes by setting particular flux bounds. Redirector develops a more biologically relevant approach, modeling metabolic alterations as changes in the balance of metabolic objectives in the system. This framework iteratively selects enzyme targets, adds the associated reaction fluxes to the metabolic objective, thereby incentivizing flux towards the production of a metabolite of interest. These adjustments to the objective act in competition with cellular growth and represent up-regulation and down-regulation of enzyme mediated reactions. Using the iAF1260 E. coli metabolic network model for optimization of fatty acid production as a test case, Redirector generates designs with as many as 39 simultaneous and 111 unique engineering targets. These designs discover proven in vivo targets, novel supporting pathways and relevant interdependencies, many of which cannot be predicted by other methods. Redirector is available as open and free software, scalable to computational resources, and powerful enough to find all known enzyme targets for fatty acid production.", "which creates ?", "Redirector", 226.0, 236.0], ["Since its identification in 1983, HIV-1 has been the focus of a research effort unprecedented in scope and difficulty, whose ultimate goals \u2014 a cure and a vaccine \u2013 remain elusive. One of the fundamental challenges in accomplishing these goals is the tremendous genetic variability of the virus, with some genes differing at as many as 40% of nucleotide positions among circulating strains. Because of this, the genetic bases of many viral phenotypes, most notably the susceptibility to neutralization by a particular antibody, are difficult to identify computationally. Drawing upon open-source general-purpose machine learning algorithms and libraries, we have developed a software package IDEPI (IDentify EPItopes) for learning genotype-to-phenotype predictive models from sequences with known phenotypes. IDEPI can apply learned models to classify sequences of unknown phenotypes, and also identify specific sequence features which contribute to a particular phenotype. 
We demonstrate that IDEPI achieves performance similar to or better than that of previously published approaches on four well-studied problems: finding the epitopes of broadly neutralizing antibodies (bNab), determining coreceptor tropism of the virus, identifying compartment-specific genetic signatures of the virus, and deducing drug-resistance associated mutations. The cross-platform Python source code (released under the GPL 3.0 license), documentation, issue tracking, and a pre-configured virtual machine for IDEPI can be found at https://github.com/veg/idepi.", "which creates ?", "IDEPI", 692.0, 697.0], ["A metabolome-wide genome-wide association study (mGWAS) aims to discover the effects of genetic variants on metabolome phenotypes. Most mGWASes use as phenotypes concentrations of limited sets of metabolites that can be identified and quantified from spectral information. In contrast, in an untargeted mGWAS both identification and quantification are forgone and, instead, all measured metabolome features are tested for association with genetic variants. While the untargeted approach does not discard data that may have eluded identification, the interpretation of associated features remains a challenge. To address this issue, we developed metabomatching to identify the metabolites underlying significant associations observed in untargeted mGWASes on proton NMR metabolome data. Metabomatching capitalizes on genetic spiking, the concept that because metabolome features associated with a genetic variant tend to correspond to the peaks of the NMR spectrum of the underlying metabolite, genetic association can allow for identification. Applied to the untargeted mGWASes in the SHIP and CoLaus cohorts and using 180 reference NMR spectra of the urine metabolome database, metabomatching successfully identified the underlying metabolite in 14 of 19, and 8 of 9 associations, respectively. The accuracy and efficiency of our method make it a strong contender for facilitating or complementing metabolomics analyses in large cohorts, where the availability of genetic, or other data, enables our approach, but targeted quantification is limited.", "which deposits ?", "Metabomatching", 645.0, 659.0], ["The use of 3C-based methods has revealed the importance of the 3D organization of the chromatin for key aspects of genome biology. However, the different caveats of the variants of 3C techniques have limited their scope and the range of scientific fields that could benefit from these approaches. To address these limitations, we present 4Cin, a method to generate 3D models and derive virtual Hi-C (vHi-C) heat maps of genomic loci based on 4C-seq or any kind of 4C-seq-like data, such as those derived from NG Capture-C. 3D genome organization is determined by integrative consideration of the spatial distances derived from as few as four 4C-seq experiments. The 3D models obtained from 4C-seq data, together with their associated vHi-C maps, allow the inference of all chromosomal contacts within a given genomic region, facilitating the identification of Topological Associating Domains (TAD) boundaries. Thus, 4Cin offers a much cheaper, accessible and versatile alternative to other available techniques while providing a comprehensive 3D topological profiling. 
By studying TAD modifications in genomic structural variants associated to disease phenotypes and performing cross-species evolutionary comparisons of 3D chromatin structures in a quantitative manner, we demonstrate the broad potential and novel range of applications of our method.", "which deposits ?", "4Cin", 338.0, 342.0], ["The rapidly expanding body of available genomic and protein structural data provides a rich resource for understanding protein dynamics with biomolecular simulation. While computational infrastructure has grown rapidly, simulations on an omics scale are not yet widespread, primarily because software infrastructure to enable simulations at this scale has not kept pace. It should now be possible to study protein dynamics across entire (super)families, exploiting both available structural biology data and conformational similarities across homologous proteins. Here, we present a new tool for enabling high-throughput simulation in the genomics era. Ensembler takes any set of sequences - from a single sequence to an entire superfamily - and shepherds them through various stages of modeling and refinement to produce simulation-ready structures. This includes comparative modeling to all relevant PDB structures (which may span multiple conformational states of interest), reconstruction of missing loops, addition of missing atoms, culling of nearly identical structures, assignment of appropriate protonation states, solvation in explicit solvent, and refinement and filtering with molecular simulation to ensure stable simulation. The output of this pipeline is an ensemble of structures ready for subsequent molecular simulations using computer clusters, supercomputers, or distributed computing projects like Folding@home. Ensembler thus automates much of the time-consuming process of preparing protein models suitable for simulation, while allowing scalability up to entire superfamilies. A particular advantage of this approach can be found in the construction of kinetic models of conformational dynamics - such as Markov state models (MSMs) - which benefit from a diverse array of initial configurations that span the accessible conformational states to aid sampling. We demonstrate the power of this approach by constructing models for all catalytic domains in the human tyrosine kinase family, using all available kinase catalytic domain structures from any organism as structural templates. Ensembler is free and open source software licensed under the GNU General Public License (GPL) v2. It is compatible with Linux and OS X. The latest release can be installed via the conda package manager, and the latest source can be downloaded from https://github.com/choderalab/ensembler.", "which deposits ?", "Ensembler", 653.0, 662.0], ["Epigenetic regulation consists of a multitude of different modifications that determine active and inactive states of chromatin. Conditions such as cell differentiation or exposure to environmental stress require concerted changes in gene expression. To interpret epigenomics data, a spectrum of different interconnected datasets is needed, ranging from the genome sequence and positions of histones, together with their modifications and variants, to the transcriptional output of genomic regions. Here we present a tool, Podbat (Positioning database and analysis tool), that incorporates data from various sources and allows detailed dissection of the entire range of chromatin modifications simultaneously. 
Podbat can be used to analyze, visualize, store and share epigenomics data. Among other functions, Podbat allows data-driven determination of genome regions of differential protein occupancy or RNA expression using Hidden Markov Models. Comparisons between datasets are facilitated to enable the study of the comprehensive chromatin modification system simultaneously, irrespective of data-generating technique. Any organism with a sequenced genome can be accommodated. We exemplify the power of Podbat by reanalyzing all to-date published genome-wide data for the histone variant H2A.Z in fission yeast together with other histone marks and also phenotypic response data from several sources. This meta-analysis led to the unexpected finding of H2A.Z incorporation in the coding regions of genes encoding proteins involved in the regulation of meiosis and genotoxic stress responses. This incorporation was partly independent of the H2A.Z-incorporating remodeller Swr1. We verified an Swr1-independent role for H2A.Z following genotoxic stress in vivo. Podbat is open source software freely downloadable from www.podbat.org, distributed under the GNU LGPL license. User manuals, test data and instructions are available at the website, as well as a repository for third party\u2013developed plug-in modules. Podbat requires Java version 1.6 or higher.", "which deposits ?", "Podbat", 523.0, 529.0], ["Active matter systems, and in particular the cell cytoskeleton, exhibit complex mechanochemical dynamics that are still not well understood. While prior computational models of cytoskeletal dynamics have lead to many conceptual insights, an important niche still needs to be filled with a high-resolution structural modeling framework, which includes a minimally-complete set of cytoskeletal chemistries, stochastically treats reaction and diffusion processes in three spatial dimensions, accurately and efficiently describes mechanical deformations of the filamentous network under stresses generated by molecular motors, and deeply couples mechanics and chemistry at high spatial resolution. To address this need, we propose a novel reactive coarse-grained force field, as well as a publicly available software package, named the Mechanochemical Dynamics of Active Networks (MEDYAN), for simulating active network evolution and dynamics (available at www.medyan.org). This model can be used to study the non-linear, far from equilibrium processes in active matter systems, in particular, comprised of interacting semi-flexible polymers embedded in a solution with complex reaction-diffusion processes. In this work, we applied MEDYAN to investigate a contractile actomyosin network consisting of actin filaments, alpha-actinin cross-linking proteins, and non-muscle myosin IIA mini-filaments. We found that these systems undergo a switch-like transition in simulations from a random network to ordered, bundled structures when cross-linker concentration is increased above a threshold value, inducing contraction driven by myosin II mini-filaments. Our simulations also show how myosin II mini-filaments, in tandem with cross-linkers, can produce a range of actin filament polarity distributions and alignment, which is crucially dependent on the rate of actin filament turnover and the actin filament\u2019s resulting super-diffusive behavior in the actomyosin-cross-linker system. We discuss the biological implications of these findings for the arc formation in lamellipodium-to-lamellum architectural remodeling. 
Lastly, our simulations produce force-dependent accumulation of myosin II, which is thought to be responsible for their mechanosensation ability, also spontaneously generating myosin II concentration gradients in the solution phase of the simulation volume.", "which deposits ?", "MEDYAN", 877.0, 883.0], ["Accurate mapping of next-generation sequencing (NGS) reads to reference genomes is crucial for almost all NGS applications and downstream analyses. Various repetitive elements in human and other higher eukaryotic genomes contribute in large part to ambiguously (non-uniquely) mapped reads. Most available NGS aligners attempt to address this by either removing all non-uniquely mapping reads, or reporting one random or \"best\" hit based on simple heuristics. Accurate estimation of the mapping quality of NGS reads is therefore critical albeit completely lacking at present. Here we developed a generalized software toolkit \"AlignerBoost\", which utilizes a Bayesian-based framework to accurately estimate mapping quality of ambiguously mapped NGS reads. We tested AlignerBoost with both simulated and real DNA-seq and RNA-seq datasets at various thresholds. In most cases, but especially for reads falling within repetitive regions, AlignerBoost dramatically increases the mapping precision of modern NGS aligners without significantly compromising the sensitivity even without mapping quality filters. When using higher mapping quality cutoffs, AlignerBoost achieves a much lower false mapping rate while exhibiting comparable or higher sensitivity compared to the aligner default modes, therefore significantly boosting the detection power of NGS aligners even using extreme thresholds. AlignerBoost is also SNP-aware, and higher quality alignments can be achieved if provided with known SNPs. AlignerBoost\u2019s algorithm is computationally efficient, and can process one million alignments within 30 seconds on a typical desktop computer. AlignerBoost is implemented as a uniform Java application and is freely available at https://github.com/Grice-Lab/AlignerBoost.", "which deposits ?", "AlignerBoost", 625.0, 637.0], ["I introduce an open-source R package \u2018dcGOR\u2019 to provide the bioinformatics community with the ease to analyse ontologies and protein domain annotations, particularly those in the dcGO database. The dcGO is a comprehensive resource for protein domain annotations using a panel of ontologies including Gene Ontology. Although increasing in popularity, this database needs statistical and graphical support to meet its full potential. Moreover, there are no bioinformatics tools specifically designed for domain ontology analysis. As an add-on package built in the R software environment, dcGOR offers a basic infrastructure with great flexibility and functionality. It implements new data structure to represent domains, ontologies, annotations, and all analytical outputs as well. For each ontology, it provides various mining facilities, including: (i) domain-based enrichment analysis and visualisation; (ii) construction of a domain (semantic similarity) network according to ontology annotations; and (iii) significance analysis for estimating a contact (statistical significance) network. To reduce runtime, most analyses support high-performance parallel computing. Taking as inputs a list of protein domains of interest, the package is able to easily carry out in-depth analyses in terms of functional, phenotypic and diseased relevance, and network-level understanding. 
More importantly, dcGOR is designed to allow users to import and analyse their own ontologies and annotations on domains (taken from SCOP, Pfam and InterPro) and RNAs (from Rfam) as well. The package is freely available at CRAN for easy installation, and also at GitHub for version control. The dedicated website with reproducible demos can be found at http://supfam.org/dcGOR.", "which deposits ?", "dcGOR", 38.0, 43.0], ["Transmembrane channel proteins play pivotal roles in maintaining the homeostasis and responsiveness of cells and the cross-membrane electrochemical gradient by mediating the transport of ions and molecules through biological membranes. Therefore, computational methods which, given a set of 3D coordinates, can automatically identify and describe channels in transmembrane proteins are key tools to provide insights into how they function. Herein we present PoreWalker, a fully automated method, which detects and fully characterises channels in transmembrane proteins from their 3D structures. A stepwise procedure is followed in which the pore centre and pore axis are first identified and optimised using geometric criteria, and then the biggest and longest cavity through the channel is detected. Finally, pore features, including diameter profiles, pore-lining residues, size, shape and regularity of the pore are calculated, providing a quantitative and visual characterization of the channel. To illustrate the use of this tool, the method was applied to several structures of transmembrane channel proteins and was able to identify shape/size/residue features representative of specific channel families. The software is available as a web-based resource at http://www.ebi.ac.uk/thornton-srv/software/PoreWalker/.", "which deposits ?", "software", 1217.0, 1225.0], ["Chemical reaction networks are ubiquitous in biology, and their dynamics is fundamentally stochastic. Here, we present the software library pSSAlib, which provides a complete and concise implementation of the most efficient partial-propensity methods for simulating exact stochastic chemical kinetics. pSSAlib can import models encoded in Systems Biology Markup Language, supports time delays in chemical reactions, and stochastic spatiotemporal reaction-diffusion systems. It also provides tools for statistical analysis of simulation results and supports multiple output formats. It has previously been used for studies of biochemical reaction pathways and to benchmark other stochastic simulation methods. Here, we describe pSSAlib in detail and apply it to a new model of the endocytic pathway in eukaryotic cells, leading to the discovery of a stochastic counterpart of the cut-out switch motif underlying early-to-late endosome conversion. pSSAlib is provided as a stand-alone command-line tool and as a developer API. We also provide a plug-in for the SBMLToolbox. The open-source code and pre-packaged installers are freely available from http://mosaic.mpi-cbg.de.", "which deposits ?", "pSSAlib", 140.0, 147.0], ["Chaste \u2014 Cancer, Heart And Soft Tissue Environment \u2014 is an open source C++ library for the computational simulation of mathematical models developed for physiology and biology. Code development has been driven by two initial applications: cardiac electrophysiology and cancer development. A large number of cardiac electrophysiology studies have been enabled and performed, including high-performance computational investigations of defibrillation on realistic human cardiac geometries. 
New models for the initiation and growth of tumours have been developed. In particular, cell-based simulations have provided novel insight into the role of stem cells in the colorectal crypt. Chaste is constantly evolving and is now being applied to a far wider range of problems. The code provides modules for handling common scientific computing components, such as meshes and solvers for ordinary and partial differential equations (ODEs/PDEs). Re-use of these components avoids the need for researchers to \u2018re-invent the wheel\u2019 with each new project, accelerating the rate of progress in new applications. Chaste is developed using industrially-derived techniques, in particular test-driven development, to ensure code quality, re-use and reliability. In this article we provide examples that illustrate the types of problems Chaste can be used to solve, which can be run on a desktop computer. We highlight some scientific studies that have used or are using Chaste, and the insights they have provided. The source code, both for specific releases and the development version, is available to download under an open source Berkeley Software Distribution (BSD) licence at http://www.cs.ox.ac.uk/chaste, together with details of a mailing list and links to documentation and tutorials.", "which deposits ?", "Chaste", 0.0, 6.0], ["Movement is fundamental to human and animal life, emerging through interaction of complex neural, muscular, and skeletal systems. Study of movement draws from and contributes to diverse fields, including biology, neuroscience, mechanics, and robotics. OpenSim unites methods from these fields to create fast and accurate simulations of movement, enabling two fundamental tasks. First, the software can calculate variables that are difficult to measure experimentally, such as the forces generated by muscles and the stretch and recoil of tendons during movement. Second, OpenSim can predict novel movements from models of motor control, such as kinematic adaptations of human gait during loaded or inclined walking. Changes in musculoskeletal dynamics following surgery or due to human\u2013device interaction can also be simulated; these simulations have played a vital role in several applications, including the design of implantable mechanical devices to improve human grasping in individuals with paralysis. OpenSim is an extensible and user-friendly software package built on decades of knowledge about computational modeling and simulation of biomechanical systems. OpenSim\u2019s design enables computational scientists to create new state-of-the-art software tools and empowers others to use these tools in research and clinical applications. OpenSim supports a large and growing community of biomechanics and rehabilitation researchers, facilitating exchange of models and simulations for reproducing and extending discoveries. Examples, tutorials, documentation, and an active user forum support this community. The OpenSim software is covered by the Apache License 2.0, which permits its use for any purpose including both nonprofit and commercial applications. The source code is freely and anonymously accessible on GitHub, where the community is welcomed to make contributions. 
Platform-specific installers of OpenSim include a GUI and are available on simtk.org.", "which deposits ?", "OpenSim", 252.0, 259.0], ["PhyloGibbs, our recent Gibbs-sampling motif-finder, takes phylogeny into account in detecting binding sites for transcription factors in DNA and assigns posterior probabilities to its predictions obtained by sampling the entire configuration space. Here, in an extension called PhyloGibbs-MP, we widen the scope of the program, addressing two major problems in computational regulatory genomics. First, PhyloGibbs-MP can localise predictions to small, undetermined regions of a large input sequence, thus effectively predicting cis-regulatory modules (CRMs) ab initio while simultaneously predicting binding sites in those modules\u2014tasks that are usually done by two separate programs. PhyloGibbs-MP's performance at such ab initio CRM prediction is comparable with or superior to dedicated module-prediction software that use prior knowledge of previously characterised transcription factors. Second, PhyloGibbs-MP can predict motifs that differentiate between two (or more) different groups of regulatory regions, that is, motifs that occur preferentially in one group over the others. While other \u201cdiscriminative motif-finders\u201d have been published in the literature, PhyloGibbs-MP's implementation has some unique features and flexibility. Benchmarks on synthetic and actual genomic data show that this algorithm is successful at enhancing predictions of differentiating sites and suppressing predictions of common sites and compares with or outperforms other discriminative motif-finders on actual genomic data. Additional enhancements include significant performance and speed improvements, the ability to use \u201cinformative priors\u201d on known transcription factors, and the ability to output annotations in a format that can be visualised with the Generic Genome Browser. In stand-alone motif-finding, PhyloGibbs-MP remains competitive, outperforming PhyloGibbs-1.0 and other programs on benchmark data.", "which deposits ?", "PhyloGibbs-MP", 278.0, 291.0], ["Abstract Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal \u201cvirtual laboratory\u201d for such multicellular systems simulates both the biochemical microenvironment (the \u201cstage\u201d) and many mechanically and biochemically interacting cells (the \u201cplayers\u201d upon the stage). PhysiCell\u2014physics-based multicellular simulator\u2014is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility \u201cout of the box.\u201d The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations up to 10^5-10^6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. 
We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a \u201ccellular cargo delivery\u201d system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net. Author Summary This paper introduces PhysiCell: an open source, agent-based modeling framework for 3-D multicellular simulations. It includes a standard library of sub-models for cell fluid and solid volume changes, cycle progression, apoptosis, necrosis, mechanics, and motility. PhysiCell is directly coupled to a biotransport solver to simulate many diffusing substrates and cell-secreted signals. Each cell can dynamically update its phenotype based on its microenvironmental conditions. Users can customize or replace the included sub-models. PhysiCell runs on a variety of platforms (Linux, OSX, and Windows) with few software dependencies. Its computational cost scales linearly in the number of cells. It is feasible to simulate 500,000 cells on quad-core desktop workstations, and millions of cells on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on hanging drop tumor spheroids (HDS) and ductal carcinoma in situ (DCIS) of the breast. We demonstrate contact- and chemokine-based interactions among multiple cell types with examples in synthetic multicellular bioengineering, cancer heterogeneity, and cancer immunology. We developed PhysiCell to help the scientific community tackle multicellular systems biology problems involving many interacting cells in multi-substrate microenvironments. PhysiCell is also an independent, cross-platform codebase for replicating results from other simulators.", "which deposits ?", "PhysiCell", 468.0, 477.0], ["Imaging and analyzing the locomotion behavior of small animals such as Drosophila larvae or C. elegans worms has become an integral subject of biological research. In the past we have introduced FIM, a novel imaging system feasible to extract high contrast images. This system in combination with the associated tracking software FIMTrack is already used by many groups all over the world. However, so far there has not been an in-depth discussion of the technical aspects. Here we elaborate on the implementation details of FIMTrack and give an in-depth explanation of the used algorithms. Among others, the software offers several tracking strategies to cover a wide range of different model organisms, locomotion types, and camera properties. Furthermore, the software facilitates stimuli-based analysis in combination with built-in manual tracking and correction functionalities. All features are integrated in an easy-to-use graphical user interface. 
To demonstrate the potential of FIMTrack we provide an evaluation of its accuracy using manually labeled data. The source code is available under the GNU GPLv3 at https://github.com/i-git/FIMTrack and pre-compiled binaries for Windows and Mac are available at http://fim.uni-muenster.de.", "which deposits ?", "FIMTrack", 330.0, 338.0], ["Abstract Somatic copy number variations (CNVs) play a crucial role in development of many human cancers. The broad availability of next-generation sequencing data has enabled the development of algorithms to computationally infer CNV profiles from a variety of data types including exome and targeted sequence data; currently the most prevalent types of cancer genomics data. However, systemic evaluation and comparison of these tools remains challenging due to a lack of ground truth reference sets. To address this need, we have developed Bamgineer, a tool written in Python to introduce user-defined haplotype-phased allele-specific copy number events into an existing Binary Alignment Mapping (BAM) file, with a focus on targeted and exome sequencing experiments. As input, this tool requires a read alignment file (BAM format), lists of non-overlapping genome coordinates for introduction of gains and losses (bed file), and an optional file defining known haplotypes (vcf format). To improve runtime performance, Bamgineer introduces the desired CNVs in parallel using queuing and parallel processing on a local machine or on a high-performance computing cluster. As proof-of-principle, we applied Bamgineer to a single high-coverage (mean: 220X) exome sequence file from a blood sample to simulate copy number profiles of 3 exemplar tumors from each of 10 tumor types at 5 tumor cellularity levels (20-100%, 150 BAM files in total). To demonstrate feasibility beyond exome data, we introduced read alignments to a targeted 5-gene cell-free DNA sequencing library to simulate EGFR amplifications at frequencies consistent with circulating tumor DNA (10, 1, 0.1 and 0.01%) while retaining the multimodal insert size distribution of the original data. We expect Bamgineer to be of use for development and systematic benchmarking of CNV calling algorithms by users using locally-generated data for a variety of applications. The source code is freely available at http://github.com/pughlab/bamgineer. Author summary We present Bamgineer, a software program to introduce user-defined, haplotype-specific copy number variants (CNVs) at any frequency into standard Binary Alignment Mapping (BAM) files. Copy number gains are simulated by introducing new DNA sequencing read pairs sampled from existing reads and modified to contain SNPs of the haplotype of interest. This approach retains biases of the original data such as local coverage, strand bias, and insert size. Deletions are simulated by removing reads corresponding to one or both haplotypes. In our proof-of-principle study, we simulated copy number profiles from 10 cancer types at varying cellularity levels typically encountered in clinical samples. We also demonstrated introduction of low frequency CNVs into cell-free DNA sequencing data that retained the bimodal fragment size distribution characteristic of these data. 
Bamgineer is flexible and enables users to simulate CNVs that reflect characteristics of locally-generated sequence files and can be used for many applications including development and benchmarking of CNV inference tools for a variety of data types.", "which uses ?", "Bamgineer", 573.0, 582.0], ["Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements. These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. AVAILABILITY: QuIN\u2019s web server is available at http://quin.jax.org QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and MySQL database and the source code is available under the GPLV3 license available on GitHub: https://github.com/UcarLab/QuIN/.", "which uses ?", "GitHub", 1732.0, 1738.0], ["Background While the provision of gender affirming care for transgender people in South Africa is considered legal, ethical, and medically sound, and is\u2014theoretically\u2014available in both the South African private and public health sectors, access remains severely limited and unequal within the country. As there are no national policies or guidelines, little is known about how individual health care professionals providing gender affirming care make clinical decisions about eligibility and treatment options. Method Based on an initial policy review and service mapping, this study employed semi-structured interviews with a snowball sample of twelve health care providers, representing most providers currently providing gender affirming care in South Africa. Data were analysed thematically using NVivo, and are reported following COREQ guidelines. Results Our findings suggest that, whilst a small minority of health care providers offer gender affirming care, this is almost exclusively on their own initiative and is usually unsupported by wider structures and institutions. The ad hoc, discretionary nature of services means that access to care is dependent on whether a transgender person is fortunate enough to access a sympathetic and knowledgeable health care provider. 
Conclusion Accordingly, national, state-sanctioned guidelines for gender affirming care are necessary to increase access, homogenise quality of care, and contribute to equitable provision of gender affirming care in the public and private health systems.", "which uses ?", "NVivo", 801.0, 806.0], ["Evolution of cooperation and competition can appear when multiple adaptive agents share a biological, social, or technological niche. In the present work we study how cooperation and competition emerge between autonomous agents that learn by reinforcement while using only their raw visual input as the state representation. In particular, we extend the Deep Q-Learning framework to multiagent environments to investigate the interaction between two learning agents in the well-known video game Pong. By manipulating the classical rewarding scheme of Pong we show how competitive and collaborative behaviors emerge. We also describe the progression from competitive to collaborative behavior when the incentive to cooperate is increased. Finally we show how learning by playing against another adaptive agent, instead of against a hard-wired algorithm, results in more robust strategies. The present work shows that Deep Q-Networks can become a useful tool for studying decentralized learning of multiagent systems coping with high-dimensional environments.", "which uses ?", "Pong", 495.0, 499.0], ["The interest in the promotion of entrepreneurship is significantly increasing, particularly in those countries, such as Italy, that suffered during the recent great economic recession and subsequently needed to revitalize their economy. Entrepreneurial intention (EI) is a crucial stage in the entrepreneurial process and represents the basis for consequential entrepreneurial actions. Several research projects have sought to understand the antecedents of EI. This study, using a situational approach, has investigated the personal and contextual determinants of EI, exploring gender differences. In particular, the mediational role of general self-efficacy between internal locus of control (LoC), self-regulation, and support from family and friends, on the one hand, and EI, on the other hand, has been investigated. The study involved a sample of 658 Italian participants, of which 319 were male and 339 were female. Data were collected with a self-report on-line questionnaire and analysed with SPSS 23 and Mplus 7 to test a multi-group structural equation model. The results showed that self-efficacy totally mediated the relationship between internal LoC, self-regulation and EI. Moreover, it partially mediated the relationship between support from family and friends and EI. All the relations were significant for both men and women; however, our findings highlighted a stronger relationship between self-efficacy and EI for men, and between support from family and friends and both self-efficacy and EI for women. Findings highlighted the role of contextual characteristics in addition to personal ones in influencing EI and confirmed the key mediational function of self-efficacy. As for gender, results suggested that differences between men and women in relation to the entrepreneur role still exist. Practical implications for trainers and educators are discussed.", "which uses ?", "SPSS", 1001.0, 1005.0], ["Accurate mapping of next-generation sequencing (NGS) reads to reference genomes is crucial for almost all NGS applications and downstream analyses. 
Various repetitive elements in human and other higher eukaryotic genomes contribute in large part to ambiguously (non-uniquely) mapped reads. Most available NGS aligners attempt to address this by either removing all non-uniquely mapping reads, or reporting one random or \"best\" hit based on simple heuristics. Accurate estimation of the mapping quality of NGS reads is therefore critical albeit completely lacking at present. Here we developed a generalized software toolkit \"AlignerBoost\", which utilizes a Bayesian-based framework to accurately estimate mapping quality of ambiguously mapped NGS reads. We tested AlignerBoost with both simulated and real DNA-seq and RNA-seq datasets at various thresholds. In most cases, but especially for reads falling within repetitive regions, AlignerBoost dramatically increases the mapping precision of modern NGS aligners without significantly compromising the sensitivity even without mapping quality filters. When using higher mapping quality cutoffs, AlignerBoost achieves a much lower false mapping rate while exhibiting comparable or higher sensitivity compared to the aligner default modes, therefore significantly boosting the detection power of NGS aligners even using extreme thresholds. AlignerBoost is also SNP-aware, and higher quality alignments can be achieved if provided with known SNPs. AlignerBoost\u2019s algorithm is computationally efficient, and can process one million alignments within 30 seconds on a typical desktop computer. AlignerBoost is implemented as a uniform Java application and is freely available at https://github.com/Grice-Lab/AlignerBoost.", "which uses ?", "Java", 1680.0, 1684.0], ["Detecting similarities between ligand binding sites in the absence of global homology between target proteins has been recognized as one of the critical components of modern drug discovery. Local binding site alignments can be constructed using sequence order-independent techniques, however, to achieve a high accuracy, many current algorithms for binding site comparison require high-quality experimental protein structures, preferably in the bound conformational state. This, in turn, complicates proteome scale applications, where only various quality structure models are available for the majority of gene products. To improve the state-of-the-art, we developed eMatchSite, a new method for constructing sequence order-independent alignments of ligand binding sites in protein models. Large-scale benchmarking calculations using adenine-binding pockets in crystal structures demonstrate that eMatchSite generates accurate alignments for almost three times more protein pairs than SOIPPA. More importantly, eMatchSite offers a high tolerance to structural distortions in ligand binding regions in protein models. For example, the percentage of correctly aligned pairs of adenine-binding sites in weakly homologous protein models is only 4\u20139% lower than those aligned using crystal structures. This represents a significant improvement over other algorithms, e.g. the performance of eMatchSite in recognizing similar binding sites is 6% and 13% higher than that of SiteEngine using high- and moderate-quality protein models, respectively. Constructing biologically correct alignments using predicted ligand binding sites in protein models opens up the possibility to investigate drug-protein interaction networks for complete proteomes with prospective systems-level applications in polypharmacology and rational drug repositioning. 
eMatchSite is freely available to the academic community as a web-server and a stand-alone software distribution at http://www.brylinski.org/ematchsite.", "which uses ?", "eMatchSite", 668.0, 678.0], ["Nonribosomally and ribosomally synthesized bioactive peptides constitute a source of molecules of great biomedical importance, including antibiotics such as penicillin, immunosuppressants such as cyclosporine, and cytostatics such as bleomycin. Recently, an innovative mass-spectrometry-based strategy, peptidogenomics, has been pioneered to effectively mine microbial strains for novel peptidic metabolites. Even though mass-spectrometric peptide detection can be performed quite fast, true high-throughput natural product discovery approaches have still been limited by the inability to rapidly match the identified tandem mass spectra to the gene clusters responsible for the biosynthesis of the corresponding compounds. With Pep2Path, we introduce a software package to fully automate the peptidogenomics approach through the rapid Bayesian probabilistic matching of mass spectra to their corresponding biosynthetic gene clusters. Detailed benchmarking of the method shows that the approach is powerful enough to correctly identify gene clusters even in data sets that consist of hundreds of genomes, which also makes it possible to match compounds from unsequenced organisms to closely related biosynthetic gene clusters in other genomes. Applying Pep2Path to a data set of compounds without known biosynthesis routes, we were able to identify candidate gene clusters for the biosynthesis of five important compounds. Notably, one of these clusters was detected in a genome from a different subphylum of Proteobacteria than that in which the molecule had first been identified. All in all, our approach paves the way towards high-throughput discovery of novel peptidic natural products. Pep2Path is freely available from http://pep2path.sourceforge.net/, implemented in Python, licensed under the GNU General Public License v3 and supported on MS Windows, Linux and Mac OS X.", "which uses ?", "Windows", 1852.0, 1859.0], ["The quantification of cell shape, cell migration, and cell rearrangements is important for addressing classical questions in developmental biology such as patterning and tissue morphogenesis. Time-lapse microscopic imaging of transgenic embryos expressing fluorescent reporters is the method of choice for tracking morphogenetic changes and establishing cell lineages and fate maps in vivo. However, the manual steps involved in curating thousands of putative cell segmentations have been a major bottleneck in the application of these technologies especially for cell membranes. Segmentation of cell membranes while more difficult than nuclear segmentation is necessary for quantifying the relations between changes in cell morphology and morphogenesis. We present a novel and fully automated method to first reconstruct membrane signals and then segment out cells from 3D membrane images even in dense tissues. The approach has three stages: 1) detection of local membrane planes, 2) voting to fill structural gaps, and 3) region segmentation. We demonstrate the superior performance of the algorithms quantitatively on time-lapse confocal and two-photon images of zebrafish neuroectoderm and paraxial mesoderm by comparing its results with those derived from human inspection. 
We also compared with synthetic microscopic images generated by simulating the process of imaging with fluorescent reporters under varying conditions of noise. Both the over-segmentation and under-segmentation percentages of our method are around 5%. The volume overlap of individual cells, compared to expert manual segmentation, is consistently over 84%. By using our software (ACME) to study somite formation, we were able to segment touching cells with high accuracy and reliably quantify changes in morphogenetic parameters such as cell shape and size, and the arrangement of epithelial and mesenchymal cells. Our software has been developed and tested on Windows, Mac, and Linux platforms and is available publicly under an open source BSD license (https://github.com/krm15/ACME).", "which uses ?", "Windows", 1941.0, 1948.0], ["Surveys of 16S rDNA sequences from the honey bee, Apis mellifera, have revealed the presence of eight distinctive bacterial phylotypes in intestinal tracts of adult worker bees. Because previous studies have been limited to relatively few sequences from samples pooled from multiple hosts, the extent of variation in this microbiota among individuals within and between colonies and locations has been unclear. We surveyed the gut microbiota of 40 individual workers from two sites, Arizona and Maryland USA, sampling four colonies per site. Universal primers were used to amplify regions of 16S ribosomal RNA genes, and amplicons were sequenced using 454 pyrotag methods, enabling analysis of about 330,000 bacterial reads. Over 99% of these sequences belonged to clusters for which the first blastn hits in GenBank were members of the known bee phylotypes. Four phylotypes, one within Gammaproteobacteria (corresponding to \u201cCandidatus Gilliamella apicola\u201d) one within Betaproteobacteria (\u201cCandidatus Snodgrassella alvi\u201d), and two within Lactobacillus, were present in every bee, though their frequencies varied. The same typical bacterial phylotypes were present in all colonies and at both sites. Community profiles differed significantly among colonies and between sites, mostly due to the presence in some Arizona colonies of two species of Enterobacteriaceae not retrieved previously from bees. Analysis of Sanger sequences of rRNA of the Snodgrassella and Gilliamella phylotypes revealed that single bees contain numerous distinct strains of each phylotype. Strains showed some differentiation between localities, especially for the Snodgrassella phylotype.", "which uses ?", "Blastn", 794.0, 800.0], ["Live-cell imaging by light microscopy has demonstrated that all cells are spatially and temporally organized. Quantitative, computational image analysis is an important part of cellular imaging, providing both enriched information about individual cell properties and the ability to analyze large datasets. However, such studies are often limited by the small size and variable shape of objects of interest. Here, we address two outstanding problems in bacterial cell division by developing a generally applicable, standardized, and modular software suite termed Projected System of Internal Coordinates from Interpolated Contours (PSICIC) that solves common problems in image quantitation. PSICIC implements interpolated-contour analysis for accurate and precise determination of cell borders and automatically generates internal coordinate systems that are superimposable regardless of cell geometry. 
We have used PSICIC to establish that the cell-fate determinant, SpoIIE, is asymmetrically localized during Bacillus subtilis sporulation, thereby demonstrating the ability of PSICIC to discern protein localization features at sub-pixel scales. We also used PSICIC to examine the accuracy of cell division in Escherichia coli and found a new role for the Min system in regulating division-site placement throughout the cell length, but only prior to the initiation of cell constriction. These results extend our understanding of the regulation of both asymmetry and accuracy in bacterial division while demonstrating the general applicability of PSICIC as a computational approach for quantitative, high-throughput analysis of cellular images.", "which uses ?", "PSICIC", 632.0, 638.0], ["A calibrated computational model reflects behaviours that are expected or observed in a complex system, providing a baseline upon which sensitivity analysis techniques can be used to analyse pathways that may impact model responses. However, calibration of a model where a behaviour depends on an intervention introduced after a defined time point is difficult, as model responses may be dependent on the conditions at the time the intervention is applied. We present ASPASIA (Automated Simulation Parameter Alteration and SensItivity Analysis), a cross-platform, open-source Java toolkit that addresses a key deficiency in software tools for understanding the impact an intervention has on system behaviour for models specified in Systems Biology Markup Language (SBML). ASPASIA can generate and modify models using SBML solver output as an initial parameter set, allowing interventions to be applied once a steady state has been reached. Additionally, multiple SBML models can be generated where a subset of parameter values are perturbed using local and global sensitivity analysis techniques, revealing the model\u2019s sensitivity to the intervention. To illustrate the capabilities of ASPASIA, we demonstrate how this tool has generated novel hypotheses regarding the mechanisms by which Th17-cell plasticity may be controlled in vivo. By using ASPASIA in conjunction with an SBML model of Th17-cell polarisation, we predict that promotion of the Th1-associated transcription factor T-bet, rather than inhibition of the Th17-associated transcription factor ROR\u03b3t, is sufficient to drive switching of Th17 cells towards an IFN-\u03b3-producing phenotype. Our approach can be applied to all SBML-encoded models to predict the effect that intervention strategies have on system behaviour. ASPASIA, released under the Artistic License (2.0), can be downloaded from http://www.york.ac.uk/ycil/software.", "which uses ?", "Java", 576.0, 580.0], ["Modern DNA sequencing technologies enable geneticists to rapidly identify genetic variation among many human genomes. However, isolating the minority of variants underlying disease remains an important, yet formidable challenge for medical genetics. We have developed GEMINI (GEnome MINIng), a flexible software package for exploring all forms of human genetic variation. Unlike existing tools, GEMINI integrates genetic variation with a diverse and adaptable set of genome annotations (e.g., dbSNP, ENCODE, UCSC, ClinVar, KEGG) into a unified database to facilitate interpretation and data exploration. 
Whereas other methods provide an inflexible set of variant filters or prioritization methods, GEMINI allows researchers to compose complex queries based on sample genotypes, inheritance patterns, and both pre-installed and custom genome annotations. GEMINI also provides methods for ad hoc queries and data exploration, a simple programming interface for custom analyses that leverage the underlying database, and both command line and graphical tools for common analyses. We demonstrate GEMINI's utility for exploring variation in personal genomes and family based genetic studies, and illustrate its ability to scale to studies involving thousands of human samples. GEMINI is designed for reproducibility and flexibility and our goal is to provide researchers with a standard framework for medical genomics.", "which uses ?", "GEMINI", 268.0, 274.0], ["Objective Culture plays a significant role in determining family responsibilities and possibly influences the caregiver burden associated with providing care for a relative with dementia. This study was carried out to determine the elements of caregiver burden in Trinidadians regarding which interventions will provide the most benefit. Methods Seventy-five caregivers of patients diagnosed with dementia participated in this investigation. Demographic data were recorded for each caregiver and patient. Caregiver burden was assessed using the Zarit Burden Interview (ZBI), and the General Health Questionnaire (GHQ) was used as a measure of psychiatric morbidity. Statistical analyses were performed using Stata and SPSS software. Associations between individual ZBI items and GHQ-28 scores in caregivers were analyzed in logistic regression models; the above-median GHQ-28 scores were used a binary dependent variable, and individual ZBI item scores were entered as 5-point ordinal independent variables. Results The caregiver sample was composed of 61 females and 14 males. Caregiver burden was significantly associated with the participant being male; there was heterogeneity by ethnic group, and a higher burden on female caregivers was detected at borderline levels of significance. Upon examining the associations between different ZBI items and the above-median GHQ-28 scores in caregivers, the strongest associations were found with domains reflecting the caregiver\u2019s health having suffered, the caregiver not having sufficient time for him/herself, the caregiver\u2019s social life suffering, and the caregiver admitting to feeling stressed due to caregiving and meeting other responsibilities. Conclusions In this sample, with a majority of female caregivers, the factors of the person with dementia being male and belonging to a minority ethnic group were associated with a greater degree of caregiver burden. The information obtained through the association of individual ZBI items and above-median GHQ-28 scores is a helpful guide for profiling Trinidadian caregiver burden.", "which uses ?", "SPSS", 718.0, 722.0], ["Background Most of child mortality and under nutrition in developing world were attributed to suboptimal childcare and feeding, which needs detailed investigation beyond the proximal factors. This study was conducted with the aim of assessing associations of women\u2019s autonomy and men\u2019s involvement with child anthropometric indices in cash crop livelihood areas of South West Ethiopia. Methods Multi-stage stratified sampling was used to select 749 farming households living in three coffee producing sub-districts of Jimma zone, Ethiopia. 
Domains of women\u2019s Autonomy were measured by a tool adapted from demographic health survey. A model for determination of paternal involvement in childcare was employed. Caring practices were assessed through the WHO Infant and young child feeding practice core indicators. Length and weight measurements were taken in duplicate using standard techniques. Data were analyzed using SPSS for windows version 21. A multivariable linear regression was used to predict weight for height Z-scores and length for age Z-scores after adjusting for various factors. Results The mean (sd) scores of weight for age (WAZ), height for age (HAZ), weight for height (WHZ) and BMI for age (BAZ) was -0.52(1.26), -0.73(1.43), -0.13(1.34) and -0.1(1.39) respectively. The results of multi variable linear regression analyses showed that WHZ scores of children of mothers who had autonomy of conducting big purchase were higher by 0.42 compared to children's whose mothers had not. In addition, a child whose father was involved in childcare and feeding had higher HAZ score by 0.1. Regarding age, as for every month increase in age of child, a 0.04 point decrease in HAZ score and a 0.01 point decrease in WHZ were noted. Similarly, a child living in food insecure households had lower HAZ score by 0.29 compared to child of food secured households. As family size increased by a person a WHZ score of a child is decreased by 0.08. WHZ and HAZ scores of male child was found lower by 0.25 and 0.38 respectively compared to a female child of same age. Conclusion Women\u2019s autonomy and men\u2019s involvement appeared in tandem with better child anthropometric outcomes. Nutrition interventions in such setting should integrate enhancing women\u2019s autonomy over resource and men\u2019s involvement in childcare and feeding, in addition to food security measures.", "which uses ?", "SPSS", 920.0, 924.0], ["Background Heart Healthy Lenoir is a transdisciplinary project aimed at creating long-term, sustainable approaches to reduce cardiovascular disease risk disparities in Lenoir County, North Carolina using a design spanning genomic analysis and clinical intervention. We hypothesized that residents of Lenoir County would be unfamiliar and mistrustful of genomic research, and therefore reluctant to participate; additionally, these feelings would be higher in African-Americans. Methodology To test our hypothesis, we conducted qualitative research using community-based participatory research principles to ensure our genomic research strategies addressed the needs, priorities, and concerns of the community. African-American (n = 19) and White (n = 16) adults in Lenoir County participated in four focus groups exploring perceptions about genomics and cardiovascular disease. Demographic surveys were administered and a semi-structured interview guide was used to facilitate discussions. The discussions were digitally recorded, transcribed verbatim, and analyzed in ATLAS.ti. Results and Significance From our analysis, key themes emerged: transparent communication, privacy, participation incentives and barriers, knowledge, and the impact of knowing. African-Americans were more concerned about privacy and community impact compared to Whites, however, African-Americans were still eager to participate in our genomic research project. 
The results from our formative study were used to improve the informed consent and recruitment processes by: 1) reducing misconceptions of genomic studies; and 2) helping to foster participant understanding and trust with the researchers. Our study demonstrates how community-based participatory research principles can be used to gain deeper insight into the community and increase participation in genomic research studies. Due in part to these efforts 80.3% of eligible African-American participants and 86.9% of eligible White participants enrolled in the Heart Healthy Lenoir Genomics study making our overall enrollment 57.8% African-American. Future research will investigate return of genomic results in the Lenoir community.", "which uses ?", "ATLAS.ti", 1069.0, 1077.0], ["Introduction In Ethiopia, the burden of malaria during pregnancy remains a public health problem. Having a good malaria knowledge leads to practicing the prevention of malaria and seeking a health care. Researches regarding pregnant women\u2019s knowledge on malaria in Ethiopia is limited. So the aim of this study was to assess malaria knowledge and its associated factors among pregnant woman, 2018. Methods An institutional-based cross-sectional study was conducted in Adis Zemen Hospital. Data were collected using pre-tested, an interviewer-administered structured questionnaire among 236 mothers. Women\u2019s knowledge on malaria was measured using six malaria-related questions (cause of malaria, mode of transmission, signs and symptoms, complication and prevention of malaria). The collected data were entered using Epidata version 3.1 and exported to SPSS version 20 for analysis. Bivariate and multivariate logistic regressions were computed to identify predictor variables at 95% confidence interval. Variables having P value of <0.05 were considered as predictor variables of malaria knowledge. Result A total of 235 pregnant women participated which makes the response rate 99.6%. One hundred seventy two pregnant women (73.2%) of mothers had good knowledge on malaria. Women who were from urban (AOR; 2.4: CI; 1.8, 5.7), had better family monthly income (AOR; 3.4: CI; 2.7, 3.8), attended education (AOR; 1.8: CI; 1.4, 3.5) were more knowledgeable. Conclusion and recommendation Majority of participants had good knowledge on malaria. Educational status, household monthly income and residence were predictors of malaria knowledge. Increasing women\u2019s knowledge especially for those who are from rural, have no education, and have low monthly income is still needed.", "which uses ?", "SPSS", 852.0, 856.0], ["Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements. These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. 
Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. AVAILABILITY: QuIN\u2019s web server is available at http://quin.jax.org QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and MySQL database and the source code is available under the GPLV3 license available on GitHub: https://github.com/UcarLab/QuIN/.", "which uses ?", "MySQL", 1647.0, 1652.0], ["I introduce an open-source R package \u2018dcGOR\u2019 to provide the bioinformatics community with the ease to analyse ontologies and protein domain annotations, particularly those in the dcGO database. The dcGO is a comprehensive resource for protein domain annotations using a panel of ontologies including Gene Ontology. Although increasing in popularity, this database needs statistical and graphical support to meet its full potential. Moreover, there are no bioinformatics tools specifically designed for domain ontology analysis. As an add-on package built in the R software environment, dcGOR offers a basic infrastructure with great flexibility and functionality. It implements new data structure to represent domains, ontologies, annotations, and all analytical outputs as well. For each ontology, it provides various mining facilities, including: (i) domain-based enrichment analysis and visualisation; (ii) construction of a domain (semantic similarity) network according to ontology annotations; and (iii) significance analysis for estimating a contact (statistical significance) network. To reduce runtime, most analyses support high-performance parallel computing. Taking as inputs a list of protein domains of interest, the package is able to easily carry out in-depth analyses in terms of functional, phenotypic and diseased relevance, and network-level understanding. More importantly, dcGOR is designed to allow users to import and analyse their own ontologies and annotations on domains (taken from SCOP, Pfam and InterPro) and RNAs (from Rfam) as well. The package is freely available at CRAN for easy installation, and also at GitHub for version control. The dedicated website with reproducible demos can be found at http://supfam.org/dcGOR.", "which uses ?", "CRAN", 1600.0, 1604.0], ["This study considered all articles published in six Public Library of Science (PLOS) journals in 2012 and Web of Science citations for these articles as of May 2015. A total of 2,406 articles were analyzed to examine the relationships between Altmetric Attention Scores (AAS) and Web of Science citations. The AAS for an article, provided by Altmetric aggregates activities surrounding research outputs in social media (news outlet mentions, tweets, blogs, Wikipedia, etc.). Spearman correlation testing was done on all articles and articles with AAS. Further analysis compared the stratified datasets based on percentile ranks of AAS: top 50%, top 25%, top 10%, and top 1%. Comparisons across the six journals provided additional insights. 
The results show significant positive correlations between AAS and citations with varied strength for all articles and articles with AAS (or social media mentions), as well as for normalized AAS in the top 50%, top 25%, top 10%, and top 1% datasets. Four of the six PLOS journals, Genetics, Pathogens, Computational Biology, and Neglected Tropical Diseases, show significant positive correlations across all datasets. However, for the two journals with high impact factors, PLOS Biology and Medicine, the results are unexpected: the Medicine articles showed no significant correlations but the Biology articles tested positive for correlations with the whole dataset and the set with AAS. Both journals published substantially fewer articles than the other four journals. Further research to validate the AAS algorithm, adjust the weighting scheme, and include appropriate social media sources is needed to understand the potential uses and meaning of AAS in different contexts and its relationship to other metrics.", "which uses ?", "Altmetric", 243.0, 252.0], ["Transforming natural language questions into formal queries is an integral task in Question Answering (QA) systems. QA systems built on knowledge graphs like DBpedia, require a step after natural language processing for linking words, specifically including named entities and relations, to their corresponding entities in a knowledge graph. To achieve this task, several approaches rely on background knowledge bases containing semantically-typed relations, e.g., PATTY, for an extra disambiguation step. Two major factors may affect the performance of relation linking approaches whenever background knowledge bases are accessed: a) limited availability of such semantic knowledge sources, and b) lack of a systematic approach on how to maximize the benefits of the collected knowledge. We tackle this problem and devise SIBKB, a semantic-based index able to capture knowledge encoded on background knowledge bases like PATTY. SIBKB represents a background knowledge base as a bi-partite and a dynamic index over the relation patterns included in the knowledge base. Moreover, we develop a relation linking component able to exploit SIBKB features. The benefits of SIBKB are empirically studied on existing QA benchmarks and observed results suggest that SIBKB is able to enhance the accuracy of relation linking by up to three times.", "which uses ?", "PATTY", 465.0, 470.0], ["Nonribosomally and ribosomally synthesized bioactive peptides constitute a source of molecules of great biomedical importance, including antibiotics such as penicillin, immunosuppressants such as cyclosporine, and cytostatics such as bleomycin. Recently, an innovative mass-spectrometry-based strategy, peptidogenomics, has been pioneered to effectively mine microbial strains for novel peptidic metabolites. Even though mass-spectrometric peptide detection can be performed quite fast, true high-throughput natural product discovery approaches have still been limited by the inability to rapidly match the identified tandem mass spectra to the gene clusters responsible for the biosynthesis of the corresponding compounds. With Pep2Path, we introduce a software package to fully automate the peptidogenomics approach through the rapid Bayesian probabilistic matching of mass spectra to their corresponding biosynthetic gene clusters. 
Detailed benchmarking of the method shows that the approach is powerful enough to correctly identify gene clusters even in data sets that consist of hundreds of genomes, which also makes it possible to match compounds from unsequenced organisms to closely related biosynthetic gene clusters in other genomes. Applying Pep2Path to a data set of compounds without known biosynthesis routes, we were able to identify candidate gene clusters for the biosynthesis of five important compounds. Notably, one of these clusters was detected in a genome from a different subphylum of Proteobacteria than that in which the molecule had first been identified. All in all, our approach paves the way towards high-throughput discovery of novel peptidic natural products. Pep2Path is freely available from http://pep2path.sourceforge.net/, implemented in Python, licensed under the GNU General Public License v3 and supported on MS Windows, Linux and Mac OS X.", "which uses ?", "Linux", 1861.0, 1866.0], ["Objective Culture plays a significant role in determining family responsibilities and possibly influences the caregiver burden associated with providing care for a relative with dementia. This study was carried out to determine the elements of caregiver burden in Trinidadians regarding which interventions will provide the most benefit. Methods Seventy-five caregivers of patients diagnosed with dementia participated in this investigation. Demographic data were recorded for each caregiver and patient. Caregiver burden was assessed using the Zarit Burden Interview (ZBI), and the General Health Questionnaire (GHQ) was used as a measure of psychiatric morbidity. Statistical analyses were performed using Stata and SPSS software. Associations between individual ZBI items and GHQ-28 scores in caregivers were analyzed in logistic regression models; the above-median GHQ-28 scores were used a binary dependent variable, and individual ZBI item scores were entered as 5-point ordinal independent variables. Results The caregiver sample was composed of 61 females and 14 males. Caregiver burden was significantly associated with the participant being male; there was heterogeneity by ethnic group, and a higher burden on female caregivers was detected at borderline levels of significance. Upon examining the associations between different ZBI items and the above-median GHQ-28 scores in caregivers, the strongest associations were found with domains reflecting the caregiver\u2019s health having suffered, the caregiver not having sufficient time for him/herself, the caregiver\u2019s social life suffering, and the caregiver admitting to feeling stressed due to caregiving and meeting other responsibilities. Conclusions In this sample, with a majority of female caregivers, the factors of the person with dementia being male and belonging to a minority ethnic group were associated with a greater degree of caregiver burden. The information obtained through the association of individual ZBI items and above-median GHQ-28 scores is a helpful guide for profiling Trinidadian caregiver burden.", "which uses ?", "Stata", 708.0, 713.0], ["Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements. 
These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. AVAILABILITY: QuIN\u2019s web server is available at http://quin.jax.org QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and MySQL database and the source code is available under the GPLV3 license available on GitHub: https://github.com/UcarLab/QuIN/.", "which uses ?", "Tomcat", 1625.0, 1631.0], ["Abstract Somatic copy number variations (CNVs) play a crucial role in development of many human cancers. The broad availability of next-generation sequencing data has enabled the development of algorithms to computationally infer CNV profiles from a variety of data types including exome and targeted sequence data; currently the most prevalent types of cancer genomics data. However, systemic evaluation and comparison of these tools remains challenging due to a lack of ground truth reference sets. To address this need, we have developed Bamgineer, a tool written in Python to introduce user-defined haplotype-phased allele-specific copy number events into an existing Binary Alignment Mapping (BAM) file, with a focus on targeted and exome sequencing experiments. As input, this tool requires a read alignment file (BAM format), lists of non-overlapping genome coordinates for introduction of gains and losses (bed file), and an optional file defining known haplotypes (vcf format). To improve runtime performance, Bamgineer introduces the desired CNVs in parallel using queuing and parallel processing on a local machine or on a high-performance computing cluster. As proof-of-principle, we applied Bamgineer to a single high-coverage (mean: 220X) exome sequence file from a blood sample to simulate copy number profiles of 3 exemplar tumors from each of 10 tumor types at 5 tumor cellularity levels (20-100%, 150 BAM files in total). To demonstrate feasibility beyond exome data, we introduced read alignments to a targeted 5-gene cell-free DNA sequencing library to simulate EGFR amplifications at frequencies consistent with circulating tumor DNA (10, 1, 0.1 and 0.01%) while retaining the multimodal insert size distribution of the original data. We expect Bamgineer to be of use for development and systematic benchmarking of CNV calling algorithms by users using locally-generated data for a variety of applications. 
The source code is freely available at http://github.com/pughlab/bamgineer.Author summaryWe present Bamgineer, a software program to introduce user-defined, haplotype-specific copy number variants (CNVs) at any frequency into standard Binary Alignment Mapping (BAM) files. Copy number gains are simulated by introducing new DNA sequencing read pairs sampled from existing reads and modified to contain SNPs of the haplotype of interest. This approach retains biases of the original data such as local coverage, strand bias, and insert size. Deletions are simulated by removing reads corresponding to one or both haplotypes. In our proof-of-principle study, we simulated copy number profiles from 10 cancer types at varying cellularity levels typically encountered in clinical samples. We also demonstrated introduction of low frequency CNVs into cell-free DNA sequencing data that retained the bimodal fragment size distribution characteristic of these data. Bamgineer is flexible and enables users to simulate CNVs that reflect characteristics of locally-generated sequence files and can be used for many applications including development and benchmarking of CNV inference tools for a variety of data types.", "which uses ?", "Python", 602.0, 608.0], ["Structural and functional brain connectivity are increasingly used to identify and analyze group differences in studies of brain disease. This study presents methods to analyze uni- and bi-modal brain connectivity and evaluate their ability to identify differences. Novel visualizations of significantly different connections comparing multiple metrics are presented. On the global level, \u201cbi-modal comparison plots\u201d show the distribution of uni- and bi-modal group differences and the relationship between structure and function. Differences between brain lobes are visualized using \u201cworm plots\u201d. Group differences in connections are examined with an existing visualization, the \u201cconnectogram\u201d. These visualizations were evaluated in two proof-of-concept studies: (1) middle-aged versus elderly subjects; and (2) patients with schizophrenia versus controls. Each included two measures derived from diffusion weighted images and two from functional magnetic resonance images. The structural measures were minimum cost path between two anatomical regions according to the \u201cStatistical Analysis of Minimum cost path based Structural Connectivity\u201d method and the average fractional anisotropy along the fiber. The functional measures were Pearson\u2019s correlation and partial correlation of mean regional time series. The relationship between structure and function was similar in both studies. Uni-modal group differences varied greatly between connectivity types. Group differences were identified in both studies globally, within brain lobes and between regions. In the aging study, minimum cost path was highly effective in identifying group differences on all levels; fractional anisotropy and mean correlation showed smaller differences on the brain lobe and regional levels. In the schizophrenia study, minimum cost path and fractional anisotropy showed differences on the global level and within brain lobes; mean correlation showed small differences on the lobe level. Only fractional anisotropy and mean correlation showed regional differences. 
The presented visualizations were helpful in comparing and evaluating connectivity measures on multiple levels in both studies.", "which uses ?", "Statistical Analysis of Minimum cost path based Structural Connectivity", 1072.0, 1143.0], ["Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.", "which Used models ?", "MobileNet", 1242.0, 1251.0], ["Machine learning is becoming an increasingly popular approach for investigating spatially distributed and subtle neuroanatomical alterations in brain\u2010based disorders. However, some machine learning models have been criticized for requiring a large number of cases in each experimental group, and for resembling a \u201cblack box\u201d that provides little or no insight into the nature of the data. In this article, we propose an alternative conceptual and practical approach for investigating brain\u2010based disorders which aim to overcome these limitations. We used an artificial neural network known as \u201cdeep autoencoder\u201d to create a normative model using structural magnetic resonance imaging data from 1,113 healthy people. We then used this model to estimate total and regional neuroanatomical deviation in individual patients with schizophrenia and autism spectrum disorder using two independent data sets (n = 263). We report that the model was able to generate different values of total neuroanatomical deviation for each disease under investigation relative to their control group (p < .005). Furthermore, the model revealed distinct patterns of neuroanatomical deviations for the two diseases, consistent with the existing neuroimaging literature. We conclude that the deep autoencoder provides a flexible and promising framework for assessing total and regional neuroanatomical deviations in neuropsychiatric populations.", "which Used models ?", "Autoencoder", 599.0, 610.0], ["Leukemia is a fatal cancer and has two main types: Acute and chronic. Each type has two more subtypes: Lymphoid and myeloid. Hence, in total, there are four subtypes of leukemia. 
This study proposes a new approach for diagnosis of all subtypes of leukemia from microscopic blood cell images using convolutional neural networks (CNN), which requires a large training data set. Therefore, we also investigated the effects of data augmentation for an increasing number of training samples synthetically. We used two publicly available leukemia data sources: ALL-IDB and ASH Image Bank. Next, we applied seven different image transformation techniques as data augmentation. We designed a CNN architecture capable of recognizing all subtypes of leukemia. Besides, we also explored other well-known machine learning algorithms such as naive Bayes, support vector machine, k-nearest neighbor, and decision tree. To evaluate our approach, we set up a set of experiments and used 5-fold cross-validation. The results we obtained from experiments showed that our CNN model performance has 88.25% and 81.74% accuracy, in leukemia versus healthy and multi-class classification of all subtypes, respectively. Finally, we also showed that the CNN model has a better performance than other well-known machine learning algorithms.", "which Used models ?", "naive Bayes", 829.0, 840.0], ["Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.", "which Used models ?", "DenseNet-121", 1228.0, 1240.0], ["An increasing number of people suffering from mental health conditions resort to online resources (specialized websites, social media, etc.) to share their feelings. Early depression detection using social media data through deep learning models can help to change life trajectories and save lives. But the accuracy of these models was not satisfying due to the real-world imbalanced data distributions. To tackle this problem, we propose a deep learning model (X-A-BiLSTM) for depression detection in imbalanced social media data. The X-A-BiLSTM model consists of two essential components: the first one is XGBoost, which is used to reduce data imbalance; and the second one is an Attention-BiLSTM neural network, which enhances classification capacity. 
The Reddit Self-reported Depression Diagnosis (RSDD) dataset was chosen, which included approximately 9,000 users who claimed to have been diagnosed with depression (\u201ddiagnosed users and approximately 107,000 matched control users. Results demonstrate that our approach significantly outperforms the previous state-of-the-art models on the RSDD dataset.", "which Used models ?", "LSTM", NaN, NaN], ["In the diagnosis of mental health disorder, a large portion of the Bipolar Disorder (BD) patients is likely to be misdiagnosed as Unipolar Depression (UD) on initial presentation. As speech is the most natural way to express emotion, this work focuses on tracking emotion profile of elicited speech for short-term mood disorder identification. In this work, the Deep Scattering Spectrum (DSS) and Low Level Descriptors (LLDs) of the elicited speech signals are extracted as the speech features. The hierarchical spectral clustering (HSC) algorithm is employed to adapt the emotion database to the mood disorder database to alleviate the data bias problem. The denoising autoencoder is then used to extract the bottleneck features of DSS and LLDs for better representation. Based on the bottleneck features, a long short term memory (LSTM) is applied to generate the time-varying emotion profile sequence. Finally, given the emotion profile sequence, the HMM-based identification and verification model is used to determine mood disorder. This work collected the elicited emotional speech data from 15 BDs, 15 UDs and 15 healthy controls for system training and evaluation. Five-fold cross validation was employed for evaluation. Experimental results show that the system using the bottleneck feature achieved an identification accuracy of 73.33%, improving by 8.89%, compared to that without bottleneck features. Furthermore, the system with verification mechanism, improving by 4.44%, outperformed that without verification.", "which Used models ?", "Autoencoder", 670.0, 681.0], [" BACKGROUND

Statistical predictions are useful to predict events based on statistical models. The data is useful to determine outcomes based on inputs and calculations. The Crow-AMSAA method will be explored to predict new cases of Coronavirus 19 (COVID19). This method is currently used within engineering reliability design to predict failures and evaluate the reliability growth. The author intents to use this model to predict the COVID19 cases by using daily reported data from Michigan, New York City, U.S.A and other countries. The piece wise Crow-AMSAA (CA) model fits the data very well for the infected cases and deaths at different phases during the start of the COVID19 outbreak. The slope \u03b2 of the Crow-AMSAA line indicates the speed of the transmission or death rate. The traditional epidemiological model is based on the exponential distribution, but the Crow-AMSAA is the Non Homogeneous Poisson Process (NHPP) which can be used to modeling the complex problem like COVID19, especially when the various mitigation strategies such as social distance, isolation and locking down were implemented by the government at different places.

OBJECTIVE

This paper is to use piece wise Crow-AMSAA method to fit the COVID19 confirmed cases in Michigan, New York City, U.S.A and other countries.

METHODS

piece wise Crow-AMSAA method to fit the COVID19 confirmed cases

RESULTS

From the Crow-AMSAA analysis above, at the beginning of the COVID 19, the infectious cases did not follow the Crow-AMSAA prediction line, but during the outbreak start, the confirmed cases does follow the CA line, the slope \u03b2 value indicates the pace of the transmission rate or death rate in each case. The piece wise Crow-AMSAA describes the different phases of spreading. This indicates the speed of the transmission rate could change according to the government interference, social distance order or other factors. Comparing the piece wise CA \u03b2 slopes (\u03b2: 1.683-- 0.834--0.092) in China and in U.S.A (\u03b2:5.138--10.48--5.259), the speed of infectious rate in U.S.A is much higher than the infectious rate in China. From the piece wise CA plots and summary table 1 of the CA slope \u03b2s, the COVID19 spreading has the different behavior at different places and countries where the government implemented the different policy to slow down the spreading.

CONCLUSIONS

From the analysis of data and conclusions from confirmed cases and deaths of COVID 19 in Michigan, New York city, U.S.A, China and other countries, the piece wise Crow-AMSAA method can be used to modeling the spreading of COVID19.

", "which Used models ?", "Crow-AMSAA ", 196.0, 207.0], ["Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.", "which Used models ?", "Xception", 1210.0, 1218.0], ["In the diagnosis of mental health disorder, a large portion of the Bipolar Disorder (BD) patients is likely to be misdiagnosed as Unipolar Depression (UD) on initial presentation. As speech is the most natural way to express emotion, this work focuses on tracking emotion profile of elicited speech for short-term mood disorder identification. In this work, the Deep Scattering Spectrum (DSS) and Low Level Descriptors (LLDs) of the elicited speech signals are extracted as the speech features. The hierarchical spectral clustering (HSC) algorithm is employed to adapt the emotion database to the mood disorder database to alleviate the data bias problem. The denoising autoencoder is then used to extract the bottleneck features of DSS and LLDs for better representation. Based on the bottleneck features, a long short term memory (LSTM) is applied to generate the time-varying emotion profile sequence. Finally, given the emotion profile sequence, the HMM-based identification and verification model is used to determine mood disorder. This work collected the elicited emotional speech data from 15 BDs, 15 UDs and 15 healthy controls for system training and evaluation. Five-fold cross validation was employed for evaluation. Experimental results show that the system using the bottleneck feature achieved an identification accuracy of 73.33%, improving by 8.89%, compared to that without bottleneck features. Furthermore, the system with verification mechanism, improving by 4.44%, outperformed that without verification.", "which Used models ?", "LSTM", 833.0, 837.0], ["State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing. 
In this paper, we introduce a novel neutral network architecture that benefits from both word- and character-level representations automatically, by using combination of bidirectional LSTM, CNN and CRF. Our system is truly end-to-end, requiring no feature engineering or data pre-processing, thus making it applicable to a wide range of sequence labeling tasks. We evaluate our system on two data sets for two sequence labeling tasks --- Penn Treebank WSJ corpus for part-of-speech (POS) tagging and CoNLL 2003 corpus for named entity recognition (NER). We obtain state-of-the-art performance on both the two data --- 97.55\\% accuracy for POS tagging and 91.21\\% F1 for NER.", "which model ?", "CRF", 366.0, 369.0], ["This paper presents work on a method to detect names of proteins in running text. Our system - Yapex - uses a combination of lexical and syntactic knowledge, heuristic filters and a local dynamic dictionary. The syntactic information given by a general-purpose off-the-shelf parser supports the correct identification of the boundaries of protein names, and the local dynamic dictionary finds protein names in positions incompletely analysed by the parser. We present the different steps involved in our approach to protein tagging, and show how combinations of them influence recall and precision. We evaluate the system on a corpus of MEDLINE abstracts and compare it with the KeX system (Fukuda et al., 1998) along four different notions of correctness.", "which model ?", "Yapex", 95.0, 100.0], ["We introduce a multi-task setup of identifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks and develop a unified framework called SciIE with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.", "which model ?", "SciIE", 230.0, 235.0], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to devel op information extraction ( IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fr agment- or mutant-protein relations). While one corpus targets protein-protein interactions ( PPIs), the focus of other is on tissue expressions ( TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Other resources ?", "NCBI Taxonomy", 444.0, 457.0], ["The mentions of human health perturbations such as the diseases and adverse effects denote a special entity class in the biomedical literature. They help in understanding the underlying risk factors and develop a preventive rationale. 
The recognition of these named entities in texts through dictionary-based approaches relies on the availability of appropriate terminological resources. Although few resources are publicly available, not all are suitable for the text mining needs. Therefore, this work provides an overview of the well known resources with respect to human diseases and adverse effects such as the MeSH, MedDRA, ICD-10, SNOMED CT, and UMLS. Individual dictionaries are generated from these resources and their performance in recognizing the named entities is evaluated over a manually annotated corpus. In addition, the steps for curating the dictionaries, rule-based acronym disambiguation and their impact on the dictionary performance is discussed. The results show that the MedDRA and UMLS achieve the best recall. Besides this, MedDRA provides an additional benefit of achieving a higher precision. The combination of search results of all the dictionaries achieve a considerably high recall. The corpus is available on http://www.scai.fraunhofer.de/disease-ae-corpus.html", "which Other resources ?", "MeSH", 616.0, 620.0], ["Automatic extraction of biological network information is one of the most desired and most complex tasks in biological and medical text mining. Track 4 at BioCreative V attempts to approach this complexity using fragments of large-scale manually curated biological networks, represented in Biological Expression Language (BEL), as training and test data. BEL is an advanced knowledge representation format which has been designed to be both human readable and machine processable. The specific goal of track 4 was to evaluate text mining systems capable of automatically constructing BEL statements from given evidence text, and of retrieving evidence text for given BEL statements. Given the complexity of the task, we designed an evaluation methodology which gives credit to partially correct statements. We identified various levels of information expressed by BEL statements, such as entities, functions, relations, and introduced an evaluation framework which rewards systems capable of delivering useful BEL fragments at each of these levels. The aim of this evaluation method is to help identify the characteristics of the systems which, if combined, would be most useful for achieving the overall goal of automatically constructing causal biological networks from text.", "which Other resources ?", "Biological Expression Language (BEL)", NaN, NaN], ["We present a database of annotated biomedical text corpora merged into a portable data structure with uniform conventions. MedTag combines three corpora, MedPost, ABGene and GENETAG, within a common relational database data model. The GENETAG corpus has been modified to reflect new definitions of genes and proteins. The MedPost corpus has been updated to include 1,000 additional sentences from the clinical medicine domain. All data have been updated with original MEDLINE text excerpts, PubMed identifiers, and tokenization independence to facilitate data accuracy, consistency and usability. The data are available in flat files along with software to facilitate loading the data into a relational SQL database from ftp://ftp.ncbi.nlm.nih.gov/pub/lsmith/MedTag/medtag.tar.gz.", "which Other resources ?", "GENETAG", 174.0, 181.0], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to devel op information extraction ( IE) tools for the TXM project. 
Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fr agment- or mutant-protein relations). While one corpus targets protein-protein interactions ( PPIs), the focus of other is on tissue expressions ( TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Other resources ?", "MeSH", 489.0, 493.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Data ?", "lexical, syntactic, or pragmatic features", 610.0, 651.0], ["ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. 
CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean\u00b1SD of 2.59\u00b10.28mg/L, 2.63\u00b10.32mg/L and 2.67\u00b10.23mg/L at 0 week to 3.55\u00b10.44mg/L, 3.59\u00b10.34mg/L and 3.6\u00b10.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean\u00b1SD of 2.93\u00b10.33mg/L compared to control group\u2019s 4.42\u00b10.30mg/L and gemfibrozil group\u2019s 4.28\u00b10.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).", "which Data ?", "4.42\u00b10.30mg/L", 1308.0, 1321.0], ["ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean\u00b1SD of 2.59\u00b10.28mg/L, 2.63\u00b10.32mg/L and 2.67\u00b10.23mg/L at 0 week to 3.55\u00b10.44mg/L, 3.59\u00b10.34mg/L and 3.6\u00b10.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean\u00b1SD of 2.93\u00b10.33mg/L compared to control group\u2019s 4.42\u00b10.30mg/L and gemfibrozil group\u2019s 4.28\u00b10.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).", "which Data ?", "mean\u00b1SD of 2.93\u00b10.33mg/L", 1255.0, 1279.0], ["The Logical Observation Identifiers, Names and Codes (LOINC) is a common terminology used for standardizing laboratory terms. Within the consortium of the HiGHmed project, LOINC is one of the central terminologies used for health data sharing across all university sites. Therefore, linking the LOINC codes to the site-specific tests and measures is one crucial step to reach this goal. 
In this work we report our ongoing efforts in implementing LOINC to our laboratory information system and research infrastructure, as well as our challenges and the lessons learned. 407 local terms could be mapped to 376 LOINC codes of which 209 are already available to routine laboratory data. In our experience, mapping of local terms to LOINC is a widely manual and time consuming process for reasons of language and expert knowledge of local laboratory procedures.", "which Data ?", "The Logical Observation Identifiers, Names and Codes (LOINC)", NaN, NaN], ["While Wikipedia exists in 287 languages, its content is unevenly distributed among them. In this work, we investigate the generation of open domain Wikipedia summaries in underserved languages using structured data from Wikidata. To this end, we propose a neural network architecture equipped with copy actions that learns to generate single-sentence and comprehensible textual summaries from Wikidata triples. We demonstrate the effectiveness of the proposed approach by evaluating it against a set of baselines on two languages of different natures: Arabic, a morphological rich language with a larger vocabulary than English, and Esperanto, a constructed language known for its easy acquisition.", "which Data ?", "open domain Wikipedia summaries", 136.0, 167.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Data ?", "the structural consistency and correctness", 1011.0, 1053.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). 
In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Data ?", "the structural consistency and correctness", 1011.0, 1053.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Data ?", "highest scores", 1106.0, 1120.0], ["The Logical Observation Identifiers, Names and Codes (LOINC) is a common terminology used for standardizing laboratory terms. Within the consortium of the HiGHmed project, LOINC is one of the central terminologies used for health data sharing across all university sites. Therefore, linking the LOINC codes to the site-specific tests and measures is one crucial step to reach this goal. In this work we report our ongoing efforts in implementing LOINC to our laboratory information system and research infrastructure, as well as our challenges and the lessons learned. 407 local terms could be mapped to 376 LOINC codes of which 209 are already available to routine laboratory data. In our experience, mapping of local terms to LOINC is a widely manual and time consuming process for reasons of language and expert knowledge of local laboratory procedures.", "which Data ?", "407 local terms", 569.0, 584.0], ["We show that easily accessible digital records of behavior, Facebook Likes, can be used to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender. The analysis presented is based on a dataset of over 58,000 volunteers who provided their Facebook Likes, detailed demographic profiles, and the results of several psychometric tests. 
The proposed model uses dimensionality reduction for preprocessing the Likes data, which are then entered into logistic/linear regression to predict individual psychodemographic profiles from Likes. The model correctly discriminates between homosexual and heterosexual men in 88% of cases, African Americans and Caucasian Americans in 95% of cases, and between Democrat and Republican in 85% of cases. For the personality trait \u201cOpenness,\u201d prediction accuracy is close to the test\u2013retest accuracy of a standard personality test. We give examples of associations between attributes and Likes and discuss implications for online personalization and privacy.", "which Data ?", "Dataset of over 58,000 volunteers who provided their Facebook Likes, detailed demographic profiles, and the results of several psychometric tests.", NaN, NaN], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Data ?", "prior publication count", 385.0, 408.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. 
The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Data ?", "15000 sarcastic and 25000 non-sarcastic messages", 1420.0, 1468.0], ["Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.", "which Data ?", "published articles or monographs", 195.0, 227.0], ["Depression is a typical mood disorder, which affects people in mental and even physical problems. People who suffer depression always behave abnormal in visual behavior and the voice. In this paper, an audio visual based multimodal depression scale prediction system is proposed. Firstly, features are extracted from video and audio are fused in feature level to represent the audio visual behavior. Secondly, long short memory recurrent neural network (LSTM-RNN) is utilized to encode the dynamic temporal information of the abnormal audio visual behavior. Thirdly, emotion information is utilized by multi-task learning to boost the performance further. The proposed approach is evaluated on the Audio-Visual Emotion Challenge (AVEC2014) dataset. Experiments results show the dimensional emotion recognition helps to depression scale prediction.", "which Data ?", "Voice", 177.0, 182.0], ["Abstract Presently, analytics degree programs exhibit a growing trend to meet a strong market demand. To explore the skill sets required for analytics positions, the authors examined a sample of online job postings related to professions such as business analyst (BA), business intelligence analyst (BIA), data analyst (DA), and data scientist (DS) using content analysis. 
They present a ranked list of relevant skills belonging to specific skills categories for the studied positions. Also, they conducted a pairwise comparison between DA and DS as well as BA and BIA. Overall, the authors observed that decision making, organization, communication, and structured data management are key to all job categories. The analysis shows that technical skills like statistics and programming skills are in most demand for DAs. The analysis is useful for creating clear definitions with respect to required skills for job categories in the business and data analytics domain and for designing course curricula for this domain.", "which Data ?", "sample of online job postings", 185.0, 214.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Data ?", "contemporary application-oriented fine-grained aspects", 272.0, 326.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. 
Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Data ?", "the structural consistency and correctness", 1011.0, 1053.0], ["ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean\u00b1SD of 2.59\u00b10.28mg/L, 2.63\u00b10.32mg/L and 2.67\u00b10.23mg/L at 0 week to 3.55\u00b10.44mg/L, 3.59\u00b10.34mg/L and 3.6\u00b10.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean\u00b1SD of 2.93\u00b10.33mg/L compared to control group\u2019s 4.42\u00b10.30mg/L and gemfibrozil group\u2019s 4.28\u00b10.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).", "which Data ?", "10mg/kg body weight", 630.0, 649.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). 
In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.", "which Data ?", "participants, tasks and building data", 213.0, 250.0], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Data ?", "reference/citation counts", 664.0, 689.0], ["CDR (Call Detail Record) data are one type of mobile phone data collected by operators each time a user initiates/receives a phone call or sends/receives an sms. CDR data are a rich geo-referenced source of user behaviour information. In this work, we perform an analysis of CDR data for the city of Milan that originate from Telecom Italia Big Data Challenge. A set of graphs is generated from aggregated CDR data, where each node represents a centroid of an RBS (Radio Base Station) polygon, and each edge represents aggregated telecom traffic between two RBSs. To explore the community structure, we apply a modularity-based algorithm. Community structure between days is highly dynamic, with variations in number, size and spatial distribution. One general rule observed is that communities formed over the urban core of the city are small in size and prone to dynamic change in spatial distribution, while communities formed in the suburban areas are larger in size and more consistent with respect to their spatial distribution. To evaluate the dynamics of change in community structure between days, we introduced different graph based and spatial community properties which contain latent footprint of human dynamics. We created land use profiles for each RBS polygon based on the Copernicus Land Monitoring Service Urban Atlas data set to quantify the correlation and predictivennes of human dynamics properties based on land use. The results reveal a strong correlation between some properties and land use which motivated us to further explore this topic. 
The proposed methodology has been implemented in the programming language Scala inside the Apache Spark engine to support the most computationally intensive tasks and in Python using the rich portfolio of data analytics and machine learning libraries for the less demanding tasks.", "which Data ?", " Copernicus Land Monitoring Service Urban Atlas", 1288.0, 1335.0], ["In this paper, we aim to develop a deep learning based automatic Attention Deficit Hyperactive Disorder (ADHD) diagnosis algorithm using resting state functional magnetic resonance imaging (rs-fMRI) scans. However, relative to millions of parameters in deep neural networks (DNN), the number of fMRI samples is still limited to learn discriminative features from the raw data. In light of this, we first encode our prior knowledge on 3D features voxel-wisely, including Regional Homogeneity (ReHo), fractional Amplitude of Low Frequency Fluctuations (fALFF) and Voxel-Mirrored Homotopic Connectivity (VMHC), and take these 3D images as the input to the DNN. Inspired by the way that radiologists examine brain images, we further investigate a novel 3D convolutional neural network (CNN) architecture to learn 3D local patterns which may boost the diagnosis accuracy. Investigation on the hold-out testing data of the ADHD-200 Global competition demonstrates that the proposed 3D CNN approach yields superior performances when compared to the reported classifiers in the literature, even with less training samples.", "which Data ?", "fMRI", 193.0, 197.0], ["While Wikipedia exists in 287 languages, its content is unevenly distributed among them. In this work, we investigate the generation of open domain Wikipedia summaries in underserved languages using structured data from Wikidata. To this end, we propose a neural network architecture equipped with copy actions that learns to generate single-sentence and comprehensible textual summaries from Wikidata triples. We demonstrate the effectiveness of the proposed approach by evaluating it against a set of baselines on two languages of different natures: Arabic, a morphological rich language with a larger vocabulary than English, and Esperanto, a constructed language known for its easy acquisition.", "which Data ?", "single-sentence and comprehensible textual summaries from Wikidata", 335.0, 401.0], ["Hackathons have become an increasingly popular approach for organizations to both test their new products and services as well as to generate new ideas. Most events either focus on attracting external developers or requesting employees of the organization to focus on a specific problem. In this paper we describe extensions to this paradigm that open up the event to internal employees and preserve the open-ended nature of the hackathon itself. In this paper we describe our initial motivation and objectives for conducting an internal hackathon, our experience in pioneering an internal hackathon at AT&T including specific things we did to make the internal hackathon successful. We conclude with the benefits (both expected and unexpected) we achieved from the internal hackathon approach, and recommendations for continuing the use of this valuable tool within AT&T.", "which Data ?", "benefits (both expected and unexpected)", NaN, NaN], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. 
Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Data ?", "1.6 million", 192.0, 203.0], ["Purpose \u2013 The purpose of this paper is to investigate what employers seek when recruiting library and information professionals in the UK and whether professional skills, generic skills or personal qualities are most in demand.Design/methodology/approach \u2013 A content analysis of a sample of 180 advertisements requiring a professional library or information qualification from Chartered Institute of Library and Information Professional's Library + Information Gazette over the period May 2006\u20102007.Findings \u2013 The findings reveal that a multitude of skills and qualities are required in the profession. When the results were compared with Information National Training Organisation and Library and Information Management Employability Skills research, customer service, interpersonal and communication skills, and general computing skills emerged as the requirements most frequently sought by employers. Overall, requirements from the generic skills area were most important to employers, but the research also demonstra...", "which Data ?", "sample of 180 advertisements", 281.0, 309.0], ["ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean\u00b1SD of 2.59\u00b10.28mg/L, 2.63\u00b10.32mg/L and 2.67\u00b10.23mg/L at 0 week to 3.55\u00b10.44mg/L, 3.59\u00b10.34mg/L and 3.6\u00b10.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. 
Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean\u00b1SD of 2.93\u00b10.33mg/L compared to control group\u2019s 4.42\u00b10.30mg/L and gemfibrozil group\u2019s 4.28\u00b10.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).", "which Data ?", "pioglitazone 10mg/kg body weight", 617.0, 649.0], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Data ?", "language", 709.0, 717.0], ["It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.", "which Data ?", "pattern properties", 779.0, 797.0], ["A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. 
The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.", "which Data ?", "93.71%", NaN, NaN], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Data ?", "publication type", 691.0, 707.0], ["It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. 
We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.", "which Data ?", "affiliation, ethnicity, collaboration size", 586.0, 628.0], ["Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W\u00b7sr(-1)\u00b7m(-2) at a current density of 232 mA\u00b7cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 \u00b1 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.", "which Data ?", "current density of 232 mA\u00b7cm(-2)", NaN, NaN], ["ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean\u00b1SD of 2.59\u00b10.28mg/L, 2.63\u00b10.32mg/L and 2.67\u00b10.23mg/L at 0 week to 3.55\u00b10.44mg/L, 3.59\u00b10.34mg/L and 3.6\u00b10.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. 
Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean\u00b1SD of 2.93\u00b10.33mg/L compared to control group\u2019s 4.42\u00b10.30mg/L and gemfibrozil group\u2019s 4.28\u00b10.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).", "which Data ?", "zero, 4th and 8th week", 778.0, 800.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Data ?", "feasibility", 1357.0, 1368.0], ["Africa has over 2000 languages. Despite this, African languages account for a small portion of available resources and publications in Natural Language Processing (NLP). This is due to multiple factors, including: a lack of focus from government and funding, discoverability, a lack of community, sheer language complexity, difficulty in reproducing papers and no benchmarks to compare techniques. To begin to address the identified problems, MASAKHANE, an open-source, continent-wide, distributed, online research effort for machine translation for African languages, was founded. In this paper, we discuss our methodology for building the community and spurring research from the African continent, as well as outline the success of the community in terms of addressing the identified problems affecting African NLP.", "which Data ?", "sheer language complexity", 297.0, 322.0], ["It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. 
Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.", "which Data ?", "ontology-driven conceptual model patterns", 652.0, 693.0], ["ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean\u00b1SD of 2.59\u00b10.28mg/L, 2.63\u00b10.32mg/L and 2.67\u00b10.23mg/L at 0 week to 3.55\u00b10.44mg/L, 3.59\u00b10.34mg/L and 3.6\u00b10.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean\u00b1SD of 2.93\u00b10.33mg/L compared to control group\u2019s 4.42\u00b10.30mg/L and gemfibrozil group\u2019s 4.28\u00b10.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).", "which Data ?", "4.28\u00b10.39mg/L.", NaN, NaN], ["We present a framework to synthesize character movements based on high level parameters, such that the produced movements respect the manifold of human motion, trained on a large motion capture dataset. The learned motion manifold, which is represented by the hidden units of a convolutional autoencoder, represents motion data in sparse components which can be combined to produce a wide range of complex movements. To map from high level parameters to the motion manifold, we stack a deep feedforward neural network on top of the trained autoencoder. This network is trained to produce realistic motion sequences from parameters such as a curve over the terrain that the character should follow, or a target location for punching and kicking. 
The feedforward control network and the motion manifold are trained independently, allowing the user to easily switch between feedforward networks according to the desired interface, without re-training the motion manifold. Once motion is generated it can be edited by performing optimization in the space of the motion manifold. This allows for imposing kinematic constraints, or transforming the style of the motion, while ensuring the edited motion remains natural. As a result, the system can produce smooth, high quality motion sequences without any manual pre-processing of the training data.", "which Has evaluation ?", "Autoencoder", 292.0, 303.0], ["Nowadays, the enormous volume of health and fitness data gathered from IoT wearable devices offers favourable opportunities to the research community. For instance, it can be exploited using sophisticated data analysis techniques, such as automatic reasoning, to find patterns and, extract information and new knowledge in order to enhance decision-making and deliver better healthcare. However, due to the high heterogeneity of data representation formats, the IoT healthcare landscape is characterised by an ubiquitous presence of data silos which prevents users and clinicians from obtaining a consistent representation of the whole knowledge. Semantic web technologies, such as ontologies and inference rules, have been shown as a promising way for the integration and exploitation of data from heterogeneous sources. In this paper, we present a semantic data model useful to: (1) consistently represent health and fitness data from heterogeneous IoT sources; (2) integrate and exchange them; and (3) enable automatic reasoning by inference engines.", "which Has evaluation ?", "automatic reasoning by inference engines", 1012.0, 1052.0], ["Representation learning is a critical ingredient for natural language processing systems. Recent Transformer language models like BERT learn powerful textual representations, but these models are targeted towards token- and sentence-level training objectives and do not leverage information on inter-document relatedness, which limits their document-level representation power. For applications on scientific documents, such as classification and recommendation, accurate embeddings of documents are a necessity. We propose SPECTER, a new method to generate document-level embedding of scientific papers based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph. Unlike existing pretrained language models, Specter can be easily applied to downstream applications without task-specific fine-tuning. Additionally, to encourage further research on document-level models, we introduce SciDocs, a new evaluation benchmark consisting of seven document-level tasks ranging from citation prediction, to document classification and recommendation. We show that Specter outperforms a variety of competitive baselines on the benchmark.", "which Has evaluation ?", "Citation Prediction", 1035.0, 1054.0], ["Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. 
Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and \u201cdisorder\u201d and \u201ctransmembrane\u201d analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein\u2014binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.", "which Has evaluation ?", "codon mutation", 1901.0, 1915.0], ["Petroleum exploration and production in the Nigeria\u2019s Niger Delta region and export of oil and gas resources by the petroleum sector has substantially improved the nation\u2019s economy over the past five decades. However, activities associated with petroleum exploration, development and production operations have local detrimental and significant impacts on the atmosphere, soils and sediments, surface and groundwater, marine environment and terrestrial ecosystems in the Niger Delta. 
Discharges of petroleum hydrocarbon and petroleum\u2013derived waste streams have caused environmental pollution, adverse human health effects, socio\u2013economic problems and degradation of host communities in the 9 oil\u2013producing states in the Niger Delta region. Many approaches have been developed for the management of environmental impacts of petroleum production\u2013related activities and several environmental laws have been institutionalized to regulate the Nigerian petroleum industry. However, the existing statutory laws and regulations for environmental protection appear to be grossly inadequate and some of the multinational oil companies operating in the Niger Delta region have failed to adopt sustainable practices to prevent environmental pollution. This review examines the implications of multinational oil companies operations and further highlights some of the past and present environmental issues associated with petroleum exploitation and production in the Nigeria\u2019s Niger Delta. Although effective understanding of petroleum production and associated environmental degradation is importance for developing management strategies, there is a need for more multidisciplinary approaches for sustainable risk mitigation and effective environmental protection of the oil\u2013producing host communities in the Niger Delta.", "which Has evaluation ?", "Although effective understanding of petroleum production and associated environmental degradation is importance for developing management strategies, there is a need for more multidisciplinary approaches for sustainable risk mitigation and effective environmental protection of the oil\u2013producing host communities in the Niger Delta.", NaN, NaN], ["ABSTRACT This study examined the neural basis of processing high- and low-message sensation value (MSV) antidrug public service announcements (PSAs) in high (HSS) and low sensation seekers (LSS) using fMRI. HSS more strongly engaged the salience network when processing PSAs (versus LSS), suggesting that high-MSV PSAs attracted their attention. HSS and LSS participants who engaged higher level cognitive processing regions reported that the PSAs were more convincing and believable and recalled the PSAs better immediately after testing. In contrast, HSS and LSS participants who strongly engaged visual attention regions for viewing PSAs reported lower personal relevance. These findings provide neurobiological evidence that high-MSV content is salient to HSS, a primary target group for antidrug messages, and additional cognitive processing is associated with higher perceived message effectiveness.", "which Has method ?", "fMRI", 201.0, 205.0], ["Knowledge graph completion is still a challenging solution that uses techniques from distinct areas to solve many different tasks. Most recent works, which are based on embedding models, were conceived to improve an existing knowledge graph using the link prediction task. However, even considering the ability of these solutions to solve other tasks, they did not present results for data linking, for example. Furthermore, most of these works focuses only on structural information, i.e., the relations between entities. In this paper, we present an approach for data linking that enrich entity embeddings in a model with their literal information and that do not rely on external information of these entities. The key aspect of this proposal is that we use a blocking scheme to improve the effectiveness of the solution in relation to the use of literals. 
Thus, in addition to the literals from object elements in a triple, we use other literals from subjects and predicates. By merging entity embeddings with their literal information it is possible to extend many popular embedding models. Preliminary experiments were performed on real-world datasets and our solution showed competitive results to the performance of the task of data linking.", "which Has method ?", "Blocking", 763.0, 771.0], ["Abstract Background Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection can spread rapidly within skilled nursing facilities. After identification of a case of Covid-19 in a skilled nursing facility, we assessed transmission and evaluated the adequacy of symptom-based screening to identify infections in residents. Methods We conducted two serial point-prevalence surveys, 1 week apart, in which assenting residents of the facility underwent nasopharyngeal and oropharyngeal testing for SARS-CoV-2, including real-time reverse-transcriptase polymerase chain reaction (rRT-PCR), viral culture, and sequencing. Symptoms that had been present during the preceding 14 days were recorded. Asymptomatic residents who tested positive were reassessed 7 days later. Residents with SARS-CoV-2 infection were categorized as symptomatic with typical symptoms (fever, cough, or shortness of breath), symptomatic with only atypical symptoms, presymptomatic, or asymptomatic. Results Twenty-three days after the first positive test result in a resident at this skilled nursing facility, 57 of 89 residents (64%) tested positive for SARS-CoV-2. Among 76 residents who participated in point-prevalence surveys, 48 (63%) tested positive. Of these 48 residents, 27 (56%) were asymptomatic at the time of testing; 24 subsequently developed symptoms (median time to onset, 4 days). Samples from these 24 presymptomatic residents had a median rRT-PCR cycle threshold value of 23.1, and viable virus was recovered from 17 residents. As of April 3, of the 57 residents with SARS-CoV-2 infection, 11 had been hospitalized (3 in the intensive care unit) and 15 had died (mortality, 26%). Of the 34 residents whose specimens were sequenced, 27 (79%) had sequences that fit into two clusters with a difference of one nucleotide. Conclusions Rapid and widespread transmission of SARS-CoV-2 was demonstrated in this skilled nursing facility. More than half of residents with positive test results were asymptomatic at the time of testing and most likely contributed to transmission. Infection-control strategies focused solely on symptomatic residents were not sufficient to prevent transmission after SARS-CoV-2 introduction into this facility.", "which Has method ?", "sequencing", 617.0, 627.0], ["In order to improve the signal-to-noise ratio of the hyperspectral sensors and exploit the potential of satellite hyperspectral data for predicting soil properties, we took MingShui County as the study area, which the study area is approximately 1481 km2, and we selected Gaofen-5 (GF-5) satellite hyperspectral image of the study area to explore an applicable and accurate denoising method that can effectively improve the prediction accuracy of soil organic matter (SOM) content. First, fractional-order derivative (FOD) processing is performed on the original reflectance (OR) to evaluate the optimal FOD. Second, singular value decomposition (SVD), Fourier transform (FT) and discrete wavelet transform (DWT) are used to denoise the OR and optimal FOD reflectance. 
Third, the spectral indexes of the reflectance under different denoising methods are extracted by optimal band combination algorithm, and the input variables of different denoising methods are selected by the recursive feature elimination (RFE) algorithm. Finally, the SOM content is predicted by a random forest prediction model. The results reveal that 0.6-order reflectance describes more useful details in satellite hyperspectral data. Five spectral indexes extracted from the reflectance under different denoising methods have a strong correlation with the SOM content, which is helpful for realizing high-accuracy SOM predictions. All three denoising methods can reduce the noise in hyperspectral data, and the accuracies of the different denoising methods are ranked DWT > FT > SVD, where 0.6-order-DWT has the highest accuracy (R2 = 0.84, RMSE = 3.36 g kg\u22121, and RPIQ = 1.71). This paper is relatively novel, in that GF-5 satellite hyperspectral data based on different denoising methods are used to predict SOM, and the results provide a highly robust and novel method for mapping the spatial distribution of SOM content at the regional scale.", "which Has method ?", "fractional-order derivative (FOD)", NaN, NaN], ["Due to significant industrial demands toward software systems with increasing complexity and challenging quality requirements, software architecture design has become an important development activity and the research domain is rapidly evolving. In the last decades, software architecture optimization methods, which aim to automate the search for an optimal architecture design with respect to a (set of) quality attribute(s), have proliferated. However, the reported results are fragmented over different research communities, multiple system domains, and multiple quality attributes. To integrate the existing research results, we have performed a systematic literature review and analyzed the results of 188 research papers from the different research communities. Based on this survey, a taxonomy has been created which is used to classify the existing research. Furthermore, the systematic analysis of the research literature provided in this review aims to help the research community in consolidating the existing research efforts and deriving a research agenda for future developments.", "which Has method ?", "Systematic Literature Review", 651.0, 679.0], ["Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19\u2013related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. 
At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.", "which Has method ?", "remote online health hackathon", 375.0, 405.0], ["The activity of a host of antimicrobial peptides has been examined against a range of lipid bilayers mimicking bacterial and eukaryotic membranes. Despite this, the molecular mechanisms and the nature of the physicochemical properties underlying the peptide\u2013lipid interactions that lead to membrane disruption are yet to be fully elucidated. In this study, the interaction of the short antimicrobial peptide aurein 1.2 was examined in the presence of an anionic cardiolipin-containing lipid bilayer using molecular dynamics simulations. Aurein 1.2 is known to interact strongly with anionic lipid membranes. In the simulations, the binding of aurein 1.2 was associated with buckling of the lipid bilayer, the degree of which varied with the peptide concentration. The simulations suggest that the intrinsic properties of cardiolipin, especially the fact that it promotes negative membrane curvature, may help protect membranes against the action of peptides such as aurein 1.2 by counteracting the tendency of the peptide to induce positive curvature in target membranes.", "which Has method ?", "Molecular Dynamics Simulations", 505.0, 535.0], ["In order to improve the signal-to-noise ratio of the hyperspectral sensors and exploit the potential of satellite hyperspectral data for predicting soil properties, we took MingShui County as the study area, which the study area is approximately 1481 km2, and we selected Gaofen-5 (GF-5) satellite hyperspectral image of the study area to explore an applicable and accurate denoising method that can effectively improve the prediction accuracy of soil organic matter (SOM) content. 
First, fractional-order derivative (FOD) processing is performed on the original reflectance (OR) to evaluate the optimal FOD. Second, singular value decomposition (SVD), Fourier transform (FT) and discrete wavelet transform (DWT) are used to denoise the OR and optimal FOD reflectance. Third, the spectral indexes of the reflectance under different denoising methods are extracted by optimal band combination algorithm, and the input variables of different denoising methods are selected by the recursive feature elimination (RFE) algorithm. Finally, the SOM content is predicted by a random forest prediction model. The results reveal that 0.6-order reflectance describes more useful details in satellite hyperspectral data. Five spectral indexes extracted from the reflectance under different denoising methods have a strong correlation with the SOM content, which is helpful for realizing high-accuracy SOM predictions. All three denoising methods can reduce the noise in hyperspectral data, and the accuracies of the different denoising methods are ranked DWT > FT > SVD, where 0.6-order-DWT has the highest accuracy (R2 = 0.84, RMSE = 3.36 g kg\u22121, and RPIQ = 1.71). This paper is relatively novel, in that GF-5 satellite hyperspectral data based on different denoising methods are used to predict SOM, and the results provide a highly robust and novel method for mapping the spatial distribution of SOM content at the regional scale.", "which Has method ?", "singular value decomposition (SVD)", NaN, NaN], ["ABSTRACT Contested heritage has increasingly been studied by scholars over the last two decades in multiple disciplines, however, there is still limited knowledge about what contested heritage is and how it is realized in society. Therefore, the purpose of this paper is to produce a systematic literature review on this topic to provide a holistic understanding of contested heritage, and delineate its current state, trends and gaps. Methodologically, four electronic databases were searched, and 102 journal articles published before 2020 were extracted. A content analysis of each article was then conducted to identify key themes and variables for classification. Findings show that while its research often lacks theoretical underpinnings, contested heritage is marked by its diversity and complexity as it becomes a global issue for both tourism and urbanization. By presenting a holistic understanding of contested heritage, this review offers an extensive investigation of the topic area to help move literature pertaining contested heritage forward.", "which Has method ?", "systematic literature review", 284.0, 312.0], ["Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. 
A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.", "which Has method ?", "xpath", 973.0, 978.0], ["Iodine deficiency disorders (IDD) has been a major global public health problem threatening more than 2 billion people worldwide. Considering various human health implications associated with iodine deficiency, universal salt iodization programme has been recognized as one of the best methods of preventing iodine deficiency disorder and iodizing table salt is currently done in many countries. In this study, comparative assessment of iodine content of commercially available table salt brands in Nigerian market were investigated and iodine content were measured in ten table salt brands samples using iodometric titration. The iodine content ranged from 14.80 mg/kg \u2013 16.90 mg/kg with mean value of 15.90 mg/kg for Sea salt; 24.30 mg/kg \u2013 25.40 mg/kg with mean value of 24.60 mg/kg for Dangote salt (blue sachet); 22.10 mg/kg \u2013 23.10 mg/kg with mean value of 22.40 mg/kg for Dangote salt (red sachet); 23.30 mg/kg \u2013 24.30 mg/kg with mean value of 23.60 mg/kg for Mr Chef salt; 23.30 mg/kg \u2013 24.30 mg/kg with mean value of 23.60 mg/kg for Annapurna; 26.80 mg/kg \u2013 27.50 mg/kg with mean value of 27.20mg/kg for Uncle Palm salt; 23.30 mg/kg \u2013 29.60 mg/kg with mean content of 26.40 mg/kg for Dangote (bag); 25.40 mg/kg \u2013 26.50 mg/kg with mean value of 26.50 mg/kg for Royal salt; 36.80 mg/kg \u2013 37.20 mg/kg with mean iodine content of 37.0 mg/kg for Abakaliki refined salt, and 30.07 mg/kg \u2013 31.20 mg/kg with mean value of 31.00 mg/kg for Ikom refined salt. The mean iodine content measured in the Sea salt brand (15.70 mg/kg) was significantly P < 0.01 lower compared to those measured in other table salt brands. Although the iodine content of Abakaliki and Ikom refined salt exceed the recommended value, it is clear that only Sea salt brand falls below the World Health Organization (WHO) recommended value (20 \u2013 30 mg/kg), while the remaining table salt samples are just within the range. 
The results obtained have revealed that 70 % of the table salt brands were adequately iodized while 30 % of the table salt brands were not adequately iodized and provided baseline data that can be used for potential identification of human health risks associated with inadequate and/or excess iodine content in table salt brands consumed in households in Nigeria.", "which Has method ?", "Considering various human health implications associated with iodine deficiency, universal salt iodization programme has been recognized as one of the best methods of preventing iodine deficiency disorder and iodizing table salt is currently done in many countries. ", 130.0, 396.0], ["This paper discusses the potential of current advancements in Information Communication Technologies (ICT) for cultural heritage preservation, valorization and management within contemporary cities. The paper highlights the potential of virtual environments to assess the impacts of heritage policies on urban development. It does so by discussing the implications of virtual globes and crowdsourcing to support the participatory valuation and management of cultural heritage assets. To this purpose, a review of available valuation techniques is here presented together with a discussion on how these techniques might be coupled with ICT tools to promote inclusive governance.\u00a0", "which Has method ?", "review", 511.0, 517.0], ["Abstract Hackathons, time-bounded events where participants write computer code and build apps, have become a popular means of socializing tech students and workers to produce \u201cinnovation\u201d despite little promise of material reward. Although they offer participants opportunities for learning new skills and face-to-face networking and set up interaction rituals that create an emotional \u201chigh,\u201d potential advantage is even greater for the events\u2019 corporate sponsors, who use them to outsource work, crowdsource innovation, and enhance their reputation. Ethnographic observations and informal interviews at seven hackathons held in New York during the course of a single school year show how the format of the event and sponsors\u2019 discursive tropes, within a dominant cultural frame reflecting the appeal of Silicon Valley, reshape unpaid and precarious work as an extraordinary opportunity, a ritual of ecstatic labor, and a collective imaginary for fictional expectations of innovation that benefits all, a powerful strategy for manufacturing workers\u2019 consent in the \u201cnew\u201d economy.", "which Has method ?", "informal interviews", 583.0, 602.0], ["Reconstructing the energy landscape of a protein holds the key to characterizing its structural dynamics and function [1]. While the disparate spatio-temporal scales spanned by the slow dynamics challenge reconstruction in wet and dry laboratories, computational efforts have had recent success on proteins where a wealth of experimentally-known structures can be exploited to extract modes of motion. In [2], the authors propose the SoPriM method that extracts principal components (PCs) and utilizes them as variables of the structure space of interest. Stochastic optimization is employed to sample the structure space and its associated energy landscape in the defined variable space. We refer to this algorithm as SoPriM-PCA and compare it here to SoPriM-NMA, which investigates whether the landscape can be reconstructed with knowledge of modes of motion (normal modes) extracted from one single known structure. 
Some representative results are shown in Figure 1, where structures obtained by SoPriM-PCA and those obtained by SoPriM-NMA for the H-Ras enzyme are compared via color-coded projections onto the top two variables utilized by each algorithm. The results show that precious information can be obtained on the energy landscape even when one structural model is available. The presented work opens up interesting venues of research on structure-based inference of dynamics. Acknowledgment: This work is supported in part by NSF Grant No. 1421001 to AS and NSF Grant No. 1440581 to AS and EP. Computations were run on ARGO, a research computing cluster provided by the Office of Research Computing at George Mason University, VA (URL: http://orc.gmu.edu).", "which Has method ?", "SoPriM-NMA", 752.0, 762.0], ["During global health crises, such as the recent H1N1 pandemic, the mass media provide the public with timely information regarding risk. To obtain new insights into how these messages are received, we measured neural data while participants, who differed in their preexisting H1N1 risk perceptions, viewed a TV report about H1N1. Intersubject correlation (ISC) of neural time courses was used to assess how similarly the brains of viewers responded to the TV report. We found enhanced intersubject correlations among viewers with high-risk perception in the anterior cingulate, a region which classical fMRI studies associated with the appraisal of threatening information. By contrast, neural coupling in sensory-perceptual regions was similar for the high and low H1N1-risk perception groups. These results demonstrate a novel methodology for understanding how real-life health messages are processed in the human brain, with particular emphasis on the role of emotion and differences in risk perceptions.", "which Has approach ?", "fMRI", 603.0, 607.0], ["Interference between pharmacological substances can cause serious medical injuries. Correctly predicting so-called drug-drug interactions (DDI) does not only reduce these cases but can also result in a reduction of drug development cost. Presently, most drug-related knowledge is the result of clinical evaluations and post-marketing surveillance; resulting in a limited amount of information. Existing data-driven prediction approaches for DDIs typically rely on a single source of information, while using information from multiple sources would help improve predictions. Machine learning (ML) techniques are used, but the techniques are often unable to deal with skewness in the data. Hence, we propose a new ML approach for predicting DDIs based on multiple data sources. For this task, we use 12,000 drug features from DrugBank, PharmGKB, and KEGG drugs, which are integrated using Knowledge Graphs (KGs). To train our prediction model, we first embed the nodes in the graph using various embedding approaches. We found that the best performing combination was a ComplEx embedding method created using PyTorch-BigGraph (PBG) with a Convolutional-LSTM network and classic machine learning-based prediction models. The model averaging ensemble method of three best classifiers yields up to 0.94, 0.92, 0.80 for AUPR, F1-score, and MCC, respectively during 5-fold cross-validation tests.", "which Has approach ?", "Convolutional-LSTM Network", 1138.0, 1164.0], ["Query optimization in RDF Stores is a challenging problem as SPARQL queries typically contain many more joins than equivalent relational plans, and hence lead to a large join order search space. 
In such cases, cost-based query optimization often is not possible. One practical reason for this is that statistics typically are missing in web scale setting such as the Linked Open Datasets (LOD). The more profound reason is that due to the absence of schematic structure in RDF, join-hit ratio estimation requires complicated forms of correlated join statistics; and currently there are no methods to identify the relevant correlations beforehand. For this reason, the use of good heuristics is essential in SPARQL query optimization, even in the case that are partially used with cost-based statistics (i.e., hybrid query optimization). In this paper we describe a set of useful heuristics for SPARQL query optimizers. We present these in the context of a new Heuristic SPARQL Planner (HSP) that is capable of exploiting the syntactic and the structural variations of the triple patterns in a SPARQL query in order to choose an execution plan without the need of any cost model. For this, we define the variable graph and we show a reduction of the SPARQL query optimization problem to the maximum weight independent set problem. We implemented our planner on top of the MonetDB open source column-store and evaluated its effectiveness against the state-of-the-art RDF-3X engine as well as comparing the plan quality with a relational (SQL) equivalent of the benchmarks.", "which Has approach ?", "SPARQL", 61.0, 67.0], ["With the proliferation of the RDF data format, engines for RDF query processing are faced with very large graphs that contain hundreds of millions of RDF triples. This paper addresses the resulting scalability problems. Recent prior work along these lines has focused on indexing and other physical-design issues. The current paper focuses on join processing, as the fine-grained and schema-relaxed use of RDF often entails star- and chain-shaped join queries with many input streams from index scans. We present two contributions for scalable join processing. First, we develop very light-weight methods for sideways information passing between separate joins at query run-time, to provide highly effective filters on the input streams of joins. Second, we improve previously proposed algorithms for join-order optimization by more accurate selectivity estimations for very large RDF graphs. Experimental studies with several RDF datasets, including the UniProt collection, demonstrate the performance gains of our approach, outperforming the previously fastest systems by more than an order of magnitude.", "which Has approach ?", "Very large RDF Graphs", 870.0, 891.0], ["We introduce a multi-task setup of identifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks and develop a unified framework called SciIE with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.", "which Dataset name ?", "SciERC", 127.0, 133.0], ["This paper presents the formal release of {\\em MedMentions}, a new manually annotated resource for the recognition of biomedical concepts. 
What distinguishes MedMentions from other annotated biomedical corpora is its size (over 4,000 abstracts and over 350,000 linked mentions), as well as the size of the concept ontology (over 3 million concepts from UMLS 2017) and its broad coverage of biomedical disciplines. In addition to the full corpus, a sub-corpus of MedMentions is also presented, comprising annotations for a subset of UMLS 2017 targeted towards document retrieval. To encourage research in Biomedical Named Entity Recognition and Linking, data splits for training and testing are included in the release, and a baseline model and its metrics for entity linking are also described.", "which Dataset name ?", "MedMentions", 47.0, 58.0], ["We present LatinISE, a Latin corpus for the Sketch Engine. LatinISE consists of Latin works comprising a total of 13 million words, covering the time span from the 2nd century B.C. to the 21st century A.D. LatinISE is provided with rich metadata mark-up, including author, title, genre, era, date and century, as well as book, section, paragraph and line of verses. We have automatically annotated LatinISE with lemma and part-of-speech information. The annotation enables the users to search the corpus with a number of criteria, ranging from lemma, part-of-speech, context, to subcorpora defined chronologically or by genre. We also illustrate word sketches, one-page summaries of a word\u2019s corpus-based collocational behaviour. Our future plan is to produce word sketches for Latin words by adding richer morphological and syntactic annotation to the corpus.", "which Dataset name ?", "LatinISE", 11.0, 19.0], ["MOTIVATION Natural language processing (NLP) methods are regarded as being useful to raise the potential of text mining from biological literature. The lack of an extensively annotated corpus of this literature, however, causes a major bottleneck for applying NLP techniques. GENIA corpus is being developed to provide reference materials to let NLP techniques work for bio-textmining. RESULTS GENIA corpus version 3.0 consisting of 2000 MEDLINE abstracts has been released with more than 400,000 words and almost 100,000 annotations for biological terms.", "which Dataset name ?", "GENIA corpus", 276.0, 288.0], ["Extracting information from full documents is an important problem in many domains, but most previous work focuses on identifying relationships within a sentence or a paragraph. It is challenging to create a large-scale information extraction (IE) dataset at the document level since it requires an understanding of the whole document to annotate entities and their document-level relationships that usually span beyond sentences or even sections. In this paper, we introduce SciREX, a document level IE dataset that encompasses multiple IE tasks, including salient entity identification and document level N-ary relation identification from scientific articles. We annotate our dataset by integrating automatic and human annotations, leveraging existing scientific knowledge resources. We develop a neural model as a strong baseline that extends previous state-of-the-art IE models to document-level IE. Analyzing the model performance shows a significant gap between human performance and current baselines, inviting the community to use our dataset as a challenge to develop document-level IE models. 
Our data and code are publicly available at https://github.com/allenai/SciREX.", "which Dataset name ?", "SciREX", 474.0, 480.0], ["Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci (Software Mentions in Science), a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: \u03ba=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.", "which Dataset name ?", "SoMeSci", 454.0, 461.0], ["This paper introduces the ACL Reference Dataset for Terminology Extraction and Classification, version 2.0 (ACL RD-TEC 2.0). The ACL RD-TEC 2.0 has been developed with the aim of providing a benchmark for the evaluation of term and entity recognition tasks based on specialised text from the computational linguistics domain. This release of the corpus consists of 300 abstracts from articles in the ACL Anthology Reference Corpus, published between 1978\u20132006. In these abstracts, terms (i.e., single or multi-word lexical units with a specialised meaning) are manually annotated. In addition to their boundaries in running text, annotated terms are classified into one of the seven categories method, tool, language resource (LR), LR product, model, measures and measurements, and other. To assess the quality of the annotations and to determine the difficulty of this annotation task, more than 171 of the abstracts are annotated twice, independently, by each of the two annotators. In total, 6,818 terms are identified and annotated in more than 1300 sentences, resulting in a specialised vocabulary made of 3,318 lexical forms, mapped to 3,471 concepts. We explain the development of the annotation guidelines and discuss some of the challenges we encountered in this annotation task.", "which Dataset name ?", "ACL RD-TEC 2.0", 108.0, 122.0], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). 
While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Concept types ?", "Tissue", 392.0, 398.0], ["This paper introduces the ACL Reference Dataset for Terminology Extraction and Classification, version 2.0 (ACL RD-TEC 2.0). The ACL RD-TEC 2.0 has been developed with the aim of providing a benchmark for the evaluation of term and entity recognition tasks based on specialised text from the computational linguistics domain. This release of the corpus consists of 300 abstracts from articles in the ACL Anthology Reference Corpus, published between 1978\u20132006. In these abstracts, terms (i.e., single or multi-word lexical units with a specialised meaning) are manually annotated. In addition to their boundaries in running text, annotated terms are classified into one of the seven categories method, tool, language resource (LR), LR product, model, measures and measurements, and other. To assess the quality of the annotations and to determine the difficulty of this annotation task, more than 171 of the abstracts are annotated twice, independently, by each of the two annotators. In total, 6,818 terms are identified and annotated in more than 1300 sentences, resulting in a specialised vocabulary made of 3,318 lexical forms, mapped to 3,471 concepts. We explain the development of the annotation guidelines and discuss some of the challenges we encountered in this annotation task.", "which Concept types ?", "Language Resource", 708.0, 725.0], ["The text-mining services for kinome curation track, part of BioCreative VI, proposed a competition to assess the effectiveness of text mining to perform literature triage. The track has exploited an unpublished curated data set from the neXtProt database. This data set contained comprehensive annotations for 300 human protein kinases. For a given protein and a given curation axis [diseases or gene ontology (GO) biological processes], participants\u2019 systems had to identify and rank relevant articles in a collection of 5.2 M MEDLINE citations (task 1) or 530 000 full-text articles (task 2). Explored strategies comprised named-entity recognition and machine-learning frameworks. For that latter approach, participants developed methods to derive a set of negative instances, as the databases typically do not store articles that were judged as irrelevant by curators. The supervised approaches proposed by the participating groups achieved significant improvements compared to the baseline established in a previous study and compared to a basic PubMed search.", "which Concept types ?", "human protein kinase", NaN, NaN], ["We describe the SemEval task of extracting keyphrases and relations between them from scientific documents, which is crucial for understanding which publications describe which processes, tasks and materials. Although this was a new task, we had a total of 26 submissions across 3 evaluation scenarios. 
We expect the task and the findings reported in this paper to be relevant for researchers working on understanding scientific content, as well as the broader knowledge base population and information extraction communities.", "which Concept types ?", "Material", NaN, NaN], ["With the information overload in the genome-related field, there is an increasing need for natural language processing technology to extract information from literature, and various attempts at information extraction using NLP have been made. We are developing the necessary resources including domain ontology and annotated corpus from research abstracts in the MEDLINE database (GENIA corpus). We are building the ontology and the corpus simultaneously, using each other. In this paper we report on our new corpus, its ontological basis, annotation scheme, and statistics of annotated objects. We also describe the tools used for corpus annotation and management.", "which Concept types ?", "Other", 463.0, 468.0], ["This paper introduces the ACL Reference Dataset for Terminology Extraction and Classification, version 2.0 (ACL RD-TEC 2.0). The ACL RD-TEC 2.0 has been developed with the aim of providing a benchmark for the evaluation of term and entity recognition tasks based on specialised text from the computational linguistics domain. This release of the corpus consists of 300 abstracts from articles in the ACL Anthology Reference Corpus, published between 1978\u20132006. In these abstracts, terms (i.e., single or multi-word lexical units with a specialised meaning) are manually annotated. In addition to their boundaries in running text, annotated terms are classified into one of the seven categories method, tool, language resource (LR), LR product, model, measures and measurements, and other. To assess the quality of the annotations and to determine the difficulty of this annotation task, more than 171 of the abstracts are annotated twice, independently, by each of the two annotators. In total, 6,818 terms are identified and annotated in more than 1300 sentences, resulting in a specialised vocabulary made of 3,318 lexical forms, mapped to 3,471 concepts. We explain the development of the annotation guidelines and discuss some of the challenges we encountered in this annotation task.", "which Concept types ?", "Other", 782.0, 787.0], ["A crucial step toward the goal of automatic extraction of propositional information from natural language text is the identification of semantic relations between constituents in sentences. We examine the problem of distinguishing among seven relation types that can occur between the entities "treatment" and "disease" in bioscience text, and the problem of identifying such entities. We compare five generative graphical models and a neural network, using lexical, syntactic, and semantic features, finding that the latter help achieve high classification accuracy.", "which Concept types ?", "Treatment", 295.0, 304.0], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. 
Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Concept types ?", "Disease", 277.0, 284.0], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Concept types ?", "mRNAcDNA", 365.0, 373.0], ["A considerable effort has been made to extract biological and chemical entities, as well as their relationships, from the scientific literature, either manually through traditional literature curation or by using information extraction and text mining technologies. Medicinal chemistry patents contain a wealth of information, for instance, to uncover potential biomarkers that might play a role in cancer treatment and prognosis. However, current biomedical annotation databases do not cover such information, partly due to limitations of publicly available biomedical patent mining software. As part of the BioCreative V CHEMDNER patents track, we present the results of the first named entity recognition (NER) assignment carried out to detect mentions of chemical compounds and genes/proteins in running patent text. More specifically, this task aimed to evaluate the performance of automatic name recognition strategies capable of isolating chemical names and gene and gene product mentions from surrounding text within patent titles and abstracts. A total of 22 unique teams submitted results for at least one of the three CHEMDNER subtasks. The first subtask, called the CEMP (chemical entity mention in patents) task, focused on the detection of chemical named entity mentions in patents, requesting teams to return the start and end indices corresponding to all the chemical entities found in a given record. A total of 21 teams submitted 93 runs for this subtask. The top performing team reached an f-measure of 0.89 with a precision of 0.87 and a recall of 0.91. The CPD (chemical passage detection) task required the classification of patent titles and abstracts according to whether they do or do not contain chemical compound mentions. Nine teams returned predictions for this task (40 runs). 
The top run in terms of Matthew\u2019s correlation coefficient (MCC) had a score of 0.88, the highest sensitivity.", "which Concept types ?", "Chemical compounds", 758.0, 776.0], ["We present a method for characterizing a research work in terms of its focus, domain of application, and techniques used. We show how tracing these aspects over time provides a novel measure of the influence of research communities on each other. We extract these characteristics by matching semantic extraction patterns, learned using bootstrapping, to the dependency trees of sentences in an article\u2019s", "which Concept types ?", "Focus", 71.0, 76.0], ["This paper presents the Bacteria Biotope task of the BioNLP Shared Task 2016, which follows the previous 2013 and 2011 editions. The task focuses on the extraction of the locations (biotopes and geographical places) of bacteria from PubMed abstracts and the characterization of bacteria and their associated habitats with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on bacteria habitats for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, the challenge organization, and the evaluation metrics. We also provide an analysis of the results obtained by participants.", "which Concept types ?", "Habitat", NaN, NaN], ["We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).", "which Concept types ?", "Gene", 345.0, 349.0], ["While the fast-paced inception of novel tasks and new datasets helps foster active research in a community towards interesting directions, keeping track of the abundance of research activity in different areas on different datasets is likely to become increasingly difficult. The community could greatly benefit from an automatic system able to summarize scientific results, e.g., in the form of a leaderboard. In this paper we build two datasets and develop a framework (TDMS-IE) aimed at automatically extracting task, dataset, metric and score from NLP papers, towards the automatic construction of leaderboards. Experiments show that our model outperforms several baselines by a large margin. Our model is a first step towards automatic leaderboard construction, e.g., in the NLP domain.", "which Concept types ?", "Metric", 530.0, 536.0], ["One of the biomedical entity types of relevance for medicine or biosciences is chemical compounds and drugs. 
The correct detection of these entities is critical for other text mining applications building on them, such as adverse drug-reaction detection, medication-related fake news or drug-target extraction. Although a significant effort was made to detect mentions of drugs/chemicals in English texts, so far only very limited attempts were made to recognize them in medical documents in other languages. Taking into account the growing amount of medical publications and clinical records written in Spanish, we have organized the first shared task on detecting drug and chemical entities in Spanish medical documents. Additionally, we included a clinical concept-indexing sub-track asking teams to return SNOMED-CT identifiers related to drugs/chemicals for a collection of documents. For this task, named PharmaCoNER, we generated annotation guidelines together with a corpus of 1,000 manually annotated clinical case studies. A total of 22 teams participated in sub-track 1 (77 system runs), and 7 teams in sub-track 2 (19 system runs). Top scoring teams used sophisticated deep learning approaches yielding very competitive results with F-measures above 0.91. These results indicate that there is a real interest in promoting biomedical text mining efforts beyond English. We foresee that the PharmaCoNER annotation guidelines, corpus and participant systems will foster the development of new resources for clinical and biomedical text mining systems of Spanish medical data.", "which Concept types ?", "chemical", 80.0, 88.0], ["This paper presents the fourth edition of the Bacteria Biotope task at BioNLP Open Shared Tasks 2019. The task focuses on the extraction of the locations and phenotypes of microorganisms from PubMed abstracts and full-text excerpts, and the characterization of these entities with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on biodiversity for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, and the challenge organization. We also provide an analysis of the results obtained by participants, and inspect the evolution of the results since the last edition in 2016.", "which Concept types ?", "Microorganism", NaN, NaN], ["This paper presents work on a method to detect names of proteins in running text. Our system - Yapex - uses a combination of lexical and syntactic knowledge, heuristic filters and a local dynamic dictionary. The syntactic information given by a general-purpose off-the-shelf parser supports the correct identification of the boundaries of protein names, and the local dynamic dictionary finds protein names in positions incompletely analysed by the parser. We present the different steps involved in our approach to protein tagging, and show how combinations of them influence recall and precision. We evaluate the system on a corpus of MEDLINE abstracts and compare it with the KeX system (Fukuda et al., 1998) along four different notions of correctness.", "which Concept types ?", "Protein", 339.0, 346.0], ["BioC is a simple XML format for text, annotations and relations, and was developed to achieve interoperability for biomedical text processing. Following the success of BioC in BioCreative IV, the BioCreative V BioC track addressed a collaborative task to build an assistant system for BioGRID curation. 
In this paper, we describe the framework of the collaborative BioC task and discuss our findings based on the user survey. This track consisted of eight subtasks including gene/protein/organism named entity recognition, protein\u2013protein/genetic interaction passage identification and annotation visualization. Using BioC as their data-sharing and communication medium, nine teams, world-wide, participated and contributed either new methods or improvements of existing tools to address different subtasks of the BioC track. Results from different teams were shared in BioC and made available to other teams as they addressed different subtasks of the track. In the end, all submitted runs were merged using a machine learning classifier to produce an optimized output. The biocurator assistant system was evaluated by four BioGRID curators in terms of practical usability. The curators\u2019 feedback was overall positive and highlighted the user-friendly design and the convenient gene/protein curation tool based on text mining. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-1-bioc/", "which Concept types ?", "gene", 475.0, 479.0], ["The text-mining services for kinome curation track, part of BioCreative VI, proposed a competition to assess the effectiveness of text mining to perform literature triage. The track has exploited an unpublished curated data set from the neXtProt database. This data set contained comprehensive annotations for 300 human protein kinases. For a given protein and a given curation axis [diseases or gene ontology (GO) biological processes], participants\u2019 systems had to identify and rank relevant articles in a collection of 5.2 M MEDLINE citations (task 1) or 530 000 full-text articles (task 2). Explored strategies comprised named-entity recognition and machine-learning frameworks. For that latter approach, participants developed methods to derive a set of negative instances, as the databases typically do not store articles that were judged as irrelevant by curators. The supervised approaches proposed by the participating groups achieved significant improvements compared to the baseline established in a previous study and compared to a basic PubMed search.", "which data source ?", "MEDLINE", 537.0, 544.0], ["A linguistically annotated corpus based on texts in the biomedical domain has been constructed to tune natural language processing (NLP) tools for biotextmining. As the focus of information extraction is shifting from "nominal" information such as named entity to "verbal" information such as function and interaction of substances, application of parsers has become one of the key technologies and thus the corpus annotated for syntactic structure of sentences is in demand. A subset of the GENIA corpus consisting of 500 MEDLINE abstracts has been annotated for syntactic structure in an XML-based format based on the Penn Treebank II (PTB) scheme. An inter-annotator agreement test indicated that the writing style rather than the contents of the research abstracts is the source of the difficulty in tree annotation, and that annotation can be stably done by linguists without much knowledge of biology with appropriate guidelines regarding linguistic phenomena particular to scientific texts.", "which data source ?", "MEDLINE", 517.0, 524.0], ["In order to deal with heterogeneous knowledge in the medical field, this paper proposes a method which can learn a heavy-weighted medical ontology based on medical glossaries and Web resources. 
Firstly, terms and taxonomic relations are extracted based on disease and drug glossaries and a light-weighted ontology is constructed. Secondly, non-taxonomic relations are automatically learned from Web resources with linguistic patterns, and the two ontologies (disease and drug) are expanded from the light-weighted level towards the heavy-weighted level. At last, the disease ontology and drug ontology are integrated to create a practical medical ontology. Experiments show that this method can integrate and expand medical terms with taxonomic and different kinds of non-taxonomic relations. Our experiments show that the performance is promising.", "which data source ?", "Medical glossaries", 156.0, 174.0], ["A rapidly growing amount of content posted online, such as food recipes, opens doors to new exciting applications at the intersection of vision and language. In this work, we aim to estimate the calorie amount of a meal directly from an image by learning from recipes people have published on the Internet, thus skipping time-consuming manual data annotation. Since there are few large-scale publicly available datasets captured in unconstrained environments, we propose the pic2kcal benchmark comprising 308 000 images from over 70 000 recipes including photographs, ingredients, and instructions. To obtain nutritional information of the ingredients and automatically determine the ground-truth calorie value, we match the items in the recipes with structured information from a food item database. We evaluate various neural networks for regression of the calorie quantity and extend them with the multi-task paradigm. Our learning procedure combines the calorie estimation with prediction of proteins, carbohydrates, and fat amounts as well as a multi-label ingredient classification. Our experiments demonstrate clear benefits of multi-task learning for calorie estimation, surpassing the single-task calorie regression by 9.9%. To encourage further research on this task, we make the code for generating the dataset and the models publicly available.", "which data source ?", "Food item database", 781.0, 799.0], ["We present the BioCreative VII Task 3, which focuses on drug name extraction from tweets. Recognized to provide unique insights into population health, detecting health-related tweets is notoriously challenging for natural language processing tools. Tweets are written about any and all topics, most of them not related to health. Additionally, they are written with little regard for proper grammar, are inherently colloquial, and are almost never proof-read. Given a tweet, task 3 consists of detecting if the tweet has a mention of a drug name and, if so, extracting the span of the drug mention. We made available 182,049 tweets publicly posted by 212 Twitter users with all drug mentions manually annotated. This corpus exhibits the natural and strongly imbalanced distribution of positive tweets, with only 442 tweets (0.2%) mentioning a drug. This task was an opportunity for participants to evaluate methods robust to class imbalance beyond the simple lexical match. A total of 65 teams registered, and 16 teams submitted a system run. We summarize the corpus and the tools created for the challenge, which is freely available at https://biocreative.bioinformatics.udel.edu/tasks/biocreativevii/track-3/. We analyze the methods and the results of the competing systems with a focus on learning from class-imbalanced data. 
Keywords\u2014social media; pharmacovigilance; named entity recognition; drug name extraction; class-imbalance.", "which data source ?", "Twitter", 656.0, 663.0], ["Motivation: Coreference resolution, the process of identifying different mentions of an entity, is a very important component in a text-mining system. Compared with the work in news articles, the existing study of coreference resolution in biomedical texts is quite preliminary by only focusing on specific types of anaphors like pronouns or definite noun phrases, using heuristic methods, and running on small data sets. Therefore, there is a need for an in-depth exploration of this task in the biomedical domain. Results: In this article, we presented a learning-based approach to coreference resolution in the biomedical domain. We made three contributions in our study. Firstly, we annotated a large scale coreference corpus, MedCo, which consists of 1,999 MEDLINE abstracts in the GENIA data set. Secondly, we proposed a detailed framework for the coreference resolution task, in which we augmented the traditional learning model by incorporating non-anaphors into training. Lastly, we explored various sources of knowledge for coreference resolution, particularly, those that can deal with the complexity of biomedical texts. The evaluation on the MedCo corpus showed promising results. Our coreference resolution system achieved a high precision of 85.2% with a reasonable recall of 65.3%, obtaining an F-measure of 73.9%. The results also suggested that our augmented learning model significantly boosted precision (up to 24.0%) without much loss in recall (less than 5%), and brought a gain of over 8% in F-measure.", "which data source ?", "MEDLINE", 762.0, 769.0], ["The availability in machine-readable form of descriptions of the structure of documents, as well as of the document discourse (e.g. the scientific discourse within scholarly articles), is crucial for facilitating semantic publishing and the overall comprehension of documents by both users and machines. In this paper we introduce DoCO, the Document Components Ontology, an OWL 2 DL ontology that provides a general-purpose structured vocabulary of document elements to describe both structural and rhetorical document components in RDF. In addition to the formal description of the ontology, this paper showcases its utility in practice in a variety of our own applications and other activities of the Semantic Publishing community that rely on DoCO to annotate and retrieve document components of scholarly articles.", "which Ontology ?", "DoCO", 331.0, 335.0], ["The SPAR Ontology Network is a suite of complementary ontology modules to describe the scholarly publishing domain. BiDO Standard Bibliometric Measures is part of its set of ontologies. It allows the description of numerical and categorical bibliometric data such as the h-index, author citation count, and journal impact factor. These measures may be used to evaluate the scientific production of researchers. However, they are not enough. In a previous study, we determined the lack of some terms to provide a more complete representation of scientific production. Hence, we have built an extension using the NeOn Methodology to restructure the BiDO ontology. 
With this extension, it is possible to represent and measure the number of documents from research, the number of citations from a paper and the number of publications in high impact journals according to its area and discipline.", "which Ontology ?", "BIDO", 116.0, 120.0], ["The availability in machine-readable form of descriptions of the structure of documents, as well as of the document discourse (e.g. the scientific discourse within scholarly articles), is crucial for facilitating semantic publishing and the overall comprehension of documents by both users and machines. In this paper we introduce DoCO, the Document Components Ontology, an OWL 2 DL ontology that provides a general-purpose structured vocabulary of document elements to describe both structural and rhetorical document components in RDF. In addition to the formal description of the ontology, this paper showcases its utility in practice in a variety of our own applications and other activities of the Semantic Publishing community that rely on DoCO to annotate and retrieve document components of scholarly articles.", "which Ontology ?", "DoCO", 331.0, 335.0], ["While the fast-paced inception of novel tasks and new datasets helps foster active research in a community towards interesting directions, keeping track of the abundance of research activity in different areas on different datasets is likely to become increasingly difficult. The community could greatly benefit from an automatic system able to summarize scientific results, e.g., in the form of a leaderboard. In this paper we build two datasets and develop a framework (TDMS-IE) aimed at automatically extracting task, dataset, metric and score from NLP papers, towards the automatic construction of leaderboards. Experiments show that our model outperforms several baselines by a large margin. Our model is a first step towards automatic leaderboard construction, e.g., in the NLP domain.", "which has model ?", "TDMS-IE", 472.0, 479.0], ["Machine translation systems achieve near human-level performance on some languages, yet their effectiveness strongly relies on the availability of large amounts of parallel sentences, which hinders their applicability to the majority of language pairs. This work investigates how to learn to translate when having access to only large monolingual corpora in each language. We propose two model variants, a neural and a phrase-based model. Both versions leverage a careful initialization of the parameters, the denoising effect of language models and automatic generation of parallel data by iterative back-translation. These models are significantly better than methods from the literature, while being simpler and having fewer hyper-parameters. On the widely used WMT\u201914 English-French and WMT\u201916 German-English benchmarks, our models respectively obtain 28.1 and 25.2 BLEU points without using a single parallel sentence, outperforming the state of the art by more than 11 BLEU points. On low-resource languages like English-Urdu and English-Romanian, our methods achieve even better results than semi-supervised and supervised approaches leveraging the paucity of available bitexts. Our code for NMT and PBSMT is publicly available.", "which has model ?", "PBSMT", 1207.0, 1212.0], ["We examine the capabilities of a unified, multi-task framework for three information extraction tasks: named entity recognition, relation extraction, and event extraction. 
Our framework (called DyGIE++) accomplishes all tasks by enumerating, refining, and scoring text spans designed to capture local (within-sentence) and global (cross-sentence) context. Our framework achieves state-of-the-art results across all tasks, on four datasets from a variety of domains. We perform experiments comparing different techniques to construct span representations. Contextualized embeddings like BERT perform well at capturing relationships among entities in the same or adjacent sentences, while dynamic span graph updates model long-range cross-sentence relationships. For instance, propagating span representations via predicted coreference links can enable the model to disambiguate challenging entity mentions. Our code is publicly available at https://github.com/dwadden/dygiepp and can be easily adapted for new tasks or datasets.", "which has model ?", "DyGIE++", NaN, NaN], ["With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking.", "which has model ?", "XLNet", 407.0, 412.0], ["This paper proposes a novel deep reinforcement learning (RL) architecture, called Value Prediction Network (VPN), which integrates model-free and model-based RL methods into a single neural network. In contrast to typical model-based RL methods, VPN learns a dynamics model whose abstract states are trained to make option-conditional predictions of future values (discounted sum of rewards) rather than of future observations. Our experimental results show that VPN has several advantages over both model-free and model-based baselines in a stochastic environment where careful planning is required but building an accurate observation-prediction model is difficult. Furthermore, VPN outperforms Deep Q-Network (DQN) on several Atari games even with short-lookahead planning, demonstrating its potential as a new way of learning a good state representation.", "which has model ?", "VPN", 108.0, 111.0], ["We introduce SpERT, an attention model for span-based joint entity and relation extraction. Our key contribution is a light-weight reasoning on BERT embeddings, which features entity recognition and filtering, as well as relation classification with a localized, marker-free context representation. The model is trained using strong within-sentence negative samples, which are efficiently extracted in a single BERT pass. These aspects facilitate a search over all spans in the sentence. In ablation studies, we demonstrate the benefits of pre-training, strong negative sampling and localized context. 
Our model outperforms prior work by up to 2.6% F1 score on several datasets for joint entity and relation extraction.", "which has model ?", "SpERT", 13.0, 18.0], ["Although simple individually, artificial neurons provide state-of-the-art performance when interconnected in deep networks. Arguably, the Tsetlin Automaton is an even simpler and more versatile learning mechanism, capable of solving the multi-armed bandit problem. Merely by means of a single integer as memory, it learns the optimal action in stochastic environments through increment and decrement operations. In this paper, we introduce the Tsetlin Machine, which solves complex pattern recognition problems with propositional formulas, composed by a collective of Tsetlin Automata. To eliminate the longstanding problem of vanishing signal-to-noise ratio, the Tsetlin Machine orchestrates the automata using a novel game. Further, both inputs, patterns, and outputs are expressed as bits, while recognition and learning rely on bit manipulation, simplifying computation. Our theoretical analysis establishes that the Nash equilibria of the game align with the propositional formulas that provide optimal pattern recognition accuracy. This translates to learning without local optima, only global ones. In five benchmarks, the Tsetlin Machine provides competitive accuracy compared with SVMs, Decision Trees, Random Forests, Naive Bayes Classifier, Logistic Regression, and Neural Networks. We further demonstrate how the propositional formulas facilitate interpretation. We believe the combination of high accuracy, interpretability, and computational simplicity makes the Tsetlin Machine a promising tool for a wide range of domains.", "which has model ?", "Tsetlin Machine", 444.0, 459.0], ["Several deep learning models have been proposed for question answering. However, due to their single-pass nature, they have no way to recover from local maxima corresponding to incorrect answers. To address this problem, we introduce the Dynamic Coattention Network (DCN) for question answering. The DCN first fuses co-dependent representations of the question and the document in order to focus on relevant parts of both. Then a dynamic pointing decoder iterates over potential answer spans. This iterative procedure enables the model to recover from initial local maxima corresponding to incorrect answers. On the Stanford question answering dataset, a single DCN model improves the previous state of the art from 71.0% F1 to 75.9%, while a DCN ensemble obtains 80.4% F1.", "which has model ?", "DCN", 267.0, 270.0], ["Increasingly large document collections require improved information processing methods for searching, retrieving, and organizing text. Central to these information processing methods is document classification, which has become an important application for supervised learning. Recently the performance of traditional supervised classifiers has degraded as the number of documents has increased. This is because along with growth in the number of documents has come an increase in the number of categories. This paper approaches this problem differently from current document classification methods that view the problem as multi-class classification. Instead we perform hierarchical classification using an approach we call Hierarchical Deep Learning for Text classification (HDLTex). 
HDLTex employs stacks of deep learning architectures to provide specialized understanding at each level of the document hierarchy.", "which has model ?", "HDLTex", 778.0, 784.0], ["Large language models have become increasingly difficult to train because of the growing computation time and cost. In this work, we present SRU++, a highly-efficient architecture that combines fast recurrence and attention for sequence modeling. SRU++ exhibits strong modeling capacity and training efficiency. On standard language modeling tasks such as Enwik8, Wiki-103 and Billion Word datasets, our model obtains better bits-per-character and perplexity while using 3x-10x less training cost compared to top-performing Transformer models. For instance, our model achieves a state-of-the-art result on the Enwik8 dataset using 1.6 days of training on an 8-GPU machine. We further demonstrate that SRU++ requires minimal attention for near state-of-the-art performance. Our results suggest jointly leveraging fast recurrence with little attention as a promising direction for accelerating model training and inference.", "which has model ?", "SRU++", NaN, NaN], ["Named entity recognition and relation extraction are two important fundamental problems. Joint learning algorithms have been proposed to solve both tasks simultaneously, and many of them cast the joint task as a table-filling problem. However, they typically focused on learning a single encoder (usually learning representation in the form of a table) to capture information required for both tasks within the same space. We argue that it can be beneficial to design two distinct encoders to capture such two different types of information in the learning process. In this work, we propose the novel table-sequence encoders where two different encoders, a table encoder and a sequence encoder, are designed to help each other in the representation learning process. Our experiments confirm the advantages of having two encoders over one encoder. On several standard datasets, our model shows significant improvements over existing approaches.", "which has model ?", "Table-Sequence", 606.0, 620.0], ["Machine comprehension (MC) style question answering is a representative problem in natural language processing. Previous methods rarely spend time on the improvement of the encoding layer, especially the embedding of syntactic information and named entities of the words, which are very crucial to the quality of encoding. Moreover, existing attention methods represent each query word as a vector or use a single vector to represent the whole query sentence; neither of them can handle the proper weight of the key words in the query sentence. In this paper, we introduce a novel neural network architecture called Multi-layer Embedding with Memory Network (MEMEN) for the machine reading task. In the encoding layer, we apply the classic skip-gram model to the syntactic and semantic information of the words to train a new kind of embedding layer. We also propose a memory network of full-orientation matching of the query and passage to catch more pivotal information. 
Experiments show that our model has competitive results both from the perspectives of precision and efficiency on the Stanford Question Answering Dataset (SQuAD) among all published results and achieves the state-of-the-art results on the TriviaQA dataset.", "which has model ?", "MEMEN", 646.0, 651.0], ["While normalizing flows have led to significant advances in modeling high-dimensional continuous distributions, their applicability to discrete distributions remains unknown. In this paper, we show that flows can in fact be extended to discrete events---and under a simple change-of-variables formula not requiring log-determinant-Jacobian computations. Discrete flows have numerous applications. We consider two flow architectures: discrete autoregressive flows that enable bidirectionality, allowing, for example, tokens in text to depend on both left-to-right and right-to-left contexts in an exact language model; and discrete bipartite flows that enable efficient non-autoregressive generation as in RealNVP. Empirically, we find that discrete autoregressive flows outperform autoregressive baselines on synthetic discrete distributions, an addition task, and Potts models; and bipartite flows can obtain competitive performance with autoregressive baselines on character-level language modeling for Penn Tree Bank and text8.", "which has model ?", "Bipartite Flow", NaN, NaN], ["Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of Hyperparameter Optimization (HPO) more important than ever. Here, we combine the advantages of the popular bandit-based HPO method Hyperband (HB) and the evolutionary search approach of Differential Evolution (DE) to yield a new HPO method which we call DEHB. Comprehensive results on a very broad range of HPO problems, as well as a wide range of tabular benchmarks from neural architecture search, demonstrate that DEHB achieves strong performance far more robustly than all previous HPO methods we are aware of, especially for high-dimensional problems with discrete input dimensions. For example, DEHB is up to 1000x faster than random search. It is also efficient in computational time, conceptually simple and easy to implement, positioning it well to become a new default HPO method.", "which keywords ?", "Differential Evolution", 317.0, 339.0], ["A novel and highly sensitive nonenzymatic glucose biosensor was developed by nucleating colloidal silver nanoparticles (AgNPs) on MoS2. The facile fabrication method, high reproducibility (97.5%) and stability indicate a promising capability for large-scale manufacturing. Additionally, the excellent sensitivity (9044.6 \u03bcA\u00b7mM\u22121\u00b7cm\u22122), low detection limit (0.03 \u03bcM), appropriate linear range of 0.1\u20131000 \u03bcM, and high selectivity suggest that this biosensor has a great potential to be applied for noninvasive glucose detection in human body fluids, such as sweat and saliva.", "which keywords ?", "glucose biosensor", 42.0, 59.0], ["Herein, a novel electrochemical glucose biosensor based on glucose oxidase (GOx) immobilized on a surface containing platinum nanoparticles (PtNPs) electrodeposited on poly(Azure A) (PAA) previously electropolymerized on activated screen-printed carbon electrodes (GOx-PtNPs-PAA-aSPCEs) is reported. The resulting electrochemical biosensor was validated towards glucose oxidation in real samples and further electrochemical measurement associated with the generated H2O2. 
The electrochemical biosensor showed an excellent sensitivity (42.7 \u03bcA mM\u22121 cm\u22122), limit of detection (7.6 \u03bcM), linear range (20 \u03bcM\u20132.3 mM), and good selectivity towards glucose determination. Furthermore, and most importantly, the detection of glucose was performed at a low potential (0.2 V vs. Ag). The high performance of the electrochemical biosensor was explained through surface exploration using field emission SEM, XPS, and impedance measurements. The electrochemical biosensor was successfully applied to glucose quantification in several real samples (commercial juices and a plant cell culture medium), exhibiting a high accuracy when compared with a classical spectrophotometric method. This electrochemical biosensor can be easily prepared and opens up a good alternative in the development of new sensitive glucose sensors.", "which keywords ?", " activated screen-printed carbon electrodes", 220.0, 263.0], ["With the increased dependence on online learning platforms and educational resource repositories, a unified representation of digital learning resources becomes essential to support a dynamic and multi-source learning experience. We introduce the EduCOR ontology, an educational, career-oriented ontology that provides a foundation for representing online learning resources for personalised learning systems. The ontology is designed to enable learning material repositories to offer learning path recommendations, which correspond to the user\u2019s learning goals and preferences, academic and psychological parameters, and labour-market skills. We present the multiple patterns that compose the EduCOR ontology, highlighting its cross-domain applicability and integrability with other ontologies. A demonstration of the proposed ontology on the real-life learning platform eDoer is discussed as a use case. We evaluate the EduCOR ontology using both gold standard and task-based approaches. The comparison of EduCOR to three gold schemata, and its application in two use-cases, shows its coverage and adaptability to multiple OER repositories, which allows generating user-centric and labour-market oriented recommendations. Resource: https://tibonto.github.io/educor/.", "which keywords ?", "Skill", NaN, NaN], ["A new variant of the classic pulsed laser deposition (PLD) process is introduced as a room-temperature dry process for the growth and stoichiometry control of hybrid perovskite films through the use of nonstoichiometric single target ablation and off-axis growth. Mixed halide hybrid perovskite films nominally represented by CH3NH3PbI3\u2013xAx (A = Cl or F) are also grown and are shown to reveal interesting trends in the optical properties and photoresponse. Growth of good quality lead-free CH3NH3SnI3 films is also demonstrated, and the corresponding optical properties are presented. Finally, perovskite solar cells fabricated at room temperature (which makes the process adaptable to flexible substrates) are shown to yield a conversion efficiency of about 7.7%.", "which keywords ?", "Solar Cells", 606.0, 617.0], ["This paper presents a new RF MEMS tunable capacitor based on the zipper principle and with interdigitated RF and actuation electrodes. The electrode configuration prevents dielectric charging under high actuation voltages. It also increases the capacitance ratio and the tunable analog range. The effect of the residual stress on the capacitance tunability is also investigated. 
Two devices with different interdigital RF and actuation electrodes are fabricated on an alumina substrate and result in a capacitance ratio around 3.0 (Cmin = 70\u201390 fF, Cmax = 240\u2013270 fF) and with a Q > 100 at 3 GHz. This design can be used in wideband tunable filters and matching networks.", "which keywords ?", "Tunable capacitor", 34.0, 51.0], ["Radioresistant hypoxic cells may contribute to the failure of radiation therapy in controlling certain tumors. Some studies have suggested the radiosensitizing effect of paclitaxel. The poly(D,L-lactide-co-glycolide) (PLGA) nanoparticles containing paclitaxel were prepared by the o/w emulsification-solvent evaporation method. The physicochemical characteristics of the nanoparticles (i.e. encapsulation efficiency, particle size distribution, morphology, in vitro release) were studied. The morphology of the two human tumor cell lines: a carcinoma cervicis (HeLa) and a hepatoma (HepG2), treated with paclitaxel-loaded nanoparticles was photomicrographed. Flow cytometry was used to quantify the number of the tumor cells held in the G2/M phase of the cell cycle. The cellular uptake of nanoparticles was evaluated by transmission electronic microscopy. Cell viability was determined by the ability of a single cell to form colonies in vitro. The prepared nanoparticles were spherical in shape with size between 200 nm and 800 nm. The encapsulation efficiency was 85.5%. The release behaviour of paclitaxel from the nanoparticles exhibited a biphasic pattern characterised by a fast initial release during the first 24 h, followed by a slower and continuous release. Co-culture of the two tumor cell lines with paclitaxel-loaded nanoparticles demonstrated that the cell morphology was changed and the released paclitaxel retained its bioactivity to block cells in the G2/M phase. The cellular uptake of nanoparticles was observed. The free paclitaxel and paclitaxel-loaded nanoparticles effectively sensitized hypoxic HeLa and HepG2 cells to radiation. Under this experimental condition, the radiosensitization of paclitaxel-loaded nanoparticles was more significant than that of free paclitaxel. Keywords: Paclitaxel; Drug delivery; Nanoparticle; Radiotherapy; Hypoxia; Human tumor cells; cellular uptake", "which keywords ?", "Hypoxia", 1851.0, 1858.0], ["In this report, we demonstrate high spectral responsivity (SR) solar blind deep ultraviolet (UV) \u03b2-Ga2O3 metal-semiconductor-metal (MSM) photodetectors grown by the mist chemical-vapor deposition (Mist-CVD) method. The \u03b2-Ga2O3 thin film was grown on c-plane sapphire substrates, and the fabricated MSM PDs with Al contacts in an interdigitated geometry were found to exhibit peak SR > 150 A/W for the incident light wavelength of 254 nm at a bias of 20 V. The devices exhibited very low dark current, about 14 pA at 20 V, and showed sharp transients with a photo-to-dark current ratio > 10\u2075. The corresponding external quantum efficiency is over 7 \u00d7 10\u2074%. The excellent deep UV \u03b2-Ga2O3 photodetectors will enable significant advancements for the next-generation photodetection applications.", "which keywords ?", "Photodetectors", 137.0, 151.0], ["Purpose: To develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel. Methods: Chitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. 
The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231).ResultsThese studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX.ConclusionThese advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.", "which keywords ?", "Nanoparticles", 205.0, 218.0], ["Herein, we incorporated dual biotemplates, i.e., cellulose nanocrystals (CNC) and apoferritin, into electrospinning solution to achieve three distinct benefits, i.e., (i) facile synthesis of a WO3 nanotube by utilizing the self-agglomerating nature of CNC in the core of as-spun nanofibers, (ii) effective sensitization by partial phase transition from WO3 to Na2W4O13 induced by interaction between sodium-doped CNC and WO3 during calcination, and (iii) uniform functionalization with monodispersive apoferritin-derived Pt catalytic nanoparticles (2.22 \u00b1 0.42 nm). Interestingly, the sensitization effect of Na2W4O13 on WO3 resulted in highly selective H2S sensing characteristics against seven different interfering molecules. Furthermore, synergistic effects with a bioinspired Pt catalyst induced a remarkably enhanced H2S response (Rair/Rgas = 203.5), unparalleled selectivity (Rair/Rgas < 1.3 for the interfering molecules), and rapid response (<10 s)/recovery (<30 s) time at 1 ppm of H2S under 95% relative humidity level. This work paves the way for a new class of cosensitization routes to overcome critical shortcomings of SMO-based chemical sensors, thus providing a potential platform for diagnosis of halitosis.", "which keywords ?", " apoferritin", 81.0, 93.0], ["This paper studies the effect of surface roughness on up-state and down-state capacitances of microelectromechanical systems (MEMS) capacitive switches. When the root-mean-square (RMS) roughness is 10 nm, the up-state capacitance is approximately 9% higher than the theoretical value. When the metal bridge is driven down, the normalized contact area between the metal bridge and the surface of the dielectric layer is less than 1% if the RMS roughness is larger than 2 nm. Therefore, the down-state capacitance is actually determined by the non-contact part of the metal bridge. The normalized isolation is only 62% for RMS roughness of 10 nm when the hold-down voltage is 30 V. The analysis also shows that the down-state capacitance and the isolation increase with the hold-down voltage. 
The normalized isolation increases from 58% to 65% when the hold-down voltage increases from 10 V to 60 V for RMS roughness of 10 nm.", "which keywords ?", "MEMS", 126.0, 130.0], ["A hybrid cascaded configuration consisting of a fiber Sagnac interferometer (FSI) and a Fabry-Perot interferometer (FPI) was proposed and experimentally demonstrated to enhance the temperature intensity by the Vernier-effect. The FSI, which consists of a certain length of Panda fiber, is for temperature sensing, while the FPI acts as a filter due to its temperature insensitivity. The two interferometers have almost the same free spectral range, with the spectral envelope of the cascaded sensor shifting much more than the single FSI. Experimental results show that the temperature sensitivity is enhanced from \u22121.4 nm/\u00b0C (single FSI) to \u221229.0 (cascaded configuration). The enhancement factor is 20.7, which is basically consistent with theoretical analysis (19.9).", "which keywords ?", "cascaded configuration", 9.0, 31.0], ["E-learning recommender systems are gaining significance nowadays due to its ability to enhance the learning experience by providing tailor-made services based on learner preferences. A Personalized Learning Environment (PLE) that automatically adapts to learner characteristics such as learning styles and knowledge level can recommend appropriate learning resources that would favor the learning process and improve learning outcomes. The pure cold-start problem is a relevant issue in PLEs, which arises due to the lack of prior information about the new learner in the PLE to create appropriate recommendations. This article introduces a semantic framework based on ontology to address the pure cold-start problem in content recommenders. The ontology encapsulates the domain knowledge about the learners as well as Learning Objects (LOs). The semantic model that we built has been experimented with different combinations of the key learner parameters such as learning style, knowledge level, and background knowledge. The proposed framework utilizes these parameters to build natural learner groups from the learner ontology using SPARQL queries. The ontology holds 480 learners\u2019 data, 468 annotated learning objects with 5,600 learner ratings. A multivariate k-means clustering algorithm, an unsupervised machine learning technique for grouping similar data, is used to evaluate the learner similarity computation accuracy. The learner satisfaction achieved with the proposed model is measured based on the ratings given by the 40 participants of the experiments. From the evaluation perspective, it is evident that 79% of the learners are satisfied with the recommendations generated by the proposed model in pure cold-start condition.", "which keywords ?", "personalized learning environment", 185.0, 218.0], ["An electrically conductive ultralow percolation threshold of 0.1 wt% graphene was observed in the thermoplastic polyurethane (TPU) nanocomposites. The homogeneously dispersed graphene effectively enhanced the mechanical properties of TPU significantly at a low graphene loading of 0.2 wt%. These nanocomposites were subjected to cyclic loading to investigate the influences of graphene loading, strain amplitude and strain rate on the strain sensing performances. 
The two dimensional graphene and the flexible TPU matrix were found to endow these nanocomposites with a wide range of strain sensitivity (gauge factor ranging from 0.78 for TPU with 0.6 wt% graphene at the strain rate of 0.1 min\u22121 to 17.7 for TPU with 0.2 wt% graphene at the strain rate of 0.3 min\u22121) and good sensing stability for different strain patterns. In addition, these nanocomposites demonstrated good recoverability and reproducibility after stabilization by cyclic loading. An analytical model based on tunneling theory was used to simulate the resistance response to strain under different strain rates. The change in the number of conductive pathways and tunneling distance under strain was responsible for the observed resistance-strain behaviors. This study provides guidelines for the fabrication of graphene based polymer strain sensors.", "which keywords ?", "Graphene", 69.0, 77.0], ["Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179\u00b122 nm were produced. Res was loaded with high EE of 73\u00b10.9% and DL content of 6.2\u00b10.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.", "which keywords ?", "Resveratrol", 316.0, 327.0], ["Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of Hyperparameter Optimization (HPO) more important than ever. Here, we combine the advantages of the popular bandit-based HPO method Hyperband (HB) and the evolutionary search approach of Differential Evolution (DE) to yield a new HPO method which we call DEHB. 
Comprehensive results on a very broad range of HPO problems, as well as a wide range of tabular benchmarks from neural architecture search, demonstrate that DEHB achieves strong performance far more robustly than all previous HPO methods we are aware of, especially for high-dimensional problems with discrete input dimensions. For example, DEHB is up to 1000x faster than random search. It is also efficient in computational time, conceptually simple and easy to implement, positioning it well to become a new default HPO method.", "which keywords ?", "HPO", 160.0, 163.0], ["Recently, pure transformer-based models have shown great potentials for vision tasks such as image classification and detection. However, the design of transformer networks is challenging. It has been observed that the depth, embedding dimension, and number of heads can largely affect the performance of vision transformers. Previous models configure these dimensions based upon manual crafting. In this work, we propose a new one-shot architecture search framework, namely AutoFormer, dedicated to vision transformer search. AutoFormer entangles the weights of different blocks in the same layers during supernet training. Benefiting from the strategy, the trained supernet allows thousands of subnets to be very well-trained. Specifically, the performance of these subnets with weights inherited from the supernet is comparable to those retrained from scratch. Besides, the searched models, which we refer to AutoFormers, surpass the recent state-of-the-arts such as ViT and DeiT. In particular, AutoFormer-tiny/small/base achieve 74.7%/81.7%/82.4% top-1 accuracy on ImageNet with 5.7M/22.9M/53.7M parameters, respectively. Lastly, we verify the transferability of AutoFormer by providing the performance on downstream benchmarks and distillation experiments. Code and models are available at https://github.com/microsoft/Cream.", "which keywords ?", "Image Classification", 93.0, 113.0], ["Abstract Despite rapid progress, most of the educational technologies today lack a strong instructional design knowledge basis leading to questionable quality of instruction. In addition, a major challenge is to customize these educational technologies for a wide range of customizable instructional designs. Ontologies are one of the pertinent mechanisms to represent instructional design in the literature. However, existing approaches do not support modeling of flexible instructional designs. To address this problem, in this paper, we propose an ontology based framework for systematic modeling of different aspects of instructional design knowledge based on domain patterns. As part of the framework, we present ontologies for modeling goals, instructional processes and instructional material. We demonstrate the ontology framework by presenting instances of the ontology for the large scale case study of adult literacy in India (287 million learners spread across 22 Indian Languages), which requires creation of hundreds of similar but varied e Learning Systems based on flexible instructional designs. The implemented framework is available at http://rice.iiit.ac.in and is transferred to National Literacy Mission Authority of Government of India. 
The proposed framework could be potentially used for modeling instructional design knowledge for school education, vocational skills and beyond.", "which keywords ?", "instructional process", NaN, NaN], ["PurposeTo develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel.MethodsChitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231).ResultsThese studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX.ConclusionThese advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.", "which keywords ?", "mucoadhesive", 1206.0, 1218.0], ["A simple and inexpensive method for growing Ga2O3 using GaAs wafers is demonstrated. Si-doped GaAs wafers are heated to 1050 \u00b0C in a horizontal tube furnace in both argon and air ambients in order to convert their surfaces to \u03b2-Ga2O3. The \u03b2-Ga2O3 films are characterized using scanning electron micrograph, energy-dispersive X-ray spectroscopy, and X-ray diffraction. They are also used to fabricate solar blind photodetectors. The devices, which had nanotextured surfaces, exhibited a high sensitivity to ultraviolet (UV) illumination due in part to large surface areas. Furthermore, the films have coherent interfaces with the substrate, which leads to a robust device with high resistance to thermo-mechanical stress. The photoconductance of the \u03b2-Ga2O3 films is found to increase by more than three orders of magnitude under 270 nm ultraviolet illumination with respect to the dark current. The fabricated device shows a responsivity of \u223c292 mA/W at this wavelength.", "which keywords ?", "photodetector", NaN, NaN], ["Programs offered by academic institutions in higher education need to meet specific standards that are established by the appropriate accreditation bodies. Curriculum mapping is an important part of the curriculum management process that is used to document the expected learning outcomes, ensure quality, and align programs and courses with industry standards. Semantic web languages can be used to express and share common agreement about the vocabularies used in the domain under study. In this paper, we present an approach based on ontology for curriculum mapping in higher education. Our proposed approach is focused on the creation of a core curriculum ontology that can support effective knowledge representation and knowledge discovery. 
The research work presents the case of ontology reuse through the extension of the curriculum ontology to support the creation of micro-credentials. We also present a conceptual framework for knowledge discovery to support various business use case scenarios based on ontology inferencing and querying operations.", "which keywords ?", "curriculum ontology", 649.0, 668.0], ["Poor delivery of insoluble anticancer drugs has so far precluded their clinical application. In this study, we developed a tumor-targeting delivery system for insoluble drug (paclitaxel, PTX) by PEGylated O-carboxymethyl-chitosan (CMC) nanoparticles grafted with cyclic Arg-Gly-Asp (RGD) peptide. To improve the loading efficiency (LE), we combined O/W/O double emulsion method with temperature-programmed solidification technique and controlled PTX within the matrix network as in situ nanocrystallite form. Furthermore, these CMC nanoparticles were PEGylated, which could reduce recognition by the reticuloendothelial system (RES) and prolong the circulation time in blood. In addition, further graft of cyclic RGD peptide at the terminal of PEG chain endowed these nanoparticles with higher affinity to in vitro Lewis lung carcinoma (LLC) cells and in vivo tumor tissue. These outstanding properties enabled as-designed nanodevice to exhibit a greater tumor growth inhibition effect and much lower side effects over the commercial formulation Taxol.", "which keywords ?", "Nanoparticles", 236.0, 249.0], ["Ultralight graphene-based cellular elastomers are found to exhibit nearly frequency-independent piezoresistive behaviors. Surpassing the mechanoreceptors in the human skin, these graphene elastomers can provide an instantaneous and high-fidelity electrical response to dynamic pressures ranging from quasi-static up to 2000 Hz, and are capable of detecting ultralow pressures as small as 0.082 Pa.", "which keywords ?", "graphene elastomers ", 179.0, 199.0], ["CdTe-based solar cells exhibiting 19% power conversion efficiency were produced using widely available thermal evaporation deposition of the absorber layers on SnO2-coated glass with or without a t...", "which keywords ?", "Solar cells", 11.0, 22.0], ["Radioresistant hypoxic cells may contribute to the failure of radiation therapy in controlling certain tumors. Some studies have suggested the radiosensitizing effect of paclitaxel. The poly(D,L-lactide-co-glycolide)(PLGA) nanoparticles containing paclitaxel were prepared by o/w emulsification-solvent evaporation method. The physicochemical characteristics of the nanoparticles (i.e. encapsulation efficiency, particle size distribution, morphology, in vitro release) were studied. The morphology of the two human tumor cell lines: a carcinoma cervicis (HeLa) and a hepatoma (HepG2), treated with paclitaxel-loaded nanoparticles was photomicrographed. Flow cytometry was used to quantify the number of the tumor cells held in the G2/M phase of the cell cycle. The cellular uptake of nanoparticles was evaluated by transmission electronic microscopy. Cell viability was determined by the ability of single cell to form colonies in vitro. The prepared nanoparticles were spherical in shape with size between 200nm and 800nm. The encapsulation efficiency was 85.5%. The release behaviour of paclitaxel from the nanoparticles exhibited a biphasic pattern characterised by a fast initial release during the first 24 h, followed by a slower and continuous release. 
Co-culture of the two tumor cell lines with paclitaxel-loaded nanoparticles demonstrated that the cell morphology was changed and the released paclitaxel retained its bioactivity to block cells in the G2/M phase. The cellular uptake of nanoparticles was observed. The free paclitaxel and paclitaxel-loaded nanoparticles effectively sensitized hypoxic HeLa and HepG2 cells to radiation. Under this experimental condition, the radiosensitization of paclitaxel-loaded nanoparticles was more significant than that of free paclitaxel.Keywords: Paclitaxel;Drug delivery;Nanoparticle;Radiotherapy;Hypoxia;Human tumor cells;cellular uptake", "which keywords ?", "Nanoparticles", 223.0, 236.0], ["The development of p-type metal-oxide semiconductors (MOSs) is of increasing interest for applications in next-generation optoelectronic devices, display backplane, and low-power-consumption complementary MOS circuits. Here, we report the high performance of solution-processed, p-channel copper-tin-sulfide-gallium oxide (CTSGO) thin-film transistors (TFTs) using UV/O3 exposure. Hall effect measurement confirmed the p-type conduction of CTSGO with Hall mobility of 6.02 \u00b1 0.50 cm2 V-1 s-1. The p-channel CTSGO TFT using UV/O3 treatment exhibited the field-effect mobility (\u03bcFE) of 1.75 \u00b1 0.15 cm2 V-1 s-1 and an on/off current ratio (ION/IOFF) of \u223c104 at a low operating voltage of -5 V. The significant enhancement in the device performance is due to the good p-type CTSGO material, smooth surface morphology, and fewer interfacial traps between the semiconductor and the Al2O3 gate insulator. Therefore, the p-channel CTSGO TFT can be applied for CMOS MOS TFT circuits for next-generation display.", "which keywords ?", "Transistors", 340.0, 351.0], ["This paper studies the effect of surface roughness on up-state and down-state capacitances of microelectromechanical systems (MEMS) capacitive switches. When the root-mean-square (RMS) roughness is 10 nm, the up-state capacitance is approximately 9% higher than the theoretical value. When the metal bridge is driven down, the normalized contact area between the metal bridge and the surface of the dielectric layer is less than 1% if the RMS roughness is larger than 2 nm. Therefore, the down-state capacitance is actually determined by the non-contact part of the metal bridge. The normalized isolation is only 62% for RMS roughness of 10 nm when the hold-down voltage is 30 V. The analysis also shows that the down-state capacitance and the isolation increase with the hold-down voltage. The normalized isolation increases from 58% to 65% when the hold-down voltage increases from 10 V to 60 V for RMS roughness of 10 nm.", "which keywords ?", "Capacitive switches", 132.0, 151.0], ["PurposeTo develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel.MethodsChitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. 
The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231).ResultsThese studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX.ConclusionThese advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.", "which keywords ?", "Chitosan", 74.0, 82.0], ["Mixed tin (Sn)-lead (Pb) perovskites with high Sn content exhibit low bandgaps suitable for fabricating the bottom cell of perovskite-based tandem solar cells. In this work, we report on the fabrication of efficient mixed Sn-Pb perovskite solar cells using precursors combining formamidinium tin iodide (FASnI3) and methylammonium lead iodide (MAPbI3). The best-performing cell fabricated using a (FASnI3)0.6(MAPbI3)0.4 absorber with an absorption edge of \u223c1.2 eV achieved a power conversion efficiency (PCE) of 15.08 (15.00)% with an open-circuit voltage of 0.795 (0.799) V, a short-circuit current density of 26.86(26.82) mA/cm(2), and a fill factor of 70.6(70.0)% when measured under forward (reverse) voltage scan. The average PCE of 50 cells we have fabricated is 14.39 \u00b1 0.33%, indicating good reproducibility.", "which keywords ?", "Perovskites", 25.0, 36.0], ["The Vernier effect of two cascaded in-fiber Mach-Zehnder interferometers (MZIs) based on a spherical-shaped structure has been investigated. The envelope based on the Vernier effect is actually formed by a frequency component of the superimposed spectrum, and the frequency value is determined by the subtraction between the optical path differences of two cascaded MZIs. A method based on band-pass filtering is put forward to extract the envelope efficiently; strain and curvature measurements are carried out to verify the validity of the method. The results show that the strain and curvature sensitivities are enhanced to -8.47 pm/\u03bc\u03b5 and -33.70 nm/m-1 with magnification factors of 5.4 and -5.4, respectively. The detection limit of the sensors with the Vernier effect is also discussed.", "which keywords ?", "Vernier effect", 4.0, 18.0], ["This paper reports on the experimental and theoretical characterization of RF microelectromechanical systems (MEMS) switches for high-power applications. First, we investigate the problem of self-actuation due to high RF power and we demonstrate switches that do not self-actuate or catastrophically fail with a measured RF power of up to 5.5 W. Second, the problem of switch stiction to the down state as a function of the applied RF power is also theoretically and experimentally studied. Finally, a novel switch design with a top electrode is introduced and its advantages related to RF power-handling capabilities are presented. By applying this technology, we demonstrate hot-switching measurements with a maximum power of 0.8 W. 
Our results, backed by theory and measurements, illustrate that careful design can significantly improve the power-handling capabilities of RF MEMS switches.", "which keywords ?", "Switch stiction", 369.0, 384.0], ["In this paper, a liquid-based micro thermal convective accelerometer (MTCA) is optimized by the Rayleigh number (Ra) based compact model and fabricated using the 0.35 \u03bcm CMOS MEMS technology. To achieve water-proof performance, the conformal Parylene C coating was adopted as the isolation layer with the accelerated life-testing results of a 9-year-lifetime for liquid-based MTCA. Then, the device performance was characterized considering sensitivity, response time, and noise. Both the theoretical and experimental results demonstrated that fluid with a larger Ra number can provide better performance for the MTCA. More significantly, Ra based model showed its advantage to make a more accurate prediction than the simple linear model to select suitable fluid to enhance the sensitivity and balance the linear range of the device. Accordingly, an alcohol-based MTCA was achieved with a two-order-of magnitude increase in sensitivity (43.8 mV/g) and one-order-of-magnitude decrease in the limit of detection (LOD) (61.9 \u03bcg) compared with the air-based MTCA.", "which keywords ?", "CMOS", 265.0, 269.0], ["In this work, pure and IIIA element doped ZnO thin films were grown on p type silicon (Si) with (100) orientated surface by sol-gel method, and were characterized for comparing their electrical characteristics. The heterojunction parameters were obtained from the current-voltage (I-V) and capacitance-voltage (C-V) characteristics at room temperature. The ideality factor (n), saturation current (Io) and junction resistance of ZnO/p-Si heterojunction for both pure and doped (with Al or In) cases were determined by using different methods at room ambient. Other electrical parameters such as Fermi energy level (EF), barrier height (\u03a6B), acceptor concentration (Na), built-in potential (\u03a6i) and voltage dependence of surface states (Nss) profile were obtained from the C-V measurements. The results reveal that doping ZnO with IIIA (Al or In) elements to fabricate n-ZnO/p-Si heterojunction can result in high performance diode characteristics.", "which keywords ?", "ZnO Thin film", NaN, NaN], ["Current approaches to RDF graph indexing suffer from weak data locality, i.e., information regarding a piece of data appears in multiple locations, spanning multiple data structures. Weak data locality negatively impacts storage and query processing costs. Towards stronger data locality, we propose a Three-way Triple Tree (TripleT) secondary memory indexing technique to facilitate flexible and efficient join evaluation on RDF data. The novelty of TripleT is that the index is built over the atoms occurring in the data set, rather than at a coarser granularity, such as whole triples occurring in the data set; and, the atoms are indexed regardless of the roles (i.e., subjects, predicates, or objects) they play in the triples of the data set. We show through extensive empirical evaluation that TripleT exhibits multiple orders of magnitude improvement over the state-of-the-art, in terms of both storage and query processing costs.", "which keywords ?", "Query processing costs", 233.0, 255.0], ["This paper presents a new RF MEMS tunable capacitor based on the zipper principle and with interdigitated RF and actuation electrodes. 
The electrode configuration prevents dielectric charging under high actuation voltages. It also increases the capacitance ratio and the tunable analog range. The effect of the residual stress on the capacitance tunability is also investigated. Two devices with different interdigital RF and actuation electrodes are fabricated on an alumina substrate and result in a capacitance ratio around 3.0 (Cmin = 70\u201390 fF, Cmax = 240\u2013270 fF) and with a Q > 100 at 3 GHz. This design can be used in wideband tunable filters and matching networks.", "which keywords ?", "Interdigitated RF", 91.0, 108.0], ["Abstract Objective: Paclitaxel (PTX)-loaded polymer (Poly(lactic-co-glycolic acid), PLGA)-based nanoformulation was developed with the objective of formulating cremophor EL-free nanoformulation intended for intravenous use. Significance: The polymeric PTX nanoparticles free from the cremophor EL will help in eliminating the shortcomings of the existing delivery system as cremophor EL causes serious allergic reactions to the subjects after intravenous use. Methods and results: Paclitaxel-loaded nanoparticles were formulated by nanoprecipitation method. The diminutive nanoparticles (143.2 nm) with uniform size throughout (polydispersity index, 0.115) and high entrapment efficiency (95.34%) were obtained by employing the Box\u2013Behnken design for the optimization of the formulation with the aid of desirability approach-based numerical optimization technique. Optimized levels for each factor viz. polymer concentration (X1), amount of organic solvent (X2), and surfactant concentration (X3) were 0.23%, 5 ml %, and 1.13%, respectively. The results of the hemocompatibility studies confirmed the safety of PLGA-based nanoparticles for intravenous administration. Pharmacokinetic evaluations confirmed the longer retention of PTX in systemic circulation. Conclusion: In a nutshell, the developed polymeric nanoparticle formulation of PTX precludes the inadequacy of existing PTX formulation and can be considered as superior alternative carrier system of the same.", "which keywords ?", "Paclitaxel", 20.0, 30.0], ["In recent years, the development of recommender systems has attracted increased interest in several domains, especially in e-learning. Massive Open Online Courses have brought a revolution. However, deficiency in support and personalization in this context drive learners to lose their motivation and leave the learning process. To overcome this problem we focus on adapting learning activities to learners' needs using a recommender system.This paper attempts to provide an introduction to different recommender systems for e-learning settings, as well as to present our proposed recommender system for massive learning activities in order to provide learners with the suitable learning activities to follow the learning process and maintain their motivation. We propose a hybrid knowledge-based recommender system based on ontology for recommendation of e-learning activities to learners in the context of MOOCs. In the proposed recommendation approach, ontology is used to model and represent the knowledge about the domain model, learners and learning activities.", "which keywords ?", "recommender system", 422.0, 440.0], ["Abstract Objective: Paclitaxel (PTX)-loaded polymer (Poly(lactic-co-glycolic acid), PLGA)-based nanoformulation was developed with the objective of formulating cremophor EL-free nanoformulation intended for intravenous use. 
Significance: The polymeric PTX nanoparticles free from the cremophor EL will help in eliminating the shortcomings of the existing delivery system as cremophor EL causes serious allergic reactions to the subjects after intravenous use. Methods and results: Paclitaxel-loaded nanoparticles were formulated by nanoprecipitation method. The diminutive nanoparticles (143.2 nm) with uniform size throughout (polydispersity index, 0.115) and high entrapment efficiency (95.34%) were obtained by employing the Box\u2013Behnken design for the optimization of the formulation with the aid of desirability approach-based numerical optimization technique. Optimized levels for each factor viz. polymer concentration (X1), amount of organic solvent (X2), and surfactant concentration (X3) were 0.23%, 5 ml %, and 1.13%, respectively. The results of the hemocompatibility studies confirmed the safety of PLGA-based nanoparticles for intravenous administration. Pharmacokinetic evaluations confirmed the longer retention of PTX in systemic circulation. Conclusion: In a nutshell, the developed polymeric nanoparticle formulation of PTX precludes the inadequacy of existing PTX formulation and can be considered as superior alternative carrier system of the same.", "which keywords ?", "Nanoparticles", 256.0, 269.0], ["Bladder cancer (BC) is a very common cancer. Nonmuscle-invasive bladder cancer (NMIBC) is the most common type of bladder cancer. After postoperative tumor resection, chemotherapy intravesical instillation is recommended as a standard treatment to significantly reduce recurrences. Nanomedicine-mediated delivery of a chemotherapeutic agent targeting cancer could provide a solution to obtain longer residence time and high bioavailability of an anticancer drug. The approach described here provides a nanomedicine with sustained and prolonged delivery of paclitaxel and enhanced therapy of intravesical bladder cancer, which is paclitaxel/chitosan (PTX/CS) nanosupensions (NSs). The positively charged PTX/CS NSs exhibited a rod-shaped morphology with a mean diameter about 200 nm. They have good dispersivity in water without any protective agents, and the positively charged properties make them easy to be adsorbed on the inner mucosa of the bladder through electrostatic adsorption. PTX/CS NSs also had a high drug loading capacity and can maintain sustained release of paclitaxel which could be prolonged over 10 days. Cell experiments in vitro demonstrated that PTX/CS NSs had good biocompatibility and effective bladder cancer cell proliferation inhibition. The significant anticancer efficacy against intravesical bladder cancer was verified by an in situ bladder cancer model. The paclitaxel/chitosan nanosupensions could provide sustained delivery of chemotherapeutic agents with significant anticancer efficacy against intravesical bladder cancer.", "which keywords ?", "Chitosan", 640.0, 648.0], ["In this study, we synthesized hierarchical CuO nanoleaves in large-quantity via the hydrothermal method. We employed different techniques to characterize the morphological, structural, optical properties of the as-prepared hierarchical CuO nanoleaves sample. An electrochemical based nonenzymatic glucose biosensor was fabricated using engineered hierarchical CuO nanoleaves. The electrochemical behavior of fabricated biosensor towards glucose was analyzed with cyclic voltammetry (CV) and amperometry (i\u2013t) techniques. 
Owing to the high electroactive surface area, hierarchical CuO nanoleaves based nonenzymatic biosensor electrode shows enhanced electrochemical catalytic behavior for glucose electro-oxidation in 100 mM sodium hydroxide (NaOH) electrolyte. The nonenzymatic biosensor displays a high sensitivity (1467.32 \u03bcA/(mM cm2)), linear range (0.005\u20135.89 mM), and detection limit of 12 nM (S/N = 3). Moreover, biosensor displayed good selectivity, reproducibility, repeatability, and stability at room temperature over three-week storage period. Further, as-fabricated nonenzymatic glucose biosensors were employed for practical applications in human serum sample measurements. The obtained data were compared to the commercial biosensor, which demonstrates the practical usability of nonenzymatic glucose biosensors in real sample analysis.", "which keywords ?", "CuO nanoleaves", 43.0, 57.0], ["Background: Paclitaxel (PTX) is one of the most important and effective anticancer drugs for the treatment of human cancer. However, its low solubility and severe adverse effects limited clinical use. To overcome this limitation, nanotechnology has been used to overcome tumors due to its excellent antimicrobial activity. Objective: This study was to demonstrate the anticancer properties of functionalization silver nanoparticles loaded with paclitaxel (Ag@PTX) induced A549 cells apoptosis through ROS-mediated signaling pathways. Methods: The Ag@PTX nanoparticles were charged with a zeta potential of about -17 mv and characterized around 2 nm with a narrow size distribution. Results: Ag@PTX significantly decreased the viability of A549 cells and possessed selectivity between cancer and normal cells. Ag@PTX induced A549 cells apoptosis was confirmed by nuclear condensation, DNA fragmentation, and activation of caspase-3. Furthermore, Ag@PTX enhanced the anti-cancer activity of A549 cells through ROS-mediated p53 and AKT signalling pathways. Finally, in a xenograft nude mice model, Ag@PTX suppressed the growth of tumors. Conclusion: Our findings suggest that Ag@PTX may be a candidate as a chemopreventive agent and could be a highly efficient way to achieve anticancer synergism for human cancers.", "which keywords ?", "Apoptosis", 483.0, 492.0], ["Choosing a higher education course at university is not an easy task for students. A wide range of courses are offered by the individual universities whose delivery mode and entry requirements differ. A personalized recommendation system can be an effective way of suggesting the relevant courses to the prospective students. This paper introduces a novel approach that personalizes course recommendations that will match the individual needs of users. The proposed approach developed a framework of an ontology-based hybrid-filtering system called the ontology-based personalized course recommendation (OPCR). This approach aims to integrate the information from multiple sources based on the hierarchical ontology similarity with a view to enhancing the efficiency and the user satisfaction and to provide students with appropriate recommendations. The OPCR combines collaborative-based filtering with content-based filtering. It also considers familiar related concepts that are evident in the profiles of both the student and the course, determining the similarity between them. Furthermore, OPCR uses an ontology mapping technique, recommending jobs that will be available following the completion of each course. 
This method can enable students to gain a comprehensive knowledge of courses based on their relevance, using dynamic ontology mapping to link the course profiles and student profiles with job profiles. Results show that a filtering algorithm that uses hierarchically related concepts produces better outcomes compared to a filtering method that considers only keyword similarity. In addition, the quality of the recommendations is improved when the ontology similarity between the items\u2019 and the users\u2019 profiles were utilized. This approach, using a dynamic ontology mapping, is flexible and can be adapted to different domains. The proposed framework can be used to filter the items for both postgraduate courses and items from other domains.", "which keywords ?", "ontology", 503.0, 511.0], ["Abstract Despite rapid progress, most of the educational technologies today lack a strong instructional design knowledge basis leading to questionable quality of instruction. In addition, a major challenge is to customize these educational technologies for a wide range of customizable instructional designs. Ontologies are one of the pertinent mechanisms to represent instructional design in the literature. However, existing approaches do not support modeling of flexible instructional designs. To address this problem, in this paper, we propose an ontology based framework for systematic modeling of different aspects of instructional design knowledge based on domain patterns. As part of the framework, we present ontologies for modeling goals, instructional processes and instructional material. We demonstrate the ontology framework by presenting instances of the ontology for the large scale case study of adult literacy in India (287 million learners spread across 22 Indian Languages), which requires creation of hundreds of similar but varied e Learning Systems based on flexible instructional designs. The implemented framework is available at http://rice.iiit.ac.in and is transferred to National Literacy Mission Authority of Government of India. The proposed framework could be potentially used for modeling instructional design knowledge for school education, vocational skills and beyond.", "which keywords ?", "goals", 742.0, 747.0], ["Herein, a novel electrochemical glucose biosensor based on glucose oxidase (GOx) immobilized on a surface containing platinum nanoparticles (PtNPs) electrodeposited on poly(Azure A) (PAA) previously electropolymerized on activated screen-printed carbon electrodes (GOx-PtNPs-PAA-aSPCEs) is reported. The resulting electrochemical biosensor was validated towards glucose oxidation in real samples and further electrochemical measurement associated with the generated H2O2. The electrochemical biosensor showed an excellent sensitivity (42.7 \u03bcA mM\u22121 cm\u22122), limit of detection (7.6 \u03bcM), linear range (20 \u03bcM\u20132.3 mM), and good selectivity towards glucose determination. Furthermore, and most importantly, the detection of glucose was performed at a low potential (0.2 V vs. Ag). The high performance of the electrochemical biosensor was explained through surface exploration using field emission SEM, XPS, and impedance measurements. The electrochemical biosensor was successfully applied to glucose quantification in several real samples (commercial juices and a plant cell culture medium), exhibiting a high accuracy when compared with a classical spectrophotometric method. 
This electrochemical biosensor can be easily prepared and opens up a good alternative in the development of new sensitive glucose sensors.", "which keywords ?", "poly(Azure A)", NaN, NaN], ["High-performance p-type oxide thin film transistors (TFTs) have great potential for many semiconductor applications. However, these devices typically suffer from low hole mobility and high off-state currents. We fabricated p-type TFTs with a phase-pure polycrystalline Cu2O semiconductor channel grown by atomic layer deposition (ALD). The TFT switching characteristics were improved by applying a thin ALD Al2O3 passivation layer on the Cu2O channel, followed by vacuum annealing at 300 \u00b0C. Detailed characterization by transmission electron microscopy-energy dispersive X-ray analysis and X-ray photoelectron spectroscopy shows that the surface of Cu2O is reduced following Al2O3 deposition and indicates the formation of a 1-2 nm thick CuAlO2 interfacial layer. This, together with field-effect passivation caused by the high negative fixed charge of the ALD Al2O3, leads to an improvement in the TFT performance by reducing the density of deep trap states as well as by reducing the accumulation of electrons in the semiconducting layer in the device off-state.", "which keywords ?", "Passivation", 413.0, 424.0], ["The application of thinner cadmium sulfide (CdS) window layer is a feasible approach to improve the performance of cadmium telluride (CdTe) thin film solar cells. However, the reduction of compactness and continuity of thinner CdS always deteriorates the device performance. In this work, transparent Al2O3 films with different thicknesses, deposited by using atomic layer deposition (ALD), were utilized as buffer layers between the front electrode transparent conductive oxide (TCO) and CdS layers to solve this problem, and then, thin-film solar cells with a structure of TCO/Al2O3/CdS/CdTe/BC/Ni were fabricated. The characteristics of the ALD-Al2O3 films were studied by UV\u2013visible transmittance spectrum, Raman spectroscopy, and atomic force microscopy (AFM). The light and dark J\u2013V performances of solar cells were also measured by specific instrumentations. The transmittance measurement conducted on the TCO/Al2O3 films verified that the transmittance of TCO/Al2O3 were comparable to that of single TCO layer, meaning that no extra absorption loss occurred when Al2O3 buffer layers were introduced into cells. Furthermore, due to the advantages of the ALD method, the ALD-Al2O3 buffer layers formed an extremely continuous and uniform coverage on the substrates to effectively fill and block the tiny leakage channels in CdS/CdTe polycrystalline films and improve the characteristics of the interface between TCO and CdS. However, as the thickness of alumina increased, the negative effects of cells were gradually exposed, especially the increase of the series resistance (Rs) and the more serious \u201croll-over\u201d phenomenon. Finally, the cell conversion efficiency (\u03b7) of more than 13.0% accompanied by optimized uniformity performances was successfully achieved corresponding to the 10 nm thick ALD-Al2O3 thin film.", "which keywords ?", "Al2O3", 309.0, 314.0], ["A novel and highly sensitive nonenzymatic glucose biosensor was developed by nucleating colloidal silver nanoparticles (AgNPs) on MoS2. The facile fabrication method, high reproducibility (97.5%) and stability indicates a promising capability for large-scale manufacturing. 
Additionally, the excellent sensitivity (9044.6 \u03bcA\u00b7mM\u22121\u00b7cm\u22122), low detection limit (0.03 \u03bcM), appropriate linear range of 0.1\u20131000 \u03bcM, and high selectivity suggests that this biosensor has a great potential to be applied for noninvasive glucose detection in human body fluids, such as sweat and saliva.", "which keywords ?", "nonenzymatic", 29.0, 41.0], ["SnO2 nanowire gas sensors have been fabricated on Cd\u2212Au comb-shaped interdigitating electrodes using thermal evaporation of the mixed powders of SnO2 and active carbon. The self-assembly grown sensors have excellent performance in sensor response to hydrogen concentration in the range of 10 to 1000 ppm. This high response is attributed to the large portion of undercoordinated atoms on the surface of the SnO2 nanowires. The influence of the Debye length of the nanowires and the gap between electrodes in the gas sensor response is examined and discussed.", "which keywords ?", "Nanowires", 412.0, 421.0], ["A highly sensitive strain sensor consisting of two cascaded fiber ring resonators based on the Vernier effect is proposed. Each fiber ring resonator, composed of an input optical coupler, an output optical coupler, and a polarization controller, has a comb-like transmission spectrum with peaks at its resonance wavelengths. As a result, the Vernier effect will be generated, due to the displacement of the two transmission spectra. Using this technique, strain measurements can be achieved by measuring the free spectral range of the cascaded fiber ring resonators. The experimental results show that the sensing setup can operate in large strain range with a sensitivity of 0.0129 nm-1/\u03bc\u03b5. The new generation of Vernier strain sensor can also be useful for micro-displacement measurement.", "which keywords ?", "Strain", 19.0, 25.0], ["A new variant of the classic pulsed laser deposition (PLD) process is introduced as a room-temperature dry process for the growth and stoichiometry control of hybrid perovskite films through the use of nonstoichiometric single target ablation and off-axis growth. Mixed halide hybrid perovskite films nominally represented by CH3NH3PbI3\u2013xAx (A = Cl or F) are also grown and are shown to reveal interesting trends in the optical properties and photoresponse. Growth of good quality lead-free CH3NH3SnI3 films is also demonstrated, and the corresponding optical properties are presented. Finally, perovskite solar cells fabricated at room temperature (which makes the process adaptable to flexible substrates) are shown to yield a conversion efficiency of about 7.7%.", "which keywords ?", "Pulsed Laser Deposition", 29.0, 52.0], ["Compositional engineering of recently arising methylammonium (MA) lead (Pb) halide based perovskites is an essential approach for finding better perovskite compositions to resolve still remaining issues of toxic Pb, long-term instability, etc. In this work, we carried out crystallographic, morphological, optical, and photovoltaic characterization of compositional MASn0.6Pb0.4I3-xBrx by gradually introducing bromine (Br) into parental Pb-Sn binary perovskite (MASn0.6Pb0.4I3) to elucidate its function in Sn-rich (Sn:Pb = 6:4) perovskites. We found significant advances in crystallinity and dense coverage of the perovskite films by inserting the Br into Sn-rich perovskite lattice. Furthermore, light-intensity-dependent open circuit voltage (Voc) measurement revealed much suppressed trap-assisted recombination for a proper Br-added (x = 0.4) device. 
These contributed to attaining the unprecedented power conversion efficiency of 12.1% and Voc of 0.78 V, which are, to the best of our knowledge, the highest performance in the Sn-rich (\u226560%) perovskite solar cells reported so far. In addition, impressive enhancement of photocurrent-output stability and little hysteresis were found, which paves the way for the development of environmentally benign (Pb reduction), stable monolithic tandem cells using the developed low band gap (1.24-1.26 eV) MASn0.6Pb0.4I3-xBrx with suggested composition (x = 0.2-0.4).", "which keywords ?", "Perovskites", 89.0, 100.0], ["Graphene (Gr) has been widely used as a transparent electrode material for photodetectors because of its high conductivity and high transmittance in recent years. However, the current low-efficiency manipulation of Gr has hindered the arraying and practical use of such detectors. We invented a multistep method of accurately tailoring graphene into interdigital electrodes for fabricating a sensitive, stable deep-ultraviolet photodetector based on Zn-doped Ga2O3 films. The fabricated photodetector exhibits a series of excellent performance, including extremely low dark current (\u223c10-11 A), an ultrahigh photo-to-dark ratio (>105), satisfactory responsivity (1.05 A/W), and excellent selectivity for the deep-ultraviolet band, compared to those with ordinary metal electrodes. The raise of photocurrent and responsivity is attributed to the increase of incident photons through Gr and separated carriers caused by the built-in electric field formed at the interface of Gr and Ga2O3:Zn films. The proposed ideas and methods of tailoring Gr can not only improve the performance of devices but more importantly contribute to the practical development of graphene.", "which keywords ?", "Electrodes", 363.0, 373.0], ["In recent years, the development of electronic skin and smart wearable body sensors has put forward high requirements for flexible pressure sensors with high sensitivity and large linear measuring range. However it turns out to be difficult to increase both of them simultaneously. In this paper, a flexible capacitive pressure sensor based on porous carbon conductive paste-PDMS composite is reported, the sensitivity and the linear measuring range of which were developed using multiple methods including adjusting the stiffness of the dielectric layer material, fabricating micro-structure and increasing dielectric permittivity of dielectric layer. The capacitive pressure sensor reported here has a relatively high sensitivity of 1.1 kPa-1 and a large linear measuring range of 10 kPa, making the product of the sensitivity and linear measuring range is 11, which is higher than that of the most reported capacitive pressure sensor to our best knowledge. The sensor has a detection of limit of 4 Pa, response time of 60 ms and great stability. Some potential applications of the sensor were demonstrated such as arterial pulse wave measuring and breathe measuring, which shows a promising candidate for wearable biomedical devices. In addition, a pressure sensor array based on the material was also fabricated and it could identify objects in the shape of different letters clearly, which shows a promising application in the future electronic skins.", "which keywords ?", "Flexible", 122.0, 130.0], ["Bladder cancer (BC) is a very common cancer. Nonmuscle-invasive bladder cancer (NMIBC) is the most common type of bladder cancer. 
After postoperative tumor resection, chemotherapy intravesical instillation is recommended as a standard treatment to significantly reduce recurrences. Nanomedicine-mediated delivery of a chemotherapeutic agent targeting cancer could provide a solution to obtain longer residence time and high bioavailability of an anticancer drug. The approach described here provides a nanomedicine with sustained and prolonged delivery of paclitaxel and enhanced therapy of intravesical bladder cancer, which is paclitaxel/chitosan (PTX/CS) nanosupensions (NSs). The positively charged PTX/CS NSs exhibited a rod-shaped morphology with a mean diameter about 200 nm. They have good dispersivity in water without any protective agents, and the positively charged properties make them easy to be adsorbed on the inner mucosa of the bladder through electrostatic adsorption. PTX/CS NSs also had a high drug loading capacity and can maintain sustained release of paclitaxel which could be prolonged over 10 days. Cell experiments in vitro demonstrated that PTX/CS NSs had good biocompatibility and effective bladder cancer cell proliferation inhibition. The significant anticancer efficacy against intravesical bladder cancer was verified by an in situ bladder cancer model. The paclitaxel/chitosan nanosupensions could provide sustained delivery of chemotherapeutic agents with significant anticancer efficacy against intravesical bladder cancer.", "which keywords ?", "nanosupension", NaN, NaN], ["High-performance p-type oxide thin film transistors (TFTs) have great potential for many semiconductor applications. However, these devices typically suffer from low hole mobility and high off-state currents. We fabricated p-type TFTs with a phase-pure polycrystalline Cu2O semiconductor channel grown by atomic layer deposition (ALD). The TFT switching characteristics were improved by applying a thin ALD Al2O3 passivation layer on the Cu2O channel, followed by vacuum annealing at 300 \u00b0C. Detailed characterization by transmission electron microscopy-energy dispersive X-ray analysis and X-ray photoelectron spectroscopy shows that the surface of Cu2O is reduced following Al2O3 deposition and indicates the formation of a 1-2 nm thick CuAlO2 interfacial layer. This, together with field-effect passivation caused by the high negative fixed charge of the ALD Al2O3, leads to an improvement in the TFT performance by reducing the density of deep trap states as well as by reducing the accumulation of electrons in the semiconducting layer in the device off-state.", "which keywords ?", "Atomic layer deposition", 305.0, 328.0], ["Ni-MOF (metal-organic framework)/Ni/NiO/carbon frame nanocomposite was formed by combing Ni and NiO nanoparticles and a C frame with Ni-MOF using an efficient one-step calcination method. The morphology and structure of Ni-MOF/Ni/NiO/C nanocomposite were characterized by transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), X-ray diffraction (XRD), and energy disperse spectroscopy (EDS) mapping. Ni-MOF/Ni/NiO/C nanocomposites were immobilized onto glassy carbon electrodes (GCEs) with Nafion film to construct high-performance nonenzymatic glucose and H2O2 electrochemical sensors. Cyclic voltammetric (CV) study showed Ni-MOF/Ni/NiO/C nanocomposite displayed better electrocatalytic activity toward glucose oxidation as compared to Ni-MOF. 
Amperometric study indicated the glucose sensor displayed high performance, offering a low detection limit (0.8 \u03bcM), a high sensitivity of 367.45 mA M-1 cm-2, and a wide linear range (from 4 to 5664 \u03bcM). Importantly, good reproducibility, long-time stability, and excellent selectivity were obtained within the as-fabricated glucose sensor. Furthermore, the constructed high-performance sensor was utilized to monitor the glucose levels in human serum, and satisfactory results were obtained. It demonstrated the Ni-MOF/Ni/NiO/C nanocomposite can be used as a good electrochemical sensing material in practical biological applications.", "which keywords ?", "Human serum", 1211.0, 1222.0], ["Most Semantic Web applications rely on querying graphs, typically by using SPARQL with a triple store. Increasingly, applications also analyze properties of the graph structure to compute statistical inferences. The current Semantic Web infrastructure, however, does not efficiently support such operations. This forces developers to extract the relevant data for external statistical post-processing. In this paper we propose to rethink query execution in a triple store as a highly parallelized asynchronous graph exploration on an active index data structure. This approach also allows to integrate SPARQL-querying with the sampling of graph properties. To evaluate this architecture we implemented Random Walk TripleRush, which is built on a distributed graph processing system. Our evaluations show that this architecture enables both competitive graph querying, as well as the ability to execute various types of random walks with restarts that sample interesting graph properties. Thanks to the asynchronous architecture, first results are sometimes returned in a fraction of the full execution time. We also evaluate the scalability and show that the architecture supports fast query-times on a dataset with more than a billion triples.", "which Algorithm ?", "Random walk", 702.0, 713.0], ["Adaptive support weight (ASW) methods represent the state of the art in local stereo matching, while the bilateral filter-based ASW method achieves outstanding performance. However, this method fails to resolve the ambiguity induced by nearby pixels at different disparities but with similar colors. In this paper, we introduce a novel trilateral filter (TF)-based ASW method that remedies such ambiguities by considering the possible disparity discontinuities through color discontinuity boundaries, i.e., the boundary strength between two pixels, which is measured by a local energy model. We also present a recursive TF-based ASW method whose computational complexity is O(N) for the cost aggregation step, and O(NLog2(N)) for boundary detection, where N denotes the input image size. This complexity is thus independent of the support window size. The recursive TF-based method is a nonlocal cost aggregation strategy. The experimental evaluation on the Middlebury benchmark shows that the proposed method, whose average error rate is 4.95%, outperforms other local methods in terms of accuracy. Equally, the average runtime of the proposed TF-based cost aggregation is roughly 260 ms on a 3.4-GHz Inter Core i7 CPU, which is comparable with state-of-the-art efficiency.", "which Algorithm ?", "Trilateral", 336.0, 346.0], ["Binocular stereo matching is one of the most important algorithms in the field of computer vision. 
Adaptive support-weight approaches, the current state-of-the-art local methods, produce results comparable to those generated by global methods. However, excessive time consumption is the main problem of these algorithms since the computational complexity is proportionally related to the support window size. In this paper, we present a novel cost aggregation method inspired by domain transformation, a recently proposed dimensionality reduction technique. This transformation enables the aggregation of 2-D cost data to be performed using a sequence of 1-D filters, which lowers computation and memory costs compared to conventional 2-D filters. Experiments show that the proposed method outperforms the state-of-the-art local methods in terms of computational performance, since its computational complexity is independent of the input parameters. Furthermore, according to the experimental results with the Middlebury dataset and real-world images, our algorithm is currently one of the most accurate and efficient local algorithms.", "which Algorithm ?", "Domain transformation", 479.0, 500.0], ["Several computer programs are available for detecting copy number variants (CNVs) using genome-wide SNP arrays. We evaluated the performance of four CNV detection software suites\u2014Birdsuite, Partek, HelixTree, and PennCNV-Affy\u2014in the identification of both rare and common CNVs. Each program's performance was assessed in two ways. The first was its recovery rate, i.e., its ability to call 893 CNVs previously identified in eight HapMap samples by paired-end sequencing of whole-genome fosmid clones, and 51,440 CNVs identified by array Comparative Genome Hybridization (aCGH) followed by validation procedures, in 90 HapMap CEU samples. The second evaluation was program performance calling rare and common CNVs in the Bipolar Genome Study (BiGS) data set (1001 bipolar cases and 1033 controls, all of European ancestry) as measured by the Affymetrix SNP 6.0 array. Accuracy in calling rare CNVs was assessed by positive predictive value, based on the proportion of rare CNVs validated by quantitative real-time PCR (qPCR), while accuracy in calling common CNVs was assessed by false positive/false negative rates based on qPCR validation results from a subset of common CNVs. Birdsuite recovered the highest percentages of known HapMap CNVs containing >20 markers in two reference CNV datasets. The recovery rate increased with decreased CNV frequency. In the tested rare CNV data, Birdsuite and Partek had higher positive predictive values than the other software suites. In a test of three common CNVs in the BiGS dataset, Birdsuite's call was 98.8% consistent with qPCR quantification in one CNV region, but the other two regions showed an unacceptable degree of accuracy. We found relatively poor consistency between the two \u201cgold standards,\u201d the sequence data of Kidd et al., and aCGH data of Conrad et al. Algorithms for calling CNVs especially common ones need substantial improvement, and a \u201cgold standard\u201d for detection of CNVs remains to be established.", "which Algorithm ?", "HelixTree", 198.0, 207.0], ["The disparity estimation problem is commonly solved using graph cut (GC) methods, in which the disparity assignment problem is transformed to one of minimizing global energy function. Although such an approach yields an accurate disparity map, the computational cost is relatively high. 
Accordingly, this paper proposes a hierarchical bilateral disparity structure (HBDS) algorithm in which the efficiency of the GC method is improved without any loss in the disparity estimation performance by dividing all the disparity levels within the stereo image hierarchically into a series of bilateral disparity structures of increasing fineness. To address the well-known foreground fattening effect, a disparity refinement process is proposed comprising a fattening foreground region detection procedure followed by a disparity recovery process. The efficiency and accuracy of the HBDS-based GC algorithm are compared with those of the conventional GC method using benchmark stereo images selected from the Middlebury dataset. In addition, the general applicability of the proposed approach is demonstrated using several real-world stereo images.", "which Algorithm ?", "Hierarchical bilateral disparity structure (HBDS)", NaN, NaN], ["With the proliferation of the RDF data format, engines for RDF query processing are faced with very large graphs that contain hundreds of millions of RDF triples. This paper addresses the resulting scalability problems. Recent prior work along these lines has focused on indexing and other physical-design issues. The current paper focuses on join processing, as the fine-grained and schema-relaxed use of RDF often entails star- and chain-shaped join queries with many input streams from index scans. We present two contributions for scalable join processing. First, we develop very light-weight methods for sideways information passing between separate joins at query run-time, to provide highly effective filters on the input streams of joins. Second, we improve previously proposed algorithms for join-order optimization by more accurate selectivity estimations for very large RDF graphs. Experimental studies with several RDF datasets, including the UniProt collection, demonstrate the performance gains of our approach, outperforming the previously fastest systems by more than an order of magnitude.", "which Has implementation ?", "Join-order optimization", 801.0, 824.0], ["Query optimization in RDF Stores is a challenging problem as SPARQL queries typically contain many more joins than equivalent relational plans, and hence lead to a large join order search space. In such cases, cost-based query optimization often is not possible. One practical reason for this is that statistics typically are missing in web scale setting such as the Linked Open Datasets (LOD). The more profound reason is that due to the absence of schematic structure in RDF, join-hit ratio estimation requires complicated forms of correlated join statistics; and currently there are no methods to identify the relevant correlations beforehand. For this reason, the use of good heuristics is essential in SPARQL query optimization, even in the case that are partially used with cost-based statistics (i.e., hybrid query optimization). In this paper we describe a set of useful heuristics for SPARQL query optimizers. We present these in the context of a new Heuristic SPARQL Planner (HSP) that is capable of exploiting the syntactic and the structural variations of the triple patterns in a SPARQL query in order to choose an execution plan without the need of any cost model. For this, we define the variable graph and we show a reduction of the SPARQL query optimization problem to the maximum weight independent set problem. 
We implemented our planner on top of the MonetDB open source column-store and evaluated its effectiveness against the state-of-the-art RDF-3X engine as well as comparing the plan quality with a relational (SQL) equivalent of the benchmarks.", "which Has implementation ?", "RDF-3X", 1465.0, 1471.0], ["This paper presents the MPI parallelization of a new algorithm\u2014DPD-B thermostat\u2014for molecular dynamics simulations. The presented results are using Martini Coarse Grained Water System. It should be taken into account that molecular dynamics simulations are time consuming. In some cases the running time varies from days to weeks and even months. Therefore, parallelization is one solution for reducing the execution time. The paper describes the new algorithm, the main characteristics of the MPI parallelization of the new algorithm, and the simulation performances.", "which Has implementation ?", "a new algorithm", 47.0, 62.0], ["Most Semantic Web applications rely on querying graphs, typically by using SPARQL with a triple store. Increasingly, applications also analyze properties of the graph structure to compute statistical inferences. The current Semantic Web infrastructure, however, does not efficiently support such operations. This forces developers to extract the relevant data for external statistical post-processing. In this paper we propose to rethink query execution in a triple store as a highly parallelized asynchronous graph exploration on an active index data structure. This approach also allows to integrate SPARQL-querying with the sampling of graph properties. To evaluate this architecture we implemented Random Walk TripleRush, which is built on a distributed graph processing system. Our evaluations show that this architecture enables both competitive graph querying, as well as the ability to execute various types of random walks with restarts that sample interesting graph properties. Thanks to the asynchronous architecture, first results are sometimes returned in a fraction of the full execution time. We also evaluate the scalability and show that the architecture supports fast query-times on a dataset with more than a billion triples.", "which Has implementation ?", "SPARQL", 75.0, 81.0], ["Herein, we present an approach to create a hybrid between single-atom-dispersed silver and a carbon nitride polymer. Silver tricyanomethanide (AgTCM) is used as a reactive comonomer during templated carbon nitride synthesis to introduce both negative charges and silver atoms/ions to the system. The successful introduction of the extra electron density under the formation of a delocalized joint electronic system is proven by photoluminescence measurements, X-ray photoelectron spectroscopy investigations, and measurements of surface \u03b6-potential. At the same time, the principal structure of the carbon nitride network is not disturbed, as shown by solid-state nuclear magnetic resonance spectroscopy and electrochemical impedance spectroscopy analysis. The synthesis also results in an improvement of the visible light absorption and the development of higher surface area in the final products. 
The atom-dispersed AgTCM-doped carbon nitride shows an enhanced performance in the selective hydrogenation of alkynes in comparison with the performance of other conventional Ag-based materials prepared by spray deposition and impregnation-reduction methods, here exemplified with 1-hexyne.", "which substrate ?", "1-hexyne", 1181.0, 1189.0], ["Palladium nanoparticles supported on a mesoporous graphitic carbon nitride, Pd@mpg-C3N4, has been developed as an effective, heterogeneous catalyst for the liquid-phase semihydrogenation of phenylacetylene under mild conditions (303 K, atmospheric H2). A total conversion was achieved with high selectivity of styrene (higher than 94%) within 85 minutes. Moreover, the spent catalyst can be easily recovered by filtration and then reused nine times without apparent loss of selectivity. The generality of Pd@mpg-C3N4 catalyst for partial hydrogenation of alkynes was also checked for terminal and internal alkynes with similar performance. The Pd@mpg-C3N4 catalyst was proven to be of industrial interest.", "which substrate ?", "phenylacetylene", 190.0, 205.0], ["Abstract Cleavage of C\u2013O bonds in lignin can afford the renewable aryl sources for fine chemicals. However, the high bond energies of these C\u2013O bonds, especially the 4-O-5-type diaryl ether C\u2013O bonds (~314 kJ/mol) make the cleavage very challenging. Here, we report visible-light photoredox-catalyzed C\u2013O bond cleavage of diaryl ethers by an acidolysis with an aryl carboxylic acid and a following one-pot hydrolysis. Two molecules of phenols are obtained from one molecule of diaryl ether at room temperature. The aryl carboxylic acid used for the acidolysis can be recovered. The key to success of the acidolysis is merging visible-light photoredox catalysis using an acridinium photocatalyst and Lewis acid catalysis using Cu(TMHD) 2 . Preliminary mechanistic studies indicate that the catalytic cycle occurs via a rare selective electrophilic attack of the generated aryl carboxylic radical on the electron-rich aryl ring of the diphenyl ether. This transformation is applied to a gram-scale reaction and the model of 4-O-5 lignin linkages.", "which substrate ?", "Diaryl ethers", 322.0, 335.0], ["Gold nanoparticles on a number of supporting materials, including anatase TiO2 (TiO2-A, in 40 nm and 45 \u03bcm), rutile TiO2 (TiO2-R), ZrO2, Al2O3, SiO2 , and activated carbon, were evaluated for hydrodeoxygenation of guaiacol in 6.5 MPa initial H2 pressure at 300 \u00b0C. The presence of gold nanoparticles on the supports did not show distinguishable performance compared to that of the supports alone in the conversion level and in the product distribution, except for that on a TiO2-A-40 nm. The lack of marked catalytic activity on supports other than TiO2-A-40 nm suggests that Au nanoparticles are not catalytically active on these supports. Most strikingly, the gold nanoparticles on the least-active TiO2-A-40 nm support stood out as the best catalyst exhibiting high activity with excellent stability and remarkable selectivity to phenolics from guaiacol hydrodeoxygenation. The conversion of guaiacol (\u223c43.1%) over gold on the TiO2-A-40 nm was about 33 times that (1.3%) over the TiO2-A-40 nm alone. The selectivity o...", "which substrate ?", "guaiacol", 214.0, 222.0], ["Catalytic bio\u2010oil upgrading to produce renewable fuels has attracted increasing attention in response to the decreasing oil reserves and the increased fuel demand worldwide. 
Herein, the catalytic hydrodeoxygenation (HDO) of guaiacol with carbon\u2010supported non\u2010sulfided metal catalysts was investigated. Catalytic tests were performed at 4.0 MPa and temperatures ranging from 623 to 673 K. Both Ru/C and Mo/C catalysts showed promising catalytic performance in HDO. The selectivity to benzene was 69.5 and 83.5 % at 653 K over Ru/C and 10Mo/C catalysts, respectively. Phenol, with a selectivity as high as 76.5 %, was observed mainly on 1Mo/C. However, the reaction pathway over both catalysts is different. Over the Ru/C catalyst, the O\u2013CH3 bond was cleaved to form the primary intermediate catechol, whereas only traces of catechol were detected over Mo/C catalysts. In addition, two types of active sites were detected over Mo samples after reduction in H2 at 973 K. Catalytic studies showed that the demethoxylation of guaiacol is performed over residual MoOx sites with high selectivity to phenol whereas the consecutive HDO of phenol is performed over molybdenum carbide species, which is widely available only on the 10Mo/C sample. Different deactivation patterns were also observed over Ru/C and Mo/C catalysts.", "which substrate ?", "guaiacol", 224.0, 232.0], ["Silica supported and unsupported PdAu single atom alloys (SAAs) were investigated for the selective hydrogenation of 1-hexyne to hexenes under mild conditions.", "which substrate ?", "1-hexyne", 120.0, 128.0], ["Abstract This paper presents the results of study on titanium dioxide thin films prepared by atomic layer deposition method on a silicon substrate. The changes of surface morphology have been observed in topographic images performed with the atomic force microscope (AFM) and scanning electron microscope (SEM). Obtained roughness parameters have been calculated with XEI Park Systems software. Qualitative studies of chemical composition were also performed using the energy dispersive spectrometer (EDS). The structure of titanium dioxide was investigated by X-ray crystallography. A variety of crystalline TiO2 was also confirmed by using the Raman spectrometer. The optical reflection spectra have been measured with UV-Vis spectrophotometry.", "which substrate ?", "Silicon", 161.0, 168.0], ["Clustered graph visualization techniques are an easy to understand way of hiding complex parts of a visualized graph when they are not needed by the user. When visualizing RDF, there are several situations where such clusters are defined in a very natural way. Using this techniques, we can give the user optional access to some detailed information without unnecessarily occupying space in the basic view of the data. This paper describes algorithms for clustered visualization used in the Trisolda RDF visualizer. Most notable is the newly added clustered navigation technique.", "which System ?", "Trisolda ", 491.0, 500.0], ["In the past decade, much effort has been put into the visual representation of ontologies. However, present visualization strategies are not equipped to handle complex ontologies with many relations, leading to visual clutter and inefficient use of space. In this paper, we propose GLOW, a method for ontology visualization based on Hierarchical Edge Bundles. Hierarchical Edge Bundles is a new visually attractive technique for displaying relations in hierarchical data, such as concept structures formed by 'subclass-of' and 'type-of' relations. We have developed a visualization library based on OWL API, as well as a plug-in for Protégé, a well-known ontology editor. The displayed adjacency relations can be selected from an ontology using a set of common configurations, allowing for intuitive discovery of information. Our evaluation demonstrates that the GLOW visualization provides better visual clarity, and displays relations and complex ontologies better than the existing Protégé visualization plug-in Jambalaya.", "which System ?", "GLOW ", 863.0, 868.0], ["In an effort to optimize visualization and editing of OWL ontologies we have developed GrOWL: a browser and visual editor for OWL that accurately visualizes the underlying DL semantics of OWL ontologies while avoiding the difficulties of the verbose OWL syntax. In this paper, we discuss GrOWL visualization model and the essential visualization techniques implemented in GrOWL.", "which System ?", "GrOWL ", 288.0, 294.0], ["Objectives. Electronic laboratory reporting (ELR) reduces the time between communicable disease diagnosis and case reporting to local health departments (LHDs). However, it also imposes burdens on public health agencies, such as increases in the number of unique and duplicate case reports. We assessed how ELR affects the timeliness and accuracy of case report processing within public health agencies. Methods. 
Using data from May\u2013August 2010 and January\u2013March 2012, we assessed timeliness by calculating the time between receiving a case at the LHD and reporting the case to the state (first stage of reporting) and between submitting the report to the state and submitting it to the Centers for Disease Control and Prevention (second stage of reporting). We assessed accuracy by calculating the proportion of cases returned to the LHD for changes or additional information. We compared timeliness and accuracy for ELR and non-ELR cases. Results. ELR was associated with decreases in case processing time (median = 40 days for ELR cases vs. 52 days for non-ELR cases in 2010; median = 20 days for ELR cases vs. 25 days for non-ELR cases in 2012; both p<0.001). ELR also allowed time to reduce the backlog of unreported cases. Finally, ELR was associated with higher case reporting accuracy (in 2010, 2% of ELR case reports vs. 8% of non-ELR case reports were returned; in 2012, 2% of ELR case reports vs. 6% of non-ELR case reports were returned; both p<0.001). Conclusion. The overall impact of increased ELR is more efficient case processing at both local and state levels.", "which Epidemiological surveillance software ?", "Electronic Laboratory Reporting", 12.0, 43.0], ["Our aim was to evaluate the results of automated surveillance of Lyme neuroborreliosis (LNB) in Denmark using the national microbiology database (MiBa), and to describe the epidemiology of laboratory-confirmed LNB at a national level. MiBa-based surveillance includes electronic transfer of laboratory results, in contrast to the statutory surveillance based on manually processed notifications. Antibody index (AI) testing is the recommend laboratory test to support the diagnosis of LNB in Denmark. In the period from 2010 to 2012, 217 clinical cases of LNB were notified to the statutory surveillance system, while 533 cases were reported AI positive by the MiBa system. Thirty-five unconfirmed cases (29 AI-negative and 6 not tested) were notified, but not captured by MiBa. Using MiBa, the number of reported cases was increased almost 2.5 times. Furthermore, the reporting was timelier (median lag time: 6 vs 58 days). Average annual incidence of AI-confirmed LNB in Denmark was 3.2/100,000 population and incidences stratified by municipality ranged from none to above 10/100,000. This is the first study reporting nationwide incidence of LNB using objective laboratory criteria. Laboratory-based surveillance with electronic data-transfer was more accurate, complete and timely compared to the surveillance based on manually processed notifications. We propose using AI test results for LNB surveillance instead of clinical reporting.", "which Epidemiological surveillance software ?", "MiBa", 146.0, 150.0], ["INTRODUCTION: The 2019 coronavirus disease (COVID-19) is a major global health concern. Joint efforts for effective surveillance of COVID-19 require immediate transmission of reliable data. In this regard, a standardized and interoperable reporting framework is essential in a consistent and timely manner. Thus, this research aimed at to determine data requirements towards interoperability. MATERIALS AND METHODS: In this cross-sectional and descriptive study, a combination of literature study and expert consensus approach was used to design COVID-19 Minimum Data Set (MDS). A MDS checklist was extracted and validated. The definitive data elements of the MDS were determined by applying the Delphi technique. 
Then, the existing messaging and data standard templates (Health Level Seven-Clinical Document Architecture [HL7-CDA] and SNOMED-CT) were used to design the surveillance interoperable framework. RESULTS: The proposed MDS was divided into administrative and clinical sections with three and eight data classes and 29 and 40 data fields, respectively. Then, for each data field, structured data values along with SNOMED-CT codes were defined and structured according HL7-CDA standard. DISCUSSION AND CONCLUSION: The absence of effective and integrated system for COVID-19 surveillance can delay critical public health measures, leading to increased disease prevalence and mortality. The heterogeneity of reporting templates and lack of uniform data sets hamper the optimal information exchange among multiple systems. Thus, developing a unified and interoperable reporting framework is more effective to prompt reaction to the COVID-19 outbreak.", "which Epidemiological surveillance software ?", "SNOMED", 836.0, 842.0], ["Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.", "which Domain ?", "Biodiversity", 21.0, 33.0], ["The Visual Notation for OWL Ontologies (VOWL) is a well-specified visual language for the user-oriented representation of ontologies. It defines graphical depictions for most elements of the Web Ontology Language (OWL) that are combined to a force-directed graph layout visualizing the ontology. In contrast to related work, VOWL aims for an intuitive and comprehensive representation that is also understandable to users less familiar with ontologies. This article presents VOWL in detail and describes its implementation in two different tools: ProtegeVOWL and WebVOWL. The first is a plugin for the ontology editor Protege, the second a standalone web application. Both tools demonstrate the applicability of VOWL by means of various ontologies. In addition, the results of three user studies that evaluate the comprehensibility and usability of VOWL are summarized. 
They are complemented by findings from an interview with experienced ontology users and from testing the visual scope and completeness of VOWL with a benchmark ontology. The evaluations helped to improve VOWL and confirm that it produces comparatively intuitive and comprehensible ontology visualizations.", "which Domain ?", "ontology", 195.0, 203.0], ["Recently, the amount of semantic data available in the Web has increased dramatically. The potential of this vast amount of data is enormous but in most cases it is difficult for users to explore and use this data, especially for those without experience with Semantic Web technologies. Applying information visualization techniques to the Semantic Web helps users to easily explore large amounts of data and interact with them. In this article we devise a formal Linked Data Visualization Model (LDVM), which allows to dynamically connect data with visualizations. We report about our implementation of the LDVM comprising a library of generic visualizations that enable both users and data analysts to get an overview on, visualize and explore the Data Web and perform detailed analyzes on Linked Data.", "which Domain ?", "generic", 637.0, 644.0], ["Considering recent progress in NLP, deep learning techniques and biomedical language models there is a pressing need to generate annotated resources and comparable evaluation scenarios that enable the development of advanced biomedical relation extraction systems that extract interactions between drugs/chemical entities and genes, proteins or miRNAs. Building on the results and experience of the CHEMDNER, CHEMDNER patents and ChemProt tracks, we have posed the DrugProt track at BioCreative VII. The DrugProt track focused on the evaluation of automatic systems able to extract 13 different types of drug-genes/protein relations of importance to understand gene regulatory and pharmacological mechanisms. The DrugProt track addressed regulatory associations (direct/indirect, activator/inhibitor relations), certain types of binding associations (antagonist and agonist relations) as well as metabolic associations (substrate or product relations). To promote development of novel tools and offer a comparative evaluation scenario we have released 61,775 manually annotated gene mentions, 65,561 chemical and drug mentions and a total of 24,526 relationships manually labeled by domain experts. A total of 30 teams submitted results for the DrugProt main track, while 9 teams submitted results for the large-scale text mining subtrack that required processing of over 2,3 million records. Teams obtained very competitive results, with predictions reaching fmeasures of over 0.92 for some relation types (antagonist) and fmeasures across all relation types close to 0.8. INTRODUCTION Among the most relevant biological and pharmacological relation types are those that involve (a) chemical compounds and drugs as well as (b) gene products including genes, proteins, miRNAs. A variety of associations between chemicals and genes/proteins are described in the biomedical literature, and there is a growing interest in facilitating a more systematic extraction of these relations from the literature, either for manual database curation initiatives or to generate large knowledge graphs of importance for drug discovery, drug repurposing, building regulatory or interaction networks or to characterize off-target interactions of drugs that might be of importance to understand better adverse drug reactions. 
At BioCreative VI, the ChemProt track tried to promote the development of novel systems between chemicals and genes for groups of biologically related association types (ChemProt track relation groups or CPRs). Although the obtained results did have a considerable impact in the development and evaluation of new biomedical relation extraction systems, a limitation of grouping more specific relation types into broader groups was the difficulty to directly exploit the results for database curation efforts and biomedical knowledge graph mining application scenarios. The considerable interest in the integration of chemical and biomedical data for drug-discovery purposes, together with the ongoing curation of relationships between biological and chemical entities from scientific publications and patents due to the recent COVID-19 pandemic, motivated the DrugProt track of BioCreative VII, which proposed using more granular relation types. In order to facilitate the development of more granular relation extraction systems large manually annotated corpora are needed. Those corpora should include high-quality manually labled entity mentions together with exhaustive relation annotations generated by domain experts. TRACK AND CORPUS DESCRIPTION Corpus description To carry out the DrugProt track at BioCreative VII, we have released a large manually labelled corpus including annotations of mentions of chemical compounds and drugs as well as genes, proteins and miRNAs. Domain experts with experience in biomedical literature annotation and database curation annotated by hand all abstracts using the BRAT annotation interface. The manual labeling of chemicals and genes was done in separate steps and by different experts to avoid introducing biases during the text annotation process. The manual tagging of entity mentions of chemicals and drugs as well as genes, proteins and miRNAs was done following a carefully designed annotation process and in line with publicly released annotation guidelines. Gene/protein entity mentions were manually mapped to their corresponding biologic al database identifiers whenever possible and classified as either normalizable to databases (tag: GENE-Y) or non normalizable mentions (GENE-N). Teams that participated at the DrugProt track were only provided with this classification of gene mentions and not the actual database identifier to avoid usage of external knowledge bases for producing their predictions. The corpus construction process required first annotating exhaustively all chemical and gene mentions (phase 1). Afterwards the relation annotation phase followed (phase 2), were relationships between these two types of entities had to be labeled according to public available annotation guidelines. Thus, to facilitate the annotation of chemical-protein interactions, the DrugProt track organizers constructed very granular relation annotation rules described in a 33 pages annotation guidelines document. These guidelines were refined during an iterative process based on the annotation of sample documents. The guidelines provided the basic details of the chemicalprotein interaction annotation task and the conventions that had to be followed during the corpus construction process. They incorporated suggestions made by curators as well as observations of annotation inconsistencies encountered when comparing results from different human curators. 
In brief, DrugProt interactions covered direct interactions (when a physical contact existed between a chemical/drug and a gene/protein) as well as indirect regulatory interactions that alter either the function or the quantity of the gene/gene product. The aim of the iterative manual annotation cycle was to improve the quality and consistency of the guidelines. During the planning of the guidelines some rules had to be reformulated to make them more explicit and clear and additional rules were added wherever necessary to better cover the practical annotation scenario and for being more complete. The manual annotation task basically consisted of labeling or marking manually through a customized BRAT webinterface the interactions given the article abstracts as content. Figure 1 summarizes the DrugProt relation types included in the annotation guidelines. Fig. 1. Overview of the DrugProt relation type hierarchy. The corpus annotation carried out for the DrugProt track was exhaustive for all the types of interactions previously specified. This implied that mentions of other kind of relationships between chemicals and genes (e.g. phenotypic and biological responses) were not manually labelled. Moreover, the DrugProt relations are directed in the sense that only relations of \u201cwhat a chemical does to a gene/protein\" (chemical \u2192 gene/protein direction) were annotated, and not vice versa. To establish a easy to understand relation nomenclature and avoid redundant class definitions, we reviewed several chemical repositories that included chemical \u2013 biology information. We revised DrugBank, the Therapeutic Targets Database (TTD) and ChEMBL, assay normalization ontologies (BAO) and previously existing formalizations for the annotation of relationships: the Biological Expression Language (BEL), curation guidelines for transcription regulation interactions (DNA-binding transcription factor \u2013 target gene interaction) and SIGNOR, a database of causal relationships between biological entities. Each of these resources inspired the definition of the subclasses DIRECT REGULATOR (e.g. DrugBank, ChEMBL, BAO and SIGNOR) and the INDIRECT REGULATOR (e.g. BEL, curation guidelines for transcription regulation interactions and SIGNOR). For example, DrugBank relationships for drugs included a total of 22 definitions, some of them overlapping with CHEMPROT subclasses (e.g. \u201cInhibitor\u201d, \u201cAntagonist\u201d, \u201cAgonist\u201d,...), some of them being regarded as highly specific for the purpose of this task (e.g. \u201cintercalation\u201d, \u201ccross-linking/alkylation\u201d) or referring to biological roles (e.g. \u201cAntibody\u201d, \u201cIncorporation into and Destabilization\u201d) and others, partially overlapping between them (e.g. \u201cBinder\u201d and \u201cLigand\u201d), that were merged into a single class. Concerning indirect regulatory aspects, the five classes of casual relationships between a subject and an object term defined by BEL (\u201cdecreases\u201d, \u201cdirectlyDecreases\u201d, \u201cincreases\u201d, \u201cdirectlyIncreases\u201d and \u201ccausesNoChange\u201d) were highly inspiring. Subclasses definitions of pharmacological modes of action were defined according to the UPHAR/BPS Guide to Pharmacology in 2016. For the DrugProt track a very granular chemical-protein relation annotation was carried out, with the aim to cover most of the relations that are of importance from the point of view of biochemical and pharmacological/biomedical perspective. 
Nevertheless, for the DrugProt track only a total of 13 relation types were used, keeping those that had enough training instances/examples and sufficient manual annotation consistency. The final list of relation types used for this shared task was: INDIRECT-DOWNREGULATOR, INDIRECT-UPREGULATOR, DIRECT-REGULATOR, ACTIVATOR, INHIBITOR, AGONIST, ANTAGONIST, AGONIST-ACTIVATOR, AGONIST-INHIBITOR, PRODUCT-OF, SUBSTRATE, SUBSTRATE_PRODUCT-OF or PART-OF. The DrugProt corpus was split randomly into training, development and test set. We also included a background and large scale background collection of records that were automatically annotated with drugs/chemicals and genes/proteins/miRNAs using an entity tagger trained on the manual DrugProt entity mentions. The background collections were merged with the test set to be able to get team predictions also for these records. Table 1 shows a su", "which Relation types ?", "Antagonist", 851.0, 861.0], ["Considering recent progress in NLP, deep learning techniques and biomedical language models there is a pressing need to generate annotated resources and comparable evaluation scenarios that enable the development of advanced biomedical relation extraction systems that extract interactions between drugs/chemical entities and genes, proteins or miRNAs. Building on the results and experience of the CHEMDNER, CHEMDNER patents and ChemProt tracks, we have posed the DrugProt track at BioCreative VII. The DrugProt track focused on the evaluation of automatic systems able to extract 13 different types of drug-genes/protein relations of importance to understand gene regulatory and pharmacological mechanisms. The DrugProt track addressed regulatory associations (direct/indirect, activator/inhibitor relations), certain types of binding associations (antagonist and agonist relations) as well as metabolic associations (substrate or product relations). To promote development of novel tools and offer a comparative evaluation scenario we have released 61,775 manually annotated gene mentions, 65,561 chemical and drug mentions and a total of 24,526 relationships manually labeled by domain experts. A total of 30 teams submitted results for the DrugProt main track, while 9 teams submitted results for the large-scale text mining subtrack that required processing of over 2,3 million records. Teams obtained very competitive results, with predictions reaching fmeasures of over 0.92 for some relation types (antagonist) and fmeasures across all relation types close to 0.8. INTRODUCTION Among the most relevant biological and pharmacological relation types are those that involve (a) chemical compounds and drugs as well as (b) gene products including genes, proteins, miRNAs. A variety of associations between chemicals and genes/proteins are described in the biomedical literature, and there is a growing interest in facilitating a more systematic extraction of these relations from the literature, either for manual database curation initiatives or to generate large knowledge graphs of importance for drug discovery, drug repurposing, building regulatory or interaction networks or to characterize off-target interactions of drugs that might be of importance to understand better adverse drug reactions. At BioCreative VI, the ChemProt track tried to promote the development of novel systems between chemicals and genes for groups of biologically related association types (ChemProt track relation groups or CPRs). 
Although the obtained results did have a considerable impact in the development and evaluation of new biomedical relation extraction systems, a limitation of grouping more specific relation types into broader groups was the difficulty to directly exploit the results for database curation efforts and biomedical knowledge graph mining application scenarios. The considerable interest in the integration of chemical and biomedical data for drug-discovery purposes, together with the ongoing curation of relationships between biological and chemical entities from scientific publications and patents due to the recent COVID-19 pandemic, motivated the DrugProt track of BioCreative VII, which proposed using more granular relation types. In order to facilitate the development of more granular relation extraction systems large manually annotated corpora are needed. Those corpora should include high-quality manually labled entity mentions together with exhaustive relation annotations generated by domain experts. TRACK AND CORPUS DESCRIPTION Corpus description To carry out the DrugProt track at BioCreative VII, we have released a large manually labelled corpus including annotations of mentions of chemical compounds and drugs as well as genes, proteins and miRNAs. Domain experts with experience in biomedical literature annotation and database curation annotated by hand all abstracts using the BRAT annotation interface. The manual labeling of chemicals and genes was done in separate steps and by different experts to avoid introducing biases during the text annotation process. The manual tagging of entity mentions of chemicals and drugs as well as genes, proteins and miRNAs was done following a carefully designed annotation process and in line with publicly released annotation guidelines. Gene/protein entity mentions were manually mapped to their corresponding biologic al database identifiers whenever possible and classified as either normalizable to databases (tag: GENE-Y) or non normalizable mentions (GENE-N). Teams that participated at the DrugProt track were only provided with this classification of gene mentions and not the actual database identifier to avoid usage of external knowledge bases for producing their predictions. The corpus construction process required first annotating exhaustively all chemical and gene mentions (phase 1). Afterwards the relation annotation phase followed (phase 2), were relationships between these two types of entities had to be labeled according to public available annotation guidelines. Thus, to facilitate the annotation of chemical-protein interactions, the DrugProt track organizers constructed very granular relation annotation rules described in a 33 pages annotation guidelines document. These guidelines were refined during an iterative process based on the annotation of sample documents. The guidelines provided the basic details of the chemicalprotein interaction annotation task and the conventions that had to be followed during the corpus construction process. They incorporated suggestions made by curators as well as observations of annotation inconsistencies encountered when comparing results from different human curators. In brief, DrugProt interactions covered direct interactions (when a physical contact existed between a chemical/drug and a gene/protein) as well as indirect regulatory interactions that alter either the function or the quantity of the gene/gene product. 
The aim of the iterative manual annotation cycle was to improve the quality and consistency of the guidelines. During the planning of the guidelines some rules had to be reformulated to make them more explicit and clear and additional rules were added wherever necessary to better cover the practical annotation scenario and for being more complete. The manual annotation task basically consisted of labeling or marking manually through a customized BRAT webinterface the interactions given the article abstracts as content. Figure 1 summarizes the DrugProt relation types included in the annotation guidelines. Fig. 1. Overview of the DrugProt relation type hierarchy. The corpus annotation carried out for the DrugProt track was exhaustive for all the types of interactions previously specified. This implied that mentions of other kind of relationships between chemicals and genes (e.g. phenotypic and biological responses) were not manually labelled. Moreover, the DrugProt relations are directed in the sense that only relations of \u201cwhat a chemical does to a gene/protein\" (chemical \u2192 gene/protein direction) were annotated, and not vice versa. To establish a easy to understand relation nomenclature and avoid redundant class definitions, we reviewed several chemical repositories that included chemical \u2013 biology information. We revised DrugBank, the Therapeutic Targets Database (TTD) and ChEMBL, assay normalization ontologies (BAO) and previously existing formalizations for the annotation of relationships: the Biological Expression Language (BEL), curation guidelines for transcription regulation interactions (DNA-binding transcription factor \u2013 target gene interaction) and SIGNOR, a database of causal relationships between biological entities. Each of these resources inspired the definition of the subclasses DIRECT REGULATOR (e.g. DrugBank, ChEMBL, BAO and SIGNOR) and the INDIRECT REGULATOR (e.g. BEL, curation guidelines for transcription regulation interactions and SIGNOR). For example, DrugBank relationships for drugs included a total of 22 definitions, some of them overlapping with CHEMPROT subclasses (e.g. \u201cInhibitor\u201d, \u201cAntagonist\u201d, \u201cAgonist\u201d,...), some of them being regarded as highly specific for the purpose of this task (e.g. \u201cintercalation\u201d, \u201ccross-linking/alkylation\u201d) or referring to biological roles (e.g. \u201cAntibody\u201d, \u201cIncorporation into and Destabilization\u201d) and others, partially overlapping between them (e.g. \u201cBinder\u201d and \u201cLigand\u201d), that were merged into a single class. Concerning indirect regulatory aspects, the five classes of casual relationships between a subject and an object term defined by BEL (\u201cdecreases\u201d, \u201cdirectlyDecreases\u201d, \u201cincreases\u201d, \u201cdirectlyIncreases\u201d and \u201ccausesNoChange\u201d) were highly inspiring. Subclasses definitions of pharmacological modes of action were defined according to the UPHAR/BPS Guide to Pharmacology in 2016. For the DrugProt track a very granular chemical-protein relation annotation was carried out, with the aim to cover most of the relations that are of importance from the point of view of biochemical and pharmacological/biomedical perspective. Nevertheless, for the DrugProt track only a total of 13 relation types were used, keeping those that had enough training instances/examples and sufficient manual annotation consistency. 
The final list of relation types used for this shared task was: INDIRECT-DOWNREGULATOR, INDIRECT-UPREGULATOR, DIRECT-REGULATOR, ACTIVATOR, INHIBITOR, AGONIST, ANTAGONIST, AGONIST-ACTIVATOR, AGONIST-INHIBITOR, PRODUCT-OF, SUBSTRATE, SUBSTRATE_PRODUCT-OF or PART-OF. The DrugProt corpus was split randomly into training, development and test set. We also included a background and large scale background collection of records that were automatically annotated with drugs/chemicals and genes/proteins/miRNAs using an entity tagger trained on the manual DrugProt entity mentions. The background collections were merged with the test set to be able to get team predictions also for these records. Table 1 shows a su", "which Relation types ?", "Inhibitor", 790.0, 799.0], ["Considering recent progress in NLP, deep learning techniques and biomedical language models there is a pressing need to generate annotated resources and comparable evaluation scenarios that enable the development of advanced biomedical relation extraction systems that extract interactions between drugs/chemical entities and genes, proteins or miRNAs. Building on the results and experience of the CHEMDNER, CHEMDNER patents and ChemProt tracks, we have posed the DrugProt track at BioCreative VII. The DrugProt track focused on the evaluation of automatic systems able to extract 13 different types of drug-genes/protein relations of importance to understand gene regulatory and pharmacological mechanisms. The DrugProt track addressed regulatory associations (direct/indirect, activator/inhibitor relations), certain types of binding associations (antagonist and agonist relations) as well as metabolic associations (substrate or product relations). To promote development of novel tools and offer a comparative evaluation scenario we have released 61,775 manually annotated gene mentions, 65,561 chemical and drug mentions and a total of 24,526 relationships manually labeled by domain experts. A total of 30 teams submitted results for the DrugProt main track, while 9 teams submitted results for the large-scale text mining subtrack that required processing of over 2,3 million records. Teams obtained very competitive results, with predictions reaching fmeasures of over 0.92 for some relation types (antagonist) and fmeasures across all relation types close to 0.8. INTRODUCTION Among the most relevant biological and pharmacological relation types are those that involve (a) chemical compounds and drugs as well as (b) gene products including genes, proteins, miRNAs. A variety of associations between chemicals and genes/proteins are described in the biomedical literature, and there is a growing interest in facilitating a more systematic extraction of these relations from the literature, either for manual database curation initiatives or to generate large knowledge graphs of importance for drug discovery, drug repurposing, building regulatory or interaction networks or to characterize off-target interactions of drugs that might be of importance to understand better adverse drug reactions. At BioCreative VI, the ChemProt track tried to promote the development of novel systems between chemicals and genes for groups of biologically related association types (ChemProt track relation groups or CPRs). 
Although the obtained results did have a considerable impact in the development and evaluation of new biomedical relation extraction systems, a limitation of grouping more specific relation types into broader groups was the difficulty to directly exploit the results for database curation efforts and biomedical knowledge graph mining application scenarios. The considerable interest in the integration of chemical and biomedical data for drug-discovery purposes, together with the ongoing curation of relationships between biological and chemical entities from scientific publications and patents due to the recent COVID-19 pandemic, motivated the DrugProt track of BioCreative VII, which proposed using more granular relation types. In order to facilitate the development of more granular relation extraction systems large manually annotated corpora are needed. Those corpora should include high-quality manually labled entity mentions together with exhaustive relation annotations generated by domain experts. TRACK AND CORPUS DESCRIPTION Corpus description To carry out the DrugProt track at BioCreative VII, we have released a large manually labelled corpus including annotations of mentions of chemical compounds and drugs as well as genes, proteins and miRNAs. Domain experts with experience in biomedical literature annotation and database curation annotated by hand all abstracts using the BRAT annotation interface. The manual labeling of chemicals and genes was done in separate steps and by different experts to avoid introducing biases during the text annotation process. The manual tagging of entity mentions of chemicals and drugs as well as genes, proteins and miRNAs was done following a carefully designed annotation process and in line with publicly released annotation guidelines. Gene/protein entity mentions were manually mapped to their corresponding biologic al database identifiers whenever possible and classified as either normalizable to databases (tag: GENE-Y) or non normalizable mentions (GENE-N). Teams that participated at the DrugProt track were only provided with this classification of gene mentions and not the actual database identifier to avoid usage of external knowledge bases for producing their predictions. The corpus construction process required first annotating exhaustively all chemical and gene mentions (phase 1). Afterwards the relation annotation phase followed (phase 2), were relationships between these two types of entities had to be labeled according to public available annotation guidelines. Thus, to facilitate the annotation of chemical-protein interactions, the DrugProt track organizers constructed very granular relation annotation rules described in a 33 pages annotation guidelines document. These guidelines were refined during an iterative process based on the annotation of sample documents. The guidelines provided the basic details of the chemicalprotein interaction annotation task and the conventions that had to be followed during the corpus construction process. They incorporated suggestions made by curators as well as observations of annotation inconsistencies encountered when comparing results from different human curators. In brief, DrugProt interactions covered direct interactions (when a physical contact existed between a chemical/drug and a gene/protein) as well as indirect regulatory interactions that alter either the function or the quantity of the gene/gene product. 
The aim of the iterative manual annotation cycle was to improve the quality and consistency of the guidelines. During the planning of the guidelines some rules had to be reformulated to make them more explicit and clear and additional rules were added wherever necessary to better cover the practical annotation scenario and for being more complete. The manual annotation task basically consisted of labeling or marking manually through a customized BRAT webinterface the interactions given the article abstracts as content. Figure 1 summarizes the DrugProt relation types included in the annotation guidelines. Fig. 1. Overview of the DrugProt relation type hierarchy. The corpus annotation carried out for the DrugProt track was exhaustive for all the types of interactions previously specified. This implied that mentions of other kind of relationships between chemicals and genes (e.g. phenotypic and biological responses) were not manually labelled. Moreover, the DrugProt relations are directed in the sense that only relations of \u201cwhat a chemical does to a gene/protein\" (chemical \u2192 gene/protein direction) were annotated, and not vice versa. To establish a easy to understand relation nomenclature and avoid redundant class definitions, we reviewed several chemical repositories that included chemical \u2013 biology information. We revised DrugBank, the Therapeutic Targets Database (TTD) and ChEMBL, assay normalization ontologies (BAO) and previously existing formalizations for the annotation of relationships: the Biological Expression Language (BEL), curation guidelines for transcription regulation interactions (DNA-binding transcription factor \u2013 target gene interaction) and SIGNOR, a database of causal relationships between biological entities. Each of these resources inspired the definition of the subclasses DIRECT REGULATOR (e.g. DrugBank, ChEMBL, BAO and SIGNOR) and the INDIRECT REGULATOR (e.g. BEL, curation guidelines for transcription regulation interactions and SIGNOR). For example, DrugBank relationships for drugs included a total of 22 definitions, some of them overlapping with CHEMPROT subclasses (e.g. \u201cInhibitor\u201d, \u201cAntagonist\u201d, \u201cAgonist\u201d,...), some of them being regarded as highly specific for the purpose of this task (e.g. \u201cintercalation\u201d, \u201ccross-linking/alkylation\u201d) or referring to biological roles (e.g. \u201cAntibody\u201d, \u201cIncorporation into and Destabilization\u201d) and others, partially overlapping between them (e.g. \u201cBinder\u201d and \u201cLigand\u201d), that were merged into a single class. Concerning indirect regulatory aspects, the five classes of casual relationships between a subject and an object term defined by BEL (\u201cdecreases\u201d, \u201cdirectlyDecreases\u201d, \u201cincreases\u201d, \u201cdirectlyIncreases\u201d and \u201ccausesNoChange\u201d) were highly inspiring. Subclasses definitions of pharmacological modes of action were defined according to the UPHAR/BPS Guide to Pharmacology in 2016. For the DrugProt track a very granular chemical-protein relation annotation was carried out, with the aim to cover most of the relations that are of importance from the point of view of biochemical and pharmacological/biomedical perspective. Nevertheless, for the DrugProt track only a total of 13 relation types were used, keeping those that had enough training instances/examples and sufficient manual annotation consistency. 
The final list of relation types used for this shared task was: INDIRECT-DOWNREGULATOR, INDIRECT-UPREGULATOR, DIRECT-REGULATOR, ACTIVATOR, INHIBITOR, AGONIST, ANTAGONIST, AGONIST-ACTIVATOR, AGONIST-INHIBITOR, PRODUCT-OF, SUBSTRATE, SUBSTRATE_PRODUCT-OF or PART-OF. The DrugProt corpus was split randomly into training, development and test sets. We also included a background and a large-scale background collection of records that were automatically annotated with drugs/chemicals and genes/proteins/miRNAs using an entity tagger trained on the manual DrugProt entity mentions. The background collections were merged with the test set to be able to get team predictions also for these records. Table 1 shows a su", "which Relation types ?", "Activator", 780.0, 789.0], ["Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci – Software Mentions in Science – a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.", "which Relation types ?", "Developer", 805.0, 814.0], ["This paper analyses the relationship between GDP and carbon dioxide emissions for Mauritius and vice-versa in a historical perspective. Using rigorous econometric analysis, our results suggest that the carbon dioxide emission trajectory is closely related to the GDP time path. We show that emissions elasticity on income has been increasing over time. By estimating the EKC for the period 1975-2009, we were unable to prove the existence of a reasonable turning point and thus no EKC “U” shape was obtained. Our results suggest that Mauritius could not curb its carbon dioxide emissions in the last three decades. Thus, as hypothesized, the cost of degradation associated with GDP grows over time and it suggests that the economic and human activities are having increasingly negative environmental impacts on the country as compared to their economic prosperity.", "which Type of data ?", "time", 268.0, 272.0], ["This study examines the impact of various factors such as gross domestic product (GDP) per capita, energy use per capita and trade openness on carbon dioxide (CO2) emission per capita in the Central and Eastern European Countries. The extended environmental Kuznets curve (EKC) was employed, utilizing the available panel data from 1980 to 2002 for Bulgaria, Hungary, Romania and Turkey. 
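The DrugProt entry above describes exhaustive, directed chemical-to-gene relation annotations restricted to 13 types. Below is a minimal sketch of how such annotations could be tallied; the tab-separated layout (PMID, relation type, arg1, arg2) is a hypothetical assumption for illustration, not the official distribution format.

```python
# Sketch: tallying relation types in a DrugProt-style relations file.
# The TSV layout (PMID, relation type, arg1, arg2) is assumed, not official.
from collections import Counter
import csv
import io

sample = """12345\tINHIBITOR\tArg1:T1\tArg2:T5
12345\tACTIVATOR\tArg1:T2\tArg2:T5
67890\tSUBSTRATE\tArg1:T3\tArg2:T4
"""

counts = Counter()
for row in csv.reader(io.StringIO(sample), delimiter="\t"):
    pmid, rel_type, arg1, arg2 = row
    counts[rel_type] += 1

for rel_type, n in counts.most_common():
    print(f"{rel_type}\t{n}")
```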
The results confirm the existence of an EKC for the region such that CO2 emission per capita decreases over time as the per capita GDP increases. Energy use per capita is a significant factor that causes pollution in the region, indicating that the region produces environmentally unclean energy. The trade openness variable implies that globalization has not facilitated the emission level in the region. The results imply that the region needs environmentally cleaner technologies in energy production to achieve sustainable development. Copyright © 2008 John Wiley & Sons, Ltd and ERP Environment.", "which Type of data ?", "Panel", 318.0, 323.0], ["In this article, we attempt to use panel unit root and panel cointegration tests as well as the fully-modified ordinary least squares (OLS) approach to examine the relationships among carbon dioxide emissions, energy use and gross domestic product for 22 Organization for Economic Cooperation and Development (OECD) countries (Annex II Parties) over the 1971–2000 period. Furthermore, in order to investigate these results for other direct greenhouse gases (GHGs), we have estimated the Environmental Kuznets Curve (EKC) hypothesis by using the total GHG, methane, and nitrous oxide. The empirical results support that energy use still plays an important role in explaining the GHG emissions for OECD countries. In terms of the EKC hypothesis, the results showed that a quadratic relationship was found to exist in the long run. Thus, other countries could learn from developed countries in this regard and try to smooth the EKC curve at relatively less cost.", "which Type of data ?", "Panel", 35.0, 40.0], ["We explore the emissions-income relationship for CO2 in OECD countries using various modelling strategies. Even for this relatively homogeneous sample, we find that the inverted-U-shaped curve is quite sensitive to the degree of heterogeneity included in the panel estimations. This finding is robust, not only across different model specifications but also across estimation techniques, including the more flexible non-parametric approach. Differences in restrictions applied in panel estimations are therefore responsible for the widely divergent findings for an inverted-U shape for CO2. Our findings suggest that allowing for enough heterogeneity is essential to prevent spurious correlation from reduced-form panel estimations. Moreover, this inverted U for CO2 is likely to exist for many, but not for all, countries.", "which Type of data ?", "Panel", 258.0, 263.0], ["This paper examines the dynamic causal relationship between carbon dioxide emissions, energy consumption, economic growth, foreign trade and urbanization using time series data for the period of 1960-2009. Short-run unidirectional causalities are found from energy consumption and trade openness to carbon dioxide emissions, from trade openness to energy consumption, from carbon dioxide emissions to economic growth, and from economic growth to trade openness. The test results also support the existence of a long-run relationship among the variables in the form of Equation (1), which also conforms with the results of the bounds and Johansen cointegration tests. It is found that over time higher energy consumption in Japan gives rise to more carbon dioxide emissions, and as a result the environment will be polluted more. 
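Several of the EKC entries above estimate a quadratic relationship between income and emissions and report a turning point. Below is a minimal sketch of that reduced-form regression on synthetic data, using pooled OLS only; the cited studies use panel cointegration estimators instead.

```python
# Sketch: ln(CO2) = a + b*ln(GDP) + c*ln(GDP)^2 on synthetic data.
# The turning point follows from b + 2c*ln(GDP) = 0, i.e. exp(-b/(2c)).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
ln_gdp = rng.uniform(6, 10, 200)                     # log GDP per capita
ln_co2 = -20 + 5.0 * ln_gdp - 0.3 * ln_gdp**2 + rng.normal(0, 0.2, 200)

X = sm.add_constant(np.column_stack([ln_gdp, ln_gdp**2]))
fit = sm.OLS(ln_co2, X).fit()
b, c = fit.params[1], fit.params[2]
print("coefficients:", fit.params)
print("turning point (GDP per capita):", np.exp(-b / (2 * c)))
```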
But in respect of economic growth, trade openness and urbanization, the environmental quality is found to be a normal good in the long run.", "which Type of data ?", "Time series", 160.0, 171.0], ["There has been a growing interest in the design and synthesis of non-fullerene acceptors for organic solar cells that may overcome the drawbacks of the traditional fullerene-based acceptors. Herein, two novel push-pull (acceptor-donor-acceptor) type small-molecule acceptors, that is, ITDI and CDTDI, with indenothiophene and cyclopentadithiophene as the core units and 2-(3-oxo-2,3-dihydroinden-1-ylidene)malononitrile (INCN) as the end-capping units, are designed and synthesized for non-fullerene polymer solar cells (PSCs). After device optimization, PSCs based on ITDI exhibit good device performance with a power conversion efficiency (PCE) as high as 8.00%, outperforming the CDTDI-based counterparts fabricated under identical conditions (2.75% PCE). We further discuss the performance of these non-fullerene PSCs by correlating the energy level and carrier mobility with the core of non-fullerene acceptors. These results demonstrate that indenothiophene is a promising electron-donating core for high-performance non-fullerene small-molecule acceptors.", "which Acceptor ?", "ITDI", 285.0, 289.0], ["Low bandgap n-type organic semiconductor (n-OS) ITIC has attracted great attention for the application as an acceptor with medium bandgap p-type conjugated polymer as donor in nonfullerene polymer solar cells (PSCs) because of its attractive photovoltaic performance. Here we report a modification on the molecular structure of ITIC by side-chain isomerization with meta-alkyl-phenyl substitution, m-ITIC, to further improve its photovoltaic performance. In a comparison with its isomeric counterpart ITIC with para-alkyl-phenyl substitution, m-ITIC shows a higher film absorption coefficient, a larger crystalline coherence, and higher electron mobility. These inherent advantages of m-ITIC resulted in a higher power conversion efficiency (PCE) of 11.77% for the nonfullerene PSCs with m-ITIC as acceptor and a medium bandgap polymer J61 as donor, which is significantly improved over that (10.57%) of the corresponding devices with ITIC as acceptor. To the best of our knowledge, the PCE of 11.77% is one of the highest values reported in the literature to date for nonfullerene PSCs. More importantly, the m-ITIC-based device shows less thickness-dependent photovoltaic behavior than ITIC-based devices in the active-layer thickness range of 80-360 nm, which is beneficial for large area device fabrication. These results indicate that m-ITIC is a promising low bandgap n-OS for the application as an acceptor in PSCs, and the side-chain isomerization could be an easy and convenient way to further improve the photovoltaic performance of the donor and acceptor materials for high efficiency PSCs.", "which Acceptor ?", "m-ITIC", 398.0, 404.0], ["We develop an efficient fused-ring electron acceptor (ITIC-Th) based on an indacenodithieno[3,2-b]thiophene core and thienyl side-chains for organic solar cells (OSCs). Relative to its counterpart with phenyl side-chains (ITIC), ITIC-Th shows lower energy levels (ITIC-Th: HOMO = -5.66 eV, LUMO = -3.93 eV; ITIC: HOMO = -5.48 eV, LUMO = -3.83 eV) due to the σ-inductive effect of thienyl side-chains, which can match with high-performance narrow-band-gap polymer donors and wide-band-gap polymer donors. 
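The acceptor entries above quote frontier orbital energies such as ITIC-Th (HOMO = -5.66 eV, LUMO = -3.93 eV). Below is a small sketch computing the electrochemical gap and a donor-acceptor LUMO offset from such levels; the donor level used is a made-up assumption for illustration.

```python
# Sketch: energetics quoted for donor/acceptor pairs, using the ITIC and
# ITIC-Th frontier levels given above. The donor LUMO is hypothetical.
levels_eV = {
    "ITIC":    {"HOMO": -5.48, "LUMO": -3.83},
    "ITIC-Th": {"HOMO": -5.66, "LUMO": -3.93},
}

for name, lv in levels_eV.items():
    gap = lv["LUMO"] - lv["HOMO"]          # electrochemical gap
    print(f"{name}: gap = {gap:.2f} eV")

donor_lumo = -3.50                          # assumed donor level
offset = donor_lumo - levels_eV["ITIC-Th"]["LUMO"]
print(f"donor -> ITIC-Th LUMO offset = {offset:.2f} eV")
```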
ITIC-Th has higher electron mobility (6.1 × 10(-4) cm(2) V(-1) s(-1)) than ITIC (2.6 × 10(-4) cm(2) V(-1) s(-1)) due to enhanced intermolecular interaction induced by sulfur-sulfur interaction. We fabricate OSCs by blending the ITIC-Th acceptor with two different low-band-gap and wide-band-gap polymer donors. In one case, a power conversion efficiency of 9.6% was observed, which rivals some of the highest efficiencies for single-junction OSCs based on fullerene acceptors.", "which Acceptor ?", "ITIC-Th", 54.0, 61.0], ["
A nonfullerene electron acceptor (IEIC) based on indaceno[1,2-b:5,6-b\u2032]dithiophene and 2-(3-oxo-2,3-dihydroinden-1-ylidene)malononitrile was designed and synthesized, and fullerene-free polymer solar cells based on the IEIC acceptor showed power conversion efficiencies of up to 6.31%.
", "which Acceptor ?", "IEIC", 37.0, 41.0], ["Two cheliform non-fullerene acceptors, DTPC-IC and DTPC-DFIC, based on a highly electron-rich core, dithienopicenocarbazole (DTPC), are synthesized, showing ultra-narrow bandgaps (as low as 1.21 eV). The two-dimensional nitrogen-containing conjugated DTPC possesses strong electron-donating capability, which induces intense intramolecular charge transfer and intermolecular \u03c0-\u03c0 stacking in derived acceptors. The solar cell based on DTPC-DFIC and a spectrally complementary polymer donor, PTB7-Th, showed a high power conversion efficiency of 10.21% and an extremely low energy loss of 0.45 eV, which is the lowest among reported efficient OSCs.", "which Acceptor ?", "DTPC-DFIC", 51.0, 60.0], ["Abstract Objective: Paclitaxel (PTX)-loaded polymer (Poly(lactic-co-glycolic acid), PLGA)-based nanoformulation was developed with the objective of formulating cremophor EL-free nanoformulation intended for intravenous use. Significance: The polymeric PTX nanoparticles free from the cremophor EL will help in eliminating the shortcomings of the existing delivery system as cremophor EL causes serious allergic reactions to the subjects after intravenous use. Methods and results: Paclitaxel-loaded nanoparticles were formulated by nanoprecipitation method. The diminutive nanoparticles (143.2 nm) with uniform size throughout (polydispersity index, 0.115) and high entrapment efficiency (95.34%) were obtained by employing the Box\u2013Behnken design for the optimization of the formulation with the aid of desirability approach-based numerical optimization technique. Optimized levels for each factor viz. polymer concentration (X1), amount of organic solvent (X2), and surfactant concentration (X3) were 0.23%, 5 ml %, and 1.13%, respectively. The results of the hemocompatibility studies confirmed the safety of PLGA-based nanoparticles for intravenous administration. Pharmacokinetic evaluations confirmed the longer retention of PTX in systemic circulation. Conclusion: In a nutshell, the developed polymeric nanoparticle formulation of PTX precludes the inadequacy of existing PTX formulation and can be considered as superior alternative carrier system of the same.", "which Uses drug ?", "Paclitaxel", 20.0, 30.0], ["Abstract Melanotransferrin antibody (MA) and tamoxifen (TX) were conjugated on etoposide (ETP)-entrapped solid lipid nanoparticles (ETP-SLNs) to target the blood\u2013brain barrier (BBB) and glioblastom multiforme (GBM). MA- and TX-conjugated ETP-SLNs (MA\u2013TX\u2013ETP\u2013SLNs) were used to infiltrate the BBB comprising a monolayer of human astrocyte-regulated human brain-microvascular endothelial cells (HBMECs) and to restrain the proliferation of malignant U87MG cells. TX-grafted ETP-SLNs (TX\u2013ETP\u2013SLNs) significantly enhanced the BBB permeability coefficient for ETP and raised the fluorescent intensity of calcein-AM when compared with ETP-SLNs. In addition, surface MA could increase the BBB permeability coefficient for ETP about twofold. The viability of HBMECs was higher than 86%, suggesting a high biocompatibility of MA\u2013TX\u2013ETP-SLNs. Moreover, the efficiency in antiproliferation against U87MG cells was in the order of MA\u2013TX\u2013ETP-SLNs > TX\u2013ETP-SLNs > ETP-SLNs > SLNs. The capability of MA\u2013TX\u2013ETP-SLNs to target HBMECs and U87MG cells during internalization was verified by immunochemical staining of expressed melanotransferrin. 
MA\u2013TX\u2013ETP-SLNs can be a potent pharmacotherapy to deliver ETP across the BBB to GBM.", "which Uses drug ?", "Tamoxifen", 45.0, 54.0], ["Nanocrystal formulation has become a viable solution for delivering poorly soluble drugs including chemotherapeutic agents. The purpose of this study was to examine cellular uptake of paclitaxel nanocrystals by confocal imaging and concentration measurement. It was found that drug nanocrystals could be internalized by KB cells at much higher concentrations than a conventional, solubilized formulation. The imaging and quantitative results suggest that nanocrystals could be directly taken up by cells as solid particles, likely via endocytosis. Moreover, it was found that polymer treatment to drug nanocrystals, such as surface coating and lattice entrapment, significantly influenced the cellular uptake. While drug molecules are in the most stable physical state, nanocrystals of a poorly soluble drug are capable of achieving concentrated intracellular presence enabling needed therapeutic effects.", "which Uses drug ?", "Paclitaxel", 184.0, 194.0], ["Radioresistant hypoxic cells may contribute to the failure of radiation therapy in controlling certain tumors. Some studies have suggested the radiosensitizing effect of paclitaxel. The poly(D,L-lactide-co-glycolide)(PLGA) nanoparticles containing paclitaxel were prepared by o/w emulsification-solvent evaporation method. The physicochemical characteristics of the nanoparticles (i.e. encapsulation efficiency, particle size distribution, morphology, in vitro release) were studied. The morphology of the two human tumor cell lines: a carcinoma cervicis (HeLa) and a hepatoma (HepG2), treated with paclitaxel-loaded nanoparticles was photomicrographed. Flow cytometry was used to quantify the number of the tumor cells held in the G2/M phase of the cell cycle. The cellular uptake of nanoparticles was evaluated by transmission electronic microscopy. Cell viability was determined by the ability of single cell to form colonies in vitro. The prepared nanoparticles were spherical in shape with size between 200nm and 800nm. The encapsulation efficiency was 85.5\uff05. The release behaviour of paclitaxel from the nanoparticles exhibited a biphasic pattern characterised by a fast initial release during the first 24 h, followed by a slower and continuous release. Co-culture of the two tumor cell lines with paclitaxel-loaded nanoparticles demonstrated that the cell morphology was changed and the released paclitaxel retained its bioactivity to block cells in the G2/M phase. The cellular uptake of nanoparticles was observed. The free paclitaxel and paclitaxel-loaded nanoparticles effectively sensitized hypoxic HeLa and HepG2 cells to radiation. Under this experimental condition, the radiosensitization of paclitaxel-loaded nanoparticles was more significant than that of free paclitaxel.Keywords: Paclitaxel\uff1bDrug delivery\uff1bNanoparticle\uff1bRadiotherapy\uff1bHypoxia\uff1bHuman tumor cells\uff1bcellular uptake", "which Uses drug ?", "Paclitaxel", 170.0, 180.0], ["Alzheimer's disease is a growing concern in the modern world. As the currently available medications are not very promising, there is an increased need for the fabrication of newer drugs. Curcumin is a plant derived compound which has potential activities beneficial for the treatment of Alzheimer's disease. Anti-amyloid activity and anti-oxidant activity of curcumin is highly beneficial for the treatment of Alzheimer's disease. 
The insolubility of curcumin in water restricts its use to a great extent, which can be overcome by the synthesis of curcumin nanoparticles. In our work, we have successfully synthesized water-soluble PLGA-coated curcumin nanoparticles and characterized them using different techniques. As drug targeting to diseases of cerebral origin is difficult due to the stringency of the blood-brain barrier, we have coupled the nanoparticle with the Tet-1 peptide, which has affinity for neurons and possesses retrograde transportation properties. Our results suggest that curcumin encapsulated-PLGA nanoparticles are able to destroy amyloid aggregates, exhibit anti-oxidative property and are non-cytotoxic. The encapsulation of the curcumin in PLGA does not destroy its inherent properties and so the PLGA-curcumin nanoparticles can be used as a drug with multiple functions in treating Alzheimer's disease, proving it to be a potential therapeutic tool against this dreaded disease.", "which Uses drug ?", "Curcumin", 188.0, 196.0], ["Biotherapeutics such as peptides possess strong potential for the treatment of intractable neurological disorders. However, because of their low stability and the impermeability of the blood-brain barrier (BBB), biotherapeutics are difficult to transport into brain parenchyma via intravenous injection. Herein, we present a novel poly(ethylene glycol)-poly(d,l-lactic-co-glycolic acid) polymersome-based nanomedicine with self-assembled bilayers, which was functionalized with lactoferrin (Lf-POS) to facilitate the transport of a neuroprotective peptide into the brain. The apparent diffusion coefficient (D*) of H(+) through the polymersome membrane was 5.659 × 10(-26) cm(2) s(-1), while that of liposomes was 1.017 × 10(-24) cm(2) s(-1). The stability of the polymersome membrane was much higher than that of liposomes. The uptake of polymersomes by mouse brain capillary endothelial cells proved that the optimal density of lactoferrin was 101 molecules per polymersome. Fluorescence imaging indicated that Lf101-POS was effectively transferred into the brain. In pharmacokinetics, compared with transferrin-modified polymersomes and cationic bovine serum albumin-modified polymersomes, Lf-POS obtained the greatest BBB permeability surface area and percentage of injected dose per gram (%ID per g). Furthermore, Lf-POS holding S14G-humanin protected against learning and memory impairment induced by amyloid-β25-35 in rats. Western blotting revealed that the nanomedicine provided neuroprotection against over-expression of apoptotic proteins exhibiting neurofibrillary tangle pathology in neurons. The results indicated that polymersomes can be exploited as a promising non-invasive nanomedicine capable of mediating peptide therapeutic delivery and controlling the release of drugs to the central nervous system.", "which Uses drug ?", "Humanin", 1339.0, 1346.0], ["The Fe(3)O(4) nanoparticles, tailored with maleimidyl 3-succinimidopropionate ligands, were conjugated with paclitaxel molecules that were attached with a poly(ethylene glycol) (PEG) spacer through a phosphodiester moiety at the (C-2')-OH position. The average number of paclitaxel molecules/nanoparticles was determined as 83. These nanoparticles liberated paclitaxel molecules upon exposure to phosphodiesterase.", "which Uses drug ?", "Paclitaxel", 108.0, 118.0], ["Breast cancer is a major form of cancer, with a high mortality rate in women. It is crucial to achieve more efficient and safe anticancer drugs. 
Recent developments in medical nanotechnology have resulted in novel advances in cancer drug delivery. Cisplatin, doxorubicin, and 5-fluorouracil are three important anti-cancer drugs which have poor water-solubility. In this study, we used cisplatin, doxorubicin, and 5-fluorouracil-loaded polycaprolactone-polyethylene glycol (PCL-PEG) nanoparticles to improve the stability and solubility of molecules in drug delivery systems. The nanoparticles were prepared by a double emulsion method and characterized with Fourier Transform Infrared (FTIR) spectroscopy and Hydrogen-1 nuclear magnetic resonance (1H NMR). Cells were treated with equal concentrations of cisplatin, doxorubicin and 5-fluorouracil-loaded PCL-PEG nanoparticles, and free cisplatin, doxorubicin and 5-fluorouracil. The 3-[4,5-dimethylthiazol-2yl]-2,5-diphenyl tetrazolium bromide (MTT) assay confirmed that cisplatin, doxorubicin, and 5-fluorouracil-loaded PCL-PEG nanoparticles enhanced cytotoxicity and drug delivery in T47D and MCF7 breast cancer cells. However, the IC50 value of doxorubicin was lower than the IC50 values of both cisplatin and 5-fluorouracil, where the difference was statistically considered significant (p<0.05). However, the IC50 values of all drugs on T47D were lower than those on MCF7.", "which Uses drug ?", "Cisplatin", 248.0, 257.0], ["Background: Paclitaxel (PTX) is one of the most important and effective anticancer drugs for the treatment of human cancer. However, its low solubility and severe adverse effects limited its clinical use. To overcome this limitation, nanotechnology has been used to overcome tumors due to its excellent antimicrobial activity. Objective: This study was to demonstrate the anticancer properties of functionalized silver nanoparticles loaded with paclitaxel (Ag@PTX) induced A549 cell apoptosis through ROS-mediated signaling pathways. Methods: The Ag@PTX nanoparticles were charged with a zeta potential of about -17 mV and characterized around 2 nm with a narrow size distribution. Results: Ag@PTX significantly decreased the viability of A549 cells and possessed selectivity between cancer and normal cells. Ag@PTX-induced A549 cell apoptosis was confirmed by nuclear condensation, DNA fragmentation, and activation of caspase-3. Furthermore, Ag@PTX enhanced the anti-cancer activity of A549 cells through ROS-mediated p53 and AKT signalling pathways. Finally, in a xenograft nude mouse model, Ag@PTX suppressed the growth of tumors. Conclusion: Our findings suggest that Ag@PTX may be a candidate as a chemopreventive agent and could be a highly efficient way to achieve anticancer synergism for human cancers.", "which Uses drug ?", "Paclitaxel", 12.0, 22.0], ["Breast cancer is a major form of cancer, with a high mortality rate in women. It is crucial to achieve more efficient and safe anticancer drugs. Recent developments in medical nanotechnology have resulted in novel advances in cancer drug delivery. Cisplatin, doxorubicin, and 5-fluorouracil are three important anti-cancer drugs which have poor water-solubility. In this study, we used cisplatin, doxorubicin, and 5-fluorouracil-loaded polycaprolactone-polyethylene glycol (PCL-PEG) nanoparticles to improve the stability and solubility of molecules in drug delivery systems. The nanoparticles were prepared by a double emulsion method and characterized with Fourier Transform Infrared (FTIR) spectroscopy and Hydrogen-1 nuclear magnetic resonance (1H NMR). 
Cells were treated with equal concentrations of cisplatin, doxorubicin and 5-fluorouracil-loaded PCL-PEG nanoparticles, and free cisplatin, doxorubicin and 5-fluorouracil. The 3-[4,5-dimethylthiazol-2yl]-2,5-diphenyl tetrazolium bromide (MTT) assay confirmed that cisplatin, doxorubicin, and 5-fluorouracil-loaded PCL-PEG nanoparticles enhanced cytotoxicity and drug delivery in T47D and MCF7 breast cancer cells. However, the IC50 value of doxorubicin was lower than the IC50 values of both cisplatin and 5-fluorouracil, where the difference was statistically considered significant (p<0.05). However, the IC50 values of all drugs on T47D were lower than those on MCF7.", "which Uses drug ?", "Doxorubicin", 259.0, 270.0], ["In this study, we examine the relationship between the physical structure and dissolution behavior of olanzapine (OLZ) prepared via hot-melt extrusion in three polymers [polyvinylpyrrolidone (PVP) K30, polyvinylpyrrolidone-co-vinyl acetate (PVPVA) 6:4, and Soluplus® (SLP)]. In particular, we examine whether full amorphicity is necessary to achieve a favorable dissolution profile. Drug–polymer miscibility was estimated using melting point depression and Hansen solubility parameters. Solid dispersions were characterized using differential scanning calorimetry, X-ray powder diffraction, and scanning electron microscopy. All the polymers were found to be miscible with OLZ in a decreasing order of PVP>PVPVA>SLP. At a lower extrusion temperature (160°C), PVP generated fully amorphous dispersions with OLZ, whereas the formulations with PVPVA and SLP contained 14%–16% crystalline OLZ. Increasing the extrusion temperature to 180°C allowed the preparation of fully amorphous systems with PVPVA and SLP. Despite these differences, the dissolution rates of these preparations were comparable, with PVP showing a lower release rate despite being fully amorphous. These findings suggested that, at least in the particular case of OLZ, the absence of crystalline material may not be critical to the dissolution performance. We suggest alternative key factors determining dissolution, particularly the dissolution behavior of the polymers themselves.", "which Uses drug ?", "Olanzapine", 102.0, 112.0], ["Abstract— The effect of topically active 2‐hydroxypropyl‐β‐cyclodextrin (HP‐β‐CyD) eye‐drop formulations containing solutions of acetazolamide, ethoxyzolamide or timolol on the intra‐ocular pressure (IOP) was investigated in normotensive conscious rabbits. Both acetazolamide and ethoxyzolamide were active but their IOP‐lowering effect was less than that of timolol. The IOP‐lowering effects of acetazolamide and ethoxyzolamide and that of timolol appeared to be to some extent additive. Combination of acetazolamide and timolol or ethoxyzolamide and timolol in one HP‐β‐CyD formulation resulted in a significant increase in the duration of activity compared with HP‐β‐CyD formulations containing only acetazolamide, ethoxyzolamide or timolol. 
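Several of the drug-delivery entries above compare IC50 values from MTT viability assays. Below is a minimal sketch of recovering an IC50 by fitting a four-parameter logistic dose-response curve; the data points are synthetic, not taken from any of the cited studies.

```python
# Sketch: four-parameter logistic fit to synthetic MTT viability data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

dose = np.array([0.01, 0.1, 1, 10, 100])          # uM, synthetic
viability = np.array([98, 90, 55, 20, 8])         # % of control, synthetic

popt, _ = curve_fit(four_pl, dose, viability,
                    p0=[5, 100, 1.0, 1.0], maxfev=10000)
print(f"estimated IC50 = {popt[2]:.2f} uM")
```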
Also, it was possible to increase the IOP‐lowering effect of acetazolamide by formulating the drug as a suspension in an aqueous HP‐β‐CyD vehicle.", "which Uses drug ?", "Acetazolamide", 129.0, 142.0], ["Interactivity between humans and smart systems, including wearable, body-attachable, or implantable platforms, can be enhanced by realization of multifunctional human-machine interfaces, where a variety of sensors collect information about the surrounding environment, intentions, or physiological conditions of the human to which they are attached. Here, we describe a stretchable, transparent, ultrasensitive, and patchable strain sensor that is made of a novel sandwich-like stacked piezoresistive nanohybrid film of single-wall carbon nanotubes (SWCNTs) and a conductive elastomeric composite of polyurethane (PU)-poly(3,4-ethylenedioxythiophene) polystyrenesulfonate (PEDOT:PSS). This sensor, which can detect small strains on human skin, was created using environmentally benign water-based solution processing. We attributed the tunability of strain sensitivity (i.e., gauge factor), stability, and optical transparency to enhanced formation of percolating networks between conductive SWCNTs and PEDOT phases at interfaces in the stacked PU-PEDOT:PSS/SWCNT/PU-PEDOT:PSS structure. The mechanical stability, high stretchability of up to 100%, optical transparency of 62%, and gauge factor of 62 suggested that when attached to the skin of the face, this sensor would be able to detect small strains induced by emotional expressions such as laughing and crying, as well as eye movement, and we confirmed this experimentally.", "which Sensing material ?", "SWCNT/PU-PEDOT:PSS", 1071.0, 1089.0], ["Graphene is widely regarded as one of the most promising materials for sensor applications. Here, we demonstrate that a pristine graphene can detect gas molecules at extremely low concentrations with detection limits as low as 158 parts-per-quadrillion (ppq) for a range of gas molecules at room temperature. The unprecedented sensitivity was achieved by applying our recently developed concept of continuous in situ cleaning of the sensing material with ultraviolet light. The simplicity of the concept, together with graphene's flexibility to be used on various platforms, is expected to intrigue more investigations to develop ever more sensitive sensors.", "which Sensing material ?", "Graphene", 0.0, 8.0], ["In this article, cupric oxide (CuO) leafletlike nanosheets have been synthesized by a facile, low-cost, and surfactant-free method, and they have further been successfully developed for sensitive and selective determination of hydrogen sulfide (H2S) with high recovery ability. The experimental results have revealed that the sensitivity and recovery time of the present H2S gas sensor are strongly dependent on the working temperature. The best H2S sensing performance has been achieved with a low detection limit of 2 ppb and broad linear range from 30 ppb to 1.2 ppm. The gas sensor is reversible, with a quick response time of 4 s and a short recovery time of 9 s. In addition, negligible responses can be observed on exposure to 100-fold concentrations of other gases which may exist in the atmosphere, such as nitrogen (N2), oxygen (O2), nitric oxide (NO), carbon monoxide (CO), nitrogen dioxide (NO2), hydrogen (H2), and so on, indicating relatively high selectivity of the present H2S sensor. 
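The sensor entries above quote a gauge factor of 62 for the strain sensor and response/recovery behavior for the CuO gas sensor. Below is a small sketch of the two common figures of merit, GF = (ΔR/R0)/ε and gas response S = Ra/Rg; the resistance values are made up for illustration.

```python
# Sketch: figures of merit for a piezoresistive strain sensor and a
# chemiresistive gas sensor. All resistance values are hypothetical.
def gauge_factor(r0_ohm, r_ohm, strain):
    """GF = relative resistance change divided by applied strain."""
    return ((r_ohm - r0_ohm) / r0_ohm) / strain

def gas_response(r_air_ohm, r_gas_ohm):
    """Common response definition for n/p-type oxides: S = Ra / Rg."""
    return r_air_ohm / r_gas_ohm

print("GF =", gauge_factor(r0_ohm=1000.0, r_ohm=1620.0, strain=0.01))
print("S  =", gas_response(r_air_ohm=5.0e6, r_gas_ohm=2.5e5))
```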
The H2S sensor based on t...", "which Sensing material ?", "CuO", 31.0, 34.0], ["Despite their successful use in many conscientious studies involving outdoor learning applications, mobile learning systems still have certain limitations. For instance, because students cannot obtain real-time, context-aware content in outdoor locations such as historical sites, endangered animal habitats, and geological landscapes, they are unable to search, collect, share, and edit information by using information technology. To address such concerns, this work proposes an environment of ubiquitous learning with educational resources (EULER) based on radio frequency identification (RFID), augmented reality (AR), the Internet, ubiquitous computing, embedded systems, and database technologies. EULER helps teachers deliver lessons on site and cultivate student competency in adopting information technology to improve learning. To evaluate its effectiveness, we used the proposed EULER for natural science learning at the Guandu Nature Park in Taiwan. The participants were elementary school teachers and students. The analytical results revealed that the proposed EULER improves student learning. Moreover, the largely positive feedback from a post-study survey confirms the effectiveness of EULER in supporting outdoor learning and its ability to attract the interest of students.", "which Result ?", "Positive", 1129.0, 1137.0], ["Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.", "which Result ?", "F1 score", 926.0, 934.0], ["A rapidly growing amount of content posted online, such as food recipes, opens doors to new exciting applications at the intersection of vision and language. In this work, we aim to estimate the calorie amount of a meal directly from an image by learning from recipes people have published on the Internet, thus skipping time-consuming manual data annotation. Since there are few large-scale publicly available datasets captured in unconstrained environments, we propose the pic2kcal benchmark comprising 308 000 images from over 70 000 recipes including photographs, ingredients, and instructions. To obtain nutritional information of the ingredients and automatically determine the ground-truth calorie value, we match the items in the recipes with structured information from a food item database. We evaluate various neural networks for regression of the calorie quantity and extend them with the multi-task paradigm. 
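The pic2kcal entries describe extending calorie regression with a multi-task objective, detailed in the continuation below. Here is a minimal PyTorch-style sketch of such a multi-head loss (calorie regression, macronutrient regression, multi-label ingredient classification); shapes and the unweighted loss sum are assumptions, not the paper's exact setup.

```python
# Sketch: pic2kcal-style multi-task heads over shared backbone features.
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    def __init__(self, feat_dim=512, n_ingredients=100):
        super().__init__()
        self.kcal = nn.Linear(feat_dim, 1)          # calorie regression
        self.macros = nn.Linear(feat_dim, 3)        # protein, carbs, fat
        self.ingredients = nn.Linear(feat_dim, n_ingredients)  # multi-label

    def forward(self, feats):
        return self.kcal(feats), self.macros(feats), self.ingredients(feats)

head = MultiTaskHead()
feats = torch.randn(8, 512)                         # stand-in backbone output
kcal, macros, ingr = head(feats)

targets = (torch.rand(8, 1), torch.rand(8, 3),
           (torch.rand(8, 100) > 0.9).float())
loss = (nn.functional.mse_loss(kcal, targets[0])
        + nn.functional.mse_loss(macros, targets[1])
        + nn.functional.binary_cross_entropy_with_logits(ingr, targets[2]))
print(float(loss))
```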
Our learning procedure combines the calorie estimation with prediction of proteins, carbohydrates, and fat amounts as well as a multi-label ingredient classification. Our experiments demonstrate clear benefits of multi-task learning for calorie estimation, surpassing the single-task calorie regression by 9.9%. To encourage further research on this task, we make the code for generating the dataset and the models publicly available.", "which Result ?", "Prediction of proteins", 982.0, 1004.0], ["A rapidly growing amount of content posted online, such as food recipes, opens doors to new exciting applications at the intersection of vision and language. In this work, we aim to estimate the calorie amount of a meal directly from an image by learning from recipes people have published on the Internet, thus skipping time-consuming manual data annotation. Since there are few large-scale publicly available datasets captured in unconstrained environments, we propose the pic2kcal benchmark comprising 308 000 images from over 70 000 recipes including photographs, ingredients, and instructions. To obtain nutritional information of the ingredients and automatically determine the ground-truth calorie value, we match the items in the recipes with structured information from a food item database. We evaluate various neural networks for regression of the calorie quantity and extend them with the multi-task paradigm. Our learning procedure combines the calorie estimation with prediction of proteins, carbohydrates, and fat amounts as well as a multi-label ingredient classification. Our experiments demonstrate clear benefits of multi-task learning for calorie estimation, surpassing the single-task calorie regression by 9.9%. To encourage further research on this task, we make the code for generating the dataset and the models publicly available.", "which Result ?", "Ingredient classification", 1062.0, 1087.0], ["In wireless sensor networks, nodes in the area of interest must report sensing readings to the sink, and these reports must satisfy the report frequency required by the sink. This paper proposes a link-aware clustering mechanism, called LCM, to determine an energy-efficient and reliable routing path. The LCM primarily considers node status and link condition, and uses a novel clustering metric called the predicted transmission count (PTX) to evaluate the qualification of nodes for clusterheads and gateways to construct clusters. Each clusterhead or gateway candidate depends on the PTX to derive its priority, and the candidate with the highest priority becomes the clusterhead or gateway. Simulation results validate that the proposed LCM significantly outperforms clustering mechanisms based on random selection and on link quality and residual energy alone in terms of packet delivery ratio, energy consumption, and delivery latency.", "which Protocol ?", "LCM", 239.0, 242.0], ["The ability to extract topological regularity out of large randomly deployed sensor networks holds the promise to maximally leverage correlation for data aggregation and also to assist with sensor localization and hierarchy creation. This paper focuses on extracting such regular structures from physical topology through the development of a distributed clustering scheme. 
The topology adaptive spatial clustering (TASC) algorithm presented here is a distributed algorithm that partitions the network into a set of locally isotropic, non-overlapping clusters without prior knowledge of the number of clusters, cluster size and node coordinates. This is achieved by deriving a set of weights that encode distance measurements, connectivity and density information within the locality of each node. The derived weights form the terrain for holding a coordinated leader election in which each node selects the node closer to the center of mass of its neighborhood to become its leader. The clustering algorithm also employs a dynamic density reachability criterion that groups nodes according to their neighborhood's density properties. Our simulation results show that the proposed algorithm can trace locally isotropic structures in a non-isotropic network and cluster the network with respect to local density attributes. We also found out that TASC exhibits consistent behavior in the presence of moderate measurement noise levels", "which Protocol ?", "TASC", 416.0, 420.0], ["Sensor webs consisting of nodes with limited battery power and wireless communications are deployed to collect useful information from the field. Gathering sensed information in an energy efficient manner is critical to operate the sensor network for a long period of time. In W. Heinzelman et al. (Proc. Hawaii Conf. on System Sci., 2000), a data collection problem is defined where, in a round of communication, each sensor node has a packet to be sent to the distant base station. If each node transmits its sensed data directly to the base station then it will deplete its power quickly. The LEACH protocol presented by W. Heinzelman et al. is an elegant solution where clusters are formed to fuse data before transmitting to the base station. By randomizing the cluster heads chosen to transmit to the base station, LEACH achieves a factor of 8 improvement compared to direct transmissions, as measured in terms of when nodes die. In this paper, we propose PEGASIS (power-efficient gathering in sensor information systems), a near optimal chain-based protocol that is an improvement over LEACH. In PEGASIS, each node communicates only with a close neighbor and takes turns transmitting to the base station, thus reducing the amount of energy spent per round. Simulation results show that PEGASIS performs better than LEACH by about 100 to 300% when 1%, 20%, 50%, and 100% of nodes die for different network sizes and topologies.", "which Protocol ?", "PEGASIS", 962.0, 969.0], ["Future large-scale sensor networks may comprise thousands of wirelessly connected sensor nodes that could provide an unimaginable opportunity to interact with physical phenomena in real time. However, the nodes are typically highly resource-constrained. Since the communication task is a significant power consumer, various attempts have been made to introduce energy-awareness at different levels within the communication stack. Clustering is one such attempt to control energy dissipation for sensor data dissemination in a multihop fashion. The Time-Controlled Clustering Algorithm (TCCA) is proposed to realize a network-wide energy reduction. A realistic energy dissipation model is derived probabilistically to quantify the sensor network's energy consumption using the proposed clustering algorithm. A discrete-event simulator is developed to verify the mathematical model and to further investigate TCCA in other scenarios. 
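The LEACH/PEGASIS/TCCA entries above all reason about per-round radio energy. Below is a sketch of the first-order radio model commonly used to evaluate such protocols, E_tx(k, d) = E_elec*k + eps_amp*k*d^2 and E_rx(k) = E_elec*k; the parameter values are the textbook ones, assumed here for illustration rather than taken from the cited papers.

```python
# Sketch: first-order radio energy model used in LEACH/PEGASIS-style studies.
E_ELEC = 50e-9        # J/bit, transmit/receive electronics (assumed)
EPS_AMP = 100e-12     # J/bit/m^2, amplifier energy (assumed)

def e_tx(bits, dist_m):
    return E_ELEC * bits + EPS_AMP * bits * dist_m**2

def e_rx(bits):
    return E_ELEC * bits

# Relaying a 2000-bit packet over two 50 m hops vs one 100 m hop:
two_hops = e_tx(2000, 50) + e_rx(2000) + e_tx(2000, 50)
one_hop = e_tx(2000, 100)
print(f"two hops: {two_hops:.2e} J, one hop: {one_hop:.2e} J")
```

With the quadratic distance term, two short hops cost less than one long hop, which is the intuition behind chain- and cluster-based forwarding.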
The simulator is also extended to include the rest of the communication stack to allow a comprehensive evaluation of the proposed algorithm.", "which Protocol ?", "TCCA", 586.0, 590.0], ["Wireless sensor networks are expected to find wide applicability and increasing deployment in the near future. In this paper, we propose a formal classification of sensor networks, based on their mode of functioning, as proactive and reactive networks. Reactive networks, as opposed to passive data collecting proactive networks, respond immediately to changes in the relevant parameters of interest. We also introduce a new energy efficient protocol, TEEN (Threshold sensitive Energy Efficient sensor Network protocol) for reactive networks. We evaluate the performance of our protocol for a simple temperature sensing application. In terms of energy efficiency, our protocol has been observed to outperform existing conventional sensor network protocols.", "which Protocol ?", "TEEN", 452.0, 456.0], ["In the last few years, several studies have found an inverted-U relationship between per capita income and environmental degradation. This relationship, known as the environmental Kuznets curve (EKC), suggests that environmental degradation increases in the early stages of growth, but it eventually decreases as income exceeds a threshold level. However, this paper investigates the relationship between per capita CO2 emissions, economic growth and trade liberalization based on econometric techniques of unit root test, co-integration and a panel data set during the period 1960-1996 for BRICS countries. Data properties were analyzed to determine their stationarity using the LLC, IPS, ADF and PP unit root tests, which indicated that the series are I(1). We find a cointegration relationship between per capita CO2 emissions, economic growth and trade liberalization by applying the Kao panel cointegration test. The evidence indicates that in the long run trade liberalization has a positive significant impact on CO2 emissions and the impact of trade liberalization on emissions growth depends on the level of income. Our findings suggest that there is a quadratic relationship between real GDP and CO2 emissions for the region as a whole. The estimated long-run coefficients of real GDP and its square satisfy the EKC hypothesis in all of the studied countries. Our estimation shows that the inflection point, or optimal point, of real GDP per capita is about 5269.4 dollars. The results show that on average, sample countries are on the positive side of the inverted U curve. The turning points are very low in some cases and very high in other cases, hence providing poor evidence in support of the EKC hypothesis. Thus, our findings suggest that all BRICS countries need to sacrifice economic growth to decrease their emission levels", "which Methodology ?", "Kao Panel", 882.0, 891.0], ["The objective of this study is to analyse the long-run dynamic relationship of carbon dioxide emissions, real gross domestic product (GDP), the square of real GDP, energy consumption, trade and tourism under an Environmental Kuznets Curve (EKC) model for the Organization for Economic Co-operation and Development (OECD) member countries. Since we find the presence of cross-sectional dependence within the panel time-series data, we apply second-generation unit root tests, cointegration test and causality test which can deal with cross-sectional dependence problems. 
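The panel-econometrics entries above rest on classifying series as I(1) versus I(0) before testing for cointegration. Below is a minimal sketch of a first-generation ADF unit root test with statsmodels as a baseline; the CADF/CIPS tests described next handle cross-sectional dependence and are not part of statsmodels, so this only illustrates the differencing logic.

```python
# Sketch: ADF test on a synthetic random walk and its first difference.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
level = np.cumsum(rng.normal(size=200))        # random walk -> I(1)

for name, series in [("level", level), ("first difference", np.diff(level))]:
    stat, pvalue, *_ = adfuller(series)
    print(f"{name}: ADF stat = {stat:.2f}, p = {pvalue:.3f}")
```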
The cross-sectionally augmented Dickey-Fuller (CADF) and the cross-sectionally augmented Im-Pesaran-Shin (CIPS) unit root tests indicate that the analysed variables become stationary at their first differences. The Lagrange multiplier bootstrap panel cointegration test shows the existence of a long-run relationship between the analysed variables. The dynamic ordinary least squares (DOLS) estimation technique indicates that energy consumption and tourism contribute to the levels of gas emissions, while increases in trade lead to environmental improvements. In addition, the EKC hypothesis cannot be supported as the sign of coefficients on GDP and GDP2 is negative and positive, respectively. Moreover, the Dumitrescu–Hurlin causality tests reveal a variety of causal relationships between the analysed variables. The OECD countries are suggested to invest in improving energy efficiency, regulate necessary environmental protection policies for the tourism sector in particular and promote trading activities through several types of encouragement acts.", "which Methodology ?", "DOLS", 955.0, 959.0], ["This paper reexamines the causality between energy consumption and economic growth with both bivariate and multivariate models by applying the recently developed methods of cointegration and Hsiao's version of the Granger causality to transformed U.S. data for the period 1947-1990. The Phillips-Perron (PP) tests reveal that the original series are not stationary and, therefore, a first differencing is performed to secure stationarity. The study finds no causal linkages between energy consumption and economic growth. Energy and gross national product (GNP) each live a life of its own. The results of this article are consistent with some of the past studies that find no relationship between energy and GNP but are contrary to some other studies that find GNP unidirectionally causes energy consumption. Both the bivariate and trivariate models produce similar results. We also find that there is no causal relationship between energy consumption and industrial production. The United States is basically a service-oriented economy and changes in energy consumption can cause little or no changes in GNP. In other words, an implementation of energy conservation policy may not impair economic growth. 27 refs., 5 tabs.", "which Methodology ?", "Cointegration", 173.0, 186.0], ["Eye localization is an important part of a face recognition system, because its precision closely affects the performance of face recognition. Although various methods have already achieved high precision on face images of high quality, their precision will drop on low quality images. In this paper, we propose a robust eye localization method for low quality face images to improve the eye detection rate and localization precision. First, we propose a probabilistic cascade (P-Cascade) framework, in which we reformulate the traditional cascade classifier in a probabilistic way. The P-Cascade can give chance to each image patch contributing to the final result, regardless whether the patch is accepted or rejected by the cascade. Second, we propose two extensions to further improve the robustness and precision in the P-Cascade framework. These are: (1) extending the feature set, and (2) stacking two classifiers at multiple scales. Extensive experiments on JAFFE, BioID, LFW and a self-collected video surveillance database show that our method is comparable to state-of-the-art methods on high quality images and can work well on low quality images. 
This work supplies a solid base for face recognition applications under unconstrained or surveillance environments.", "which Methods ?", "Probabilistic Cascade ", NaN, NaN], ["Eye localization is an important part of a face recognition system, because its precision closely affects the performance of face recognition. Although various methods have already achieved high precision on face images of high quality, their precision will drop on low quality images. In this paper, we propose a robust eye localization method for low quality face images to improve the eye detection rate and localization precision. First, we propose a probabilistic cascade (P-Cascade) framework, in which we reformulate the traditional cascade classifier in a probabilistic way. The P-Cascade can give chance to each image patch contributing to the final result, regardless whether the patch is accepted or rejected by the cascade. Second, we propose two extensions to further improve the robustness and precision in the P-Cascade framework. These are: (1) extending the feature set, and (2) stacking two classifiers at multiple scales. Extensive experiments on JAFFE, BioID, LFW and a self-collected video surveillance database show that our method is comparable to state-of-the-art methods on high quality images and can work well on low quality images. This work supplies a solid base for face recognition applications under unconstrained or surveillance environments.", "which Methods ?", "Probabilistic cascade ", NaN, NaN], ["Many computer vision problems (e.g., camera calibration, image alignment, structure from motion) are solved through a nonlinear optimization method. It is generally accepted that 2nd order descent methods are the most robust, fast and reliable approaches for nonlinear optimization of a general smooth function. However, in the context of computer vision, 2nd order descent methods have two main drawbacks: (1) The function might not be analytically differentiable and numerical approximations are impractical. (2) The Hessian might be large and not positive definite. To address these issues, this paper proposes a Supervised Descent Method (SDM) for minimizing a Non-linear Least Squares (NLS) function. During training, the SDM learns a sequence of descent directions that minimizes the mean of NLS functions sampled at different points. In testing, SDM minimizes the NLS objective using the learned descent directions without computing the Jacobian nor the Hessian. We illustrate the benefits of our approach in synthetic and real examples, and show how SDM achieves state-of-the-art performance in the problem of facial feature detection. The code is available at www.humansensing.cs.cmu.edu/intraface.", "which Methods ?", "SDM", 643.0, 646.0], ["In this paper, we take a look at an enhanced approach for eye detection under difficult acquisition circumstances such as low-light, distance, pose variation, and blur. We present a novel correlation filter based eye detection pipeline that is specifically designed to reduce face alignment errors, thereby increasing eye localization accuracy and ultimately face recognition accuracy. The accuracy of our eye detector is validated using data derived from the Labeled Faces in the Wild (LFW) and the Face Detection on Hard Datasets Competition 2011 (FDHD) sets. 
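The SDM entry above learns descent directions instead of computing Jacobians or Hessians. Below is a toy sketch of one supervised-descent training stage, ridge-regressing the needed parameter update onto features; everything here is synthetic, and real SDM uses SIFT features sampled around the current landmark estimates.

```python
# Sketch: one supervised-descent-style stage on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n, feat_dim, shape_dim = 500, 128, 10
x_true = rng.normal(size=(n, shape_dim))                       # ground truth
x_cur = x_true + rng.normal(scale=0.5, size=(n, shape_dim))    # perturbed start
dx = x_true - x_cur                                            # needed update

# Stand-in features: in real SDM these are image descriptors sampled at the
# current estimate, and hence correlated with the misalignment.
W = rng.normal(size=(shape_dim, feat_dim))
feats = dx @ W + 0.1 * rng.normal(size=(n, feat_dim))

# Ridge regression from features to the parameter update (the descent map).
A = feats.T @ feats + 1e-2 * np.eye(feat_dim)
R = np.linalg.solve(A, feats.T @ dx)
x_next = x_cur + feats @ R
print("mean abs error before:", np.abs(dx).mean())
print("mean abs error after: ", np.abs(x_true - x_next).mean())
```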
The results on the LFW dataset also show that the proposed algorithm exhibits enhanced performance, compared to another correlation filter based detector, and that a considerable increase in face recognition accuracy may be achieved by focusing more effort on the eye localization stage of the face recognition process. Our results on the FDHD dataset show that our eye detector exhibits superior performance, compared to 11 different state-of-the-art algorithms, on the entire set of difficult data without any per-set modifications to our detection or preprocessing algorithms. The immediate application of eye detection is automatic face recognition, though many good applications exist in other areas, including medical research, training simulators, communication systems for the disabled, and automotive engineering.", "which Methods ?", "Novel correlation filter ", 182.0, 207.0], ["ABSTRACT On December 31, 2019, the World Health Organization was notified about a cluster of pneumonia of unknown aetiology in the city of Wuhan, China. Chinese authorities later identified a new coronavirus (2019-nCoV) as the causative agent of the outbreak. As of January 23, 2020, 655 cases have been confirmed in China and several other countries. Understanding the transmission characteristics and the potential for sustained human-to-human transmission of 2019-nCoV is critically important for coordinating current screening and containment strategies, and determining whether the outbreak constitutes a public health emergency of international concern (PHEIC). We performed stochastic simulations of early outbreak trajectories that are consistent with the epidemiological findings to date. We found the basic reproduction number, R0, to be around 2.2 (90% high density interval 1.4–3.8), indicating the potential for sustained human-to-human transmission. Transmission characteristics appear to be of a similar magnitude to severe acute respiratory syndrome-related coronavirus (SARS-CoV) and the 1918 pandemic influenza. These findings underline the importance of heightened screening, surveillance and control efforts, particularly at airports and other travel hubs, in order to prevent further international spread of 2019-nCoV.", "which Methods ?", "Stochastic simulations of early outbreak trajectories", 681.0, 734.0], ["This paper addresses the problem of facial landmark localization and tracking from a single camera. We present a two-stage cascaded deformable shape model to effectively and efficiently localize facial landmarks with large head pose variations. For face detection, we propose a group sparse learning method to automatically select the most salient facial landmarks. By introducing a 3D face shape model, we use Procrustes analysis to achieve pose-free facial landmark initialization. For deformation, the first step uses mean-shift local search with a constrained local model to rapidly approach the global optimum. The second step uses component-wise active contours to discriminatively refine the subtle shape variation. Our framework can simultaneously handle face detection, pose-free landmark localization and tracking in real time. Extensive experiments are conducted on both laboratory-environment face databases and face-in-the-wild databases. 
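The 2019-nCoV entry reports R0 around 2.2 from stochastic simulations of early outbreak trajectories. Below is a toy Galton-Watson branching process in that spirit, with negative-binomial offspring; the dispersion parameter is an assumption, and the paper's exact model differs.

```python
# Sketch: branching-process outbreak simulation with R0 = 2.2.
import numpy as np

rng = np.random.default_rng(42)
R0, k = 2.2, 0.5          # mean offspring and (assumed) dispersion

def simulate(generations=8):
    cases, total = 1, 1
    for _ in range(generations):
        if cases == 0:
            break
        p = k / (k + R0)  # NB parameterized by mean R0, dispersion k
        cases = rng.negative_binomial(k * cases, p)
        total += cases
    return total

sizes = [simulate() for _ in range(1000)]
print("median outbreak size:", int(np.median(sizes)))
print("fraction staying below 10 cases:", np.mean([s < 10 for s in sizes]))
```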
All results demonstrate that our approach has certain advantages over state-of-the-art methods in handling pose variations.", "which Methods ?", "Pose-free", 440.0, 449.0], ["This paper empirically examines the suitability of monetary union in East African Community members, namely Burundi, Kenya, Rwanda, Tanzania and Uganda, on the basis of business cycle synchronization. This research considers annual GDP (gross domestic product) data from the IMF (International Monetary Fund) for the period of 1980 to 2010. In order to extract the business cycles and trends, the study uses the HP (Hodrick-Prescott) and BP (band-pass) filters. After identifying the cycles and trends of the business cycle, the study considers cross-country correlation analysis and analysis of variance technique to examine whether EAC (East African Community) countries are characterized by synchronized business cycles or not. The results show that four EAC countries (Burundi, Kenya, Tanzania and Uganda) among the five have exhibited similar business cycle patterns and trends over the last ten years since the formation of the EAC. The research concludes that these countries, except Rwanda, do not differ significantly in transitory or cycle components but do differ in permanent components, especially in growth trend. Key words: Business cycle synchronization, optimum currency area, East African community, monetary union, development.", "which Countries ?", "East African Community", 69.0, 91.0], ["Do changes in monetary policy affect inflation and output in the East African Community (EAC)? We find that (i) Monetary Transmission Mechanism (MTM) tends to be generally weak when using standard statistical inferences, but somewhat strong when using non-standard inference methods; (ii) when MTM is present, the precise transmission channels and their importance differ across countries; and (iii) reserve money and the policy rate, two frequently used instruments of monetary policy, sometimes move in directions that exert offsetting expansionary and contractionary effects on inflation - posing challenges to harmonization of monetary policies across the EAC and transition to a future East African Monetary Union. The paper offers some suggestions for strengthening the MTM in the EAC.", "which Countries ?", "East African Community", 65.0, 87.0], ["This paper investigates the short-run and long-run causality issues between electricity consumption and economic growth in Turkey by using the co-integration and vector error-correction models with structural breaks. It employs annual data covering the period 1968–2005. The study also explores the causal relationship between these variables in terms of the three error-correction based Granger causality models. The empirical results are as follows: i) both variables are nonstationary in levels and stationary in first differences with/without structural breaks, ii) there exists a long-run relationship between the variables, iii) there is unidirectional causality running from electricity consumption to economic growth. The overall results indicate that the “growth hypothesis” for the electricity consumption and growth nexus holds in Turkey. Thus, energy conservation policies, such as rationing electricity consumption, may harm economic growth in Turkey.", "which Countries ?", "Turkey", 123.0, 129.0], ["We develop a model in which governments' financing needs exceed the socially optimal level because public resources are diverted to serve the narrow interests of the group in power. 
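The EAC entry above extracts business cycles with HP and BP filters and checks synchronization via cross-country correlations. Below is a minimal sketch with the statsmodels HP filter on synthetic annual GDP series; lamb=100 is a common choice for annual data, assumed here.

```python
# Sketch: HP-filter cycle extraction and cross-country cycle correlation.
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(7)
t = np.arange(31)                                  # 1980-2010, annual
common = np.sin(t / 3.0)                           # shared synthetic cycle
gdp_a = 100 * np.exp(0.030 * t) + 3 * common + rng.normal(0, 0.5, 31)
gdp_b = 80 * np.exp(0.025 * t) + 3 * common + rng.normal(0, 0.5, 31)

cycle_a, _trend_a = hpfilter(gdp_a, lamb=100)
cycle_b, _trend_b = hpfilter(gdp_b, lamb=100)
print("cycle correlation:", np.corrcoef(cycle_a, cycle_b)[0, 1])
```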
From a social welfare perspective, this results in undue pressure on the central bank to extract seigniorage. Monetary policy also suffers from an expansive bias, owing to the authorities' inability to precommit to price stability. Such a conjecture about the fiscal-monetary policy mix appears quite relevant in Africa, with deep implications for the incentives of fiscally heterogeneous countries to form a currency union. We calibrate the model to data for West Africa and use it to assess proposed ECOWAS monetary unions. Fiscal heterogeneity indeed appears critical in shaping regional currency blocs that would be mutually beneficial for all their members. In particular, Nigeria's membership in the configurations currently envisaged would not be in the interests of other ECOWAS countries unless it were accompanied by effective containment on Nigeria's financing needs.", "which Countries ?", "ECOWAS", 684.0, 690.0], ["Through a quantitative content analysis, this study applies situational crisis communication theory (SCCT) to investigate how 13 corporate and government organizations responded to the first phase of the 2009 flu pandemic. The results indicate that government organizations emphasized providing instructing information to their primary publics such as guidelines about how to respond to the crisis. On the other hand, organizations representing corporate interests emphasized reputation management in their crisis responses, frequently adopting denial, diminish, and reinforce response strategies. In addition, both government and corporate organizations used social media more often than traditional media in responding to the crisis. Finally, the study expands SCCT's response options.", "which Technology ?", "social media", 660.0, 672.0], ["Social media for emergency management has emerged as a vital resource for government agencies across the globe. In this study, we explore social media strategies employed by governments to respond to major weather-related events. Using social media monitoring software, we analyze how social media is used in six cities following storms in the winter of 2012. We listen, monitor, and assess online discourse available on the full range of social media outlets (e.g., Twitter, Facebook, blogs). To glean further insight, we conduct a survey and extract themes from citizen comments and government's response. We conclude with recommendations on how practitioners can develop social media strategies that enable citizen participation in emergency management.", "which Technology ?", "social media", 0.0, 12.0], ["Distributed group support systems are likely to be widely used in the future as a means for dispersed groups of people to work together through computer networks. They combine characteristics of computer-mediated communication systems with the specialized tools and processes developed in the context of group decision support systems, to provide communications, a group memory, and tools and structures to coordinate the group process and analyze data. These tools and structures can take a wide variety of forms in order to best support computer-mediated interaction for different types of tasks and groups. This article summarizes five case studies of different distributed group support systems developed by the authors and their colleagues over the last decade to support different types of tasks and to accommodate fairly large numbers of participants (tens to hundreds). 
The case studies are placed within conceptual frameworks that aid in classifying and comparing such systems. The results of the case studies demonstrate that design requirements and the associated research issues for group support systems can be very different in the distributed environment compared to the decision room approach.", "which Technology ?", "Distributed group support systems", 0.0, 33.0], ["The notion of communities getting together during a disaster to help each other is common. However, how does this communal activity happen within the online world? Here we examine this issue using the Communities of Practice (CoP) approach. We extend CoP to multiple CoP (MCoPs) and examine the role of social media applications in disaster management, extending work done by Ahmed (2011). Secondary data in the form of newspaper reports during 2010 to 2011 were analysed to understand how social media, particularly Facebook and Twitter, facilitated the process of communication among various communities during the Queensland floods in 2010. The results of media-content analysis along with the findings of relevant literature were used to extend our existing understanding on various communities of practice involved in disaster management, their communication tasks and the role of Twitter and Facebook as common conducive platforms of communication during disaster management alongside traditional communication channels.", "which Technology ?", "Facebook and Twitter", 517.0, 537.0], ["The project DEKO (Detection of artificial objects in sea areas) is integrated in the German DeMarine-Security project and focuses on the detection and classification of ships and offshore artificial objects relying on TerraSAR-X as well as on RapidEye multispectral optical images. The objectives are 1/ the development of reliable detection algorithms and 2/ the definition of effective, customized service concepts. In addition to an earlier publication, we describe in the following paper some selected results of our work. The algorithms for TerraSAR-X have been extended to a processing chain including all needed steps for ship detection and ship signature analysis, with an emphasis on object segmentation. For Rapid Eye imagery, a ship detection algorithm has been developed. Finally, some applications are described: Ship monitoring in the Strait of Dover based on TerraSAR-X StripMap using AIS information for verification, analyzing TerraSAR-X HighResolution scenes of an industrial harbor and finally an example of surveying a wind farm using change detection.", "which Satellite sensor ?", "RapidEye", 243.0, 251.0], ["With the increasement of spatial resolution of remote sensing, the ship detection methods for low-resolution images are no longer suitable. In this study, a ship target automatic detection method for high-resolution remote sensing is proposed, which mainly contains steps of Otsu binary segmentation, morphological operation, calculation of target features and target judgment. The results show that almost all of the offshore ships can be detected, and the total detection rates are 94% and 91% with the experimental Google Earth data and GF-1 data respectively. The ship target automatic detection method proposed in this study is more suitable for detecting ship targets offshore rather than anchored along the dock.", "which Satellite sensor ?", "Google Earth", 518.0, 530.0], ["In this letter, we present a new method to detect inshore ships using shape and context information.
We first propose a new energy function based on an active contour model to segment water and land and minimize it with an iterative global optimization method. The proposed energy performs well on the different intensity distributions between water and land and produces a result that can be well used in shape and context analyses. In the segmented image, ships are detected with successive shape analysis, including shape analysis in the localization of ship head and region growing in computing the width and length of ship. Finally, to locate ships accurately and remove the false alarms, we unify them with a binary linear programming problem by utilizing the context information. Experiments on QuickBird images show the robustness and precision of our method.", "which Satellite sensor ?", "QuickBird", 802.0, 811.0], ["The idea of exploiting Genetic Programming (GP) to estimate software development effort is based on the observation that the effort estimation problem can be formulated as an optimization problem. Indeed, among the possible models, we have to identify the one providing the most accurate estimates. To this end a suitable measure to evaluate and compare different models is needed. However, in the context of effort estimation there does not exist a unique measure that allows us to compare different models but several different criteria (e.g., MMRE, Pred(25), MdMRE) have been proposed. Aiming at getting an insight on the effects of using different measures as fitness function, in this paper we analyzed the performance of GP using each of the five most used evaluation criteria. Moreover, we designed a Multi-Objective Genetic Programming (MOGP) based on Pareto optimality to simultaneously optimize the five evaluation measures and analyzed whether MOGP is able to build estimation models more accurate than those obtained using GP. The results of the empirical analysis, carried out using three publicly available datasets, showed that the choice of the fitness function significantly affects the estimation accuracy of the models built with GP and the use of some fitness functions allowed GP to get estimation accuracy comparable with the ones provided by MOGP.", "which Algorithm(s) ?", "MOGP", 845.0, 849.0], ["The problem known as CAITO refers to the determination of an order to integrate and test classes and aspects that minimizes stubbing costs. Such problem is NP-hard and to solve it efficiently, search based algorithms have been used, mainly evolutionary ones. However, the problem is very complex since it involves different factors that may influence the stubbing process, such as complexity measures, contractual issues and so on. These factors are usually in conflict and different possible solutions for the problem exist. To deal properly with this problem, this work explores the use of multi-objective optimization algorithms. The paper presents results from the application of two evolutionary algorithms - NSGA-II and SPEA2 - to the CAITO problem in four real systems, implemented in AspectJ. Both multi-objective algorithms are evaluated and compared with the traditional Tarjan's algorithm and with a mono-objective genetic algorithm. Moreover, it is shown how the tester can use the found solutions, according to the test goals.", "which Algorithm(s) ?", "NSGA-II", 714.0, 721.0], ["The success of the team allocation in an agile software development project is essential.
The agile team allocation is an NP-hard problem, since it comprises the allocation of self-organizing and cross-functional teams. Many researchers have driven efforts to apply Computational Intelligence techniques to solve this problem. This work presents a hybrid approach based on NSGA-II multi-objective metaheuristic and Mamdani Fuzzy Inference Systems to solve the agile team allocation problem, together with an initial evaluation of its use in a real environment.", "which Algorithm(s) ?", "NSGA-II", 371.0, 378.0], ["Drug repositioning is the only feasible option to immediately address the COVID-19 global challenge. We screened a panel of 48 FDA-approved drugs against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) which were preselected by an assay of SARS-CoV. We identified 24 potential antiviral drug candidates against SARS-CoV-2 infection. Some drug candidates showed very low 50% inhibitory concentrations (IC50s), and in particular, two FDA-approved drugs\u2014niclosamide and ciclesonide\u2014were notable in some respects.", "which Has participant ?", "SARS-CoV-2", 203.0, 213.0], ["A novel coronavirus, named SARS-CoV-2, emerged in 2019 from Hubei region in China and rapidly spread worldwide. As no approved therapeutics exists to treat Covid-19, the disease associated to SARS-Cov-2, there is an urgent need to propose molecules that could quickly enter into clinics. Repurposing of approved drugs is a strategy that can bypass the time consuming stages of drug development. In this study, we screened the Prestwick Chemical Library\u00ae composed of 1,520 approved drugs in an infected cell-based assay. 90 compounds were identified. The robustness of the screen was assessed by the identification of drugs, such as Chloroquine derivatives and protease inhibitors, already in clinical trials. The hits were sorted according to their chemical composition and their known therapeutic effect, then EC50 and CC50 were determined for a subset of compounds. Several drugs, such as Azithromycine, Opipramol, Quinidine or Omeprazol present antiviral potency with >21400 m-2, providing an average of 600 cm2 of hard substrate per m2 on this mudflat. Its shells were used as habitat almost exclusively by the introduced Atlantic slipper shell Crepidula convexa, the introduced Asian anemone Diadumene lineata, and 2 native hermit crabs Pagurus hirsutiusculus and P. granosimanus. In addition, manipulative experiments showed that the abundance of the mudsnail Nassarius fraterculus and percentage cover of the eelgrass Zostera japonica, both introduced from the NW Pacific, increased significantly in the presence of B. attramentaria. The most likely mechanisms for these facilitations are indirect grazing effects and bioturbation, respectively. Since the precise arrival dates of all these invaders are unknown, the role of B. attramentaria's positive interactions in their initial invasion success is unknown. Nevertheless, by providing habitat for 2 non-native epibionts and 2 native species, and by facilitating 2 other invaders, the non-native B. attramentaria enhances the level of invasion by all 6 species.", "which Habitat ?", "Marine", 358.0, 364.0], ["Besides exacerbated exploitation, pollution, flow alteration and habitats degradation, freshwater biodiversity is also threatened by biological invasions. This paper addresses how native aquatic macrophyte communities are affected by the non-native species Urochloa arrecta, a current successful invader in Brazilian freshwater systems.
We compared the native macrophytes colonizing patches dominated and non-dominated by this invader species. We surveyed eight streams in Northwest Parana State (Brazil). In each stream, we recorded native macrophytes' richness and biomass in sites where U. arrecta was dominant and in sites where it was not dominant or absent. No native species were found in seven, out of the eight investigated sites where U. arrecta was dominant. Thus, we found higher native species richness, Shannon index and native biomass values in sites without dominance of U. arrecta than in sites dominated by this invader. Although difficult to conclude about causes of such differences, we infer that the elevated biomass production by this grass might be the primary reason for alterations in invaded environments and for the consequent impacts on macrophytes' native communities. However, biotic resistance offered by native richer sites could be an alternative explanation for our results. To mitigate potential impacts and to prevent future environmental perturbations, we propose mechanical removal of the invasive species and maintenance or restoration of riparian vegetation, for freshwater ecosystems have vital importance for the maintenance of ecological services and biodiversity and should be preserved.", "which Habitat ?", "Freshwater", 87.0, 97.0], ["Despite long-standing interest of terrestrial ecologists, freshwater ecosystems are a fertile, yet unappreciated, testing ground for applying community phylogenetics to uncover mechanisms of species assembly. We quantify phylogenetic clustering and overdispersion of native and non-native fishes of a large river basin in the American Southwest to test for the mechanisms (environmental filtering versus competitive exclusion) and spatial scales influencing community structure. Contrary to expectations, non-native species were phylogenetically clustered and related to natural environmental conditions, whereas native species were not phylogenetically structured, likely reflecting human-related changes to the basin. The species that are most invasive (in terms of ecological impacts) tended to be the most phylogenetically divergent from natives across watersheds, but not within watersheds, supporting the hypothesis that Darwin's naturalization conundrum is driven by the spatial scale. Phylogenetic distinctiveness may facilitate non-native establishment at regional scales, but environmental filtering restricts local membership to closely related species with physiological tolerances for current environments. By contrast, native species may have been phylogenetically clustered in historical times, but species loss from contemporary populations by anthropogenic activities has likely shaped the phylogenetic signal. Our study implies that fundamental mechanisms of community assembly have changed, with fundamental consequences for the biogeography of both native and non-native species.", "which Habitat ?", "Freshwater", 58.0, 68.0], ["In recent decades the grass Phragmites australis has been aggressively invading coastal, tidal marshes of North America, and in many areas it is now considered a nuisance species. While P. australis has historically been restricted to the relatively benign upper border of brackish and salt marshes, it has been expanding seaward into more physiologically stressful regions. Here we test a leading hypothesis that the spread of P. australis is due to anthropogenic modification of coastal marshes.
We did a field experiment along natural borders between stands of P. australis and the other dominant grasses and rushes (i.e., matrix vegetation) in a brackish marsh in Rhode Island, USA. We applied a pulse disturbance in one year by removing or not removing neighboring matrix vegetation and adding three levels of nutrients (specifically nitrogen) in a factorial design, and then we monitored the aboveground performance of P. australis and the matrix vegetation. Both disturbances increased the density, height, and biomass of shoots of P. australis, and the effects of fertilization were more pronounced where matrix vegetation was removed. Clearing competing matrix vegetation also increased the distance that shoots expanded and their reproductive output, both indicators of the potential for P. australis to spread within and among local marshes. In contrast, the biomass of the matrix vegetation decreased with increasing severity of disturbance. Disturbance increased the total aboveground production of plants in the marsh as matrix vegetation was displaced by P. australis. A greenhouse experiment showed that, with increasing nutrient levels, P. australis allocates proportionally more of its biomass to aboveground structures used for spread than to belowground structures used for nutrient acquisition. Therefore, disturbances that enrich nutrients or remove competitors promote the spread of P. australis by reducing belowground competition for nutrients between P. australis and the matrix vegetation, thus allowing P. australis, the largest plant in the marsh, to expand and displace the matrix vegetation. Reducing nutrient load and maintaining buffers of matrix vegetation along the terrestrial-marsh ecotone will, therefore, be important methods of control for this nuisance species.", "which Habitat ?", "Terrestrial", 2209.0, 2220.0], ["Summary Biological invasions threaten ecosystem integrity and biodiversity, with numerous adverse implications for native flora and fauna. Established populations of two notorious freshwater invaders, the snail Tarebia granifera and the fish Pterygoplichthys disjunctivus, have been reported on three continents and are frequently predicted to be in direct competition with native species for dietary resources. Using comparisons of species' isotopic niche widths and stable isotope community metrics, we investigated whether the diets of the invasive T. granifera and P. disjunctivus overlapped with those of native species in a highly invaded river. We also attempted to resolve diet composition for both species, providing some insight into the original pathway of invasion in the Nseleni River, South Africa. Stable isotope metrics of the invasive species were similar to or consistently mid-range in comparison with their native counterparts, with the exception of markedly more uneven spread in isotopic space relative to indigenous species. Dietary overlap between the invasive P. disjunctivus and native fish was low, with the majority of shared food resources having overlaps of <0.26. The invasive T. granifera showed effectively no overlap with the native planorbid snail. However, there was a high degree of overlap between the two invasive species (~0.86). Bayesian mixing models indicated that detrital mangrove Barringtonia racemosa leaves contributed the largest proportion to P. disjunctivus diet (0.12\u20130.58), while the diet of T.
granifera was more variable with high proportions of detrital Eichhornia crassipes (0.24\u20130.60) and Azolla filiculoides (0.09\u20130.33) as well as detrital Barringtonia racemosa leaves (0.00\u20130.30). Overall, although the invasive T. granifera and P. disjunctivus were not in direct competition for dietary resources with native species in the Nseleni River system, their spread in isotopic space suggests they are likely to restrict energy available to higher consumers in the food web. Establishment of these invasive populations in the Nseleni River is thus probably driven by access to resources unexploited or unavailable to native residents.", "which Habitat ?", "Freshwater", 180.0, 190.0], ["Propagule pressure is fundamental to invasion success, yet our understanding of its role in the marine domain is limited. Few studies have manipulated or controlled for propagule supply in the field, and consequently there is little empirical data to test for non-linearities or interactions with other processes. Supply of non-indigenous propagules is most likely to be elevated in urban estuaries, where vessels congregate and bring exotic species on fouled hulls and in ballast water. These same environments are also typically subject to elevated levels of disturbance from human activities, creating the potential for propagule pressure and disturbance to interact. By applying a controlled dose of free-swimming larvae to replicate assemblages, we were able to quantify a dose-response relationship at much finer spatial and temporal scales than previously achieved in the marine environment. We experimentally crossed controlled levels of propagule pressure and disturbance in the field, and found that both were required for invasion to occur. Only recruits that had settled onto bare space survived beyond three months, precluding invader persistence in undisturbed communities. In disturbed communities initial survival on bare space appeared stochastic, such that a critical density was required before the probability of at least one colony surviving reached a sufficient level. Those that persisted showed 75% survival over the following three months, signifying a threshold past which invaders were resilient to chance mortality. Urban estuaries subject to anthropogenic disturbance are common throughout the world, and similar interactions may be integral to invasion dynamics in these ecosystems.", "which Habitat ?", "Marine", 96.0, 102.0], ["Hypoxia is increasing in marine and estuarine systems worldwide, primarily due to anthropogenic causes. Periodic hypoxia represents a pulse disturbance, with the potential to restructure estuarine biotic communities. We chose the shallow, epifaunal community in the lower Chesapeake Bay, Virginia, USA, to test the hypothesis that low dissolved oxygen (DO) (<4 mg l-1) affects community dynamics by reducing the cover of spatial dominants, creating space both for less dominant native species and for invasive species. Settling panels were deployed at shallow depths in spring 2000 and 2001 at Gloucester Point, Virginia, and were manipulated every 2 wk from late June to mid-August. Manipulation involved exposing epifaunal communities to varying levels of DO for up to 24 h followed by redeployment in the York River. Exposure to low DO affected both species composition (presence or absence) and the abundance of the organisms present. Community dominance shifted away from barnacles as level of hypoxia increased.
Barnacles were important spatial dominants which reduced species diversity when locally abundant. The cover of Hydroides dianthus, a native serpulid polychaete, doubled when exposed to periodic hypoxia. Increased H. dianthus cover may indicate whether a local region has experienced periodic, local DO depletion and thus provide an indicator of poor water-quality conditions. In 2001, the combined cover of the invasive and cryptogenic species in this community, Botryllus schlosseri (tunicate), Molgula manhattensis (tunicate), Ficopomatus enigmaticus (polychaete) and Diadumene lineata (anemone), was highest on the plates exposed to moderately low DO (2 mg l-1 < DO < 4 mg l-1). All 4 of these species are now found worldwide and exhibit life histories well adapted for establishment in foreign habitats. Low DO events may enhance success of invasive species, which further stress marine and estuarine ecosystems.", "which Habitat ?", "Marine", 25.0, 31.0], ["Many ecosystems receive a steady stream of non-native species. How biotic resistance develops over time in these ecosystems will depend on how established invaders contribute to subsequent resistance. If invasion success and defence capacity (i.e. contribution to resistance) are correlated, then community resistance should increase as species accumulate. If successful invaders also cause most impact (through replacing native species with low defence capacity) then the effect will be even stronger. If successful invaders instead have weak defence capacity or even facilitative attributes, then resistance should decrease with time, as proposed by the invasional meltdown hypothesis. We analysed 1157 introductions of freshwater fish in Swedish lakes and found that species' invasion success was positively correlated with their defence capacity and impact, suggesting that these communities will develop stronger resistance over time. These insights can be used to identify scenarios where invading species are expected to cause large impact.", "which Habitat ?", "Freshwater", 722.0, 732.0], ["The expression of defensive morphologies in prey often is correlated with predator abundance or diversity over a range of temporal and spatial scales. These patterns are assumed to reflect natural selection via differential predation on genetically determined, fixed phenotypes. Phenotypic variation, however, also can reflect within-generation developmental responses to environmental cues (phenotypic plasticity). For example, water-borne effluents from predators can induce the production of defensive morphologies in many prey taxa. This phenomenon, however, has been examined only on narrow scales. Here, we demonstrate adaptive phenotypic plasticity in prey from geographically separated populations that were reared in the presence of an introduced predator. Marine snails exposed to predatory crab effluent in the field increased shell thickness rapidly compared with controls. Induced changes were comparable to (i) historical transitions in thickness previously attributed to selection by the invading predator and (ii) present-day clinal variation predicted from water temperature differences. Thus, predator-induced phenotypic plasticity may explain broad-scale geographic and temporal phenotypic variation. If inducible defenses are heritable, then selection on the reaction norm may influence coevolution between predator and prey.
Trade-offs may explain why inducible rather than constitutive defenses have evolved in several gastropod species.", "which Habitat ?", "Marine", 766.0, 772.0], ["1. Biological invasion theory predicts that the introduction and establishment of non-native species is positively correlated with propagule pressure. Releases of pet and aquarium fishes to inland waters has a long history; however, few studies have examined the demographic basis of their importation and incidence in the wild. 2. For the 1500 grid squares (10\u00d710 km) that make up England, data on human demographics (population density, numbers of pet shops, garden centres and fish farms), the numbers of non-native freshwater fishes (from consented licences) imported in those grid squares (i.e. propagule pressure), and the reported incidences (in a national database) of non-native fishes in the wild were used to examine spatial relationships between the occurrence of non-native fishes and the demographic factors associated with propagule pressure, as well as to test whether the demographic factors are statistically reliable predictors of the incidence of non-native fishes, and as such surrogate estimators of propagule pressure. 3. Principal coordinates of neighbour matrices analyses, used to generate spatially explicit models, and confirmatory factor analysis revealed that spatial distributions of non-native species in England were significantly related to human population density, garden centre density and fish farm density. Human population density and the number of fish imports were identified as the best predictors of propagule pressure. 4. Human population density is an effective surrogate estimator of non-native fish propagule pressure and can be used to predict likely areas of non-native fish introductions. In conjunction with fish movements, where available, human population densities can be used to support biological invasion monitoring programmes across Europe (and perhaps globally) and to inform management decisions as regards the prioritization of areas for the control of non-native fish introductions. \u00a9 Crown copyright 2010. Reproduced with the permission of her Majesty's Stationery Office. Published by John Wiley & Sons, Ltd.", "which Habitat ?", "Freshwater", 519.0, 529.0], ["The objective of this study was to test if morphological differences in pumpkinseed Lepomis gibbosus found in their native range (eastern North America) that are linked to feeding regime, competition with other species, hydrodynamic forces and habitat were also found among stream- and lake- or reservoir-dwelling fish in Iberian systems. The species has been introduced into these systems, expanding its range, and is presumably well adapted to freshwater Iberian Peninsula ecosystems. The results show a consistent pattern for size of lateral fins, with L. gibbosus that inhabit streams in the Iberian Peninsula having longer lateral fins than those inhabiting reservoirs or lakes. Differences in fin placement, body depth and caudal peduncle dimensions do not differentiate populations of L. gibbosus from lentic and lotic water bodies and, therefore, are not consistent with functional expectations. Lepomis gibbosus from lotic and lentic habitats also do not show a consistent pattern of internal morphological differentiation, probably due to the lack of lotic-lentic differences in prey type. 
Overall, the univariate and multivariate analyses show that most of the external and internal morphological characters that vary among populations do not differentiate lotic from lentic Iberian populations. The lack of expected differences may be a consequence of the high seasonal flow variation in Mediterranean streams, and the resultant low- or no-flow conditions during periods of summer drought.", "which Habitat ?", "Freshwater", 446.0, 456.0], ["The relative importance of plasticity vs. adaptation for the spread of invasive species has rarely been studied. We examined this question in a clonal population of invasive freshwater snails (Potamopyrgus antipodarum) from the western United States by testing whether observed plasticity in life history traits conferred higher fitness across a range of temperatures. We raised isofemale lines from three populations from different climate regimes (high- and low-elevation rivers and an estuary) in a split-brood, common-garden design in three temperatures. We measured life history and growth traits and calculated population growth rate (as a measure of fitness) using an age-structured projection matrix model. We found a strong effect of temperature on all traits, but no evidence for divergence in the average level of traits among populations. Levels of genetic variation and significant reaction norm divergence for life history traits suggested some role for adaptation. Plasticity varied among traits and was lowest for size and reproductive traits compared to age-related traits and fitness. Plasticity in fitness was intermediate, suggesting that invasive populations are not general-purpose genotypes with respect to the range of temperatures studied. Thus, by considering plasticity in fitness and its component traits, we have shown that trait plasticity alone does not yield the same fitness across a relevant set of temperature conditions.", "which Habitat ?", "Freshwater", 174.0, 184.0], ["Species invasion is one of the leading mechanisms of global environmental change, particularly in freshwater ecosystems. We used the Food and Agriculture Organization's Database of Invasive Aquatic...", "which Habitat ?", "Freshwater", 98.0, 108.0], ["Species that are frequently introduced to an exotic range have a high potential of becoming invasive. Besides propagule pressure, however, no other generally strong determinant of invasion success is known. Although evidence has accumulated that human affiliates (domesticates, pets, human commensals) also have high invasion success, existing studies do not distinguish whether this success can be completely explained by or is partly independent of propagule pressure. Here, we analyze both factors independently, propagule pressure and human affiliation. We also consider a third factor directly related to humans, hunting, and 17 traits on each species' population size and extent, diet, body size, and life history. Our dataset includes all 2362 freshwater fish, mammals, and birds native to Europe or North America. In contrast to most previous studies, we look at the complete invasion process consisting of (1) introduction, (2) establishment, and (3) spread. In this way, we not only consider which of the introduced species became invasive but also which species were introduced. Of the 20 factors tested, propagule pressure and human affiliation were the two strongest determinants of invasion success across all taxa and steps. 
This was true for multivariate analyses that account for intercorrelations among variables as well as univariate analyses, suggesting that human affiliation influenced invasion success independently of propagule pressure. Some factors affected the different steps of the invasion process antagonistically. For example, game species were much more likely to be introduced to an exotic continent than nonhunted species but tended to be less likely to establish themselves and spread. Such antagonistic effects show the importance of considering the complete invasion process.", "which Measure of invasion success ?", "Establishment", 937.0, 950.0], ["Abstract: Roads are believed to be a major contributing factor to the ongoing spread of exotic plants. We examined the effect of road improvement and environmental variables on exotic and native plant diversity in roadside verges and adjacent semiarid grassland, shrubland, and woodland communities of southern Utah (U.S.A.). We measured the cover of exotic and native species in roadside verges and both the richness and cover of exotic and native species in adjacent interior communities (50 m beyond the edge of the road cut) along 42 roads stratified by level of road improvement (paved, improved surface, graded, and four\u2010wheel\u2010drive track). In roadside verges along paved roads, the cover of Bromus tectorum was three times as great (27%) as in verges along four\u2010wheel\u2010drive tracks (9%). The cover of five common exotic forb species tended to be lower in verges along four\u2010wheel\u2010drive tracks than in verges along more improved roads. The richness and cover of exotic species were both more than 50% greater, and the richness of native species was 30% lower, at interior sites adjacent to paved roads than at those adjacent to four\u2010wheel\u2010drive tracks. In addition, environmental variables relating to dominant vegetation, disturbance, and topography were significantly correlated with exotic and native species richness and cover. Improved roads can act as conduits for the invasion of adjacent ecosystems by converting natural habitats to those highly vulnerable to invasion. However, variation in dominant vegetation, soil moisture, nutrient levels, soil depth, disturbance, and topography may render interior communities differentially susceptible to invasions originating from roadside verges. Plant communities that are both physically invasible (e.g., characterized by deep or fertile soils) and disturbed appear most vulnerable. Decision\u2010makers considering whether to build, improve, and maintain roads should take into account the potential spread of exotic plants.", "which Measure of invasion success ?", "Cover of exotic species", 967.0, 990.0], ["What determines the number of alien species in a given region? \u2018Native biodiversity\u2019 and \u2018human impact\u2019 are typical answers to this question. Indeed, studies comparing different regions have frequently found positive relationships between number of alien species and measures of both native biodiversity (e.g. the number of native species) and human impact (e.g. human population). These relationships are typically explained by biotic acceptance or resistance, i.e. by influence of native biodiversity and human impact on the second step of the invasion process, establishment. The first step of the invasion process, introduction, has often been ignored.
Here we investigate whether relationships between number of alien mammals and native biodiversity or human impact in 43 European countries are mainly shaped by differences in number of introduced mammals or establishment success. Our results suggest that correlation between number of native and established mammals is spurious, as it is simply explainable by the fact that both quantities are linked to country area. We also demonstrate that countries with higher human impact host more alien mammals than other countries because they received more introductions than other countries. Differences in number of alien mammals cannot be explained by differences in establishment success. Our findings highlight importance of human activities and question, at least for mammals in Europe, importance of biotic acceptance and resistance.", "which Measure of invasion success ?", "Establishment success", 864.0, 885.0], ["Genetic diversity is supposed to support the colonization success of expanding species, in particular in situations where microsite availability is constrained. Addressing the role of genetic diversity in plant invasion experimentally requires its manipulation independent of propagule pressure. To assess the relative importance of these components for the invasion of Senecio vernalis, we created propagule mixtures of four levels of genotype diversity by combining seeds across remote populations, across proximate populations, within single populations and within seed families. In a first container experiment with constant Festuca rupicola density as matrix, genotype diversity was crossed with three levels of seed density. In a second experiment, we tested for effects of establishment limitation and genotype diversity by manipulating Festuca densities. Increasing genetic diversity had no effects on abundance and biomass of S. vernalis but positively affected the proportion of large individuals to small individuals. Mixtures composed from proximate populations had a significantly higher proportion of large individuals than mixtures composed from within seed families only. High propagule pressure increased emergence and establishment of S. vernalis but had no effect on individual growth performance. Establishment was favoured in containers with Festuca, but performance of surviving seedlings was higher in open soil treatments. For S. vernalis invasion, we found a shift in driving factors from density dependence to effects of genetic diversity across life stages. While initial abundance was mostly linked to the amount of seed input, genetic diversity, in contrast, affected later stages of colonization probably via sampling effects and seemed to contribute to filtering the genotypes that finally grew up. In consequence, when disentangling the mechanistic relationships of genetic diversity, seed density and microsite limitation in colonization of invasive plants, a clear differentiation between initial emergence and subsequent survival to juvenile and adult stages is required.", "which Measure of invasion success ?", "Abundance", 910.0, 919.0], ["Abstract: We developed a method to predict the potential of non\u2010native reptiles and amphibians (herpetofauna) to establish populations. This method may inform efforts to prevent the introduction of invasive non\u2010native species. We used boosted regression trees to determine whether nine variables influence establishment success of introduced herpetofauna in California and Florida. We used an independent data set to assess model performance. 
Propagule pressure was the variable most strongly associated with establishment success. Species with short juvenile periods and species with phylogenetically more distant relatives in regional biotas were more likely to establish than species that start breeding later and those that have close relatives. Average climate match (the similarity of climate between native and non\u2010native range) and life form were also important. Frogs and lizards were the taxonomic groups most likely to establish, whereas a much lower proportion of snakes and turtles established. We used results from our best model to compile a spreadsheet\u2010based model for easy use and interpretation. Probability scores obtained from the spreadsheet model were strongly correlated with establishment success as were probabilities predicted for independent data by the boosted regression tree model. However, the error rate for predictions made with independent data was much higher than with cross validation using training data. This difference in predictive power does not preclude use of the model to assess the probability of establishment of herpetofauna because (1) the independent data had no information for two variables (meaning the full predictive capacity of the model could not be realized) and (2) the model structure is consistent with the recent literature on the primary determinants of establishment success for herpetofauna. It may still be difficult to predict the establishment probability of poorly studied taxa, but it is clear that non\u2010native species (especially lizards and frogs) that mature early and come from environments similar to that of the introduction region have the highest probability of establishment.", "which Measure of invasion success ?", "Establishment", 306.0, 319.0], ["1 The cultivation and dissemination of alien ornamental plants increases their potential to invade. More specifically, species with bird\u2010dispersed seeds can potentially infiltrate natural nucleation processes in savannas. 2 To test (i) whether invasion depends on facilitation by host trees, (ii) whether propagule pressure determines invasion probability, and (iii) whether alien host plants are better facilitators of alien fleshy\u2010fruited species than indigenous species, we mapped the distribution of alien fleshy\u2010fruited species planted inside a military base, and compared this with the distribution of alien and native fleshy\u2010fruited species established in the surrounding natural vegetation. 3 Abundance and diversity of fleshy\u2010fruited plant species was much greater beneath tree canopies than in open grassland and, although some native fleshy\u2010fruited plants were found both beneath host trees and in the open, alien fleshy\u2010fruited plants were found only beneath trees. 4 Abundance of fleshy\u2010fruited alien species in the natural savanna was positively correlated with the number of individuals of those species planted in the grounds of the military base, while the species richness of alien fleshy\u2010fruited taxa decreased with distance from the military base, supporting the notion that propagule pressure is a fundamental driver of invasions. 5 There were more fleshy\u2010fruited species beneath native Acacia tortilis than beneath alien Prosopis sp. trees of the equivalent size. Although there were significant differences in native plant assemblages beneath these hosts, the proportion of alien to native fleshy\u2010fruited species did not differ with host. 6 Synthesis. 
Birds facilitate invasion of a semi\u2010arid African savanna by alien fleshy\u2010fruited plants, and this process does not require disturbance. Instead, propagule pressure and a few simple biological observations define the probability that a plant will invade, with alien species planted in gardens being a major source of propagules. Some invading species have the potential to transform this savanna by overtopping native trees, leading to ecosystem\u2010level impacts. Likewise, the invasion of the open savanna by alien host trees (such as Prosopis sp.) may change the diversity, abundance and species composition of the fleshy\u2010fruited understorey. These results illustrate the complex interplay between propagule pressure, facilitation, and a range of other factors in biological invasions.", "which Measure of invasion success ?", "Abundance", 701.0, 710.0], ["Questions: How did post-wildfire understorey plant community response, including exotic species response, differ between pre-fire treated areas that were less severely burned, and pre-fire untreated areas that were more severely burned? Were these differences consistent through time? Location: East-central Arizona, southwestern US. Methods: We used a multi-year data set from the 2002 Rodeo\u2013Chediski Fire to detect post-fire trends in plant community response in burned ponderosa pine forests. Within the burn perimeter, we examined the effects of pre-fire fuels treatments on post-fire vegetation by comparing paired treated and untreated sites on the Apache-Sitgreaves National Forest. We sampled these paired sites in 2004, 2005 and 2011. Results: There were significant differences in pre-fire treated and untreated plant communities by species composition and abundance in 2004 and 2005, but these communities were beginning to converge in 2011. Total understorey plant cover was significantly higher in untreated areas for all 3 yr. Plant cover generally increased between 2004 and 2005 and markedly decreased in 2011, with the exception of shrub cover, which steadily increased through time. The sharp decrease in forb and graminoid cover in 2011 is likely related to drought conditions since the fire. Annual/biennial forb and graminoid cover decreased relative to perennial cover through time, consistent with the initial floristics hypothesis. Exotic plant response was highly variable and not limited to the immediate post-fire, annual/biennial community. Despite low overall exotic forb and graminoid cover for all years (<2.5%), several exotic species increased in frequency, and the relative proportion of exotic to native cover increased through time. Conclusions: Pre-treatment fuel reduction treatments helped maintain foundation overstorey species and associated native plant communities following this large wildfire. The overall low cover of exotic species on these sites supports other findings that the disturbance associated with high-severity fire does not always result in exotic species invasions. The increase in relative cover and frequency though time indicates that some species are proliferating, and continued monitoring is recommended. 
Patterns of exotic species invasions after severe burning are not easily predicted, and are likely more dependent on site-specific factors such as propagules, weather patterns and management.", "which Measure of invasion success ?", "Cover of exotic species ", 1955.0, 1979.0], ["Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well documented. The launch of the National Aeronautics and Space Administration Earth Observing 1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4-2.5-\u03bcm range with 242 spectral bands at approximately 10-nm spectral resolution and 30-m spatial resolution. Analytical Imaging and Geophysics LLC and the Commonwealth Scientific and Industrial Research Organisation have been involved in efforts to evaluate, validate, and demonstrate Hyperion's utility for geologic mapping in a variety of sites in the United States and around the world. Initial results over several sites with established ground truth and years of airborne hyperspectral data show that Hyperion data from the shortwave infrared spectrometer can be used to produce useful geologic (mineralogic) information. Minerals mapped include carbonates, chlorite, epidote, kaolinite, alunite, buddingtonite, muscovite, hydrothermal silica, and zeolite. Hyperion data collected under optimum conditions (summer season, bright targets, well-exposed geology) indicate that Hyperion data meet prelaunch specifications and allow subtle distinctions such as determining the difference between calcite and dolomite and mapping solid solution differences in micas caused by substitution in octahedral molecular sites. Comparison of airborne hyperspectral data [from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)] to the Hyperion data establishes that Hyperion provides similar basic mineralogic information, with the principal limitation being limited mapping of fine spectral detail under less-than-optimum acquisition conditions (winter season, dark targets) based on lower signal-to-noise ratios. Case histories demonstrate the analysis methodologies and level of information available from the Hyperion data. They also show the viability of Hyperion as a means of extending hyperspectral mineral mapping to areas not accessible to aircraft sensors. The analysis results demonstrate that spaceborne hyperspectral sensors can produce useful mineralogic information, but also indicate that SNR improvements are required for future spaceborne sensors to allow the same level of mapping that is currently possible from airborne sensors such as AVIRIS.", "which Minerals Mapped/ Identified ?", "Buddingtonite", 1076.0, 1089.0], ["Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well documented. The launch of the National Aeronautics and Space Administration Earth Observing 1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4-2.5-\u03bcm range with 242 spectral bands at approximately 10-nm spectral resolution and 30-m spatial resolution.
Analytical Imaging and Geophysics LLC and the Commonwealth Scientific and Industrial Research Organisation have been involved in efforts to evaluate, validate, and demonstrate Hyperion's utility for geologic mapping in a variety of sites in the United States and around the world. Initial results over several sites with established ground truth and years of airborne hyperspectral data show that Hyperion data from the shortwave infrared spectrometer can be used to produce useful geologic (mineralogic) information. Minerals mapped include carbonates, chlorite, epidote, kaolinite, alunite, buddingtonite, muscovite, hydrothermal silica, and zeolite. Hyperion data collected under optimum conditions (summer season, bright targets, well-exposed geology) indicate that Hyperion data meet prelaunch specifications and allow subtle distinctions such as determining the difference between calcite and dolomite and mapping solid solution differences in micas caused by substitution in octahedral molecular sites. Comparison of airborne hyperspectral data [from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)] to the Hyperion data establishes that Hyperion provides similar basic mineralogic information, with the principal limitation being limited mapping of fine spectral detail under less-than-optimum acquisition conditions (winter season, dark targets) based on lower signal-to-noise ratios. Case histories demonstrate the analysis methodologies and level of information available from the Hyperion data. They also show the viability of Hyperion as a means of extending hyperspectral mineral mapping to areas not accessible to aircraft sensors. The analysis results demonstrate that spaceborne hyperspectral sensors can produce useful mineralogic information, but also indicate that SNR improvements are required for future spaceborne sensors to allow the same level of mapping that is currently possible from airborne sensors such as AVIRIS.", "which Minerals Mapped/ Identified ?", "Dolomite", 1382.0, 1390.0], ["Satellite-based hyperspectral imaging became a reality in November 2000 with the successful launch and operation of the Hyperion system on board the EO-1 platform. Hyperion is a pushbroom imager with 220 spectral bands in the 400-2500 nm wavelength range, a 30 meter pixel size and a 7.5 km swath. Pre-launch characterization of Hyperion measured low signal to noise (SNR<40:1) for the geologically significant shortwave infrared (SWIR) wavelength region (2000-2500 nm). The impact of this low SNR on Hyperion's capacity to resolve spectral detail was evaluated for the Mount Fitton test site in South Australia, which comprises a diverse range of minerals with narrow, diagnostic absorption bands in the SWIR. Following radiative transfer correction of the Hyperion radiance at sensor data to surface radiance (apparent reflectance), diagnostic spectral signatures were clearly apparent, including: green vegetation; talc; dolomite; chlorite; white mica and possibly tremolite. Even though the derived surface composition maps generated from these image endmembers were noisy (both random and column), they were nonetheless spatially coherent and correlated well with the known geology.
In addition, the Hyperion data were used to measure and map spectral shifts of <10 nm in the SWIR related to white mica chemical variations.", "which Minerals Mapped/ Identified ?", "Dolomite", 924.0, 932.0], ["Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well documented. The launch of the National Aeronautics and Space Administration Earth Observing 1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4-2.5-\u03bcm range with 242 spectral bands at approximately 10-nm spectral resolution and 30-m spatial resolution. Analytical Imaging and Geophysics LLC and the Commonwealth Scientific and Industrial Research Organisation have been involved in efforts to evaluate, validate, and demonstrate Hyperion's utility for geologic mapping in a variety of sites in the United States and around the world. Initial results over several sites with established ground truth and years of airborne hyperspectral data show that Hyperion data from the shortwave infrared spectrometer can be used to produce useful geologic (mineralogic) information. Minerals mapped include carbonates, chlorite, epidote, kaolinite, alunite, buddingtonite, muscovite, hydrothermal silica, and zeolite. Hyperion data collected under optimum conditions (summer season, bright targets, well-exposed geology) indicate that Hyperion data meet prelaunch specifications and allow subtle distinctions such as determining the difference between calcite and dolomite and mapping solid solution differences in micas caused by substitution in octahedral molecular sites. Comparison of airborne hyperspectral data [from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)] to the Hyperion data establishes that Hyperion provides similar basic mineralogic information, with the principal limitation being limited mapping of fine spectral detail under less-than-optimum acquisition conditions (winter season, dark targets) based on lower signal-to-noise ratios. Case histories demonstrate the analysis methodologies and level of information available from the Hyperion data. They also show the viability of Hyperion as a means of extending hyperspectral mineral mapping to areas not accessible to aircraft sensors. The analysis results demonstrate that spaceborne hyperspectral sensors can produce useful mineralogic information, but also indicate that SNR improvements are required for future spaceborne sensors to allow the same level of mapping that is currently possible from airborne sensors such as AVIRIS.", "which Minerals Mapped/ Identified ?", "Silica", 1115.0, 1121.0], ["Chinese Named entity recognition is one of the most important tasks in NLP. The paper mainly describes our work on NER tasks. The paper built up a system under the framework of Conditional Random Fields (CRFs) model. With an improved tag set the system gets an F-value of 93.49 using SIGHAN2007 MSRA corpus.", "which Language/domain ?", "Chinese", 0.0, 7.0], ["Drug abuse pertains to the consumption of a substance that may induce adverse effects to a person. In international security studies, drug trafficking has become an important topic. In this regard, drug-related crimes are identified as an extremely significant challenge faced by any community. Several techniques for investigations in the crime domain have been implemented by many researchers.
However, most of these researchers focus on extracting general crime entities. The number of studies that focus on the drug crime domain is relatively limited. This paper mainly aims to propose a rule-based named entity recognition model for drug-related crime news documents. In this work, a set of heuristic and grammatical rules is used to extract named entities, such as types of drugs, amount of drugs, price of drugs, drug hiding methods, and the nationality of the suspect. A set of grammatical and heuristic rules is established based on part-of-speech information, developed gazetteers, and indicator word lists. The combined approach of heuristic and grammatical rules achieves a good performance with an overall precision of 86%, a recall of 87%, and an F1-measure of 87%. Results indicate that the ensemble of both heuristic and grammatical rules improves the extraction effectiveness in terms of macro-F1 for all entities.", "which Language/domain ?", "Drug-related crime news documents", 638.0, 671.0], ["One difficulty with machine learning for information extraction is the high cost of collecting labeled examples. Active Learning can make more efficient use of the learner's time by asking them to label only instances that are most useful for the trainer. In random sampling approach, unlabeled data is selected for annotation at random and thus can't yield the desired results. In contrast, active learning selects the useful data from a huge pool of unlabeled data for the classifier. The strategies used often classify the corpus tokens (or, data points) into wrong classes. The classifier is confused between two categories if the token is located near the margin. We propose a novel method for solving this problem and show that it favorably results in the increased performance. Our approach is based on the supervised machine learning algorithm, namely Support Vector Machine (SVM). The proposed approach is applied for solving the problem of named entity recognition (NER) in two Indian languages, namely Hindi and Bengali. Results show that proposed active learning based technique indeed improves the performance of the system.", "which Language/domain ?", "Hindi and Bengali", 1013.0, 1030.0], ["Natural language processing (NLP) is widely applied in biological domains to retrieve information from publications. Systems to address numerous applications exist, such as biomedical named entity recognition (BNER), named entity normalization (NEN) and protein-protein interaction extraction (PPIE). High-quality datasets can assist the development of robust and reliable systems; however, due to the endless applications and evolving techniques, the annotations of benchmark datasets may become outdated and inappropriate. In this study, we first review commonly used BNER datasets and their potential annotation problems such as inconsistency and low portability. Then, we introduce a revised version of the JNLPBA dataset that solves potential problems in the original and use state-of-the-art named entity recognition systems to evaluate its portability to different kinds of biomedical literature, including protein-protein interaction and biology events. Lastly, we introduce an ensembled biomedical entity dataset (EBED) by extending the revised JNLPBA dataset with PubMed Central full-text paragraphs, figure captions and patent abstracts. This EBED is a multi-task dataset that covers annotations including gene, disease and chemical entities. 

In total, it contains 85000 entity mentions, 25000 entity mentions with database identifiers and 5000 attribute tags. To demonstrate the usage of the EBED, we review the BNER track from the AI CUP Biomedical Paper Analysis challenge. Availability: The revised JNLPBA dataset is available at https://iasl-btm.iis.sinica.edu.tw/BNER/Content/Revised_JNLPBA.zip. The EBED dataset is available at https://iasl-btm.iis.sinica.edu.tw/BNER/Content/AICUP_EBED_dataset.rar. Contact: Email: thtsai@g.ncu.edu.tw, Tel. 886-3-4227151 ext. 35203, Fax: 886-3-422-2681 Email: hsu@iis.sinica.edu.tw, Tel. 886-2-2788-3799 ext. 2211, Fax: 886-2-2782-4814 Supplementary information: Supplementary data are available at Briefings in Bioinformatics online.", "which Data domains ?", "Protein-Protein Interaction Extraction (PPIE)", NaN, NaN], ["We present the BioCreative VII Task 3 which focuses on drug names extraction from tweets. Recognized to provide unique insights into population health, detecting health related tweets is notoriously challenging for natural language processing tools. Tweets are written about any and all topics, most of them not related to health. Additionally, they are written with little regard for proper grammar, are inherently colloquial, and are almost never proof-read. Given a tweet, task 3 consists of detecting if the tweet has a mention of a drug name and, if so, extracting the span of the drug mention. We made available 182,049 tweets publicly posted by 212 Twitter users with all drugs mentions manually annotated. This corpus exhibits the natural and strongly imbalanced distribution of positive tweets, with only 442 tweets (0.2%) mentioning a drug. This task was an opportunity for participants to evaluate methods robust to class-imbalance beyond the simple lexical match. A total of 65 teams registered, and 16 teams submitted a system run. We summarize the corpus and the tools created for the challenge, which is freely available at https://biocreative.bioinformatics.udel.edu/tasks/biocreativevii/track-3/. We analyze the methods and the results of the competing systems with a focus on learning from class-imbalanced data. Keywords—social media; pharmacovigilance; named entity recognition; drug name extraction; class-imbalance.", "which Data domains ?", "Social media", 1338.0, 1350.0], ["A considerable effort has been made to extract biological and chemical entities, as well as their relationships, from the scientific literature, either manually through traditional literature curation or by using information extraction and text mining technologies. Medicinal chemistry patents contain a wealth of information, for instance to uncover potential biomarkers that might play a role in cancer treatment and prognosis. However, current biomedical annotation databases do not cover such information, partly due to limitations of publicly available biomedical patent mining software. As part of the BioCreative V CHEMDNER patents track, we present the results of the first named entity recognition (NER) assignment carried out to detect mentions of chemical compounds and genes/proteins in running patent text. More specifically, this task aimed to evaluate the performance of automatic name recognition strategies capable of isolating chemical names and gene and gene product mentions from surrounding text within patent titles and abstracts. A total of 22 unique teams submitted results for at least one of the three CHEMDNER subtasks. 

The first subtask, called the CEMP (chemical entity mention in patents) task, focused on the detection of chemical named entity mentions in patents, requesting teams to return the start and end indices corresponding to all the chemical entities found in a given record. A total of 21 teams submitted 93 runs for this subtask. The top performing team reached an f-measure of 0.89 with a precision of 0.87 and a recall of 0.91. The CPD (chemical passage detection) task required the classification of patent titles and abstracts whether they do or do not contain chemical compound mentions. Nine teams returned predictions for this task (40 runs). The top run in terms of Matthew’s correlation coefficient (MCC) had a score of 0.88, the highest sensitivity", "which Data domains ?", "MEDICINAL CHEMISTRY", 266.0, 285.0], ["Abstract Background Biology-focused databases and software define bioinformatics and their use is central to computational biology. In such a complex and dynamic field, it is of interest to understand what resources are available, which are used, how much they are used, and for what they are used. While scholarly literature surveys can provide some insights, large-scale computer-based approaches to identify mentions of bioinformatics databases and software from primary literature would automate systematic cataloguing, facilitate the monitoring of usage, and provide the foundations for the recovery of computational methods for analysing biological data, with the long-term aim of identifying best/common practice in different areas of biology. Results We have developed bioNerDS, a named entity recogniser for the recovery of bioinformatics databases and software from primary literature. We identify such entities with an F-measure ranging from 63% to 91% at the mention level and 63-78% at the document level, depending on corpus. Not attaining a higher F-measure is mostly due to high ambiguity in resource naming, which is compounded by the on-going introduction of new resources. To demonstrate the software, we applied bioNerDS to full-text articles from BMC Bioinformatics and Genome Biology. General mention patterns reflect the remit of these journals, highlighting BMC Bioinformatics’s emphasis on new tools and Genome Biology’s greater emphasis on data analysis. The data also illustrates some shifts in resource usage: for example, the past decade has seen R and the Gene Ontology join BLAST and GenBank as the main components in bioinformatics processing. Conclusions We demonstrate the feasibility of automatically identifying resource names on a large-scale from the scientific literature and show that the generated data can be used for exploration of bioinformatics database and software usage. For example, our results help to investigate the rate of change in resource usage and corroborate the suspicion that a vast majority of resources are created, but rarely (if ever) used thereafter. bioNerDS is available at http://bionerds.sourceforge.net/.", "which Data domains ?", "Biology", 20.0, 27.0], ["The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records presented three tasks: a concept extraction task focused on the extraction of medical concepts from patient reports; an assertion classification task focused on assigning assertion types for medical problem concepts; and a relation classification task focused on assigning relation types that hold between medical problems, tests, and treatments. 

i2b2 and the VA provided an annotated reference standard corpus for the three tasks. Using this reference standard, 22 systems were developed for concept extraction, 21 for assertion classification, and 16 for relation classification. These systems showed that machine learning approaches could be augmented with rule-based systems to determine concepts, assertions, and relations. Depending on the task, the rule-based systems can either provide input for machine learning or post-process the output of machine learning. Ensembles of classifiers, information from unlabeled data, and external knowledge sources can help when the training data are inadequate.", "which Data domains ?", "Clinical", 72.0, 80.0], ["The Critical Assessment of Information Extraction systems in Biology (BioCreAtIvE) challenge evaluation tasks collectively represent a community-wide effort to evaluate a variety of text-mining and information extraction systems applied to the biological domain. The BioCreative IV Workshop included five independent subject areas, including Track 3, which focused on named-entity recognition (NER) for the Comparative Toxicogenomics Database (CTD; http://ctdbase.org). Previously, CTD had organized document ranking and NER-related tasks for the BioCreative Workshop 2012; a key finding of that effort was that interoperability and integration complexity were major impediments to the direct application of the systems to CTD's text-mining pipeline. This underscored a prevailing problem with software integration efforts. Major interoperability-related issues included lack of process modularity, operating system incompatibility, tool configuration complexity and lack of standardization of high-level inter-process communications. One approach to potentially mitigate interoperability and general integration issues is the use of Web services to abstract implementation details; rather than integrating NER tools directly, HTTP-based calls from CTD's asynchronous, batch-oriented text-mining pipeline could be made to remote NER Web services for recognition of specific biological terms using BioC (an emerging family of XML formats) for inter-process communications. To test this concept, participating groups developed Representational State Transfer/BioC-compliant Web services tailored to CTD's NER requirements. Participants were provided with a comprehensive set of training materials. CTD evaluated results obtained from the remote Web service-based URLs against a test data set of 510 manually curated scientific articles. Twelve groups participated in the challenge. Recall, precision, balanced F-scores and response times were calculated. Top balanced F-scores for gene, chemical and disease NER were 61, 74 and 51%, respectively. Response times ranged from fractions-of-a-second to over a minute per article. We present a description of the challenge and summary of results, demonstrating how curation groups can effectively use interoperable NER technologies to simplify text-mining pipeline implementation. Database URL: http://ctdbase.org/", "which Coarse-grained Entity type ?", "Disease", 1999.0, 2006.0], ["Gene function curation via Gene Ontology (GO) annotation is a common task among Model Organism Database groups. Owing to its manual nature, this task is considered one of the bottlenecks in literature curation. There have been many previous attempts at automatic identification of GO terms and supporting information from full text. However, few systems have delivered an accuracy that is comparable with humans. 

One recognized challenge in developing such systems is the lack of marked sentence-level evidence text that provides the basis for making GO annotations. We aim to create a corpus that includes the GO evidence text along with the three core elements of GO annotations: (i) a gene or gene product, (ii) a GO term and (iii) a GO evidence code. To ensure our results are consistent with real-life GO data, we recruited eight professional GO curators and asked them to follow their routine GO annotation protocols. Our annotators marked up more than 5000 text passages in 200 articles for 1356 distinct GO terms. For evidence sentence selection, the inter-annotator agreement (IAA) results are 9.3% (strict) and 42.7% (relaxed) in F1-measures. For GO term selection, the IAAs are 47% (strict) and 62.9% (hierarchical). Our corpus analysis further shows that abstracts contain ∼10% of relevant evidence sentences and 30% distinct GO terms, while the Results/Experiment section has nearly 60% relevant sentences and >70% GO terms. Further, of those evidence sentences found in abstracts, less than one-third contain enough experimental detail to fulfill the three core criteria of a GO annotation. This result demonstrates the need of using full-text articles for text mining GO annotations. Through its use at the BioCreative IV GO (BC4GO) task, we expect our corpus to become a valuable resource for the BioNLP research community. Database URL: http://www.biocreative.org/resources/corpora/bc-iv-go-task-corpus/.", "which Coarse-grained Entity type ?", "GO Term", 717.0, 724.0], ["The BioCreative NLM-Chem track calls for a community effort to fine-tune automated recognition of chemical names in biomedical literature. Chemical names are one of the most searched biomedical entities in PubMed and – as highlighted during the COVID-19 pandemic – their identification may significantly advance research in multiple biomedical subfields. While previous community challenges focused on identifying chemical names mentioned in titles and abstracts, the full text contains valuable additional detail. We organized the BioCreative NLM-Chem track to call for a community effort to address automated chemical entity recognition in full-text articles. The track consisted of two tasks: 1) Chemical Identification task, and 2) Chemical Indexing prediction task. For the Chemical Identification task, participants were expected to predict with high accuracy all chemicals mentioned in recently published full-text articles, both span (i.e., named entity recognition) and normalization (i.e., entity linking) using MeSH. For the Chemical Indexing task, participants identified which chemicals should be indexed as topics for the article's topic terms in the NLM article and indexing, i.e., appear in the listing of MeSH terms for the document. This manuscript summarizes the BioCreative NLM-Chem track. We received a total of 88 submissions from 17 teams worldwide. The highest performance achieved for the Chemical Identification task was 0.8672 f-score (0.8759 precision, 0.8587 recall) for strict NER performance and 0.8136 f-score (0.8621 precision, 0.7702 recall) for strict normalization performance. The highest performance achieved for the Chemical Indexing task was 0.4825 f-score (0.4397 precision, 0.5344 recall). The NLM-Chem track dataset and other challenge materials are publicly available at https://ftp.ncbi.nlm.nih.gov/pub/lu/BC7-NLM-Chem-track/. 

This community challenge demonstrated 1) the current substantial achievements in deep learning technologies can be utilized to further improve automated prediction accuracy, and 2) the Chemical Indexing task is substantially more challenging. We look forward to further development of biomedical text mining methods to respond to the rapid growth of biomedical literature. Keywords\u2014 biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; text mining; chemical entity recognition; chemical indexing", "which Coarse-grained Entity type ?", "Chemical", 98.0, 106.0], ["Software contributions to academic research are relatively invisible, especially to the formalized scholarly reputation system based on bibliometrics. In this article, we introduce a gold\u2010standard dataset of software mentions from the manual annotation of 4,971 academic PDFs in biomedicine and economics. The dataset is intended to be used for automatic extraction of software mentions from PDF format research publications by supervised learning at scale. We provide a description of the dataset and an extended discussion of its creation process, including improved text conversion of academic PDFs. Finally, we reflect on our challenges and lessons learned during the dataset creation, in hope of encouraging more discussion about creating datasets for machine learning use.", "which Entity types ?", "Version", NaN, NaN], ["Cross-domain named entity recognition (NER) models are able to cope with the scarcity issue of NER samples in target domains. However, most of the existing NER benchmarks lack domain-specialized entity types or do not focus on a certain domain, leading to a less effective cross-domain evaluation. To address these obstacles, we introduce a cross-domain NER dataset (CrossNER), a fully-labeled collection of NER data spanning over five diverse domains with specialized entity categories for different domains. Additionally, we also provide a domain-related corpus since using it to continue pre-training language models (domain-adaptive pre-training) is effective for the domain adaptation. We then conduct comprehensive experiments to explore the effectiveness of leveraging different levels of the domain corpus and pre-training strategies to do domain-adaptive pre-training for the cross-domain task. Results show that focusing on the fractional corpus containing domain-specialized entities and utilizing a more challenging pre-training strategy in domain-adaptive pre-training are beneficial for the NER domain adaptation, and our proposed method can consistently outperform existing cross-domain NER baselines. Nevertheless, experiments also illustrate the challenge of this cross-domain NER task. We hope that our dataset and baselines will catalyze research in the NER domain adaptation area. The code and data are available at this https URL.", "which Entity types ?", "Task", 898.0, 902.0], ["Abstract Background Biology-focused databases and software define bioinformatics and their use is central to computational biology. In such a complex and dynamic field, it is of interest to understand what resources are available, which are used, how much they are used, and for what they are used. 
While scholarly literature surveys can provide some insights, large-scale computer-based approaches to identify mentions of bioinformatics databases and software from primary literature would automate systematic cataloguing, facilitate the monitoring of usage, and provide the foundations for the recovery of computational methods for analysing biological data, with the long-term aim of identifying best/common practice in different areas of biology. Results We have developed bioNerDS, a named entity recogniser for the recovery of bioinformatics databases and software from primary literature. We identify such entities with an F-measure ranging from 63% to 91% at the mention level and 63-78% at the document level, depending on corpus. Not attaining a higher F-measure is mostly due to high ambiguity in resource naming, which is compounded by the on-going introduction of new resources. To demonstrate the software, we applied bioNerDS to full-text articles from BMC Bioinformatics and Genome Biology. General mention patterns reflect the remit of these journals, highlighting BMC Bioinformatics’s emphasis on new tools and Genome Biology’s greater emphasis on data analysis. The data also illustrates some shifts in resource usage: for example, the past decade has seen R and the Gene Ontology join BLAST and GenBank as the main components in bioinformatics processing. Conclusions We demonstrate the feasibility of automatically identifying resource names on a large-scale from the scientific literature and show that the generated data can be used for exploration of bioinformatics database and software usage. For example, our results help to investigate the rate of change in resource usage and corroborate the suspicion that a vast majority of resources are created, but rarely (if ever) used thereafter. bioNerDS is available at http://bionerds.sourceforge.net/.", "which Entity types ?", "Biology-focused databases and software", 20.0, 58.0], ["One of the biomedical entity types of relevance for medicine or biosciences are chemical compounds and drugs. The correct detection of these entities is critical for other text mining applications building on them, such as adverse drug-reaction detection, medication-related fake news or drug-target extraction. Although a significant effort was made to detect mentions of drugs/chemicals in English texts, so far only very limited attempts were made to recognize them in medical documents in other languages. Taking into account the growing amount of medical publications and clinical records written in Spanish, we have organized the first shared task on detecting drug and chemical entities in Spanish medical documents. Additionally, we included a clinical concept-indexing sub-track asking teams to return SNOMED-CT identifiers related to drugs/chemicals for a collection of documents. For this task, named PharmaCoNER, we generated annotation guidelines together with a corpus of 1,000 manually annotated clinical case studies. A total of 22 teams participated in sub-track 1 (77 system runs), and 7 teams in sub-track 2 (19 system runs). Top scoring teams used sophisticated deep learning approaches yielding very competitive results with F-measures above 0.91. These results indicate that there is a real interest in promoting biomedical text mining efforts beyond English. 

We foresee that the PharmaCoNER annotation guidelines, corpus and participant systems will foster the development of new resources for clinical and biomedical text mining systems of Spanish medical data.", "which Entity types ?", "Chemical", 80.0, 88.0], ["This paper presents the fourth edition of the Bacteria Biotope task at BioNLP Open Shared Tasks 2019. The task focuses on the extraction of the locations and phenotypes of microorganisms from PubMed abstracts and full-text excerpts, and the characterization of these entities with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on biodiversity for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, and the challenge organization. We also provide an analysis of the results obtained by participants, and inspect the evolution of the results since the last edition in 2016.", "which Entity types ?", "Microorganism", NaN, NaN], ["This paper presents the Bacteria Biotope task of the BioNLP Shared Task 2016, which follows the previous 2013 and 2011 editions. The task focuses on the extraction of the locations (biotopes and geographical places) of bacteria from PubMed abstracts and the characterization of bacteria and their associated habitats with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on bacteria habitats for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, the challenge organization, and the evaluation metrics. We also provide an analysis of the results obtained by participants.", "which Entity types ?", "Geographical places", 195.0, 214.0], ["This paper presents the Bacteria Biotope task of the BioNLP Shared Task 2016, which follows the previous 2013 and 2011 editions. The task focuses on the extraction of the locations (biotopes and geographical places) of bacteria from PubMed abstracts and the characterization of bacteria and their associated habitats with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on bacteria habitats for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, the challenge organization, and the evaluation metrics. We also provide an analysis of the results obtained by participants.", "which Entity types ?", "Habitat", NaN, NaN], ["We consider a distribution problem in which a product has to be shipped from a supplier to several retailers over a given time horizon. Each retailer defines a maximum inventory level. The supplier monitors the inventory of each retailer and determines its replenishment policy, guaranteeing that no stockout occurs at the retailer (vendor-managed inventory policy). Every time a retailer is visited, the quantity delivered by the supplier is such that the maximum inventory level is reached (deterministic order-up-to level policy). Shipments from the supplier to the retailers are performed by a vehicle of given capacity. The problem is to determine for each discrete time instant the quantity to ship to each retailer and the vehicle route. 

We present a mixed-integer linear programming model and derive new additional valid inequalities used to strengthen the linear relaxation of the model. We implement a branch-and-cut algorithm to solve the model optimally. We then compare the optimal solution of the problem with the optimal solution of two problems obtained by relaxing in different ways the deterministic order-up-to level policy. Computational results are presented on a set of randomly generated problem instances.", "which Demand ?", "Deterministic", 493.0, 506.0], ["An industrial gases tanker vehicle visits n customers on a tour, with a possible (n + 1)st customer added at the end. The amount of needed product at each customer is a known random process, typically a Wiener process. The objective is to adjust dynamically the amount of product provided on scene to each customer so as to minimize total expected costs, comprising costs of earliness, lateness, product shortfall, and returning to the depot nonempty. Earliness costs are computed by invocation of an annualized incremental cost argument. Amounts of product delivered to each customer are not known until the driver is on scene at the customer location, at which point the customer is either restocked to capacity or left with some residual empty capacity, the policy determined by stochastic dynamic programming. The methodology has applications beyond industrial gases.", "which Demand ?", "Stochastic", 782.0, 792.0], ["We consider a new approach to stochastic inventory/routing that approximates the future costs of current actions using optimal dual prices of a linear program. We obtain two such linear programs by formulating the control problem as a Markov decision process and then replacing the optimal value function with the sum of single-customer inventory value functions. The resulting approximation yields statewise lower bounds on optimal infinite-horizon discounted costs. We present a linear program that takes into account inventory dynamics and economics in allocating transportation costs for stochastic inventory routing. On test instances we find that these allocations do not introduce any error in the value function approximations relative to the best approximations that can be achieved without them. Also, unlike other approaches, we do not restrict the set of allowable vehicle itineraries in any way. Instead, we develop an efficient algorithm to both generate and eliminate itineraries during solution of the linear programs and control policy. In simulation experiments, the price-directed policy outperforms other policies from the literature.", "which Demand ?", "Stochastic", 30.0, 40.0], ["Eye localization is necessary for face recognition and related application areas. Most of eye localization algorithms reported thus far still need to be improved about precision and computational time for successful applications. In this paper, we propose an improved eye localization method based on multi-scale Gabor feature vector models. The proposed method first tries to locate eyes in the downscaled face image by utilizing Gabor Jet similarity between Gabor feature vector at an initial eye coordinates and the eye model bunch of the corresponding scale. The proposed method finally locates eyes in the original input face image after it processes in the same way recursively in each scaled face image by using the eye coordinates localized in the downscaled image as initial eye coordinates. 

Experiments verify that our proposed method improves the precision rate without causing much computational overhead compared with other eye localization methods reported in the previous researches.", "which Challenges ?", "pose", NaN, NaN], ["In this paper, we present an enhanced pictorial structure (PS) model for precise eye localization, a fundamental problem involved in many face processing tasks. PS is a computationally efficient framework for part-based object modelling. For face images taken under uncontrolled conditions, however, the traditional PS model is not flexible enough for handling the complicated appearance and structural variations. To extend PS, we 1) propose a discriminative PS model for a more accurate part localization when appearance changes seriously, 2) introduce a series of global constraints to improve the robustness against scale, rotation and translation, and 3) adopt a heuristic prediction method to address the difficulty of eye localization with partial occlusion. Experimental results on the challenging LFW (Labeled Face in the Wild) database show that our model can locate eyes accurately and efficiently under a broad range of uncontrolled variations involving poses, expressions, lightings, camera qualities, occlusions, etc.", "which Challenges ?", "Uncontrolled", 266.0, 278.0], ["Multi-agent systems (MASs) have received tremendous attention from scholars in different disciplines, including computer science and civil engineering, as a means to solve complex problems by subdividing them into smaller tasks. The individual tasks are allocated to autonomous entities, known as agents. Each agent decides on a proper action to solve the task using multiple inputs, e.g., history of actions, interactions with its neighboring agents, and its goal. The MAS has found multiple applications, including modeling complex systems, smart grids, and computer networks. Despite their wide applicability, there are still a number of challenges faced by MAS, including coordination between agents, security, and task allocation. This survey provides a comprehensive discussion of all aspects of MAS, starting from definitions, features, applications, challenges, and communications to evaluation. A classification on MAS applications and challenges is provided along with references for further studies. We expect this paper to serve as an insightful and comprehensive resource on the MAS for researchers and practitioners in the area.", "which Challenges ?", "Security", 705.0, 713.0], ["In vendor-managed inventory replenishment, the vendor decides when to make deliveries to customers, how much to deliver, and how to combine shipments using the available vehicles. This gives rise to the inventory-routing problem in which the goal is to coordinate inventory replenishment and transportation to minimize costs. The problem tackled in this paper is the stochastic inventory-routing problem, where stochastic demands are specified through general discrete distributions. The problem is formulated as a discounted infinite-horizon Markov decision problem. Heuristics based on finite scenario trees are developed. Computational results confirm the efficiency of these heuristics.", "which approach ?", "scenario tree", NaN, NaN], ["We investigate the one warehouse multiretailer distribution problem with traveling salesman tour vehicle routing costs. We model the system in the framework of the more general production/distribution system with arbitrary non-negative monotone joint order costs. 
We develop polynomial time heuristics whose policy costs are provably close to the cost of an optimal policy. In particular, we show that given a submodular function which is close to the true order cost then we can find a power-of-two policy whose cost is only moderately greater than the cost of an optimal policy. Since such submodular approximations exist for traveling salesman tour vehicle routing costs we present a detailed description of heuristics for the one warehouse multiretailer distribution problem. We formulate a nonpolynomial dynamic program that computes optimal power-of-two policies for the one warehouse multiretailer system assuming only that the order costs are non-negative monotone. Finally, we perform computational tests which compare our heuristics to optimal power of two policies for problems of up to sixteen retailers. We also perform computational tests on larger problems; these tests give us insight into what policies one should employ.", "which approach ?", "Submodular approximation", NaN, NaN], ["We describe a dynamic and stochastic vehicle dispatching problem called the delivery dispatching problem. This problem is modeled as a Markov decision process. Because exact solution of this model is impractical, we adopt a heuristic approach for handling the problem. The heuristic is based in part on a decomposition of the problem by customer, where customer subproblems generate penalty functions that are applied in a master dispatching problem. We describe how to compute bounds on the algorithm's performance, and apply it to several examples with good results.", "which approach ?", "Markov decision process", 135.0, 158.0], ["Vendor managed inventory replenishment is a business practice in which vendors monitor their customers' inventories, and decide when and how much inventory should be replenished. The inventory routing problem addresses the coordination of inventory management and transportation. The ability to solve the inventory routing problem contributes to the realization of the potential savings in inventory and transportation costs brought about by vendor managed inventory replenishment. The inventory routing problem is hard, especially if a large number of customers is involved. We formulate the inventory routing problem as a Markov decision process, and we propose approximation methods to find good solutions with reasonable computational effort. Computational results are presented for the inventory routing problem with direct deliveries.", "which approach ?", "Markov decision process", 624.0, 647.0], ["We describe a system which is capable of learning the presentation of document logical structure, exemplary as shown for business letters. Presenting a set of instances to the system, it clusters them into structural concepts and induces a concept hierarchy. This concept hierarchy is taken as a reference for classifying future input. The article introduces the sequence of learning steps and describes how the resulting concept hierarchy is applied to logical labeling, and reports the results.", "which Application Domain ?", "letters", 130.0, 137.0], ["In order to deal with heterogeneous knowledge in the medical field, this paper proposes a method which can learn a heavy-weighted medical ontology based on medical glossaries and Web resources. 
Firstly, terms and taxonomic relations are extracted based on disease and drug glossaries and a light-weighted ontology is constructed. Secondly, non-taxonomic relations are automatically learned from Web resources with linguistic patterns, and the two ontologies (disease and drug) are expanded from light-weighted level towards heavy-weighted level. At last, the disease ontology and drug ontology are integrated to create a practical medical ontology. Experiment shows that this method can integrate and expand medical terms with taxonomic and different kinds of non-taxonomic relations. Our experiments show that the performance is promising.", "which Application Domain ?", "Medical", 53.0, 60.0], ["Explainability can help cyber-physical systems alleviate risk in automating decisions that are affecting our life. Building an explainable cyber-physical system requires deriving explanations from system events and causality between the system elements. Cyber-physical energy systems such as smart grids involve cyber and physical aspects of energy systems and other elements, namely social and economic. Moreover, a smart-grid scale can range from a small village to a large region across countries. Therefore, integrating these varieties of data and knowledge is a fundamental challenge to build an explainable cyber-physical energy system. This paper aims to use knowledge graph based framework to solve this challenge. The framework consists of an ontology to model and link data from various sources and graph-based algorithm to derive explanations from the events. A simulated demand response scenario covering the above aspects further demonstrates the applicability of this framework.", "which Application Domain ?", "Smart Grids", 294.0, 305.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.", "which method ?", "the priority program 1103", 725.0, 751.0], ["Today the Web represents a rich source of labour market data for both public and private operators, as a growing number of job offers are advertised through Web portals and services. In this paper we apply and compare several techniques, namely explicit-rules, machine learning, and LDA-based algorithms to classify a real dataset of Web job offers collected from 12 heterogeneous sources against a standard classification system of occupations.", "which method ?", "machine learning", 261.0, 277.0], ["While Wikipedia exists in 287 languages, its content is unevenly distributed among them. 

In this work, we investigate the generation of open domain Wikipedia summaries in underserved languages using structured data from Wikidata. To this end, we propose a neural network architecture equipped with copy actions that learns to generate single-sentence and comprehensible textual summaries from Wikidata triples. We demonstrate the effectiveness of the proposed approach by evaluating it against a set of baselines on two languages of different natures: Arabic, a morphologically rich language with a larger vocabulary than English, and Esperanto, a constructed language known for its easy acquisition.", "which method ?", "a neural network architecture equipped with copy actions", 254.0, 310.0], ["Today the Web represents a rich source of labour market data for both public and private operators, as a growing number of job offers are advertised through Web portals and services. In this paper we apply and compare several techniques, namely explicit-rules, machine learning, and LDA-based algorithms to classify a real dataset of Web job offers collected from 12 heterogeneous sources against a standard classification system of occupations.", "which method ?", "LDA-based algorithms", 283.0, 303.0], ["The purpose of this paper is to develop an understanding of the knowledge, skills and competencies demanded of early career information systems (IS) graduates in Australia. Online job advertisements from 2006 were collected and investigated using content analysis software to determine the frequencies and patterns of occurrence of specific requirements. This analysis reveals a dominant cluster of core IS knowledge and competency skills that revolves around IS Development as the most frequently required category of knowledge (78% of ads) and is strongly associated with: Business Analysis, Systems Analysis; Management; Operations, Maintenance & Support; Communication Skills; Personal Characteristics; Computer Languages; Data & Information Management; Internet, Intranet, Web Applications; and Software Packages. Identification of the core cluster of IS knowledge and skills - in demand across a wide variety of jobs - is important to better understand employers' needs for and expectations from IS graduates and the implications for education programs. Much less prevalent is the second cluster that includes knowledge and skills at a more technical side of IS (Architecture and Infrastructure, Operating Systems, Networks, and Security). Issues raised include the nature of entry level positions and their role in the preparation of their incumbents for future more senior positions. The findings add an Australian perspective to the literature on information systems job ads and should be of value to educators, employers, as well as current and future IS professionals.", "which method ?", "content analysis", 247.0, 263.0], ["Today the Web represents a rich source of labour market data for both public and private operators, as a growing number of job offers are advertised through Web portals and services. In this paper we apply and compare several techniques, namely explicit-rules, machine learning, and LDA-based algorithms to classify a real dataset of Web job offers collected from 12 heterogeneous sources against a standard classification system of occupations.", "which method ?", "explicit-rules", 245.0, 259.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. 

To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.", "which method ?", "the research project “Relation Based Process Modelling", 434.0, 488.0], ["Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.", "which method ?", "a graph database", 1064.0, 1080.0], ["A two-dimensional model of microwave-induced plasma (field frequency 2.45 GHz) in argon at atmospheric pressure is presented. The model describes in a self-consistent manner the gas flow and heat transfer, the in-coupling of the microwave energy into the plasma, and the reaction kinetics relevant to high-pressure argon plasma including the contribution of molecular ion species. The model provides the gas and electron temperature distributions, the electron, ion, and excited state number densities, and the power deposited into the plasma for given gas flow rate and temperature at the inlet, and input power of the incoming TEM microwave. For flow rate and absorbed microwave power typical for analytical applications (200-400 ml/min and 20 W), the plasma is far from thermodynamic equilibrium. The gas temperature reaches values above 2000 K in the plasma region, while the electron temperature is about 1 eV. The electron density reaches a maximum value of about 4 × 10(21) m(-3). 

The balance of the charged particles is essentially controlled by the kinetics of the molecular ions. For temperatures above 1200 K, quasineutrality of the plasma is provided by the atomic ions, and below 1200 K the molecular ion density exceeds the atomic ion density and a contraction of the discharge is observed. Comparison with experimental data is presented which demonstrates good quantitative and qualitative agreement.", "which Excitation_frequency ?", "2.45 GHz", 69.0, 77.0], ["The atmospheric-pressure helium plasma jet is of emerging interest as a cutting-edge biomedical device for cancer treatment, wound healing and sterilization. Reactive oxygen species such as OH and O radicals are considered to be major factors in the application of biological plasma. In this study, density distribution, temporal behaviour and flux of OH and O radicals on a surface are measured using laser-induced fluorescence. A helium plasma jet is generated by applying pulsed high voltage of 8 kV with 10 kHz using a quartz tube with an inner diameter of 4 mm. To evaluate the relation between the surface condition and active species production, three surfaces are used: dry, wet and rat skin. When the helium flow rate is 1.5 l min−1, radial distribution of OH density on the rat skin surface shows a maximum density of 1.2 × 10^13 cm−3 at the centre of the plasma-mediated area, while O atom density shows a maximum of 1.0 × 10^15 cm−3 at 2.0 mm radius from the centre of the plasma-mediated area. Their densities in the effluent of the plasma jet are almost constant during the intervals of the discharge pulses because their lifetimes are longer than the pulse interval. Their density distribution depends on the helium flow rate and the surface humidity. With these results, OH and O production mechanisms in the plasma jet and their flux onto the surface are discussed.", "which Excitation_frequency ?", "10 kHz", 508.0, 514.0], ["Two-dimensional spatially resolved absolute atomic oxygen densities are measured within an atmospheric pressure micro plasma jet and in its effluent. The plasma is operated in helium with an admixture of 0.5% of oxygen at 13.56 MHz and with a power of 1 W. Absolute atomic oxygen densities are obtained using two photon absorption laser induced fluorescence spectroscopy. The results are interpreted based on measurements of the electron dynamics by phase resolved optical emission spectroscopy in combination with a simple model that balances the production of atomic oxygen with its losses due to chemical reactions and diffusion. Within the discharge, the atomic oxygen density builds up with a rise time of 600 µs along the gas flow and reaches a plateau of 8 × 10^15 cm−3. In the effluent, the density decays exponentially with a decay time of 180 µs (corresponding to a decay length of 3 mm at a gas flow of 1.0 slm). It is found that both the species formation behavior and the maximum distance between the jet nozzle and substrates for possible oxygen treatments of surfaces can be controlled by adjusting the gas flow.", "which Excitation_frequency ?", "13.56", 222.0, 227.0], ["We describe the annotation of chemical named entities in scientific text. A set of annotation guidelines defines 5 types of named entities, and provides instructions for the resolution of special cases. 

An investigation of named entity recognition using LingPipe suggests that F scores of 63% are possible without customisation, and scores of 74% are possible with the addition of custom tokenisation and the use of dictionaries.", "which description ?", "annotation of chemical named entities in scientific text. A set of annotation guidelines defines 5 types of named entities, and provides instructions for the resolution of special cases.", NaN, NaN], ["MOTIVATION The MEDLINE database of biomedical abstracts contains scientific knowledge about thousands of interacting genes and proteins. Automated text processing can aid in the comprehension and synthesis of this valuable information. The fundamental task of identifying gene and protein names is a necessary first step towards making full use of the information encoded in biomedical text. This remains a challenging task due to the irregularities and ambiguities in gene and protein nomenclature. We propose to approach the detection of gene and protein names in scientific abstracts as part-of-speech tagging, the most basic form of linguistic corpus annotation. RESULTS We present a method for tagging gene and protein names in biomedical text using a combination of statistical and knowledge-based strategies. This method incorporates automatically generated rules from a transformation-based part-of-speech tagger, and manually generated rules from morphological clues, low frequency trigrams, indicator terms, suffixes and part-of-speech information. Results of an experiment on a test corpus of 56K MEDLINE documents demonstrate that our method to extract gene and protein names can be applied to large sets of MEDLINE abstracts, without the need for special conditions or human experts to predetermine relevant subsets. AVAILABILITY The programs are available on request from the authors.", "which description ?", "We present a method for tagging gene and protein names in biomedical text using a combination of statistical and knowledge-based strategies", 675.0, 814.0], ["Leukemia is a fatal cancer and has two main types: Acute and chronic. Each type has two more subtypes: Lymphoid and myeloid. Hence, in total, there are four subtypes of leukemia. This study proposes a new approach for diagnosis of all subtypes of leukemia from microscopic blood cell images using convolutional neural networks (CNN), which requires a large training data set. Therefore, we also investigated the effects of data augmentation for an increasing number of training samples synthetically. We used two publicly available leukemia data sources: ALL-IDB and ASH Image Bank. Next, we applied seven different image transformation techniques as data augmentation. We designed a CNN architecture capable of recognizing all subtypes of leukemia. Besides, we also explored other well-known machine learning algorithms such as naive Bayes, support vector machine, k-nearest neighbor, and decision tree. To evaluate our approach, we set up a set of experiments and used 5-fold cross-validation. The results we obtained from experiments showed that our CNN model performance has 88.25% and 81.74% accuracy, in leukemia versus healthy and multi-class classification of all subtypes, respectively. Finally, we also showed that the CNN model has a better performance than other well-known machine learning algorithms.", "which description ?", "Leukemia is a fatal cancer and has two main types: Acute and chronic. Each type has two more subtypes: Lymphoid and myeloid. 
Hence, in total, there are four subtypes of leukemia. This study proposes a new approach for diagnosis of all subtypes of leukemia from microscopic blood cell images using convolutional neural networks (CNN), which requires a large training data set", NaN, NaN], ["This paper presents the results of the 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas. The shared task featured two independent tracks, and participants submitted machine translation systems for up to 10 indigenous languages. Overall, 8 teams participated with a total of 214 submissions. We provided training sets consisting of data collected from various sources, as well as manually translated sentences for the development and test sets. An official baseline trained on this data was also provided. Team submissions featured a variety of architectures, including both statistical and neural models, and for the majority of languages, many teams were able to considerably improve over the baseline. The best performing systems achieved 12.97 ChrF higher than baseline, when averaged across languages.", "which description ?", "Open Machine Translation for Indigenous Languages of the Americas", 59.0, 124.0], ["The BioCreative NLM-Chem track calls for a community effort to fine-tune automated recognition of chemical names in biomedical literature. Chemical names are one of the most searched biomedical entities in PubMed and – as highlighted during the COVID-19 pandemic – their identification may significantly advance research in multiple biomedical subfields. While previous community challenges focused on identifying chemical names mentioned in titles and abstracts, the full text contains valuable additional detail. We organized the BioCreative NLM-Chem track to call for a community effort to address automated chemical entity recognition in full-text articles. The track consisted of two tasks: 1) Chemical Identification task, and 2) Chemical Indexing prediction task. For the Chemical Identification task, participants were expected to predict with high accuracy all chemicals mentioned in recently published full-text articles, both span (i.e., named entity recognition) and normalization (i.e., entity linking) using MeSH. For the Chemical Indexing task, participants identified which chemicals should be indexed as topics for the article's topic terms in the NLM article and indexing, i.e., appear in the listing of MeSH terms for the document. This manuscript summarizes the BioCreative NLM-Chem track. We received a total of 88 submissions from 17 teams worldwide. The highest performance achieved for the Chemical Identification task was 0.8672 f-score (0.8759 precision, 0.8587 recall) for strict NER performance and 0.8136 f-score (0.8621 precision, 0.7702 recall) for strict normalization performance. The highest performance achieved for the Chemical Indexing task was 0.4825 f-score (0.4397 precision, 0.5344 recall). The NLM-Chem track dataset and other challenge materials are publicly available at https://ftp.ncbi.nlm.nih.gov/pub/lu/BC7-NLM-Chem-track/. This community challenge demonstrated 1) the current substantial achievements in deep learning technologies can be utilized to further improve automated prediction accuracy, and 2) the Chemical Indexing task is substantially more challenging. We look forward to further development of biomedical text mining methods to respond to the rapid growth of biomedical literature. 

Keywords\u2014 biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; text mining; chemical entity recognition; chemical indexing", "which description ?", "the BioCreative NLM-Chem track to call for a community effort to address automated chemical entity recognition in full-text articles. The track consisted of two tasks: 1) Chemical Identification task, and 2) Chemical Indexing prediction task. For the Chemical Identification task, participants were expected to predict with high accuracy all chemicals mentioned in recently published full-text articles, both span (i.e., named entity recognition) and normalization (i.e., entity linking) using MeSH. For the Chemical Indexing task, participants identified which chemicals should be indexed as topics for the article's topic terms in the NLM article and indexing, i.e., appear in the listing of MeSH terms for the document.", NaN, NaN], ["Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the \"Normalized Gene List\" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.", "which description ?", "The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse).", NaN, NaN], ["MOTIVATION The MEDLINE database of biomedical abstracts contains scientific knowledge about thousands of interacting genes and proteins. Automated text processing can aid in the comprehension and synthesis of this valuable information. The fundamental task of identifying gene and protein names is a necessary first step towards making full use of the information encoded in biomedical text. This remains a challenging task due to the irregularities and ambiguities in gene and protein nomenclature. We propose to approach the detection of gene and protein names in scientific abstracts as part-of-speech tagging, the most basic form of linguistic corpus annotation. RESULTS We present a method for tagging gene and protein names in biomedical text using a combination of statistical and knowledge-based strategies. 
This method incorporates automatically generated rules from a transformation-based part-of-speech tagger, and manually generated rules from morphological clues, low frequency trigrams, indicator terms, suffixes and part-of-speech information. Results of an experiment on a test corpus of 56K MEDLINE documents demonstrate that our method to extract gene and protein names can be applied to large sets of MEDLINE abstracts, without the need for special conditions or human experts to predetermine relevant subsets. AVAILABILITY The programs are available on request from the authors.", "which description ?", "We propose to approach the detection of gene and protein names in scientific abstracts as part-of-speech tagging, the most basic form of linguistic corpus annotation", 500.0, 665.0], ["NLTK, the Natural Language Toolkit, is a suite of open source program modules, tutorials and problem sets, providing ready-to-use computational linguistics courseware. NLTK covers symbolic and statistical natural language processing, and is interfaced to annotated corpora. Students augment and replace existing components, learn structured programming by example, and manipulate sophisticated models from the outset.", "which description ?", "NLTK, the Natural Language Toolkit, is a suite of open source program modules, tutorials and problem sets, providing ready-to-use computational linguistics courseware. NLTK covers symbolic and statistical natural language processing, and is interfaced to annotated corpora.", NaN, NaN], ["ABSTRACT Question: Do specific environmental conditions affect the performance and growth dynamics of one of the most invasive taxa (Carpobrotus aff. acinaciformis) on Mediterranean islands? Location: Four populations located on Mallorca, Spain. Methods: We monitored growth rates of main and lateral shoots of this stoloniferous plant for over two years (2002\u20132003), comparing two habitats (rocky coast vs. coastal dune) and two different light conditions (sun vs. shade). In one population of each habitat type, we estimated electron transport rate and the level of plant stress (maximal photochemical efficiency Fv/Fm) by means of chlorophyll fluorescence. Results: Main shoots of Carpobrotus grew at similar rates at all sites, regardless habitat type. However, growth rate of lateral shoots was greater in shaded plants than in those exposed to sunlight. Its high phenotypic plasticity, expressed in different allocation patterns in sun and shade individuals, and its clonal growth which promotes the continuous sea...", "which Specific traits ?", "Growth rates of main and lateral shoots", 268.0, 307.0], ["Invasiveness may result from genetic variation and adaptation or phenotypic plasticity, and genetic variation in fitness traits may be especially critical. Pennisetum setaceum (fountain grass, Poaceae) is highly invasive in Hawaii (HI), moderately invasive in Arizona (AZ), and less invasive in southern California (CA). In common garden experiments, we examined the relative importance of quantitative trait variation, precipitation, and phenotypic plasticity in invasiveness. In two very different environments, plants showed no differences by state of origin (HI, CA, AZ) in aboveground biomass, seeds/flower, and total seed number. Plants from different states were also similar within watering treatment. Plants with supplemental watering, relative to unwatered plants, had greater biomass, specific leaf area (SLA), and total seed number, but did not differ in seeds/flower. 
Progeny grown from seeds produced under different watering treatments showed no maternal effects in seed mass, germination, biomass or SLA. High phenotypic plasticity, rather than local adaptation is likely responsible for variation in invasiveness. Global change models indicate that temperature and precipitation patterns over the next several decades will change, although the direction of change is uncertain. Drier summers in southern California may retard further invasion, while wetter summers may favor the spread of fountain grass.", "which Specific traits ?", "Biomass", 590.0, 597.0], ["Background: In temperate mountains, most non-native plant species reach their distributional limit somewhere along the elevational gradient. However, it is unclear if growth limitations can explain upper range limits and whether phenotypic plasticity or genetic changes allow species to occupy a broad elevational gradient. Aims: We investigated how non-native plant individuals from different elevations responded to growing season temperatures, which represented conditions at the core and margin of the elevational distributions of the species. Methods: We recorded the occurrence of nine non-native species in the Swiss Alps and subsequently conducted a climate chamber experiment to assess growth rates of plants from different elevations under different temperature treatments. Results: The elevational limit observed in the field was not related to the species' temperature response in the climate chamber experiment. Almost all species showed a similar level of reduction in growth rates under lower temperatures independent of the upper elevational limit of the species' distribution. For two species we found indications for genetic differentiation among plants from different elevations. Conclusions: We conclude that factors other than growing season temperatures, such as extreme events or winter mortality, might shape the elevational limit of non-native species, and that ecological filtering might select for genotypes that are phenotypically plastic.", "which Specific traits ?", "Growth rates of plants from different elevations under different temperature treatments", 695.0, 782.0], ["Due to altered ecological and evolutionary contexts, we might expect the responses of alien plants to environmental gradients, as revealed through patterns of trait variation, to differ from those of the same species in their native range. In particular, the spread of alien plant species along such gradients might be limited by their ability to establish clinal patterns of trait variation. We investigated trends in growth and reproductive traits in natural populations of eight invasive Asteraceae forbs along altitudinal gradients in their native and introduced ranges (Valais, Switzerland, and Wallowa Mountains, Oregon, USA). Plants showed similar responses to altitude in both ranges, being generally smaller and having fewer inflorescences but larger seeds at higher altitudes. However, these trends were modified by region-specific effects that were independent of species status (native or introduced), suggesting that any differential performance of alien species in the introduced range cannot be interpreted without a fully reciprocal approach to test the basis of these differences. Furthermore, we found differences in patterns of resource allocation to capitula among species in the native and the introduced areas. 
These suggest that the mechanisms underlying trait variation, for example, increasing seed size with altitude, might differ between ranges. The rapid establishment of clinal patterns of trait variation in the new range indicates that the need to respond to altitudinal gradients, possibly by local adaptation, has not limited the ability of these species to invade mountain regions. Studies are now needed to test the underlying mechanisms of altitudinal clines in traits of alien species.", "which Specific traits ?", "Trends in growth and reproductive traits", 409.0, 449.0], ["Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and \u201cdisorder\u201d and \u201ctransmembrane\u201d analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein\u2014binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. 
These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.", "which Has result ?", "attachment phase of the virus", 2149.0, 2178.0], ["ABSTRACT The technology behind big data, although still in its nascent stages, is inspiring many companies to hire data scientists and explore the potential of big data to support strategic initiatives, including developing new products and services. To better understand the skills and knowledge that are highly valued by industry for jobs within big data, this study reports on an analysis of 1216 job advertisements that contained \u201cbig data\u201d in the job title. Our results are presented within a conceptual framework of big data skills categories and confirm the multi-faceted nature of big data job skills. Our research also found that many big data job advertisements emphasize developing analytical information systems and that soft skills remain highly valued, in addition to the value placed on emerging hard technological skills.", "which Has result ?", "conceptual framework of big data skills categories", 498.0, 548.0], ["Petroleum hydrocarbons contamination of soil, sediments and marine environment associated with the inadvertent discharges of petroleum\u2013derived chemical wastes and petroleum hydrocarbons associated with spillage and other sources into the environment often pose harmful effects on human health and the natural environment, and have negative socio\u2013economic impacts in the oil\u2013producing host communities. In practice, plants and microbes have played a major role in microbial transformation and growth\u2013linked mineralization of petroleum hydrocarbons in contaminated soils and/or sediments over the past years. Bioremediation strategies has been recognized as an environmental friendly and cost\u2013effective alternative in comparison with the traditional physico-chemical approaches for the restoration and reclamation of contaminated sites. The success of any plant\u2013based remediation strategy depends on the interaction of plants with rhizospheric microbial populations in the surrounding soil medium and the organic contaminant. Effective understanding of the fate and behaviour of organic contaminants in the soil can help determine the persistence of the contaminant in the terrestrial environment, promote the success of any bioremediation approach and help develop a high\u2013level of risks mitigation strategies. In this review paper, we provide a clear insight into the role of plants and microbes in the microbial degradation of petroleum hydrocarbons in contaminated soil that have emerged from the growing body of bioremediation research and its applications in practice. In addition, plant\u2013microbe interactions have been discussed with respect to biodegradation of petroleum hydrocarbons and these could provide a better understanding of some important factors necessary for development of in situ bioremediation strategies for risks mitigation in petroleum hydrocarbon\u2013contaminated soil.", "which Has result ?", "Bioremediation strategies has been recognized as an environmental friendly and cost\u2013effective alternative in comparison with the traditional physico-chemical approaches for the restoration and reclamation of contaminated sites. The success of any plant\u2013based remediation strategy depends on the interaction of plants with rhizospheric microbial populations in the surrounding soil medium and the organic contaminant. 
", 607.0, 1024.0], ["ABSTRACT\nThe clinical performances of six molecular diagnostic tests and a rapid antigen test for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) were clinically evaluated for the diagnosis of coronavirus disease 2019 (COVID-19) in self-collected saliva. Saliva samples from 103 patients with laboratory-confirmed COVID-19 (15 asymptomatic and 88 symptomatic) were collected on the day of hospital admission. SARS-CoV-2 RNA in saliva was detected using a quantitative reverse transcription-PCR (RT-qPCR) laboratory-developed test (LDT), a cobas SARS-CoV-2 high-throughput system, three direct RT-qPCR kits, and reverse transcription\u2013loop-mediated isothermal amplification (RT-LAMP). The viral antigen was detected by a rapid antigen immunochromatographic assay. Of the 103 samples, viral RNA was detected in 50.5 to 81.6% of the specimens by molecular diagnostic tests, and an antigen was detected in 11.7% of the specimens by the rapid antigen test. Viral RNA was detected at significantly higher percentages (65.6 to 93.4%) in specimens collected within 9 days of symptom onset than in specimens collected after at least 10 days of symptoms (22.2 to 66.7%) and in specimens collected from asymptomatic patients (40.0 to 66.7%). Self-collected saliva is an alternative specimen option for diagnosing COVID-19. The RT-qPCR LDT, a cobas SARS-CoV-2 high-throughput system, direct RT-qPCR kits (except for one commercial kit), and RT-LAMP showed sufficient sensitivities in clinical use to be selectively used in clinical settings and facilities. The rapid antigen test alone is not recommended for an initial COVID-19 diagnosis because of its low sensitivity.", "which Has result ?", "11.7", 947.0, 951.0], ["Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. 
We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.", "which Has result ?", "data import workflow", NaN, NaN], ["Knowledge graphs in manufacturing and production aim to make production lines more efficient and flexible with higher quality output. This makes knowledge graphs attractive for companies to reach Industry 4.0 goals. However, existing research in the field is quite preliminary, and more research effort on analyzing how knowledge graphs can be applied in the field of manufacturing and production is needed. Therefore, we have conducted a systematic literature review as an attempt to characterize the state-of-the-art in this field, i.e., by identifying existing research and by identifying gaps and opportunities for further research. We have focused on finding the primary studies in the existing literature, which were classified and analyzed according to four criteria: bibliometric key facts, research type facets, knowledge graph characteristics, and application scenarios. Besides, an evaluation of the primary studies has also been carried out to gain deeper insights in terms of methodology, empirical evidence, and relevance. As a result, we can offer a complete picture of the domain, which includes such interesting aspects as the fact that knowledge fusion is currently the main use case for knowledge graphs, that empirical research and industrial application are still missing to a large extent, that graph embeddings are not fully exploited, and that technical literature is fast-growing but still seems to be far from its peak.", "which Has result ?", " knowledge fusion is currently the main use case for knowledge graphs,", NaN, NaN], ["The development of electronic health records, wearable devices, health applications and Internet of Things (IoT)-empowered smart homes is promoting various applications. It also makes health self-management much more feasible, which can partially mitigate one of the challenges that the current healthcare system is facing. Effective and convenient self-management of health requires the collaborative use of health data and home environment data from different services, devices, and even open data on the Web. Although health data interoperability standards including HL7 Fast Healthcare Interoperability Resources (FHIR) and IoT ontology including Semantic Sensor Network (SSN) have been developed and promoted, it is impossible for all the different categories of services to adopt the same standard in the near future. This study presents a method that applies Semantic Web technologies to integrate the health data and home environment data from heterogeneously built services and devices. We propose a Web Ontology Language (OWL)-based integration ontology that models health data from HL7 FHIR standard implemented services, normal Web services and Web of Things (WoT) services and Linked Data together with home environment data from formal ontology-described WoT services. It works on the resource integration layer of the layered integration architecture. An example use case with a prototype implementation shows that the proposed method successfully integrates the health data and home environment data into a resource graph. 
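As an illustrative aside to the integration result above, a minimal sketch of merging a FHIR-style health observation and a WoT-style home sensor reading into one RDF resource graph is shown below; the ex: namespace and property names are assumptions for the example, not the paper's ontology (SOSA is the W3C SSN/SOSA observation vocabulary).

```python
# Illustrative sketch (not the paper's implementation) of an integrated
# RDF resource graph combining health data and home environment data.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/integration#")   # hypothetical namespace
SOSA = Namespace("http://www.w3.org/ns/sosa/")

g = Graph()
g.bind("ex", EX)
g.bind("sosa", SOSA)

# Health observation (e.g., mapped from an HL7 FHIR Observation resource).
g.add((EX.hr1, RDF.type, SOSA.Observation))
g.add((EX.hr1, SOSA.hasSimpleResult, Literal(72)))      # heart rate, bpm

# Home environment observation coming from a WoT service.
g.add((EX.temp1, RDF.type, SOSA.Observation))
g.add((EX.temp1, SOSA.hasSimpleResult, Literal(21.5)))  # room temperature, C

# Ontological links tying both observations to the same person/context.
g.add((EX.alice, EX.hasObservation, EX.hr1))
g.add((EX.alice, EX.hasObservation, EX.temp1))

print(g.serialize(format="turtle"))
```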
The integrated data are annotated with semantics and ontological links, which make them machine-understandable and cross-system reusable.", "which Has result ?", "successfully integrates the health data and home environment data into a resource graph", 1450.0, 1537.0], ["Coastal safety may be influenced by climate change, as changes in extreme surge levels and wave extremes may increase the vulnerability of dunes and other coastal defenses. In the North Sea, an area already prone to severe flooding, these high surge levels and waves are generated by low atmospheric pressure and severe wind speeds during storm events. As a result of the geometry of the North Sea, not only the maximum wind speed is relevant, but also wind direction. Climate change could change maximum wind conditions, with potentially negative effects for coastal safety. Here, we use an ensemble of 12 Coupled Model Intercomparison Project Phase 5 (CMIP5) General Circulation Models (GCMs) and diagnose the effect of two climate scenarios (rcp4.5 and rcp8.5) on annual maximum wind speed, wind speeds with lower return frequencies, and the direction of these annual maximum wind speeds. The 12 selected CMIP5 models do not project changes in annual maximum wind speed and in wind speeds with lower return frequencies; however, we do find an indication that the annual extreme wind events are coming more often from western directions. Our results are in line with the studies based on CMIP3 models and do not confirm the statement based on some reanalysis studies that there is a climate\u2010change\u2010related upward trend in storminess in the North Sea area.", "which Has result ?", "n indication that the annual extreme wind events are coming more often from western directions", NaN, NaN], ["Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and \u201cdisorder\u201d and \u201ctransmembrane\u201d analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. 
Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein\u2014binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.", "which Has result ?", "codon mutation", 1901.0, 1915.0], ["Research and development activities are one of the main drivers for progress, economic growth and wellbeing in many societies. This article proposes a text mining approach applied to a large amount of data extracted from job vacancies advertisements, aiming to shed light on the main skills and demands that characterize first stage research positions in Europe. Results show that data handling and processing skills are essential for early career researchers, irrespective of their research field. Also, as many analyzed first stage research positions are connected to universities, they include teaching activities to a great extent. Management of time, risks, projects, and resources plays an important part in the job requirements included in the analyzed advertisements. Such information is relevant not only for early career researchers who perform job selection taking into account the match of possessed skills with the required ones, but also for educational institutions that are responsible for skills development of the future R&D professionals.", "which Has result ?", "data handling and processing skills are essential for early career researchers, irrespective of their research field", 381.0, 497.0], ["This paper presents the results of the two shared tasks associated with W-NUT 2015: (1) a text normalization task with 10 participants; and (2) a named entity tagging task with 8 participants. We outline the task, annotation process and dataset statistics, and provide a high-level overview of the participating systems for each shared task.", "which Total teams ?", "8 participants", 177.0, 191.0], ["This paper presents the results of the Twitter Named Entity Recognition shared task associated with W-NUT 2016: a named entity tagging task with 10 teams participating. 
We outline the shared task, annotation process and dataset statistics, and provide a high-level overview of the participating systems for each shared task.", "which Total teams ?", "10 teams", 145.0, 153.0], ["Considering recent progress in NLP, deep learning techniques and biomedical language models there is a pressing need to generate annotated resources and comparable evaluation scenarios that enable the development of advanced biomedical relation extraction systems that extract interactions between drugs/chemical entities and genes, proteins or miRNAs. Building on the results and experience of the CHEMDNER, CHEMDNER patents and ChemProt tracks, we have posed the DrugProt track at BioCreative VII. The DrugProt track focused on the evaluation of automatic systems able to extract 13 different types of drug-genes/protein relations of importance to understand gene regulatory and pharmacological mechanisms. The DrugProt track addressed regulatory associations (direct/indirect, activator/inhibitor relations), certain types of binding associations (antagonist and agonist relations) as well as metabolic associations (substrate or product relations). To promote development of novel tools and offer a comparative evaluation scenario we have released 61,775 manually annotated gene mentions, 65,561 chemical and drug mentions and a total of 24,526 relationships manually labeled by domain experts. A total of 30 teams submitted results for the DrugProt main track, while 9 teams submitted results for the large-scale text mining subtrack that required processing of over 2.3 million records. Teams obtained very competitive results, with predictions reaching f-measures of over 0.92 for some relation types (antagonist) and f-measures across all relation types close to 0.8. INTRODUCTION Among the most relevant biological and pharmacological relation types are those that involve (a) chemical compounds and drugs as well as (b) gene products including genes, proteins, miRNAs. A variety of associations between chemicals and genes/proteins are described in the biomedical literature, and there is a growing interest in facilitating a more systematic extraction of these relations from the literature, either for manual database curation initiatives or to generate large knowledge graphs of importance for drug discovery, drug repurposing, building regulatory or interaction networks or to characterize off-target interactions of drugs that might be of importance to understand better adverse drug reactions. At BioCreative VI, the ChemProt track tried to promote the development of novel systems between chemicals and genes for groups of biologically related association types (ChemProt track relation groups or CPRs). Although the obtained results did have a considerable impact in the development and evaluation of new biomedical relation extraction systems, a limitation of grouping more specific relation types into broader groups was the difficulty to directly exploit the results for database curation efforts and biomedical knowledge graph mining application scenarios. The considerable interest in the integration of chemical and biomedical data for drug-discovery purposes, together with the ongoing curation of relationships between biological and chemical entities from scientific publications and patents due to the recent COVID-19 pandemic, motivated the DrugProt track of BioCreative VII, which proposed using more granular relation types.
In order to facilitate the development of more granular relation extraction systems, large manually annotated corpora are needed. Those corpora should include high-quality manually labelled entity mentions together with exhaustive relation annotations generated by domain experts. TRACK AND CORPUS DESCRIPTION Corpus description To carry out the DrugProt track at BioCreative VII, we have released a large manually labelled corpus including annotations of mentions of chemical compounds and drugs as well as genes, proteins and miRNAs. Domain experts with experience in biomedical literature annotation and database curation annotated all abstracts by hand using the BRAT annotation interface. The manual labeling of chemicals and genes was done in separate steps and by different experts to avoid introducing biases during the text annotation process. The manual tagging of entity mentions of chemicals and drugs as well as genes, proteins and miRNAs was done following a carefully designed annotation process and in line with publicly released annotation guidelines. Gene/protein entity mentions were manually mapped to their corresponding biological database identifiers whenever possible and classified as either normalizable to databases (tag: GENE-Y) or non normalizable mentions (GENE-N). Teams that participated at the DrugProt track were only provided with this classification of gene mentions and not the actual database identifier to avoid usage of external knowledge bases for producing their predictions. The corpus construction process required first annotating exhaustively all chemical and gene mentions (phase 1). Afterwards the relation annotation phase followed (phase 2), where relationships between these two types of entities had to be labeled according to publicly available annotation guidelines. Thus, to facilitate the annotation of chemical-protein interactions, the DrugProt track organizers constructed very granular relation annotation rules described in a 33-page annotation guidelines document. These guidelines were refined during an iterative process based on the annotation of sample documents. The guidelines provided the basic details of the chemical-protein interaction annotation task and the conventions that had to be followed during the corpus construction process. They incorporated suggestions made by curators as well as observations of annotation inconsistencies encountered when comparing results from different human curators. In brief, DrugProt interactions covered direct interactions (when a physical contact existed between a chemical/drug and a gene/protein) as well as indirect regulatory interactions that alter either the function or the quantity of the gene/gene product. The aim of the iterative manual annotation cycle was to improve the quality and consistency of the guidelines. During the planning of the guidelines some rules had to be reformulated to make them more explicit and clear, and additional rules were added wherever necessary to better cover the practical annotation scenario and to be more complete. The manual annotation task basically consisted of manually labeling or marking the interactions through a customized BRAT web interface, given the article abstracts as content. Figure 1 summarizes the DrugProt relation types included in the annotation guidelines. Fig. 1. Overview of the DrugProt relation type hierarchy. The corpus annotation carried out for the DrugProt track was exhaustive for all the types of interactions previously specified.
This implied that mentions of other kinds of relationships between chemicals and genes (e.g. phenotypic and biological responses) were not manually labelled. Moreover, the DrugProt relations are directed in the sense that only relations of “what a chemical does to a gene/protein” (chemical → gene/protein direction) were annotated, and not vice versa. To establish an easy-to-understand relation nomenclature and avoid redundant class definitions, we reviewed several chemical repositories that included chemical – biology information. We revised DrugBank, the Therapeutic Targets Database (TTD) and ChEMBL, assay normalization ontologies (BAO) and previously existing formalizations for the annotation of relationships: the Biological Expression Language (BEL), curation guidelines for transcription regulation interactions (DNA-binding transcription factor – target gene interaction) and SIGNOR, a database of causal relationships between biological entities. Each of these resources inspired the definition of the subclasses DIRECT REGULATOR (e.g. DrugBank, ChEMBL, BAO and SIGNOR) and the INDIRECT REGULATOR (e.g. BEL, curation guidelines for transcription regulation interactions and SIGNOR). For example, DrugBank relationships for drugs included a total of 22 definitions, some of them overlapping with CHEMPROT subclasses (e.g. “Inhibitor”, “Antagonist”, “Agonist”,...), some of them being regarded as highly specific for the purpose of this task (e.g. “intercalation”, “cross-linking/alkylation”) or referring to biological roles (e.g. “Antibody”, “Incorporation into and Destabilization”) and others, partially overlapping between them (e.g. “Binder” and “Ligand”), that were merged into a single class. Concerning indirect regulatory aspects, the five classes of causal relationships between a subject and an object term defined by BEL (“decreases”, “directlyDecreases”, “increases”, “directlyIncreases” and “causesNoChange”) were highly inspiring. Subclass definitions of pharmacological modes of action were defined according to the IUPHAR/BPS Guide to Pharmacology in 2016. For the DrugProt track a very granular chemical-protein relation annotation was carried out, with the aim to cover most of the relations that are of importance from a biochemical and pharmacological/biomedical perspective. Nevertheless, for the DrugProt track only a total of 13 relation types were used, keeping those that had enough training instances/examples and sufficient manual annotation consistency. The final list of relation types used for this shared task was: INDIRECT-DOWNREGULATOR, INDIRECT-UPREGULATOR, DIRECT-REGULATOR, ACTIVATOR, INHIBITOR, AGONIST, ANTAGONIST, AGONIST-ACTIVATOR, AGONIST-INHIBITOR, PRODUCT-OF, SUBSTRATE, SUBSTRATE_PRODUCT-OF or PART-OF. The DrugProt corpus was split randomly into training, development and test sets. We also included a background and large-scale background collection of records that were automatically annotated with drugs/chemicals and genes/proteins/miRNAs using an entity tagger trained on the manual DrugProt entity mentions. The background collections were merged with the test set to be able to get team predictions also for these records.
Table 1 shows a su", "which Total teams ?", "9 teams submitted results for the large-scale text mining subtrack", 1272.0, 1338.0], ["Amid the intensive competition among global industries, the relationship between manufacturers and suppliers has turned from antagonist to cooperative. Through partnerships, both parties can be mutually benefited, and the key factor that maintains such relationship lies in how manufacturers select proper suppliers. The purpose of this study is to explore the key factors considered by manufacturers in supplier selection and the relationships between these factors. Through a literature review, eight supplier selection factors, comprising price response capability, quality management capability, technological capability, delivery capability, flexible capability, management capability, commercial image, and financial capability are derived. Based on the theoretic foundation proposed by previous researchers, a causal model of supplier selection factors is further constructed. The results of a survey on high-tech industries are used to verify the relationships between the eight factors using structural equation modelling (SEM). Based on the empirical results, conclusions and suggestions are finally proposed as a reference for manufacturers and suppliers.", "which Critical success factors ?", "quality management capability", 569.0, 598.0], ["Amid the intensive competition among global industries, the relationship between manufacturers and suppliers has turned from antagonist to cooperative. Through partnerships, both parties can be mutually benefited, and the key factor that maintains such relationship lies in how manufacturers select proper suppliers. The purpose of this study is to explore the key factors considered by manufacturers in supplier selection and the relationships between these factors. Through a literature review, eight supplier selection factors, comprising price response capability, quality management capability, technological capability, delivery capability, flexible capability, management capability, commercial image, and financial capability are derived. Based on the theoretic foundation proposed by previous researchers, a causal model of supplier selection factors is further constructed. The results of a survey on high-tech industries are used to verify the relationships between the eight factors using structural equation modelling (SEM). Based on the empirical results, conclusions and suggestions are finally proposed as a reference for manufacturers and suppliers.", "which Critical success factors ?", "Price response capability", 542.0, 567.0], ["Problem statement: Based on a literature survey, an attempt has been made in this study to develop a framework for identifying the success factors. In addition, a list of key success factors is presented. The emphasis is on success factors dealing with breadth of services, internationalization of operations, industry focus, customer focus, 3PL experience, relationship with 3PLs, investment in quality assets, investment in information systems, availability of skilled professionals and supply chain integration. In developing the factors, an effort has been made to align and relate them to financial performance. Conclusion/Recommendations: We found success factors “relationship with 3PLs and skilled logistics professionals” would substantially improve financial performance metric profit growth.
Our findings also contribute to managerial practice by offering a benchmarking tool that can be used by managers in the 3PL service provider industry in India.", "which Critical success factors ?", "supply chain integration", 489.0, 513.0], ["Purpose – The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach – In total, 13 CSFs for SD are identified (i.e. long‐term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD. Findings – The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long‐term strategic goal is found to be ...", "which Critical success factors ?", "Top management commitment", 288.0, 313.0], ["Extranet is an enabler/system that enriches the information service quality in e-supply chain. This paper uses factor analysis to determine four extranet success factors: system quality, information quality, service quality, and work performance quality. A critical analysis of areas that require improvement is also conducted.", "which Critical success factors ?", "work performance quality", 229.0, 253.0], ["The purpose of this paper is to shed the light on the critical success factors that lead to high supply chain performance outcomes in a Malaysian manufacturing company. The critical success factors consist of relationship with customer and supplier, information communication and technology (ICT), material flow management, corporate culture and performance measurement. A questionnaire was the main instrument for the study and it was distributed to 84 staff from departments of purchasing, planning, logistics and operation. Data analysis was conducted by employing descriptive analysis (mean and standard deviation), reliability analysis, Pearson correlation analysis and multiple regression. The findings show that relationships exist between relationship with customer and supplier, ICT, material flow management, performance measurement and supply chain management (SCM) performance, but not for corporate culture. Forming a good customer and supplier relationship is the main predictor of SCM performance, followed by performance measurement, material flow management and ICT. It is recommended that future studies determine additional success factors that are pertinent to firms' current SCM strategies and directions, competitive advantages and missions. Logic suggests that further studies include more geographical data coverage, other nature of businesses and research instruments.
Key words: Supply chain management, critical success factor.", "which Critical success factors ?", "material flow management", 298.0, 322.0], ["Purpose – The aim of this paper is threefold: first, to examine the content of supply chain quality management (SCQM); second, to identify the structure of SCQM; and third, to show ways for finding improvement opportunities and organizing individual institution's resources/actions into collective performance outcomes. Design/methodology/approach – To meet the goals of this work, the paper uses abductive reasoning and two qualitative methods: content analysis and formal concept analysis (FCA). Primary data were collected from both original design manufacturers (ODMs) and original equipment manufacturers (OEMs) in Taiwan. Findings – According to the qualitative empirical study, modern enterprises need to pay immediate attention to the following two pathways: a compliance approach and a voluntary approach. For the former, three strategic content variables are identified: training programs, ISO, and supplier quality audit programs. As for initiating a voluntary effort, modern lead firms need to instill “motivat...", "which Critical success factors ?", "quality management", 92.0, 110.0], ["Amid the intensive competition among global industries, the relationship between manufacturers and suppliers has turned from antagonist to cooperative. Through partnerships, both parties can be mutually benefited, and the key factor that maintains such relationship lies in how manufacturers select proper suppliers. The purpose of this study is to explore the key factors considered by manufacturers in supplier selection and the relationships between these factors. Through a literature review, eight supplier selection factors, comprising price response capability, quality management capability, technological capability, delivery capability, flexible capability, management capability, commercial image, and financial capability are derived. Based on the theoretic foundation proposed by previous researchers, a causal model of supplier selection factors is further constructed. The results of a survey on high-tech industries are used to verify the relationships between the eight factors using structural equation modelling (SEM). Based on the empirical results, conclusions and suggestions are finally proposed as a reference for manufacturers and suppliers.", "which Critical success factors ?", "management capability", 577.0, 598.0], ["Purpose – The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach – In total, 13 CSFs for SD are identified (i.e. long‐term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD. Findings – The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications.
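As an illustrative aside, the FAHP evaluation described above can be sketched as follows: pairwise comparisons of CSFs are triangular fuzzy numbers that are defuzzified and converted into weights. The matrix values, the CSF names, and the simplified centroid-plus-geometric-mean weighting below are made-up assumptions, not the paper's actual extent-analysis computation.

```python
# Toy FAHP sketch: pairwise comparisons of three hypothetical CSFs as
# triangular fuzzy numbers (l, m, u), centroid defuzzification, then
# geometric-mean (Buckley-style) weights. Values are made-up examples.
import numpy as np

csfs = ["top management commitment", "incentives", "information sharing"]
F = np.array([
    [(1, 1, 1),           (2, 3, 4),     (4, 5, 6)],
    [(1/4, 1/3, 1/2),     (1, 1, 1),     (1, 2, 3)],
    [(1/6, 1/5, 1/4),     (1/3, 1/2, 1), (1, 1, 1)],
])  # shape (3, 3, 3): one fuzzy judgement per CSF pair

crisp = F.mean(axis=2)                       # centroid (l + m + u) / 3
gm = crisp.prod(axis=1) ** (1 / len(csfs))   # row geometric means
weights = gm / gm.sum()                      # normalized impact weights
for name, w in zip(csfs, weights):
    print(f"{name}: {w:.3f}")
```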
The long\u2010term strategic goal is found to be ...", "which Critical success factors ?", "environmental readiness", 459.0, 482.0], ["This paper reports the results of a survey on the critical success factors (CSFs) of web-based supply-chain management systems (WSCMS). An empirical study was conducted and an exploratory factor analysis of the survey data revealed five major dimensions of the CSFs for WSCMS implementation, namely (1) communication, (2) top management commitment, (3) data security, (4) training and education, and (5) hardware and software reliability. The findings of the results provide insights for companies using or planning to use WSCMS.", "which Critical success factors ?", "training and education", 372.0, 394.0], ["Purpose \u2013 The purpose of this paper is to determine those factors perceived by users to influence the successful on\u2010going use of e\u2010commerce systems in business\u2010to\u2010business (B2B) buying and selling transactions through examination of the views of individuals acting in both purchasing and selling roles within the UK National Health Service (NHS) pharmaceutical supply chain.Design/methodology/approach \u2013 Literature from the fields of operations and supply chain management (SCM) and information systems (IS) is used to determine candidate factors that might influence the success of the use of e\u2010commerce. A questionnaire based on these is used for primary data collection in the UK NHS pharmaceutical supply chain. Factor analysis is used to analyse the data.Findings \u2013 The paper yields five composite factors that are perceived by users to influence successful e\u2010commerce use. \u201cSystem quality,\u201d \u201cinformation quality,\u201d \u201cmanagement and use,\u201d \u201cworld wide web \u2013 assurance and empathy,\u201d and \u201ctrust\u201d are proposed as potentia...", "which Critical success factors ?", "trust", 989.0, 994.0], ["This study is the first attempt that assembled published academic work on critical success factors (CSFs) in supply chain management (SCM) fields. The purpose of this study are to review the CSFs in SCM and to uncover the major CSFs that are apparent in SCM literatures. This study apply literature survey techniques from published CSFs studies in SCM. A collection of 42 CSFs studies in various SCM fields are obtained from major databases. The search uses keywords such as as supply chain management, critical success factors, logistics management and supply chain drivers and barriers. From the literature survey, four major CSFs are proposed. The factors are collaborative partnership, information technology, top management support and human resource. It is hoped that this review will serve as a platform for future research in SCM and CSFs studies. Plus, this study contribute to existing SCM knowledge and further appraise the concept of CSFs.", "which Critical success factors ?", "collaborative partnership", 663.0, 688.0], ["This paper reports the results of a survey on the critical success factors (CSFs) of web-based supply-chain management systems (WSCMS). An empirical study was conducted and an exploratory factor analysis of the survey data revealed five major dimensions of the CSFs for WSCMS implementation, namely (1) communication, (2) top management commitment, (3) data security, (4) training and education, and (5) hardware and software reliability. 
The findings of the results provide insights for companies using or planning to use WSCMS.", "which Critical success factors ?", "Top management commitment", 322.0, 347.0], ["Extranet is an enabler/system that enriches the information service quality in e-supply chain. This paper uses factor analysis to determine four extranet success factors: system quality, information quality, service quality, and work performance quality. A critical analysis of areas that require improvement is also conducted.", "which Critical success factors ?", "service quality", 60.0, 75.0], ["Purpose – The purpose of this paper is to explore critical factors for implementing green supply chain management (GSCM) practice in the Taiwanese electrical and electronics industries relative to European Union directives.Design/methodology/approach – A tentative list of critical factors of GSCM was developed based on a thorough and detailed analysis of the pertinent literature. The survey questionnaire contained 25 items, developed based on the literature and interviews with three industry experts, specifically quality and product assurance representatives. A total of 300 questionnaires were mailed out, and 87 were returned, of which 84 were valid, representing a response rate of 28 percent. Using the data collected, the identified critical factors were performed via factor analysis to establish reliability and validity.Findings – The results show that 20 critical factors were extracted into four dimensions, which denominated supplier management, product recycling, organization involvement and life cycl...", "which Critical success factors ?", "Supplier management", 942.0, 961.0], ["Objectives: To estimate the prevalence and incidence of epilepsy in Italy using a national database of general practitioners (GPs). Methods: The Health Search CSD Longitudinal Patient Database (HSD) has been established in 1998 by the Italian College of GPs. Participants were 700 GPs, representing a population of 912,458. For each patient, information on age and sex, EEG, CT scan, and MRI was included. Prevalent cases with a diagnosis of ‘epilepsy’ (ICD9CM: 345*) were selected in the 2011 population. Incident cases of epilepsy were identified in 2011 by excluding patients diagnosed for epilepsy and convulsions and those with EEG, CT scan, MRI prescribed for epilepsy and/or convulsions in the previous years. Crude and standardized (Italian population) prevalence and incidence were calculated. Results: Crude prevalence of epilepsy was 7.9 per 1,000 (men 8.1; women 7.7). The highest prevalence was in patients <25 years and ≥75 years. The incidence of epilepsy was 33.5 per 100,000 (women 35.3; men 31.5). The highest incidence was in women <25 years and in men 75 years or older. Conclusions: Prevalence and incidence of epilepsy in this study were similar to those of other industrialized countries. HSD appears as a reliable data source for the surveillance of epilepsy in Italy. © 2014 S. Karger AG, Basel", "which Country of study ?", "Italy", 68.0, 73.0], ["Abstract Purpose: The paper analyzes factors that affect the likelihood of adoption of different agriculture-related information sources by farmers. Design/Methodology/Approach: The paper links the theoretical understanding of the existing multiple sources of information that farmer use, with the empirical model to analyze the factors that affect the farmer's adoption of different agriculture-related information sources.
The analysis is done using a multivariate probit model and primary survey data of 1,200 farmer households of five Indo-Gangetic states of India, covering 120 villages. Findings: The results of the study highlight that farmer's age, education level and farm size influence farmer's behaviour in selecting different sources of information. The results show that farmers use multiple information sources, that may be complementary or substitutes to each other and this also implies that any single source does not satisfy all information needs of the farmer. Practical implication: If we understand the likelihood of farmer's choice of source of information then direction can be provided and policies can be developed to provide information through those sources in targeted regions with the most effective impact. Originality/Value: Information plays a key role in a farmer's life by enhancing their knowledge and strengthening their decision-making ability. Farmers use multiple sources of information as no one source is sufficient in itself.", "which Country of study ?", "India", 563.0, 568.0], ["Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and \u201cfeeling left out\u201d can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.", "which Country of study ?", "Germany", 730.0, 737.0], ["ABSTRACT This study has investigated farm households' simultaneous use of social networks, field extension, traditional media, and modern information and communication technologies (ICTs) to access information on cotton crop production. The study was based on a field survey, conducted in Punjab, Pakistan. Data were collected from 399 cotton farm households using the multistage sampling technique. Important combinations of information sources were found in terms of their simultaneous use to access information. The study also examined the factors influencing the use of various available information sources. A multivariate probit model was used considering the correlation among the use of social networks, field extension, traditional media, and modern ICTs. The findings indicated the importance of different socioeconomic and institutional factors affecting farm households' use of available information sources on cotton production. 
Important policy conclusions are drawn based on findings.", "which Country of study ?", "Pakistan", 297.0, 305.0], ["Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. Shorter serial intervals lead to lower estimates of R0, which suggest that half of all secondary infections should be prevented to control spread.", "which Study location ?", "Singapore", 297.0, 306.0], ["With ongoing research, increased information sharing and knowledge exchange, humanitarian organizations have an increasing amount of evidence at their disposal to support their decisions. Nevertheless, effectively building decisions on the increasing amount of insights and information remains challenging. At the individual, organizational, and environmental levels, various factors influence the use of evidence in the decision-making process. This research examined these factors and specifically their influence in a case-study on humanitarian organizations and their WASH interventions in Uganda. Interviewees reported several factors that impede the implementation of evidence-based decision making, revealing that, despite advancements in the past years, evidence-based information itself is relatively scarce, contradictory, and non-repeatable. Moreover, the information is often not connected or in a format that can be acted upon. Most importantly, however, are the human aspects and organizational settings that limit access to and use of supporting data, information, and evidence. This research shows the importance of considering these factors, in addition to investing in creating knowledge and technologies to support evidence-based decision-making.", "which Study location ?", "Uganda", 594.0, 600.0], ["Humanitarian disasters are highly dynamic and uncertain. The shifting situation, volatility of information, and the emergence of decision processes and coordination structures require humanitarian organizations to continuously adapt their operations. In this study, we aim to make headway in understanding adaptive decision-making in a dynamic interplay between changing situation, volatile information, and emerging coordination structures.
Starting from theories of sensemaking, coordination, and decision-making, we present two case studies that represent the response to two different humanitarian disasters: Typhoon Haiyan in the Philippines, and the Syria Crisis, one of the most prominent ongoing conflicts. For both, we highlight how volatile information and the urge to respond via sensemaking lead to fragmentation and misalignment of emergent coordination structures and decisions, which, in turn, slow down adaptation. Based on the case studies, we derive propositions and the need to continuously align laterally between different regions and hierarchically between operational and strategic levels to avoid persistence of coordination-information bubbles. We discuss the implications of our findings for the development of methods and theory to ensure that humanitarian operations management captures the critical role of information as a driver of emergent coordination and adaptive decisions.", "which Study location ?", "The Philippines", 631.0, 646.0], ["ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.", "which has smart city instance ?", "Chicago", 356.0, 363.0], ["Abstract. In 2017 we published a seminal research study in the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences about how smart city tools, solutions and applications underpinned historical and cultural heritage of cities at that time (Angelidou et al. 2017). We now return to investigate the progress that has been made during the past three years, and specifically whether the weak substantiation of cultural heritage in smart city strategies that we observed in 2017 has been improved. The newest literature suggests that smart cities should capitalize on local strengths and give prominence to local culture and traditions and provides a handful of solutions to this end. However, a more thorough examination of what has been actually implemented reveals a (still) rather immature approach. The smart city cases that were selected for the purposes of this research include Tarragona (Spain), Budapest (Hungary) and Karlsruhe (Germany). For each one we collected information regarding the overarching structure of the initiative, the positioning of cultural heritage and the inclusion of heritage-related smart city applications. We then performed a comparative analysis based on a simplified version of the Digital Strategy Canvas. 
Our findings suggest that a rich cultural heritage and a broader strategic focus on touristic branding and promotion are key ingredients of smart city development in this domain; this is a commonality of all the investigated cities. Moreover, three different strategy architectures emerge, representing the different interplays among the smart city, cultural heritage and sustainable urban development. We conclude that a new generation of smart city initiatives is emerging, in which cultural heritage is of increasing importance. This generation tends to associate cultural heritage with social and cultural values, liveability and sustainable urban development.", "which has smart city instance ?", "Budapest (Hungary)", NaN, NaN], ["Abstract. In 2017 we published a seminal research study in the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences about how smart city tools, solutions and applications underpinned historical and cultural heritage of cities at that time (Angelidou et al. 2017). We now return to investigate the progress that has been made during the past three years, and specifically whether the weak substantiation of cultural heritage in smart city strategies that we observed in 2017 has been improved. The newest literature suggests that smart cities should capitalize on local strengths and give prominence to local culture and traditions and provides a handful of solutions to this end. However, a more thorough examination of what has been actually implemented reveals a (still) rather immature approach. The smart city cases that were selected for the purposes of this research include Tarragona (Spain), Budapest (Hungary) and Karlsruhe (Germany). For each one we collected information regarding the overarching structure of the initiative, the positioning of cultural heritage and the inclusion of heritage-related smart city applications. We then performed a comparative analysis based on a simplified version of the Digital Strategy Canvas. Our findings suggest that a rich cultural heritage and a broader strategic focus on touristic branding and promotion are key ingredients of smart city development in this domain; this is a commonality of all the investigated cities. Moreover, three different strategy architectures emerge, representing the different interplays among the smart city, cultural heritage and sustainable urban development. We conclude that a new generation of smart city initiatives is emerging, in which cultural heritage is of increasing importance. This generation tends to associate cultural heritage with social and cultural values, liveability and sustainable urban development.", "which has smart city instance ?", "Tarragona (Spain)", NaN, NaN], ["ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. 
However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.", "which has smart city instance ?", "PlanIT Valley", 253.0, 266.0], ["ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.", "which has smart city instance ?", "Masdar", 330.0, 336.0], ["Abstract A full description of the ModelE version of the Goddard Institute for Space Studies (GISS) atmospheric general circulation model (GCM) and results are presented for present-day climate simulations (ca. 1979). This version is a complete rewrite of previous models incorporating numerous improvements in basic physics, the stratospheric circulation, and forcing fields. Notable changes include the following: the model top is now above the stratopause, the number of vertical layers has increased, a new cloud microphysical scheme is used, vegetation biophysics now incorporates a sensitivity to humidity, atmospheric turbulence is calculated over the whole column, and new land snow and lake schemes are introduced. The performance of the model using three configurations with different horizontal and vertical resolutions is compared to quality-controlled in situ data, remotely sensed and reanalysis products. Overall, significant improvements over previous models are seen, particularly in upper-atmosphere te...", "which Earth System Model ?", "Atmosphere", 1007.0, 1017.0], ["The NCEP Climate Forecast System Reanalysis (CFSR) was completed for the 31-yr period from 1979 to 2009, in January 2010. The CFSR was designed and executed as a global, high-resolution coupled atmosphere\u2013ocean\u2013land surface\u2013sea ice system to provide the best estimate of the state of these coupled domains over this period. The current CFSR will be extended as an operational, real-time product into the future. New features of the CFSR include 1) coupling of the atmosphere and ocean during the generation of the 6-h guess field, 2) an interactive sea ice model, and 3) assimilation of satellite radiances by the Gridpoint Statistical Interpolation (GSI) scheme over the entire period. The CFSR global atmosphere resolution is ~38 km (T382) with 64 levels extending from the surface to 0.26 hPa. The global ocean's latitudinal spacing is 0.25\u00b0 at the equator, extending to a global 0.5\u00b0 beyond the tropics, with 40 levels to a depth of 4737 m. 
The global land surface model has four soil levels and the global sea ice m...", "which Earth System Model ?", "Atmosphere", 194.0, 204.0], ["The NCEP Climate Forecast System Reanalysis (CFSR) was completed for the 31-yr period from 1979 to 2009, in January 2010. The CFSR was designed and executed as a global, high-resolution coupled atmosphere\u2013ocean\u2013land surface\u2013sea ice system to provide the best estimate of the state of these coupled domains over this period. The current CFSR will be extended as an operational, real-time product into the future. New features of the CFSR include 1) coupling of the atmosphere and ocean during the generation of the 6-h guess field, 2) an interactive sea ice model, and 3) assimilation of satellite radiances by the Gridpoint Statistical Interpolation (GSI) scheme over the entire period. The CFSR global atmosphere resolution is ~38 km (T382) with 64 levels extending from the surface to 0.26 hPa. The global ocean's latitudinal spacing is 0.25\u00b0 at the equator, extending to a global 0.5\u00b0 beyond the tropics, with 40 levels to a depth of 4737 m. The global land surface model has four soil levels and the global sea ice m...", "which Earth System Model ?", "Sea Ice", 224.0, 231.0], ["4OASIS3.2\u20135 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations shows a warming drift of 0.35 \u00b0C in ACCESS1.0 and 0.04 \u00b0C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.", "which Earth System Model ?", "Land Surface", 706.0, 718.0], ["Abstract. The recently developed Norwegian Earth System Model (NorESM) is employed for simulations contributing to the CMIP5 (Coupled Model Intercomparison Project phase 5) experiments and the fifth assessment report of the Intergovernmental Panel on Climate Change (IPCC-AR5). In this manuscript, we focus on evaluating the ocean and land carbon cycle components of the NorESM, based on the preindustrial control and historical simulations. Many of the observed large scale ocean biogeochemical features are reproduced satisfactorily by the NorESM.
When compared to the climatological estimates from the World Ocean Atlas (WOA), the model-simulated temperature, salinity, oxygen, and phosphate distributions agree reasonably well in both the surface layer and deep water structure. However, the model simulates a relatively strong overturning circulation that leads to noticeable model-data bias, especially within the North Atlantic Deep Water (NADW). This strong overturning circulation slightly distorts the structure of the biogeochemical tracers at depth. Advancements in simulating the oceanic mixed layer depth with respect to the previous generation model particularly improve the surface tracer distribution as well as the upper ocean biogeochemical processes, particularly in the Southern Ocean. Consequently, near-surface ocean processes such as biological production and air\u2013sea gas exchange are in good agreement with climatological observations. The NorESM adopts the same terrestrial model as the Community Earth System Model (CESM1). It reproduces the general pattern of land-vegetation gross primary productivity (GPP) when compared to the observationally based values derived from the FLUXNET network of eddy covariance towers. While the model simulates well the vegetation carbon pool, the soil carbon pool is smaller by a factor of three relative to the observationally based estimates. The simulated annual mean terrestrial GPP and total respiration are slightly larger than observed, but the difference between the global GPP and respiration is comparable. Model-data bias in GPP is mainly simulated in the tropics (overestimation) and in high latitudes (underestimation). Within the NorESM framework, both the ocean and terrestrial carbon cycle models simulate a steady increase in carbon uptake from the preindustrial period to the present-day. The land carbon uptake is noticeably smaller than the observations, which is attributed to the strong nitrogen limitation formulated by the land model.", "which Earth System Model ?", "Ocean", 325.0, 330.0], ["The NCEP Climate Forecast System Reanalysis (CFSR) was completed for the 31-yr period from 1979 to 2009, in January 2010. The CFSR was designed and executed as a global, high-resolution coupled atmosphere\u2013ocean\u2013land surface\u2013sea ice system to provide the best estimate of the state of these coupled domains over this period. The current CFSR will be extended as an operational, real-time product into the future. New features of the CFSR include 1) coupling of the atmosphere and ocean during the generation of the 6-h guess field, 2) an interactive sea ice model, and 3) assimilation of satellite radiances by the Gridpoint Statistical Interpolation (GSI) scheme over the entire period. The CFSR global atmosphere resolution is ~38 km (T382) with 64 levels extending from the surface to 0.26 hPa. The global ocean's latitudinal spacing is 0.25\u00b0 at the equator, extending to a global 0.5\u00b0 beyond the tropics, with 40 levels to a depth of 4737 m. The global land surface model has four soil levels and the global sea ice m...", "which Earth System Model ?", "Land Surface", 211.0, 223.0], ["4OASIS3.2\u20135 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5).
This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations shows a warming drift of 0.35 \u00b0C in ACCESS1.0 and 0.04 \u00b0C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.", "which Earth System Model ?", "Ocean", 566.0, 571.0], ["The coordination of humanitarian relief, e.g. in a natural disaster or a conflict situation, is often complicated by a scarcity of data to inform planning. Remote sensing imagery, from satellites or drones, can give important insights into conditions on the ground, including in areas which are difficult to access. Applications include situation awareness after natural disasters, structural damage assessment in conflict, monitoring human rights violations or population estimation in settlements. We review machine learning approaches for automating these problems, and discuss their potential and limitations. We also provide a case study of experiments using deep learning methods to count the numbers of structures in multiple refugee settlements in Africa and the Middle East. We find that while high levels of accuracy are possible, there is considerable variation in the characteristics of imagery collected from different sensors and regions. In this, as in the other applications discussed in the paper, critical inferences must be made from a relatively small amount of pixel data. We, therefore, consider that using machine learning systems as an augmentation of human analysts is a reasonable strategy to transition from current fully manual operational pipelines to ones which are both more efficient and have the necessary levels of quality control. This article is part of a discussion meeting issue \u2018The growing ubiquity of algorithms in society: implications, impacts and innovations\u2019.", "which Study Area ?", "Africa", 756.0, 762.0], ["Gale Crater on Mars has a layered structure of deposits covering the Noachian/Hesperian boundary. Mineral identification and classification at this region can provide important constraints on the environment and geological evolution of Mars. Although the Curiosity rover has provided in-situ mineralogical analysis in Gale, it is restricted to small areas. The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) aboard the Mars Reconnaissance Orbiter (MRO), with enhanced spectral resolution, can provide more information at larger spatial and temporal scales.
In this paper, CRISM near-infrared spectral data are used to identify mineral classes and groups at the Martian Gale region. By using diagnostic absorption feature analysis in conjunction with the spectral angle mapper (SAM), detailed mineral species are identified at the Gale region, e.g., kaolinite, chlorites, smectite, jarosite, and northupite. The clay minerals' diversity in Gale Crater suggests variation in aqueous alteration. The detection of northupite suggests that the Gale region has experienced a climate change from moist conditions with mineral dissolution to a drier climate with water evaporation. The presence of the ferric sulfate mineral jarosite, formed through the oxidation of iron sulfides in acidic environments, shows that the Gale region experienced acidic, sulfur-rich conditions in its history.", "which Study Area ?", "Gale crater", 0.0, 11.0], ["Identification of Martian surface minerals can contribute to understanding Martian environmental change and geological evolution as well as exploring the habitability of Mars. The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) aboard the Mars Reconnaissance Orbiter (MRO) covers the visible to near-infrared wavelengths along with enhanced spectral resolution, which provides the ability to map the mineralogy of Mars. In this paper, based on spectrum matching, the mineral composition and geological evolution of the Martian Jezero and Holden craters are analyzed using MRO CRISM data. Hydrated minerals are detected in the studied areas, including carbonate, hydrated silicate and hydrated sulfate. These minerals suggested that the Holden and Jezero craters have experienced long-term water-rock interactions. Also, the diverse alteration minerals found in these regions indicate aqueous activities in multiple distinct environments.", "which Study Area ?", "Jezero crater", NaN, NaN], ["CRISM is a hyperspectral imager onboard the Mars Reconnaissance Orbiter (MRO; NASA, 2005) which has been acquiring data since November 2006 and has targeted hydrated minerals previously detected by OMEGA (Mars Express; ESA, 2003). The present study focuses on hydrated minerals detected with CRISM at high spatial resolution in the vicinity of Capri Chasma, a canyon of the Valles Marineris system. CRISM data were processed and coupled with MRO and other spacecraft data, in particular HiRISE (High Resolution Imaging Science Experiment, MRO) images. Detections revealed sulfates in abundance in Capri, especially linked to the interior layered deposits (ILD) that lie in the central part of the chasma. Both monohydrated and polyhydrated sulfates are found at different elevations and are associated with different layers. Monohydrated sulfates are widely detected over the massive light-toned cliffs of the ILD, whereas polyhydrated sulfates seem to form a basal and a top layer associated with lower-albedo deposits in flatter areas. Hydrated silicates (phyllosilicates or opaline silica) have also been detected very locally on two mounds about a few hundred meters in diameter at the bottom of the ILD cliffs. We suggest some formation models of these minerals that are consistent with our observations.", "which Study Area ?", " Capri Chasma", 343.0, 356.0], ["Butterfly monitoring and Red List programs in Switzerland rely on a combination of observations and collection records to document changes in species distributions through time.
While most butterflies can be identified using morphology, some taxa remain challenging, making it difficult to accurately map their distributions and develop appropriate conservation measures. In this paper, we explore the use of the DNA barcode (a fragment of the mitochondrial gene COI) as a tool for the identification of Swiss butterflies and forester moths (Rhopalocera and Zygaenidae). We present a national DNA barcode reference library including 868 sequences representing 217 out of 224 resident species, or 96.9% of Swiss fauna. DNA barcodes were diagnostic for nearly 90% of Swiss species. The remaining 10% represent cases of para- and polyphyly likely involving introgression or incomplete lineage sorting among closely related taxa. We demonstrate that integrative taxonomic methods incorporating a combination of morphological and genetic techniques result in a rate of species identification of over 96% in females and over 98% in males, higher than either morphology or DNA barcodes alone. We explore the use of the DNA barcode for exploring boundaries among taxa, understanding the geographical distribution of cryptic diversity and evaluating the status of purportedly endemic taxa. Finally, we discuss how DNA barcodes may be used to improve field practices and ultimately enhance conservation strategies.", "which Study Location ?", "Switzerland", 46.0, 57.0], ["This study summarizes results of a DNA barcoding campaign on German Diptera, involving analysis of 45,040 specimens. The resultant DNA barcode library includes records for 2,453 named species comprising a total of 5,200 barcode index numbers (BINs), including 2,700 COI haplotype clusters without species\u2010level assignment, so called \u201cdark taxa.\u201d Overall, 88 out of 117 families (75%) recorded from Germany were covered, representing more than 50% of the 9,544 known species of German Diptera. Until now, most of these families, especially the most diverse, have been taxonomically inaccessible. By contrast, within a few years this study provided an intermediate taxonomic system for half of the German Dipteran fauna, which will provide a useful foundation for subsequent detailed, integrative taxonomic studies. Using DNA extracts derived from bulk collections made by Malaise traps, we further demonstrate that species delineation using BINs and operational taxonomic units (OTUs) constitutes an effective method for biodiversity studies using DNA metabarcoding. As the reference libraries continue to grow, and gaps in the species catalogue are filled, BIN lists assembled by metabarcoding will provide greater taxonomic resolution. The present study has three main goals: (a) to provide a DNA barcode library for 5,200 BINs of Diptera; (b) to demonstrate, based on the example of bulk extractions from a Malaise trap experiment, that DNA barcode clusters, labelled with globally unique identifiers (such as OTUs and/or BINs), provide a pragmatic, accurate solution to the \u201ctaxonomic impediment\u201d; and (c) to demonstrate that interim names based on BINs and OTUs obtained through metabarcoding provide an effective method for studies on species\u2010rich groups that are usually neglected in biodiversity research projects because of their unresolved taxonomy.", "which Study Location ?", "Germany", 398.0, 405.0], ["The identification of Afrotropical hoverflies is very difficult because of limited recent taxonomic revisions and the lack of comprehensive identification keys. 
In order to assist in their identification, and to improve the taxonomy of this group, we constructed a reference dataset of 513 COI barcodes of 90 of the more common nominal species from Ghana, Togo, Benin and Nigeria (W Africa) and added ten publicly available COI barcodes from nine nominal Afrotropical species to this (total: 523 COI barcodes; 98 nominal species; 26 genera). The identification accuracy of this dataset was evaluated with three methods (K2P distance-based, Neighbor-Joining (NJ) / Maximum Likelihood (ML) analysis, and using SpeciesIdentifier). Results of the three methods were highly congruent and showed a high identification success. Nine species pairs showed a low (< 0.03) mean interspecific K2P distance that resulted in several incorrect identifications. A high (> 0.03) maximum intraspecific K2P distance was observed in eight species, and barcodes of these species did not always form single clusters in the NJ / ML analyses, which may indicate the occurrence of cryptic species. Optimal K2P thresholds to differentiate intra- from interspecific K2P divergence were highly different among the three subfamilies (Eristalinae: 0.037, Syrphinae: 0.06, Microdontinae: 0.007\u20130.02), and among the different genera, suggesting that optimal thresholds are better defined at the genus level. In addition to providing an alternative identification tool, our study indicates that DNA barcoding improves the taxonomy of Afrotropical hoverflies by selecting (groups of) taxa that deserve further taxonomic study, and by attributing the unknown sex to species for which only one of the sexes is known.", "which Study Location ?", " Nigeria", 371.0, 379.0], ["Mosquitoes are insects of the Diptera, Nematocera, and Culicidae families, some species of which are important disease vectors. Identifying mosquito species based on morphological characteristics is difficult, particularly the identification of specimens collected in the field as part of disease surveillance programs. Because of this difficulty, we constructed DNA barcodes of the cytochrome c oxidase subunit 1, the COI gene, for the more common mosquito species in China, including the major disease vectors. A total of 404 mosquito specimens were collected and assigned to 15 genera and 122 species and subspecies on the basis of morphological characteristics. Individuals of the same species grouped closely together in a Neighborhood-Joining tree based on COI sequence similarity, regardless of collection site. COI gene sequence divergence was approximately 30 times higher for species in the same genus than for members of the same species. Divergence in over 98% of congeneric species ranged from 2.3% to 21.8%, whereas divergence in conspecific individuals ranged from 0% to 1.67%. Cryptic species may be common and a few pseudogenes were detected.", "which Study Location ?", "China", 469.0, 474.0], ["Although central to much biological research, the identification of species is often difficult. The use of DNA barcodes, short DNA sequences from a standardized region of the genome, has recently been proposed as a tool to facilitate species identification and discovery. However, the effectiveness of DNA barcoding for identifying specimens in species-rich tropical biotas is unknown. Here we show that cytochrome c oxidase I DNA barcodes effectively discriminate among species in three Lepidoptera families from Area de Conservaci\u00f3n Guanacaste in northwestern Costa Rica.
We found that 97.9% of the 521 species recognized by prior taxonomic work possess distinctive cytochrome c oxidase I barcodes and that the few instances of interspecific sequence overlap involve very similar species. We also found two or more barcode clusters within each of 13 supposedly single species. Covariation between these clusters and morphological and/or ecological traits indicates overlooked species complexes. If these results are general, DNA barcoding will significantly aid species identification and discovery in tropical settings.", "which Study Location ?", "Costa Rica", 562.0, 572.0], ["Abstract A fundamental aspect of well performing cities is successful public spaces. For centuries, understanding these places has been limited to sporadic observations and laborious data collection. This study proposes a novel methodology to analyze citywide, discrete urban spaces using highly accurate anonymized telecom data and machine learning algorithms. Through superposition of human dynamics and urban features, this work aims to expose clear correlations between the design of the city and the behavioral patterns of its users. Geolocated telecom data, obtained for the state of Andorra, were initially analyzed to identify \u201cstay-points\u201d\u2014events in which cellular devices remain within a certain roaming distance for a given length of time. These stay-points were then further analyzed to find clusters of activity characterized in terms of their size, persistence, and diversity. Multivariate linear regression models were used to identify associations between the formation of these clusters and various urban features such as urban morphology or land-use within a 25\u201350 meters resolution. Some of the urban features that were found to be highly related to the creation of large, diverse and long-lasting clusters were the presence of service and entertainment amenities, natural water features, and the betweenness centrality of the road network; others, such as educational and park amenities were shown to have a negative impact. Ultimately, this study suggests a \u201creversed urbanism\u201d methodology: an evidence-based approach to urban design, planning, and decision making, in which human behavioral patterns are instilled as a foundational design tool for inferring the success rates of highly performative urban places.", "which Study Location ?", "Andorra", 590.0, 597.0], ["DNA barcodes were obtained for 81 butterfly species belonging to 52 genera from sites in north\u2010central Pakistan to test the utility of barcoding for their identification and to gain a better understanding of regional barcode variation. These species represent 25% of the butterfly fauna of Pakistan and belong to five families, although the Nymphalidae were dominant, comprising 38% of the total specimens. Barcode analysis showed that maximum conspecific divergence was 1.6%, while there was 1.7\u201314.3% divergence from the nearest neighbour species. Barcode records for 55 species showed <2% sequence divergence to records in the Barcode of Life Data Systems (BOLD), but only 26 of these cases involved specimens from neighbouring India and Central Asia. Analysis revealed that most species showed little incremental sequence variation when specimens from other regions were considered, but a threefold increase was noted in a few cases. There was a clear gap between maximum intraspecific and minimum nearest neighbour distance for all 81 species. 
Neighbour\u2010joining cluster analysis showed that members of each species formed a monophyletic cluster with strong bootstrap support. The barcode results revealed two provisional species that could not be clearly linked to known taxa, while 24 other species gained their first coverage. Future work should extend the barcode reference library to include all butterfly species from Pakistan as well as neighbouring countries to gain a better understanding of regional variation in barcode sequences in this topographically and climatically complex region.", "which Study Location ?", "Pakistan", 103.0, 111.0], ["Abstract. Carrion-breeding Sarcophagidae (Diptera) can be used to estimate the post-mortem interval in forensic cases. Difficulties with accurate morphological identifications at any life stage and a lack of documented thermobiological profiles have limited their current usefulness. The molecular-based approach of DNA barcoding, which utilises a 648-bp fragment of the mitochondrial cytochrome oxidase subunit I gene, was evaluated in a pilot study for discrimination between 16 Australian sarcophagids. The current study comprehensively evaluated barcoding for a larger taxon set of 588 Australian sarcophagids. In total, 39 of the 84 known Australian species were represented by 580 specimens, which includes 92% of potentially forensically important species. A further eight specimens could not be identified, but were included nonetheless as six unidentifiable taxa. A neighbour-joining tree was generated and nucleotide sequence divergences were calculated. All species except Sarcophaga (Fergusonimyia) bancroftorum, known for high morphological variability, were resolved as monophyletic (99.2% of cases), with bootstrap support of 100. Excluding S. bancroftorum, the mean intraspecific and interspecific variation was 1.12% and 2.81\u201311.23%, respectively, allowing for species discrimination. DNA barcoding was therefore validated as a suitable method for molecular identification of Australian Sarcophagidae, which will aid in the implementation of this fauna in forensic entomology.", "which Study Location ?", " Australia", NaN, NaN], ["Abstract How introduced plants, which may be locally adapted to specific climatic conditions in their native range, cope with the new abiotic conditions that they encounter as exotics is not well understood. In particular, it is unclear what role plasticity versus adaptive evolution plays in enabling exotics to persist under new environmental circumstances in the introduced range. We determined the extent to which native and introduced populations of St. John's Wort (Hypericum perforatum) are genetically differentiated with respect to leaf-level morphological and physiological traits that allow plants to tolerate different climatic conditions. In common gardens in Washington and Spain, and in a greenhouse, we examined clinal variation in percent leaf nitrogen and carbon, leaf \u03b413C values (as an integrative measure of water use efficiency), specific leaf area (SLA), root and shoot biomass, root/shoot ratio, total leaf area, and leaf area ratio (LAR). As well, we determined whether native European H. perforatum experienced directional selection on leaf-level traits in the introduced range and we compared, across gardens, levels of plasticity in these traits. In field gardens in both Washington and Spain, native populations formed latitudinal clines in percent leaf N.
In the greenhouse, native populations formed latitudinal clines in root and shoot biomass and total leaf area, and in the Washington garden only, native populations also exhibited latitudinal clines in percent leaf C and leaf \u03b413C. Traits that failed to show consistent latitudinal clines instead exhibited significant phenotypic plasticity. Introduced St. John's Wort populations also formed significant or marginally significant latitudinal clines in percent leaf N in Washington and Spain, percent leaf C in Washington, and in root biomass and total leaf area in the greenhouse. In the Washington common garden, there was strong directional selection among European populations for higher percent leaf N and leaf \u03b413C, but no selection on any other measured trait. The presence of convergent, genetically based latitudinal clines between native and introduced H. perforatum, together with previously published molecular data, suggest that native and exotic genotypes have independently adapted to a broad-scale variation in climate that varies with latitude.", "which Continent ?", "Europe", NaN, NaN], ["Both human-related and natural factors can affect the establishment and distribution of exotic species. Understanding the relative role of the different factors has important scientific and applied implications. Here, we examined the relative effect of human-related and natural factors in determining the richness of exotic bird species established across Europe. Using hierarchical partitioning, which controls for covariation among factors, we show that the most important factor is the human-related community-level propagule pressure (the number of exotic species introduced), which is often not included in invasion studies due to the lack of information for this early stage in the invasion process. Another, though less important, factor was the human footprint (an index that includes human population size, land use and infrastructure). Biotic and abiotic factors of the environment were of minor importance in shaping the number of established birds when tested at a European extent using 50\u00d750 km2 grid squares. We provide, to our knowledge, the first map of the distribution of exotic bird richness in Europe. The richest hotspot of established exotic birds is located in southeastern England, followed by areas in Belgium and The Netherlands. Community-level propagule pressure remains the major factor shaping the distribution of exotic birds also when tested for the UK separately. Thus, studies examining the patterns of establishment should aim at collecting the crucial and hard-to-find information on community-level propagule pressure or develop reliable surrogates for estimating this factor. Allowing future introductions of exotic birds into Europe should be reconsidered carefully, as the number of introduced species is basically the main factor that determines the number established.", "which Continent ?", "Europe", 357.0, 363.0], ["Blue spruce (Picea pungens Engelm.) is native to the central and southern Rocky Mountains of the USA (DAUBENMIRE, 1972), from where it has been introduced to other parts of North America, Europe, etc. In Central Europe, blue spruce was mostly planted in ornamental settings in urban areas, Christmas tree plantations and forests too. In the Slovak Republic, blue spruce has patchy distribution. Its scattered stands cover the area of 2,618 ha and 0.14% of the forest area (data from the National Forest Centre, Zvolen). 
Compared to the Slovak Republic, the area afforested with blue spruce in the Czech Republic is much larger \u20138,741 ha and 0.4% of the forest area (KRIVANEK et al., 2006, UHUL, 2006). Plantations of blue spruce in the Czech Republic were largely established in the western and north-western parts of the country (BERAN and SINDELAR, 1996; BALCAR et al., 2008b).", "which Continent ?", "Europe", 188.0, 194.0], ["To quantify the relative importance of propagule pressure, climate\u2010matching and host availability for the invasion of agricultural pest arthropods in Europe and to forecast newly emerging pest species and European areas with the highest risk of arthropod invasion under current climate and a future climate scenario (A1F1).", "which Continent ?", "Europe", 150.0, 156.0], ["Species introduced to novel regions often leave behind many parasite species. Signatures of parasite release could thus be used to resolve cryptogenic (uncertain) origins such as that of Littorina littorea, a European marine snail whose history in North America has been debated for over 100 years. Through extensive field and literature surveys, we examined species richness of parasitic trematodes infecting this snail and two co-occurring congeners, L. saxatilis and L. obtusata, both considered native throughout the North Atlantic. Of the three snails, only L. littorea possessed significantly fewer trematode species in North America, and all North American trematodes infecting the three Littorina spp. were a nested subset of Europe. Surprisingly, several of L. littorea's missing trematodes in North America infected the other Littorina congeners. Most likely, long separation of these trematodes from their former host resulted in divergence of the parasites' recognition of L. littorea. Overall, these patterns of parasitism suggest a recent invasion from Europe to North America for L. littorea and an older, natural expansion from Europe to North America for L. saxatilis and L. obtusata.", "which Continent ?", "North America", 248.0, 261.0], ["Norway maple (Acer platanoides L), which is among the most invasive tree species in forests of eastern North America, is associated with reduced regeneration of the related native species, sugar maple (Acer saccharum Marsh) and other native flora. To identify traits conferring an advantage to Norway maple, we grew both species through an entire growing season under simulated light regimes mimicking a closed forest understorey vs. a canopy disturbance (gap). Dynamic shade-houses providing a succession of high-intensity direct-light events between longer periods of low, diffuse light were used to simulate the light regimes. We assessed seedling height growth three times in the season, as well as stem diameter, maximum photosynthetic capacity, biomass allocation above- and below-ground, seasonal phenology and phenotypic plasticity. Given the north European provenance of Norway maple, we also investigated the possibility that its growth in North America might be increased by delayed fall senescence. We found that Norway maple had significantly greater photosynthetic capacity in both light regimes and grew larger in stem diameter than sugar maple. The differences in below- and above-ground biomass, stem diameter, height and maximum photosynthesis were especially important in the simulated gap where Norway maple continued extension growth during the late fall. 
In the gap regime sugar maple had a significantly higher root : shoot ratio that could confer an advantage in the deepest shade of closed understorey and under water stress or browsing pressure. Norway maple is especially invasive following canopy disturbance where the opposite (low root : shoot ratio) could confer a competitive advantage. Considering the effects of global change in extending the potential growing season, we anticipate that the invasiveness of Norway maple will increase in the future.", "which Continent ?", "North America", 103.0, 116.0], ["Abstract Russian olive (Elaeagnus angustifolia Linnaeus; Elaeagnaceae) is an exotic shrub/tree that has become invasive in many riparian ecosystems throughout semi-arid, western North America, including southern British Columbia, Canada. Despite its prevalence and the potentially dramatic impacts it can have on riparian and aquatic ecosystems, little is known about the insect communities associated with Russian olive within its invaded range. At six sites throughout the Okanagan valley of southern British Columbia, Canada, we compared the diversity of insects associated with Russian olive plants to that of insects associated with two commonly co-occurring native plant species: Woods\u2019 rose (Rosa woodsii Lindley; Rosaceae) and Saskatoon (Amelanchier alnifolia (Nuttall) Nuttall ex Roemer; Rosaceae). Total abundance did not differ significantly among plant types. Family richness and Shannon diversity differed significantly between Woods\u2019 rose and Saskatoon, but not between either of these plant types and Russian olive. An abundance of Thripidae (Thysanoptera) on Russian olive and Tingidae (Hemiptera) on Saskatoon contributed to significant compositional differences among plant types. The families Chloropidae (Diptera), Heleomyzidae (Diptera), and Gryllidae (Orthoptera) were uniquely associated with Russian olive, albeit in low abundances. Our study provides valuable and novel information about the diversity of insects associated with an emerging plant invader of western Canada.", "which Continent ?", "North America", 237.0, 250.0], ["Introduced host populations may benefit from an \"enemy release\" through impoverishment of parasite communities made of both few imported species and few acquired local ones. Moreover, closely related competing native hosts can be affected by acquiring introduced taxa (spillover) and by increased transmission risk of native parasites (spillback). We determined the macroparasite fauna of invasive grey squirrels (Sciurus carolinensis) in Italy to detect any diversity loss, introduction of novel parasites or acquisition of local ones, and analysed variation in parasite burdens to identify factors that may increase transmission risk for native red squirrels (S. vulgaris). Based on 277 grey squirrels sampled from 7 populations characterised by different time scales in introduction events, we identified 7 gastro-intestinal helminths and 4 parasite arthropods. Parasite richness is lower than in the grey squirrel's native range and independent of introduction time lags. The most common parasites are Nearctic nematodes Strongyloides robustus (prevalence: 56.6%) and Trichostrongylus calcaratus (6.5%), red squirrel flea Ceratophyllus sciurorum (26.0%) and Holarctic sucking louse Neohaematopinus sciuri (17.7%). All other parasites are European or cosmopolitan species with prevalence below 5%. S. robustus abundance is positively affected by host density and body mass, C.
sciurorum abundance increases with host density and varies with seasons. Overall, we show that grey squirrels in Italy may benefit from an enemy release, and both spillback and spillover processes towards native red squirrels may occur.", "which Continent ?", "Europe", NaN, NaN], ["ABSTRACT: Despite mounting evidence of invasive species\u2019 impacts on the environment and society, our ability to predict invasion establishment, spread, and impact is inadequate. Efforts to explain and predict invasion outcomes have been limited primarily to terrestrial and freshwater ecosystems. Invasions are also common in coastal marine ecosystems, yet to date predictive marine invasion models are absent. Here we present a model based on biological attributes associated with invasion success (establishment) of marine molluscs that compares successful and failed invasions from a group of 93 species introduced to San Francisco Bay (SFB) in association with commercial oyster transfers from eastern North America (ca. 1869 to 1940). A multiple logistic regression model correctly classified 83% of successful and 80% of failed invaders according to their source region abundance at the time of oyster transfers, tolerance of low salinity, and developmental mode. We tested the generality of the SFB invasion model by applying it to 3 coastal locations (2 in North America and 1 in Europe) that received oyster transfers from the same source and during the same time as SFB. The model correctly predicted 100, 75, and 86% of successful invaders in these locations, indicating that abundance, environmental tolerance (ability to withstand low salinity), and developmental mode not only explain patterns of invasion success in SFB, but more importantly, predict invasion success in geographically disparate marine ecosystems. Finally, we demonstrate that the proportion of marine molluscs that succeeded in the latter stages of invasion (i.e. that establish self-sustaining populations, spread and become pests) is much greater than has been previously predicted or shown for other animals and plants. KEY WORDS: Invasion \u00b7 Bivalve \u00b7 Gastropod \u00b7 Mollusc \u00b7 Marine \u00b7 Oyster \u00b7 Vector \u00b7 Risk assessment", "which Continent ?", "North America", 704.0, 717.0], ["Herbivorous arthropod fauna of the horse nettle Solanum carolinense L., an alien solanaceous herb of North American origin, was characterized by surveying arthropod communities in the fields and comparing them with the original community compiled from published data to infer the impact of herbivores on the weed in the introduced region. Field surveys were carried out in the central part of mainland Japan for five years including an intensive regular survey in 1992. Thirty-nine arthropod species were found feeding on the weed. The leaf, stem, flower and fruit of the weed were infested by the herbivores. The comparison of characteristics of the arthropod community with those of the community in the USA indicated that more sapsuckers and fewer chewers were on the weed in Japan than in the USA. The community in Japan was composed of high proportions of polyphages and exophages compared to that in the USA. Eighty-seven percent of the species are known to be pests of agricultural crops. Low species diversity of the community was also suggested. The depauperated herbivore community, in terms of feeding habit and niche on S. carolinense, suggested that the weed partly escaped from herbivory in its reproductive parts.
The regular population census, however, indicated that a dominant coccinellid beetle, Epilachna vigintioctopunctata, caused noticeable damage on the leaves of the weed.", "which Continent ?", "North America", NaN, NaN], ["Schinus molle (Peruvian pepper tree) was introduced to South Africa more than 150 years ago and was widely planted, mainly along roads. Only in the last two decades has the species become naturalized and invasive in some parts of its new range, notably in semi-arid savannas. Research is being undertaken to predict its potential for further invasion in South Africa. We studied production, dispersal and predation of seeds, seed banks, and seedling establishment in relation to land uses at three sites, namely ungrazed savanna once used as a military training ground; a savanna grazed by native game; and an ungrazed mine dump. We found that seed production and seed rain density of S. molle varied greatly between study sites, but was high at all sites (384 864–1 233 690 seeds per tree per year; 3877–9477 seeds per square metre per year). We found seeds dispersed to distances of up to 320 m from female trees, and most seeds were deposited within 50 m of putative source trees. Annual seed rain density below canopies of Acacia tortilis, the dominant native tree at all sites, was significantly lower in grazed savanna. The quality of seed rain was much reduced by endophagous predators. Seed survival in the soil was low, with no survival recorded beyond 1 year. Propagule pressure to drive the rate of recruitment: densities of seedlings and sapling densities were higher in ungrazed savanna and the ungrazed mine dump than in grazed savanna, as reflected by large numbers of young individuals, but adult : seedling ratios did not differ between savanna sites. Frequent and abundant seed production, together with effective dispersal of viable S. molle seed by birds to suitable establishment sites below trees of other species to overcome predation effects, facilitates invasion. Disturbance enhances invasion, probably by reducing competition from native plants.", "which Continent ?", "Africa", 61.0, 67.0], ["The introduction of non-indigenous species (NIS) across the major European seas is a dynamic non-stop process. Up to September 2004, 851 NIS (the majority being zoobenthic organisms) have been reported in European marine and brackish waters, the majority during the 1960s and 1970s. The Mediterranean is by far the major recipient of exotic species with an average of one introduction every 4 wk over the past 5 yr. Of the 25 species recorded in 2004, 23 were reported in the Mediterranean and only two in the Baltic. The most updated patterns and trends in the rate, mode of introduction and establishment success of introductions were examined, revealing a process similar to introductions in other parts of the world, but with the uniqueness of migrants through the Suez Canal into the Mediterranean (Lessepsian or Erythrean migration). Shipping appears to be the major vector of introduction (excluding the Lessepsian migration). Aquaculture is also an important vector with target species outnumbered by those introduced unintentionally. More than half of immigrants have been established in at least one regional sea. However, for a significant part of the introductions both the establishment success and mode of introduction remain unknown. 
Finally, comparing trends across taxa and seas is not as accurate as could have been wished because there are differences in the spatial and taxonomic effort in the study of NIS. These differences lead to the conclusion that the number of NIS remains an underestimate, calling for continuous updating and systematic research.", "which Continent ?", "Europe", NaN, NaN], ["Biogeographic experiments that test how multiple interacting factors influence exotic plant abundance in their home and recipient communities are remarkably rare. We examined the effects of soil fungi, disturbance and propagule pressure on seed germination, seedling recruitment and adult plant establishment of the invasive Centaurea stoebe in its native European and non‐native North American ranges. Centaurea stoebe can establish virtual monocultures in parts of its non‐native range, but occurs at far lower abundances where it is native. We conducted parallel experiments at four European and four Montana (USA) grassland sites with all factorial combinations of ±suppression of soil fungi, ±disturbance and low versus high knapweed propagule pressure [100 or 300 knapweed seeds per 0.3 m × 0.3 m plot (1000 or 3000 per m2)]. We also measured germination in buried bags containing locally collected knapweed seeds that were either treated or not with fungicide. Disturbance and propagule pressure increased knapweed recruitment and establishment, but did so similarly in both ranges. Treating plots with fungicides had no effect on recruitment or establishment in either range. However, we found: (i) greater seedling recruitment and plant establishment in undisturbed plots in Montana compared to undisturbed plots in Europe and (ii) substantially greater germination of seeds in bags buried in Montana compared to Europe. Also, across all treatments, total plant establishment was greater in Montana than in Europe. Synthesis. Our results highlight the importance of simultaneously examining processes that could influence invasion in both ranges. They indicate that under ‘background’ undisturbed conditions, knapweed recruits and establishes at greater abundance in Montana than in Europe. However, our results do not support the importance of soil fungi or local disturbances as mechanisms for knapweed's differential success in North America versus Europe.", "which Continent ?", "North America", 1941.0, 1954.0], ["1. With continued globalization, species are being transported and introduced into novel habitats at an accelerating rate. Interactions between invasive species may provide important mechanisms that moderate their impacts on native species. 2. The European green crab Carcinus maenas is an aggressive predator that was introduced to the east coast of North America in the mid-1800s and is capable of rapid consumption of bivalve prey. A newer invasive predator, the Asian shore crab Hemigrapsus sanguineus, was first discovered on the Atlantic coast in the 1980s, and now inhabits many of the same regions as C. maenas within the Gulf of Maine. Using a series of field and laboratory investigations, we examined the consequences of interactions between these predators. 3. Density patterns of these two species at different spatial scales are consistent with negative interactions. As a result of these interactions, C. maenas alters its diet to consume fewer mussels, its preferred prey, in the presence of H. sanguineus. Decreased mussel consumption in turn leads to lower growth rates for C. 
maenas, with potential detrimental effects on C. maenas populations. 4. Rather than an invasional meltdown, this study demonstrates that, within the Gulf of Maine, this new invasive predator can moderate the impacts of the older invasive predator.", "which Continent ?", "North America", 351.0, 364.0], ["In a microcosm experiment, I tested how species composition, species richness, and community age affect the susceptibility of grassland communities to invasion by a noxious weed (Centaurea solstitialis L.). I also examined how these factors influenced Centaurea's impact on the rest of the plant community. When grown in monoculture, eight species found in California's grasslands differed widely in their ability to suppress Centaurea growth. The most effective competitor in monoculture was Hemizonia congesta ssp. luzulifolia, which, like Centaurea, is a summer-active annual forb. On average, Centaurea growth decreased as the species richness of communities increased. However, no polyculture suppressed Centaurea growth more than the monoculture of Hemizonia. Centaurea generally made up a smaller proportion of community biomass in newly created ("new") microcosms than in older ("established") microcosms, largely because Centaurea's competitors were more productive in the new treatment. Measures of complementarity suggest that Centaurea partitioned resources with annual grasses in the new microcosms. This resource partitioning may help to explain Centaurea's great success in western North American grasslands. Centaurea strongly suppressed growth of some species but hardly affected others. Annual grasses were the least affected species in the new monocultures, and perennial grasses were among the least affected species in the established monocultures. 
In the new microcosms, Centaurea's suppression of competing species marginally abated with increasing species richness. This trend was a consequence of the declining success of Centaurea in species-rich communities, rather than a change in the vulnerability of these communities to suppression by a given amount of the invader. The impact of the invader was not related to species richness in the established microcosms. The results of this study suggest that, at the neighborhood level, diversity can limit invasibility and may reduce the impact of an invader.", "which Continent ?", "North America", NaN, NaN], ["At large spatial scales, exotic and native plant diversity exhibit a strong positive relationship. This may occur because exotic and native species respond similarly to processes that influence diversity over large geographical areas. To test this hypothesis, we compared exotic and native species-area relationships within six North American ecoregions. We predicted and found that within ecoregions the ratio of exotic to native species richness remains constant with increasing area. Furthermore, we predicted that areas with more native species than predicted by the species-area relationship would have proportionally more exotics as well. We did find that these exotic and native deviations were highly correlated, but areas that were good (or bad) for native plants were even better (or worse) for exotics. Similar processes appear to influence exotic and native plant diversity but the degree of this influence may differ with site quality.", "which Continent ?", "North America", NaN, NaN], ["The Natural Enemies Hypothesis (i.e., introduced species experience release from their natural enemies) is a common explanation for why invasive species are so successful. We tested this hypothesis for Ammophila arenaria (Poaceae: European beachgrass), an aggressive plant invading the coastal dunes of California, USA, by comparing the demographic effects of belowground pathogens on A. arenaria in its introduced range to those reported in its native range. European research on A. arenaria in its native range has established that soil-borne pathogens, primarily nematodes and fungi, reduce A. arenaria's growth. In a greenhouse experiment designed to parallel European studies, seeds and 2-wk-old seedlings were planted in sterilized and nonsterilized soil collected from the A. arenaria root zone in its introduced range of California. We assessed the effects of pathogens via soil sterilization on three early performance traits: seed germination, seedling survival, and plant growth. We found that seed germinatio...", "which Continent ?", "Europe", NaN, NaN], ["Abstract: The tallgrass prairie is one of the most severely affected ecosystems in North America. As a result of extensive conversion to agriculture during the last century, as little as 1% of the original tallgrass prairie remains. The remaining fragments of tallgrass prairie communities have conservation significance, but questions remain about their viability and importance to conservation. We investigated the effects of fragment size, native plant species diversity, and location on invasion by exotic plant species at 25 tallgrass prairie sites in central North America at various geographic scales. We used exotic species richness and relative cover as measures of invasion. Exotic species richness and cover were not related to area for all sites considered together. 
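The two abstracts just above both reason from species–area relationships. Assuming the standard power-law form (a textbook model; neither abstract writes it out), the ecoregion study's prediction of a constant exotic-to-native richness ratio follows directly:

```latex
S = cA^{z}, \qquad \log S = \log c + z\,\log A,
\qquad
\frac{S_{\mathrm{exotic}}}{S_{\mathrm{native}}}
   = \frac{c_{e}A^{z}}{c_{n}A^{z}}
   = \frac{c_{e}}{c_{n}}
```

If exotic and native richness share the same exponent z, their ratio is independent of area A, which is exactly the pattern reported within the six ecoregions.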
There were no significant relationships between native species richness and exotic species richness at the cluster and regional scale or for all sites considered together. At the local scale, exotic species richness was positively related to native species richness at four sites and negatively related at one. The 10 most frequently occurring and abundant exotic plant species in the prairie fragments were cool‐season, or C3, species, in contrast to the native plant community, which was dominated by warm‐season, or C4, species. This suggests that timing is important to the success of exotic species in the tallgrass prairie. Our study indicates that some small fragments of tallgrass prairie are relatively intact and should not be overlooked as long‐term refuges for prairie species, sources of genetic variability, and material for restoration.", "which Continent ?", "North America", 83.0, 96.0], ["Since 1995, Dikerogammarus villosus Sowinski, a Ponto-Caspian amphipod species, has been invading most of Western Europe's hydrosystems. D. villosus geographic extension and quickly increasing population density has enabled it to become a major component of macrobenthic assemblages in recipient ecosystems. The ecological characteristics of D. villosus on a mesohabitat scale were investigated at a station in the Moselle River. This amphipod is able to colonize a wide range of substratum types, thus posing a threat to all freshwater ecosystems. Rivers whose dominant substratum is cobbles and which have tree roots along the banks could harbour particularly high densities of D. villosus. A relationship exists between substratum particle size and the length of the individuals, and spatial segregation according to length was shown. This allows the species to limit intra-specific competition between generations while facilitating reproduction. A strong association exists between D. villosus and other Ponto-Caspian species, such as Dreissena polymorpha and Corophium curvispinum, in keeping with Invasional Meltdown Theory. Four taxa (Coenagrionidae, Calopteryx splendens, Corophium curvispinum and Gammarus pulex) exhibited spatial niches that overlap significantly that of D. villosus. According to the predatory behaviour of the newcomer, their populations may be severely impacted.", "which Continent ?", "Europe", 116.0, 122.0], ["Invasive plant species are a considerable threat to ecosystems globally and on islands in particular where species diversity can be relatively low. In this study, we examined the phylogenetic basis of invasion success on Robben Island in South Africa. The flora of the island was sampled extensively and the phylogeny of the local community was reconstructed using the two core DNA barcode regions, rbcLa and matK. By analysing the phylogenetic patterns of native and invasive floras at two different scales, we found that invasive alien species are more distantly related to native species, a confirmation of Darwin's naturalization hypothesis. However, this pattern also holds even for randomly generated communities, therefore discounting the explanatory power of Darwin's naturalization hypothesis as the unique driver of invasion success on the island. These findings suggest that the drivers of invasion success on the island may be linked to species traits rather than their evolutionary history alone, or to the combination thereof. 
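The Robben Island study above tests Darwin's naturalization hypothesis by comparing observed alien-to-native phylogenetic distances against randomly generated communities. A minimal sketch of that kind of randomization (null-model) test, using a hypothetical pairwise distance matrix rather than the study's rbcLa/matK phylogeny:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: D[i, j] = phylogenetic distance between species i and j,
# plus a boolean mask flagging which of the n species are alien
n = 50
D = rng.uniform(10, 200, size=(n, n))
D = (D + D.T) / 2
np.fill_diagonal(D, 0)
is_alien = np.zeros(n, dtype=bool)
is_alien[:10] = True

def mean_alien_native_distance(alien):
    return D[np.ix_(alien, ~alien)].mean()

observed = mean_alien_native_distance(is_alien)

# Null model: reshuffle the alien labels and recompute the statistic many times
null = np.array([mean_alien_native_distance(rng.permutation(is_alien))
                 for _ in range(999)])
p = (np.sum(null >= observed) + 1) / (len(null) + 1)
print(f"observed = {observed:.1f}, p(null >= observed) = {p:.3f}")
```

Darwin's hypothesis predicts the observed distance to sit in the upper tail of the null distribution; the abstract's caveat is that such a pattern can appear even for randomly assembled communities, so it is not decisive on its own.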
This result also has implications for the invasion management programmes currently being implemented to rehabilitate the native diversity on Robben Island. © 2013 The Linnean Society of London, Botanical Journal of the Linnean Society, 2013, 172, 142–152.", "which Continent ?", "Africa", 244.0, 250.0], ["Few field experiments have examined the effects of both resource availability and propagule pressure on plant community invasibility. Two non-native forest species, a herb and a shrub (Hesperis matronalis and Rhamnus cathartica, respectively) were sown into 60 1-m² sub-plots distributed across three plots. These contained reconstructed native plant communities in a replaced surface soil layer in a North American forest interior. Resource availability and propagule pressure were manipulated as follows: understorey light level (shaded/unshaded), nutrient availability (control/fertilized), and seed pressures of the two non-native species (control/low/high). Hesperis and Rhamnus cover and the above-ground biomass of Hesperis were significantly higher in shaded sub-plots and at greater propagule pressures. Similarly, the above-ground biomass of Rhamnus was significantly increased with propagule pressure, although this was a function of density. In contrast, of species that seeded into plots from the surrounding forest during the growing season, the non-native species had significantly greater cover in unshaded sub-plots. Plants in these unshaded sub-plots were significantly taller than plants in shaded sub-plots, suggesting a greater fitness. Total and non-native species richness varied significantly among plots indicating the importance of fine-scale dispersal patterns. None of the experimental treatments influenced native species. Since the forest seed bank in our study was colonized primarily by non-native ruderal species that dominated understorey vegetation, the management of invasions by non-native species in forest understoreys will have to address factors that influence light levels and dispersal pathways.", "which Continent ?", "North America", NaN, NaN], ["Hussner A (2012). Alien aquatic plant species in European countries. Weed Research 52, 297–306. Summary Alien aquatic plant species cause serious ecological and economic impacts to European freshwater ecosystems. This study presents a comprehensive overview of all alien aquatic plants in Europe, their places of origin and their distribution within the 46 European countries. In total, 96 aquatic species from 30 families have been reported as aliens from at least one European country. Most alien aquatic plants are native to Northern America, followed by Asia and Southern America. Elodea canadensis is the most widespread alien aquatic plant in Europe, reported from 41 European countries. Azolla filiculoides ranks second (25), followed by Vallisneria spiralis (22) and Elodea nuttallii (20). The highest number of alien aquatic plant species has been found in Italy and France (34 species), followed by Germany (27), Belgium and Hungary (both 26) and the Netherlands (24). Even though the number of alien aquatic plants seems relatively small, the European and Mediterranean Plant Protection Organization (EPPO, http://www.eppo.org) has listed 18 of these species as invasive or potentially invasive within the EPPO region. 
As ornamental trade has been regarded as the major pathway for the introduction of alien aquatic plants, trading bans seem to be the most effective option to reduce the risk of further unintended entry of alien aquatic plants into Europe.", "which Continent ?", "Europe", 288.0, 294.0], ["Invasions by alien species are one of the major threats to the native environment. There are multifold attempts to counter alien species, but limited resources for mitigation or eradication programmes make prioritisation indispensable. We used the generic impact scoring system to assess the impact of alien fish species in Europe. It prioritises species, but also offers the possibility to compare the impact of alien invasive species between different taxonomic groups. For alien fish in Europe, we compiled a list of 40 established species. By literature research, we assessed the environmental impact (through herbivory, predation, competition, disease transmission, hybridisation and ecosystem alteration) and economic impact (on agriculture, animal production, forestry, human infrastructure, human health and human social life) of each species. The goldfish/gibel complex Carassius auratus/C. gibelio scored the highest impact points, followed by the grass carp Ctenopharyngodon idella and the topmouth gudgeon Pseudorasbora parva. According to our analyses, alien fish species have the strongest impact on the environment through predation, followed by competition with native species. Besides negatively affecting animal production (mainly in aquaculture), alien fish have no pronounced economic impact. At the species level, C. auratus/C. gibelio show similar impact scores to the worst alien mammals in Europe. This study indicates that the generic impact scoring system is useful to investigate the impact of alien fish, also allowing cross-taxa comparisons. Our results are therefore of major relevance for stakeholders and decision-makers involved in management and eradication of alien fish species.", "which Continent ?", "Europe", 325.0, 331.0], ["1 The emerald ash borer Agrilus planipennis (Coleoptera: Buprestidae) (EAB), an invasive wood‐boring beetle, has recently caused significant losses of native ash (Fraxinus spp.) trees in North America. Movement of wood products has facilitated EAB spread, and heat sanitation of wooden materials according to International Standards for Phytosanitary Measures No. 15 (ISPM 15) is used to prevent this. 2 In the present study, we assessed the thermal conditions experienced during a typical heat‐treatment at a facility using protocols for pallet wood treatment under policy PI‐07, as implemented in Canada. The basal high temperature tolerance of EAB larvae and pupae was determined, and the observed heating rates were used to investigate whether the heat shock response and expression of heat shock proteins occurred in fourth‐instar larvae. 3 The temperature regime during heat treatment greatly exceeded the ISPM 15 requirements of 56 °C for 30 min. Emerald ash borer larvae were highly tolerant of elevated temperatures, with some instars surviving exposure to 53 °C without any heat pre‐treatments. High temperature survival was increased by either slow warming or pre‐exposure to elevated temperatures and a recovery regime that was accompanied by up‐regulated hsp70 expression under some of these conditions. 
4 Because EAB is highly heat tolerant and exhibits a fully functional heat shock response, we conclude that greater survival than measured in vitro is possible under industry treatment conditions (with the larvae still embedded in the wood). We propose that the phenotypic plasticity of EAB may lead to high temperature tolerance very close to conditions experienced in an ISPM 15 standard treatment.", "which Continent ?", "North America", 187.0, 200.0], ["Abstract Objectives: To examine associations of household crop diversity with school-aged child dietary diversity in Vietnam and Ethiopia and mechanisms underlying these associations. Design: We created a child diet diversity score (DDS) using data on seven food groups consumed in the last 24 h. Generalised estimating equations were used to model associations of household-level crop diversity, measured as a count of crop species richness (CSR) and of plant crop nutritional functional richness (CNFR), with DDS. We examined effect modification by household wealth and subsistence orientation, and mediation by the farm’s market orientation. Setting: Two survey years of longitudinal data from the Young Lives cohort. Participants: Children (aged 5 years in 2006 and 8 years in 2009) from rural farming households in Ethiopia (n 1012) and Vietnam (n 1083). Results: There was a small, positive association between household CNFR and DDS in Ethiopia (CNFR–DDS, β = 0·13; (95 % CI 0·07, 0·19)), but not in Vietnam. Associations of crop diversity and child diet diversity were strongest among poor households in Ethiopia and among subsistence-oriented households in Vietnam. Agricultural earnings positively mediated the crop diversity–diet diversity association in Ethiopia. Discussion: Children from households that are poorer and those that rely more on their own agricultural production for food may benefit most from increased crop diversity.", "which Location ?", "Ethiopia ", 129.0, 138.0], ["ABSTRACT Accurate and up-to-date built-up area mapping is of great importance to the science community, decision-makers, and society. Therefore, satellite-based, built-up area (BUA) extraction at medium resolution with supervised classification has been widely carried out. However, the spectral confusion between BUA and bare land (BL) is the primary hindering factor for accurate BUA mapping over large regions. Here we propose a new methodology for the efficient BUA extraction using multi-sensor data under Google Earth Engine cloud computing platform. The proposed method mainly employs intra-annual satellite imagery for water and vegetation masks, and a random-forest machine learning classifier combined with auxiliary data to discriminate between BUA and BL. First, a vegetation mask and water mask are generated using NDVI (normalized difference vegetation index) max in vegetation growth periods and the annual water-occurrence frequency. Second, to accurately extract BUA from unmasked pixels, consisting of BUA and BL, random-forest-based classification is conducted using multi-sensor features, including temperature, night-time light, backscattering, topography, optical spectra, and NDVI time-series metrics. This approach is applied in Zhejiang Province, China, and an overall accuracy of 92.5% is obtained, which is 3.4% higher than classification with spectral data only. 
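A minimal sketch of the masking-plus-random-forest workflow described in the built-up-area abstract above, written against the Google Earth Engine Python API. The choice of Sentinel-2, the JRC surface-water layer, the thresholds, and the region are illustrative assumptions; the paper's full feature stack (temperature, night-time light, backscattering, topography) and its training samples are omitted:

```python
import ee

ee.Initialize()

roi = ee.Geometry.Rectangle([119.0, 28.0, 122.0, 31.0])  # rough stand-in for Zhejiang

# Vegetation mask from intra-annual NDVI maximum (sensor assumed: Sentinel-2 SR)
s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
        .filterBounds(roi)
        .filterDate('2019-01-01', '2019-12-31'))
ndvi_max = s2.map(lambda img: img.normalizedDifference(['B8', 'B4'])).max()
veg_mask = ndvi_max.gt(0.5)           # threshold assumed

# Water mask from annual water-occurrence frequency (JRC Global Surface Water assumed)
water_occurrence = ee.Image('JRC/GSW1_4/GlobalSurfaceWater').select('occurrence')
water_mask = water_occurrence.gt(50)  # threshold assumed

# Remaining pixels are BUA or bare land; classify them with a random forest.
# 'samples' would be a labelled ee.FeatureCollection, which is not shown here.
stack = ndvi_max.rename('ndvi')       # the paper adds many more feature bands
# training = stack.sampleRegions(collection=samples, properties=['class'], scale=30)
# rf = ee.Classifier.smileRandomForest(100).train(training, 'class', stack.bandNames())
# bua = stack.updateMask(veg_mask.Not()).updateMask(water_mask.Not()).classify(rf)
```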
For large-scale BUA mapping, it is feasible to enhance the performance of BUA mapping with multi-temporal and multi-sensor data, which takes full advantage of datasets available in Google Earth Engine.", "which Location ?", "China", 1273.0, 1278.0], ["ABSTRACT In this study, we investigated the relationship between agricultural biodiversity and dietary diversity of children and whether factors such as economic access may affect this relationship. This paper is based on data collected in a baseline cross-sectional survey in November 2013. The study population comprising 1200 mother-child pairs was selected using a two-stage cluster sampling. Dietary diversity was defined as the number of food groups consumed 24 h prior to the assessment. The number of crop and livestock species produced on a farm was used as the measure of production diversity. Hierarchical regression analysis was used to identify predictors and test for interactions. Whereas the average production diversity score was 4.7 ± 1.6, only 42.4% of households consumed at least four food groups out of seven over the preceding 24-h recall period. Agricultural biodiversity (i.e. variety of animals kept and food groups produced) associated positively with dietary diversity of children aged 6–36 months but the relationship was moderated by household socioeconomic status. The interaction term was also statistically significant [β = −0.08 (95% CI: −0.05, −0.01, p = 0.001)]. Spearman correlation (rho) analysis showed that agricultural biodiversity was positively associated with individual dietary diversity of the child more among children of low socioeconomic status in rural households compared to children of high socioeconomic status (r = 0.93, p < 0.001 versus r = 0.08, p = 0.007). Socioeconomic status of the household also partially mediated the link between agricultural biodiversity and dietary diversity of a child’s diet. The effect of increased agricultural biodiversity on dietary diversity was significantly higher in households of lower socioeconomic status. Therefore, improvement of agricultural biodiversity could be one of the best approaches for ensuring diverse diets especially for households of lower socioeconomic status in rural areas of Northern Ghana.", "which Location ?", "Ghana", 1998.0, 2003.0], ["Abstract Objective To evaluate viral loads at different stages of disease progression in patients infected with the 2019 severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) during the first four months of the epidemic in Zhejiang province, China. Design Retrospective cohort study. Setting A designated hospital for patients with covid-19 in Zhejiang province, China. Participants 96 consecutively admitted patients with laboratory confirmed SARS-CoV-2 infection: 22 with mild disease and 74 with severe disease. Data were collected from 19 January 2020 to 20 March 2020. Main outcome measures Ribonucleic acid (RNA) viral load measured in respiratory, stool, serum, and urine samples. Cycle threshold values, a measure of nucleic acid concentration, were plotted onto the standard curve constructed on the basis of the standard product. Epidemiological, clinical, and laboratory characteristics and treatment and outcomes data were obtained through data collection forms from electronic medical records, and the relation between clinical data and disease severity was analysed. 
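The Methods just above convert cycle threshold (Ct) values into nucleic acid concentrations through a standard curve. The generic qPCR relation behind such a curve (the paper's fitted parameters are not reported here) is:

```latex
C_t = m \log_{10} C + b
\quad\Longrightarrow\quad
C = 10^{(C_t - b)/m},
\qquad
m = -\frac{1}{\log_{10} 2} \approx -3.32 \ \text{at 100\% amplification efficiency}
```

Each increase of roughly 3.3 cycles in Ct therefore corresponds to a ten-fold lower RNA concentration, which is why lower Ct values indicate higher viral loads.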
Results 3497 respiratory, stool, serum, and urine samples were collected from patients after admission and evaluated for SARS-CoV-2 RNA viral load. Infection was confirmed in all patients by testing sputum and saliva samples. RNA was detected in the stool of 55 (59%) patients and in the serum of 39 (41%) patients. The urine sample from one patient was positive for SARS-CoV-2. The median duration of virus in stool (22 days, interquartile range 17-31 days) was significantly longer than in respiratory (18 days, 13-29 days; P=0.02) and serum samples (16 days, 11-21 days; P<0.001). The median duration of virus in the respiratory samples of patients with severe disease (21 days, 14-30 days) was significantly longer than in patients with mild disease (14 days, 10-21 days; P=0.04). In the mild group, the viral loads peaked in respiratory samples in the second week from disease onset, whereas viral load continued to be high during the third week in the severe group. Virus duration was longer in patients older than 60 years and in male patients. Conclusion The duration of SARS-CoV-2 is significantly longer in stool samples than in respiratory and serum samples, highlighting the need to strengthen the management of stool samples in the prevention and control of the epidemic, and the virus persists longer with higher load and peaks later in the respiratory tissue of patients with severe disease.", "which Location ?", "China", 249.0, 254.0], ["Abstract This paper attempts to provide methods to estimate the real scenario of the novel coronavirus pandemic crisis on Brazil and the states of Sao Paulo, Pernambuco, Espirito Santo, Amazonas and Distrito Federal. By the use of a SEIRD mathematical model with age division, we predict the infection and death curve, stating the peak date for Brazil and these states. We also carry out a prediction for the ICU demand on these states for a visualization of the size of a possible collapse on the local health system. By the end, we establish some future scenarios including the stopping of social isolation and the introduction of vaccines and efficient medicine against the virus.", "which has location ?", "Brazil", 122.0, 128.0], ["ABSTRACT This paper provides a timely evaluation of whether the main COVID-19 lockdown policies \u2013 remote work, short-time work and closure of schools and childcare \u2013 have an immediate effect on the German population in terms of changes in satisfaction with work and family life. Relying on individual level panel data collected before and during the lockdown, we examine (1) how family satisfaction and work satisfaction of individuals have changed over the lockdown period, and (2) how lockdown-driven changes in the labour market situation (i.e. working remotely and being sent on short-time work) have affected satisfactions. We apply first-difference regressions for mothers, fathers, and persons without children. Our results show a general decrease in family satisfaction. We also find an overall decline in work satisfaction which is most pronounced for mothers and those without children who have to switch to short-time work. In contrast, fathers' well-being is less affected negatively and their family satisfaction even increased after changing to short-time work. 
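A minimal sketch of the first-difference regression design used in the German lockdown study above, with hypothetical variable names for a two-wave panel; differencing away each person's level removes time-constant individual effects:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format panel: one row per person per wave ('pre' or 'lockdown')
df = pd.read_csv('panel.csv')  # columns assumed: pid, wave, work_sat, short_time, remote
wide = df.pivot(index='pid', columns='wave')

fd = pd.DataFrame({
    'd_work_sat':   wide[('work_sat', 'lockdown')]   - wide[('work_sat', 'pre')],
    'd_short_time': wide[('short_time', 'lockdown')] - wide[('short_time', 'pre')],
    'd_remote':     wide[('remote', 'lockdown')]     - wide[('remote', 'pre')],
})

# Change in satisfaction regressed on changes in labour-market status
print(smf.ols('d_work_sat ~ d_short_time + d_remote', data=fd).fit().summary())
```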
We conclude that while the lockdown circumstances generally have a negative effect on the satisfaction with work and family of individuals in Germany, effects differ between childless persons, mothers, and fathers with the latter being least negatively affected.", "which has location ?", "Germany", 1218.0, 1225.0], ["Abstract We examine the effects of Covid-19 and related restrictions on individuals with dependent children in Germany. We specifically focus on the role of day care center and school closures, which may be regarded as a \u201cdisruptive exogenous shock\u201d to family life. We make use of a novel representative survey of parental well-being collected in May and June 2020 in Germany, when schools and day care centers were closed but while other measures had been relaxed and new infections were low. In our descriptive analysis, we compare well-being during this period with a pre-crisis period for different groups. In a difference-in-differences design, we compare the change for individuals with children to the change for individuals without children, accounting for unrelated trends as well as potential survey mode and context effects. We find that the crisis lowered the relative well-being of individuals with children, especially for individuals with young children, for women, and for persons with lower secondary schooling qualifications. Our results suggest that public policy measures taken to contain Covid-19 can have large effects on family well-being, with implications for child development and parental labor market outcomes.", "which has location ?", "Germany", 111.0, 118.0], ["Wesselsbron is a neglected, mosquito-borne zoonotic disease endemic to Africa. The virus is mainly transmitted by the mosquitoes of the Aedes genus and primarily affects domestic livestock species with teratogenic effects but can jump to humans. Although no major outbreak or fatal case in humans has been reported as yet worldwide, a total of 31 acute human cases of Wesselsbron infection have been previously described since its first isolation in 1955. However, most of these cases were reported from Sub-Saharan Africa where resources are limited and a lack of diagnostic means exists. We describe here two molecular diagnostic tools suitable for Wesselsbron virus detection. The newly established reverse transcription-quantitative polymerase chain reaction and reverse-transcription-recombinase polymerase amplification assays are highly specific and repeatable, and exhibit good agreement with the reference assay on the samples tested. The validation on clinical and veterinary samples shows that they can be accurately used for Wesselsbron virus detection in public health activities and the veterinary field. Considering the increasing extension of Aedes species worldwide, these new assays could be useful not only in laboratory studies for Wesselsbron virus, but also in routine surveillance activities for zoonotic arboviruses and could be applied in well-equipped central laboratories or in remote areas in Africa, regarding the reverse-transcription-recombinase polymerase amplification assay.", "which has location ?", "Africa", 71.0, 77.0], ["Abstract Hackathons, time-bounded events where participants write computer code and build apps, have become a popular means of socializing tech students and workers to produce \u201cinnovation\u201d despite little promise of material reward. 
Although they offer participants opportunities for learning new skills and face-to-face networking and set up interaction rituals that create an emotional “high,” potential advantage is even greater for the events’ corporate sponsors, who use them to outsource work, crowdsource innovation, and enhance their reputation. Ethnographic observations and informal interviews at seven hackathons held in New York during the course of a single school year show how the format of the event and sponsors’ discursive tropes, within a dominant cultural frame reflecting the appeal of Silicon Valley, reshape unpaid and precarious work as an extraordinary opportunity, a ritual of ecstatic labor, and a collective imaginary for fictional expectations of innovation that benefits all, a powerful strategy for manufacturing workers’ consent in the “new” economy.", "which has location ?", "New York", 631.0, 639.0], ["Abstract Introduction The first wave of COVID-19 pandemic period has drastically changed people’s lives all over the world. To cope with the disruption, digital solutions have become more popular. However, the ability to adopt digitalised alternatives is different across socio-economic and socio-demographic groups. Objective This study investigates how individuals have changed their activity-travel patterns and internet usage during the first wave of the COVID-19 pandemic period, and which of these changes may be kept. Methods An empirical data collection was deployed through online forms. 781 responses from different countries (Italy, Sweden, India and others) have been collected, and a series of multivariate analyses was carried out. Two linear regression models are presented, related to the change of travel activities and internet usage, before and during the pandemic period. Furthermore, a binary regression model is used to examine the likelihood of the respondents to adopt and keep their behaviours beyond the pandemic period. Results The results show that the possibility to change the behaviour matters. External restrictions and personal characteristics are the driving factors of the reduction in one's daily trips. However, the estimation results do not show a strong correlation between the countries' restriction policy and the respondents' likelihood to adopt the new and online-based behaviours for any of the activities after the restriction period. Conclusion The acceptance and long-term adoption of the online alternatives for activities are correlated with the respondents' personality and socio-demographic group, highlighting the importance of promoting alternatives as a part of longer-term behavioural and lifestyle changes.", "which Country ?", "India", 652.0, 657.0], ["Advocates of software risk management claim that by identifying and analyzing threats to success (i.e., risks) action can be taken to reduce the chance of failure of a project. The first step in the risk management process is to identify the risk itself, so that appropriate countermeasures can be taken. One problem in this task, however, is that no validated lists are available to help the project manager understand the nature and types of risks typically faced in a software project. This paper represents a first step toward alleviating this problem by developing an authoritative list of common risk factors. We deploy a rigorous data collection method called a "ranking-type" Delphi survey to produce a rank-order list of risk factors. 
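A minimal sketch of how a ranking-type Delphi panel, as in the risk-factor survey above, can be aggregated and its consensus quantified. Kendall's W is a common stopping statistic for Delphi rounds, though the abstract does not name one; the panel data are hypothetical:

```python
import numpy as np

# ranks[i, j] = rank assigned by panellist i to risk factor j (1 = most important)
ranks = np.array([[1, 2, 3, 4, 5, 6],
                  [2, 1, 3, 5, 4, 6],
                  [1, 3, 2, 4, 6, 5],
                  [2, 1, 4, 3, 5, 6],
                  [1, 2, 3, 5, 4, 6]])
m, n = ranks.shape

# Aggregate rank-order list: sort factors by mean rank
print('aggregate order:', np.argsort(ranks.mean(axis=0)))

# Kendall's W measures agreement across the panel (0 = none, 1 = perfect)
R = ranks.sum(axis=0)
S = ((R - R.mean()) ** 2).sum()
W = 12 * S / (m ** 2 * (n ** 3 - n))
print('Kendall W =', round(W, 3))
```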
This data collection method is designed to elicit and organize opinions of a panel of experts through iterative, controlled feedback. Three simultaneous surveys were conducted in three different settings: Hong Kong, Finland, and the United States. This was done to broaden our view of the types of risks, rather than relying on the view of a single culture, an aspect that has been ignored in past risk management research. In forming the three panels, we recruited experienced project managers in each country. The paper presents the obtained risk factor list, compares it with other published risk factor lists for completeness and variation, and analyzes common features and differences in risk factor rankings in the three countries. We conclude by discussing implications of our findings for both research and improving risk management practice.", "which Country ?", "Hong Kong", 949.0, 958.0], ["The restrictive measures implemented in response to the COVID-19 pandemic have triggered sudden massive changes to travel behaviors of people all around the world. This study examines the individual mobility patterns for all transport modes (walk, bicycle, motorcycle, car driven alone, car driven in company, bus, subway, tram, train, airplane) before and during the restrictions adopted in ten countries on six continents: Australia, Brazil, China, Ghana, India, Iran, Italy, Norway, South Africa and the United States. This cross-country study also aims at understanding the predictors of protective behaviors related to the transport sector and COVID-19. Findings hinge upon an online survey conducted in May 2020 (N = 9,394). The empirical results quantify tremendous disruptions for both commuting and non-commuting travels, highlighting substantial reductions in the frequency of all types of trips and use of all modes. In terms of potential virus spread, airplanes and buses are perceived to be the riskiest transport modes, while avoidance of public transport is consistently found across the countries. According to the Protection Motivation Theory, the study sheds new light on the fact that two indicators, namely income inequality, expressed as Gini index, and the reported number of deaths due to COVID-19 per 100,000 inhabitants, aggravate respondents’ perceptions. This research indicates that socio-economic inequality and morbidity are not only related to actual health risks, as well documented in the relevant literature, but also to the perceived risks. These findings document the global impact of the COVID-19 crisis as well as provide guidance for transportation practitioners in developing future strategies.", "which Country ?", "India", 458.0, 463.0], ["Advocates of software risk management claim that by identifying and analyzing threats to success (i.e., risks) action can be taken to reduce the chance of failure of a project. The first step in the risk management process is to identify the risk itself, so that appropriate countermeasures can be taken. One problem in this task, however, is that no validated lists are available to help the project manager understand the nature and types of risks typically faced in a software project. This paper represents a first step toward alleviating this problem by developing an authoritative list of common risk factors. We deploy a rigorous data collection method called a "ranking-type" Delphi survey to produce a rank-order list of risk factors. 
This data collection method is designed to elicit and organize opinions of a panel of experts through iterative, controlled feedback. Three simultaneous surveys were conducted in three different settings: Hong Kong, Finland, and the United States. This was done to broaden our view of the types of risks, rather than relying on the view of a single culture, an aspect that has been ignored in past risk management research. In forming the three panels, we recruited experienced project managers in each country. The paper presents the obtained risk factor list, compares it with other published risk factor lists for completeness and variation, and analyzes common features and differences in risk factor rankings in the three countries. We conclude by discussing implications of our findings for both research and improving risk management practice.", "which Country ?", "Finland", 960.0, 967.0], ["The article analyses the most intense phase of a process of constitutional review in Kenya that has been ongoing since about 1990: that stage began in 2000 and is, perhaps, not yet completed, there being as yet no new constitution. The article describes the reasons for the review and the process. It offers an account of the role of the media and various sectors of society including women and previously marginalized ethnic groups, in shaping the agenda, the process and the outcome. It argues that although civil society, with much popular support, was prominent in pushing for change, when an official process of review began, the vested interests of government and even of those trusted with the review frustrated a quick outcome, and especially any outcome that meant curtailing the powers of government. Even high levels of popular involvement were unable to guarantee a new constitution against manipulation by government and other vested interests involved in review, including the law and the courts. However, a new constitution may yet emerge, and in any case the process may prove to have made an ineradicable impact on the shape of the nation's politics and the consciousness of the ordinary citizen.", "which Country ?", "Kenya", 85.0, 90.0], ["The restrictive measures implemented in response to the COVID-19 pandemic have triggered sudden massive changes to travel behaviors of people all around the world. This study examines the individual mobility patterns for all transport modes (walk, bicycle, motorcycle, car driven alone, car driven in company, bus, subway, tram, train, airplane) before and during the restrictions adopted in ten countries on six continents: Australia, Brazil, China, Ghana, India, Iran, Italy, Norway, South Africa and the United States. This cross-country study also aims at understanding the predictors of protective behaviors related to the transport sector and COVID-19. Findings hinge upon an online survey conducted in May 2020 (N = 9,394). The empirical results quantify tremendous disruptions for both commuting and non-commuting travels, highlighting substantial reductions in the frequency of all types of trips and use of all modes. In terms of potential virus spread, airplanes and buses are perceived to be the riskiest transport modes, while avoidance of public transport is consistently found across the countries. According to the Protection Motivation Theory, the study sheds new light on the fact that two indicators, namely income inequality, expressed as Gini index, and the reported number of deaths due to COVID-19 per 100,000 inhabitants, aggravate respondents’ perceptions. 
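The cross-country travel study above uses income inequality, expressed as the Gini index, as a predictor of perceived risk. A minimal sketch of the index via a standard rank-based identity (not the study's own code), where 0 means perfect equality and values near 1 mean extreme inequality:

```python
import numpy as np

def gini(incomes):
    """Gini coefficient of a sample of non-negative incomes."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    return (2 * np.arange(1, n + 1) @ x) / (n * x.sum()) - (n + 1) / n

print(gini([1, 1, 1, 1]))   # 0.0  - perfectly equal sample
print(gini([0, 0, 0, 10]))  # 0.75 - almost all income held by one person
```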
This research indicates that socio-economic inequality and morbidity are not only related to actual health risks, as well documented in the relevant literature, but also to the perceived risks. These findings document the global impact of the COVID-19 crisis as well as provide guidance for transportation practitioners in developing future strategies.", "which Country ?", "Australia", 425.0, 434.0], ["The article deals with an oral speech phenomenon widespread in the Republic of Belarus, where it is known as trasjanka. This code originated through constant contact between Russian and Belarusian, two closely related East Slavonic languages. Discussed are the main features of this code (as used in the city of Minsk), the sources of its origin, different linguistic definitions and the attitude towards this code from those who dwell in the city of Minsk. Special attention is paid to the problem of distinction between trasjanka and different forms of codeswitching, also widely used in the Minsk language community.", "which Population under analysis ?", "Minsk", 312.0, 317.0], ["This paper addresses the issue of overeducation and undereducation using for the first time a British dataset which contains explicit information on the level of required education to enter a job across the generality of occupations. Three key issues within the overeducation literature are addressed. First, what determines the existence of over and undereducation and to what extent are over and undereducation substitutes for experience, tenure and training? Second, to what extent are over and undereducation temporary or permanent phenomena? Third, what are the returns to over and undereducation and do certain stylized facts discovered for the US and a number of European countries hold for Britain?", "which Population under analysis ?", "General", NaN, NaN], ["There is little question that substantial labor-market differences exist between men and women. Among the most researched differences is the male-female wage gap. Many different theories are used to explain why men earn more than women. One possible reason is based on the limited geographic mobility of married women (Robert Frank, 1978). Family mobility is a joint decision in which the needs of the husband and wife are balanced to maximize family welfare. Job-motivated relocations are generally made to benefit the primary earner in the family. This leads to a constrained job search for the secondary earner, as he or she must search for a job in a limited geographic area. Since the husband is still the primary wage earner in many families, the job search of the wife may suffer. Individuals who are tied to a certain area are labeled "tied-stayers," while secondary earners who move for the benefit of the family are labeled "tied-movers" (Jacob Mincer, 1978). The wages of a tied-stayer or tied-mover may not be substantially lower if the family lives in or moves to a large city. If a large labor market has more vacancies, the wife may locate a wage offer near the maximum she would find with a nationwide job search. However, being a tied-stayer or tied-mover can lower the wife's wage if the family lives in or moves to a small community. A small labor market will reduce the likelihood of her finding a job that utilizes her skills. As a result she may accept a job for which she is overqualified and thus earn a lower wage. 
This hypothesized relationship between the likelihood of being overqualified and SMSA size is termed "differential overqualification." Frank (1978) and Haim Ofek and Yesook Merrill (1994) provide support for the theory of differential overqualification by finding that the male-female wage gap is greater in smaller SMSA's. While the results are consistent with the existence of differential overqualification, they may also result from other situations as well. Firms in small labor markets may use their monopsony power to keep wages down. Local demand shocks are found to be a major source of wage variation both across and within local labor markets (Robert Topel, 1986). Since large labor markets are generally more diversified, a demand shock can have a substantial impact on immobile workers in small labor markets. Another reason for examining differential overqualification involves the assumption that there are more vacancies in large labor markets. While there is little doubt that more vacancies exist in large labor markets, there are also likely to be more people searching for jobs in large labor markets. If the greater number of vacancies is offset by the larger number of searchers, it is unclear whether women will be more likely to be overqualified in small labor markets. Instead of relying on wages to determine if differential overqualification exists, we consider an explicit form of overqualification based on education.", "which Population under analysis ?", "General", NaN, NaN], ["A SEIR simulation model for the COVID-19 pandemic was developed (http://covidsim.eu) and applied to a hypothetical European country of 10 million population. Our results show which interventions potentially push the epidemic peak into the subsequent year (when vaccinations may be available) or which fail. Different levels of control (via contact reduction) resulted in 22% to 63% of the population sick, 0.2% to 0.6% hospitalised, and 0.07% to 0.28% dead (n=6,450 to 28,228).", "which location ?", "Hypothetical European Country", 102.0, 131.0], ["\nBackground\nTo control the COVID-19 outbreak in Japan, sports and entertainment events were canceled and schools were closed throughout Japan from February 26 through March 19. That policy has been designated as voluntary event cancellation and school closure (VECSC).\n\n\nObject\nThis study assesses VECSC effectiveness based on predicted outcomes.\n\n\nMethods\nA simple susceptible–infected–recovered model was applied to data of patients with symptoms in Japan during January 14 through March 26. The respective reproduction numbers for periods before VECSC (R0), during VECSC (Re), and after VECSC (Ra) were estimated.\n\n\nResults\nResults suggest R0 before VECSC as 2.534 [2.449, 2.598], Re during VECSC as 1.077 [0.948, 1.228], and Ra after VECSC as 4.455 [3.615, 5.255].\n\n\nDiscussion and conclusion\nResults demonstrated that VECSC can reduce COVID-19 infectiousness considerably, but after VECSC, the value of the reproduction number rose to exceed 4.0.\n", "which location ?", "Japan", 103.0, 108.0], ["Since December 2019, COVID-19 has raged in Wuhan and subsequently all over China and the world. We propose a Cybernetics-based Dynamic Infection Model (CDIM) to describe the dynamic infection process with a probability distributed incubation delay and feedback principle. Reproductive trends and the stability of the SARS-COV-2 infection in a city can then be analyzed, and the uncontrollable risks can be forecasted before they really happen. 
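Several of the surrounding entries rest on compartmental models: the SEIR simulator at covidsim.eu, the SIR fit behind the VECSC reproduction numbers, and the CDIM variant introduced above. A minimal SIR sketch that reuses two of the reported point estimates (R0 = 2.534 before and Re = 1.077 during the intervention); the 5-day infectious period, the 10-million population, and the initial conditions are assumptions:

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    """Classic susceptible-infected-recovered dynamics."""
    S, I, R = y
    N = S + I + R
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

gamma = 1 / 5.0                 # assumed mean infectious period of 5 days
t = np.linspace(0, 180, 181)
for R0 in (2.534, 1.077):       # before vs during the intervention
    beta = R0 * gamma           # R0 = beta / gamma
    y = odeint(sir, [9.999e6, 1e3, 0.0], t, args=(beta, gamma))
    print(f'R0 = {R0}: peak infected = {y[:, 1].max():,.0f}')
```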
The infection mechanism of a city is depicted using the philosophy of cybernetics and approaches of control engineering. Distinguished from other epidemiological models, such as SIR, SEIR, etc., that compute the theoretical number of infected people in a closed population, CDIM considers the immigration and emigration population as system inputs, and administrative and medical resources as dynamic control variables. The epidemic regulation can be simulated in the model to support the decision-making for containing the outbreak. City case studies are demonstrated for verification and validation.", "which location ?", "China", 75.0, 80.0], ["(1) Background: Although bullying victimization is a phenomenon that is increasingly being recognized as a public health and mental health concern in many countries, research attention on this aspect of youth violence in low- and middle-income countries, especially sub-Saharan Africa, is minimal. The current study examined the national prevalence of bullying victimization and its correlates among in-school adolescents in Ghana. (2) Methods: A sample of 1342 in-school adolescents in Ghana (55.2% males; 44.8% females) aged 12–18 was drawn from the 2012 Global School-based Health Survey (GSHS) for the analysis. Self-reported bullying victimization “during the last 30 days, on how many days were you bullied?” was used as the central criterion variable. Three-level analyses using descriptive, Pearson chi-square, and binary logistic regression were performed. Results of the regression analysis were presented as adjusted odds ratios (aOR) at 95% confidence intervals (CIs), with a statistical significance pegged at p < 0.05. (3) Results: Bullying victimization was prevalent among 41.3% of the in-school adolescents. Pattern of results indicates that adolescents in SHS 3 [aOR = 0.34, 95% CI = 0.25, 0.47] and SHS 4 [aOR = 0.30, 95% CI = 0.21, 0.44] were less likely to be victims of bullying. Adolescents who had sustained injury [aOR = 2.11, 95% CI = 1.63, 2.73] were more likely to be bullied compared to those who had not sustained any injury. The odds of bullying victimization were higher among adolescents who had engaged in a physical fight [aOR = 1.90, 95% CI = 1.42, 2.25] and those who had been physically attacked [aOR = 1.73, 95% CI = 1.32, 2.27]. Similarly, adolescents who felt lonely were more likely to report being bullied [aOR = 1.50, 95% CI = 1.08, 2.08] as against those who did not feel lonely. Additionally, adolescents with a history of suicide attempts were more likely to be bullied [aOR = 1.63, 95% CI = 1.11, 2.38] and those who used marijuana had higher odds of bullying victimization [aOR = 3.36, 95% CI = 1.10, 10.24]. (4) Conclusions: Current findings highlight the need for policy makers and school authorities in Ghana to design and implement policies and anti-bullying interventions (e.g., Social Emotional Learning (SEL), Rational Emotive Behavioral Education (REBE), Marijuana Cessation Therapy (MCT)) focused on addressing behavioral issues, mental health and substance abuse among in-school adolescents.", "which location ?", "Ghana", 425.0, 430.0], ["Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. 
We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. Shorter serial intervals lead to lower estimates of R0, which suggest that half of all secondary infections should be prevented to control spread.", "which location ?", "Singapore", 297.0, 306.0], ["Based on the study of Darjeeling Municipality, the paper engages with issues pertaining to understanding the matrixes of power relations involved in the supply of water in Darjeeling town in India. The discussion in the paper focuses on urbanization, the shrinking water resources, and increased demand for water on the one hand; and the role of local administration, the emergence of the water mafia, and the \u2018Samaj\u2019 (society), all contributing to a skewed and inequitable distribution of water and the assumption of proprietorship or the appropriation of water commons, culminating in the accentuation of water-rights deprivation in Darjeeling Municipal Area.", "which location ?", "Darjeeling", 30.0, 40.0], ["English Abstract: Background: Since the emergence of the first pneumonia cases in Wuhan, China, the novel coronavirus (2019-nCov) infection has been quickly spreading out to other provinces and neighbouring countries. Estimation of the basic reproduction number by means of mathematical modelling can be helpful for determining the potential and severity of an outbreak, and providing critical information for identifying the type of disease interventions and intensity. Methods: A deterministic compartmental model was devised based on the clinical progression of the disease, epidemiological status of the individuals, and the intervention measures. Findings: The estimation results based on likelihood and model analysis reveal that the control reproduction number may be as high as 6.47 (95% CI 5.71-7.23). Sensitivity analyses reveal that interventions, such as intensive contact tracing followed by quarantine and isolation, can effectively reduce the control reproduction number and transmission risk, with the effect of travel restriction of Wuhan on 2019-nCov infection in Beijing being almost equivalent to increasing quarantine by 100-thousand baseline value. Interpretation: It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities can contribute to the prevention and control of the 2019-nCov infection, and how long they should be maintained.
Under the most restrictive measures, the outbreak is expected to peak within two weeks (since January 23rd 2020) with a significantly low peak value. With travel restriction (no imported exposed individuals to Beijing), the number of infected individuals in 7 days will decrease by 91.14% in Beijing, compared with the scenario of no travel restriction.", "which location ?", "China", 89.0, 94.0], ["By 27 February 2020, the outbreak of coronavirus disease 2019 (COVID\u201019) caused 82 623 confirmed cases and 2858 deaths globally, more than severe acute respiratory syndrome (SARS) (8273 cases, 775 deaths) and Middle East respiratory syndrome (MERS) (1139 cases, 431 deaths) caused in 2003 and 2013, respectively. COVID\u201019 has spread to 46 countries internationally. Total fatality rate of COVID\u201019 is estimated at 3.46% so far based on published data from the Chinese Center for Disease Control and Prevention (China CDC). Average incubation period of COVID\u201019 is around 6.4 days, ranging from 0 to 24 days. The basic reproductive number (R0) of COVID\u201019 ranges from 2 to 3.5 at the early phase regardless of different prediction models, which is higher than that of SARS and MERS.
A study from China CDC showed that the majority of patients (80.9%) were considered asymptomatic or to have mild pneumonia but released large amounts of viruses at the early phase of infection, which posed enormous challenges for containing the spread of COVID\u201019. Nosocomial transmission was another severe problem. A total of 3019 health workers were infected by 12 February 2020, which accounted for 3.83% of the total number of infections, and extremely burdened the health system, especially in Wuhan. Limited epidemiological and clinical data suggest that the disease spectrum of COVID\u201019 may differ from SARS or MERS. We summarize the latest literature on genetic, epidemiological, and clinical features of COVID\u201019 in comparison to SARS and MERS and emphasize special measures on diagnosis and potential interventions. This review will improve our understanding of the unique features of COVID\u201019 and enhance our control measures in the future.", "which location ?", "China", 511.0, 516.0], ["We estimate the effective reproduction number for 2019-nCoV based on the daily reported cases from China CDC. The results indicate that 2019-nCoV has a higher effective reproduction number than SARS with a comparable fatality rate.", "which location ?", "China", 99.0, 104.0], ["Self-sustaining human-to-human transmission of the novel coronavirus (2019-nCov) is the only plausible explanation of the scale of the outbreak in Wuhan. We estimate that, on average, each case infected 2.6 (uncertainty range: 1.5-3.5) other people up to 18 January 2020, based on an analysis combining our past estimates of the size of the outbreak in Wuhan with computational modelling of potential epidemic trajectories. This implies that control measures need to block well over 60% of transmission to be effective in controlling the outbreak. It is likely, based on the experience of SARS and MERS-CoV, that the number of secondary cases caused by a case of 2019-nCoV is highly variable \u2013 with many cases causing no secondary infections, and a few causing many. Whether transmission is continuing at the same rate currently depends on the effectiveness of current control measures implemented in China and the extent to which the populations of affected areas have adopted risk-reducing behaviours. In the absence of antiviral drugs or vaccines, control relies upon the prompt detection and isolation of symptomatic cases. It is unclear at the current time whether this outbreak can be contained within China; uncertainties include the severity spectrum of the disease caused by this virus and whether cases with relatively mild symptoms are able to transmit the virus efficiently. Identification and testing of potential cases need to be as extensive as is permitted by healthcare and diagnostic testing capacity \u2013 including the identification, testing and isolation of suspected cases with only mild to moderate disease (e.g. influenza-like illness), when logistically feasible.", "which location ?", "Wuhan", 147.0, 152.0], ["Background: Estimating key infectious disease parameters from the COVID-19 outbreak is quintessential for modelling studies and guiding intervention strategies. Whereas different estimates for the incubation period distribution and the serial interval distribution have been reported, estimates of the generation interval for COVID-19 have not been provided.
Methods: We used outbreak data from clusters in Singapore and Tianjin, China to estimate the generation interval from symptom onset data while acknowledging uncertainty about the incubation period distribution and the underlying transmission network. From those estimates we obtained the proportions of pre-symptomatic transmission and the reproduction numbers. Results: The mean generation interval was 5.20 (95%CI 3.78-6.78) days for Singapore and 3.95 (95%CI 3.01-4.91) days for Tianjin, China, when relying on a previously reported incubation period with mean 5.2 and SD 2.8 days. The proportion of pre-symptomatic transmission was 48% (95%CI 32-67%) for Singapore and 62% (95%CI 50-76%) for Tianjin, China. Estimates of the reproduction number based on the generation interval distribution were slightly higher than those based on the serial interval distribution. Conclusions: Estimating generation and serial interval distributions from outbreak data requires careful investigation of the underlying transmission network. Detailed contact tracing information is essential for correctly estimating these quantities.", "which location ?", "Tianjin, China", 421.0, 435.0], ["Bullying is relatively common and is considered to be a public health problem among adolescents worldwide. The present study examined the risk factors associated with bullying behavior among adolescents in a lower-middle-income country setting. Data on 6235 adolescents aged 11\u201316 years, derived from the Republic of Ghana\u2019s contribution to the Global School-based Health Survey, were analyzed using bivariate and multinomial logistic regression analysis. A high prevalence of bullying was found among Ghanaian adolescents. Alcohol-related health compromising behaviors (alcohol use, alcohol misuse and getting into trouble as a result of alcohol) increased the risk of being bullied. In addition, substance use, being physically attacked, being seriously injured, hunger and truancy were also found to increase the risk of being bullied. However, having understanding parents and having classmates who were kind and helpful reduced the likelihood of being bullied. These findings suggest that school-based intervention programs aimed at reducing rates of peer victimization should simultaneously target multiple risk behaviors. Teachers can also reduce peer victimization by introducing programs that enhance adolescents\u2019 acceptance of each other in the classroom.", "which location ?", "Ghana", 317.0, 322.0], ["ABSTRACT The present study exploits high-resolution hyperspectral imagery acquired by the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) sensor from the Hutti-Maski gold deposit area, India, to map hydrothermal alteration minerals. The study area is a volcanic-dominated late Archean greenstone belt that hosts major gold mineralization in the Eastern Dharwar Craton of southern India. The study encompasses pre-processing, spectral and spatial image reduction using Minimum Noise Fraction (MNF) and Fast Pixel Purity Index (FPPI), followed by endmember extraction using n-dimensional visualizer and the United States Geological Survey (USGS) mineral spectral library. Image-derived endmembers such as goethite, chlorite, chlorite at the mine site (chlorite mixed with mined materials), kaolinite, and muscovite were subsequently used in spectral mapping methods such as Spectral Angle Mapper (SAM), Spectral Information Divergence (SID) and its hybrid, i.e. SIDSAMtan.
A spectral similarity matrix of the target- and non-target-based methods has been proposed to find the possible optimum threshold needed to obtain the mineral map using spectral mapping methods. Relative Spectral Discrimination Power (RSDPW) and Confusion Matrix (CM) have been used to evaluate the performance of SAM, SID, and SIDSAMtan. The RSDPW and CM illustrate that SIDSAMtan benefits from the unique characteristics of SAM and SID to achieve better discrimination capability. The Overall Accuracy (OA) and kappa coefficient (\u03ba) of SAM, SID, and SIDSAMtan were computed using 900 random validation points, yielding 90% (OA) and 0.88 (\u03ba), 91.4% and 0.90, and 94.4% and 0.93, respectively. The obtained mineral map demonstrates that the northern portion of the area mainly consists of muscovite, whereas the southern part is marked by chlorite, goethite, muscovite and kaolinite, indicating the propylitic alteration. Most of these minerals are associated with altered metavolcanic rocks and migmatite.", "which Techniques/Methods ?", "Minimum Noise Fraction (MNF)", NaN, NaN], ["Several classification algorithms for pattern recognition have been tested in the mapping of tropical forest cover using airborne hyperspectral data. Results from the use of Maximum Likelihood (ML), Spectral Angle Mapper (SAM), Artificial Neural Network (ANN) and Decision Tree (DT) classifiers were compared and evaluated. It was found that ML performed the best followed by ANN, DT and SAM with accuracies of 86%, 84%, 51% and 49% respectively.", "which Techniques/Methods ?", "Spectral Angle Mapper (SAM)", NaN, NaN], ["The imaging spectroscopic technique has been used successfully for mineral and rock geological mapping and alteration information extraction, but it is mainly applied in arid and semi-arid land with low vegetation cover. Under high vegetation cover, the outcrops of altered rocks are small and sparsely distributed, so the altered rocks are difficult to identify directly. The target detection technique using imaging spectroscopic data should therefore be introduced for the extraction of small geological targets in areas of high vegetation cover. In this paper, we take the Ding-Ma gold deposit, located in Zhenan County, Shanxi province, as the study area; the spectral features of the targets and the backgrounds are studied and analyzed using field reflectance spectra, and, in addition to a study of the principles of the algorithms, some target detection algorithms appropriate for small geological target detection are introduced. Finally, the small altered-rock targets under forest vegetation cover are detected and discriminated in imaging spectroscopy data with the methods of spectral angle mapper (SAM), Constrained Energy Minimization (CEM) and Adaptive Cosine Estimator (ACE). The detection results are reasonable and indicate the ability of target detection algorithms to detect geological targets in the forest area.", "which Techniques/Methods ?", "Spectral Angle Mapper (SAM)", NaN, NaN], ["Name ambiguity in the context of bibliographic citation affects the quality of services in digital libraries. Previous methods are not widely applied in practice because of their high computational complexity and their strong dependency on excessive attributes, such as institutional affiliation, research area, address, etc., which are difficult to obtain in practice.
To solve this problem, we propose a novel coarse\u2010to\u2010fine framework for name disambiguation which sequentially employs 3 common and easily accessible attributes (i.e., coauthor name, article title, and publication venue). Our proposed framework is based on multiple clustering and consists of 3 steps: (a) clustering articles by coauthorship and obtaining rough clusters, that is, fragments; (b) clustering fragments obtained in step 1 by title information and getting bigger fragments; and (c) clustering fragments obtained in step 2 by the latent relations among venues. Experimental results on a Digital Bibliography and Library Project (DBLP) data set show that our method outperforms the existing state\u2010of\u2010the\u2010art methods by 2.4% to 22.7% on the average pairwise F1 score and is 10 to 100 times faster in terms of execution time.", "which Evidence ?", "Author name", NaN, NaN], ["This paper proposes a methodology which discriminates the articles by the target authors (\u201ctrue\u201d articles) from those by other homonymous authors (\u201cfalse\u201d articles). Author name searches for 2,595 \u201csource\u201d authors in six subject fields retrieved about 629,000 articles. In order to extract true articles from the large amount of the retrieved articles, including many false ones, two filtering stages were applied. At the first stage any retrieved article was eliminated as false if either its affiliation addresses had little similarity to those of its source article or there was no citation relationship between the journal of the retrieved article and that of its source article. At the second stage, a sample of retrieved articles was subjected to manual judgment, and utilizing the judgment results, discrimination functions based on logistic regression were defined. These discrimination functions demonstrated both the recall ratio and the precision of about 95% and the accuracy (correct answer ratio) of 90\u201395%. Existence of common coauthor(s), address similarity, title words similarity, and interjournal citation relationships between the retrieved and source articles were found to be the effective discrimination predictors. Whether or not the source author was from a specific country was also one of the important predictors. Furthermore, it was shown that a retrieved article is almost certainly true if it was cited by, or cocited with, its source article. The method proposed in this study would be effective when dealing with a large number of articles whose subject fields and affiliation addresses vary widely. \u00a9 2011 Wiley Periodicals, Inc.", "which Evidence ?", "Title words", 1075.0, 1086.0], ["This paper proposes a methodology which discriminates the articles by the target authors (\u201ctrue\u201d articles) from those by other homonymous authors (\u201cfalse\u201d articles). Author name searches for 2,595 \u201csource\u201d authors in six subject fields retrieved about 629,000 articles. In order to extract true articles from the large amount of the retrieved articles, including many false ones, two filtering stages were applied. At the first stage any retrieved article was eliminated as false if either its affiliation addresses had little similarity to those of its source article or there was no citation relationship between the journal of the retrieved article and that of its source article.
At the second stage, a sample of retrieved articles was subjected to manual judgment, and utilizing the judgment results, discrimination functions based on logistic regression were defined. These discrimination functions demonstrated both the recall ratio and the precision of about 95% and the accuracy (correct answer ratio) of 90\u201395%. Existence of common coauthor(s), address similarity, title words similarity, and interjournal citation relationships between the retrieved and source articles were found to be the effective discrimination predictors. Whether or not the source author was from a specific country was also one of the important predictors. Furthermore, it was shown that a retrieved article is almost certainly true if it was cited by, or cocited with, its source article. The method proposed in this study would be effective when dealing with a large number of articles whose subject fields and affiliation addresses vary widely. \u00a9 2011 Wiley Periodicals, Inc.", "which Evidence ?", "Citation relationship", 585.0, 606.0], ["Abstract. Carrion-breeding Sarcophagidae (Diptera) can be used to estimate the post-mortem interval in forensic cases. Difficulties with accurate morphological identifications at any life stage and a lack of documented thermobiological profiles have limited their current usefulness. The molecular-based approach of DNA barcoding, which utilises a 648-bp fragment of the mitochondrial cytochrome oxidase subunit I gene, was evaluated in a pilot study for discrimination between 16 Australian sarcophagids. The current study comprehensively evaluated barcoding for a larger taxon set of 588 Australian sarcophagids. In total, 39 of the 84 known Australian species were represented by 580 specimens, which includes 92% of potentially forensically important species. A further eight specimens could not be identified, but were included nonetheless as six unidentifiable taxa. A neighbour-joining tree was generated and nucleotide sequence divergences were calculated. All species except Sarcophaga (Fergusonimyia) bancroftorum, known for high morphological variability, were resolved as monophyletic (99.2% of cases), with bootstrap support of 100. Excluding S. bancroftorum, the mean intraspecific and interspecific variation ranged from 1.12% and 2.81\u201311.23%, respectively, allowing for species discrimination. DNA barcoding was therefore validated as a suitable method for molecular identification of Australian Sarcophagidae, which will aid in the implementation of this fauna in forensic entomology.", "which Studied taxonomic group (Biology) ?", "Sarcophagidae", 27.0, 40.0], ["Butterfly monitoring and Red List programs in Switzerland rely on a combination of observations and collection records to document changes in species distributions through time. While most butterflies can be identified using morphology, some taxa remain challenging, making it difficult to accurately map their distributions and develop appropriate conservation measures. In this paper, we explore the use of the DNA barcode (a fragment of the mitochondrial gene COI) as a tool for the identification of Swiss butterflies and forester moths (Rhopalocera and Zygaenidae). We present a national DNA barcode reference library including 868 sequences representing 217 out of 224 resident species, or 96.9% of Swiss fauna. DNA barcodes were diagnostic for nearly 90% of Swiss species. 
The remaining 10% represent cases of para- and polyphyly likely involving introgression or incomplete lineage sorting among closely related taxa. We demonstrate that integrative taxonomic methods incorporating a combination of morphological and genetic techniques result in a rate of species identification of over 96% in females and over 98% in males, higher than either morphology or DNA barcodes alone. We explore the use of the DNA barcode for investigating boundaries among taxa, understanding the geographical distribution of cryptic diversity and evaluating the status of purportedly endemic taxa. Finally, we discuss how DNA barcodes may be used to improve field practices and ultimately enhance conservation strategies.", "which Studied taxonomic group (Biology) ?", "Zygaenidae", 558.0, 568.0], ["Dasysyrphus Enderlein (Diptera: Syrphidae) has posed taxonomic challenges to researchers in the past, primarily due to its lack of interspecific diagnostic characters. In the present study, DNA data (mitochondrial cytochrome c oxidase sub-unit I\u2014COI) were combined with morphology to help delimit species. This led to two species being resurrected from synonymy (D. laticaudus and D. pacificus) and the discovery of one new species (D. occidualis sp. nov.). An additional new species was described based on morphology alone (D. richardi sp. nov.), as the specimens were too old to obtain COI. Part of the taxonomic challenge presented by this group arises from missing type specimens. Neotypes are designated here for D. pauxillus and D. pinastri to bring stability to these names. An illustrated key to 13 Nearctic species is presented, along with descriptions, maps and supplementary data. A phylogeny based on COI is also presented and discussed.", "which Studied taxonomic group (Biology) ?", "Syrphidae", 32.0, 41.0], ["This study summarizes results of a DNA barcoding campaign on German Diptera, involving analysis of 45,040 specimens. The resultant DNA barcode library includes records for 2,453 named species comprising a total of 5,200 barcode index numbers (BINs), including 2,700 COI haplotype clusters without species\u2010level assignment, so-called \u201cdark taxa.\u201d Overall, 88 out of 117 families (75%) recorded from Germany were covered, representing more than 50% of the 9,544 known species of German Diptera. Until now, most of these families, especially the most diverse, have been taxonomically inaccessible. By contrast, within a few years this study provided an intermediate taxonomic system for half of the German Dipteran fauna, which will provide a useful foundation for subsequent detailed, integrative taxonomic studies. Using DNA extracts derived from bulk collections made by Malaise traps, we further demonstrate that species delineation using BINs and operational taxonomic units (OTUs) constitutes an effective method for biodiversity studies using DNA metabarcoding. As the reference libraries continue to grow, and gaps in the species catalogue are filled, BIN lists assembled by metabarcoding will provide greater taxonomic resolution.
The present study has three main goals: (a) to provide a DNA barcode library for 5,200 BINs of Diptera; (b) to demonstrate, based on the example of bulk extractions from a Malaise trap experiment, that DNA barcode clusters, labelled with globally unique identifiers (such as OTUs and/or BINs), provide a pragmatic, accurate solution to the \u201ctaxonomic impediment\u201d; and (c) to demonstrate that interim names based on BINs and OTUs obtained through metabarcoding provide an effective method for studies on species\u2010rich groups that are usually neglected in biodiversity research projects because of their unresolved taxonomy.", "which Studied taxonomic group (Biology) ?", "Diptera", 68.0, 75.0], ["For the first time, a nearly complete barcode library for European Gelechiidae is provided. DNA barcode sequences (COI gene - cytochrome c oxidase 1) from 751 out of 865 nominal species, belonging to 105 genera, were successfully recovered. A total of 741 species represented by specimens with sequences \u2265 500 bp and an additional ten species represented by specimens with shorter sequences were used to produce 53 NJ trees. Intraspecific barcode divergence averaged only 0.54%, whereas distance to the Nearest-Neighbour species averaged 5.58%. Of these, 710 species possessed unique DNA barcodes, but 31 species could not be reliably discriminated because of barcode sharing or partial barcode overlap. Species discrimination based on the Barcode Index Number (BIN) system was successful for 668 out of 723 species, which clustered into a minimum of one and a maximum of 22 unique BINs. Fifty-five species shared a BIN with up to four species and identification from DNA barcode data is uncertain. Finally, 65 clusters with a unique BIN remained unidentified to species level. These putative taxa, as well as 114 nominal species with more than one BIN, suggest the presence of considerable cryptic diversity, cases which should be examined in future revisionary studies.", "which Studied taxonomic group (Biology) ?", "Gelechiidae", 67.0, 78.0], ["This study provides a first, comprehensive, diagnostic use of DNA barcodes for the Canadian fauna of noctuoids or \u201cowlet\u201d moths (Lepidoptera: Noctuoidea) based on vouchered records for 1,541 species (99.1% species coverage), and more than 30,000 sequences. When viewed from a Canada-wide perspective, DNA barcodes unambiguously discriminate 90% of the noctuoid species recognized through prior taxonomic study, and resolution reaches 95.6% when considered at a provincial scale. Barcode sharing is concentrated in certain lineages with 54% of the cases involving 1.8% of the genera. Deep intraspecific divergence exists in 7.7% of the species, but further studies are required to clarify whether these cases reflect an overlooked species complex or phylogeographic variation in a single species. Non-native species possess higher Nearest-Neighbour (NN) distances than native taxa, whereas generalist feeders have lower NN distances than those with more specialized feeding habits. We found high concordance between taxonomic names and sequence clusters delineated by the Barcode Index Number (BIN) system with 1,082 species (70%) assigned to a unique BIN. The cases of discordance involve both BIN mergers and BIN splits with 38 species falling into both categories, most likely reflecting bidirectional introgression.
One fifth of the species are involved in a BIN merger, reflecting the presence of 158 species sharing their barcode sequence with at least one other taxon, and 189 species with low, but diagnostic COI divergence. A very few cases (13) involved species whose members fell into both categories. Most of the remaining 140 species show a split into two or three BINs per species, while Virbia ferruginosa was divided into 16. The overall results confirm that DNA barcodes are effective for the identification of Canadian noctuoids. This study also affirms that BINs are a strong proxy for species, providing a pathway for a rapid, accurate estimation of animal diversity.", "which Studied taxonomic group (Biology) ?", "Noctuoidea", 142.0, 152.0], ["Abstract The DNA barcode reference library for Lepidoptera holds much promise as a tool for taxonomic research and for providing the reliable identifications needed for conservation assessment programs. We gathered sequences for the barcode region of the mitochondrial cytochrome c oxidase subunit I gene from 160 of the 176 nominal species of Erebidae moths (Insecta: Lepidoptera) known from the Iberian Peninsula. These results arise from a research project constructing a DNA barcode library for the insect species of Spain. New records for 271 specimens (122 species) are coupled with preexisting data for 38 species from the Iberian fauna. Mean interspecific distance was 12.1%, while the mean nearest neighbour divergence was 6.4%. All 160 species possessed diagnostic barcode sequences, but one pair of congeneric taxa (Eublemma rosea and Eublemma rietzi) were assigned to the same BIN. As well, intraspecific sequence divergences higher than 1.5% were detected in four species, which likely represent species complexes. This study reinforces the effectiveness of DNA barcoding as a tool for monitoring biodiversity in particular geographical areas and the strong correspondence between sequence clusters delineated by BINs and species recognized through detailed taxonomic analysis.", "which Order (Taxonomy - biology) ?", "Lepidoptera", 47.0, 58.0], ["The proliferation of DNA data is revolutionizing all fields of systematic research. DNA barcode sequences, now available for millions of specimens and several hundred thousand species, are increasingly used in algorithmic species delimitations. This is complicated by occasional incongruences between species and gene genealogies, as indicated by situations where conspecific individuals do not form a monophyletic cluster in a gene tree. In two previous reviews, non-monophyly has been reported as being common in mitochondrial DNA gene trees. We developed a novel web service \u201cMonophylizer\u201d to detect non-monophyly in phylogenetic trees and used it to ascertain the incidence of species non-monophyly in COI (a.k.a. cox1) barcode sequence data from 4977 species and 41,583 specimens of European Lepidoptera, the largest data set of DNA barcodes analyzed in this regard. Particular attention was paid to accurate species identification to ensure data integrity. We investigated the effects of tree-building method, sampling effort, and other methodological issues, all of which can influence estimates of non-monophyly. We found a 12% incidence of non-monophyly, a value significantly lower than that observed in previous studies. Neighbor joining (NJ) and maximum likelihood (ML) methods yielded almost equal numbers of non-monophyletic species, but 24.1% of these cases of non-monophyly were only found by one of these methods.
Non-monophyletic species tend to show either low genetic distances to their nearest neighbors or exceptionally high levels of intraspecific variability. Cases of polyphyly in COI trees arising as a result of deep intraspecific divergence are negligible, as the detected cases reflected misidentifications or methodological errors. Taking into consideration variation in sampling effort, we estimate that the true incidence of non-monophyly is \u223c23%, but with operational factors still being included. Within the operational factors, we separately assessed the frequency of taxonomic limitations (presence of overlooked cryptic and oversplit species) and identification uncertainties. We observed that operational factors are potentially present in more than half (58.6%) of the detected cases of non-monophyly. Furthermore, we observed that in about 20% of non-monophyletic species and entangled species, the lineages involved are either allopatric or parapatric\u2014conditions where species delimitation is inherently subjective and particularly dependent on the species concept that has been adopted. These observations suggest that species-level non-monophyly in COI gene trees is less common than previously supposed, with many cases reflecting misidentifications, the subjectivity of species delimitation or other operational factors.", "which Order (Taxonomy - biology) ?", "Lepidoptera", 797.0, 808.0], ["This study reports the assembly of a DNA barcode reference library for species in the lepidopteran superfamily Noctuoidea from Canada and the USA. Based on the analysis of 69,378 specimens, the library provides coverage for 97.3% of the noctuoid fauna (3565 of 3664 species). In addition to verifying the strong performance of DNA barcodes in the discrimination of these species, the results indicate close congruence between the number of species analyzed (3565) and the number of sequence clusters (3816) recognized by the Barcode Index Number (BIN) system. Distributional patterns across 12 North American ecoregions are examined for the 3251 species that have GPS data while BIN analysis is used to quantify overlap between the noctuoid faunas of North America and other zoogeographic regions. This analysis reveals that 90% of North American noctuoids are endemic and that just 7.5% and 1.8% of BINs are shared with the Neotropics and with the Palearctic, respectively. One third (29) of the latter species are recent introductions and, as expected, they possess low intraspecific divergences.", "which Order (Taxonomy - biology) ?", "Lepidoptera", NaN, NaN], ["Mosquitoes are insects of the Diptera, Nematocera, and Culicidae families, some species of which are important disease vectors. Identifying mosquito species based on morphological characteristics is difficult, particularly the identification of specimens collected in the field as part of disease surveillance programs. Because of this difficulty, we constructed DNA barcodes of the cytochrome c oxidase subunit 1, the COI gene, for the more common mosquito species in China, including the major disease vectors. A total of 404 mosquito specimens were collected and assigned to 15 genera and 122 species and subspecies on the basis of morphological characteristics. Individuals of the same species grouped closely together in a Neighbor-Joining tree based on COI sequence similarity, regardless of collection site. COI gene sequence divergence was approximately 30 times higher for species in the same genus than for members of the same species.
Divergence in over 98% of congeneric species ranged from 2.3% to 21.8%, whereas divergence in conspecific individuals ranged from 0% to 1.67%. Cryptic species may be common and a few pseudogenes were detected.", "which Order (Taxonomy - biology) ?", "Diptera", 30.0, 37.0], ["The introduced C4 bunchgrass, Schizachyrium condensatum, is abundant in unburned, seasonally dry woodlands on the island of Hawaii, where it promotes the spread of fire. After fire, it is partially replaced by Melinis minutiflora, another invasive C4 grass. Seed bank surveys in unburned woodland showed that Melinis seed is present in locations without adult plants. Using a combination of germination tests and seedling outplant experiments, we tested the hypothesis that Melinis was unable to invade the unburned woodland because of nutrient and/or light limitation. We found that Melinis germination and seedling growth are depressed by the low light levels common under Schizachyrium in unburned woodland. Outplanted Melinis seedlings grew rapidly to flowering and persisted for several years in unburned woodland without nutrient additions, but only if Schizachyrium individuals were removed. Nutrients alone did not facilitate Melinis establishment. Competition between Melinis and Schizachyrium naturally occurs when individuals of both species emerge from the seed bank simultaneously, or when seedlings of one species emerge in sites already dominated by individuals of the other species. When both species are grown from seed, we found that Melinis consistently outcompetes Schizachyrium, regardless of light or nutrient treatments. When seeds of Melinis were added to pots with well-established Schizachyrium (and vice versa), Melinis eventually invaded and overgrew adult Schizachyrium under high, but not low, nutrients. By contrast, Schizachyrium could not invade established Melinis pots regardless of nutrient level. A field experiment demonstrated that Schizachyrium individuals are suppressed by Melinis in burned sites through competition for both light and nutrients. Overall, Melinis is a dominant competitor over Schizachyrium once it becomes established, whether in a pot or in the field. We believe that the dominance of Schizachyrium, rather than Melinis, in the unburned woodland is the result of asymmetric competition due to the prior establishment of Schizachyrium in these sites. If Schizachyrium were not present, the unburned woodland could support dense stands of Melinis. Fire disrupts the priority effect of Schizachyrium and allows the dominant competitor (Melinis) to enter the system where it eventually replaces Schizachyrium through resource competition.", "which Type of disturbance ?", "Fire", 164.0, 168.0], ["Context. Wildfire is a major driver of the structure and function of mallee eucalypt- and spinifex-dominated landscapes. Understanding how fire influences the distribution of biota in these fire-prone environments is essential for effective ecological and conservation-based management. Aims. We aimed to (1) determine the effects of an extensive wildfire (118 000 ha) on a small mammal community in the mallee shrublands of semiarid Australia and (2) assess the hypothesis that the fire-response patterns of small mammals can be predicted by their life-history characteristics. Methods. Small-mammal surveys were undertaken concurrently at 26 sites: once before the fire and on four occasions following the fire (including 14 sites that remained unburnt).
We documented changes in small-mammal occurrence before and after the fire, and compared burnt and unburnt sites. In addition, key components of vegetation structure were assessed at each site. Key results. Wildfire had a strong influence on vegetation structure and on the occurrence of small mammals. The mallee ningaui, Ningaui yvonneae, a dasyurid marsupial, showed a marked decline in the immediate post-fire environment, corresponding with a reduction in hummock-grass cover in recently burnt vegetation. Species richness of native small mammals was positively associated with unburnt vegetation, although some species showed no clear response to wildfire. Conclusions. Our results are consistent with the contention that mammal responses to fire are associated with their known life-history traits. The species most strongly affected by wildfire, N. yvonneae, has the most specific habitat requirements and restricted life history of the small mammals in the study area. The only species positively associated with recently burnt vegetation, the introduced house mouse, Mus domesticus, has a flexible life history and non-specialised resource requirements. Implications. Maintaining sources for recolonisation after large-scale wildfires will be vital to the conservation of native small mammals in mallee ecosystems.", "which Type of disturbance ?", "Fire", 139.0, 143.0], ["Questions: How did post-wildfire understorey plant community response, including exotic species response, differ between pre-fire treated areas that were less severely burned, and pre-fire untreated areas that were more severely burned? Were these differences consistent through time? Location: East-central Arizona, southwestern US. Methods: We used a multi-year data set from the 2002 Rodeo\u2013Chediski Fire to detect post-fire trends in plant community response in burned ponderosa pine forests. Within the burn perimeter, we examined the effects of pre-fire fuels treatments on post-fire vegetation by comparing paired treated and untreated sites on the Apache-Sitgreaves National Forest. We sampled these paired sites in 2004, 2005 and 2011. Results: There were significant differences in pre-fire treated and untreated plant communities by species composition and abundance in 2004 and 2005, but these communities were beginning to converge in 2011. Total understorey plant cover was significantly higher in untreated areas for all 3 yr. Plant cover generally increased between 2004 and 2005 and markedly decreased in 2011, with the exception of shrub cover, which steadily increased through time. The sharp decrease in forb and graminoid cover in 2011 is likely related to drought conditions since the fire. Annual/biennial forb and graminoid cover decreased relative to perennial cover through time, consistent with the initial floristics hypothesis. Exotic plant response was highly variable and not limited to the immediate post-fire, annual/biennial community. Despite low overall exotic forb and graminoid cover for all years (<2.5%), several exotic species increased in frequency, and the relative proportion of exotic to native cover increased through time. Conclusions: Pre-treatment fuel reduction treatments helped maintain foundation overstorey species and associated native plant communities following this large wildfire. The overall low cover of exotic species on these sites supports other findings that the disturbance associated with high-severity fire does not always result in exotic species invasions. 
The increase in relative cover and frequency through time indicates that some species are proliferating, and continued monitoring is recommended. Patterns of exotic species invasions after severe burning are not easily predicted, and are likely more dependent on site-specific factors such as propagules, weather patterns and management.", "which Type of disturbance ?", "Fire", 125.0, 129.0], ["Streams in mediterranean\u2010type climate regions are shaped by predictable seasonal events of flooding and drying over an annual cycle, but also show strong interannual flow variation.", "which Type of disturbance ?", "Flood", NaN, NaN], ["Native herbivores can establish novel interactions with alien plants after invasion. Nevertheless, it is unclear whether these new associations are quantitatively significant compared to the assemblages with native flora under natural conditions. Herbivores associated with two exotic plants, namely Senecio inaequidens and S. pterophorus, and two coexisting natives, namely S. vulgaris and S. lividus, were surveyed in a replicated long\u2010term field study to ascertain whether the plant\u2013herbivore assemblages in mixed communities are related to plant novelty and insect diet breadth. Native herbivores used exotic Senecio as their host plants. Of the 19 species of Lepidoptera, Diptera, and Hemiptera found in this survey, 14 were associated with the exotic Senecio plants. Most of these species were polyphagous, yet we found a higher number of individuals with a narrow diet breadth, which is contrary to the assumption that host switching mainly occurs in generalist herbivores. The Senecio specialist Sphenella marginata (Diptera: Tephritidae) was the most abundant and widely distributed insect species (ca. 80% of the identified specimens). Sphenella was associated with S. lividus, S. vulgaris and S. inaequidens and was not found on S. pterophorus. The presence of native plant congeners in the invaded community did not ensure an instantaneous ecological fitting between insects and alien plants. We conclude that novel associations between native herbivores and introduced Senecio plants are common under natural conditions. Plant novelty is, however, not the only predictor of herbivore abundance due to the complexity of natural conditions.", "which Investigated species ?", "Plants", 62.0, 68.0], ["Effective management of invasive species requires that we understand the mechanisms determining community invasibility. Successful invaders must tolerate abiotic conditions and overcome resistance from native species in invaded habitats. Biotic resistance to invasions may reflect the diversity, abundance, or identity of species in a community. Few studies, however, have examined the relative importance of abiotic and biotic factors determining community invasibility. In a greenhouse experiment, we simulated the abiotic and biotic gradients typically found in vernal pools to better understand their impacts on invasibility. Specifically, we invaded plant communities differing in richness, identity, and abundance of native plants (the "plant neighborhood") and depth of inundation to measure their effects on growth, reproduction, and survival of five exotic plant species. Inundation reduced growth, reproduction, and survival of the five exotic species more than did plant neighborhood. Inundation reduced survival of three species and growth and reproduction of all five species.
Neighboring plants reduced growth and reproduction of three species but generally did not affect survival. Brassica rapa, Centaurea solstitialis, and Vicia villosa all suffered high mortality due to inundation but were generally unaffected by neighboring plants. In contrast, Hordeum marinum and Lolium multiflorum, whose survival was unaffected by inundation, were more impacted by neighboring plants. However, the four measures describing plant neighborhood differed in their effects. Neighbor abundance impacted growth and reproduction more than did neighbor richness or identity, with growth and reproduction generally decreasing with increasing density and mass of neighbors. Collectively, these results suggest that abiotic constraints play the dominant role in determining invasibility along vernal pool and similar gradients. By reducing survival, abiotic constraints allow only species with the appropriate morphological and physiological traits to invade. In contrast, biotic resistance reduces invasibility only in more benign environments and is best predicted by the abundance, rather than diversity, of neighbors. These results suggest that stressful environments are not likely to be invaded by most exotic species. However, species, such as H. marinum, that are able to invade these habitats require careful management, especially since these environments often harbor rare species and communities.", "which Investigated species ?", "Plants", 730.0, 736.0], ["Theory suggests that introduction effort (propagule size or number) should be a key determinant of establishment success for exotic species. Unfortunately, however, propagule pressure is not recorded for most introductions. Studies must therefore either use proxies whose efficacy must be largely assumed, or ignore effort altogether. The results of such studies will be flawed if effort is not distributed at random with respect to other characteristics that are predicted to influence success. We use global data for more than 600 introduction events for birds to show that introduction effort is both the strongest correlate of introduction success, and correlated with a large number of variables previously thought to influence success. Apart from effort, only habitat generalism relates to establishment success in birds.", "which Investigated species ?", "Birds", 557.0, 562.0], ["What determines the number of alien species in a given region? \u2018Native biodiversity\u2019 and \u2018human impact\u2019 are typical answers to this question. Indeed, studies comparing different regions have frequently found positive relationships between number of alien species and measures of both native biodiversity (e.g. the number of native species) and human impact (e.g. human population). These relationships are typically explained by biotic acceptance or resistance, i.e. by influence of native biodiversity and human impact on the second step of the invasion process, establishment. The first step of the invasion process, introduction, has often been ignored. Here we investigate whether relationships between number of alien mammals and native biodiversity or human impact in 43 European countries are mainly shaped by differences in number of introduced mammals or establishment success. Our results suggest that the correlation between the number of native and established mammals is spurious, as it is simply explained by the fact that both quantities are linked to country area.
We also demonstrate that countries with higher human impact host more alien mammals than other countries because they received more introductions. Differences in the number of alien mammals cannot be explained by differences in establishment success. Our findings highlight the importance of human activities and question, at least for mammals in Europe, the importance of biotic acceptance and resistance.", "which Investigated species ?", "Mammals", 723.0, 730.0], ["1 During the last centuries many alien species have established and spread in new regions, where some of them cause large ecological and economic problems. As one of the main explanations of the spread of alien species, the enemy\u2010release hypothesis is widely accepted and frequently serves as justification for biological control. 2 We used a global fungus\u2013plant host distribution data set for 140 North American plant species naturalized in Europe to test whether alien plants are generally released from foliar and floral pathogens, whether they are mainly released from pathogens that are rare in the native range, and whether geographic spread of the North American plant species in Europe is associated with release from fungal pathogens. 3 We show that the 140 North American plant species naturalized in Europe were released from 58% of their foliar and floral fungal pathogen species. However, when we also consider fungal pathogens of the native North American host range that in Europe so far have only been reported on other plant species, the estimated release is reduced to 10.3%. Moreover, in Europe North American plants have mainly escaped their rare pathogens, of which the impact is restricted to a few populations. Most importantly and directly opposing the enemy\u2010release hypothesis, geographic spread of the alien plants in Europe was negatively associated with their release from fungal pathogens. 4 Synthesis. North American plants may have escaped particular fungal species that control them in their native range, but based on total loads of fungal species, release from foliar and floral fungal pathogens does not explain the geographic spread of North American plant species in Europe. To test whether enemy release is the major driver of plant invasiveness, we urgently require more studies comparing release of invasive and non\u2010invasive alien species from enemies of different guilds, and studies that assess the actual impact of the enemies.", "which Investigated species ?", "Plants", 471.0, 477.0], ["Disturbance is one of the most important factors promoting exotic invasion. However, if disturbance per se is sufficient to explain exotic success, then \u201cinvasion\u201d abroad should not differ from \u201ccolonization\u201d at home. Comparisons of the effects of disturbance on organisms in their native and introduced ranges are crucial to elucidate whether this is the case; however, such comparisons have not been conducted. We investigated the effects of disturbance on the success of Eurasian native Centaurea solstitialis in two invaded regions, California and Argentina, and one native region, Turkey, by conducting field experiments consisting of simulating different disturbances and adding locally collected C. solstitialis seeds. We also tested differences among C. solstitialis genotypes in these three regions and the effects of local soil microbes on C. solstitialis performance in greenhouse experiments. Disturbance increased C.
solstitialis abundance and performance far more in nonnative ranges than in the native range, but C. solstitialis biomass and fecundity were similar among populations from all regions grown under common conditions. Eurasian soil microbes suppressed growth of C. solstitialis plants, while Californian and Argentinean soil biota did not. We suggest that escape from soil pathogens may contribute to the disproportionately powerful effect of disturbance in introduced regions.", "which Investigated species ?", "Plants", 1205.0, 1211.0], ["Questions: Are island vegetation communities more invaded than their mainland counterparts? Is this pattern consistent among community types? Location: The coastal provinces of Catalonia and the para-oceanic Balearic Islands, both in NE Spain. These islands were connected to the continent more than 5.35 million years ago and are now located <200 km from the coast. Methods: We compiled a database of almost 3000 phytosociological relev\u00e9s from the Balearic Islands and Catalonia and compared the level of invasion by alien plants in island versus mainland communities. Twenty distinct plant community types were compared between island and mainland counterparts. Results: The percentage of plots with alien species, number, percentage and cover percentage of alien species per plot was greater in Catalonia than in the Balearic Islands in most communities. Overall, across communities, more alien species were found in the mainland (53) compared to the islands (only nine). Despite these differences, patterns of the level of invasion in communities were highly consistent between the islands and mainland. The most invaded communities were ruderal and riparian. Main conclusion: Our results indicate that para-oceanic island communities such as the Balearic Islands are less invaded than their mainland counterparts. This difference reflects a smaller regional alien species pool in the Balearic Islands than in the adjacent mainland, probably due to differences in landscape heterogeneity and propagule pressure. Keywords: alien plants; Balearic Islands; community similarity; Mediterranean communities; para-oceanic islands; relev\u00e9; species richness. Nomenclature: Bol\u00f2s & Vigo (1984\u20132001), Rivas-Martinez et al. (2001).", "which Investigated species ?", "Plants", 506.0, 512.0], ["The Enemy Release Hypothesis (ERH) predicts that when plant species are introduced outside their native range there is a release from natural enemies resulting in the plants becoming problematic invasive alien species (Lake & Leishman 2004; Puliafico et al. 2008). The release from natural enemies may benefit alien plants more than simply reducing herbivory because, according to the Evolution of Increased Competitive Ability (EICA) hypothesis, without pressure from herbivores more resources that were previously allocated to defence can be allocated to reproduction (Blossey & Notzold 1995). Alien invasive plants are therefore expected to have simpler herbivore communities with fewer specialist herbivores (Frenzel & Brandl 2003; Heleno et al. 2008; Heger & Jeschke 2014).", "which Investigated species ?", "Plants", 167.0, 173.0], ["Invasiveness may result from genetic variation and adaptation or phenotypic plasticity, and genetic variation in fitness traits may be especially critical. Pennisetum setaceum (fountain grass, Poaceae) is highly invasive in Hawaii (HI), moderately invasive in Arizona (AZ), and less invasive in southern California (CA).
In common garden experiments, we examined the relative importance of quantitative trait variation, precipitation, and phenotypic plasticity in invasiveness. In two very different environments, plants showed no differences by state of origin (HI, CA, AZ) in aboveground biomass, seeds/flower, and total seed number. Plants from different states were also similar within watering treatment. Plants with supplemental watering, relative to unwatered plants, had greater biomass, specific leaf area (SLA), and total seed number, but did not differ in seeds/flower. Progeny grown from seeds produced under different watering treatments showed no maternal effects in seed mass, germination, biomass or SLA. High phenotypic plasticity, rather than local adaptation is likely responsible for variation in invasiveness. Global change models indicate that temperature and precipitation patterns over the next several decades will change, although the direction of change is uncertain. Drier summers in southern California may retard further invasion, while wetter summers may favor the spread of fountain grass.", "which Investigated species ?", "Plants", 514.0, 520.0], ["Non-native mammals that are disturbance agents can promote non-native plant invasions, but to date there is scant evidence on the mechanisms behind this pattern. We used wild boar (Sus scrofa) as a model species to evaluate the role of non-native mammals in promoting plant invasion by identifying the degree to which soil disturbance and endozoochorous seed dispersal drive plant invasions. To test if soil disturbance promotes plant invasion, we conducted an exclosure experiment in which we recorded emergence, establishment and biomass of seedlings of seven non-native plant species planted in no-rooting, boar-rooting and artificial rooting patches in Patagonia, Argentina. To examine the role of boar in dispersing seeds we germinated viable seeds from 181 boar droppings and compared this collection to the soil seed bank by collecting a soil sample adjacent to each dropping. We found that both establishment and biomass of non-native seedlings in boar-rooting patches were double those in no-rooting patches. Values in artificial rooting patches were intermediate between those in boar-rooting and no-rooting treatments. By contrast, we found that the proportion of non-native seedlings in the soil samples was double that in the droppings, and over 80% of the germinated seeds were native species in both samples. Lastly, an effect size test showed that soil disturbance by wild boar rather than endozoochorous dispersal facilitates plant invasions. These results have implications for both the native and introduced ranges of wild boar, where rooting disturbance may facilitate community composition shifts.", "which Investigated species ?", "Mammals", 11.0, 18.0], ["In their colonized ranges, exotic plants may be released from some of the herbivores or pathogens of their home ranges but these can be replaced by novel enemies. It is of basic and practical interest to understand which characteristics of invaded communities control accumulation of the new pests. Key questions are whether enemy load on exotic species is smaller than on native competitors as suggested by the enemy release hypothesis (ERH) and whether this difference is most pronounced in resource\u2010rich habitats as predicted by the resource\u2013enemy release hypothesis (R\u2010ERH). 
In 72 populations of 12 exotic invasive species, we scored all visible above\u2010ground damage morphotypes caused by herbivores and fungal pathogens. In addition, we quantified levels of leaf herbivory and fruit damage. We then assessed whether variation in damage diversity and levels was explained by habitat fertility, by relatedness between exotic species and the native community or rather by native species diversity. In a second part of the study, we also tested the ERH and the R\u2010ERH by comparing damage of plants in 28 pairs of co\u2010occurring native and exotic populations, representing nine congeneric pairs of native and exotic species. In the first part of the study, diversity of damage morphotypes and damage levels of exotic populations were greater in resource\u2010rich habitats. Co\u2010occurrence of closely related, native species in the community significantly increased the probability of fruit damage. Herbivory on exotics was less likely in communities with high phylogenetic diversity. In the second part of the study, exotic and native congeneric populations incurred similar damage diversity and levels, irrespective of whether they co\u2010occurred in nutrient\u2010poor or nutrient\u2010rich habitats. Synthesis. We identified habitat productivity as a major community factor affecting accumulation of enemy damage by exotic populations. Similar damage levels in exotic and native congeneric populations, even in species pairs from fertile habitats, suggest that the enemy release hypothesis or the R\u2010ERH cannot always explain the invasiveness of introduced species.", "which Investigated species ?", "Plants", 34.0, 40.0], ["Few field experiments have examined the effects of both resource availability and propagule pressure on plant community invasibility. Two non-native forest species, a herb and a shrub (Hesperis matronalis and Rhamnus cathartica, respectively), were sown into 60 1-m 2 sub-plots distributed across three plots. These contained reconstructed native plant communities in a replaced surface soil layer in a North American forest interior. Resource availability and propagule pressure were manipulated as follows: understorey light level (shaded/unshaded), nutrient availability (control/fertilized), and seed pressures of the two non-native species (control/low/high). Hesperis and Rhamnus cover and the above-ground biomass of Hesperis were significantly higher in shaded sub-plots and at greater propagule pressures. Similarly, the above-ground biomass of Rhamnus was significantly increased with propagule pressure, although this was a function of density. In contrast, of species that seeded into plots from the surrounding forest during the growing season, the non-native species had significantly greater cover in unshaded sub-plots. Plants in these unshaded sub-plots were significantly taller than plants in shaded sub-plots, suggesting a greater fitness. Total and non-native species richness varied significantly among plots indicating the importance of fine-scale dispersal patterns. None of the experimental treatments influenced native species. 
Since the forest seed bank in our study was colonized primarily by non-native ruderal species that dominated understorey vegetation, the management of invasions by non-native species in forest understoreys will have to address factors that influence light levels and dispersal pathways.", "which Investigated species ?", "Plants", 1136.0, 1142.0], ["We surveyed naturally occurring leaf herbivory in nine invasive and nine non-invasive exotic plant species sampled in natural areas in Ontario, New York and Massachusetts, and found that invasive plants experienced, on average, 96% less leaf damage than non-invasive species. Invasive plants were also more taxonomically isolated than non-invasive plants, belonging to families with 75% fewer native North American genera. However, the relationship between taxonomic isolation at the family level and herbivory was weak. We suggest that invasive plants may possess novel phytochemicals with anti-herbivore properties in addition to allelopathic and anti-microbial characteristics. Herbivory could be employed as an easily measured predictor of the likelihood that recently introduced exotic plants may become invasive.", "which Investigated species ?", "Plants", 196.0, 202.0], ["The vegetation of Kings Park, near the centre of Perth, Western Australia, once had an overstorey of Eucalyptus marginata (jarrah) or Eucalyptus gomphocephala (tuart), and many trees still remain in the bushland parts of the Park. Avenues and roadsides have been planted with eastern Australian species, including Eucalyptus cladocalyx (sugar gum) and Eucalyptus botryoides (southern mahogany), both of which have become invasive. The present study examined the effect of a recent burn on the level of herbivory on these native and exotic eucalypts. Leaf damage, shoot extension and number of new leaves were measured on tagged shoots of saplings of each tree species in unburnt and burnt areas over an 8-month period. Leaf macronutrient levels were quantified and the number of arthropods on saplings was measured at the end of the recording period by chemical knockdown. Leaf macronutrients were mostly higher in all four species in the burnt area, and this was associated with generally higher numbers of canopy arthropods and greater levels of leaf damage. It is suggested that the pulse of soil nutrients after the fire resulted in more nutrient-rich foliage, which in turn was more palatable to arthropods. The resulting high levels of herbivory possibly led to reduced shoot extension of E. gomphocephala, E. botryoides and, to a lesser extent, E. cladocalyx. This acts as a negative feedback mechanism that lessens the tendency for lush, post-fire regrowth to outcompete other species of plants. There was no consistent difference in the levels of the various types of leaf damage or of arthropods on the native and the exotic eucalypts, suggesting that freedom from herbivory is not contributing to the invasiveness of the two exotic species.", "which Investigated species ?", "Plants", 1496.0, 1502.0], ["While small-scale studies show that more diverse native communities are less invasible by exotics, studies at large spatial scales often find positive correlations between native and exotic diversity. This large-scale pattern is thought to arise because landscapes with favorable conditions for native species also have favorable conditions for exotic species. 
From theory, we proposed an alternative hypothesis: the positive relationship at large scales is driven by spatial heterogeneity in species composition, which is driven by spatial heterogeneity in the environment. Landscapes with more spatial heterogeneity in the environment can sustain more native and more exotic species, leading to a positive correlation of native and exotic diversity at large scales. In a nested data set for grassland plants, we detected negative relationships between native and exotic diversity at small spatial scales and positive relationships at large spatial scales. Supporting our hypothesis, the positive relationships between native and exotic diversity at large scales were driven by positive relationships between native and exotic beta diversity. Further, both native and exotic diversity were positively correlated with spatial heterogeneity in abiotic conditions (variance of soil depth, soil nitrogen, and aspect) but were uncorrelated with average abiotic conditions, supporting the spatial-heterogeneity hypothesis but not the favorable-conditions", "which Investigated species ?", "Plants", 803.0, 809.0], ["The enemy release hypothesis (ERH) is often cited to explain why some plants successfully invade natural communities while others do not. This hypothesis maintains that plant populations are regulated by coevolved enemies in their native range but are relieved of this pressure where their enemies have not been co-introduced. Some studies have shown that invasive plants sustain lower levels of herbivore damage when compared to native species, but how damage affects fitness and population dynamics remains unclear. We used a system of co-occurring native and invasive Eugenia congeners in south Florida (USA) to experimentally test the ERH, addressing deficiencies in our understanding of the role of natural enemies in plant invasion at the population level. Insecticide was used to experimentally exclude insect herbivores from invasive Eugenia uniflora and its native co-occurring congeners in the field for two years. Herbivore damage, plant growth, survival, and population growth rates for the three species were then compared for control and insecticide-treated plants. Our results contradict the ERH, indicating that E. uniflora sustains more herbivore damage than its native congeners and that this damage negatively impacts stem height, survival, and population growth. In addition, most damage to E. uniflora, a native of Brazil, is carried out by Myllocerus undatus, a recently introduced weevil from Sri Lanka, and M. undatus attacks a significantly greater proportion of E. uniflora leaves than those of its native congeners. This interaction is particularly interesting because M. undatus and E. uniflora share no coevolutionary history, having arisen on two separate continents and come into contact on a third. Our study is the first to document negative population-level effects for an invasive plant as a result of the introduction of a novel herbivore. Such inhibitory interactions are likely to become more prevalent as suites of previously noninteracting species continue to accumulate and new communities assemble worldwide.", "which Investigated species ?", "Plants", 70.0, 76.0], ["This is an analysis of the attempts to colonize at least 208 species of parasites and predators on about 75 species of pest insects in the field in Canada. 
There was colonization by about 10% of the species that were introduced in totals of under 5,000 individuals, 40% of those introduced in totals of between 5,000 and 31,200, and 78% of those introduced in totals of over 31,200. Indications exist that initial colonizations may be favoured by large releases and by selection of release sites that are semi-isolated and not ecologically complex but that colonizations are hindered when the target species differs taxonomically from the species from which introduced agents originated and when the release site lacks factors needed for introduced agents to survive or when it is subject to potentially-avoidable physical disruptions. There was no evidence that the probability of colonization was increased when the numbers of individuals released were increased by laboratory propagation. About 10% of the attempts were successful from the economic viewpoint. Successes may be overestimated if the influence of causes of coincidental, actual, or supposed changes in pest abundance are overlooked. Most of the successes were by two or more kinds of agents of which at least one attacked species additional to the target pests. Unplanned consequences of colonization have not been sufficiently harmful to warrant precautions to the extent advocated by Turnbull and Chant but are sufficiently potentially dangerous to warrant the restriction of all colonization attempts to biological control experts. It is concluded that most failures were caused by inadequate procedures, rather than by any weaknesses inherent in the method, that those inadequacies can be avoided in the future, and therefore that biological control of pest insects has much unrealized potential for use in Canada.", "which Investigated species ?", "Insects", 124.0, 131.0], ["Surveys of recent (1973 to 1986) intentional releases of native birds and mammals to the wild in Australia, Canada, Hawaii, New Zealand, and the United States were conducted to document current activities, identify factors associated with success, and suggest guidelines for enhancing future work. Nearly 700 translocations were conducted each year. Native game species constituted 90 percent of translocations and were more successful (86 percent) than were translocations of threatened, endangered, or sensitive species (46 percent). Knowledge of habitat quality, location of release area within the species range, number of animals released, program length, and reproductive traits allowed correct classification of 81 percent of observed translocations as successful or not.", "which Investigated species ?", "Birds and Mammals", 64.0, 81.0], ["sempervirens L., a non-invasive native. We hypothesized that greater morphological plasticity may contribute to the ability of L. japonica to occupy more habitat types, and contribute to its invasiveness. We compared the morphology of plants provided with climbing supports with plants that had no climbing supports, and thus quantified their morphological plasticity in response to an important variable in their habitats. The two species responded differently to the treatments, with L. japonica showing greater responses in more characters. For example, Lonicera japonica responded to climbing supports with a 15.3% decrease in internode length, a doubling of internode number and a 43% increase in shoot biomass. In contrast, climbing supports did not influence internode length or shoot biomass for L. sempervirens, and only resulted in a 25% increase in internode number. This plasticity may allow L. 
japonica to actively place plant modules in favorable microhabitats and ultimately affect plant fitness.", "which Investigated species ?", "Plants", 235.0, 241.0], ["Many ecosystems are created by the presence of ecosystem engineers that play an important role in determining species' abundance and species composition. Additionally, a mosaic environment of engineered and non-engineered habitats has been shown to increase biodiversity. Non-native ecosystem engineers can be introduced into environments that do not contain or have lost species that form biogenic habitat, resulting in dramatic impacts upon native communities. Yet, little is known about how non-native ecosystem engineers interact with natives and other non-natives already present in the environment, specifically whether non-native ecosystem engineers facilitate other non-natives, and whether they increase habitat heterogeneity and alter the diversity, abundance, and distribution of benthic species. Through sampling and experimental removal of reefs, we examine the effects of a non-native reef-building tubeworm, Ficopomatus enigmaticus, on community composition in the central Californian estuary, Elkhorn Slough. Tubeworm reefs host significantly greater abundances of many non-native polychaetes and amphipods, particularly the amphipods Monocorophium insidiosum and Melita nitida, compared to nearby mudflats. Infaunal assemblages under F. enigmaticus reefs and around reef's edges show very low abundance and taxonomic diversity. Once reefs are removed, the newly exposed mudflat is colonized by opportunistic non-native species, such as M. insidiosum and the polychaete Streblospio benedicti, making removal of reefs a questionable strategy for control. These results show that provision of habitat by a non-native ecosystem engineer may be a mechanism for invasional meltdown in Elkhorn Slough, and that reefs increase spatial heterogeneity in the abundance and composition of benthic communities.", "which Investigated species ?", "Polychaetes", 1097.0, 1108.0], ["Brown, R. L. and Fridley, J. D. 2003. Control of plant species diversity and community invasibility by species immigration: seed richness versus seed density. \u2013 Oikos 102: 15\u201324. Immigration rates of species into communities are widely understood to influence community diversity, which in turn is widely expected to influence the susceptibility of ecosystems to species invasion. For a given community, however, immigration processes may impact diversity by means of two separable components: the number of species represented in seed inputs and the density of seed per species. The independent effects of these components on plant species diversity and consequent rates of invasion are poorly understood. We constructed experimental plant communities through repeated seed additions to independently measure the effects of seed richness and seed density on the trajectory of species diversity during the development of annual plant communities. Because we sowed species not found in the immediate study area, we were able to assess the invasibility of the resulting communities by recording the rate of establishment of species from adjacent vegetation. Early in community development when species only weakly interacted, seed richness had a strong effect on community diversity whereas seed density had little effect. After the plants became established, the effect of seed richness on measured diversity strongly depended on seed density, and disappeared at the highest level of seed density. 
The ability of surrounding vegetation to invade the experimental communities was decreased by seed density but not by seed richness, primarily because the individual effects of a few sown species could explain the observed invasion rates. These results suggest that seed density is just as important as seed richness in the control of species diversity, and perhaps a more important determinant of community invasibility than seed richness in dynamic plant assemblages.", "which Investigated species ?", "Plants", 1318.0, 1324.0], ["ABSTRACT Question: Do anthropogenic activities facilitate the distribution of exotic plants along steep altitudinal gradients? Location: Sani Pass road, Grassland biome, South Africa. Methods: On both sides of this road, presence and abundance of exotic plants was recorded in four 25-m long road-verge plots and in parallel 25 m \u00d7 2 m adjacent land plots, nested at five altitudinal levels: 1500, 1800, 2100, 2400 and 2700 m a.s.l. Exotic community structure was analyzed using Canonical Correspondence Analysis while a two-level nested Generalized Linear Model was fitted for richness and cover of exotics. We tested the upper altitudinal limits for all exotics along this road for spatial clustering around four potential propagule sources using a t-test. Results: Community structure, richness and abundance of exotics were negatively correlated with altitude. Greatest invasion by exotics was recorded for adjacent land at the 1500 m level. Of the 45 exotics, 16 were found at higher altitudes than expected and observations were spatially clustered around potential propagule sources. Conclusions: Spatial clustering of upper altitudinal limits around human inhabited areas suggests that exotics originate from these areas, while exceeding expected altitudinal limits suggests that distribution ranges of exotics are presently underestimated. Exotics are generally characterised by a high propagule pressure and/or persistent seedbanks, thus future tarring of the Sani Pass may result in an increase of exotic species richness and abundance. This would initially result from construction-related soil disturbance and subsequently from increased traffic, water run-off, and altered fire frequency. We suggest examples of management actions to prevent this. Nomenclature: Germishuizen & Meyer (2003).", "which Investigated species ?", "Plants", 85.0, 91.0], ["Ecosystems that are heavily invaded by an exotic species often contain abundant populations of other invasive species. This may reflect shared responses to a common factor, but may also reflect positive interactions among these exotic species. Armand Bayou (Pasadena, TX) is one such ecosystem where multiple species of invasive aquatic plants are common. We used this system to investigate whether presence of one exotic species made subsequent invasions by other exotic species more likely, less likely, or if it had no effect. We performed an experiment in which we selectively removed exotic rooted and/or floating aquatic plant species and tracked subsequent colonization and growth of native and invasive species. This allowed us to quantify how presence or absence of one plant functional group influenced the likelihood of successful invasion by members of the other functional group. We found that presence of alligatorweed (rooted plant) decreased establishment of new water hyacinth (free-floating plant) patches but increased growth of hyacinth in established patches, with an overall net positive effect on success of water hyacinth. 
Water hyacinth presence had no effect on establishment of alligatorweed but decreased growth of existing alligatorweed patches, with an overall net negative effect on success of alligatorweed. Moreover, observational data showed positive correlations between hyacinth and alligatorweed with hyacinth, on average, more abundant. The negative effect of hyacinth on alligatorweed growth implies competition, not strong mutual facilitation (invasional meltdown), is occurring in this system. Removal of hyacinth may increase alligatorweed invasion through release from competition. However, removal of alligatorweed may have more complex effects on hyacinth patch dynamics because there were strong opposing effects on establishment versus growth. The mix of positive and negative interactions between floating and rooted aquatic plants may influence local population dynamics of each group and thus overall invasion pressure in this watershed.", "which Investigated species ?", "Plants", 337.0, 343.0], ["Factors such as aggressiveness and adaptation to disturbed environments have been suggested as important characteristics of invasive ant species, but diet has rarely been considered. However, because invasive ants reach extraordinary densities at introduced locations, increased feeding efficiency or increased exploitation of new foods should be important in their success. Earlier studies suggest that honeydew produced by Homoptera (e.g., aphids, mealybugs, scale insects) may be important in the diet of the invasive ant species Solenopsis invicta. To determine if this is the case, we studied associations of S. invicta and Homoptera in east Texas and conducted a regional survey for such associations throughout the species' range in the southeast United States. In east Texas, we found that S. invicta tended Homoptera extensively and actively constructed shelters around them. The shelters housed a variety of Homoptera whose frequency differed according to either site location or season, presumably because of differences in host plant availability and temperature. Overall, we estimate that the honeydew produced in Homoptera shelters at study sites in east Texas could supply nearly one-half of the daily energetic requirements of an S. invicta colony. Of that, 70% may come from a single species of invasive Homoptera, the mealybug Antonina graminis. Homoptera shelters were also common at regional survey sites and A. graminis occurred in shelters at nine of 11 survey sites. A comparison of shelter densities at survey sites and in east Texas suggests that our results from east Texas could apply throughout the range of S. invicta in the southeast United States. Antonina graminis may be an exceptionally important nutritional resource for S. invicta in the southeast United States. While it remains largely unstudied, the tending of introduced or invasive Homoptera also appears important to other, and perhaps all, invasive ant species. Exploitative or mutually beneficial associations that occur between these insects may be an important, previously unrecognized factor promoting their success.", "which Investigated species ?", "Insects", 467.0, 474.0], ["Plants introduced into a new range are expected to harbour fewer specialized herbivores and to receive less damage than conspecifics in native ranges. Datura stramonium was introduced in Spain about five centuries ago. Here, we compare damage by herbivores, plant size, and leaf trichomes between plants from non-native and native ranges and perform selection analyses. 
Non-native plants experienced much less damage, were larger and less pubescent than plants of native populations. While plant size was related to fitness in both ranges, selection to increase resistance was only detected in the native region. We suggest this is a consequence of a release from enemies in this new environment.", "which Investigated species ?", "Plants", 0.0, 6.0], ["Genetic diversity is supposed to support the colonization success of expanding species, in particular in situations where microsite availability is constrained. Addressing the role of genetic diversity in plant invasion experimentally requires its manipulation independent of propagule pressure. To assess the relative importance of these components for the invasion of Senecio vernalis, we created propagule mixtures of four levels of genotype diversity by combining seeds across remote populations, across proximate populations, within single populations and within seed families. In a first container experiment with constant Festuca rupicola density as matrix, genotype diversity was crossed with three levels of seed density. In a second experiment, we tested for effects of establishment limitation and genotype diversity by manipulating Festuca densities. Increasing genetic diversity had no effects on abundance and biomass of S. vernalis but positively affected the proportion of large individuals to small individuals. Mixtures composed from proximate populations had a significantly higher proportion of large individuals than mixtures composed from within seed families only. High propagule pressure increased emergence and establishment of S. vernalis but had no effect on individual growth performance. Establishment was favoured in containers with Festuca, but performance of surviving seedlings was higher in open soil treatments. For S. vernalis invasion, we found a shift in driving factors from density dependence to effects of genetic diversity across life stages. While initial abundance was mostly linked to the amount of seed input, genetic diversity, in contrast, affected later stages of colonization probably via sampling effects and seemed to contribute to filtering the genotypes that finally grew up. In consequence, when disentangling the mechanistic relationships of genetic diversity, seed density and microsite limitation in colonization of invasive plants, a clear differentiation between initial emergence and subsequent survival to juvenile and adult stages is required.", "which Investigated species ?", "Plants", 1983.0, 1989.0], ["Plant distributions are in part determined by environmental heterogeneity on both large (landscape) and small (several meters) spatial scales. Plant populations can respond to environmental heterogeneity via genetic differentiation between large distinct patches, and via phenotypic plasticity in response to heterogeneity occurring at small scales relative to dispersal distance. As a result, the level of environmental heterogeneity experienced across generations, as determined by seed dispersal distance, may itself be under selection. Selection could act to increase or decrease seed dispersal distance, depending on patterns of heterogeneity in environmental quality with distance from a maternal home site. Serpentine soils, which impose harsh and variable abiotic stress on non-adapted plants, have been partially invaded by Erodium cicutarium in northern California, USA. 
Using nearby grassland sites characterized as either serpentine or non-serpentine, we collected seeds from dense patches of E. cicutarium on both soil types in spring 2004 and subsequently dispersed those seeds to one of four distances from their maternal home site (0, 0.5, 1, or 10 m). We examined distance-dependent patterns of variation in offspring lifetime fitness, conspecific density, soil availability, soil water content, and aboveground grass and forb biomass. ANOVA revealed a distinct fitness peak when seeds were dispersed 0.5 m from their maternal home site on serpentine patches. In non-serpentine patches, fitness was reduced only for seeds placed back into the maternal home site. Conspecific density was uniformly high within 1 m of a maternal home site on both soils, whereas soil water content and grass biomass were significantly heterogeneous among dispersal distances only on serpentine soils. Structural equation modeling and multigroup analysis revealed significantly stronger direct and indirect effects linking abiotic and biotic variation to offspring performance on serpentine soils than on non-serpentine soils, indicating the potential for soil-specific selection on seed dispersal distance in this invasive species.", "which Investigated species ?", "Plants", 794.0, 800.0], ["Propagule pressure is recognized as a fundamental driver of freshwater fish invasions, though few studies have quantified its role. Natural experiments can be used to quantify the role of this factor relative to others in driving establishment success. An irrigation network in South Africa takes water from an inter-basin water transfer (IBWT) scheme to supply multiple small irrigation ponds. We compared fish community composition upstream, within, and downstream of the irrigation network, to show that this system is a unidirectional dispersal network with a single immigration source. We then assessed the effect of propagule pressure and biological adaptation on the colonization success of nine fish species across 30 recipient ponds of varying age. Establishing species received significantly more propagules at the source than did incidental species, while rates of establishment across the ponds displayed a saturation response to propagule pressure. This shows that propagule pressure is a significant driver of establishment overall. Those species that did not establish were either extremely rare at the immigration source or lacked the reproductive adaptations to breed in the ponds. The ability of all nine species to arrive at some of the ponds illustrates how long-term continuous propagule pressure from IBWT infrastructure enables range expansion of fishes. The quantitative link between propagule pressure and success and rate of population establishment confirms the driving role of this factor in fish invasion ecology.", "which Investigated species ?", "Fishes", 1370.0, 1376.0], ["Abstract: Invasive alien organisms pose a major threat to global biodiversity. The Cape Peninsula, South Africa, provides a case study of the threat of alien plants to native plant diversity. We sought to identify where alien plants would invade the landscape and what their threat to plant diversity could be. This information is needed to develop a strategy for managing these invasions at the landscape scale. We used logistic regression models to predict the potential distribution of six important invasive alien plants in relation to several environmental variables. 
The logistic regression models showed that alien plants could cover over 89% of the Cape Peninsula. Acacia cyclops and Pinus pinaster were predicted to cover the greatest area. These predictions were overlaid on the current distribution of native plant diversity for the Cape Peninsula in order to quantify the threat of alien plants to native plant diversity. We defined the threat to native plant diversity as the number of native plant species (divided into all species, rare and threatened species, and endemic species) whose entire range is covered by the predicted distribution of alien plant species. We used a null model, which assumed a random distribution of invaded sites, to assess whether area invaded is confounded with threat to native plant diversity. The null model showed that most alien species threaten more plant species than might be suggested by the area they are predicted to invade. For instance, the logistic regression model predicted that P. pinaster threatens 350 more native species, 29 more rare and threatened species, and 21 more endemic species than the null model would predict. Comparisons between the null and logistic regression models suggest that species richness and invasibility are positively correlated and that species richness is a poor indicator of invasive resistance in the study site. Our results emphasize the importance of adopting a spatially explicit approach to quantifying threats to biodiversity, and they provide the information needed to prioritize threats from alien species and the sites that need urgent management intervention.", "which Investigated species ?", "Plants", 158.0, 164.0], ["Lionfish (Pterois volitans), venomous predators from the Indo-Pacific, are recent invaders of the Caribbean Basin and southeastern coast of North America. Quantification of invasive lionfish abundances, along with potentially important physical and biological environmental characteristics, permitted inferences about the invasion process of reefs on the island of San Salvador in the Bahamas. Environmental wave-exposure had a large influence on lionfish abundance, which was more than 20 and 120 times greater for density and biomass respectively at sheltered sites as compared with wave-exposed environments. Our measurements of topographic complexity of the reefs revealed that lionfish abundance was not driven by habitat rugosity. Lionfish abundance was not negatively affected by the abundance of large native predators (or large native groupers) and was also unrelated to the abundance of medium prey fishes (total length of 5\u201310 cm). These relationships suggest that (1) higher-energy environments may impose intrinsic resistance against lionfish invasion, (2) habitat complexity may not facilitate the lionfish invasion process, (3) predation or competition by native fishes may not provide biotic resistance against lionfish invasion, and (4) abundant prey fish might not facilitate lionfish invasion success. The relatively low biomass of large grouper on this island could explain our failure to detect suppression of lionfish abundance and we encourage continuing the preservation and restoration of potential lionfish predators in the Caribbean. 
In addition, energetic environments might exert direct or indirect resistance to the lionfish proliferation, providing native fish populations with essential refuges.", "which Investigated species ?", "Fishes", 909.0, 915.0], ["Although the species pool, dispersal, and local interactions all influence species diversity, their relative importance is debated. I examined their importance in controlling the number of native and exotic plant species occupying tussocks formed by the sedge Carex nudata along a California stream. Of particular interest were the factors underlying a downstream increase in plant diversity and biological invasions. I conducted seed addition experiments and manipulated local diversity and cover to evaluate the degree to which tussocks saturate with species, and to examine the roles of local competitive processes, abiotic factors, and seed supply in controlling the system-wide patterns. Seeds of three native and three exotic plants sown onto experimentally assembled tussock communities less successfully established on tussocks with a greater richness of resident plants. Nonetheless, even the most diverse tussocks were somewhat colonized, suggesting that tussocks are not completely saturated with species. Similarly, in an experiment where I sowed seeds onto natural tussocks along the river, colonization increased two- to three-fold when I removed the resident species. Even on intact tussocks, however, seed addition increased diversity, indicating that the tussock assemblages are seed limited. Colonization success on cleared and uncleared tussocks increased downstream from km 0 to km 3 of the study site, but showed no trends from km 3 to km 8. This suggests that while abiotic and biotic features of the tussocks may control the increase in diversity and invasions from km 0 to km 3, similar increases from km 3 to km 8 are more likely explained by potential downstream increases in seed supply. The effective water dispersal of seed mimics and prevailingly downstream winds indicated that dispersal most likely occurs in a downstream direction. These results suggest that resident species diversity, competitive interactions, and seed supply similarly influence the colonization of native and exotic species.", "which Investigated species ?", "Plants", 732.0, 738.0], ["It has been suggested that more species have been successfully introduced to oceanic islands than to mainland regions. This suggestion has attracted considerable ecological interest and several theoretical mechanisms havebeen proposed. However, few data are available to test the hypotheses directly, and the pattern may simply result from many more species being transported to islands rather than mainland regions. Here I test this idea using data for global land birds and present evidence that introductions to islands have a higher probability of success than those to mainland regions. This difference between island and mainland landforms is not consistent among either taxonomic families or biogeographic regions. Instead, introduction attempts within the same biogeographic region have been significantly more successful than those that have occurred between two different biogeographic regions. Subsequently, the proportion of introduction attempts that have occurred within a single biogeographic region is thus a significant predictor of the observed variability in introduction success. 
I also show that the correlates of successful island introductions are probably different to those of successful mainland introductions.", "which Investigated species ?", "Birds", 466.0, 471.0], ["Introduced species must adapt their ecology, behaviour, and morphological traits to new conditions. The successful introduction and invasive potential of a species are related to its levels of phenotypic plasticity and genetic polymorphism. We analysed changes in the body mass and length of American mink (Neovison vison) since its introduction into the Warta Mouth National Park, western Poland, in relation to diet composition and colonization progress from 1996 to 2004. Mink body mass decreased significantly during the period of population establishment within the study area, with an average decrease of 13% from 1.36 to 1.18 kg in males and of 16% from 0.83 to 0.70 kg in females. Diet composition varied seasonally and between consecutive years. The main prey items were mammals and fish in the cold season and birds and fish in the warm season. During the study period the proportion of mammals preyed upon increased in the cold season and decreased in the warm season. The proportion of birds preyed upon decreased over the study period, whereas the proportion of fish increased. Following introduction, the strictly aquatic portion of mink diet (fish and frogs) increased over time, whereas the proportion of large prey (large birds, muskrats, and water voles) decreased. The average yearly proportion of large prey and average-sized prey in the mink diet was significantly correlated with the mean body masses of males and females. Biogeographical variation in the body mass and length of mink was best explained by the percentage of large prey in the mink diet in both sexes, and by latitude for females. Together these results demonstrate that American mink rapidly changed their body mass in relation to local conditions. This phenotypic variability may be underpinned by phenotypic plasticity and/or by adaptation of quantitative genetic variation. The potential to rapidly change phenotypic variation in this manner is an important factor determining the negative ecological impacts of invasive species. \u00a9 2012 The Linnean Society of London, Biological Journal of the Linnean Society, 2012, 105, 681\u2013693.", "which Investigated species ?", "Mammals", 780.0, 787.0], ["Aim Charles Darwin posited that introduced species with close relatives were less likely to succeed because of fiercer competition resulting from their similarity to residents. There is much debate about the generality of this rule, and recent studies on plant and fish introductions have been inconclusive. Information on phylogenetic relatedness is potentially valuable for explaining invasion outcomes and could form part of screening protocols for minimizing future invasions. We provide the first test of this hypothesis for terrestrial vertebrates using two new molecular phylogenies for native and introduced reptiles for two regions with the best data on introduction histories.", "which Investigated species ?", "Reptiles", 616.0, 624.0], ["Abstract: We present a generic scoring system that compares the impact of alien species among members of large taxonomic groups. This scoring can be used to identify the most harmful alien species so that conservation measures to ameliorate their negative effects can be prioritized. For all alien mammals in Europe, we assessed impact reports as completely as possible. 
Impact was classified as either environmental or economic. We subdivided each of these categories into five subcategories (environmental: impact through competition, predation, hybridization, transmission of disease, and herbivory; economic: impact on agriculture, livestock, forestry, human health, and infrastructure). We assigned all impact reports to one of these 10 categories. All categories had impact scores that ranged from zero (minimal) to five (maximal possible impact at a location). We summed all impact scores for a species to calculate \"potential impact\" scores. We obtained \"actual impact\" scores by multiplying potential impact scores by the percentage of area occupied by the respective species in Europe. Finally, we correlated species\u2019 ecological traits with the derived impact scores. Alien mammals from the orders Rodentia, Artiodactyla, and Carnivora caused the highest impact. In particular, the brown rat (Rattus norvegicus), muskrat (Ondatra zibethicus), and sika deer (Cervus nippon) had the highest overall scores. Species with a high potential environmental impact also had a strong potential economic impact. Potential impact also correlated with the distribution of a species in Europe. Ecological flexibility (measured as number of different habitats a species occupies) was strongly related to impact. The scoring system was robust to uncertainty in knowledge of impact and could be adjusted with weight scores to account for specific value systems of particular stakeholder groups (e.g., agronomists or environmentalists). Finally, the scoring system is easily applicable and adaptable to other taxonomic groups.", "which Investigated species ?", "Mammals", 298.0, 305.0], ["A commonly cited mechanism for invasion resistance is more complete resource use by diverse plant assemblages with maximum niche complementarity. We investigated the invasion resistance of several plant functional groups against the nonindigenous forb Spotted knapweed (Centaurea maculosa). The study consisted of a factorial combination of seven functional group removals (groups singularly or in combination) and two C. maculosa treatments (addition vs. no addition) applied in a randomized complete block design replicated four times at each of two sites. We quantified aboveground plant material nutrient concentration and uptake (concentration \u00d7 biomass) by indigenous functional groups: grasses, shallow\u2010rooted forbs, deep\u2010rooted forbs, spikemoss, and the nonindigenous invader C. maculosa. In 2001, C. maculosa density depended upon which functional groups were removed. The highest C. maculosa densities occurred where all vegetation or all forbs were removed. Centaurea maculosa densities were the lowest in plots where nothing, shallow\u2010rooted forbs, deep\u2010rooted forbs, grasses, or spikemoss were removed. Functional group biomass was also collected and analyzed for nitrogen, phosphorus, potassium, and sulphur. Based on covariate analyses, postremoval indigenous plot biomass did not relate to invasion by C. maculosa. Analysis of variance indicated that C. maculosa tissue nutrient percentage and net nutrient uptake were most similar to indigenous forb functional groups. Our study suggests that establishing and maintaining a diversity of plant functional groups within the plant community enhances resistance to invasion. 
Indigenous plants of functionally similar groups as an invader may be particularly important in invasion resistance.", "which Investigated species ?", "Plants", 1648.0, 1654.0], ["Aim Island faunas, particularly those with high levels of endemism, usually are considered especially susceptible to disruption from habitat disturbance and invasive alien species. We tested this general hypothesis by examining the distribution of small mammals along gradients of anthropogenic habitat disturbance in northern Luzon Island, an area with a very high level of mammalian endemism.", "which Investigated species ?", "Mammals", 254.0, 261.0], [". The effect of fire on annual plants was examined in two vegetation types at remnant vegetation edges in the Western Australian wheatbelt. Density and cover of non-native species were consistently greatest at the reserve edges, decreasing rapidly with increasing distance from reserve edge. Numbers of native species showed little effect of distance from reserve edge. Fire had no apparent effect on abundance of non-natives in Allocasuarina shrubland but abundance of native plants increased. Density of both non-native and native plants in Acacia acuminata-Eucalyptus loxophleba woodland decreased after fire. Fewer non-native species were found in the shrubland than in the woodland in both unburnt and burnt areas, this difference being smallest between burnt areas. Levels of soil phosphorus and nitrate were higher in burnt areas of both communities and ammonium also increased in the shrubland. Levels of soil phosphorus and nitrate were higher at the reserve edge in the unburnt shrubland, but not in the woodland. There was a strong correlation between soil phosphorus levels and abundance of non-native species in the unburnt shrubland, but not after fire or in the woodland. Removal of non-native plants in the burnt shrubland had a strong positive effect on total abundance of native plants, apparently due to increases in growth of smaller, suppressed native plants in response to decreased competition. Two native species showed increased seed production in plots where non-native plants had been removed. There was a general indication that, in the short term, fire does not necessarily increase invasion of these communities by non-native species and could, therefore be a useful management tool in remnant vegetation, providing other disturbances are minimised.", "which Investigated species ?", "Plants", 31.0, 37.0], ["ABSTRACT Early successional ruderal plants in North America include numerous native and nonnative species, and both are abundant in disturbed areas. The increasing presence of nonnative plants may negatively impact a critical component of food web function if these species support fewer or a less diverse arthropod fauna than the native plant species that they displace. We compared arthropod communities on six species of common early successional native plants and six species of nonnative plants, planted in replicated native and nonnative plots in a farm field. Samples were taken twice each year for 2 yr. In most arthropod samples, total biomass and abundance were substantially higher on the native plants than on the nonnative plants. Native plants produced as much as five times more total arthropod biomass and up to seven times more species per 100 g of dry leaf biomass than nonnative plants. Both herbivores and natural enemies (predators and parasitoids) predominated on native plants when analyzed separately. 
In addition, species richness was about three times greater on native than on nonnative plants, with 83 species of insects collected exclusively from native plants, and only eight species present only on nonnatives. These results support a growing body of evidence suggesting that nonnative plants support fewer arthropods than native plants, and therefore contribute to reduced food resources for higher trophic levels.", "which Investigated species ?", "Plants", 36.0, 42.0], ["Species that are introduced to novel environments can lose their native pathogens and parasites during the process of introduction. The escape from the negative effects associated with these natural enemies is commonly employed as an explanation for the success and expansion of invasive species, which is termed the enemy release hypothesis (ERH). In this study, nested PCR techniques and microscopy were used to determine the prevalence and intensity (respectively) of Plasmodium spp. and Haemoproteus spp. in introduced house sparrows and native urban birds of central Brazil. Generalized linear mixed models were fitted by Laplace approximation considering a binomial error distribution and logit link function. Location and species were considered as random effects and species categorization (native or non-indigenous) as fixed effects. We found that native birds from Brazil presented significantly higher parasite prevalence in accordance with the ERH. We also compared our data with the literature, and found that house sparrows native to Europe exhibited significantly higher parasite prevalence than introduced house sparrows from Brazil, which also supports the ERH. Therefore, it is possible that house sparrows from Brazil might have experienced a parasitic release during the process of introduction, which might also be related to a demographic release (e.g. release from the negative effects of parasites on host population dynamics).", "which Investigated species ?", "Birds", 555.0, 560.0], ["ABSTRACT Question: Do specific environmental conditions affect the performance and growth dynamics of one of the most invasive taxa (Carpobrotus aff. acinaciformis) on Mediterranean islands? Location: Four populations located on Mallorca, Spain. Methods: We monitored growth rates of main and lateral shoots of this stoloniferous plant for over two years (2002\u20132003), comparing two habitats (rocky coast vs. coastal dune) and two different light conditions (sun vs. shade). In one population of each habitat type, we estimated electron transport rate and the level of plant stress (maximal photochemical efficiency Fv/Fm) by means of chlorophyll fluorescence. Results: Main shoots of Carpobrotus grew at similar rates at all sites, regardless habitat type. However, growth rate of lateral shoots was greater in shaded plants than in those exposed to sunlight. Its high phenotypic plasticity, expressed in different allocation patterns in sun and shade individuals, and its clonal growth which promotes the continuous sea...", "which Investigated species ?", "Plants", 818.0, 824.0], ["We assessed the field-scale plant community associations of Carduus nutans and C. acanthoides, two similar, economically important invasive thistles. Several plant species were associated with the presence of Carduus thistles while others, including an important pasture species, were associated with Carduus free areas. 
Thus, even within fields, areas invaded by Carduus thistles have different vegetation than uninvaded areas, either because some plants can resist invasion or because invasion changes the local plant community. Our results will allow us to target future research about the role of vegetation structure in resisting and responding to invasion.", "which Investigated species ?", "Plants", 449.0, 455.0], ["Abstract: The successful invasion of exotic plants is often attributed to the absence of coevolved enemies in the introduced range (i.e., the enemy release hypothesis). Nevertheless, several components of this hypothesis, including the role of generalist herbivores, remain relatively unexplored. We used repeated censuses of exclosures and paired controls to investigate the role of a generalist herbivore, white\u2010tailed deer (Odocoileus virginianus), in the invasion of 3 exotic plant species (Microstegium vimineum, Alliaria petiolata, and Berberis thunbergii) in eastern hemlock (Tsuga canadensis) forests in New Jersey and Pennsylvania (U.S.A.). This work was conducted in 10 eastern hemlock (T. canadensis) forests that spanned gradients in deer density and in the severity of canopy disturbance caused by an introduced insect pest, the hemlock woolly adelgid (Adelges tsugae). We used maximum likelihood estimation and information theoretics to quantify the strength of evidence for alternative models of the influence of deer density and its interaction with the severity of canopy disturbance on exotic plant abundance. Our results were consistent with the enemy release hypothesis in that exotic plants gained a competitive advantage in the presence of generalist herbivores in the introduced range. The abundance of all 3 exotic plants increased significantly more in the control plots than in the paired exclosures. For all species, the inclusion of canopy disturbance parameters resulted in models with substantially greater support than the deer density only models. Our results suggest that white\u2010tailed deer herbivory can accelerate the invasion of exotic plants and that canopy disturbance can interact with herbivory to magnify the impact. In addition, our results provide compelling evidence of nonlinear relationships between deer density and the impact of herbivory on exotic species abundance. These findings highlight the important role of herbivore density in determining impacts on plant abundance and provide evidence of the operation of multiple mechanisms in exotic plant invasion.", "which Investigated species ?", "Plants", 44.0, 50.0], ["1. Information on the approximate number of individuals released is available for 47 of the 133 exotic bird species introduced to New Zealand in the late 19th and early 20th centuries. Of these, 21 species had populations surviving in the wild in 1969-79. The long interval between introduction and assessment of outcome provides a rare opportunity to examine the factors correlated with successful establishment without the uncertainty of long-term population persistence associated with studies of short duration. 2. The probability of successful establishment was strongly influenced by the number of individuals released during the main period of introductions. Eighty-three per cent of species that had more than 100 individuals released within a 10-year period became established, compared with 21% of species that had less than 100 birds released. 
The relationship between the probability of establishment and number of birds released was similar to that found in a previous study of introductions of exotic birds to Australia. 3. It was possible to look for a within-family influence on the success of introduction of the number of birds released in nine bird families. A positive influence was found within seven families and no effect in two families. This preponderance of families with a positive effect was statistically significant. 4. A significant effect of body weight on the probability of successful establishment was found, and negative effects of clutch size and latitude of origin. However, the statistical significance of these effects varied according to whether comparison was or was not restricted to within-family variation. After applying the Bonferroni adjustment to significance levels, to allow for the large number of variables and factors being considered, only the effect of the number of birds released was statistically significant. 5. No significant effects on the probability of successful establishment were apparent for the mean date of release, the minimum number of years in which birds were released, the hemisphere of origin (northern or southern) and the size and diversity of latitudinal distribution of the natural geographical range.", "which Investigated species ?", "Birds", 838.0, 843.0], ["Despite long-standing interest of terrestrial ecologists, freshwater ecosystems are a fertile, yet unappreciated, testing ground for applying community phylogenetics to uncover mechanisms of species assembly. We quantify phylogenetic clustering and overdispersion of native and non-native fishes of a large river basin in the American Southwest to test for the mechanisms (environmental filtering versus competitive exclusion) and spatial scales influencing community structure. Contrary to expectations, non-native species were phylogenetically clustered and related to natural environmental conditions, whereas native species were not phylogenetically structured, likely reflecting human-related changes to the basin. The species that are most invasive (in terms of ecological impacts) tended to be the most phylogenetically divergent from natives across watersheds, but not within watersheds, supporting the hypothesis that Darwin's naturalization conundrum is driven by the spatial scale. Phylogenetic distinctiveness may facilitate non-native establishment at regional scales, but environmental filtering restricts local membership to closely related species with physiological tolerances for current environments. By contrast, native species may have been phylogenetically clustered in historical times, but species loss from contemporary populations by anthropogenic activities has likely shaped the phylogenetic signal. Our study implies that fundamental mechanisms of community assembly have changed, with fundamental consequences for the biogeography of both native and non-native species.", "which Investigated species ?", "Fishes", 289.0, 295.0], ["Methods of risk assessment for alien species, especially for nonagricultural systems, are largely qualitative. Using a generalizable risk assessment approach and statistical models of fish introductions into the Great Lakes, North America, we developed a quantitative approach to target prevention efforts on species most likely to cause damage. Models correctly categorized established, quickly spreading, and nuisance fishes with 87 to 94% accuracy. 
We then identified fishes that pose a high risk to the Great Lakes if introduced from unintentional (ballast water) or intentional pathways (sport, pet, bait, and aquaculture industries).", "which Investigated species ?", "Fishes", 420.0, 426.0], ["1 Theory and empirical evidence suggest that community invasibility is influenced by propagule pressure, physical stress and biotic resistance from resident species. We studied patterns of exotic and native species richness across the Flooding Pampas of Argentina, and tested for exotic richness correlates with major environmental gradients, species pool size, and native richness, among and within different grassland habitat types. 2 Native and exotic richness were positively correlated across grassland types, increasing from lowland meadows and halophyte steppes, through humid to mesophyte prairies in more elevated topographic positions. Species pool size was positively correlated with local richness of native and exotic plants, being larger for mesophyte and humid prairies. Localities in the more stressful meadow and halophyte steppe habitats contained smaller fractions of their landscape species pools. 3 Native and exotic species numbers decreased along a gradient of increasing soil salinity and decreasing soil depth, and displayed a unimodal relationship with soil organic carbon. When covarying habitat factors were held constant, exotic and native richness residuals were still positively correlated across sites. Within grassland habitat types, exotic and native species richness were positively associated in meadows and halophyte steppes but showed no consistent relationship in the least stressful, prairie habitat types. 4 Functional group composition differed widely between native and exotic species pools. Patterns suggesting biotic resistance to invasion emerged only within humid prairies, where exotic richness decreased with increasing richness of native warm\u2010season grasses. This negative relationship was observed for other descriptors of invasion such as richness and cover of annual cool\u2010season forbs, the commonest group of exotics. 5 Our results support the view that ecological factors correlated with differences in invasion success change with the range of environmental heterogeneity encompassed by the analysis. Within narrow habitat ranges, invasion resistance may be associated with either physical stress or resident native diversity. Biotic resistance through native richness, however, appeared to be effective only at intermediate locations along a stress/fertility gradient. 6 We show that certain functional groups, not just total native richness, may be critical to community resistance to invasion. Identifying such native species groups is important for directing management and conservation efforts.", "which Investigated species ?", "Plants", 731.0, 737.0], ["Abstract. An emerging body of literature suggests that the richness of native and naturalized plant species are often positively correlated. It is unclear, however, whether this relationship is robust across spatial scales, and how a disturbance regime may affect it. Here, I examine the relationships of both richness and abundance between native and naturalized species of plants in two mediterranean scrub communities: coastal sage scrub (CSS) in California and xeric\u2010sloped matorral (XSM) in Chile. In each vegetation type I surveyed multiple sites, where I identified vascular plant species and estimated their relative cover. 
Herbaceous species richness was higher in XSM, while cover of woody species was higher in CSS, where woody species have a strong impact upon herbaceous species. As there were few naturalized species with a woody growth form, the analyses performed here relate primarily to herbaceous species. Relationships between the herbaceous cover of native and naturalized species were not significant in CSS, but were nearly significant in XSM. The herbaceous species richness of native and naturalized plants were not significantly correlated on sites that had burned less than one year prior to sampling in CSS, and too few sites were available to examine this relationship in XSM. In post 1\u2010year burn sites, however, herbaceous richness of native and naturalized species were positively correlated in both CSS and XSM. This relationship occurred at all spatial scales, from 400 m2 to 1 m2 plots. The consistency of this relationship in this study, together with its reported occurrence in the literature, suggests that this relationship may be general. Finally, the residuals from the correlations between native and naturalized species richness and cover, when plotted against site age (i.e. time since the last fire), show that richness and cover of naturalized species are strongly favoured on recently burned sites in XSM; this suggests that herbaceous species native to Chile are relatively poorly adapted to fire.", "which Investigated species ?", "Plants", 375.0, 381.0], ["European countries in general, and England in particular, have a long history of introducing non-native fish species, but there exist no detailed studies of the introduction pathways and propagule pressure for any European country. Using the nine regions of England as a preliminary case study, the potential relationship between the occurrence in the wild of non-native freshwater fishes (from a recent audit of non-native species) and the intensity (i.e. propagule pressure) and diversity of fish imports was investigated. The main pathways of introduction were via imports of fishes for ornamental use (e.g. aquaria and garden ponds) and sport fishing, with no reported or suspected cases of ballast water or hull fouling introductions. The recorded occurrence of non-native fishes in the wild was found to be related to the time (number of years) since the decade of introduction. A shift in the establishment rate, however, was observed in the 1970s after which the ratio of established-to-introduced species declined. The number of established non-native fish species observed in the wild was found to increase significantly (P < 0\u00b705) with increasing import intensity (log10(x + 1) of the numbers of fish imported for the years 2000\u20132004) and with increasing consignment diversity (log10(x + 1) of the numbers of consignment types imported for the years 2000\u20132004). The implications for policy and management are discussed.", "which Investigated species ?", "Fishes", 383.0, 389.0], ["Alien plants invade many ecosystems worldwide and often have substantial negative effects on ecosystem structure and functioning. Our ability to quantitatively predict these impacts is, in part, limited by the absence of suitable plant-spread models and by inadequate parameter estimates for such models. This paper explores the effects of model, plant, and environmental attributes on predicted rates and patterns of spread of alien pine trees (Pinus spp.) in South African fynbos (a mediterranean-type shrubland).
A factorial experimental design was used to: (1) compare the predictions of a simple reaction-diffusion model and a spatially explicit, individual-based simulation model; (2) investigate the sensitivity of predicted rates and patterns of spread to parameter values; and (3) quantify the effects of the simulation model's spatial grain on its predictions. The results show that the spatial simulation model places greater emphasis on interactions among ecological processes than does the reaction-diffusion model. This ensures that the predictions of the two models differ substantially for some factor combinations. The most important factor in the model is dispersal ability. Fire frequency, fecundity, and age of reproductive maturity are less important, while adult mortality has little effect on the model's predictions. The simulation model's predictions are sensitive to the model's spatial grain. This suggests that simulation models that use matrices as a spatial framework should ensure that the spatial grain of the model is compatible with the spatial processes being modeled. We conclude that parameter estimation and model development must be integrated procedures. This will ensure that the model's structure is compatible with the biological processes being modeled. Failure to do so may result in spurious predictions.", "which Investigated species ?", "Plants", 6.0, 12.0], ["Horticulture is an important source of naturalized plants, but our knowledge about naturalization frequencies and potential patterns of naturalization in horticultural plants is limited. We analyzed a unique set of data derived from the detailed sales catalogs (1887-1930) of the most important early Florida, USA, plant nursery (Royal Palm Nursery) to detect naturalization patterns of these horticultural plants in the state. Of the 1903 nonnative species sold by the nursery, 15% naturalized. The probability of plants becoming naturalized increases significantly with the number of years the plants were marketed. Plants that became invasive and naturalized were sold for an average of 19.6 and 14.8 years, respectively, compared to 6.8 years for non-naturalized plants, and the naturalization of plants sold for 30 years or more is 70%. Unexpectedly, plants that were sold earlier were less likely to naturalize than those sold later. The nursery's inexperience, which caused them to grow and market many plants unsuited to Florida during their early period, may account for this pattern. Plants with pantropical distributions and those native to both Africa and Asia were more likely to naturalize (42%) than were plants native to other smaller regions, suggesting that plants with large native ranges were more likely to naturalize. Naturalization percentages also differed according to plant life form, with the most naturalization occurring in aquatic herbs (36.8%) and vines (30.8%). Plants belonging to the families Araceae, Apocynaceae, Convolvulaceae, Moraceae, Oleaceae, and Verbenaceae had higher than expected naturalization. Information theoretic model selection indicated that the number of years a plant was sold, alone or together with the first year a plant was sold, was the strongest predictor of naturalization.
Because continued importation and marketing of nonnative horticultural plants will lead to additional plant naturalization and invasion, a comprehensive approach to address this problem, including research to identify and select noninvasive forms and types of horticultural plants, is urgently needed.", "which Investigated species ?", "Plants", 51.0, 57.0], ["Ecological filters and availability of propagules play key roles structuring natural communities. Propagule pressure has recently been suggested to be a fundamental factor explaining the success or failure of biological introductions. We tested this hypothesis with a remarkable data set on trees introduced to Isla Victoria, Nahuel Huapi National Park, Argentina. More than 130 species of woody plants, many known to be highly invasive elsewhere, were introduced to this island early in the 20th century, as part of an experiment to test their suitability as commercial forestry trees for this region. We obtained detailed data on three estimates of propagule pressure (number of introduced individuals, number of areas where introduced, and number of years during which the species was planted) for 18 exotic woody species. We matched these data with a survey of the species and number of individuals currently invading the island. None of the three estimates of propagule pressure predicted the current pattern of invasion. We suggest that other factors, such as biotic resistance, may be operating to determine the observed pattern of invasion, and that propagule pressure may play a relatively minor role in explaining at least some observed patterns of invasion success and failure.", "which Investigated species ?", "Plants", 396.0, 402.0], ["Island communities are generally viewed as being more susceptible to invasion than those of mainland areas, yet empirical evidence is almost lacking. A species-by-species examination of introduced birds in two independent island-mainland comparisons is not consistent with this hypothesis. In the New Zealand-mainland Australia comparison, 16 species were successful in both regions, 19 always failed and only eight had mixed outcomes. Mixed results were observed less often than expected by chance, and in only 5 cases was the relationship in the predicted direction. This result is not biased by differences in introduction effort because, within species, the number of individuals released in New Zealand did not differ significantly from those released in mainland Australia. A similar result emerged in the Hawaiian islands-mainland USA comparison: among the 35 species considered, 15 were successful in both regions, seven always failed and 13 had mixed outcomes. On this occasion, the results fit well with those expected by chance, and in only seven cases was the relationship in the direction predicted. I therefore conclude that, if true, the view that islands are less resistant than continents to invasions is far from universal.", "which Investigated species ?", "Birds", 197.0, 202.0], ["1 Alien species may form plant\u2013animal mutualistic complexes that contribute to their invasive potential. Using multivariate techniques, we examined the structure of a plant\u2013pollinator web comprising both alien and native plants and flower visitors in the temperate forests of north\u2010west Patagonia, Argentina. Our main objective was to assess whether plant species origin (alien or native) influences the composition of flower visitor assemblages.
We also examined the influence of other potential confounding intrinsic factors such as flower symmetry and colour, and extrinsic factors such as flowering time, site and habitat disturbance. 2 Flowers of alien and native plant species were visited by a similar number of species and proportion of insects from different orders, but the composition of the assemblages of flower\u2010visiting species differed between alien and native plants. 3 The influence of plant species origin on the composition of flower visitor assemblages persisted after accounting for other significant factors such as flowering time, bearing red corollas, and habitat disturbance. This influence was at least in part determined by the fact that alien flower visitors were more closely associated with alien plants than with native plants. The main native flower visitors were, on average, equally associated with native and alien plant species. 4 In spite of representing a minor fraction of total species richness (3.6% of all species), alien flower visitors accounted for > 20% of all individuals recorded on flowers. Thus, their high abundance could have a significant impact in terms of pollination. 5 The mutualistic web of alien plants and flower\u2010visiting insects is well integrated into the overall community\u2010wide pollination web. However, in addition to their use of the native biota, invasive plants and flower visitors may benefit from differential interactions with their alien partners. The existence of these invader complexes could contribute to the spread of aliens into novel environments.", "which Ecological Level of evidence ?", "Individual", NaN, NaN], ["A Ponto-Caspian amphipod Dikerogammarus haemobaphes has recently invaded European waters. In the recipient area, it encountered Dreissena polymorpha, a habitat-forming bivalve, co-occurring with the gammarids in their native range. We assumed that interspecific interactions between these two species, which could develop during their long-term co-evolution, may affect the gammarid behaviour in novel areas. We examined the gammarid ability to select a habitat containing living mussels and searched for cues used in that selection. We hypothesized that they may respond to such traits of a living mussel as byssal threads, activity (e.g. valve movements, filtration) and/or shell surface properties. We conducted the pairwise habitat-choice experiments in which we offered various objects to single gammarids in the following combinations: (1) living mussels versus empty shells (the general effect of living Dreissena); (2) living mussels versus shells with added byssal threads and shells with byssus versus shells without it (the effect of byssus); (3) living mussels versus shells, both coated with nail varnish to neutralize the shell surface (the effect of mussel activity); (4) varnished versus clean living mussels (the effect of shell surface); (5) varnished versus clean stones (the effect of varnish). We checked the gammarid positions in the experimental tanks after 24 h. The gammarids preferred clean living mussels over clean shells, regardless of the presence of byssal threads under the latter. They responded to the shell surface, exhibiting preferences for clean mussels over varnished individuals. They were neither affected by the presence of byssus nor by mussel activity. The ability to detect and actively select zebra mussel habitats may be beneficial for D. 
haemobaphes and help it establish stable populations in newly invaded areas.", "which Ecological Level of evidence ?", "Individual", NaN, NaN], ["Summary Introduction of an exotic species has the potential to alter interactions between fish and bivalves; yet our knowledge in this field is limited, not least by lack of studies involving fish early life stages (ELS). Here, for the first time, we examine glochidial infection of fish ELS by native and exotic bivalves in a system recently colonised by two exotic gobiid species (round goby Neogobius melanostomus, tubenose goby Proterorhinus semilunaris) and the exotic Chinese pond mussel Anodonta woodiana. The ELS of native fish were only rarely infected by native glochidia. By contrast, exotic fish displayed significantly higher native glochidia prevalence and mean intensity of infection than native fish (17 versus 2% and 3.3 versus 1.4 respectively), inferring potential for a parasite spillback/dilution effect. Exotic fish also displayed a higher parasitic load for exotic glochidia, inferring potential for invasional meltdown. Compared to native fish, presence of gobiids increased the total number of glochidia transported downstream on drifting fish by approximately 900%. We show that gobiid ELS are a novel, numerous and \u2018attractive\u2019 resource for unionid glochidia. As such, unionids could negatively affect gobiid recruitment through infection-related mortality of gobiid ELS and/or reinforce downstream unionid populations through transport on drifting gobiid ELS. These implications go beyond what is suggested in studies of older life stages, thereby stressing the importance of an holistic ontogenetic approach in ecological studies.", "which Ecological Level of evidence ?", "Population", NaN, NaN], ["ABSTRACT Questions: 1. Is there any post-dispersal positive effect of the exotic shrub Pyracantha angustifolia on the success of Ligustrum lucidum seedlings, as compared to the effect of the native Condalia montana or the open herbaceous patches between shrubs? 2. Is the possible facilitation by Pyracantha and/or Condalia related to differential emergence, growth, or survival of Ligustrum seedlings under their canopies? Location: Cordoba, central Argentina. Methods: We designed three treatments, in which ten mature individuals of Pyracantha, ten of the dominant native shrub Condalia montana, and ten patches without shrub cover were involved. In each treatment we planted seeds and saplings of Ligustrum collected from nearby natural populations. Seedlings emerging from the planted seeds were harvested after one year to measure growth. Survival of the transplanted saplings was recorded every two month during a year. Half of the planted seeds and transplanted saplings were cage-protected from rodents. Results...", "which Ecological Level of evidence ?", "Individual", NaN, NaN], ["Abstract The impact of human\u2010induced stressors, such as invasive species, is often measured at the organismal level, but is much less commonly scaled up to the population level. Interactions with invasive species represent an increasingly common source of stressor in many habitats. However, due to the increasing abundance of invasive species around the globe, invasive species now commonly cause stresses not only for native species in invaded areas, but also for other invasive species. 
I examine the European green crab Carcinus maenas, an invasive species along the northeast coast of North America, which is known to be negatively impacted in this invaded region by interactions with the invasive Asian shore crab Hemigrapsus sanguineus. Asian shore crabs are known to negatively impact green crabs via two mechanisms: by directly preying on green crab juveniles and by indirectly reducing green crab fecundity via interference (and potentially exploitative) competition that alters green crab diets. I used life\u2010table analyses to scale these two mechanistic stressors up to the population level in order to examine their relative impacts on green crab populations. I demonstrate that lost fecundity has larger impacts on per capita population growth rates, but that both predation and lost fecundity are capable of reducing population growth sufficiently to produce the declines in green crab populations that have been observed in areas where these two species overlap. By scaling up the impacts of one invader on a second invader, I have demonstrated that multiple documented interactions between these species are capable of having population\u2010level impacts and that both may be contributing to the decline of European green crabs in their invaded range on the east coast of North America.", "which Ecological Level of evidence ?", "Population", 160.0, 170.0], ["Abstract: Recent experiments suggest that introduced, non-migratory Canada geese (Branta canadensis) may be facilitating the spread of exotic grasses and decline of native plant species abundance on small islets in the Georgia Basin, British Columbia, which otherwise harbour outstanding examples of threatened maritime meadow ecosystems. We examined this idea by testing if the presence of geese predicted the abundance of exotic grasses and native competitors at 2 spatial scales on 39 islands distributed throughout the Southern Gulf and San Juan Islands of Canada and the United States, respectively. At the plot level, we found significant positive relationships between the percent cover of goose feces and exotic annual grasses. However, this trend was absent at the scale of whole islands. Because rapid population expansion of introduced geese in the region only began in the 1980s, our results are consistent with the hypothesis that the deleterious effects of geese on the cover of exotic annual grasses have yet to proceed beyond the local scale, and that a window of opportunity now exists in which to implement management strategies to curtail this emerging threat to native ecosystems. Research is now needed to test if the removal of geese results in the decline of exotic annual grasses.", "which Ecological Level of evidence ?", "Population", 812.0, 822.0], ["Plants introduced into a new range are expected to harbour fewer specialized herbivores and to receive less damage than conspecifics in native ranges. Datura stramonium was introduced in Spain about five centuries ago. Here, we compare damage by herbivores, plant size, and leaf trichomes between plants from non-native and native ranges and perform selection analyses. Non-native plants experienced much less damage, were larger and less pubescent than plants of native populations. While plant size was related to fitness in both ranges, selection to increase resistance was only detected in the native region. 
We suggest this is a consequence of a release from enemies in this new environment.", "which Indicator for enemy release ?", "Damage", 108.0, 114.0], ["Grasslands have been lost and degraded in the United States since Euro-American settlement due to agriculture, development, introduced invasive species, and changes in fire regimes. Fire is frequently used in prairie restoration to control invasion by trees and shrubs, but may have additional consequences. For example, fire might reduce damage by herbivore and pathogen enemies by eliminating litter, which harbors eggs and spores. Less obviously, fire might influence enemy loads differently for native and introduced plant hosts. We used a controlled burn in a Willamette Valley (Oregon) prairie to examine these questions. We expected that, without fire, introduced host plants should have less damage than native host plants because the introduced species are likely to have left many of their enemies behind when they were transported to their new range (the enemy release hypothesis, or ERH). If the ERH holds, then fire, which should temporarily reduce enemies on all species, should give an advantage to the natives because they should see greater total reduction in damage by enemies. Prior to the burn, we censused herbivore and pathogen attack on eight plant species (five of nonnative origin: Bromus hordeaceus, Cynosurus echinatus, Galium divaricatum, Schedonorus arundinaceus (= Festuca arundinacea), and Sherardia arvensis; and three natives: Danthonia californica, Epilobium minutum, and Lomatium nudicaule). The same plots were monitored for two years post-fire. Prior to the burn, native plants had more kinds of damage and more pathogen damage than introduced plants, consistent with the ERH. Fire reduced pathogen damage relative to the controls more for the native than the introduced species, but the effects on herbivory were negligible. Pathogen attack was correlated with plant reproductive fitness, whereas herbivory was not. These results suggest that fire may be useful for promoting some native plants in prairies due to its negative effects on their pathogens.", "which Indicator for enemy release ?", "Damage", 339.0, 345.0], ["During the past centuries, humans have introduced many plant species in areas where they do not naturally occur. Some of these species establish populations and in some cases become invasive, causing economic and ecological damage. Which factors determine the success of non-native plants is still incompletely understood, but the absence of natural enemies in the invaded area (Enemy Release Hypothesis; ERH) is one of the most popular explanations. One of the predictions of the ERH, a reduced herbivore load on non-native plants compared with native ones, has been repeatedly tested. However, many studies have either used a community approach (sampling from native and non-native species in the same community) or a biogeographical approach (sampling from the same plant species in areas where it is native and where it is non-native). Either method can sometimes lead to inconclusive results. To resolve this, we here add to the small number of studies that combine both approaches. We do so in a single study of insect herbivory on 47 woody plant species (trees, shrubs, and vines) in the Netherlands and Japan.
We find higher herbivore diversity, higher herbivore load and more herbivory on native plants than on non-native plants, generating support for the enemy release hypothesis.", "which Indicator for enemy release ?", "Damage", 224.0, 230.0], ["ABSTRACT One explanation for the success of exotic plants in their introduced habitats is that, upon arriving to a new continent, plants escaped their native herbivores or pathogens, resulting in less damage and lower abundance of enemies than closely related native species (enemy release hypothesis). We tested whether the three exotic plant species, Rubus phoenicolasius (wineberry), Fallopia japonica (Japanese knotweed), and Persicaria perfoliata (mile-a-minute weed), suffered less herbivory or pathogen attack than native species by comparing leaf damage and invertebrate herbivore abundance and diversity on the invasive species and their native congeners. Fallopia japonica and R. phoenicolasius received less leaf damage than their native congeners, and F. japonica also contained a lower diversity and abundance of invertebrate herbivores. If the observed decrease in damage experienced by these two plant species contributes to increased fitness, then escape from enemies may provide at least a partial explanation for their invasiveness. However, P. perfoliata actually received greater leaf damage than its native congener. Rhinoncomimus latipes, a weevil previously introduced in the United States as a biological control for P. perfoliata, accounted for the greatest abundance of insects collected from P. perfoliata. Therefore, it is likely that the biocontrol R. latipes was responsible for the greater damage on P. perfoliata, suggesting this insect may be effective at controlling P. perfoliata populations if its growth and reproduction are affected by the increased herbivore damage.", "which Indicator for enemy release ?", "Damage", 201.0, 207.0], ["1 Acer platanoides (Norway maple) is an important non\u2010native invasive canopy tree in North American deciduous forests, where native species diversity and abundance are greatly reduced under its canopy. We conducted a field experiment in North American forests to compare planted seedlings of A. platanoides and Acer saccharum (sugar maple), a widespread, common native that, like A. platanoides, is shade tolerant. Over two growing seasons in three forests we compared multiple components of seedling success: damage from natural enemies, ecophysiology, growth and survival. We reasoned that equal or superior performance by A. platanoides relative to A. saccharum indicates seedling characteristics that support invasiveness, while inferior performance indicates potential barriers to invasion. 2 Acer platanoides seedlings produced more leaves and allocated more biomass to roots, A. saccharum had greater water use efficiency, and the two species exhibited similar photosynthesis and first\u2010season mortality rates. Acer platanoides had greater winter survival and earlier spring leaf emergence, but second\u2010season mortality rates were similar. 3 The success of A. platanoides seedlings was not due to escape from natural enemies, contrary to the enemy release hypothesis. Foliar insect herbivory and disease symptoms were similarly high for both native and non\u2010native, and seedling biomass did not differ. Rather, A. platanoides compared well with A.
saccharum because of its equivalent ability to photosynthesize in the low light herb layer, its higher leaf production and greater allocation to roots, and its lower winter mortality coupled with earlier spring emergence. Its only potential barrier to seedling establishment, relative to A. saccharum, was lower water use efficiency, which possibly could hinder its invasion into drier forests. 4 The spread of non\u2010native canopy trees poses an especially serious problem for native forest communities, because canopy trees strongly influence species in all forest layers. Success at reaching the canopy depends on a tree's ecology in previous life\u2010history stages, particularly as a vulnerable seedling, but little is known about seedling characteristics that promote non\u2010native tree invasion. Experimental field comparison with ecologically successful native trees provides insight into why non\u2010native trees succeed as seedlings, which is a necessary stage on their journey into the forest canopy.", "which Indicator for enemy release ?", "Damage", 510.0, 516.0], ["Abstract Enemy release is a commonly accepted mechanism to explain plant invasions. Both the diploid Leucanthemum vulgare and the morphologically very similar tetraploid Leucanthemum ircutianum have been introduced into North America. To verify which species is more prevalent in North America we sampled 98 Leucanthemum populations and determined their ploidy level. Although polyploidy has repeatedly been proposed to be associated with increased invasiveness in plants, only two of the populations surveyed in North America were the tetraploid L. ircutianum. We tested the enemy release hypothesis by first comparing 20 populations of L. vulgare and 27 populations of L. ircutianum in their native range in Europe, and then comparing the European L. vulgare populations with 31 L. vulgare populations sampled in North America. Characteristics of the site and associated vegetation, plant performance and invertebrate herbivory were recorded. In Europe, plant height and density of the two species were similar but L. vulgare produced more flower heads than L. ircutianum. Leucanthemum vulgare in North America was 17 % taller, produced twice as many flower heads and grew much denser compared to L. vulgare in Europe. Attack rates by root- and leaf-feeding herbivores on L. vulgare in Europe (34 and 75 %) were comparable to those on L. ircutianum (26 and 71 %) but higher than those on L. vulgare in North America (10 and 3 %). However, herbivore load and leaf damage were low in Europe. Cover and height of the co-occurring vegetation were higher in L. vulgare populations in the native than in the introduced range, suggesting that a shift in plant competition may more easily explain the invasion success of L. vulgare than escape from herbivory.", "which Indicator for enemy release ?", "Damage", 1464.0, 1470.0], ["The vegetation of Kings Park, near the centre of Perth, Western Australia, once had an overstorey of Eucalyptus marginata (jarrah) or Eucalyptus gomphocephala (tuart), and many trees still remain in the bushland parts of the Park. Avenues and roadsides have been planted with eastern Australian species, including Eucalyptus cladocalyx (sugar gum) and Eucalyptus botryoides (southern mahogany), both of which have become invasive. The present study examined the effect of a recent burn on the level of herbivory on these native and exotic eucalypts.
Leaf damage, shoot extension and number of new leaves were measured on tagged shoots of saplings of each tree species in unburnt and burnt areas over an 8-month period. Leaf macronutrient levels were quantified and the number of arthropods on saplings was measured at the end of the recording period by chemical knockdown. Leaf macronutrients were mostly higher in all four species in the burnt area, and this was associated with generally higher numbers of canopy arthropods and greater levels of leaf damage. It is suggested that the pulse of soil nutrients after the fire resulted in more nutrient-rich foliage, which in turn was more palatable to arthropods. The resulting high levels of herbivory possibly led to reduced shoot extension of E. gomphocephala, E. botryoides and, to a lesser extent, E. cladocalyx. This acts as a negative feedback mechanism that lessens the tendency for lush, post-fire regrowth to outcompete other species of plants. There was no consistent difference in the levels of the various types of leaf damage or of arthropods on the native and the exotic eucalypts, suggesting that freedom from herbivory is not contributing to the invasiveness of the two exotic species.", "which Indicator for enemy release ?", "Damage", 555.0, 561.0], ["Several hypotheses proposed to explain the success of introduced species focus on altered interspecific interactions. One of the most prominent, the Enemy Release Hypothesis, posits that invading species benefit compared to their native counterparts if they lose their herbivores and pathogens during the invasion process. We previously reported on a common garden experiment (from 2002) in which we compared levels of herbivory between 30 taxonomically paired native and introduced old-field plants. In this phylogenetically controlled comparison, herbivore damage tended to be higher on introduced than on native plants. This striking pattern, the opposite of current theory, prompted us to further investigate herbivory and several other interspecific interactions in a series of linked experiments with the same set of species. Here we show that, in these new experiments, introduced plants, on average, received less insect herbivory and were subject to half the negative soil microbial feedback compared to natives; attack by fungal and viral pathogens also tended to be reduced on introduced plants compared to natives. Although plant traits (foliar C:N, toughness, and water content) suggested that introduced species should be less resistant to generalist consumers, they were not consistently more heavily attacked. Finally, we used meta-analysis to combine data from this study with results from our previous work to show that escape generally was inconsistent among guilds of enemies: there were few instances in which escape from multiple guilds occurred for a taxonomic pair, and more cases in which the patterns of escape from different enemies canceled out. Our examination of multiple interspecific interactions demonstrates that escape from one guild of enemies does not necessarily imply escape from other guilds. Because the effects of each guild are likely to vary through space and time, the net effect of all enemies is also likely to be variable.
The net effect of these interactions may create 'invasion opportunity windows': times when introduced species make advances in native communities.", "which Indicator for enemy release ?", "Damage", 561.0, 567.0], ["To quantify the relative importance of propagule pressure, climate\u2010matching and host availability for the invasion of agricultural pest arthropods in Europe and to forecast newly emerging pest species and European areas with the highest risk of arthropod invasion under current climate and a future climate scenario (A1FI).", "which hypothesis ?", "Propagule pressure", 39.0, 57.0], ["In their colonized ranges, exotic plants may be released from some of the herbivores or pathogens of their home ranges but these can be replaced by novel enemies. It is of basic and practical interest to understand which characteristics of invaded communities control accumulation of the new pests. Key questions are whether enemy load on exotic species is smaller than on native competitors as suggested by the enemy release hypothesis (ERH) and whether this difference is most pronounced in resource\u2010rich habitats as predicted by the resource\u2013enemy release hypothesis (R\u2010ERH). In 72 populations of 12 exotic invasive species, we scored all visible above\u2010ground damage morphotypes caused by herbivores and fungal pathogens. In addition, we quantified levels of leaf herbivory and fruit damage. We then assessed whether variation in damage diversity and levels was explained by habitat fertility, by relatedness between exotic species and the native community or rather by native species diversity. In a second part of the study, we also tested the ERH and the R\u2010ERH by comparing damage of plants in 28 pairs of co\u2010occurring native and exotic populations, representing nine congeneric pairs of native and exotic species. In the first part of the study, diversity of damage morphotypes and damage levels of exotic populations were greater in resource\u2010rich habitats. Co\u2010occurrence of closely related, native species in the community significantly increased the probability of fruit damage. Herbivory on exotics was less likely in communities with high phylogenetic diversity. In the second part of the study, exotic and native congeneric populations incurred similar damage diversity and levels, irrespective of whether they co\u2010occurred in nutrient\u2010poor or nutrient\u2010rich habitats. Synthesis. We identified habitat productivity as a major community factor affecting accumulation of enemy damage by exotic populations. Similar damage levels in exotic and native congeneric populations, even in species pairs from fertile habitats, suggest that the enemy release hypothesis or the R\u2010ERH cannot always explain the invasiveness of introduced species.", "which hypothesis ?", "Enemy release", 412.0, 425.0], ["Anthropogenic disturbance is considered a risk factor in the establishment of non\u2010indigenous species (NIS); however, few studies have investigated the role of anthropogenic disturbance in facilitating the establishment and spread of NIS in marine environments. A baseline survey of native and NIS was undertaken in conjunction with a manipulative experiment to determine the effect that heavy metal pollution had on the diversity and invasibility of marine hard\u2010substrate assemblages. The study was repeated at two sites in each of two harbours in New South Wales, Australia.
The survey sampled a total of 47 sessile invertebrate taxa, of which 15 (32%) were identified as native, 19 (40%) as NIS, and 13 (28%) as cryptogenic. Increasing pollution exposure decreased native species diversity at all study sites by between 33% and 50%. In contrast, there was no significant change in the numbers of NIS. Percentage cover was used as a measure of spatial dominance, with increased pollution exposure leading to increased NIS dominance across all sites. At three of the four study sites, assemblages that had previously been dominated by natives changed to become either extensively dominated by NIS or equally occupied by native and NIS alike. No single native or NIS was repeatedly responsible for the observed changes in native species diversity or NIS dominance at all sites. Rather, the observed effects of pollution were driven by a diverse range of taxa and species. These findings have important implications for both the way we assess pollution impacts, and for the management of NIS. When monitoring the response of assemblages to pollution, it is not sufficient to simply assess changes in community diversity. Rather, it is important to distinguish native from NIS components since both are expected to respond differently. In order to successfully manage current NIS, we first need to address levels of pollution within recipient systems in an effort to bolster the resilience of native communities to invasion.", "which hypothesis ?", "Disturbance", 14.0, 25.0], ["The success of introduced species is frequently explained by their escape from natural enemies in the introduced region. We tested the enemy release hypothesis with respect to two well studied blood parasite genera (Plasmodium and Haemoproteus) in native and six introduced populations of the common myna Acridotheres tristis. Not all comparisons of introduced populations to the native population were consistent with expectations of the enemy release hypothesis. Native populations show greater overall parasite prevalence than introduced populations, but the lower prevalence in introduced populations is driven by low prevalence in two populations on oceanic islands (Fiji and Hawaii). When these are excluded, prevalence does not differ significantly. We found a similar number of parasite lineages in native populations compared to all introduced populations. Although there is some evidence that common mynas may have carried parasite lineages from native to introduced locations, and also that introduced populations may have become infected with novel parasite lineages, it may be difficult to differentiate between parasites that are native and introduced, because malarial parasite lineages often do not show regional or host specificity.", "which hypothesis ?", "Enemy release", 135.0, 148.0], ["Biological invasions are facilitated by the global transportation of species and climate change. Given that invasions may cause ecological and economic damage and pose a major threat to biodiversity, understanding the mechanisms behind invasion success is essential. Both the release of non-native populations from natural enemies, such as parasites, and the genetic diversity of these populations may play key roles in their invasion success. We investigated the roles of parasite communities, through enemy release and parasite acquisition, and genetic diversity in the invasion success of the non-native bumblebee, Bombus hypnorum, in the United Kingdom. The invasive B. 
hypnorum had higher parasite prevalence than most, or all native congeners for two high-impact parasites, probably due to higher susceptibility and parasite acquisition. Consequently parasites had a higher impact on B. hypnorum queens\u2019 survival and colony-founding success than on native species. Bombus hypnorum also had lower functional genetic diversity at the sex-determining locus than native species. Higher parasite prevalence and lower genetic diversity have not prevented the rapid invasion of the United Kingdom by B. hypnorum. These data may inform our understanding of similar invasions by commercial bumblebees around the world. This study suggests that concerns about parasite impacts on the small founding populations common to re-introduction and translocation programs may be less important than currently believed.", "which hypothesis ?", "Enemy release", 503.0, 516.0], ["Studying plant invasions along environmental gradients is a promising approach to dissect the relative importance of multiple interacting factors that affect the spread of a species in a new range. Along altitudinal gradients, factors such as propagule pressure, climatic conditions and biotic interactions change simultaneously across rather small geographic scales. Here we investigate the distribution of eight Asteraceae forbs along mountain roads in both their native and introduced ranges in the Valais (southern Swiss Alps) and the Wallowa Mountains (northeastern Oregon, USA). We hypothesised that a lack of adaptation and more limiting propagule pressure at higher altitudes in the new range restricts the altitudinal distribution of aliens relative to the native range. However, all but one of the species reached the same or even a higher altitude in the new range. Thus neither the need to adapt to changing climatic conditions nor lower propagule pressure at higher altitudes appears to have prevented the altitudinal spread of introduced populations. We found clear differences between regions in the relative occurrence of alien species in ruderal sites compared to roadsides, and in the degree of invasion away from the roadside, presumably reflecting differences in disturbance patterns between regions. Whilst the upper altitudinal limits of these plant invasions are apparently climatically constrained, factors such as anthropogenic disturbance and competition with native vegetation appear to have greater influence than changing climatic conditions on the distribution of these alien species along altitudinal gradients.", "which hypothesis ?", "Disturbance", 1283.0, 1294.0], ["Species that are introduced to novel environments can lose their native pathogens and parasites during the process of introduction. The escape from the negative effects associated with these natural enemies is commonly employed as an explanation for the success and expansion of invasive species, which is termed the enemy release hypothesis (ERH). In this study, nested PCR techniques and microscopy were used to determine the prevalence and intensity (respectively) of Plasmodium spp. and Haemoproteus spp. in introduced house sparrows and native urban birds of central Brazil. Generalized linear mixed models were fitted by Laplace approximation considering a binomial error distribution and logit link function. Location and species were considered as random effects and species categorization (native or non-indigenous) as fixed effects. 
We found that native birds from Brazil presented significantly higher parasite prevalence in accordance with the ERH. We also compared our data with the literature, and found that house sparrows native to Europe exhibited significantly higher parasite prevalence than introduced house sparrows from Brazil, which also supports the ERH. Therefore, it is possible that house sparrows from Brazil might have experienced a parasitic release during the process of introduction, which might also be related to a demographic release (e.g. release from the negative effects of parasites on host population dynamics).", "which hypothesis ?", "Enemy release", 317.0, 330.0], ["The enemy release hypothesis (ERH) is often cited to explain why some plants successfully invade natural communities while others do not. This hypothesis maintains that plant populations are regulated by coevolved enemies in their native range but are relieved of this pressure where their enemies have not been co-introduced. Some studies have shown that invasive plants sustain lower levels of herbivore damage when compared to native species, but how damage affects fitness and population dynamics remains unclear. We used a system of co-occurring native and invasive Eugenia congeners in south Florida (USA) to experimentally test the ERH, addressing deficiencies in our understanding of the role of natural enemies in plant invasion at the population level. Insecticide was used to experimentally exclude insect herbivores from invasive Eugenia uniflora and its native co-occurring congeners in the field for two years. Herbivore damage, plant growth, survival, and population growth rates for the three species were then compared for control and insecticide-treated plants. Our results contradict the ERH, indicating that E. uniflora sustains more herbivore damage than its native congeners and that this damage negatively impacts stem height, survival, and population growth. In addition, most damage to E. uniflora, a native of Brazil, is carried out by Myllocerus undatus, a recently introduced weevil from Sri Lanka, and M. undatus attacks a significantly greater proportion of E. uniflora leaves than those of its native congeners. This interaction is particularly interesting because M. undatus and E. uniflora share no coevolutionary history, having arisen on two separate continents and come into contact on a third. Our study is the first to document negative population-level effects for an invasive plant as a result of the introduction of a novel herbivore. Such inhibitory interactions are likely to become more prevalent as suites of previously noninteracting species continue to accumulate and new communities assemble worldwide.", "which hypothesis ?", "Enemy release", 4.0, 17.0], ["Escape from natural enemies is a widely held generalization for the success of exotic plants. We conducted a large-scale experiment in Hawaii (USA) to quantify impacts of ungulate removal on plant growth and performance, and to test whether elimination of an exotic generalist herbivore facilitated exotic success. Assessment of impacted and control sites before and after ungulate exclusion using airborne imaging spectroscopy and LiDAR, time series satellite observations, and ground-based field studies over nine years indicated that removal of generalist herbivores facilitated exotic success, but the abundance of native species was unchanged. 
Vegetation cover <1 m in height increased in ungulate-free areas from 48.7% +/- 1.5% to 74.3% +/- 1.8% over 8.4 years, corresponding to an annualized growth rate of lambda = 1.05 +/- 0.01 yr(-1) (median +/- SD). Most of the change was attributable to exotic plant species, which increased from 24.4% +/- 1.4% to 49.1% +/- 2.0% (lambda = 1.08 +/- 0.01 yr(-1)). Native plants experienced no significant change in cover (23.0% +/- 1.3% to 24.2% +/- 1.8%, lambda = 1.01 +/- 0.01 yr(-1)). Time series of satellite phenology were indistinguishable between the treatment and a 3.0-km2 control site for four years prior to ungulate removal, but they diverged immediately following exclusion of ungulates. Comparison of monthly EVI means before and after ungulate exclusion and between the managed and control areas indicates that EVI strongly increased in the managed area after ungulate exclusion. Field studies and airborne analyses show that the dominant invader was Senecio madagascariensis, an invasive annual forb that increased from < 0.01% to 14.7% fractional cover in ungulate-free areas (lambda = 1.89 +/- 0.34 yr(-1)), but which was nearly absent from the control site. A combination of canopy LAI, water, and fractional cover were expressed in satellite EVI time series and indicate that the invaded region maintained greenness during drought conditions. These findings demonstrate that enemy release from generalist herbivores can facilitate exotic success and suggest a plausible mechanism by which invasion occurred. They also show how novel remote-sensing technology can be integrated with conservation and management to help address exotic plant invasions.", "which hypothesis ?", "Enemy release", 2041.0, 2054.0], ["1 Assembly rules are broadly defined as any filter imposed on a regional species pool that acts to determine the local community structure and composition. Environmental filtering is thought to result in the formation of groups of species with similar traits that tend to co\u2010occur more often than expected by chance alone, known as Beta guilds. At a smaller scale, within a single Beta guild, species may be partitioned into Alpha guilds \u2013 groups of species that have similar resource use and hence should tend not to co\u2010occur at small scales due to the principle of limiting similarity. 2 This research investigates the effects of successional age and the presence of an invasive exotic species on Alpha and Beta guild structuring within plant communities along two successional river terrace sequences in the Waimakariri braided river system in New Zealand. 3 Fifteen sites were sampled, six with and nine without the Russell lupin (Lupinus polyphyllus), an invasive exotic species. At each site, species presence/absence was recorded in 100 circular quadrats (5 cm in diameter) at 30\u2010cm intervals along a 30\u2010m transect. Guild proportionality (Alpha guild structuring) was tested for using two a priori guild classifications each containing three guilds, and cluster analysis was used to test for environmental structuring between sites. 4 Significant assembly rules based on Alpha guild structuring were found, particularly for the monocot and dicot guild. Guild proportionality increased with increasing ecological age, which indicated an increase in the relative importance of competitive structuring at later stages of succession. This provides empirical support for Weiher and Keddy's theoretical model of community assembly.
5 Lupins were associated with altered Alpha and Beta guild structuring at early mid successional sites. Lupin‐containing sites had higher silt content than sites without lupins, and this could have altered the strength and scale of competitive structuring within the communities present. 6 This research adds to the increasing evidence for the existence of assembly rules based on limiting similarity within plant communities, and demonstrates the need to incorporate gradients of environmental and competitive adversity when investigating the rules that govern community assembly.", "which hypothesis ?", "limiting similarity", 564.0, 583.0], ["BACKGROUND AND AIMS The successful spread of invasive plants in new environments is often linked to multiple introductions and a diverse gene pool that facilitates local adaptation to variable environmental conditions. For clonal plants, however, phenotypic plasticity may be equally important. Here the primary adaptive strategy in three non-native, clonally reproducing macrophytes (Egeria densa, Elodea canadensis and Lagarosiphon major) in New Zealand freshwaters were examined and an attempt was made to link observed differences in plant morphology to local variation in habitat conditions. METHODS Field populations with a large phenotypic variety were sampled in a range of lakes and streams with different chemical and physical properties. The phenotypic plasticity of the species before and after cultivation was studied in a common garden growth experiment, and the genetic diversity of these same populations was also quantified. KEY RESULTS For all three species, greater variation in plant characteristics was found before they were grown in standardized conditions. Moreover, field populations displayed remarkably little genetic variation and there was little interaction between habitat conditions and plant morphological characteristics. CONCLUSIONS The results indicate that at the current stage of spread into New Zealand, the primary adaptive strategy of these three invasive macrophytes is phenotypic plasticity. However, while limited, the possibility that genetic diversity between populations may facilitate ecotypic differentiation in the future cannot be excluded. These results thus indicate that invasive clonal aquatic plants adapt to new introduced areas by phenotypic plasticity. Inorganic carbon, nitrogen and phosphorus were important in controlling plant size of E. canadensis and L. major, but no other relationships between plant characteristics and habitat conditions were apparent. This implies that within-species differences in plant size can be explained by local nutrient conditions. Altogether this strongly suggests that invasive clonal aquatic plants adapt to a wide range of habitats in introduced areas by phenotypic plasticity rather than local adaptation.", "which hypothesis ?", "Phenotypic plasticity", 247.0, 268.0], ["To understand the role of leaf-level plasticity and variability in species invasiveness, foliar characteristics were studied in relation to seasonal average integrated quantum flux density (Qint) in the understorey evergreen species Rhododendron ponticum and Ilex aquifolium at two sites. A native relict population of R. ponticum was sampled in southern Spain (Mediterranean climate), while an invasive alien population was investigated in Belgium (temperate maritime climate). Ilex aquifolium was native at both sites. 
Both species exhibited a significant plastic response to Qint in leaf dry mass per unit area, thickness, photosynthetic potentials, and chlorophyll contents at the two sites. However, R. ponticum exhibited a higher photosynthetic nitrogen use efficiency and larger investment of nitrogen in chlorophyll than I. aquifolium. Since leaf nitrogen (N) contents per unit dry mass were lower in R. ponticum, this species formed a larger foliar area with equal photosynthetic potential and light-harvesting efficiency compared with I. aquifolium. The foliage of R. ponticum was mechanically more resistant with larger density in the Belgian site than in the Spanish site. Mean leaf-level phenotypic plasticity was larger in the Belgian population of R. ponticum than in the Spanish population of this species and the two populations of I. aquifolium. We suggest that large fractional investments of foliar N in photosynthetic function coupled with a relatively large mean, leaf-level phenotypic plasticity may provide the primary explanation for the invasive nature and superior performance of R. ponticum at the Belgian site. With alleviation of water limitations from Mediterranean to temperate maritime climates, the invasiveness of R. ponticum may also be enhanced by the increased foliage mechanical resistance observed in the alien populations.", "which hypothesis ?", "Phenotypic plasticity", 1201.0, 1222.0], ["In the Bonin Islands of the western Pacific where the light environment is characterized by high fluctuations due to frequent typhoon disturbance, we hypothesized that the invasive success of Bischofia javanica Blume (invasive tree, mid-successional) may be attributable to a high acclimation capacity under fluctuating light availability. The physiological and morphological responses of B. javanica to both simulated canopy opening and closure were compared against three native species of different successional status: Trema orientalis Blume (pioneer), Schima mertensiana (Sieb. et Zucc.) Koidz (mid-successional) and Elaeocarpus photiniaefolius Hook. et Arn (late-successional). The results revealed significant species-specific differences in the timing of physiological maturity and phenotypic plasticity in leaves developed under constant high and low light levels. For example, the photosynthetic capacity of T. orientalis reached a maximum in leaves that had just fully expanded when grown under constant high light (50% of full sun) whereas that of E. photiniaefolius leaves continued to increase until 50 d after full expansion. For leaves that had just reached full expansion, T. orientalis, having high photosynthetic plasticity between high and low light, exhibited low acclimation capacity under the changing light (from high to low or low to high light). In comparison with native species, B. javanica showed a higher degree of physiological and morphological acclimation following transfer to a new light condition in leaves of all age classes (i.e. before and after reaching full expansion). The high acclimation ability of B. javanica in response to changes in light availability may be a part of its pre-adaptations for invasiveness in the fluctuating environment of the Bonin Islands.", "which hypothesis ?", "Phenotypic plasticity", 789.0, 810.0], ["The enemy release hypothesis predicts that invasive species will receive less damage from enemies, compared to co-occurring native and noninvasive exotic species in their introduced range. 
However, release operating early in invasion could be lost over time and with increased range size as introduced species acquire new enemies. We used three years of data, from 61 plant species planted into common gardens, to determine whether (1) invasive, noninvasive exotic, and native species experience differential damage from insect herbivores and mammalian browsers, and (2) enemy release is lost with increased residence time and geographic spread in the introduced range. We find no evidence suggesting enemy release is a general mechanism contributing to invasiveness in this region. Invasive species received the most insect herbivory, and damage increased with longer residence times and larger range sizes at three spatial scales. Our results show that invasive and exotic species fail to escape enemies, particularly over longer temporal and larger spatial scales.", "which hypothesis ?", "Enemy release", 4.0, 17.0], ["Invasiveness may result from genetic variation and adaptation or phenotypic plasticity, and genetic variation in fitness traits may be especially critical. Pennisetum setaceum (fountain grass, Poaceae) is highly invasive in Hawaii (HI), moderately invasive in Arizona (AZ), and less invasive in southern California (CA). In common garden experiments, we examined the relative importance of quantitative trait variation, precipitation, and phenotypic plasticity in invasiveness. In two very different environments, plants showed no differences by state of origin (HI, CA, AZ) in aboveground biomass, seeds/flower, and total seed number. Plants from different states were also similar within watering treatment. Plants with supplemental watering, relative to unwatered plants, had greater biomass, specific leaf area (SLA), and total seed number, but did not differ in seeds/flower. Progeny grown from seeds produced under different watering treatments showed no maternal effects in seed mass, germination, biomass or SLA. High phenotypic plasticity, rather than local adaptation is likely responsible for variation in invasiveness. Global change models indicate that temperature and precipitation patterns over the next several decades will change, although the direction of change is uncertain. Drier summers in southern California may retard further invasion, while wetter summers may favor the spread of fountain grass.", "which hypothesis ?", "Phenotypic plasticity", 65.0, 86.0], ["Ecological filters and availability of propagules play key roles structuring natural communities. Propagule pressure has recently been suggested to be a fundamental factor explaining the success or failure of biological introductions. We tested this hypothesis with a remarkable data set on trees introduced to Isla Victoria, Nahuel Huapi National Park, Argentina. More than 130 species of woody plants, many known to be highly invasive elsewhere, were introduced to this island early in the 20th century, as part of an experiment to test their suitability as commercial forestry trees for this region. We obtained detailed data on three estimates of propagule pressure (number of introduced individuals, number of areas where introduced, and number of years during which the species was planted) for 18 exotic woody species. We matched these data with a survey of the species and number of individuals currently invading the island. None of the three estimates of propagule pressure predicted the current pattern of invasion. 
We suggest that other factors, such as biotic resistance, may be operating to determine the observed pattern of invasion, and that propagule pressure may play a relatively minor role in explaining at least some observed patterns of invasion success and failure.", "which hypothesis ?", "Propagule pressure", 98.0, 116.0], ["The role of novel ecological interactions between mammals, fungi and plants in invaded ecosystems remains unresolved, but may play a key role in the widespread successful invasion of pines and their ectomycorrhizal fungal associates, even where mammal faunas originate from different continents to trees and fungi as in New Zealand. We examine the role of novel mammal associations in dispersal of ectomycorrhizal fungal inoculum of North American pines (Pinus contorta, Pseudotsuga menziesii), and native beech trees (Lophozonia menziesii) using faecal analyses, video monitoring and a bioassay experiment. Both European red deer (Cervus elaphus) and Australian brushtail possum (Trichosurus vulpecula) pellets contained spores and DNA from a range of native and non\u2010native ectomycorrhizal fungi. Faecal pellets from both animals resulted in ectomycorrhizal infection of pine seedlings with fungal genera Rhizopogon and Suillus, but not with native fungi or the invasive fungus Amanita muscaria, despite video and DNA evidence of consumption of these fungi. Native L. menziesii seedlings never developed any ectomycorrhizal infection from faecal pellet inoculation. Synthesis. Our results show that introduced mammals from Australia and Europe facilitate the co\u2010invasion of invasive North American trees and Northern Hemisphere fungi in New Zealand, while we find no evidence that introduced mammals benefit native trees or fungi. This novel tripartite \u2018invasional meltdown\u2019, comprising taxa from three kingdoms and three continents, highlights unforeseen consequences of global biotic homogenization.", "which hypothesis ?", "Invasional meltdown", 1455.0, 1474.0], ["We report the impact of an extreme weather event, the October 1987 severe storm, on fragmented woodlands in southern Britain. We analysed ecological changes between 1971 and 2002 in 143 200\u2010m2 plots in 10 woodland sites exposed to the storm with an ecologically equivalent sample of 150 plots in 16 non\u2010exposed sites. Comparing both years, understorey plant species\u2010richness, species composition, soil pH and woody basal area of the tree and shrub canopy were measured. We tested the hypothesis that the storm had deflected sites from the wider national trajectory of an increase in woody basal area and reduced understorey species\u2010richness associated with ageing canopies and declining woodland management. We also expected storm disturbance to amplify the background trend of increasing soil pH, a UK\u2010wide response to reduced atmospheric sulphur deposition. Path analysis was used to quantify indirect effects of storm exposure on understorey species richness via changes in woody basal area and soil pH. By 2002, storm exposure was estimated to have increased mean species richness per 200 m2 by 32%. Woody basal area changes were highly variable and did not significantly differ with storm exposure. Increasing soil pH was associated with a 7% increase in richness. There was no evidence that soil pH increased more as a function of storm exposure. 
Changes in species richness and basal area were negatively correlated: a 3.4% decrease in richness occurred for every 0.1\u2010m2 increase in woody basal area per plot. Despite all sites substantially exceeding the empirical critical load for nitrogen deposition, there was no evidence that in the 15 years since the storm, disturbance had triggered a eutrophication effect associated with dominance of gaps by nitrophilous species. Synthesis. Although the impacts of the 1987 storm were spatially variable in terms of impacts on woody basal area, the storm had a positive effect on understorey species richness. There was no evidence that disturbance had increased dominance of gaps by invasive species. This could change if recovery from acidification results in a soil pH regime associated with greater macronutrient availability.", "which hypothesis ?", "Disturbance", 731.0, 742.0], ["Abstract: The successful invasion of exotic plants is often attributed to the absence of coevolved enemies in the introduced range (i.e., the enemy release hypothesis). Nevertheless, several components of this hypothesis, including the role of generalist herbivores, remain relatively unexplored. We used repeated censuses of exclosures and paired controls to investigate the role of a generalist herbivore, white\u2010tailed deer (Odocoileus virginianus), in the invasion of 3 exotic plant species (Microstegium vimineum, Alliaria petiolata, and Berberis thunbergii) in eastern hemlock (Tsuga canadensis) forests in New Jersey and Pennsylvania (U.S.A.). This work was conducted in 10 eastern hemlock (T. canadensis) forests that spanned gradients in deer density and in the severity of canopy disturbance caused by an introduced insect pest, the hemlock woolly adelgid (Adelges tsugae). We used maximum likelihood estimation and information theoretics to quantify the strength of evidence for alternative models of the influence of deer density and its interaction with the severity of canopy disturbance on exotic plant abundance. Our results were consistent with the enemy release hypothesis in that exotic plants gained a competitive advantage in the presence of generalist herbivores in the introduced range. The abundance of all 3 exotic plants increased significantly more in the control plots than in the paired exclosures. For all species, the inclusion of canopy disturbance parameters resulted in models with substantially greater support than the deer density only models. Our results suggest that white\u2010tailed deer herbivory can accelerate the invasion of exotic plants and that canopy disturbance can interact with herbivory to magnify the impact. In addition, our results provide compelling evidence of nonlinear relationships between deer density and the impact of herbivory on exotic species abundance. These findings highlight the important role of herbivore density in determining impacts on plant abundance and provide evidence of the operation of multiple mechanisms in exotic plant invasion.", "which hypothesis ?", "Enemy release", 142.0, 155.0], ["During the past centuries, humans have introduced many plant species in areas where they do not naturally occur. Some of these species establish populations and in some cases become invasive, causing economic and ecological damage. Which factors determine the success of non-native plants is still incompletely understood, but the absence of natural enemies in the invaded area (Enemy Release Hypothesis; ERH) is one of the most popular explanations. 
One of the predictions of the ERH, a reduced herbivore load on non-native plants compared with native ones, has been repeatedly tested. However, many studies have either used a community approach (sampling from native and non-native species in the same community) or a biogeographical approach (sampling from the same plant species in areas where it is native and where it is non-native). Either method can sometimes lead to inconclusive results. To resolve this, we here add to the small number of studies that combine both approaches. We do so in a single study of insect herbivory on 47 woody plant species (trees, shrubs, and vines) in the Netherlands and Japan. We find higher herbivore diversity, higher herbivore load and more herbivory on native plants than on non-native plants, generating support for the enemy release hypothesis.", "which hypothesis ?", "Enemy release", 379.0, 392.0], ["In recent decades the grass Phragmites australis has been aggressively invading coastal, tidal marshes of North America, and in many areas it is now considered a nuisance species. While P. australis has historically been restricted to the relatively benign upper border of brackish and salt marshes, it has been expanding seaward into more physiologically stressful regions. Here we test a leading hypothesis that the spread of P. australis is due to anthropogenic modification of coastal marshes. We did a field experiment along natural borders between stands of P. australis and the other dominant grasses and rushes (i.e., matrix vegetation) in a brackish marsh in Rhode Island, USA. We applied a pulse disturbance in one year by removing or not removing neighboring matrix vegetation and adding three levels of nutrients (specifically nitrogen) in a factorial design, and then we monitored the aboveground performance of P. australis and the matrix vegetation. Both disturbances increased the density, height, and biomass of shoots of P. australis, and the effects of fertilization were more pronounced where matrix vegetation was removed. Clearing competing matrix vegetation also increased the distance that shoots expanded and their reproductive output, both indicators of the potential for P. australis to spread within and among local marshes. In contrast, the biomass of the matrix vegetation decreased with increasing severity of disturbance. Disturbance increased the total aboveground production of plants in the marsh as matrix vegetation was displaced by P. australis. A greenhouse experiment showed that, with increasing nutrient levels, P. australis allocates proportionally more of its biomass to aboveground structures used for spread than to belowground structures used for nutrient acquisition. Therefore, disturbances that enrich nutrients or remove competitors promote the spread of P. australis by reducing belowground competition for nutrients between P. australis and the matrix vegetation, thus allowing P. australis, the largest plant in the marsh, to expand and displace the matrix vegetation. Reducing nutrient load and maintaining buffers of matrix vegetation along the terrestrial-marsh ecotone will, therefore, be important methods of control for this nuisance species.", "which hypothesis ?", "Disturbance", 710.0, 721.0], ["Theory suggests that introduction effort (propagule size or number) should be a key determinant of establishment success for exotic species. Unfortunately, however, propagule pressure is not recorded for most introductions. 
Studies must therefore either use proxies whose efficacy must be largely assumed, or ignore effort altogether. The results of such studies will be flawed if effort is not distributed at random with respect to other characteristics that are predicted to influence success. We use global data for more than 600 introduction events for birds to show that introduction effort is both the strongest correlate of introduction success, and correlated with a large number of variables previously thought to influence success. Apart from effort, only habitat generalism relates to establishment success in birds.", "which hypothesis ?", "Propagule pressure", 165.0, 183.0], ["The enemy release hypothesis predicts that native herbivores prefer native, rather than exotic plants, giving invaders a competitive advantage. In contrast, the biotic resistance hypothesis states that many invaders are prevented from establishing because of competitive interactions, including herbivory, with native fauna and flora. Success or failure of spread and establishment might also be influenced by the presence or absence of mutualists, such as pollinators. Senecio madagascariensis (fireweed), an annual weed from South Africa, inhabits a similar range in Australia to the related native S. pinnatifolius. The aim of this study was to determine, within the context of invasion biology theory, whether the two Senecio species share insect fauna, including floral visitors and herbivores. Surveys were carried out in south-east Queensland on allopatric populations of the two Senecio species, with collected insects identified to morphospecies. Floral visitor assemblages were variable between populations. However, the two Senecio species shared the two most abundant floral visitors, honeybees and hoverflies. Herbivore assemblages, comprising mainly hemipterans of the families Cicadellidae and Miridae, were variable between sites and no patterns could be detected between Senecio species at the morphospecies level. However, when insect assemblages were pooled (i.e. community level analysis), S. pinnatifolius was shown to host a greater total abundance and richness of herbivores. Senecio madagascariensis is unlikely to be constrained by lack of pollinators in its new range and may benefit from lower levels of herbivory compared to its native congener S. pinnatifolius.", "which hypothesis ?", "Enemy release", 4.0, 17.0], ["Factors that influence the early stages of invasion can be critical to invasion success, yet are seldom studied. In particular, broad pre-adaptation to recipient climate may importantly influence early colonization success, yet few studies have explicitly examined this. I performed an experiment to determine how similarity between seed source and transplant site latitude, as a general indicator of pre-adaptation to climate, interacts with propagule pressure (100, 200 and 400 seeds/pot) to influence early colonization success of the widespread North American weed, St. John's wort Hypericum perforatum. Seeds originating from seven native European source populations were sown in pots buried in the ground in a field in western Montana. Seed source populations were either similar or divergent in latitude to the recipient transplant site. Across seed density treatments, the match between seed source and recipient latitude did not affect the proportion of pots colonized or the number of individual colonists per pot. In contrast, propagule pressure had a significant and positive effect on colonization. 
These results suggest that propagules from many climatically divergent source populations can be viable invaders.", "which hypothesis ?", "Propagule pressure", 443.0, 461.0], ["In multiply invaded ecosystems, introduced species should interact with each other as well as with native species. Invader-invader interactions may affect the success of further invaders by altering attributes of recipient communities and propagule pressure. The invasional meltdown hypothesis (IMH) posits that positive interactions among invaders initiate positive population-level feedback that intensifies impacts and promotes secondary invasions. IMH remains controversial: few studies show feedback between invaders that amplifies their effects, and none yet demonstrate facilitation of entry and spread of secondary invaders. Our results show that supercolonies of an alien ant, promoted by mutualism with introduced honeydew-secreting scale insects, permitted invasion by an exotic land snail on Christmas Island, Indian Ocean. Modeling of land snail spread over 750 sites across 135 km2 over seven years showed that the probability of land snail invasion was facilitated 253-fold in ant supercolonies but impeded in intact forest where predaceous native land crabs remained abundant. Land snail occurrence at neighboring sites, a measure of propagule pressure, also promoted land snail spread. Site comparisons and experiments revealed that ant supercolonies, by killing land crabs but not land snails, disrupted biotic resistance and provided enemy-free space. Predation pressure on land snails was lower (28.6%), survival 115 times longer, and abundance 20-fold greater in supercolonies than in intact forest. Whole-ecosystem suppression of supercolonies reversed the probability of land snail invasion by allowing recolonization of land crabs; land snails were much less likely (0.79%) to invade sites where supercolonies were suppressed than where they remained intact. Our results provide strong empirical evidence for IMH by demonstrating that mutualism between invaders reconfigures key interactions in the recipient community. This facilitates entry of secondary invaders and elevates propagule pressure, propagating their spread at the whole-ecosystem level. We show that identification and management of key facilitative interactions in invaded ecosystems can be used to reverse impacts and restore resistance to further invasions.", "which hypothesis ?", "Invasional meltdown", 263.0, 282.0], ["The enemy release hypothesis, which posits that exotic species are less regulated by enemies than native species, has been well-supported in terrestrial systems but rarely tested in marine systems. Here, the enemy release hypothesis was tested in a marine system by excluding large enemies (>1.3 cm) in dock fouling communities in Washington, USA. After documenting the distribution and abundance of potential enemies such as chitons, gastropods and flatworms at 4 study sites, exclusion experiments were conducted to test the hypotheses that large grazing enemies (1) reduced recruitment rates in the exotic ascidian Botrylloides violaceus and native species, (2) reduced B. violaceus and native species abundance, and (3) altered fouling community structure. Experiments demonstrated that, as predicted by the enemy release hypothesis, exclusion of large enemies did not significantly alter B. violaceus recruitment or abundance and it did significantly increase abundance or recruitment of 2 common native species. 
However, large enemy exclusion had no significant effects on most native species or on overall fouling community structure. Furthermore, neither B. violaceus nor total exotic species abundance correlated positively with abundance of large enemies across sites. I therefore conclude that release from large enemies is likely not an important mechanism for the success of exotic species in Washington fouling communities.", "which hypothesis ?", "Enemy release", 4.0, 17.0], ["Woodlands comprised of planted, nonnative trees are increasing in extent globally, while native woodlands continue to decline due to human activities. The ecological impacts of planted woodlands may include changes to the communities of understory plants and animals found among these nonnative trees relative to native woodlands, as well as invasion of adjacent habitat areas through spread beyond the originally planted areas. Eucalypts (Eucalyptus spp.) are among the most widely planted trees worldwide, and are very common in California, USA. The goals of our investigation were to compare the biological communities of nonnative eucalypt woodlands to native oak woodlands in coastal central California, and to examine whether planted eucalypt groves have increased in size over the past decades. We assessed site and habitat attributes and characterized biological communities using understory plant, ground-dwelling arthropod, amphibian, and bird communities as indicators. Degree of difference between native and nonnative woodlands depended on the indicator used. Eucalypts had significantly greater canopy height and cover, and significantly lower cover by perennial plants and species richness of arthropods than oaks. Community composition of arthropods also differed significantly between eucalypts and oaks. Eucalypts had marginally significantly deeper litter depth, lower abundance of native plants with ranges limited to western North America, and lower abundance of amphibians. In contrast to these differences, eucalypt and oak groves had very similar bird community composition, species richness, and abundance. We found no evidence of "invasional meltdown," documenting similar abundance and richness of nonnatives in eucalypt vs. oak woodlands. Our time-series analysis revealed that planted eucalypt groves increased 271% in size, on average, over six decades, invading adjacent areas. Our results inform science-based management of California woodlands, revealing that while bird communities would probably not be affected by restoration of eucalypt to oak woodlands, such a restoration project would not only stop the spread of eucalypts into adjacent habitats but would also enhance cover by western North American native plants and perennials, enhance amphibian abundance, and increase arthropod richness.", "which hypothesis ?", "Invasional meltdown", 1657.0, 1676.0], ["Phenotypic plasticity has been suggested as the main mechanism for species persistence under a global change scenario, and also as one of the main mechanisms that alien species use to tolerate and invade broad geographic areas. However, contrasting with this central role of phenotypic plasticity, standard models aimed to predict the effect of climatic change on species distributions do not allow for the inclusion of differences in plastic responses among populations. 
In this context, the climatic variability hypothesis (CVH), which states that higher thermal variability at higher latitudes should determine an increase in phenotypic plasticity with latitude, could be considered a timely and promising hypothesis. Accordingly, in this study we evaluated, for the first time in a plant species (Taraxacum officinale), the prediction of the CVH. Specifically, we measured plastic responses at different environmental temperatures (5 and 20°C), in several ecophysiological and fitness-related traits for five populations distributed along a broad latitudinal gradient. Overall, phenotypic plasticity increased with latitude for all six traits analyzed, and mean trait values increased with latitude at both experimental temperatures, the change was noticeably greater at 20° than at 5°C. Our results suggest that the positive relationship found between phenotypic plasticity and geographic latitude could have very deep implications on future species persistence and invasion processes under a scenario of climate change.", "which hypothesis ?", "Phenotypic plasticity", 0.0, 21.0], ["Many ecosystems receive a steady stream of non-native species. How biotic resistance develops over time in these ecosystems will depend on how established invaders contribute to subsequent resistance. If invasion success and defence capacity (i.e. contribution to resistance) are correlated, then community resistance should increase as species accumulate. If successful invaders also cause most impact (through replacing native species with low defence capacity) then the effect will be even stronger. If successful invaders instead have weak defence capacity or even facilitative attributes, then resistance should decrease with time, as proposed by the invasional meltdown hypothesis. We analysed 1157 introductions of freshwater fish in Swedish lakes and found that species' invasion success was positively correlated with their defence capacity and impact, suggesting that these communities will develop stronger resistance over time. These insights can be used to identify scenarios where invading species are expected to cause large impact.", "which hypothesis ?", "Invasional meltdown", 656.0, 675.0], ["We used three congeneric annual thistles, which vary in their ability to invade California (USA) annual grasslands, to test whether invasiveness is related to differences in life history traits. We hypothesized that populations of these summer-flowering Centaurea species must pass through a demographic gauntlet of survival and reproduction in order to persist and that the most invasive species (C. solstitialis) might possess unique life history characteristics. Using the idea of a demographic gauntlet as a conceptual framework, we compared each congener in terms of (1) seed germination and seedling establishment, (2) survival of rosettes subjected to competition from annual grasses, (3) subsequent growth and flowering in adult plants, and (4) variation in breeding system. Grazing and soil disturbance are thought to affect Centaurea establishment, growth, and reproduction, so we also explored differences among congeners in their response to clipping and to different sizes of soil disturbance. We found minimal differences among congeners in either seed germination responses or seedling establishment and survival. In contrast, differential growth responses of congeners to different sizes of canopy gaps led to large differences in adult size and fecundity. 
Canopy-gap size and clipping affected the fecundity of each species, but the most invasive species (C. solstitialis) was unique in its strong positive response to combinations of clipping and canopy gaps. In addition, the phenology of C. solstitialis allows this species to extend its growing season into the summer—a time when competition from winter annual vegetation for soil water is minimal. Surprisingly, C. solstitialis was highly self-incompatible while the less invasive species were highly self-compatible. Our results suggest that the invasiveness of C. solstitialis arises, in part, from its combined ability to persist in competition with annual grasses and its plastic growth and reproductive responses to open, disturbed habitat patches.", "which hypothesis ?", "Disturbance", 800.0, 811.0], ["Background: Weedy non-native species have long been predicted to be more phenotypically plastic than native species. Question: Are weedy non-native species more plastic than natives? Organisms: Fourteen perennial plant species: Acer platanoides, Acer saccharum, Bromus inermis, Bromus latiglumis, Celastrus orbiculatus, Celastrus scandens, Elymus repens, Elymus trachycaulus, Plantago major, Plantago rugelii, Rosa multiflora, Rosa palustris, Solanum dulcamara, and Solanum carolinense. Field site: Mesic old-field in Dryden, NY (42°27′49″N, 76°26′40″W). Methods: We grew seven pairs of native and non-native plant congeners in the field and tested their responses to reduced competition and the addition of fertilizer. We measured the plasticity of six traits related to growth and leaf palatability (total length, leaf dry mass, maximum relative growth rate, leaf toughness, trichome density, and specific leaf area). Conclusions: Weedy non-native species did not differ consistently from natives in their phenotypic plasticity. Instead, relatedness was a better predictor of plasticity.", "which hypothesis ?", "Phenotypic plasticity", 1004.0, 1025.0], ["Although biological invasions are of considerable concern to ecologists, relatively little attention has been paid to the potential for and consequences of indirect interactions between invasive species. Such interactions are generally thought to enhance invasives' spread and impact (i.e., the "invasional meltdown" hypothesis); however, exotic species might also act indirectly to slow the spread or blunt the impact of other invasives. On the east coast of the United States, the invasive hemlock woolly adelgid (Adelges tsugae, HWA) and elongate hemlock scale (Fiorinia externa, EHS) both feed on eastern hemlock (Tsuga canadensis). Of the two insects, HWA is considered far more damaging and disproportionately responsible for hemlock mortality. We describe research assessing the interaction between HWA and EHS, and the consequences of this interaction for eastern hemlock. We conducted an experiment in which uninfested hemlock branches were experimentally infested with herbivores in a 2 x 2 factorial design (either, both, or neither herbivore species). Over the 2.5-year course of the experiment, each herbivore's density was approximately 30% lower in mixed- vs. single-species treatments. Intriguingly, however, interspecific competition weakened rather than enhanced plant damage: growth was lower in the HWA-only treatment than in the HWA + EHS, EHS-only, or control treatments. 
Our results suggest that, for HWA-infested hemlocks, the benefit of co-occurring EHS infestations (reduced HWA density) may outweigh the cost (increased resource depletion).", "which hypothesis ?", "Invasional meltdown", 296.0, 315.0], ["Hanley ME (2012). Seedling defoliation, plant growth and flowering potential in native- and invasive-range Plantago lanceolata populations. Weed Research 52, 252–259. Summary The plastic response of weeds to new environmental conditions, in particular the likely relaxation of herbivore pressure, is considered vital for successful colonisation and spread. However, while variation in plant anti-herbivore resistance between native- and introduced-range populations is well studied, few authors have considered herbivore tolerance, especially at the seedling stage. This study examines variation in seedling tolerance in native (European) and introduced (North American) Plantago lanceolata populations following cotyledon removal at 14 days old. Subsequent effects on plant growth were quantified at 35 days, along with effects on flowering potential at maturity. Cotyledon removal reduced early growth for all populations, with no variation between introduced- or native-range plants. Although more variable, the effects of cotyledon loss on flowering potential were also unrelated to range. The likelihood that generalist seedling herbivores are common throughout North America may explain why no difference in seedling tolerance was apparent. However, increased flowering potential in plants from North American P. lanceolata populations was observed. As increased flowering potential was not lost, even after severe cotyledon damage, the manifestation of phenotypic plasticity in weeds at maturity may nonetheless still be shaped by plasticity in the ability to tolerate herbivory during seedling establishment.", "which hypothesis ?", "Phenotypic plasticity", 1459.0, 1480.0], ["The current rate of invasive species introductions is unprecedented, and the dramatic impacts of exotic invasive plants on community and ecosystem properties have been well documented. Despite the pressing management implications, the mechanisms that control exotic plant invasion remain poorly understood. Several factors, such as disturbance, propagule pressure, species diversity, and herbivory, are widely believed to play a critical role in exotic plant invasions. However, few studies have examined the relative importance of these factors, and little is known about how propagule pressure interacts with various mechanisms of ecological resistance to determine invasion success. We quantified the relative importance of canopy disturbance, propagule pressure, species diversity, and herbivory in determining exotic plant invasion in 10 eastern hemlock forests in Pennsylvania and New Jersey (USA). Use of a maximum-likelihood estimation framework and information theoretics allowed us to quantify the strength of evidence for alternative models of the influence of these factors on changes in exotic plant abundance. In addition, we developed models to determine the importance of interactions between ecosystem properties and propagule pressure. These analyses were conducted for three abundant, aggressive exotic species that represent a range of life histories: Alliaria petiolata, Berberis thunbergii, and Microstegium vimineum. Of the four hypothesized determinants of exotic plant invasion considered in this study, canopy disturbance and propagule pressure appear to be the most important predictors of A. petiolata, B. thunbergii, and M. vimineum invasion. Herbivory was also found to be important in contributing to the invasion of some species. In addition, we found compelling evidence of an important interaction between propagule pressure and canopy disturbance. This is the first study to demonstrate the dominant role of the interaction between canopy disturbance and propagule pressure in determining forest invasibility relative to other potential controlling factors. The importance of the disturbance-propagule supply interaction, and its nonlinear functional form, has profound implications for the management of exotic plant species populations. Improving our ability to predict exotic plant invasions will require enhanced understanding of the interaction between propagule pressure and ecological resistance mechanisms.", "which hypothesis ?", "Propagule pressure", 345.0, 363.0], ["Species become invasive if they (i) are introduced to a new range, (ii) establish themselves, and (iii) spread. To address the global problems caused by invasive species, several studies investigated steps ii and iii of this invasion process. However, only one previous study looked at step i and examined the proportion of species that have been introduced beyond their native range. We extend this research by investigating all three steps for all freshwater fish, mammals, and birds native to Europe or North America. A higher proportion of European species entered North America than vice versa. However, the introduction rate from Europe to North America peaked in the late 19th century, whereas it is still rising in the other direction. There is no clear difference in invasion success between the two directions, so neither the imperialism dogma (that Eurasian species are exceptionally successful invaders) is supported, nor is the contradictory hypothesis that North America offers more biotic resistance to invaders than Europe because of its less disturbed and richer biota. Our results do not support the tens rule either: that approximately 10% of all introduced species establish themselves and that approximately 10% of established species spread. We find a success of approximately 50% at each step. In comparison, only approximately 5% of native vertebrates were introduced in either direction. These figures show that, once a vertebrate is introduced, it has a high potential to become invasive. Thus, it is crucial to minimize the number of species introductions to effectively control invasive vertebrates.", "which hypothesis ?", " Tens rule", 1117.0, 1127.0], ["1 During the last centuries many alien species have established and spread in new regions, where some of them cause large ecological and economic problems. As one of the main explanations of the spread of alien species, the enemy‐release hypothesis is widely accepted and frequently serves as justification for biological control. 2 We used a global fungus–plant host distribution data set for 140 North American plant species naturalized in Europe to test whether alien plants are generally released from foliar and floral pathogens, whether they are mainly released from pathogens that are rare in the native range, and whether geographic spread of the North American plant species in Europe is associated with release from fungal pathogens. 3 We show that the 140 North American plant species naturalized in Europe were released from 58% of their foliar and floral fungal pathogen species. 
However, when we also consider fungal pathogens of the native North American host range that in Europe so far have only been reported on other plant species, the estimated release is reduced to 10.3%. Moreover, in Europe North American plants have mainly escaped their rare pathogens, of which the impact is restricted to few populations. Most importantly and directly opposing the enemy‐release hypothesis, geographic spread of the alien plants in Europe was negatively associated with their release from fungal pathogens. 4 Synthesis. North American plants may have escaped particular fungal species that control them in their native range, but based on total loads of fungal species, release from foliar and floral fungal pathogens does not explain the geographic spread of North American plant species in Europe. To test whether enemy release is the major driver of plant invasiveness, we urgently require more studies comparing release of invasive and non‐invasive alien species from enemies of different guilds, and studies that assess the actual impact of the enemies.", "which hypothesis ?", "Enemy release", 1727.0, 1740.0], ["Numerous studies have shown how interactions between nonindigenous species (NIS) can accelerate the rate at which they establish and spread in invaded habitats, leading to an "invasional meltdown." We investigated facilitation at an earlier stage in the invasion process: during entrainment of propagules in a transport pathway. The introduced bryozoan Watersipora subtorquata is tolerant of several antifouling biocides and a common component of hull-fouling assemblages, a major transport pathway for aquatic NIS. We predicted that colonies of W. subtorquata act as nontoxic refugia for other, less tolerant species to settle on. We compared rates of recruitment of W. subtorquata and other fouling organisms to surfaces coated with three antifouling paints and a nontoxic primer in coastal marinas in Queensland, Australia. Diversity and abundance of fouling taxa were compared between bryozoan colonies and adjacent toxic or nontoxic paint surfaces. After 16 weeks immersion, W. subtorquata covered up to 64% of the tile surfaces coated in antifouling paint. Twenty-two taxa occurred exclusively on W. subtorquata and were not found on toxic surfaces. Other fouling taxa present on toxic surfaces were up to 248 times more abundant on W. subtorquata. Because biocides leach from the paint surface, we expected a positive relationship between the size of W. subtorquata colonies and the abundance and diversity of epibionts. To test this, we compared recruitment of fouling organisms to mimic W. subtorquata colonies of three different sizes that had the same total surface area. Secondary recruitment to mimic colonies was greater when the surrounding paint surface contained biocides. Contrary to our predictions, epibionts were most abundant on small mimic colonies with a large total perimeter. This pattern was observed in encrusting and erect bryozoans, tubiculous amphipods, and serpulid and sabellid polychaetes, but only in the presence of toxic paint. Our results show that W. subtorquata acts as a foundation species for fouling assemblages on ship hulls and facilitates the transport of other species at greater abundance and frequency than would otherwise be possible. 
Invasion success may be increased by positive interactions between NIS that enhance the delivery of propagules by human transport vectors.", "which hypothesis ?", "Invasional meltdown", 178.0, 197.0], ["1 Understanding why some alien plant species become invasive when others fail is a fundamental goal in invasion ecology. We used detailed historical planting records of alien plant species introduced to Amani Botanical Garden, Tanzania and contemporary surveys of their invasion status to assess the relative ability of phylogeny, propagule pressure, residence time, plant traits and other factors to explain the success of alien plant species at different stages of the invasion process. 2 Species with native ranges centred in the tropics and with larger seeds were more likely to regenerate, whereas naturalization success was explained by longer residence time, faster growth rate, fewer seeds per fruit, smaller seed mass and shade tolerance. 3 Naturalized species spreading greater distances from original plantings tended to have more seeds per fruit, whereas species dispersed by canopy‐feeding animals and with native ranges centred on the tropics tended to have spread more widely in the botanical garden. Species dispersed by canopy‐feeding animals and with greater seed mass were more likely to be established in closed forest. 4 Phylogeny alone made a relatively minor contribution to the explanatory power of statistical models, but a greater proportion of variation in spread within the botanical garden and in forest establishment was explained by phylogeny alone than for other models. Phylogeny jointly with variables also explained a greater proportion of variation in forest establishment than in other models. Phylogenetic correction weakened the importance of dispersal syndrome in explaining compartmental spread, seed mass in the forest establishment model, and all factors except for growth rate and residence time in the naturalization model. 5 Synthesis. This study demonstrates that it matters considerably how invasive species are defined when trying to understand the relative ability of multiple variables to explain invasion success. By disentangling different invasion stages and using relatively objective criteria to assess species status, this study highlights that relatively simple models can help to explain why some alien plants are able to naturalize, spread and even establish in closed tropical forests.", "which hypothesis ?", "Propagule pressure", 331.0, 349.0], ["Abstract: Roads are believed to be a major contributing factor to the ongoing spread of exotic plants. We examined the effect of road improvement and environmental variables on exotic and native plant diversity in roadside verges and adjacent semiarid grassland, shrubland, and woodland communities of southern Utah (U.S.A.). We measured the cover of exotic and native species in roadside verges and both the richness and cover of exotic and native species in adjacent interior communities (50 m beyond the edge of the road cut) along 42 roads stratified by level of road improvement (paved, improved surface, graded, and four‐wheel‐drive track). In roadside verges along paved roads, the cover of Bromus tectorum was three times as great (27%) as in verges along four‐wheel‐drive tracks (9%). The cover of five common exotic forb species tended to be lower in verges along four‐wheel‐drive tracks than in verges along more improved roads. 
The richness and cover of exotic species were both more than 50% greater, and the richness of native species was 30% lower, at interior sites adjacent to paved roads than at those adjacent to four‐wheel‐drive tracks. In addition, environmental variables relating to dominant vegetation, disturbance, and topography were significantly correlated with exotic and native species richness and cover. Improved roads can act as conduits for the invasion of adjacent ecosystems by converting natural habitats to those highly vulnerable to invasion. However, variation in dominant vegetation, soil moisture, nutrient levels, soil depth, disturbance, and topography may render interior communities differentially susceptible to invasions originating from roadside verges. Plant communities that are both physically invasible (e.g., characterized by deep or fertile soils) and disturbed appear most vulnerable. Decision‐makers considering whether to build, improve, and maintain roads should take into account the potential spread of exotic plants.", "which hypothesis ?", "Disturbance", 1237.0, 1248.0], ["The expression of defensive morphologies in prey often is correlated with predator abundance or diversity over a range of temporal and spatial scales. These patterns are assumed to reflect natural selection via differential predation on genetically determined, fixed phenotypes. Phenotypic variation, however, also can reflect within-generation developmental responses to environmental cues (phenotypic plasticity). For example, water-borne effluents from predators can induce the production of defensive morphologies in many prey taxa. This phenomenon, however, has been examined only on narrow scales. Here, we demonstrate adaptive phenotypic plasticity in prey from geographically separated populations that were reared in the presence of an introduced predator. Marine snails exposed to predatory crab effluent in the field increased shell thickness rapidly compared with controls. Induced changes were comparable to (i) historical transitions in thickness previously attributed to selection by the invading predator and (ii) present-day clinal variation predicted from water temperature differences. Thus, predator-induced phenotypic plasticity may explain broad-scale geographic and temporal phenotypic variation. If inducible defenses are heritable, then selection on the reaction norm may influence coevolution between predator and prey. Trade-offs may explain why inducible rather than constitutive defenses have evolved in several gastropod species.", "which hypothesis ?", "Phenotypic plasticity", 392.0, 413.0], ["Aim We used alien plant species introduced to a botanic garden to investigate the relative importance of species traits (leaf traits, dispersal syndrome) and introduction characteristics (propagule pressure, residence time and distance to forest) in explaining establishment success in surrounding tropical forest. We also used invasion scores from a weed risk assessment protocol as an independent measure of invasion risk and assessed differences in variables between high‐ and low‐risk species.", "which hypothesis ?", "Propagule pressure", 188.0, 206.0], ["We studied the relative importance of residence time, propagule pressure, and species traits in three stages of invasion of alien woody plants cultivated for about 150 years in the Czech Republic, Central Europe. The probability of escape from cultivation, naturalization, and invasion was assessed using classification trees. 
We compared 109 escaped-not-escaped congeneric pairs, 44 naturalized-not-naturalized, and 17 invasive-not-invasive congeneric pairs. We used the following predictors of the above probabilities: date of introduction to the target region as a measure of residence time; intensity of planting in the target area as a proxy for propagule pressure; the area of origin; and 21 species-specific biological and ecological traits. The misclassification rates of the naturalization and invasion models were low, at 19.3% and 11.8%, respectively, indicating that the variables used included the major determinants of these processes. The probability of escape increased with residence time in the Czech Republic, whereas the probability of naturalization increased with the residence time in Europe. This indicates that some species were already adapted to local conditions when introduced to the Czech Republic. Apart from residence time, the probability of escape depends on planting intensity (propagule pressure), and that of naturalization on the area of origin and fruit size; it is lower for species from Asia and those with small fruits. The probability of invasion is determined by a long residence time and the ability to tolerate low temperatures. These results indicate that a simple suite of factors determines, with a high probability, the invasion success of alien woody plants, and that the relative role of biological traits and other factors is stage dependent. High levels of propagule pressure as a result of planting lead to woody species eventually escaping from cultivation, regardless of biological traits. However, the biological traits play a role in later stages of invasion.", "which hypothesis ?", "Propagule pressure", 54.0, 72.0], ["Propagule pressure is recognized as a fundamental driver of freshwater fish invasions, though few studies have quantified its role. Natural experiments can be used to quantify the role of this factor relative to others in driving establishment success. An irrigation network in South Africa takes water from an inter-basin water transfer (IBWT) scheme to supply multiple small irrigation ponds. We compared fish community composition upstream, within, and downstream of the irrigation network, to show that this system is a unidirectional dispersal network with a single immigration source. We then assessed the effect of propagule pressure and biological adaptation on the colonization success of nine fish species across 30 recipient ponds of varying age. Establishing species received significantly more propagules at the source than did incidental species, while rates of establishment across the ponds displayed a saturation response to propagule pressure. This shows that propagule pressure is a significant driver of establishment overall. Those species that did not establish were either extremely rare at the immigration source or lacked the reproductive adaptations to breed in the ponds. The ability of all nine species to arrive at some of the ponds illustrates how long-term continuous propagule pressure from IBWT infrastructure enables range expansion of fishes. The quantitative link between propagule pressure and success and rate of population establishment confirms the driving role of this factor in fish invasion ecology.", "which hypothesis ?", "Propagule pressure", 0.0, 18.0], ["We used multiscale plots to sample vascular plant diversity and soil characteristics in and adjacent to 26 long-term grazing exclosure sites in Colorado, Wyoming, Montana, and South Dakota, USA. 
The exclosures were 7\u201360 yr old (31.2 \u00b1 2.5 yr, mean \u00b1 1 se). Plots were also randomly placed in the broader landscape in open rangeland in the same vegetation type at each site to assess spatial variation in grazed landscapes. Consistent sampling in the nine National Parks, Wildlife Refuges, and other management units yielded data from 78 1000-m2 plots and 780 1-m2 subplots. We hypothesized that native species richness would be lower in the exclosures than in grazed sites, due to competitive exclusion in the absence of grazing. We also hypothesized that grazed sites would have higher native and exotic species richness compared to ungrazed areas, due to disturbance (i.e., the intermediate-disturbance hypothesis) and the conventional wisdom that grazing may accelerate weed invasion. Both hypotheses were soundly rej...", "which hypothesis ?", "Disturbance", 857.0, 868.0], ["European countries in general, and England in particular, have a long history of introducing non-native fish species, but there exist no detailed studies of the introduction pathways and propagules pressure for any European country. Using the nine regions of England as a preliminary case study, the potential relationship between the occurrence in the wild of non-native freshwater fishes (from a recent audit of non-native species) and the intensity (i.e. propagule pressure) and diversity of fish imports was investigated. The main pathways of introduction were via imports of fishes for ornamental use (e.g. aquaria and garden ponds) and sport fishing, with no reported or suspected cases of ballast water or hull fouling introductions. The recorded occurrence of non-native fishes in the wild was found to be related to the time (number of years) since the decade of introduction. A shift in the establishment rate, however, was observed in the 1970s after which the ratio of established-to-introduced species declined. The number of established non-native fish species observed in the wild was found to increase significantly (P < 0\u00b705) with increasing import intensity (log10x + 1 of the numbers of fish imported for the years 2000\u20132004) and with increasing consignment diversity (log10x + 1 of the numbers of consignment types imported for the years 2000\u20132004). The implications for policy and management are discussed.", "which hypothesis ?", "Propagule pressure", 458.0, 476.0], ["In introduced organisms, dispersal propensity is expected to increase during range expansion. This prediction is based on the assumption that phenotypic plasticity is low compared to genetic diversity, and an increase in dispersal can be counteracted by the Allee effect. Empirical evidence in support of these hypotheses is however lacking. The present study tested for evidence of differentiation in dispersal-related traits and the Allee effect in the wind-dispersed invasive Senecio inaequidens (Asteraceae). We collected capitula from individuals in ten field populations, along an invasion route including the original introduction site in southern France. In addition, we conducted a common garden experiment from field-collected seeds and obtained capitula from individuals representing the same ten field populations. We analysed phenotypic variation in dispersal traits between field and common garden environments as a function of the distance between populations and the introduction site. Our results revealed low levels of phenotypic differentiation among populations. 
However, significant clinal variation in dispersal traits was demonstrated in common garden plants representing the invasion route. In field populations, similar trends in dispersal-related traits and evidence of an Allee effect were not detected. In part, our results supported expectations of increased dispersal capacity with range expansion, and emphasized the contribution of phenotypic plasticity under natural conditions.", "which hypothesis ?", "Phenotypic plasticity", 142.0, 163.0], ["Enemy release of exotic plants from soil pathogens has been tested by examining plant-soil feedback effects in repetitive growth cycles. However, positive soil feedback may also be due to enhanced benefit from the local arbuscular mycorrhizal fungi (AMF). Few studies actually have tested pathogen effects, and none of them did so in arid savannas. In the Kalahari savanna in Botswana, we compared the soil feedback of the exotic grass Cenchrus biflorus with that of two dominant native grasses, Eragrostis lehmanniana and Aristida meridionalis. The exotic grass had neutral to positive soil feedback, whereas both native grasses showed neutral to negative feedback effects. Isolation and testing of root-inhabiting fungi of E. lehmanniana yielded two host-specific pathogens that did not influence the exotic C. biflorus or the other native grass, A. meridionalis. None of the grasses was affected by the fungi that were isolated from the roots of the exotic C. biflorus. We isolated and compared the AMF community of the native and exotic grasses by polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE), targeting AMF 18S rRNA. We used roots from monospecific field stands and from plants grown in pots with mixtures of soils from the monospecific field stands. Three-quarters of the root samples of the exotic grass had two nearly identical sequences, showing 99% similarity with Glomus versiforme. The two native grasses were also associated with distinct bands, but each of these bands occurred in only a fraction of the root samples. The native grasses contained a higher diversity of AMF bands than the exotic grass. Canonical correspondence analyses of the AMF band patterns revealed almost as much difference between the native and exotic grasses as between the native grasses. In conclusion, our results support the hypothesis that release from soil-borne enemies may facilitate local abundance of exotic plants, and we provide the first evidence that these processes may occur in arid savanna ecosystems. Pathogenicity tests implicated the involvement of soil pathogens in the soil feedback responses, and further studies should reveal the functional consequences of the observed high infection with a low diversity of AMF in the roots of exotic plants.", "which hypothesis ?", "Enemy release", 0.0, 13.0], ["Abstract In hardwood subtropical forests of southern Florida, nonnative vines have been hypothesized to be detrimental, as many species form dense \u201cvine blankets\u201d that shroud the forest. To investigate the effects of nonnative vines in post-hurricane regeneration, we set up four large (two pairs of 30 \u00d7 60 m) study areas in each of three study sites. One of each pair was unmanaged and the other was managed by removal of nonnative plants, predominantly vines. Within these areas, we sampled vegetation in 5 \u00d7 5 m plots for stems 2 cm DBH (diameter at breast height) or greater and in 2 \u00d7 0.5 m plots for stems of all sizes. 
For five years, at annual censuses, we tagged and measured stems of vines, trees, shrubs and herbs in these plots. For each 5 \u00d7 5 m plot, we estimated percent coverage by individual vine species, using native and nonnative vines as classes. We investigated the hypotheses that: (1) plot coverage, occurrence and recruitment of nonnative vines were greater than that of native vines in unmanaged plots; (2) the management program was effective at reducing cover by nonnative vines; and (3) reduction of cover by nonnative vines improved recruitment of seedlings and saplings of native trees, shrubs, and herbs. In unmanaged plots, nonnative vines recruited more seedlings and had a significantly higher plot-cover index, but not a higher frequency of occurrence. Management significantly reduced cover by nonnative vines and had a significant overall positive effect on recruitment of seedlings and saplings of native trees, shrubs and herbs. Management also affected the seedling community (which included vines, trees, shrubs, and herbs) in some unanticipated ways, favoring early successional species for a longer period of time. The vine species with the greatest potential to \u201cstrangle\u201d gaps were those that rapidly formed dense cover, had shade tolerant seedling recruitment, and were animal-dispersed. This suite of traits was more common in the nonnative vines than in the native vines. Our results suggest that some vines may alter the spatiotemporal pattern of recruitment sites in a forest ecosystem following a natural disturbance by creating many very shady spots very quickly.", "which hypothesis ?", "Disturbance", 2157.0, 2168.0], ["Abstract: We studied 28 alien tree species currently planted for forestry purposes in the Czech Republic to determine the probability of their escape from cultivation and naturalization. Indicators of propagule pressure (number of administrative units in which a species is planted and total planting area) and time of introduction into cultivation were used as explanatory variables in multiple regression models. Fourteen species escaped from cultivation, and 39% of the variance was explained by the number of planting units and the time of introduction, the latter being more important. Species introduced early had a higher probability of escape than those introduced later, with more than 95% probability of escape for those introduced before 1801 and <5% for those introduced after 1892. Probability of naturalization was more difficult to predict, and eight species were misclassified. A model omitting two species with the largest influence on the model yielded similar predictors of naturalization as did the probability of escape. Both phases of invasion therefore appear to be driven by planting and introduction history in a similar way. Our results demonstrate the importance of forestry for recruitment of invasive trees. Six alien forestry trees, classified as invasive in the Czech Republic, are currently reported in nature reserves. In addition, forestry authorities want to increase the diversity of alien species and planting area in the country.", "which hypothesis ?", "Propagule pressure", 201.0, 219.0], ["Disturbances have the potential to increase the success of biological invasions. Norway maple (Acer platanoides), a common street tree native to Europe, is a foreign invasive with greater tolerance and more efficient resource utilization than the native sugar maple (Acer saccharum). 
This study examined the role disturbances from a road and path played in the invasion of Norway maple and in the distribution of sugar maple. Disturbed areas on the path and nearby undisturbed areas were surveyed for both species along transects running perpendicular to a road. Norway maples were present in greater number closer to the road and on the path, while the number of sugar maples was not significantly associated with either the road or the path. These results suggest that human-caused disturbances have a role in facilitating the establishment of an invasive species.", "which hypothesis ?", "Disturbance", NaN, NaN], ["The ability to succeed in diverse conditions is a key factor allowing introduced species to successfully invade and spread across new areas. Two non-exclusive factors have been suggested to promote this ability: adaptive phenotypic plasticity of individuals, and the evolution of locally adapted populations in the new range. We investigated these individual and population-level factors in Polygonum cespitosum, an Asian annual that has recently become invasive in northeastern North America. We characterized individual fitness, life-history, and functional plasticity in response to two contrasting glasshouse habitat treatments (full sun/dry soil and understory shade/moist soil) in 165 genotypes sampled from nine geographically separate populations representing the range of light and soil moisture conditions the species inhabits in this region. Polygonum cespitosum genotypes from these introduced-range populations expressed broadly similar plasticity patterns. In response to full sun, dry conditions, genotypes from all populations increased photosynthetic rate, water use efficiency, and allocation to root tissues, dramatically increasing reproductive fitness compared to phenotypes expressed in simulated understory shade. Although there were subtle among-population differences in mean trait values as well as in the slope of plastic responses, these population differences did not reflect local adaptation to environmental conditions measured at the population sites of origin. Instead, certain populations expressed higher fitness in both glasshouse habitat treatments. We also compared the introduced-range populations to a single population from the native Asian range, and found that the native population had delayed phenology, limited functional plasticity, and lower fitness in both experimental environments compared with the introduced-range populations. Our results indicate that the future spread of P. cespitosum in its introduced range will likely be fueled by populations consisting of individuals able to express high fitness across diverse light and moisture conditions, rather than by the evolution of locally specialized populations.", "which hypothesis ?", "Phenotypic plasticity", 221.0, 242.0], ["Abstract Extensive areas in the mountain grasslands of central Argentina are heavily invaded by alien species from Europe. A decrease in biodiversity and a loss of palatable species is also observed. The invasibility of the tall-grass mountain grassland community was investigated in an experiment of factorial design. Six alien species which are widely distributed in the region were sown in plots where soil disturbance, above-ground biomass removal by cutting and burning were used as treatments. Alien species did not establish in undisturbed plots. 
All three types of disturbances increased the number and cover of alien species; the effects of soil disturbance and biomass removal were cumulative. Cirsium vulgare and Oenothera erythrosepala were the most efficient alien colonizers. In conditions where disturbances did not continue, the cover of aliens started to decrease in the second year; by the end of the third season, only a few adults were established. Consequently, disturbances are needed to maintain ali...", "which hypothesis ?", "Disturbance", 410.0, 421.0], ["The differences in phenotypic plasticity between invasive (North American) and native (German) provenances of the invasive plant Lythrum salicaria (purple loosestrife) were examined using a multivariate reaction norm approach testing two important attributes of reaction norms described by multivariate vectors of phenotypic change: the magnitude and direction of mean trait differences between environments. Data were collected for six life history traits from native and invasive plants using a split-plot design with experimentally manipulated water and nutrient levels. We found significant differences between native and invasive plants in multivariate phenotypic plasticity for comparisons between low and high water treatments within low nutrient levels, between low and high nutrient levels within high water treatments, and for comparisons that included both a water and nutrient level change. The significant genotype x environment (G x E) effects support the argument that invasiveness of purple loosestrife is closely associated with the interaction of high levels of soil nutrient and flooding water regime. Our results indicate that native and invasive plants take different strategies for growth and reproduction; native plants flowered earlier and allocated more to flower production, while invasive plants exhibited an extended period of vegetative growth before flowering to increase height and allocation to clonal reproduction, which may contribute to increased fitness and invasiveness in subsequent years.", "which Species name ?", "Lythrum salicaria", 129.0, 146.0], ["In introduced organisms, dispersal propensity is expected to increase during range expansion. This prediction is based on the assumption that phenotypic plasticity is low compared to genetic diversity, and an increase in dispersal can be counteracted by the Allee effect. Empirical evidence in support of these hypotheses is however lacking. The present study tested for evidence of differentiation in dispersal-related traits and the Allee effect in the wind-dispersed invasive Senecio inaequidens (Asteraceae). We collected capitula from individuals in ten field populations, along an invasion route including the original introduction site in southern France. In addition, we conducted a common garden experiment from field-collected seeds and obtained capitula from individuals representing the same ten field populations. We analysed phenotypic variation in dispersal traits between field and common garden environments as a function of the distance between populations and the introduction site. Our results revealed low levels of phenotypic differentiation among populations. However, significant clinal variation in dispersal traits was demonstrated in common garden plants representing the invasion route. In field populations, similar trends in dispersal-related traits and evidence of an Allee effect were not detected. 
In part, our results supported expectations of increased dispersal capacity with range expansion, and emphasized the contribution of phenotypic plasticity under natural conditions.", "which Species name ?", "Senecio inaequidens", 479.0, 498.0], ["An unresolved question in ecology concerns why the ecological effects of invasions vary in magnitude. Many introduced species fail to interact strongly with the recipient biota, whereas others profoundly disrupt the ecosystems they invade through predation, competition, and other mechanisms. In the context of ecological impacts, research on biological invasions seldom considers phenotypic or microevolutionary changes that occur following introduction. Here, we show how plasticity in key life history traits (colony size and longevity), together with omnivory, magnifies the predatory impacts of an invasive social wasp (Vespula pensylvanica) on a largely endemic arthropod fauna in Hawaii. Using a combination of molecular, experimental, and behavioral approaches, we demonstrate (i) that yellowjackets consume an astonishing diversity of arthropod resources and depress prey populations in invaded Hawaiian ecosystems and (ii) that their impact as predators in this region increases when they shift from small annual colonies to large perennial colonies. Such trait plasticity may influence invasion success and the degree of disruption that invaded ecosystems experience. Moreover, postintroduction phenotypic changes may help invaders to compensate for reductions in adaptive potential resulting from founder events and small population sizes. The dynamic nature of biological invasions necessitates a more quantitative understanding of how postintroduction changes in invader traits affect invasion processes.", "which Species name ?", "Vespula pensylvanica", 625.0, 645.0], ["Background: Brown trout (Salmo trutta) were introduced into, and subsequently colonized, a number of disparate watersheds on the island of Newfoundland, Canada (110,638 km 2 ), starting in 1883. Questions: Do environmental features of recently invaded habitats shape population-level phenotypic variability? Are patterns of phenotypic variability suggestive of parallel adaptive divergence? And does the extent of phenotypic divergence increase as a function of distance between populations? Hypotheses: Populations that display similar phenotypes will inhabit similar environments. Patterns in morphology, coloration, and growth in an invasive stream-dwelling fish should be consistent with adaptation, and populations closer to each other should be more similar than should populations that are farther apart. Organism and study system: Sixteen brown trout populations of probable common descent, inhabiting a gradient of environments. These populations include the most ancestral (\u223c130 years old) and most recently established (\u223c20 years old). Analytical methods: We used multivariate statistical techniques to quantify morphological (e.g. body shape via geometric morphometrics and linear measurements of traits), meristic (e.g. counts of pigmentation spots), and growth traits from 1677 individuals. To account for ontogenetic and allometric effects on morphology, we conducted separate analyses on three distinct size/age classes. We used the BIO-ENV routine and Mantel tests to measure the correlation between phenotypic and habitat features. Results: Phenotypic similarity was significantly correlated with environmental similarity, especially in the larger size classes of fish. 
The extent to which these associations between phenotype and habitat result from parallel evolution, adaptive phenotypic plasticity, or historical founder effects is not known. Observed patterns of body shape and fin sizes were generally consistent with predictions of adaptive trait patterns, but other traits showed less consistent patterns with habitat features. Phenotypic differences increased as a function of straight-line distance (km) between watersheds and to a lesser extent fish dispersal distances, which suggests habitat has played a more significant role in shaping population phenotypes compared with founder effects.", "which Species name ?", "Salmo trutta", 25.0, 37.0], ["Introduced species must adapt their ecology, behaviour, and morphological traits to new conditions. The successful introduction and invasive potential of a species are related to its levels of phenotypic plasticity and genetic polymorphism. We analysed changes in the body mass and length of American mink (Neovison vison) since its introduction into the Warta Mouth National Park, western Poland, in relation to diet composition and colonization progress from 1996 to 2004. Mink body mass decreased significantly during the period of population establishment within the study area, with an average decrease of 13% from 1.36 to 1.18 kg in males and of 16% from 0.83 to 0.70 kg in females. Diet composition varied seasonally and between consecutive years. The main prey items were mammals and fish in the cold season and birds and fish in the warm season. During the study period the proportion of mammals preyed upon increased in the cold season and decreased in the warm season. The proportion of birds preyed upon decreased over the study period, whereas the proportion of fish increased. Following introduction, the strictly aquatic portion of mink diet (fish and frogs) increased over time, whereas the proportion of large prey (large birds, muskrats, and water voles) decreased. The average yearly proportion of large prey and average-sized prey in the mink diet was significantly correlated with the mean body masses of males and females. Biogeographical variation in the body mass and length of mink was best explained by the percentage of large prey in the mink diet in both sexes, and by latitude for females. Together these results demonstrate that American mink rapidly changed their body mass in relation to local conditions. This phenotypic variability may be underpinned by phenotypic plasticity and/or by adaptation of quantitative genetic variation. The potential to rapidly change phenotypic variation in this manner is an important factor determining the negative ecological impacts of invasive species. \u00a9 2012 The Linnean Society of London, Biological Journal of the Linnean Society, 2012, 105, 681\u2013693.", "which Species name ?", "Neovison vison", 307.0, 321.0], ["Alliaria petiolata is a Eurasian biennial herb that is invasive in North America and for which phenotypic plasticity has been noted as a potentially important invasive trait. Using four European and four North American populations, we explored variation among populations in the response of a suite of antioxidant, antiherbivore, and morphological traits to the availability of water and nutrients and to jasmonic acid treatment. Multivariate analyses revealed substantial variation among populations in mean levels of these traits and in the response of this suite of traits to environmental variation, especially water availability. 
Univariate analyses revealed variation in plasticity among populations in the expression of all of the traits measured to at least one of these environmental factors, with the exception of leaf length. There was no evidence for continentally distinct plasticity patterns, but there was ample evidence for variation in phenotypic plasticity among the populations within continents. This implies that A. petiolata has the potential to evolve distinct phenotypic plasticity patterns within populations but that invasive populations are no more plastic than native populations.", "which Species name ?", "Alliaria petiolata", 0.0, 18.0], ["The mechanisms underlying successful biological invasions often remain unclear. In the case of the tropical water flea Daphnia lumholtzi, which invaded North America, it has been suggested that this species possesses a high thermal tolerance, which in the course of global climate change promotes its establishment and rapid spread. However, D. lumholtzi has an additional remarkable feature: it is the only water flea that forms rigid head spines in response to chemicals released in the presence of fishes. These morphologically (phenotypically) plastic traits serve as an inducible defence against these predators. Here, we show in controlled mesocosm experiments that the native North American species Daphnia pulicaria is competitively superior to D. lumholtzi in the absence of predators. However, in the presence of fish predation the invasive species formed its defences and became dominant. This observation of a predator-mediated switch in dominance suggests that the inducible defence against fish predation may represent a key adaptation for the invasion success of D. lumholtzi.", "which Species name ?", "Daphnia lumholtzi", 119.0, 136.0], ["Invasive species have been hypothesized to out-compete natives though either a Jack-of-all-trades strategy, where they are able to utilize resources effectively in unfavourable environments, a master-of-some, where resource utilization is greater than its competitors in favourable environments, or a combination of the two (Jack-and-master). We examined the invasive strategy of Berberis darwinii in New Zealand compared with four co-occurring native species by examining germination, seedling survival, photosynthetic characteristics and water-use efficiency of adult plants, in sun and shade environments. Berberis darwinii seeds germinated more in shady sites than the other natives, but survival was low. In contrast, while germination of B. darwinii was the same as the native species in sunny sites, seedling survival after 18 months was nearly twice that of the all native species. The maximum photosynthetic rate of B. darwinii was nearly double that of all native species in the sun, but was similar among all species in the shade. Other photosynthetic traits (quantum yield and stomatal conductance) did not generally differ between B. darwinii and the native species, regardless of light environment. Berberis darwinii had more positive values of \u03b413C than the four native species, suggesting that it gains more carbon per unit water transpired than the competing native species. These results suggest that the invasion success of B. 
darwinii may be partially explained by combination of a Jack-of-all-trades scenario of widespread germination with a master-of-some scenario through its ability to photosynthesize at higher rates in the sun and, hence, gain a rapid height and biomass advantage over native species in favourable environments.", "which Species name ?", "Berberis darwinii", 380.0, 397.0], ["Invasiveness may result from genetic variation and adaptation or phenotypic plasticity, and genetic variation in fitness traits may be especially critical. Pennisetum setaceum (fountain grass, Poaceae) is highly invasive in Hawaii (HI), moderately invasive in Arizona (AZ), and less invasive in southern California (CA). In common garden experiments, we examined the relative importance of quantitative trait variation, precipitation, and phenotypic plasticity in invasiveness. In two very different environments, plants showed no differences by state of origin (HI, CA, AZ) in aboveground biomass, seeds/flower, and total seed number. Plants from different states were also similar within watering treatment. Plants with supplemental watering, relative to unwatered plants, had greater biomass, specific leaf area (SLA), and total seed number, but did not differ in seeds/flower. Progeny grown from seeds produced under different watering treatments showed no maternal effects in seed mass, germination, biomass or SLA. High phenotypic plasticity, rather than local adaptation is likely responsible for variation in invasiveness. Global change models indicate that temperature and precipitation patterns over the next several decades will change, although the direction of change is uncertain. Drier summers in southern California may retard further invasion, while wetter summers may favor the spread of fountain grass.", "which Species name ?", "Pennisetum setaceum", 156.0, 175.0], ["Abstract The selection and introduction of drought tolerant species is a common method of restoring degraded grasslands in arid environments. This study investigated the effects of water stress on growth, water relations, Na+ and K+ accumulation, and stomatal development in the native plant species Zygophyllum xanthoxylum (Bunge) Maxim., and an introduced species, Caragana korshinskii Kom., under three watering regimes. Moderate drought significantly reduced pre\u2010dawn water potential, leaf relative water content, total biomass, total leaf area, above\u2010ground biomass, total number of leaves and specific leaf area, but it increased the root/total weight ratio (0.23 versus 0.33) in C. korshinskii. Only severe drought significantly affected water status and growth in Z. xanthoxylum. In any given watering regime, a significantly higher total biomass was observed in Z. xanthoxylum (1.14 g) compared to C. korshinskii (0.19 g). Moderate drought significantly increased Na+ accumulation in all parts of Z. xanthoxylum, e.g., moderate drought increased leaf Na+ concentration from 1.14 to 2.03 g/100 g DW, however, there was no change in Na+ (0.11 versus 0.12) in the leaf of C. korshinskii when subjected to moderate drought. Stomatal density increased as water availability was reduced in both C. korshinskii and Z. xanthoxylum, but there was no difference in stomatal index of either species. Stomatal length and width, and pore width were significantly reduced by moderate water stress in Z. xanthoxylum, but severe drought was required to produce a significant effect in C. korshinskii. These results indicated that C. 
korshinskii is more responsive to water stress and exhibits strong phenotypic plasticity especially in above\u2010ground/below\u2010ground biomass allocation. In contrast, Z. xanthoxylum was more tolerant to water deficit, with a lower specific leaf area and a strong ability to maintain water status through osmotic adjustment and stomatal closure, thereby providing an effective strategy to cope with local extreme arid environments.", "which Species name ?", "Caragana korshinskii", 367.0, 387.0], ["The relative importance of plasticity vs. adaptation for the spread of invasive species has rarely been studied. We examined this question in a clonal population of invasive freshwater snails (Potamopyrgus antipodarum) from the western United States by testing whether observed plasticity in life history traits conferred higher fitness across a range of temperatures. We raised isofemale lines from three populations from different climate regimes (high- and low-elevation rivers and an estuary) in a split-brood, common-garden design in three temperatures. We measured life history and growth traits and calculated population growth rate (as a measure of fitness) using an age-structured projection matrix model. We found a strong effect of temperature on all traits, but no evidence for divergence in the average level of traits among populations. Levels of genetic variation and significant reaction norm divergence for life history traits suggested some role for adaptation. Plasticity varied among traits and was lowest for size and reproductive traits compared to age-related traits and fitness. Plasticity in fitness was intermediate, suggesting that invasive populations are not general-purpose genotypes with respect to the range of temperatures studied. Thus, by considering plasticity in fitness and its component traits, we have shown that trait plasticity alone does not yield the same fitness across a relevant set of temperature conditions.", "which Species name ?", "Potamopyrgus antipodarum", 193.0, 217.0], ["sempervirens L., a non-invasive native. We hypothesized that greater morphological plasticity may contribute to the ability of L. japonica to occupy more habitat types, and contribute to its invasiveness. We compared the morphology of plants provided with climbing supports with plants that had no climbing supports, and thus quantified their morphological plasticity in response to an important variable in their habitats. The two species responded differently to the treatments, with L. japonica showing greater responses in more characters. For example, Lonicera japonica responded to climbing supports with a 15.3% decrease in internode length, a doubling of internode number and a 43% increase in shoot biomass. In contrast, climbing supports did not influence internode length or shoot biomass for L. sempervirens, and only resulted in a 25% increase in internode number. This plasticity may allow L. japonica to actively place plant modules in favorable microhabitats and ultimately affect plant fitness.", "which Species name ?", "Lonicera japonic", NaN, NaN], ["We investigated whether plasticity in growth responses to nutrients could predict invasive potential in aquatic plants by measuring the effects of nutrients on growth of eight non\u2010invasive native and six invasive exotic aquatic plant species. Nutrients were applied at two levels, approximating those found in urbanized and relatively undisturbed catchments, respectively. 
To identify systematic differences between invasive and non\u2010invasive species, we compared the growth responses (total biomass, root:shoot allocation, and photosynthetic surface area) of native species with those of related invasive species after 13 weeks growth. The results were used to seek evidence of invasive potential among four recently naturalized species. There was evidence that invasive species tend to accumulate more biomass than native species (P = 0.0788). Root:shoot allocation did not differ between native and invasive plant species, nor was allocation affected by nutrient addition. However, the photosynthetic surface area of invasive species tended to increase with nutrients, whereas it did not among native species (P = 0.0658). Of the four recently naturalized species, Hydrocleys nymphoides showed the same nutrient\u2010related plasticity in photosynthetic area displayed by known invasive species. Cyperus papyrus showed a strong reduction in photosynthetic area with increased nutrients. H. nymphoides and C. papyrus also accumulated more biomass than their native relatives. H. nymphoides possesses both of the traits we found to be associated with invasiveness, and should thus be regarded as likely to be invasive.", "which Species name ?", "Aquatic plant species", 220.0, 241.0], ["The Natural Language Processing (NLP) community has significantly contributed to the solutions for entity and relation recognition from a natural language text, and possibly linking them to proper matches in Knowledge Graphs (KGs). Considering Wikidata as the background KG, there are still limited tools to link knowledge within the text to Wikidata. In this paper, we present Falcon 2.0, the first joint entity and relation linking tool over Wikidata. It receives a short natural language text in the English language and outputs a ranked list of entities and relations annotated with the proper candidates in Wikidata. The candidates are represented by their Internationalized Resource Identifier (IRI) in Wikidata. Falcon 2.0 resorts to the English language model for the recognition task (e.g., N-Gram tiling and N-Gram splitting), and then an optimization approach for the linking task. We have empirically studied the performance of Falcon 2.0 on Wikidata and concluded that it outperforms all the existing baselines. Falcon 2.0 is open source and can be reused by the community; all the required instructions of Falcon 2.0 are well-documented at our GitHub repository (https://github.com/SDM-TIB/falcon2.0). We also demonstrate an online API, which can be run without any technical expertise. Falcon 2.0 and its background knowledge bases are available as resources at https://labs.tib.eu/falcon/falcon2/.", "which contains ?", "Model", 762.0, 767.0], ["This conceptual paper reviews the current status of goal setting in the area of technology enhanced learning and education. Besides a brief literature review, three current projects on goal setting are discussed. The paper shows that the main barriers for goal setting applications in education are not related to the technology, the available data or analytical methods, but rather the human factor. 
The most important bottlenecks are the lack of students' goal setting skills and abilities, and the current curriculum design, which, especially in the observed higher education institutions, provides little support for goal setting interventions.", "which contains ?", "Methods", 363.0, 370.0], ["Due to significant industrial demands toward software systems with increasing complexity and challenging quality requirements, software architecture design has become an important development activity and the research domain is rapidly evolving. In the last decades, software architecture optimization methods, which aim to automate the search for an optimal architecture design with respect to a (set of) quality attribute(s), have proliferated. However, the reported results are fragmented over different research communities, multiple system domains, and multiple quality attributes. To integrate the existing research results, we have performed a systematic literature review and analyzed the results of 188 research papers from the different research communities. Based on this survey, a taxonomy has been created which is used to classify the existing research. Furthermore, the systematic analysis of the research literature provided in this review aims to help the research community in consolidating the existing research efforts and deriving a research agenda for future developments.", "which contains ?", "Methods", 302.0, 309.0], ["Abstract Motivation Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora. Results We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts. 
Availability and implementation We make the pre-trained weights of BioBERT freely available at https://github.com/naver/biobert-pretrained, and the source code for fine-tuning BioBERT available at https://github.com/dmis-lab/biobert.", "which contains ?", "Results", 482.0, 489.0], ["ABSTRACT Contested heritage has increasingly been studied by scholars over the last two decades in multiple disciplines; however, there is still limited knowledge about what contested heritage is and how it is realized in society. Therefore, the purpose of this paper is to produce a systematic literature review on this topic to provide a holistic understanding of contested heritage, and delineate its current state, trends and gaps. Methodologically, four electronic databases were searched, and 102 journal articles published before 2020 were extracted. A content analysis of each article was then conducted to identify key themes and variables for classification. Findings show that while its research often lacks theoretical underpinnings, contested heritage is marked by its diversity and complexity as it becomes a global issue for both tourism and urbanization. By presenting a holistic understanding of contested heritage, this review offers an extensive investigation of the topic area to help move literature pertaining to contested heritage forward.", "which has subject domain ?", "Contested heritage", 9.0, 27.0], ["Abstract Hackathons, time-bounded events where participants write computer code and build apps, have become a popular means of socializing tech students and workers to produce \u201cinnovation\u201d despite little promise of material reward. Although they offer participants opportunities for learning new skills and face-to-face networking and set up interaction rituals that create an emotional \u201chigh,\u201d potential advantage is even greater for the events\u2019 corporate sponsors, who use them to outsource work, crowdsource innovation, and enhance their reputation. Ethnographic observations and informal interviews at seven hackathons held in New York during the course of a single school year show how the format of the event and sponsors\u2019 discursive tropes, within a dominant cultural frame reflecting the appeal of Silicon Valley, reshape unpaid and precarious work as an extraordinary opportunity, a ritual of ecstatic labor, and a collective imaginary for fictional expectations of innovation that benefits all, a powerful strategy for manufacturing workers\u2019 consent in the \u201cnew\u201d economy.", "which has subject domain ?", "Hackathons", 9.0, 19.0], ["ABSTRACT \u2018Heritage Interpretation\u2019 has always been considered as an effective learning, communication and management tool that increases visitors\u2019 awareness of and empathy to heritage sites or artefacts. Yet the definition of \u2018digital heritage interpretation\u2019 is still wide and so far, no significant method and objective are evident within the domain of \u2018digital heritage\u2019 theory and discourse. Considering \u2018digital heritage interpretation\u2019 as a process rather than as a tool to present or communicate with end-users, this paper presents a critical application of a theoretical construct ascertained from multiple disciplines and explicates four objectives for a comprehensive interpretive process. A conceptual model is proposed and further developed into a conceptual framework with fifteen considerations. 
This framework is then implemented and tested on an online platform to assess its impact on end-users\u2019 interpretation level. We believe the presented interpretive framework (PrEDiC) will help heritage professionals and media designers to develop interpretive heritage projects.", "which has subject domain ?", "Heritage Interpretation", 10.0, 33.0], ["Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19\u2013related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.", "which has subject domain ?", "challenges posed by the COVID-19 pandemic", 418.0, 459.0], ["Abstract. In 2017 we published a seminal research study in the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences about how smart city tools, solutions and applications underpinned historical and cultural heritage of cities at that time (Angelidou et al. 2017). 
We now return to investigate the progress that has been made during the past three years, and specifically whether the weak substantiation of cultural heritage in smart city strategies that we observed in 2017 has been improved. The newest literature suggests that smart cities should capitalize on local strengths and give prominence to local culture and traditions and provides a handful of solutions to this end. However, a more thorough examination of what has been actually implemented reveals a (still) rather immature approach. The smart city cases that were selected for the purposes of this research include Tarragona (Spain), Budapest (Hungary) and Karlsruhe (Germany). For each one we collected information regarding the overarching structure of the initiative, the positioning of cultural heritage and the inclusion of heritage-related smart city applications. We then performed a comparative analysis based on a simplified version of the Digital Strategy Canvas. Our findings suggest that a rich cultural heritage and a broader strategic focus on touristic branding and promotion are key ingredients of smart city development in this domain; this is a commonality of all the investigated cities. Moreover, three different strategy architectures emerge, representing the different interplays among the smart city, cultural heritage and sustainable urban development. We conclude that a new generation of smart city initiatives is emerging, in which cultural heritage is of increasing importance. This generation tends to associate cultural heritage with social and cultural values, liveability and sustainable urban development.", "which has subject domain ?", "cultural heritage", 237.0, 254.0], ["
A nonfullerene electron acceptor (IEIC) based on indaceno[1,2-b:5,6-b\u2032]dithiophene and 2-(3-oxo-2,3-dihydroinden-1-ylidene)malononitrile was designed and synthesized, and fullerene-free polymer solar cells based on the IEIC acceptor showed power conversion efficiencies of up to 6.31%.
", "which Mobility ?", "Electron", 18.0, 26.0], ["A new electron\u2010rich central building block, 5,5,12,12\u2010tetrakis(4\u2010hexylphenyl)\u2010indacenobis\u2010(dithieno[3,2\u2010b:2\u2032,3\u2032\u2010d]pyrrol) (INP), and two derivative nonfullerene acceptors (INPIC and INPIC\u20104F) are designed and synthesized. The two molecules reveal broad (600\u2013900 nm) and strong absorption due to the satisfactory electron\u2010donating ability of INP. Compared with its counterpart INPIC, fluorinated nonfullerene acceptor INPIC\u20104F exhibits a stronger near\u2010infrared absorption with a narrower optical bandgap of 1.39 eV, an improved crystallinity with higher electron mobility, and down\u2010shifted highest occupied molecular orbital and lowest unoccupied molecular orbital energy levels. Organic solar cells (OSCs) based on INPIC\u20104F exhibit a high power conversion efficiency (PCE) of 13.13% and a relatively low energy loss of 0.54 eV, which is among the highest efficiencies reported for binary OSCs in the literature. The results demonstrate the great potential of the new INP as an electron\u2010donating building block for constructing high\u2010performance nonfullerene acceptors for OSCs.", "which Mobility ?", "Electron", 6.0, 14.0], ["Two cheliform non-fullerene acceptors, DTPC-IC and DTPC-DFIC, based on a highly electron-rich core, dithienopicenocarbazole (DTPC), are synthesized, showing ultra-narrow bandgaps (as low as 1.21 eV). The two-dimensional nitrogen-containing conjugated DTPC possesses strong electron-donating capability, which induces intense intramolecular charge transfer and intermolecular \u03c0-\u03c0 stacking in derived acceptors. The solar cell based on DTPC-DFIC and a spectrally complementary polymer donor, PTB7-Th, showed a high power conversion efficiency of 10.21% and an extremely low energy loss of 0.45 eV, which is the lowest among reported efficient OSCs.", "which Mobility ?", "Electron", 80.0, 88.0], ["Low-bandgap polymers/molecules are an interesting family of semiconductor materials, and have enabled many recent exciting breakthroughs in the field of organic electronics, especially for organic photovoltaics (OPVs). Here, such a low-bandgap (1.43 eV) non-fullerene electron acceptor (BT-IC) bearing a fused 7-heterocyclic ring with absorption edge extending to the near-infrared (NIR) region was specially designed and synthesized. Benefitted from its NIR light harvesting, high performance OPVs were fabricated with medium bandgap polymers (J61 and J71) as donors, showing power conversion efficiencies of 9.6% with J61 and 10.5% with J71 along with extremely low energy loss (0.56 eV for J61 and 0.53 eV for J71). Interestingly, femtosecond transient absorption spectroscopy studies on both systems show that efficient charge generation was observed despite the fact that the highest occupied molecular orbital (HOMO)\u2013HOMO offset (\u0394EH) in the blends was as low as 0.10 eV, suggesting that such a small \u0394EH is not a crucial limitation in realizing high performance of NIR non-fullerene based OPVs. Our results indicated that BT-IC is an interesting NIR non-fullerene acceptor with great potential application in tandem/multi-junction, semitransparent, and ternary blend solar cells.", "which Mobility ?", "Electron", 268.0, 276.0], ["The development of multidrug resistance (due to drug efflux by P-glycoproteins) is a major drawback with the use of paclitaxel (PTX) in the treatment of cancer. 
The rationale behind this study is to prepare PTX nanoparticles (NPs) for the reversal of multidrug resistance based on the fact that PTX loaded into NPs is not recognized by P-glycoproteins and hence is not effluxed out of the cell. Also, the intracellular penetration of the NPs could be enhanced by anchoring transferrin (Tf) on the PTX-PLGA-NPs. PTX-loaded PLGA NPs (PTX-PLGA-NPs), Pluronic\u00aeP85-coated PLGA NPs (P85-PTX-PLGA-NPs), and Tf-anchored PLGA NPs (Tf-PTX-PLGA-NPs) were prepared and evaluated for cytotoxicity and intracellular uptake using C6 rat glioma cell line. A significant increase in cytotoxicity was observed in the order of Tf-PTX-PLGA-NPs > P85-PTX-PLGA-NPs > PTX-PLGA-NPs in comparison to drug solution. In vivo biodistribution on male Sprague\u2013Dawley rats bearing C6 glioma (subcutaneous) showed higher tumor PTX concentrations in animals administered with PTX-NPs compared to drug solution.", "which Polymer ?", "Transferrin (Tf)", NaN, NaN], ["AIM Drug targeting to the CNS is challenging due to the presence of blood-brain barrier. We investigated chitosan (Cs) nanoparticles (NPs) as drug transporter system across the blood-brain barrier, based on mAb OX26 modified Cs. MATERIALS & METHODS Cs NPs functionalized with PEG, modified and unmodified with OX26 (Cs-PEG-OX26) were prepared and chemico-physically characterized. These NPs were administered (intraperitoneal) in mice to define their ability to reach the brain. RESULTS Brain uptake of OX26-conjugated NPs is much higher than of unmodified NPs, because: long-circulating abilities (conferred by PEG), interaction between cationic Cs and brain endothelium negative charges and OX26 TfR receptor affinity. CONCLUSION Cs-PEG-OX26 NPs are promising drug delivery system to the CNS.", "which Polymer ?", "Chitosan", 105.0, 113.0], ["Purpose To develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel. Methods Chitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231). Results These studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX. Conclusion These advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.", "which Polymer ?", "Chitosan", 74.0, 82.0], ["Beech lignin was oxidatively cleaved in ionic liquids to give phenols, unsaturated propylaromatics, and aromatic aldehydes. A multiparallel batch reactor system was used to screen different ionic liquids and metal catalysts. 
Mn(NO3)2 in 1-ethyl-3-methylimidazolium trifluoromethanesulfonate [EMIM][CF3SO3] proved to be the most effective reaction system. A larger scale batch reaction with this system in a 300 mL autoclave (11 g lignin starting material) resulted in a maximum conversion of 66.3 % (24 h at 100 \u00b0C, 84 \u00d7 10^5 Pa air). By adjusting the reaction conditions and catalyst loading, the selectivity of the process could be shifted from syringaldehyde as the predominant product to 2,6-dimethoxy-1,4-benzoquinone (DMBQ). Surprisingly, the latter could be isolated as a pure substance in 11.5 wt % overall yield by a simple extraction/crystallization process.", "which Product ?", "Aromatic aldehydes", 104.0, 122.0], ["Gold nanoparticles on a number of supporting materials, including anatase TiO2 (TiO2-A, in 40 nm and 45 \u03bcm), rutile TiO2 (TiO2-R), ZrO2, Al2O3, SiO2, and activated carbon, were evaluated for hydrodeoxygenation of guaiacol in 6.5 MPa initial H2 pressure at 300 \u00b0C. The presence of gold nanoparticles on the supports did not show distinguishable performance compared to that of the supports alone in the conversion level and in the product distribution, except for that on a TiO2-A-40 nm. The lack of marked catalytic activity on supports other than TiO2-A-40 nm suggests that Au nanoparticles are not catalytically active on these supports. Most strikingly, the gold nanoparticles on the least-active TiO2-A-40 nm support stood out as the best catalyst exhibiting high activity with excellent stability and remarkable selectivity to phenolics from guaiacol hydrodeoxygenation. The conversion of guaiacol (\u223c43.1%) over gold on the TiO2-A-40 nm was about 33 times that (1.3%) over the TiO2-A-40 nm alone. The selectivity o...", "which Product ?", "phenol", NaN, NaN], ["Biaryl scaffolds were constructed via Ni-catalyzed aryl C-O activation by avoiding cleavage of the more reactive acyl C-O bond of aryl carboxylates. Now aryl esters, in general, can be successfully employed in cross-coupling reactions for the first time. The substrate scope and synthetic utility of the chemistry were demonstrated by the syntheses of more than 40 biaryls and by constructing complex organic molecules. Water was observed to play an important role in facilitating this transformation.", "which Product ?", "Biaryl", 0.0, 6.0], ["A considerable effort has been made to extract biological and chemical entities, as well as their relationships, from the scientific literature, either manually through traditional literature curation or by using information extraction and text mining technologies. Medicinal chemistry patents contain a wealth of information, for instance to uncover potential biomarkers that might play a role in cancer treatment and prognosis. However, current biomedical annotation databases do not cover such information, partly due to limitations of publicly available biomedical patent mining software. As part of the BioCreative V CHEMDNER patents track, we present the results of the first named entity recognition (NER) assignment carried out to detect mentions of chemical compounds and genes/proteins in running patent text. More specifically, this task aimed to evaluate the performance of automatic name recognition strategies capable of isolating chemical names and gene and gene product mentions from surrounding text within patent titles and abstracts. A total of 22 unique teams submitted results for at least one of the three CHEMDNER subtasks. 
The first subtask, called the CEMP (chemical entity mention in patents) task, focused on the detection of chemical named entity mentions in patents, requesting teams to return the start and end indices corresponding to all the chemical entities found in a given record. A total of 21 teams submitted 93 runs for this subtask. The top-performing team reached an f-measure of 0.89 with a precision of 0.87 and a recall of 0.91. The CPD (chemical passage detection) task required the classification of patent titles and abstracts according to whether they do or do not contain chemical compound mentions. Nine teams returned predictions for this task (40 runs). The top run in terms of Matthews correlation coefficient (MCC) had a score of 0.88 and the highest sensitivity.", "which Evaluation metrics ?", "Recall", 1558.0, 1564.0], ["A considerable effort has been made to extract biological and chemical entities, as well as their relationships, from the scientific literature, either manually through traditional literature curation or by using information extraction and text mining technologies. Medicinal chemistry patents contain a wealth of information, for instance to uncover potential biomarkers that might play a role in cancer treatment and prognosis. However, current biomedical annotation databases do not cover such information, partly due to limitations of publicly available biomedical patent mining software. As part of the BioCreative V CHEMDNER patents track, we present the results of the first named entity recognition (NER) assignment carried out to detect mentions of chemical compounds and genes/proteins in running patent text. More specifically, this task aimed to evaluate the performance of automatic name recognition strategies capable of isolating chemical names and gene and gene product mentions from surrounding text within patent titles and abstracts. A total of 22 unique teams submitted results for at least one of the three CHEMDNER subtasks. The first subtask, called the CEMP (chemical entity mention in patents) task, focused on the detection of chemical named entity mentions in patents, requesting teams to return the start and end indices corresponding to all the chemical entities found in a given record. A total of 21 teams submitted 93 runs for this subtask. The top-performing team reached an f-measure of 0.89 with a precision of 0.87 and a recall of 0.91. The CPD (chemical passage detection) task required the classification of patent titles and abstracts according to whether they do or do not contain chemical compound mentions. Nine teams returned predictions for this task (40 runs). The top run in terms of Matthews correlation coefficient (MCC) had a score of 0.88 and the highest sensitivity.", "which Evaluation metrics ?", "Precision", 1534.0, 1543.0], ["The BioCreative LitCovid track calls for a community effort to tackle automated topic annotation for COVID-19 literature. The number of COVID-19-related articles in the literature is growing by about 10,000 articles per month, significantly challenging curation efforts and downstream interpretation. LitCovid is a literature database of COVID-19-related articles in PubMed, which has accumulated more than 180,000 articles with millions of accesses each month by users worldwide. The rapid literature growth significantly increases the burden of LitCovid curation, especially for topic annotations. Topic annotation in LitCovid assigns one or more (up to eight) labels to articles. 
The annotated topics have been widely used both directly in LitCovid (e.g., accounting for ~20% of total uses) and in downstream studies such as knowledge network generation and citation analysis. It is, therefore, important to develop innovative text mining methods to tackle the challenge. We organized the BioCreative LitCovid track to call for a community effort to tackle automated topic annotation for COVID-19 literature. This article summarizes the BioCreative LitCovid track in terms of data collection and team participation. The dataset is publicly available via https://ftp.ncbi.nlm.nih.gov/pub/lu/LitCovid/biocreative/. It consists of over 30K PubMed articles, one of the largest multilabel classification datasets on biomedical literature. There were 80 submissions in total from 19 teams worldwide. The highest-performing submissions achieved 0.8875, 0.9181, and 0.9394 for macro F1-score, micro F1-score, and instance-based F1-score, respectively. We look forward to further participation in developing biomedical text mining methods in response to the rapid growth of the COVID-19 literature.", "which Evaluation metrics ?", "Macro F1", 1567.0, 1575.0], ["Abstract Background Since the onset of the pandemic, only a few studies have focused on longitudinal immune monitoring in critically ill COVID-19 patients with acute respiratory distress syndrome (ARDS) whereas their hospital stay may last for several weeks. Consequently, the question of whether immune parameters may drive or associate with delayed unfavorable outcome in these critically ill patients remains unsolved. Methods We present a dynamic description of immuno-inflammatory derangements in 64 critically ill COVID-19 patients including plasma IFN\u03b12 levels and IFN-stimulated genes (ISG) score measurements. Results ARDS patients presented with persistently decreased lymphocyte count and mHLA-DR expression and increased cytokine levels. Type-I IFN response was initially induced with elevation of IFN\u03b12 levels and ISG score followed by a rapid decrease over time. Survivors and non-survivors presented with apparent common immune responses over the first 3 weeks after ICU admission mixing gradual return to normal values of cellular markers and progressive decrease of cytokine levels including IFN\u03b12. Only plasma TNF-\u03b1 presented with a slow increase over time and higher values in non-survivors compared with survivors. This was paralleled by an extremely high occurrence of secondary infections in COVID-19 patients with ARDS. Conclusions Occurrence of ARDS in response to SARS-CoV-2 infection appears to be strongly associated with the intensity of immune alterations upon ICU admission of COVID-19 patients. In these critically ill patients, immune profile presents with similarities with the delayed step of immunosuppression described in bacterial sepsis.", "which Study population ?", "Critically ill COVID-19 patients", 115.0, 147.0], ["Summary Objective. To describe the prevalence of dental erosion and associated factors in preschool children in Guangxi and Hubei provinces of China. Methods. Dental examinations were carried out on 1949 children aged 3\u20135 years. Measurement of erosion was confined to primary maxillary incisors. 
The erosion index used was based upon the 1993 UK National Survey of Children\u2019s Dental Health. The children\u2019s general information as well as social background and dietary habits were collected based on a structured questionnaire. Results. A total of 112 children (5.7%) showed erosion on their maxillary incisors. Ninety-five (4.9%) were scored as being confined to enamel and 17 (0.9%) as erosion extending into dentine or pulp. There was a positive association between erosion and social class in terms of parental education. A significantly higher prevalence of erosion was observed in children whose parents had post-secondary education than those whose parents had secondary or lower level of education. There was also a correlation between the presence of dental erosion and intake of fruit drink from a feeding bottle or consumption of fruit drinks at bedtime. Conclusion. Erosion is not a serious problem for dental health in Chinese preschool children. The prevalence of erosion is associated with social and dietary factors in this sample of children. \u00a9 2004 Elsevier Ltd. All rights reserved.", "which Study population ?", "Preschool children", 231.0, 249.0], ["Acidic soft drinks, including sports drinks, have been implicated in dental erosion with limited supporting data in scarce erosion studies worldwide. The purpose of this study was to determine the prevalence of dental erosion in a sample of athletes at a large Midwestern state university in the USA, and to evaluate whether regular consumption of sports drinks was associated with dental erosion. A cross-sectional, observational study was done using a convenience sample of 304 athletes, selected irrespective of sports drinks usage. The Lussi Index was used in a blinded clinical examination to grade the frequency and severity of erosion of all tooth surfaces excluding third molars and incisal surfaces of anterior teeth. A self-administered questionnaire was used to gather details on sports drink usage, lifestyle, health problems, dietary and oral health habits. Intraoral color slides were taken of all teeth with erosion. Sports drinks usage was found in 91.8% of athletes and the total prevalence of erosion was 36.5%. Nonparametric tests and stepwise regression analysis using history variables showed no association between dental erosion and the use of sports drinks, quantity and frequency of consumption, years of usage and nonsport usage of sports drinks. The most significant predictor of erosion was found to be not belonging to the African race (p < 0.0001). The results of this study reveal no relationship between consumption of sports drinks and dental erosion.", "which Study population ?", "Athletes", 241.0, 249.0], ["In this paper, we examine the emerging use of ICT in social phenomena such as natural disasters. Researchers have acknowledged that a community possesses the capacity to manage the challenges in crisis response on its own. However, extant IS studies focus predominantly on IS use from the crisis response agency\u2019s perspective, which undermines communities\u2019 role. By adopting an empowerment perspective, we focus on understanding how social media empowers communities during crisis response. As such, we present a qualitative case study of the 2011 Thailand flooding. 
Using an interpretive approach, we show how social media can empower the community through three dimensions of the empowerment process (structural, psychological, and resource empowerment) to achieve collective participation, shared identification, and collaborative control in the community. We make two contributions: 1) we explore an emerging social consequence of ICT by illustrating the roles of social media in empowering communities when responding to crises, and 2) we address the literature gap in empowerment by elucidating the actualization process of empowerment that social media as a mediating structure enables.", "which Emergency Type ?", "Flood", NaN, NaN], ["Social media for emergency management has emerged as a vital resource for government agencies across the globe. In this study, we explore social media strategies employed by governments to respond to major weather-related events. Using social media monitoring software, we analyze how social media is used in six cities following storms in the winter of 2012. We listen, monitor, and assess online discourse available on the full range of social media outlets (e.g., Twitter, Facebook, blogs). To glean further insight, we conduct a survey and extract themes from citizen comments and government's response. We conclude with recommendations on how practitioners can develop social media strategies that enable citizen participation in emergency management.", "which Emergency Type ?", "weather-related events", 206.0, 228.0], ["Two weeks after the Great Tohoku earthquake followed by the devastating tsunami, we sent open-ended questionnaires to a randomly selected sample of Twitter users and also analysed the tweets sent from the disaster-hit areas. We found that people in directly affected areas tend to tweet about their unsafe and uncertain situation while people in remote areas post messages to let their followers know that they are safe. Our analysis of the open-ended answers has revealed that unreliable retweets (RTs) on Twitter were the biggest problem the users have faced during the disaster. Some of the solutions offered by the respondents included introducing official hash tags, limiting the number of RTs for each hash tag and adding features that allow users to trace information while maintaining anonymity.", "which Emergency Type ?", "Tsunami", 72.0, 79.0], ["We present a novel approach to localizing parts in images of human faces. The approach combines the output of local detectors with a nonparametric set of global models for the part locations based on over 1,000 hand-labeled exemplar images. By assuming that the global models generate the part locations as hidden variables, we derive a Bayesian objective function. This function is optimized using a consensus of models for these hidden variables. The resulting localizer handles a much wider range of expression, pose, lighting, and occlusion than prior ones. We show excellent performance on real-world face datasets such as Labeled Faces in the Wild (LFW) and a new Labeled Face Parts in the Wild (LFPW) and show that our localizer achieves state-of-the-art performance on the less challenging BioID dataset.", "which Variations ?", "occlusion", 535.0, 544.0], ["Human faces captured in real-world conditions present large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food). 
Current face landmark estimation approaches struggle under such conditions since they fail to provide a principled way of handling outliers. We propose a novel method, called Robust Cascaded Pose Regression (RCPR), which reduces exposure to outliers by detecting occlusions explicitly and using robust shape-indexed features. We show that RCPR improves on previous landmark estimation methods on three popular face datasets (LFPW, LFW and HELEN). We further explore RCPR's performance by introducing a novel face dataset focused on occlusion, composed of 1,007 faces presenting a wide range of occlusion patterns. RCPR reduces failure cases by half on all four datasets, at the same time as it detects face occlusions with an 80/40% precision/recall.", "which Variations ?", "expression", 123.0, 133.0], ["Face alignment is a crucial step in face recognition tasks. Especially, using landmark localization for geometric face normalization has been shown to be very effective, clearly improving the recognition results. However, no adequate databases exist that provide a sufficient number of annotated facial landmarks. The databases are either limited to frontal views, provide only a small number of annotated images or have been acquired under controlled conditions. Hence, we introduce a novel database overcoming these limitations: Annotated Facial Landmarks in the Wild (AFLW). AFLW provides a large-scale collection of images gathered from Flickr, exhibiting a large variety in face appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total 25,993 faces in 21,997 real-world images are annotated with up to 21 landmarks per image. Due to the comprehensive set of annotations, AFLW is well suited to train and test algorithms for multi-view face detection, facial landmark localization and face pose estimation. Further, we offer a rich set of tools that ease the integration of other face databases and associated annotations into our joint framework.", "which Variations ?", "expression", 703.0, 713.0], ["We present a novel approach to localizing parts in images of human faces. The approach combines the output of local detectors with a nonparametric set of global models for the part locations based on over 1,000 hand-labeled exemplar images. By assuming that the global models generate the part locations as hidden variables, we derive a Bayesian objective function. This function is optimized using a consensus of models for these hidden variables. The resulting localizer handles a much wider range of expression, pose, lighting, and occlusion than prior ones. We show excellent performance on real-world face datasets such as Labeled Faces in the Wild (LFW) and a new Labeled Face Parts in the Wild (LFPW) and show that our localizer achieves state-of-the-art performance on the less challenging BioID dataset.", "which Variations ?", "pose", 515.0, 519.0], ["Abstract Objective: Surveillance of surgical site infections (SSIs) is important for infection control and is usually performed through retrospective manual chart review. The aim of this study was to develop an algorithm for the surveillance of deep SSIs based on clinical variables to enhance efficiency of surveillance. Design: Retrospective cohort study (2012\u20132015). Setting: A Dutch teaching hospital. Participants: We included all consecutive patients who underwent colorectal surgery excluding those with contaminated wounds at the time of surgery. 
All patients were evaluated for deep SSIs through manual chart review, using the Centers for Disease Control and Prevention (CDC) criteria as the reference standard. Analysis: We used logistic regression modeling to identify predictors that contributed to the estimation of diagnostic probability. Bootstrapping was applied to increase generalizability, followed by assessment of statistical performance and clinical implications. Results: In total, 1,606 patients were included, of whom 129 (8.0%) acquired a deep SSI. The final model included postoperative length of stay, wound class, readmission, reoperation, and 30-day mortality. The model achieved 68.7% specificity and 98.5% sensitivity and an area under the receiver operating characteristic (ROC) curve (AUC) of 0.950 (95% CI, 0.932\u20130.969). Positive and negative predictive values were 21.5% and 99.8%, respectively. Applying the algorithm resulted in a 63.4% reduction in the number of records requiring full manual review (from 1,606 to 590). Conclusions: This 5-parameter model identified 98.5% of patients with a deep SSI. The model can be used to develop semiautomatic surveillance of deep SSIs after colorectal surgery, which may further improve efficiency and quality of SSI surveillance.", "which Infection ?", "Surgical Site Infection", NaN, NaN], ["Objective. A major challenge in treating Clostridium difficile infection (CDI) is relapse. Many new therapies are being developed to help prevent this outcome. We sought to establish risk factors for relapse and determine whether fields available in an electronic health record (EHR) could be used to identify high-risk patients for targeted relapse prevention strategies. Design. Retrospective cohort study. Setting. Large clinical data warehouse at a 4-hospital healthcare organization. Participants. Data were gathered from January 2006 through October 2010. Subjects were all inpatient episodes of a positive C. difficile test where patients were available for 56 days of follow-up. Methods. Relapse was defined as another positive test between 15 and 56 days after the initial test. Multivariable regression was performed to identify factors independently associated with CDI relapse. Results. Eight hundred twenty-nine episodes met eligibility criteria, and 198 resulted in relapse (23.9%). In the final multivariable analysis, risk of relapse was associated with age (odds ratio [OR], 1.02 per year [95% confidence interval (CI), 1.01\u20131.03]), fluoroquinolone exposure in the 90 days before diagnosis (OR, 1.58 [95% CI, 1.11\u20132.26]), intensive care unit stay in the 30 days before diagnosis (OR, 0.47 [95% CI, 0.30\u20130.75]), cephalosporin (OR, 1.80 [95% CI, 1.19\u20132.71]), proton pump inhibitor (PPI; OR, 1.55 [95% CI, 1.05\u20132.29]), and metronidazole exposure after diagnosis (OR, 2.74 [95% CI, 1.64\u20134.60]). A prediction model tuned to ensure a 50% probability of relapse would flag 14.6% of CDI episodes. Conclusions. Data from a comprehensive EHR can be used to identify patients at high risk for CDI relapse. Major risk factors include antibiotic and PPI exposure.", "which Infection ?", "Clostridium difficile infection", 41.0, 72.0], ["As part of a data mining competition, a training and test set of laboratory test data about patients with and without surgical site infection (SSI) were provided. The task was to develop predictive models with the training set and identify patients with SSI in the unlabeled test set. 
Lab test results are vital resources that help healthcare providers make decisions about all aspects of surgical patient management. Many machine learning models were developed after pre-processing and imputing the lab test data, and only the top-performing methods are discussed. Overall, RANDOM FOREST algorithms performed better than Support Vector Machine and Logistic Regression. Using a set of 74 lab tests with RF, there were only 4 false positives in the training set, and the model predicted 35 out of 50 SSI patients in the test set (Accuracy 0.86, Sensitivity 0.68, and Specificity 0.91). Optimal ways to address healthcare data quality concerns and imputation methods as well as newer generalizable algorithms need to be explored further to decipher new associations and knowledge among laboratory biomarkers and SSI.", "which Infection ?", "Surgical Site Infection", 126.0, 149.0], ["Sepsis, a dysregulated host response to infection, is a major health burden in terms of both mortality and cost. The difficulties clinicians face in diagnosing sepsis, alongside the insufficiencies of diagnostic biomarkers, motivate the present study. This work develops a machine-learning-based sepsis diagnostic for a high-risk patient group, using a geographically and institutionally diverse collection of nearly 500,000 patient health records. Using only a minimal set of clinical variables, our diagnostics outperform common severity scoring systems and sepsis biomarkers and benefit from being available immediately upon ordering.", "which Infection ?", "Sepsis", 0.0, 6.0], ["Background and Aims Prediction of severe clinical outcomes in Clostridium difficile infection (CDI) is important to inform management decisions for optimum patient care. Currently, treatment recommendations for CDI vary based on disease severity but validated methods to predict severe disease are lacking. The aim of the study was to derive and validate a clinical prediction tool for severe outcomes in CDI. Methods A cohort totaling 638 patients with CDI was prospectively studied at three tertiary care clinical sites (Boston, Dublin and Houston). The clinical prediction rule (CPR) was developed by multivariate logistic regression analysis using the Boston cohort and the performance of this model was then evaluated in the combined Houston and Dublin cohorts. Results The CPR included the following three binary variables: age \u2265 65 years, peak serum creatinine \u22652 mg/dL and peak peripheral blood leukocyte count of \u226520,000 cells/\u03bcL. The Clostridium difficile severity score (CDSS) correctly classified 76.5% (95% CI: 70.87-81.31) and 72.5% (95% CI: 67.52-76.91) of patients in the derivation and validation cohorts, respectively. In the validation cohort, CDSS scores of 0, 1, 2 or 3 were associated with severe clinical outcomes of CDI in 4.7%, 13.8%, 33.3% and 40.0% of cases respectively. Conclusions We prospectively derived and validated a clinical prediction rule for severe CDI that is simple, reliable and accurate and can be used to identify high-risk patients most likely to benefit from measures to prevent complications of CDI.", "which Infection ?", "Clostridium difficile infection", 62.0, 93.0], ["Intensive Care Unit (ICU) patients have significant morbidity and mortality, often from complications that arise during the hospital stay. Severe sepsis is one of the leading causes of death among these patients. Predictive models have the potential to allow for earlier detection of severe sepsis and ultimately earlier intervention. 
However, current methods for identifying and predicting severe sepsis are biased and inadequate. The goal of this work is to identify a new framework for the prediction of severe sepsis and identify early predictors utilizing clinical laboratory values and vital signs collected in adult ICU patients. We explore models with logistic regression (LR), support vector machines (SVM), and logistic model trees (LMT) utilizing vital signs, laboratory values, or a combination of vital and laboratory values. When applied to a retrospective cohort of ICU patients, the SVM model using laboratory and vital signs as predictors correctly identified 339 (65%) of the 3,446 patients as developing severe sepsis. Based on this new framework and developed models, we provide a recommendation for its use in clinical decision support in ICU and non-ICU environments.", "which Infection ?", "Sepsis", 146.0, 152.0], ["In recent times, social media has been increasingly playing a critical role in response actions following natural catastrophes. From facilitating the recruitment of volunteers during an earthquake to supporting emotional recovery after a hurricane, social media has demonstrated its power in serving as an effective disaster response platform. Based on a case study of Thailand flooding in 2011 \u2013 one of the worst flooding disasters in more than 50 years that left the country severely impaired \u2013 this paper provides an in\u2010depth understanding of the emergent roles of social media in disaster response. Employing the perspective of boundary object, we shed light on how different boundary spanning competences of social media emerged in practice to facilitate cross\u2010boundary response actions during a disaster, with an aim to promote further research in this area. We conclude this paper with guidelines for response agencies and impacted communities to deploy social media for future disaster response.", "which Emergency Management Phase ?", "response", 79.0, 87.0], ["Social media for emergency management has emerged as a vital resource for government agencies across the globe. In this study, we explore social media strategies employed by governments to respond to major weather-related events. Using social media monitoring software, we analyze how social media is used in six cities following storms in the winter of 2012. We listen, monitor, and assess online discourse available on the full range of social media outlets (e.g., Twitter, Facebook, blogs). To glean further insight, we conduct a survey and extract themes from citizen comments and government's response. We conclude with recommendations on how practitioners can develop social media strategies that enable citizen participation in emergency management.", "which Emergency Management Phase ?", "response", 598.0, 606.0], ["ABSTRACT This paper systematically develops a set of general and supporting design principles and specifications for a \"Dynamic Emergency Response Management Information System\" (DERMIS) by identifying design premises resulting from the use of the \"Emergency Management Information System and Reference Index\" (EMISARI) and design concepts resulting from a comprehensive literature review. Implicit in crises of varying scopes and proportions are communication and information needs that can be addressed by today's information and communication technologies. 
However, what is required is organizing the premises and concepts that can be mapped into a set of generic design principles, in turn providing a framework for the sensible development of flexible and dynamic Emergency Response Information Systems. A framework is presented for the system design and development that addresses the communication and information needs of first responders as well as the decision making needs of command and control personnel. The framework also incorporates thinking about the value of insights and information from communities of geographically dispersed experts and suggests how that expertise can be brought to bear on crisis decision making. Historic experience is used to suggest nine design premises. These premises are complemented by a series of five design concepts based upon the review of pertinent and applicable research. The result is a set of eight general design principles and three supporting design considerations that are recommended to be woven into the detailed specifications of a DERMIS. The resulting DERMIS design model graphically indicates the heuristic taken by this paper and suggests that the result will be an emergency response system flexible, robust, and dynamic enough to support the communication and information needs of emergency and crisis personnel on all levels. In addition, it permits the development of dynamic emergency response information systems with tailored flexibility to support and be integrated across different sizes and types of organizations. This paper provides guidelines for system analysts and designers, system engineers, first responders, communities of experts, emergency command and control personnel, and MIS/IT researchers.", "which Emergency Management Phase ?", "Response", 138.0, 146.0], ["Objective. 
Achieving accurate prediction of the sepsis detection moment based on bedside monitor data in the intensive care unit (ICU). A good clinical outcome is more probable when onset is suspected and treated on time; thus, early insight into sepsis onset may save lives and reduce costs. Methodology. We present a novel approach for feature extraction, which focuses on the hypothesis that unstable patients are more prone to develop sepsis during ICU stay. These features are used in machine learning algorithms to provide a prediction of a patient\u2019s likelihood to develop sepsis during ICU stay, hours before it is diagnosed. Results. Five machine learning algorithms were implemented using R software packages. The algorithms were trained and tested with a set of 4 features which represent the variability in vital signs. These algorithms aimed to calculate a patient\u2019s probability of becoming septic within the next 4 hours, based on recordings from the last 8 hours. The best area under the curve (AUC) was achieved with Support Vector Machine (SVM) with radial basis function, which was 88.38%. Conclusions. The high level of predictive accuracy along with the simplicity and availability of input variables present great potential if applied in ICUs. Variability of a patient\u2019s vital signs proves to be a good indicator of one\u2019s chance to become septic during ICU stay.", "which Objective ?", "Sepsis", 79.0, 85.0], ["A new glucose-insulin model is introduced which fits with the clinical data from in- and outpatients for two days. Its stability property is consistent with the glycemia behavior for type 1 diabetes. This is in contrast to traditional glucose-insulin models. Prior models fit with clinical data for a few hours only or display some nonnatural equilibria. The parameters of this new model are identifiable from standard clinical data such as continuous glucose monitoring, insulin injection, and carbohydrate estimate. Moreover, it is shown that the parameters from the model allow the computation of the standard tools used in functional insulin therapy such as the basal rate of insulin and the insulin sensitivity factor. This is a major outcome as they are required in therapeutic education of type 1 diabetic patients.", "which Objective ?", "For two days", 101.0, 113.0], ["Early pathogen exposure detection allows better patient care and faster implementation of public health measures (patient isolation, contact tracing). Existing exposure detection most frequently relies on overt clinical symptoms, namely fever, during the infectious prodromal period. We have developed a robust machine learning based method to better detect asymptomatic states during the incubation period using subtle, sub-clinical physiological markers. Starting with high-resolution physiological waveform data from non-human primate studies of viral (Ebola, Marburg, Lassa, and Nipah viruses) and bacterial (Y. pestis) exposure, we processed the data to reduce short-term variability and normalize diurnal variations, then provided these to a supervised random forest classification algorithm and a post-classifier declaration logic step to reduce false alarms. In most subjects detection is achieved well before the onset of fever; subject cross-validation across exposure studies (varying viruses, exposure routes, animal species, and target dose) leads to 51h mean early detection (at 0.93 area under the receiver-operating characteristic curve [AUCROC]). Evaluating the algorithm against entirely independent datasets for Lassa, Nipah, and Y. 
pestis exposures unused in algorithm training and development yields a mean 51h early warning time (at AUCROC=0.95). We discuss which physiological indicators are most informative for early detection and options for extending this capability to limited datasets such as those available from wearable, non-invasive, ECG-based sensors.", "which Objective ?", "Early warning", 1330.0, 1343.0], ["Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.", "which Process ?", "manuscript peer review", 1505.0, 1527.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.", "which Process ?", "Network-based Co-operative Planning Processes", 753.0, 798.0], ["The planning process of a building is very complex. 
Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.", "which Process ?", "Co-operative Building Planning", 492.0, 522.0], ["The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.", "which Process ?", "net de novo protein synthesis", 1123.0, 1152.0], ["Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. 
Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.", "which Process ?", "streamlined import", 624.0, 642.0], ["Purpose \u2013 The purpose of this paper is to investigate what employers seek when recruiting library and information professionals in the UK and whether professional skills, generic skills or personal qualities are most in demand. Design/methodology/approach \u2013 A content analysis of a sample of 180 advertisements requiring a professional library or information qualification from the Chartered Institute of Library and Information Professionals' Library + Information Gazette over the period May 2006\u20102007. Findings \u2013 The findings reveal that a multitude of skills and qualities are required in the profession. When the results were compared with Information National Training Organisation and Library and Information Management Employability Skills research, customer service, interpersonal and communication skills, and general computing skills emerged as the requirements most frequently sought by employers. Overall, requirements from the generic skills area were most important to employers, but the research also demonstra...", "which Process ?", "recruiting library and information professionals", 79.0, 127.0], ["With the rapid growth of online social media content, and the impact these have made on people\u2019s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. 
We refer to this task as \u201cquantification.\u201d By the term \u201cquantification,\u201d we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. To this end, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores, which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.", "which Process ?", "rapid growth", 9.0, 21.0], ["Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.", "which Process ?", "open data publishing", 74.0, 94.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. 
Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.", "which Process ?", "Co-operative Building Planning", 492.0, 522.0], ["Aligning two representations of the same domain with different expressiveness is a crucial topic in today's semantic web and big data research. OWL ontologies and Entity Relation Diagrams are the most widespread representations whose alignment allows for semantic data access via ontology interface, and ontology storing techniques. The term \"alignment\" encompasses three different processes: OWL-to-ERD and ERD-to-OWL transformation, and OWL-ERD mapping. In this paper an innovative statistical tool is presented to accomplish all three aspects of the alignment. The main idea relies on the use of an HMM to estimate the most likely ERD sentence that is stated in a suitable grammar, and corresponds to the observed OWL axiom. The system and its theoretical background are presented, and some experiments are reported.", "which Learning method ?", "Mapping", 449.0, 456.0], ["One of the main challenges in content-based or semantic image retrieval is still to bridge the gap between low-level features and semantic information. In this paper, an approach is presented that uses integrated multi-level image features in ontology fusion construction via a fusion framework based on latent semantic analysis. The proposed method promotes image ontology fusion efficiently and broadens the application fields of image ontology retrieval systems. The relevant experiment shows that this method ameliorates problems in traditional ontology system construction, such as too many redundant data and relations, as well as improves the performance of semantic image retrieval.", "which Learning method ?", "Ontology fusion", 239.0, 254.0], ["Learning ontologies requires the acquisition of relevant domain concepts and taxonomic, as well as non-taxonomic, relations. In this chapter, we present a methodology for automatic ontology enrichment and document annotation with concepts and relations of an existing domain core ontology. Natural language definitions from available glossaries in a given domain are processed and regular expressions are applied to identify general-purpose and domain-specific relations. We evaluate the methodology performance in extracting hypernymy and non-taxonomic relations. To this end, we annotated and formalized a relevant fragment of the glossary of Art and Architecture (AAT) with a set of 10 relations (plus the hypernymy relation) defined in the CRM CIDOC cultural heritage core ontology, a recent W3C standard. Finally, we assessed the generality of the approach on a set of web pages from the domains of history and biography.", "which Learning method ?", "Regular expression", NaN, NaN], ["With the increase in smart devices and abundance of video content, efficient techniques for the indexing, analysis and retrieval of videos are becoming more and more desirable. 
Improved indexing and automated analysis of millions of videos could be accomplished by getting videos tagged automatically. A lot of existing methods fail to precisely tag videos because of their inability to capture the video context. The context in a video represents the interactions of objects in a scene and their overall meaning. In this work, we propose a novel approach that integrates the video scene ontology with CNN (Convolutional Neural Network) for improved video tagging. Our method captures the content of a video by extracting the information from individual key frames. The key frames are then fed to a CNN-based deep learning model to train its parameters. The trained parameters are used to generate the most frequent tags. Highly frequent tags are used to summarize the input video. The proposed technique is benchmarked on the most widely used dataset of video activities, namely, UCF-101. Our method managed to achieve an overall accuracy of 99.8% with an F1-score of 96.2%.", "which Learning method ?", "Convolutional Neural Network", 614.0, 642.0], ["This paper describes CrowdREquire, a platform that supports requirements engineering using the crowdsourcing concept. The power of the crowd is in the diversity of talents and expertise available within the crowd, and CrowdREquire specifies how requirements engineering can harness skills available in the crowd. In developing CrowdREquire, this paper designs a crowdsourcing business model and market strategy for crowdsourcing requirements engineering irrespective of the professions and areas of expertise of the crowd involved. This is also a specific application of crowdsourcing which establishes the general applicability and efficacy of crowdsourcing. The results obtained could be used as a reference for other crowdsourcing systems as well.", "which Utilities in CrowdRE ?", "Crowd", 135.0, 140.0], ["To build needed mobile applications in specific domains, requirements should be collected and analyzed in a holistic approach. However, resources are limited for small vendor groups to perform holistic requirement acquisition and elicitation. The rise of crowdsourcing and crowdfunding gives small vendor groups new opportunities to build needed mobile applications for the crowd. By finding prior stakeholders and gathering requirements effectively from the crowd, mobile application projects can establish a sound foundation in the early phase of the software process. Therefore, integration of crowd-based requirement engineering into the software process is important for small vendor groups. Conventional requirement acquisition and elicitation methods are analyst-centric. Very little discussion exists on adapting requirement acquisition tools for a crowd-centric context. In this study, several tool features of use case documentation are revised in a crowd-centric context. These features constitute a use case-based framework, called UCFrame, for crowd-centric requirement acquisition. An instantiation of UCFrame is also presented to demonstrate the effectiveness of UCFrame in collecting crowd requirements for building two mobile applications.", "which Utilities in CrowdRE ?", "Crowd", 370.0, 375.0], ["Internetware is required to respond quickly to emergent user requirements or requirements changes by providing application upgrades or making context-aware recommendations. 
As user requirements in the Internet computing environment are often changing fast and new requirements emerge more and more in a creative way, traditional requirements engineering approaches based on requirements elicitation and analysis cannot ensure the quick response of Internetware. In this paper, we propose an approach for mining context-aware user requirements from crowd-contributed mobile data. The approach captures behavior records contributed by a crowd of mobile users and automatically mines context-aware user behavior patterns (i.e., when, where and under what conditions users require a specific service) from them using the Apriori-M algorithm. Based on the mined user behaviors, emergent requirements or requirements changes can be inferred from the mined user behavior patterns, and solutions that satisfy the requirements can be recommended to users. To evaluate the proposed approach, we conduct an experimental study and show the effectiveness of the requirements mining approach.", "which Utilities in CrowdRE ?", "Crowd", 543.0, 548.0], ["Automated app review analysis is an important avenue for extracting a variety of requirements-related information. Typically, a first step toward performing such analysis is preparing a training dataset, where developers (experts) identify a set of reviews and manually annotate them according to a given task. Having sufficiently large training data is important for both achieving a high prediction accuracy and avoiding overfitting. Given millions of reviews, preparing a training set is laborious. We propose to incorporate active learning, a machine learning paradigm, in order to reduce the human effort involved in app review analysis. Our app review classification framework exploits three active learning strategies based on uncertainty sampling. We apply these strategies to an existing dataset of 4,400 app reviews for classifying app reviews as features, bugs, rating, and user experience. We find that active learning, compared to a training dataset chosen randomly, yields a significantly higher prediction accuracy under multiple scenarios.", "which Utilities in CrowdRE ?", "Task", 307.0, 311.0], ["The article covers the issues of ensuring sustainable city development based on the achievements of digitalization. Attention is also paid to the use of quality economy tools in managing 'smart' cities under conditions of the digital transformation of the national economy. The current state of 'smart' cities and the main factors contributing to their sustainable development, including the digitalization requirements, is analyzed. Based on the analysis of statistical material, the main prospects to form the 'smart city' concept are outlined, together with the possibility to assess such parameters as 'life quality', 'comfort', 'rational organization', 'opportunities', 'sustainable development', 'city environment accessibility', 'use of communication technologies'. The role of quality economics tools in ensuring big city life under conditions of the digital economy is revealed. The concept of 'life quality', which is currently becoming one of the fundamental vectors of human civilization development and a criterion increasingly used to compare countries and territories, is considered. Special attention is paid to such tools and methods of quality economics as standardization, metrology and quality management. 
It is proposed to consider these tools as a mechanism for solving the most important problems in the national economy development under conditions of digital transformation.", "which Components ?", "Sustainable development", 353.0, 376.0], ["The article covers the issues of ensuring sustainable city development based on the achievements of digitalization. Attention is also paid to the use of quality economy tools in managing 'smart' cities under conditions of the digital transformation of the national economy. The current state of 'smart' cities and the main factors contributing to their sustainable development, including the digitalization requirements is analyzed. Based on the analysis of statistical material, the main prospects to form the 'smart city' concept, the possibility to assess such parameters as 'life quality', 'comfort', 'rational organization', 'opportunities', 'sustainable development', 'city environment accessibility', 'use of communication technologies'. The role of tools for quality economics is revealed in ensuring the big city life under conditions of digital economy. The concept of 'life quality' is considered, which currently is becoming one of the fundamental vectors of the human civilization development, a criterion that is increasingly used to compare countries and territories. Special attention is paid to such tools and methods of quality economics as standardization, metrology and quality management. It is proposed to consider these tools as a mechanism for solving the most important problems in the national economy development under conditions of digital transformation.", "which Components ?", "Metrology", 1176.0, 1185.0], ["The digital transformation of our life changes the way we work, learn, communicate, and collaborate. Enterprises are presently transforming their strategy, culture, processes, and their information systems to become digital. The digital transformation deeply disrupts existing enterprises and economies. Digitization fosters the development of IT systems with many rather small and distributed structures, like Internet of Things, Microservices and mobile services. Since years a lot of new business opportunities appear using the potential of services computing, Internet of Things, mobile systems, big data with analytics, cloud computing, collaboration networks, and decision support. Biological metaphors of living and adaptable ecosystems provide the logical foundation for self-optimizing and resilient run-time environments for intelligent business services and adaptable distributed information systems with service-oriented enterprise architectures. This has a strong impact for architecting digital services and products following both a value-oriented and a service perspective. The change from a closed-world modeling world to a more flexible open-world composition and evolution of enterprise architectures defines the moving context for adaptable and high distributed systems, which are essential to enable the digital transformation. The present research paper investigates the evolution of Enterprise Architecture considering new defined value-oriented mappings between digital strategies, digital business models and an improved digital enterprise architecture.", "which Components ?", "Value", 1048.0, 1053.0], ["Abstract The study demonstrates a methodology for mapping various hematite ore classes based on their reflectance and absorption spectra, using Hyperion satellite imagery. 
Substantial validation is carried out, using the spectral feature fitting technique, with the field spectra measured over the Bailadila hill range in Chhattisgarh State in India. The results of the study showed a good correlation between the concentration of iron oxide with the depth of the near-infrared absorption feature (R^2 = 0.843) and the width of the near-infrared absorption feature (R^2 = 0.812) through different empirical models, with a root-mean-square error (RMSE) between < 0.317 and < 0.409. The overall accuracy of the study is 88.2% with a Kappa coefficient value of 0.81. Geochemical analysis and X-ray fluorescence (XRF) of field ore samples are performed to ensure different classes of hematite ore minerals. Results showed a high content of Fe > 60 wt% in most of the hematite ore samples, except banded hematite quartzite (BHQ) (< 47 wt%).", "which Analysis ?", "X-ray fluorescence (XRF)", NaN, NaN], ["Abstract Spatial distribution of altered minerals in rocks and soils in the Gadag Schist Belt (GSB) is carried out using Hyperion data of March 2013. The entire spectral range is processed with emphasis on VNIR (0.4\u20131.0 \u03bcm) and SWIR regions (2.0\u20132.4 \u03bcm). Processing methodology includes Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes correction, minimum noise fraction transformation, spectral feature fitting (SFF) and spectral angle mapper (SAM) in conjunction with spectra collected, using an analytical spectral device spectroradiometer. A total of 155 bands were analysed to identify and map the major altered minerals by studying the absorption bands between the 0.4\u20131.0-\u03bcm and 2.0\u20132.3-\u03bcm wavelength regions. The most important and diagnostic spectral absorption features occur at 0.6\u20130.7 \u03bcm, 0.86 and at 0.9 \u03bcm in the VNIR region due to charge transfer of crystal field effect in the transition elements, whereas absorption near 2.1, 2.2, 2.25 and 2.33 \u03bcm in the SWIR region is related to the bending and stretching of the bonds in hydrous minerals (Al-OH, Fe-OH and Mg-OH), particularly in clay minerals. SAM and SFF techniques are implemented to identify the minerals present. A score of 0.33\u20131 was assigned for both SAM and SFF, where a value of 1 indicates the exact mineral type. However, endmember spectra were compared with United States Geological Survey and John Hopkins University spectral libraries for minerals and soils. Five minerals, i.e. kaolinite-5, kaolinite-2, muscovite, haematite, kaosmec and one soil, i.e. greyish brown loam have been identified. Greyish brown loam and kaosmec have been mapped as the major weathering/altered products present in soils and rocks of the GSB. This was followed by haematite and kaolinite. The SAM classifier was then applied on a Hyperion image to produce a mineral map. The dominant lithology of the area included greywacke, argillite and granite gneiss.", "which Analysis ?", "Spectral Angle Mapper (SAM)", NaN, NaN], ["This study describes the utility of Earth Observation (EO)-1 Hyperion data for sub-pixel mineral investigation using Mixture Tuned Target Constrained Interference Minimized Filter (MTTCIMF) algorithm in hostile mountainous terrain of Rajsamand district of Rajasthan, which hosts economic mineralization such as lead, zinc, and copper etc. The study encompasses pre-processing, data reduction, Pixel Purity Index (PPI) and endmember extraction from reflectance image of surface minerals such as illite, montmorillonite, phlogopite, dolomite and chlorite. 
These endmembers were then assessed with USGS mineral spectral library and lab spectra of rock samples collected from field for spectral inspection. Subsequently, MTTCIMF algorithm was implemented on processed image to obtain mineral distribution map of each detected mineral. A virtual verification method has been adopted to evaluate the classified image, which uses directly image information to evaluate the result and confirm the overall accuracy and kappa coefficient of 68 % and 0.6 respectively. The sub-pixel level mineral information with reasonable accuracy could be a valuable guide to geological and exploration community for expensive ground and/or lab experiments to discover economic deposits. Thus, the study demonstrates the feasibility of Hyperion data for sub-pixel mineral mapping using MTTCIMF algorithm with cost and time effective approach.", "which Analysis ?", "Mixture Tuned Target Constrained Interference Minimized Filter (MTTCIMF) algorithm", NaN, NaN], ["Abstract. Hyperspectral remote sensing has been widely used in mineral identification using the particularly useful short-wave infrared (SWIR) wavelengths (1.0 to 2.5 \u03bcm). Current mineral mapping methods are easily limited by the sensor\u2019s radiometric sensitivity and atmospheric effects. Therefore, a simple mineral mapping algorithm (SMMA) based on the combined application with multitype diagnostic SWIR absorption features for hyperspectral data is proposed. A total of nine absorption features are calculated, respectively, from the airborne visible/infrared imaging spectrometer data, the Hyperion hyperspectral data, and the ground reference spectra data collected from the United States Geological Survey (USGS) spectral library. Based on spectral analysis and statistics, a mineral mapping decision-tree model for the Cuprite mining district in Nevada, USA, is constructed. Then, the SMMA algorithm is used to perform mineral mapping experiments. The mineral map from the USGS (USGS map) in the Cuprite area is selected for validation purposes. Results showed that the SMMA algorithm is able to identify most minerals with high coincidence with USGS map results. Compared with Hyperion data (overall accuracy=74.54%), AVIRIS data showed overall better mineral mapping results (overall accuracy=94.82%) due to low signal-to-noise ratio and high spatial resolution.", "which Analysis ?", "Simple mineral mapping algorithm (SMMA)", NaN, NaN], ["A commonly cited mechanism for invasion resistance is more complete resource use by diverse plant assemblages with maximum niche complementarity. We investigated the invasion resistance of several plant functional groups against the nonindigenous forb Spotted knapweed (Centaurea maculosa). The study consisted of a factorial combination of seven functional group removals (groups singularly or in combination) and two C. maculosa treatments (addition vs. no addition) applied in a randomized complete block design replicated four times at each of two sites. We quantified aboveground plant material nutrient concentration and uptake (concentration \u00d7 biomass) by indigenous functional groups: grasses, shallow\u2010rooted forbs, deep\u2010rooted forbs, spikemoss, and the nonindigenous invader C. maculosa. In 2001, C. maculosa density depended upon which functional groups were removed. The highest C. maculosa densities occurred where all vegetation or all forbs were removed. 
Centaurea maculosa densities were the lowest in plots where nothing, shallow\u2010rooted forbs, deep\u2010rooted forbs, grasses, or spikemoss were removed. Functional group biomass was also collected and analyzed for nitrogen, phosphorus, potassium, and sulphur. Based on covariate analyses, postremoval indigenous plot biomass did not relate to invasion by C. maculosa. Analysis of variance indicated that C. maculosa tissue nutrient percentage and net nutrient uptake were most similar to indigenous forb functional groups. Our study suggests that establishing and maintaining a diversity of plant functional groups within the plant community enhances resistance to invasion. Indigenous plants of functionally similar groups as an invader may be particularly important in invasion resistance.", "which Measure of species similarity ?", "Functional groups", 203.0, 220.0], ["AbstractUnderstanding the relative importance of various functional groups in minimizing invasion by medusahead is central to increasing the resistance of native plant communities. The objective of this study was to determine the relative importance of key functional groups within an intact Wyoming big sagebrush\u2013bluebunch wheatgrass community type on minimizing medusahead invasion. Treatments consisted of removal of seven functional groups at each of two sites, one with shrubs and one without shrubs. Removal treatments included (1) everything, (2) shrubs, (3) perennial grasses, (4) taprooted forbs, (5) rhizomatous forbs, (6) annual forbs, and (7) mosses. A control where nothing was removed was also established. Plots were arranged in a randomized complete block with 4 replications (blocks) at each site. Functional groups were removed beginning in the spring of 2004 and maintained monthly throughout each growing season through 2009. Medusahead was seeded at a rate of 2,000 seeds m\u22122 (186 seeds ft\u22122) in fall 2005. Removing perennial grasses nearly doubled medusahead density and biomass compared with any other removal treatment. The second highest density and biomass of medusahead occurred from removing rhizomatous forbs (phlox). We found perennial grasses played a relatively more significant role than other species in minimizing invasion by medusahead. We suggest that the most effective basis for establishing medusahead-resistant plant communities is to establish 2 or 3 highly productive grasses that are complementary in niche and that overlap that of the invading species.", "which Measure of species similarity ?", "Functional groups", 90.0, 107.0], ["1 Although experimental studies usually reveal that resistance to invasion increases with species diversity, observational studies sometimes show the opposite trend. The higher resistance of diverse plots to invasion may be partly due to the increased probability of a plot containing a species with similar resource requirements to the invader. 2 We conducted a study of the invasibility of monocultures belonging to three different functional groups by seven sown species of legume. By only using experimentally established monocultures, rather than manipulating the abundance of particular functional groups, we removed both species diversity and differences in underlying abiotic conditions as potentially confounding variables. 3 We found that legume monocultures were more resistant than monocultures of grasses or non\u2010leguminous forbs to invasion by sown legumes but not to invasion by other unsown species. 
The functional group effect remained after controlling for differences in total biomass and the average height of the above\u2010ground biomass. 4 The relative success of legume species and types also varied with monoculture characteristics. The proportional biomass of climbing legumes increased strongly with biomass height in non\u2010leguminous forb monocultures, while it declined with biomass height in grass monocultures. Trifolium pratense was the most successful invader in grass monocultures, while Vicia cracca was the most successful in non\u2010leguminous forb monocultures. 5 Our results suggest that non\u2010random assembly rules operate in grassland communities both between and within functional groups. Legume invaders found it much more difficult to invade legume plots, while grass and non\u2010leguminous forb plots favoured non\u2010climbing and climbing legumes, respectively. If plots mimic monospecific patches, the effect of these assembly rules in diverse communities might depend upon the patch structure of diverse communities. This dependency on patch structure may contribute to differences in results of research from experimental vs. natural communities.", "which Measure of species similarity ?", "Functional groups", 434.0, 451.0], ["Abstract. One of the major objectives of the BIOSOPE cruise, carried out on the R/V Atalante from October-November 2004 in the South Pacific Ocean, was to establish productivity rates along a zonal section traversing the oligotrophic South Pacific Gyre (SPG). These results were then compared to measurements obtained from the nutrient \u2013 replete waters in the Chilean upwelling and around the Marquesas Islands. A dual 13C/15N isotope technique was used to estimate the carbon fixation rates, inorganic nitrogen uptake (including dinitrogen fixation), ammonium (NH4) and nitrate (NO3) regeneration and release of dissolved organic nitrogen (DON). The SPG exhibited the lowest primary production rates (0.15 g C m\u22122 d\u22121), while rates were 7 to 20 times higher around the Marquesas Islands and in the Chilean upwelling, respectively. In the very low productive area of the SPG, most of the primary production was sustained by active regeneration processes that fuelled up to 95% of the biological nitrogen demand. Nitrification was active in the surface layer and often balanced the biological demand for nitrate, especially in the SPG. The percentage of nitrogen released as DON represented a large proportion of the inorganic nitrogen uptake (13\u201315% in average), reaching 26\u201341% in the SPG, where DON production played a major role in nitrogen cycling. Dinitrogen fixation was detectable over the whole study area; even in the Chilean upwelling, where rates as high as 3 nmoles l\u22121 d\u22121 were measured. In these nutrient-replete waters new production was very high (0.69\u00b10.49 g C m\u22122 d\u22121) and essentially sustained by nitrate levels. In the SPG, dinitrogen fixation, although occurring at much lower daily rates (\u22481\u20132 nmoles l\u22121 d\u22121), sustained up to 100% of the new production (0.008\u00b10.007 g C m\u22122 d\u22121) which was two orders of magnitude lower than that measured in the upwelling. The annual N2-fixation of the South Pacific is estimated to 21\u00d710^12 g, of which 1.34\u00d710^12 g is for the SPG only. 
Even if our \"snapshot\" estimates of N2-fixation rates were lower than that expected from a recent ocean circulation model, these data confirm that the N-deficiency South Pacific Ocean would provide an ideal ecological niche for the proliferation of N2-fixers which are not yet identified.", "which Region of data collection ?", "South Pacific Ocean", 127.0, 146.0], ["Nitrogen (N) is an essential element for life and controls the magnitude of primary productivity in the ocean. In order to describe the microorganisms that catalyze N transformations in surface waters in the South Pacific Ocean, we collected high-resolution biotic and abiotic data along a 7000 km transect, from the Antarctic ice edge to the equator. The transect, conducted between late Austral autumn and early winter 2016, covered major oceanographic features such as the polar front (PF), the subtropical front (STF) and the Pacific equatorial divergence (PED). We measured N2 fixation and nitrification rates and quantified the relative abundances of diazotrophs and nitrifiers in a region where few to no rate measurements are available. Even though N2 fixation rates are usually below detection limits in cold environments, we were able to measure this N pathway at 7/10 stations in the cold and nutrient rich waters near the PF. This result highlights that N2 fixation rates continue to be measured outside the well-known subtropical regions. The majority of the mid to high N2 fixation rates (>\u223c20 nmol L\u20131 d\u20131), however, still occurred in the expected tropical and subtropical regions. High throughput sequence analyses of the dinitrogenase reductase gene (nifH) revealed that the nifH Cluster I dominated the diazotroph diversity throughout the transect. nifH gene richness did not show a latitudinal trend, nor was it significantly correlated with N2 fixation rates. Nitrification rates above the mixed layer in the Southern Ocean ranged between 56 and 1440 nmol L\u20131 d\u20131. Our data showed a decoupling between carbon and N assimilation (NO3\u2013 and NH4+ assimilation rates) in winter in the South Pacific Ocean. Phytoplankton community structure showed clear changes across the PF, the STF and the PED, defining clear biomes. Overall, these findings provide a better understanding of the ecosystem functionality in the South Pacific Ocean across key oceanographic biomes.", "which Region of data collection ?", "South Pacific Ocean", 208.0, 227.0], ["Biogeochemical implications of global imbalance between the rates of marine dinitrogen (N2) fixation and denitrification have spurred us to understand the former process in the Arabian Sea, which contributes considerably to the global nitrogen budget. Heterotrophic bacteria have gained recent appreciation for their major role in marine N budget by fixing a significant amount of N2. Accordingly, we hypothesize a probable role of heterotrophic diazotrophs from the 15N2 enriched isotope labelling dark incubations that witnessed rates comparable to the light incubations in the eastern Arabian Sea during spring 2010. Maximum areal rates (8 mmol N m-2 d-1) were the highest ever observed anywhere in world oceans. Our results suggest that the eastern Arabian Sea gains ~92% of its new nitrogen through N2 fixation. 
Our results are consistent with the observations made in the same region in preceding year, i.e., during the spring of 2009.", "which Region of data collection ?", "Eastern Arabian Sea", 580.0, 599.0], ["The partial pressure of CO2 (pCO2) was measured during the 1995 South\u2010West Monsoon in the Arabian Sea. The Arabian Sea was characterized throughout by a moderate supersaturation of 12\u201330 \u00b5atm. The stable atmospheric pCO2 level was around 345 \u00b5atm. An extreme supersaturation was found in areas of coastal upwelling off the Omani coast with pCO2 peak values in surface waters of 750 \u00b5atm. Such two\u2010fold saturation (218%) is rarely found elsewhere in open ocean environments. We also encountered cold upwelled water 300 nm off the Omani coast in the region of Ekman pumping, which was also characterized by a strongly elevated seawater pCO2 of up to 525 \u00b5atm. Due to the strong monsoonal wind forcing the Arabian Sea as a whole and the areas of upwelling in particular represent a significant source of atmospheric CO2 with flux densities from around 2 mmol m\u22122 d\u22121 in the open ocean to 119 mmol m\u22122 d\u22121 in coastal upwelling. Local air masses passing the area of coastal upwelling showed increasing CO2 concentrations, which are consistent with such strong emissions.", "which Region of data collection ?", "Arabian Sea", 90.0, 101.0], ["The import of nitrogen via dinitrogen fixation supports primary production, particularly in the oligotrophic ocean; however, to what extent dinitrogen fixation influences primary production, and the role of specific types of diazotrophs, remains poorly understood. We examined the relationship between primary production and dinitrogen fixation together with diazotroph community structure in the oligotrophic western and eastern South Pacific Ocean and found that dinitrogen fixation was higher than nitrate\u2010based new production. Primary production increased in the middle of the western subtropical region, where the cyanobacterium Trichodesmium dominated the diazotroph community and accounted for up to 7.8% of the phytoplankton community, and the abundance of other phytoplankton taxa (especially Prochlorococcus) was high. These results suggest that regenerated production was enhanced by nitrogen released from Trichodesmium and that carbon fixation by Trichodesmium also contributed significantly to total primary production. Although volumetric dinitrogen fixation was comparable between the western and eastern subtropical regions, primary production in the western waters was more than twice as high as that in the eastern waters, where UCYN\u2010A1 (photoheterotroph) and heterotrophic bacteria were the dominant diazotrophs. This suggests that dinitrogen fixed by these diazotrophs contributed relatively little to primary production of the wider community, and there was limited carbon fixation by these diazotrophs. Hence, we document how the community composition of diazotrophs in the field can be reflected in how much nitrogen becomes available to the wider phytoplankton community and in how much autotrophic diazotrophs themselves fix carbon and thereby influences the magnitude of local primary production.", "which Region of data collection ?", "South Pacific Ocean", 430.0, 449.0], ["Young-of-the-year (YOY) bay anchovy Anchoa mitchilli occur in higher proportion relative to larvae in the upper Chesapeake Bay. This has led to the hypothesis that up-bay dispersal favors recruitment. 
Here we test whether recruitment of bay anchovy to different parts of the Chesapeake Bay results from differential dispersal rates. Electron microprobe analysis of otolith strontium was used to hind-cast patterns and rates of movement across salinity zones. Individual chronologies of strontium were constructed for 55 bay anchovy aged 43 to 103 d collected at 5 Chesapeake Bay mainstem sites representing upper, middle, and lower regions of the bay during September 1998. Most YOY anchovy were estimated to have originated in the lower bay. Those collected at 5 and 11 psu sites exhibited the highest past dispersal rates, all in an up-estuary direction. No significant net dispersal up- or down-estuary occurred for recruits captured at the polyhaline (\u226518 psu) site. Initiation of ingress to lower salinity waters (<15 psu) was estimated to occur near metamorphosis, during the early juvenile stage, at sizes \u2265 25 mm standard length (SL) and ages \u2265 50 d after hatch. Estimated maximum upstream dispersal rate (over-the-ground speed) during the first 50 to 100 d of life exceeded 50 mm s^-1.", "which Species Order ?", "Anchoa mitchilli", 36.0, 52.0], ["We used an electron probe microanalyzer (EPMA) to determine the migratory environmental history of the catadromous grey mullet Mugil cephalus from the Sr:Ca ratios in otoliths of 10 newly recruited juveniles collected from estuaries and 30 adults collected from estuaries, nearshore (coastal waters and bay) and offshore, in the adjacent waters off Taiwan. Mean (\u00b1SD) Sr:Ca ratios at the edges of adult otoliths increased significantly from 6.5 \u00b1 0.9 \u00d7 10^-3 in estuaries and nearshore waters to 8.9 \u00b1 1.4 \u00d7 10^-3 in offshore waters (p < 0.01), corresponding to increasing ambient salinity from estuaries and nearshore to offshore waters. The mean Sr:Ca ratios decreased significantly from the core (11.2 \u00b1 1.2 \u00d7 10^-3) to the otolith edge (6.2 \u00b1 1.4 \u00d7 10^-3) in juvenile otoliths (p < 0.001). The mullet generally spawned offshore and recruited to the estuary at the juvenile stage; therefore, these data support the use of Sr:Ca ratios in otoliths to reconstruct the past salinity history of the mullet. A life-history scan of the otolith Sr:Ca ratios indicated that the migratory environmental history of the mullet beyond the juvenile stage consists of 2 types. In Type 1 mullet, Sr:Ca ratios range between 4.0 \u00d7 10^-3 and 13.9 \u00d7 10^-3, indicating that they migrated between estuary and offshore waters but rarely entered the freshwater habitat. In Type 2 mullet, the Sr:Ca ratios decreased to a minimum value of 0.4 \u00d7 10^-3, indicating that the mullet migrated to a freshwater habitat. Most mullet beyond the juvenile stage migrated from estuary to offshore waters, but a few mullet less than 2 yr old may have migrated into a freshwater habitat. Most mullet collected nearshore and offshore were of Type 1, while those collected from the estuaries were a mixture of Types 1 and 2. The mullet spawning stock consisted mainly of Type 1 fish. The growth rates of the mullet were similar for Types 1 and 2. 
The migratory patterns of the mullet were more divergent than indicated by previous reports of their catadromous behavior.", "which Species Order ?", "Mugil cephalus", 129.0, 143.0], ["The habitat use and migratory patterns of Osbeck\u2019s grenadier anchovy Coilia mystus in the Yangtze estuary and the estuarine tapertail anchovy Coilia ectenes from the Yangtze estuary and Taihu Lake, China, were studied by examining the environmental signatures of strontium and calcium in their otoliths using electron probe microanalysis. The results indicated that Taihu C. ectenes utilizes only freshwater habitats, whereas the habitat use patterns of Yangtze C. ectenes and C. mystus were much more flexible, apparently varying among fresh, brackish and marine areas. The present study suggests that the spawning populations of Yangtze C. ectenes and C. mystus in the Yangtze estuary consist of individuals with different migration histories, and individuals of these two Yangtze Coilia species seem to use a variety of different habitats during the non-spawning seasons.", "which Species Order ?", "Coilia mystus", 69.0, 82.0], ["The novel coronavirus (2019-nCoV) is a recently emerged human pathogen that has spread widely since January 2020. Initially, the basic reproductive number, R0, was estimated to be 2.2 to 2.7. Here we provide a new estimate of this quantity. We collected extensive individual case reports and estimated key epidemiology parameters, including the incubation period. Integrating these estimates and high-resolution real-time human travel and infection data with mathematical models, we estimated that the number of infected individuals during early epidemic double every 2.4 days, and the R0 value is likely to be between 4.7 and 6.6. We further show that quarantine and contact tracing of symptomatic individuals alone may not be effective and early, strong control measures are needed to stop transmission of the virus.", "which R0 estimates (average) ?", "6.6", 627.0, 630.0], ["The exported cases of 2019 novel coronavirus (COVID-19) infection that were confirmed outside China provide an opportunity to estimate the cumulative incidence and confirmed case fatality risk (cCFR) in mainland China. Knowledge of the cCFR is critical to characterize the severity and understand the pandemic potential of COVID-19 in the early stage of the epidemic. Using the exponential growth rate of the incidence, the present study statistically estimated the cCFR and the basic reproduction number\u2014the average number of secondary cases generated by a single primary case in a na\u00efve population. We modeled epidemic growth either from a single index case with illness onset on 8 December 2019 (Scenario 1), or using the growth rate fitted along with the other parameters (Scenario 2) based on data from 20 exported cases reported by 24 January 2020. The cumulative incidence in China by 24 January was estimated at 6924 cases (95% confidence interval [CI]: 4885, 9211) and 19,289 cases (95% CI: 10,901, 30,158), respectively. The latest estimated values of the cCFR were 5.3% (95% CI: 3.5%, 7.5%) for Scenario 1 and 8.4% (95% CI: 5.3%, 12.3%) for Scenario 2. The basic reproduction number was estimated to be 2.1 (95% CI: 2.0, 2.2) and 3.2 (95% CI: 2.7, 3.7) for Scenarios 1 and 2, respectively. Based on these results, we argued that the current COVID-19 epidemic has a substantial potential for causing a pandemic. 
The proposed approach provides insights in early risk assessment using publicly available data.", "which R0 estimates (average) ?", "2.1", 1214.0, 1217.0], ["Background The 2019 novel Coronavirus (COVID-19) emerged in Wuhan, China in December 2019 and has been spreading rapidly in China. Decisions about its pandemic threat and the appropriate level of public health response depend heavily on estimates of its basic reproduction number and assessments of interventions conducted in the early stages of the epidemic. Methods We conducted a mathematical modeling study using five independent methods to assess the basic reproduction number (R0) of COVID-19, using data on confirmed cases obtained from the China National Health Commission for the period 10th January to 8th February. We analyzed the data for the period before the closure of Wuhan city (10th January to 23rd January) and the post-closure period (23rd January to 8th February) and for the whole period, to assess both the epidemic risk of the virus and the effectiveness of the closure of Wuhan city on spread of COVID-19. Findings Before the closure of Wuhan city the basic reproduction number of COVID-19 was 4.38 (95% CI: 3.63-5.13), dropping to 3.41 (95% CI: 3.16-3.65) after the closure of Wuhan city. Over the entire epidemic period COVID-19 had a basic reproduction number of 3.39 (95% CI: 3.09-3.70), indicating it has a very high transmissibility. Interpretation COVID-19 is a highly transmissible virus with a very high risk of epidemic outbreak once it emerges in metropolitan areas. The closure of Wuhan city was effective in reducing the severity of the epidemic, but even after closure of the city and the subsequent expansion of that closure to other parts of Hubei the virus remained extremely infectious. Emergency planners in other cities should consider this high infectiousness when considering responses to this virus.", "which R0 estimates (average) ?", "3.41", 1057.0, 1061.0], ["Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. 
Shorter serial intervals lead to lower estimates of R0, which suggest that half of all secondary infections should be prevented to control spread.", "which R0 estimates (average) ?", "1.97", 1116.0, 1120.0], ["Knowledge graph embedding is an important task and it will benefit lots of downstream applications. Currently, deep neural networks based methods achieve state-of-the-art performance. However, most of these existing methods are very complex and need much time for training and inference. To address this issue, we propose a simple but effective atrous convolution based knowledge graph embedding method. Compared with existing state-of-the-art methods, our method has following main characteristics. First, it effectively increases feature interactions by using atrous convolutions. Second, to address the original information forgotten issue and vanishing/exploding gradient issue, it uses the residual learning method. Third, it has simpler structure but much higher parameter efficiency. We evaluate our method on six benchmark datasets with different evaluation metrics. Extensive experiments show that our model is very effective. On these diverse datasets, it achieves better results than the compared state-of-the-art methods on most of evaluation metrics. The source codes of our model could be found at https://github.com/neukg/AcrE.", "which has source code ?", "https://github.com/neukg/AcrE", 1112.0, 1141.0], ["While modern machine translation has relied on large parallel corpora, a recent line of work has managed to train Neural Machine Translation (NMT) systems from monolingual corpora only (Artetxe et al., 2018c; Lample et al., 2018). Despite the potential of this approach for low-resource settings, existing systems are far behind their supervised counterparts, limiting their practical interest. In this paper, we propose an alternative approach based on phrase-based Statistical Machine Translation (SMT) that significantly closes the gap with supervised systems. Our method profits from the modular architecture of SMT: we first induce a phrase table from monolingual corpora through cross-lingual embedding mappings, combine it with an n-gram language model, and fine-tune hyperparameters through an unsupervised MERT variant. In addition, iterative backtranslation improves results further, yielding, for instance, 14.08 and 26.22 BLEU points in WMT 2014 English-German and English-French, respectively, an improvement of more than 7-10 BLEU points over previous unsupervised systems, and closing the gap with supervised SMT (Moses trained on Europarl) down to 2-5 BLEU points. Our implementation is available at https://github.com/artetxem/monoses.", "which has source code ?", "https://github.com/artetxem/monoses", 1216.0, 1251.0], ["We examine the capabilities of a unified, multi-task framework for three information extraction tasks: named entity recognition, relation extraction, and event extraction. Our framework (called DyGIE++) accomplishes all tasks by enumerating, refining, and scoring text spans designed to capture local (within-sentence) and global (cross-sentence) context. Our framework achieves state-of-the-art results across all tasks, on four datasets from a variety of domains. We perform experiments comparing different techniques to construct span representations. Contextualized embeddings like BERT perform well at capturing relationships among entities in the same or adjacent sentences, while dynamic span graph updates model long-range cross-sentence relationships. 
For instance, propagating span representations via predicted coreference links can enable the model to disambiguate challenging entity mentions. Our code is publicly available at https://github.com/dwadden/dygiepp and can be easily adapted for new tasks or datasets.", "which has source code ?", "https://github.com/dwadden/dygiepp", 940.0, 974.0], ["Luo J & Cardina J (2012). Germination patterns and implications for invasiveness in three Taraxacum (Asteraceae) species. Weed Research 52, 112\u2013121. Summary The ability to germinate across different environments has been considered an important trait of invasive plant species that allows for establishment success in new habitats. Using two alien congener species of Asteraceae \u2013Taraxacum officinale (invasive) and Taraxacum laevigatum laevigatum (non-invasive) \u2013 we tested the hypothesis that invasive species germinate better than non-invasives under various conditions. The germination patterns of Taraxacum brevicorniculatum, a contaminant found in seeds of the crop Taraxacum kok-saghyz, were also investigated to evaluate its invasive potential. In four experiments, we germinated seeds along gradients of alternating temperature, constant temperature (with or without light), water potential and following accelerated ageing. Neither higher nor lower germination per se explained invasion success for the Taraxacum species tested here. At alternating temperature, the invasive T. officinale had higher germination than or similar to the non-invasive T. laevigatum. Contrary to predictions, T. laevigatum exhibited higher germination than T. officinale in environments of darkness, low water potential or after the seeds were exposed to an ageing process. These results suggested a complicated role of germination in the success of T. officinale. Taraxacum brevicorniculatum showed the highest germination among the three species in all environments. The invasive potential of this species is thus unclear and will probably depend on its performance at other life stages along environmental gradients.", "which Study date ?", "2012", 19.0, 23.0], ["Aims. To evaluate the role of native predators (birds) within an Australian foodweb (lerp psyllids and eucalyptus trees) reassembled in California. Location. Eucalyptus groves within Santa Cruz, California. Methods. We compared bird diversity and abundance between a eucalyptus grove infested with lerp psyllids and a grove that was uninfested, using point counts. We documented shifts in the foraging behaviour of birds between the groves using structured behavioural observations. Additionally, we judged the effect of bird foraging on lerp psyllid abundance using exclosure experiments. Results. We found a greater richness and abundance of Californian birds within a psyllid infested eucalyptus grove compared to a matched non-infested grove, and that Californian birds modify their foraging behaviour within the infested grove in order to concentrate on ingesting psyllids. This suggests that Californian birds could provide indirect top-down benefits to eucalyptus trees similar to those observed in Australia. However, using bird exclosure experiments, we found no evidence of top-down control of lerp psyllids by Californian birds. Main conclusions. We suggest that physiological and foraging differences between Californian and Australian pysllid-eating birds account for the failure to observe top-down control of psyllid populations in California. 
The increasing rate of non-indigenous species invasions has produced local biotas that are almost entirely composed of non-indigenous species. This example illustrates the complex nature of cosmopolitan native-exotic food webs, and the ecological insights obtainable through their study. \u00a9 2004 Blackwell Publishing Ltd.", "which Study date ?", "2004", 1649.0, 1653.0], ["1. Biological invasion theory predicts that the introduction and establishment of non-native species is positively correlated with propagule pressure. Releases of pet and aquarium fishes to inland waters has a long history; however, few studies have examined the demographic basis of their importation and incidence in the wild. 2. For the 1500 grid squares (10\u00d710 km) that make up England, data on human demographics (population density, numbers of pet shops, garden centres and fish farms), the numbers of non-native freshwater fishes (from consented licences) imported in those grid squares (i.e. propagule pressure), and the reported incidences (in a national database) of non-native fishes in the wild were used to examine spatial relationships between the occurrence of non-native fishes and the demographic factors associated with propagule pressure, as well as to test whether the demographic factors are statistically reliable predictors of the incidence of non-native fishes, and as such surrogate estimators of propagule pressure. 3. Principal coordinates of neighbour matrices analyses, used to generate spatially explicit models, and confirmatory factor analysis revealed that spatial distributions of non-native species in England were significantly related to human population density, garden centre density and fish farm density. Human population density and the number of fish imports were identified as the best predictors of propagule pressure. 4. Human population density is an effective surrogate estimator of non-native fish propagule pressure and can be used to predict likely areas of non-native fish introductions. In conjunction with fish movements, where available, human population densities can be used to support biological invasion monitoring programmes across Europe (and perhaps globally) and to inform management decisions as regards the prioritization of areas for the control of non-native fish introductions. \u00a9 Crown copyright 2010. Reproduced with the permission of her Majesty's Stationery Office. Published by John Wiley & Sons, Ltd.", "which Study date ?", "2010", 1964.0, 1968.0], ["Channelization is often a major cause of human impacts on river systems. It affects both hydrogeomorphic features and habitat characteristics and potentially impacts riverine flora and fauna. Human-disturbed fluvial ecosystems also appear to be particularly vulnerable to exotic plant establishment. Following a 12-year recovery period, the distribution, composition and cover of both exotic and native plant species were studied along a Portuguese lowland river segment, which had been subjected to resectioning, straightening and two-stage bank reinforcement, and were compared with those of a nearby, less impacted segment. The species distribution was also related to environmental data. Species richness and floristic composition in the channelized river segment were found to be similar to those at the more \u2018natural\u2019 river sites. Floral differences were primarily consistent with the dominance of cover by certain species. 
However, there were significant differences in exotic and native species richness and cover between the \u2018natural\u2019 corridor and the channelized segment, which was more susceptible to invasion by exotic perennial taxa, such as Eryngium pandanifolium, Paspalum paspalodes, Tradescantia fluminensis and Acacia dealbata. Factorial and canonical correspondence analyses revealed considerable patchiness in the distribution of species assemblages. The latter were associated with small differences in substrate composition and their own relative position across the banks and along the river segments in question. Data was also subjected to an unweighted pair-group arithmetic average clustering, and the Indicator Value methodology was applied to selected cluster noda in order to obtain significant indicator species. Copyright \u00a9 2001 John Wiley & Sons, Ltd.", "which Study date ?", "2001", 1755.0, 1759.0], ["Nitrogen fixation is an essential process that biologically transforms atmospheric dinitrogen gas to ammonia, therefore compensating for nitrogen losses occurring via denitrification and anammox. Currently, inputs and losses of nitrogen to the ocean resulting from these processes are thought to be spatially separated: nitrogen fixation takes place primarily in open ocean environments (mainly through diazotrophic cyanobacteria), whereas nitrogen losses occur in oxygen-depleted intermediate waters and sediments (mostly via denitrifying and anammox bacteria). Here we report on rates of nitrogen fixation obtained during two oceanographic cruises in 2005 and 2007 in the eastern tropical South Pacific (ETSP), a region characterized by the presence of coastal upwelling and a major permanent oxygen minimum zone (OMZ). Our results show significant rates of nitrogen fixation in the water column; however, integrated rates from the surface down to 120 m varied by \u223c30 fold between cruises (7.5\u00b14.6 versus 190\u00b182.3 \u00b5mol m\u22122 d\u22121). Moreover, rates were measured down to 400 m depth in 2007, indicating that the contribution to the integrated rates of the subsurface oxygen-deficient layer was \u223c5 times higher (574\u00b1294 \u00b5mol m\u22122 d\u22121) than the oxic euphotic layer (48\u00b168 \u00b5mol m\u22122 d\u22121). Concurrent molecular measurements detected the dinitrogenase reductase gene nifH in surface and subsurface waters. Phylogenetic analysis of the nifH sequences showed the presence of a diverse diazotrophic community at the time of the highest measured nitrogen fixation rates. Our results thus demonstrate the occurrence of nitrogen fixation in nutrient-rich coastal upwelling systems and, importantly, within the underlying OMZ. They also suggest that nitrogen fixation is a widespread process that can sporadically provide a supplementary source of fixed nitrogen in these regions.", "which Sampling year ?", "2007", 662.0, 666.0], ["The classical paradigm about marine N2 fixation establishes that this process is mainly constrained to nitrogen-poor tropical and subtropical regions, and sustained by the colonial cyanobacterium Trichodesmium spp. and diatom-diazotroph symbiosis. However, the application of molecular techniques allowed determining a high phylogenic diversity and a wide distribution of marine diazotrophs, which extends the range of ocean environments where biological N2 fixation may be relevant. 
Between February 2014 and December 2015, we carried out 10 one-day samplings in the upwelling system off NW Iberia in order to: 1) investigate the seasonal variability in the magnitude of N2 fixation, 2) determine its biogeochemical role as a mechanism of new nitrogen supply, and 3) quantify the main diazotrophs in the region under contrasting hydrographic regimes. Our results indicate that the magnitude of N2 fixation in this region was relatively low (0.001\u00b10.002 \u2013 0.095\u00b10.024 \u00b5mol N m-3 d-1), comparable to the lower-end of rates described for the subtropical NE Atlantic. Maximum rates were observed at surface during both upwelling and relaxation conditions. The comparison with nitrate diffusive fluxes revealed the minor role of N2 fixation (2 fixation activity detected in the region. Quantitative PCR targeting the nifH gene revealed the highest abundances of two sublineages of Candidatus Atelocyanobacterium thalassa or UCYN-A (UCYN-A1 and UCYN-A2) mainly at surface waters during upwelling and relaxation conditions, and of Gammaproteobacteria \u03b3-24774A11 at deep waters during downwelling. Maximum abundance for the three groups were up to 6.7 \u00d7 10^2, 1.5 \u00d7 10^3 and 2.4 \u00d7 10^4 nifH copies L-1, respectively. Our findings demonstrate measurable N2 fixation activity and presence of diazotrophs throughout the year in a nitrogen-rich temperate region.", "which Sampling year ?", "2015", 519.0, 523.0], ["Biological N2 fixation rates were quantified in the Eastern Tropical South Pacific (ETSP) during both El Ni\u00f1o (February 2010) and La Ni\u00f1a (March\u2013April 2011) conditions, and from Low\u2010Nutrient, Low\u2010Chlorophyll (20\u00b0S) to High\u2010Nutrient, Low\u2010Chlorophyll (HNLC) (10\u00b0S) conditions. N2 fixation was detected at all stations with rates ranging from 0.01 to 0.88 nmol N L\u22121 d\u22121, with higher rates measured during El Ni\u00f1o conditions compared to La Ni\u00f1a. High N2 fixations rates were reported at northern stations (HNLC conditions) at the oxycline and in the oxygen minimum zone (OMZ), despite nitrate concentrations up to 30 \u00b5mol L\u22121, indicating that inputs of new N can occur in parallel with N loss processes in OMZs. Water\u2010column integrated N2 fixation rates ranged from 4 to 53 \u00b5mol N m\u22122 d\u22121 at northern stations, and from 0 to 148 \u00b5mol m\u22122 d\u22121 at southern stations, which are of the same order of magnitude as N2 fixation rates measured in the oligotrophic ocean. N2 fixation rates responded significantly to Fe and organic carbon additions in the surface HNLC waters, and surprisingly by concomitant Fe and N additions in surface waters at the edge of the subtropical gyre. Recent studies have highlighted the predominance of heterotrophic diazotrophs in this area, and we hypothesize that N2 fixation could be directly limited by inorganic nutrient availability, or indirectly through the stimulation of primary production and the subsequent excretion of dissolved organic matter and/or the formation of micro\u2010environments favorable for heterotrophic N2 fixation.", "which Sampling year ?", "2010", 120.0, 124.0], ["Extensive measurements of nitrous oxide (N2O) have been made during April\u2013May 1994 (intermonsoon), February\u2013March 1995 (northeast monsoon), July\u2013August 1995 and August 1996 (southwest monsoon) in the Arabian Sea. Low N2O supersaturations in the surface waters are observed during intermonsoon compared to those in northeast and southwest monsoons. 
Spatial distributions of supersaturations manifest the effects of larger mixing during winter cooling and wind\u2010driven upwelling during monsoon period off the Indian west coast. A net positive flux is observable during all the seasons, with no discernible differences from the open ocean to coastal regions. The average ocean\u2010to\u2010atmosphere fluxes of N2O are estimated, using wind speed dependent gas transfer velocity, to be of the order of 0.26, 0.003, and 0.51, and 0.78 pg (pico grams) cm\u22122 s\u22121 during northeast monsoon, intermonsoon, and southwest monsoon in 1995 and 1996, respectively. The lower range of annual emission of N2O is estimated to be 0.56\u20130.76 Tg N2O per year which constitutes 13\u201317% of the net global oceanic source. However, N2O emission from the Arabian Sea can be as high as 1.0 Tg N2O per year using different gas transfer models.", "which Sampling year ?", "1996", 168.0, 172.0], ["Depth profiles of dissolved nitrous oxide (N2O) were measured in the central and western Arabian Sea during four cruises in May and July\u2013August 1995 and May\u2013July 1997 as part of the German contribution to the Arabian Sea Process Study of the Joint Global Ocean Flux Study. The vertical distribution of N2O in the water column on a transect along 65\u00b0E showed a characteristic double-peak structure, indicating production of N2O associated with steep oxygen gradients at the top and bottom of the oxygen minimum zone. We propose a general scheme consisting of four ocean compartments to explain the N2O cycling as a result of nitrification and denitrification processes in the water column of the Arabian Sea. We observed a seasonal N2O accumulation at 600\u2013800 m near the shelf break in the western Arabian Sea. We propose that, in the western Arabian Sea, N2O might also be formed during bacterial oxidation of organic matter by the reduction of IO3\u2212 to I\u2212, indicating that the biogeochemical cycling of N2O in the Arabian Sea during the SW monsoon might be more complex than previously thought. A compilation of sources and sinks of N2O in the Arabian Sea suggested that the N2O budget is reasonably balanced.", "which Sampling year ?", "1995", 144.0, 148.0], ["Abstract. Coastal upwelling ecosystems with marked oxyclines (redoxclines) present high availability of electron donors that favour chemoautotrophy, leading in turn to high N2O and CH4 cycling associated with aerobic NH4+ (AAO) and CH4 oxidation (AMO). This is the case of the highly productive coastal upwelling area off Central Chile (36\u00b0 S), where we evaluated the importance of total chemolithoautotrophic vs. photoautotrophic production, the specific contributions of AAO and AMO to chemosynthesis and their role in gas cycling. Chemoautotrophy (involving bacteria and archaea) was studied at a time-series station during monthly (2002\u20132009) and seasonal cruises (January 2008, September 2008, January 2009) and was assessed in terms of dark carbon assimilation (CA), N2O and CH4 cycling, and the natural C isotopic ratio of particulate organic carbon (\u03b413POC). Total Integrated dark CA fluctuated between 19.4 and 2,924 mg C m\u22122 d\u22121. It was higher during active upwelling and represented on average 27% of the integrated photoautotrophic production (from 135 to 7,626 mg C m\u22122 d\u22121). At the oxycline, \u03b413POC averaged -22.209\u2030; this was significantly lighter compared to the surface (-19.674\u2030) and bottom layers (-20.716\u2030). 
This pattern, along with low NH4+ content and high accumulations of N2O, NO2- and NO3- within the oxycline indicates that chemolithoautotrophs and specifically AA oxydisers were active. Dark CA was reduced from 27 to 48% after addition of a specific AAO inhibitor (ATU) and from 24 to 76% with GC7, a specific archaea inhibitor, indicating that AAO and maybe AMO microbes (most of them archaea) were performing dark CA through oxidation of NH4+ and CH4. AAO produced N2O at rates from 8.88 to 43 nM d\u22121 and a fraction of it was effluxed into the atmosphere (up to 42.85 \u03bcmol m\u22122 d\u22121). AMO on the other hand consumed CH4 at rates between 0.41 and 26.8 nM d\u22121 therefore preventing its efflux to the atmosphere (up to 18.69 \u03bcmol m\u22122 d\u22121). These findings show that chemically driven chemoautotrophy (with NH4+ and CH4 acting as electron donors) could be more important than previously thought in upwelling ecosystems and open new questions concerning its future relevance.", "which Sampling year ?", "2009", 641.0, 645.0], ["The classical paradigm about marine N2 fixation establishes that this process is mainly constrained to nitrogen-poor tropical and subtropical regions, and sustained by the colonial cyanobacterium Trichodesmium spp. and diatom-diazotroph symbiosis. However, the application of molecular techniques allowed determining a high phylogenic diversity and a wide distribution of marine diazotrophs, which extends the range of ocean environments where biological N2 fixation may be relevant. Between February 2014 and December 2015, we carried out 10 one-day samplings in the upwelling system off NW Iberia in order to: 1) investigate the seasonal variability in the magnitude of N2 fixation, 2) determine its biogeochemical role as a mechanism of new nitrogen supply, and 3) quantify the main diazotrophs in the region under contrasting hydrographic regimes. Our results indicate that the magnitude of N2 fixation in this region was relatively low (0.001\u00b10.002 \u2013 0.095\u00b10.024 \u00b5mol N m-3 d-1), comparable to the lower-end of rates described for the subtropical NE Atlantic. Maximum rates were observed at surface during both upwelling and relaxation conditions. The comparison with nitrate diffusive fluxes revealed the minor role of N2 fixation (2 fixation activity detected in the region. Quantitative PCR targeting the nifH gene revealed the highest abundances of two sublineages of Candidatus Atelocyanobacterium thalassa or UCYN-A (UCYN-A1 and UCYN-A2) mainly at surface waters during upwelling and relaxation conditions, and of Gammaproteobacteria \u03b3-24774A11 at deep waters during downwelling. Maximum abundance for the three groups were up to 6.7 \u00d7 10^2, 1.5 \u00d7 10^3 and 2.4 \u00d7 10^4 nifH copies L-1, respectively. Our findings demonstrate measurable N2 fixation activity and presence of diazotrophs throughout the year in a nitrogen-rich temperate region.", "which Sampling year ?", "2014", 501.0, 505.0], ["We examined rates of N2 fixation from the surface to 2000 m depth in the Eastern Tropical South Pacific (ETSP) during El Ni\u00f1o (2010) and La Ni\u00f1a (2011). Replicated vertical profiles performed under oxygen-free conditions show that N2 fixation takes place both in euphotic and aphotic waters, with rates reaching 155 to 509 \u00b5mol N m\u22122 d\u22121 in 2010 and 24\u00b114 to 118\u00b187 \u00b5mol N m\u22122 d\u22121 in 2011. 
In the aphotic layers, volumetric N2 fixation rates were relatively low (<1.00 nmol N L\u22121 d\u22121), but when integrated over the whole aphotic layer, they accounted for 87\u201390% of total rates (euphotic+aphotic) for the two cruises. Phylogenetic studies performed in microcosm experiments confirmed the presence of diazotrophs in the deep waters of the Oxygen Minimum Zone (OMZ), which comprised non-cyanobacterial diazotrophs affiliated with nifH clusters 1K (predominantly \u03b1-proteobacteria), 1G (predominantly \u03b3-proteobacteria), and 3 (sulfate-reducing genera of the \u03b4-proteobacteria and Clostridium spp., Vibrio spp.). Organic and inorganic nutrient addition bioassays revealed that amino acids significantly stimulated N2 fixation in the core of the OMZ at all stations tested, as did simple carbohydrates at stations located nearest the coast of Peru/Chile. The episodic supply of these substrates from upper layers is hypothesized to explain the observed variability of N2 fixation in the ETSP.", "which Sampling year ?", "2010", 127.0, 131.0], ["We present new data on the nitrate (new production), ammonium, urea uptake rates and f\u2010ratios for the eastern Arabian Sea (10\u00b0 to 22\u00b0N) during the late winter (northeast) monsoon, 2004, including regions of a green Noctiluca scintillans bloom. A comparison of N\u2010uptake rates of the Noctiluca\u2010dominated northern zone to the southern non\u2010bloom zone indicates the presence of two biogeochemical regimes during the late winter monsoon: a highly productive north and a less productive south. The conservative estimates of photic zone\u2010integrated total N\u2010uptake and f\u2010ratio are high in the north (\u223c19 mmol N m\u22122 d\u22121 and 0.82, respectively) during the bloom and low (\u223c5.5 mmol N m\u22122 d\u22121 and 0.38, respectively) in the south. The present and earlier data imply persistence of high N\u2010uptake and f\u2010ratios during blooms year after year. This quantification of the enhanced seasonal sequestration of carbon is an important input to global biogeochemical models.", "which Sampling year ?", "2004", 180.0, 184.0], ["Abstract Picophytoplankton were investigated during spring 2015 and 2016, extending from near\u2010shore coastal waters to oligotrophic open waters in the eastern Indian Ocean (EIO). They were typically composed of Prochlorococcus (Pro), Synechococcus (Syn), and picoeukaryotes (PEuks). Pro dominated most regions of the entire EIO and were approximately 1\u20132 orders of magnitude more abundant than Syn and PEuks. Under the influence of physicochemical conditions induced by annual variations of circulations and water masses, no coherent patterns in the abundance and horizontal distribution of picophytoplankton were observed between spring 2015 and 2016. Although previous studies reported that the limiting effects of nutrients and heavy metals around coastal waters or upwelling zones could constrain Pro growth, Pro abundance showed a strong positive correlation with nutrients, indicating that an increase in nutrient availability, particularly in the oligotrophic EIO, could appreciably elevate their abundance. The exceptionally high abundance of picophytoplankton along the equator appeared to be associated with the advection processes supported by the Wyrtki jets. For vertical patterns of picophytoplankton, a simple conceptual model was built based upon physicochemical parameters.
However, Pro and PEuks simultaneously formed a subsurface maximum, while Syn was generally restricted to the upper waters, significantly correlating with the combined effects of temperature, light, and nutrient availability. The average chlorophyll a concentrations (Chl a) of picophytoplankton accounted for more than 49.6% and 44.9% of the total Chl a during the two years, respectively, suggesting that picophytoplankton constituted a significant proportion of the phytoplankton community in the whole EIO.", "which Sampling year ?", "2016", 68.0, 72.0]]}
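
For anyone consuming this dump programmatically, the record layout is implicit: each entry appears to be [context, question, answer, start, end], with start/end as float character offsets of the answer span within the context string. Below is a minimal validation sketch under those assumptions; the filename training_set.json is hypothetical, and the field order is read off the structure above rather than a documented schema.

import json

# Minimal sketch: load the dump and check that each [start, end) offset pair
# actually recovers the annotated answer from its context string.
# Assumptions (not documented anywhere in the file itself): the object above
# is saved as "training_set.json", and every record is
# [context, question, answer, start, end] with float character offsets.
with open("training_set.json", encoding="utf-8") as f:
    records = json.load(f)["training_set"]

for context, question, answer, start, end in records:
    span = context[int(start):int(end)]
    flag = "ok" if span == answer else f"mismatch: {span!r}"
    print(f"{question} -> {answer!r} [{flag}]")

Note that any edit to a context string made earlier than its annotated span shifts the offsets, so a check like this is worth running after any cleanup pass over the abstracts.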