{"training_set": [["The investigated new microwave plasma torch is based on an axially symmetric resonator. Microwaves of a frequency of 2.45 GHz are resonantly fed into this cavity resulting in a sufficiently high electric field to ignite plasma without any additional igniters as well as to maintain stable plasma operation. Optical emission spectroscopy was carried out to characterize a humid air plasma. OH\u2010bands were used to determine the gas rotational temperature Trot while the electron temperature was estimated by a Boltzmann plot of oxygen lines. Maximum temperatures of Trot of about 3600 K and electron temperatures of 5800 K could be measured. The electron density ne was estimated to ne \u2248 3 \u00b7 1020m\u20133 by using Saha's equation. Parametric studies in dependence of the gas flow and the supplied microwave power revealed that the maximum temperatures are independent of these parameters. However, the volume of the plasma increases with increasing microwave power and with a decrease of the gas flow. Considerations using collision frequencies, energy transfer times and power coupling provide an explanation of the observed phenomena: The optimal microwave heating is reached for electron\u2010neutral collision frequencies \u03bden being near to the angular frequency of the wave \u03c9 (\u00a9 2012 WILEY\u2010VCH Verlag GmbH & Co. KGaA, Weinheim)", "what Excitation_type ?", "GHz", 122.0, 125.0], ["A two-dimensional model of microwave-induced plasma (field frequency 2.45 GHz) in argon at atmospheric pressure is presented. The model describes in a self-consistent manner the gas flow and heat transfer, the in-coupling of the microwave energy into the plasma, and the reaction kinetics relevant to high-pressure argon plasma including the contribution of molecular ion species. The model provides the gas and electron temperature distributions, the electron, ion, and excited state number densities, and the power deposited into the plasma for given gas flow rate and temperature at the inlet, and input power of the incoming TEM microwave. For flow rate and absorbed microwave power typical for analytical applications (200-400 ml/min and 20 W), the plasma is far from thermodynamic equilibrium. The gas temperature reaches values above 2000 K in the plasma region, while the electron temperature is about 1 eV. The electron density reaches a maximum value of about 4 \u00d7 10(21) m(-3). The balance of the charged particles is essentially controlled by the kinetics of the molecular ions. For temperatures above 1200 K, quasineutrality of the plasma is provided by the atomic ions, and below 1200 K the molecular ion density exceeds the atomic ion density and a contraction of the discharge is observed. Comparison with experimental data is presented which demonstrates good quantitative and qualitative agreement.", "what Excitation_type ?", "GHz", 74.0, 77.0], ["The Integrated Microwave Atmospheric Plasma Source (IMAPlaS) operating with a microwave resonator at 2.45 GHz driven by a solid-state transistor oscillator generates a core plasma of high temperature (T > 1000 K), therefore producing reactive species such as NO very effectively. The effluent of the plasma source is much colder, which enables direct treatment of thermolabile materials or even living tissue. In this study the source was operated with argon, helium and nitrogen with gas flow rates between 0.3 and 1.0 slm. 
Depending on working gas and distance, axial gas temperatures between 30 and 250 \u00b0C were determined in front of the nozzle. Reactive species were identified by emission spectroscopy in the spectral range from vacuum ultraviolet to near infrared. The irradiance in the ultraviolet range was also measured. Using B. atrophaeus spores to test antimicrobial efficiency, we determined log10-reduction rates of up to a factor of 4.", "what Excitation_type ?", "GHz", 106.0, 109.0], ["An extensive electrical study was performed on a coaxial geometry atmospheric pressure plasma jet source in helium, driven by 30 kHz sine voltage. Two modes of operation were observed, a highly reproducible low-power mode that features the emission of one plasma bullet per voltage period and an erratic high-power mode in which micro-discharges appear around the grounded electrode. The minimum of power transfer efficiency corresponds to the transition between the two modes. Effective capacitance was identified as a varying property influenced by the discharge and the dissipated power. The charge carried by plasma bullets was found to be a small fraction of charge produced in the source irrespective of input power and configuration of the grounded electrode. The biggest part of the produced charge stays localized in the plasma source and below the grounded electrode, in the range 1.2\u20133.3 nC for ground length of 3\u20138 mm.", "what Excitation_type ?", "kHz", 129.0, 132.0], ["Providing easy to use methods for visual analysis of Linked Data is often hindered by the complexity of semantic technologies. On the other hand, semantic information inherent to Linked Data provides opportunities to support the user in interactively analysing the data. This paper provides a demonstration of an interactive, Web-based visualisation tool, the \"Vis Wizard\", which makes use of semantics to simplify the process of setting up visualisations, transforming the data and, most importantly, interactively analysing multiple datasets using brushing and linking methods.", "what implementation ?", "Vis Wizard", 361.0, 371.0], ["This paper describes the multi-document text summarization system NeATS. Using a simple algorithm, NeATS was among the top two performers of the DUC-01 evaluation.", "what implementation ?", "NeATS", 66.0, 71.0], ["In the past decade, much effort has been put into the visual representation of ontologies. However, present visualization strategies are not equipped to handle complex ontologies with many relations, leading to visual clutter and inefficient use of space. In this paper, we propose GLOW, a method for ontology visualization based on Hierarchical Edge Bundles. Hierarchical Edge Bundles is a new visually attractive technique for displaying relations in hierarchical data, such as concept structures formed by 'subclass-of' and 'type-of' relations. We have developed a visualization library based on OWL API, as well as a plug-in for Prot\u00e9g\u00e9, a well-known ontology editor. The displayed adjacency relations can be selected from an ontology using a set of common configurations, allowing for intuitive discovery of information. Our evaluation demonstrates that the GLOW visualization provides better visual clarity, and displays relations and complex ontologies better than the existing Prot\u00e9g\u00e9 visualization plug-in Jambalaya.", "what implementation ?", "GLOW", 282.0, 286.0], ["We present ERSS 2005, our entry to this year\u2019s DUC competition. 
With only slight modifications from last year\u2019s version to accommodate the more complex context information present in DUC 2005, we achieved a similar performance to last year\u2019s entry, ranking roughly in the upper third when examining the ROUGE-1 and Basic Element score. We also participated in the additional manual evaluation based on the new Pyramid method and performed further evaluations based on the Basic Elements method and the automatic generation of Pyramids. Interestingly, the ranking of our system differs greatly between the different measures; we attempt to analyse this effect based on correlations between the different results using the Spearman coefficient.", "what implementation ?", "ERSS 2005", 11.0, 20.0], ["In this paper, we present a novel exploratory visual analytic system called TIARA (Text Insight via Automated Responsive Analytics), which combines text analytics and interactive visualization to help users explore and analyze large collections of text. Given a collection of documents, TIARA first uses topic analysis techniques to summarize the documents into a set of topics, each of which is represented by a set of keywords. In addition to extracting topics, TIARA derives time-sensitive keywords to depict the content evolution of each topic over time. To help users understand the topic-based summarization results, TIARA employs several interactive text visualization techniques to explain the summarization results and seamlessly link such results to the original text. We have applied TIARA to several real-world applications, including email summarization and patient record analysis. To measure the effectiveness of TIARA, we have conducted several experiments. Our experimental results and initial user feedback suggest that TIARA is effective in aiding users in their exploratory text analytic tasks.", "what implementation ?", "TIARA", 76.0, 81.0], ["Gephi is an open source software for graph and network analysis. It uses a 3D render engine to display large networks in real-time and to speed up the exploration. A flexible and multi-task architecture brings new possibilities to work with complex data sets and produce valuable visual results. We present several key features of Gephi in the context of interactive exploration and interpretation of networks. It provides easy and broad access to network data and allows for spatializing, filtering, navigating, manipulating and clustering. Finally, by presenting dynamic features of Gephi, we highlight key aspects of dynamic network visualization.", "what implementation ?", "Gephi", 0.0, 5.0], ["This paper presents a method for the reuse of existing knowledge in UML software models. Our purpose is being able to adapt fragments of existing UML class diagrams in order to build domain ontologies, represented in OWL-DL, reducing the required amount of time and resources to create one from scratch. Our method is supported by a CASE tool, VisualWADE, and a developed plug-in, used for the management of ontologies and the generation of semantically tagged Web applications. In order to analyse the designed transformations between knowledge representation formalisms, UML and OWL, we have chosen a use case in the pharmacotherapeutic domain. 
Then, we discuss some of the most relevant aspects of the proposal and, finally, conclusions are obtained and future work briefly described.", "what implementation ?", "Visualwade", 344.0, 354.0], ["In this paper, we present the POMELO system developed for participating in the task 2 of the QALD-4 challenge. For translating natural language questions in SPARQL queries we exploit Natural Language Processing methods, semantic resources and RDF triples description. We designed a four-step method which pre-processes the question, performs an abstraction of the question, then builds a representation of the SPARQL query and finally generates the query. The system was ranked second out of three participating systems. It achieves good performance with 0.85 F-measure on the set of 25 test questions.", "what implementation ?", "POMELO", 30.0, 36.0], ["The success of Open Data initiatives has increased the amount of data available on the Web. Unfortunately, most of this data is only available in raw tabular form, what makes analysis and reuse quite difficult for non-experts. Linked Data principles allow for a more sophisticated approach by making explicit both the structure and semantics of the data. However, from the end-user viewpoint, they continue to be monolithic files completely opaque or difficult to explore by making tedious semantic queries. Our objective is to facilitate the user to grasp what kind of entities are in the dataset, how they are interrelated, which are their main properties and values, etc. Rhizomer is a tool for data publishing whose interface provides a set of components borrowed from Information Architecture (IA) that facilitate awareness of the dataset at hand. It automatically generates navigation menus and facets based on the kinds of things in the dataset and how they are described through metadata properties and values. Moreover, motivated by recent tests with end-users, it also provides the possibility to pivot among the faceted views created for each class of resources in the dataset.", "what implementation ?", "Rhizomer", 675.0, 683.0], ["We present and evaluate SumUM, a text summarization system that takes a raw technical text as input and produces an indicative informative summary. The indicative part of the summary identifies the topics of the document, and the informative part elaborates on some of these topics according to the reader's interest. SumUM motivates the topics, describes entities, and defines concepts. It is a first step for exploring the issue of dynamic summarization. This is accomplished through a process of shallow syntactic and semantic analysis, concept identification, and text regeneration. Our method was developed through the study of a corpus of abstracts written by professional abstractors. Relying on human judgment, we have evaluated indicativeness, informativeness, and text acceptability of the automatic summaries. The results thus far indicate good performance when compared with other summarization technologies.", "what implementation ?", "SumUM", 24.0, 29.0], ["This article presents GOFAISUM, a topicanswering and summarizing system developed for the main task of DUC 2007 by the Universit\u00e9 de Montr\u00e9al and the Universit\u00e9 de Gen\u00e8ve. We chose to use an all-symbolic approach, the only source of linguistic knowledge being FIPS, a multilingual syntactic parser. We further attempted to innovate by using XML and XSLT to both represent FIPS\u2019s analysis trees and to manipulate them to create summaries. 
We relied on tf\u00b7idf-like scores to ensure relevance to the topic and on syntactic pruning to achieve conciseness and grammaticality. NIST evaluation metrics show that our system performs well with respect to summary responsiveness and linguistic quality, but could be improved to reduce redundancy.", "what implementation ?", "GOFAIsum", 22.0, 30.0], ["We present the results of Michigan\u2019s participation in DUC 2004. Our system, MEAD, ranked as one of the top systems in four of the five tasks. We introduce our new feature, LexPageRank, a new measure of sentence centrality inspired by the prestige concept in social networks. LexPageRank gave promising results in multi-document summarization. Our approach for Task 5, biographical summarization, was simplistic, yet successful. We used regular expression matching to boost up the scores of the sentences that are likely to contain biographical information patterns.", "what implementation ?", "MEAD", 76.0, 80.0], ["This paper presents a novel multi-document summarization approach based on personalized pagerank (PPRSum). In this algorithm, we uniformly integrate various kinds of information in the corpus. At first, we train a salience model of sentence global features based on Naive Bayes Model. Secondly, we generate a relevance model for each corpus utilizing the query of it. Then, we compute the personalized prior probability for each sentence in the corpus utilizing the salience model and the relevance model both. With the help of personalized prior probability, a Personalized PageRank ranking process is performed depending on the relationships among all sentences in the corpus. Additionally, the redundancy penalty is imposed on each sentence. The summary is produced by choosing the sentences with both high query-focused information richness and high information novelty. Experiments on DUC2007 are performed and the ROUGE evaluation results show that PPRSum ranks between the 1st and the 2nd systems on DUC2007 main task.", "what implementation ?", "PPRSum", 98.0, 104.0], ["Intui3 is one of the participating systems at the fourth evaluation campaign on multilingual question answering over linked data, QALD4. The system accepts as input a question formulated in natural language (in English), and uses syntactic and semantic information to construct its interpretation with respect to a given database of RDF triples (in this case DBpedia 3.9). The interpretation is mapped to the corresponding SPARQL query, which is then run against a SPARQL endpoint to retrieve the answers to the initial question. Intui3 competed in the challenge called Task 1: Multilingual question answering over linked data, which offered 200 training questions and 50 test questions in 7 different languages. It obtained an F-measure of 0.24 by providing a correct answer to 10 of the test questions and a partial answer to 4 of them.", "what implementation ?", "Intui3", 0.0, 6.0], ["This paper describes and analyzes how the FEMsum system deals with DUC 2007 tasks of providing summary-length answers to complex questions, both background and just-the-news summaries. We participated in producing background summaries for the main task with the FEMsum approach that obtained better results in our last year participation. 
The FEMsum semantic based approach was adapted to deal with the update pilot task with the aim of producing just-the-news summaries.", "what implementation ?", "FEMsum", 42.0, 48.0], ["Visualizing Resource Description Framework (RDF) data to support decision-making processes is an important and challenging aspect of consuming Linked Data. With the recent development of JavaScript libraries for data visualization, new opportunities for Web-based visualization of Linked Data arise. This paper presents an extensive evaluation of JavaScript-based libraries for visualizing RDF data. A set of criteria has been devised for the evaluation and 15 major JavaScript libraries have been analyzed against the criteria. The two JavaScript libraries with the highest score in the evaluation acted as the basis for developing LODWheel (Linked Open Data Wheel) - a prototype for visualizing Linked Open Data in graphs and charts - introduced in this paper. This way of visualizing RDF data leads to a great deal of challenges related to data-categorization and connecting data resources together in new ways, which are discussed in this paper.", "what implementation ?", "LODWheel", 633.0, 641.0], ["LodLive project, http://en.lodlive.it/, provides a demonstration of the use of Linked Data standard (RDF, SPARQL) to browse RDF resources. The application aims to spread linked data principles with a simple and friendly interface and reusable techniques. In this report we present an overview of the potential of LodLive, mentioning tools and methodologies that were used to create it.", "what implementation ?", "Lodlive", 0.0, 7.0], ["Recently, the amount of semantic data available in the Web has increased dramatically. The potential of this vast amount of data is enormous but in most cases it is difficult for users to explore and use this data, especially for those without experience with Semantic Web technologies. Applying information visualization techniques to the Semantic Web helps users to easily explore large amounts of data and interact with them. In this article we devise a formal Linked Data Visualization Model (LDVM), which allows to dynamically connect data with visualizations. We report about our implementation of the LDVM comprising a library of generic visualizations that enable both users and data analysts to get an overview on, visualize and explore the Data Web and perform detailed analyzes on Linked Data.", "what implementation ?", "LDVM", 497.0, 501.0], ["NEWSINESSENCE is a system for finding, visualizing and summarizing a topic-based cluster of news stories. In the generic scenario for NEWSINESSENCE, a user selects a single news story from a news Web site. Our system then searches other live sources of news for other stories related to the same event and produces summaries of a subset of the stories that it finds, according to parameters specified by the user.", "what implementation ?", "NewsInEssence", 0.0, 13.0], ["We present a question answering system (CASIA@V2) over Linked Data (DBpedia), which translates natural language questions into structured queries automatically. Existing systems usually adopt a pipeline framework, which contains four major steps: 1) Decomposing the question and detecting candidate phrases; 2) mapping the detected phrases into semantic items of Linked Data; 3) grouping the mapped semantic items into semantic triples; and 4) generating the rightful SPARQL query. 
We present a jointly learning framework using Markov Logic Network (MLN) for phrase detection, phrases mapping to semantic items and semantic items grouping. We formulate the knowledge for resolving the ambiguities in three steps of QALD as first-order logic clauses in a MLN. We evaluate our approach on QALD-4 test dataset and achieve an F-measure score of 0.36, an average precision of 0.32 and an average recall of 0.40 over 50 questions.", "what implementation ?", "CASIA", 40.0, 45.0], ["Datasets published in the LOD cloud are recommended to follow some best practice in order to be 4-5 stars Linked Data compliant. They can often be consumed and accessed by different means such as API access, bulk download or as linked data fragments, but most of the time, a SPARQL endpoint is also provided. While the LOD cloud keeps growing, having a quick glimpse of those datasets is getting harder and there is a need to develop new methods enabling to detect automatically what an arbitrary dataset is about and to recommend visualizations for data samples. We consider that \"a visualization is worth a million triples\", and in this paper, we propose a novel approach that mines the content of datasets and automatically generates visualizations. Our approach is directly based on the usage of SPARQL queries that will detect the important categories of a dataset and that will specifically consider the properties used by the objects which have been interlinked via owl:sameAs links. We then propose to associate type of visualization for those categories. We have implemented this approach into a so-called Linked Data Vizualization Wizard (LDVizWiz).", "what implementation ?", "LDVizWiz", 1149.0, 1157.0], ["We present Paged Graph Visualization (PGV), a new semi-autonomous tool for RDF data exploration and visualization. PGV consists of two main components: a) the \"PGV explorer\" and b) the \"RDF pager\" module utilizing BRAHMS, our high performance main-memory RDF storage system. Unlike existing graph visualization techniques which attempt to display the entire graph and then filter out irrelevant data, PGV begins with a small graph and provides the tools to incrementally explore and visualize relevant data of very large RDF ontologies. We implemented several techniques to visualize and explore hot spots in the graph, i.e. nodes with large numbers of immediate neighbors. In response to the user-controlled, semantics-driven direction of the exploration, the PGV explorer obtains the necessary sub-graphs from the RDF pager and enables their incremental visualization leaving the previously laid out sub-graphs intact. We outline the problem of visualizing large RDF data sets, discuss our interface and its implementation, and through a controlled experiment we show the benefits of PGV.", "what implementation ?", "PGV", 38.0, 41.0], ["We present QAKiS, a system for open domain Question Answering over linked data. It addresses the problem of question interpretation as a relation-based match, where fragments of the question are matched to binary relations of the triple store, using relational textual patterns automatically collected. 
For the demo, the relational patterns are automatically extracted from Wikipedia, while DBpedia is the RDF data set to be queried using a natural language interface.", "what implementation ?", "QAKiS", 11.0, 16.0], ["A wealth of information has recently become available as browsable RDF data on the Web, but the selection of client applications to interact with this Linked Data remains limited. We show how to browse Linked Data with Fenfire, a Free and Open Source Software RDF browser and editor that employs a graph view and focuses on an engaging and interactive browsing experience. This sets Fenfire apart from previous table- and outline-based Linked Data browsers.", "what implementation ?", "Fenfire", 219.0, 226.0], ["Topic representation mismatch is a key problem in topic-oriented summarization for the specified topic is usually too short to understand/interpret. This paper proposes a novel adaptive model for summarization, AdaSum, under the assumption that the summary and the topic representation can be mutually boosted. AdaSum aims to simultaneously optimize the topic representation and extract effective summaries. This model employs a mutual boosting process to minimize the topic representation mismatch for base summarizers. Furthermore, a linear combination of base summarizers is proposed to further reduce the topic representation mismatch from the diversity of base summarizers with a general learning framework. We prove that the training process of AdaSum can enhance the performance measure used. Experimental results on DUC 2007 dataset show that AdaSum significantly outperforms the baseline methods for summarization (e.g. MRP, LexRank, and GSPS).", "what implementation ?", "AdaSum", 211.0, 217.0], ["We present a methodology for summarization of news about current events in the form of briefings that include appropriate background (historical) information. The system that we developed, SUMMONS, uses the output of systems developed for the DARPA Message Understanding Conferences to generate summaries of multiple documents on the same or related events, presenting similarities and differences, contradictions, and generalizations among sources of information. We describe the various components of the system, showing how information from multiple articles is combined, organized into a paragraph, and finally, realized as English sentences. A feature of our work is the extraction of descriptions of entities such as people and places for reuse to enhance a briefing.", "what implementation ?", "SUMMONS", 189.0, 196.0], ["With the continued growth of online semantic information, the processes of searching and managing this massive scale and heterogeneous content have become increasingly challenging. In this work, we present PowerAqua, an ontologybased Question Answering system that is able to answer queries by locating and integrating information, which can be distributed across heterogeneous semantic resources. We provide a complete overview of the system including: the research challenges that it addresses, its architecture, the evaluations that have been conducted to test it, and an in-depth discussion showing how PowerAqua effectively supports users in querying and exploring Semantic Web content.", "what implementation ?", "PowerAqua", 206.0, 215.0], ["Automatic Document Summarization is a highly interdisciplinary research area related with computer science as well as cognitive psychology. 
This Summarization is to compress an original document into a summarized version by extracting almost all of the essential concepts with text mining techniques. This research focuses on developing a statistical automatic text summarization approach, Kmixture probabilistic model, to enhancing the quality of summaries. KSRS employs the K-mixture probabilistic model to establish term weights in a statistical sense, and further identifies the term relationships to derive the semantic relationship significance (SRS) of nouns. Sentences are ranked and extracted based on their semantic relationship significance values. The objective of this research is thus to propose a statistical approach to text summarization. We propose a K-mixture semantic relationship significance (KSRS) approach to enhancing the quality of document summary results. The K-mixture probabilistic model is used to determine the term weights. Term relationships are then investigated to develop the semantic relationship of nouns that manifests sentence semantics. Sentences with significant semantic relationship, nouns are extracted to form the summary accordingly.", "what implementation ?", "KSRS", 459.0, 463.0], ["The need to visualize large social networks is growing as hardware capabilities make analyzing large networks feasible and many new data sets become available. Unfortunately, the visualizations in existing systems do not satisfactorily resolve the basic dilemma of being readable both for the global structure of the network and also for detailed analysis of local communities. To address this problem, we present NodeTrix, a hybrid representation for networks that combines the advantages of two traditional representations: node-link diagrams are used to show the global structure of a network, while arbitrary portions of the network can be shown as adjacency matrices to better support the analysis of communities. A key contribution is a set of interaction techniques. These allow analysts to create a NodeTrix visualization by dragging selections to and from node-link and matrix forms, and to flexibly manipulate the NodeTrix representation to explore the dataset and create meaningful summary visualizations of their findings. Finally, we present a case study applying NodeTrix to the analysis of the InfoVis 2004 coauthorship dataset to illustrate the capabilities of NodeTrix as both an exploration tool and an effective means of communicating results.", "what implementation ?", "NodeTrix", 414.0, 422.0], ["We present UWN, a large multilingual lexical knowledge base that describes the meanings and relationships of words in over 200 languages. This paper explains how link prediction, information integration and taxonomy induction methods have been used to build UWN based on WordNet and extend it with millions of named entities from Wikipedia. We additionally introduce extensions to cover lexical relationships, frame-semantic knowledge, and language data. An online interface provides human access to the data, while a software API enables applications to look up over 16 million words and names.", "what implementation ?", "UWN", 11.0, 14.0], ["Online news recommender systems aim to address the information explosion of news and make personalized recommendation for users. In general, news language is highly condensed, full of knowledge entities and common sense. However, existing methods are unaware of such external knowledge and cannot fully discover latent knowledge-level connections among news. 
The recommended results for a user are consequently limited to simple patterns and cannot be extended reasonably. To solve the above problem, in this paper, we propose a deep knowledge-aware network (DKN) that incorporates knowledge graph representation into news recommendation. DKN is a content-based deep recommendation framework for click-through rate prediction. The key component of DKN is a multi-channel and word-entity-aligned knowledge-aware convolutional neural network (KCNN) that fuses semantic-level and knowledge-level representations of news. KCNN treats words and entities as multiple channels, and explicitly keeps their alignment relationship during convolution. In addition, to address users\u2019 diverse interests, we also design an attention module in DKN to dynamically aggregate a user\u2019s history with respect to current candidate news. Through extensive experiments on a real online news platform, we demonstrate that DKN achieves substantial gains over state-of-the-art deep recommendation models. We also validate the efficacy of the usage of knowledge in DKN.", "what Machine Learning Method ?", "CNN", NaN, NaN], ["With the revival of neural networks, many studies try to adapt powerful sequential neural models, \u0131e Recurrent Neural Networks (RNN), to sequential recommendation. RNN-based networks encode historical interaction records into a hidden state vector. Although the state vector is able to encode sequential dependency, it still has limited representation power in capturing complicated user preference. It is difficult to capture fine-grained user preference from the interaction sequence. Furthermore, the latent vector representation is usually hard to understand and explain. To address these issues, in this paper, we propose a novel knowledge enhanced sequential recommender. Our model integrates the RNN-based networks with Key-Value Memory Network (KV-MN). We further incorporate knowledge base (KB) information to enhance the semantic representation of KV-MN. RNN-based models are good at capturing sequential user preference, while knowledge-enhanced KV-MNs are good at capturing attribute-level user preference. By using a hybrid of RNNs and KV-MNs, it is expected to be endowed with both benefits from these two components. The sequential preference representation together with the attribute-level preference representation are combined as the final representation of user preference. With the incorporation of KB information, our model is also highly interpretable. To our knowledge, it is the first time that sequential recommender is integrated with external memories by leveraging large-scale KB information.", "what Machine Learning Method ?", "RNN", 128.0, 131.0], ["Providing model-generated explanations in recommender systems is important to user experience. State-of-the-art recommendation algorithms\u2014especially the collaborative filtering (CF)-based approaches with shallow or deep models\u2014usually work with various unstructured information sources for recommendation, such as textual reviews, visual images, and various implicit or explicit feedbacks. Though structured knowledge bases were considered in content-based approaches, they have been largely ignored recently due to the availability of vast amounts of data and the learning power of many complex models. However, structured knowledge bases exhibit unique advantages in personalized recommendation systems. 
When the explicit knowledge about users and items is considered for recommendation, the system could provide highly customized recommendations based on users\u2019 historical behaviors and the knowledge is helpful for providing informed explanations regarding the recommended items. A great challenge for using knowledge bases for recommendation is how to integrate large-scale structured and unstructured data, while taking advantage of collaborative filtering for highly accurate performance. Recent achievements in knowledge-base embedding (KBE) sheds light on this problem, which makes it possible to learn user and item representations while preserving the structure of their relationship with external knowledge for explanation. In this work, we propose to explain knowledge-base embeddings for explainable recommendation. Specifically, we propose a knowledge-base representation learning framework to embed heterogeneous entities for recommendation, and based on the embedded knowledge base, a soft matching algorithm is proposed to generate personalized explanations for the recommended items. Experimental results on real-world e-commerce datasets verified the superior recommendation performance and the explainability power of our approach compared with state-of-the-art baselines.", "what Machine Learning Method ?", "Collaborative Filtering", 153.0, 176.0], ["Knowledge graph embedding aims to learn distributed representations for entities and relations, and is proven to be effective in many applications. Crossover interactions -- bi-directional effects between entities and relations --- help select related information when predicting a new triple, but haven't been formally discussed before. In this paper, we propose CrossE, a novel knowledge graph embedding which explicitly simulates crossover interactions. It not only learns one general embedding for each entity and relation as most previous methods do, but also generates multiple triple specific embeddings for both of them, named interaction embeddings. We evaluate embeddings on typical link prediction tasks and find that CrossE achieves state-of-the-art results on complex and more challenging datasets. Furthermore, we evaluate embeddings from a new perspective -- giving explanations for predicted triples, which is important for real applications. In this work, an explanation for a triple is regarded as a reliable closed-path between the head and the tail entity. Compared to other baselines, we show experimentally that CrossE, benefiting from interaction embeddings, is more capable of generating reliable explanations to support its predictions.", "what Machine Learning Method ?", "Knowledge Graph Embedding", 0.0, 25.0], ["Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meal. However, some of them need human help for calorie estimation, and even if it is automatic, food categories are often limited or images from multiple viewpoints are required. Then, it is not yet achieved to estimate food calorie with practical accuracy and estimating food calories from a food photo is an unsolved problem. Therefore, in this paper, we propose estimating food calorie from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning. 
Since there exists a strong correlation between food calories and food categories, ingredients and cooking directions information in general, we expect that simultaneous training of them brings performance boosting compared to independent single training. To this end, we use a multi-task CNN [1]. In addition, in this research, we construct two kinds of datasets that is a dataset of calorie-annotated recipe collected from Japanese recipe sites on the Web and a dataset collected from an American recipe site. In this experiment, we trained multi-task and single-task CNNs. As a result, the multi-task CNN achieved the better performance on both food category estimation and food calorie estimation than single-task CNNs. For the Japanese recipe dataset, by introducing a multi-task CNN, 0.039 were improved on the correlation coefficient, while for the American recipe dataset, 0.090 were raised compared to the result by the single-task CNN.", "what Machine Learning Method ?", "multi-task CNN", 898.0, 912.0], ["Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meal. However, some of them need human help for calorie estimation, and even if it is automatic, food categories are often limited or images from multiple viewpoints are required. Then, it is not yet achieved to estimate food calorie with practical accuracy and estimating food calories from a food photo is an unsolved problem. Therefore, in this paper, we propose estimating food calorie from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning. Since there exists a strong correlation between food calories and food categories, ingredients and cooking directions information in general, we expect that simultaneous training of them brings performance boosting compared to independent single training. To this end, we use a multi-task CNN [1]. In addition, in this research, we construct two kinds of datasets that is a dataset of calorie-annotated recipe collected from Japanese recipe sites on the Web and a dataset collected from an American recipe site. In this experiment, we trained multi-task and single-task CNNs. As a result, the multi-task CNN achieved the better performance on both food category estimation and food calorie estimation than single-task CNNs. For the Japanese recipe dataset, by introducing a multi-task CNN, 0.039 were improved on the correlation coefficient, while for the American recipe dataset, 0.090 were raised compared to the result by the single-task CNN.", "what Machine Learning Method ?", "Multi-task CNN", 898.0, 912.0], ["A rapidly growing amount of content posted online, such as food recipes, opens doors to new exciting applications at the intersection of vision and language. In this work, we aim to estimate the calorie amount of a meal directly from an image by learning from recipes people have published on the Internet, thus skipping time-consuming manual data annotation. Since there are few large-scale publicly available datasets captured in unconstrained environments, we propose the pic2kcal benchmark comprising 308 000 images from over 70 000 recipes including photographs, ingredients, and instructions. To obtain nutritional information of the ingredients and automatically determine the ground-truth calorie value, we match the items in the recipes with structured information from a food item database. 
We evaluate various neural networks for regression of the calorie quantity and extend them with the multi-task paradigm. Our learning procedure combines the calorie estimation with prediction of proteins, carbohydrates, and fat amounts as well as a multi-label ingredient classification. Our experiments demonstrate clear benefits of multi-task learning for calorie estimation, surpassing the single-task calorie regression by 9.9%. To encourage further research on this task, we make the code for generating the dataset and the models publicly available.", "what Machine Learning Method ?", "Multi-Task Learning", 1135.0, 1154.0], ["Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meal. However, some of them need human help for calorie estimation, and even if it is automatic, food categories are often limited or images from multiple viewpoints are required. Then, it is not yet achieved to estimate food calorie with practical accuracy and estimating food calories from a food photo is an unsolved problem. Therefore, in this paper, we propose estimating food calorie from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning. Since there exists a strong correlation between food calories and food categories, ingredients and cooking directions information in general, we expect that simultaneous training of them brings performance boosting compared to independent single training. To this end, we use a multi-task CNN [1]. In addition, in this research, we construct two kinds of datasets that is a dataset of calorie-annotated recipe collected from Japanese recipe sites on the Web and a dataset collected from an American recipe site. In this experiment, we trained multi-task and single-task CNNs. As a result, the multi-task CNN achieved the better performance on both food category estimation and food calorie estimation than single-task CNNs. For the Japanese recipe dataset, by introducing a multi-task CNN, 0.039 were improved on the correlation coefficient, while for the American recipe dataset, 0.090 were raised compared to the result by the single-task CNN.", "what Machine Learning Method ?", "Single-task CNN", 1549.0, 1564.0], ["DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. 
Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.", "what Class hierarchy extraction/learning ?", "DTD", 0.0, 3.0], ["DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.", "what Concepts extraction/learning ?", "DTD", 0.0, 3.0], ["DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.", "what Input format ?", "DTD", 0.0, 3.0], ["Metabolic pathways are an important part of systems biology research since they illustrate complex interactions between metabolites, enzymes, and regulators. 
Pathway maps are drawn to elucidate metabolism or to set data in a metabolic context. We present MetaboMAPS, a web-based platform to visualize numerical data on individual metabolic pathway maps. Metabolic maps can be stored, distributed and downloaded in SVG-format. MetaboMAPS was designed for users without computational background and supports pathway sharing without strict conventions. In addition to existing applications that established standards for well-studied pathways, MetaboMAPS offers a niche for individual, customized pathways beyond common knowledge, supporting ongoing research by creating publication-ready visualizations of experimental data.", "what Input format ?", "SVG", 414.0, 417.0], ["Significant amounts of knowledge in science and technology have so far not been published as Linked Open Data but are contained in the text and tables of legacy PDF publications. Making such information available as RDF would, for example, provide direct access to claims and facilitate surveys of related work. A lot of valuable tabular information that till now only existed in PDF documents would also finally become machine understandable. Instead of studying scientific literature or engineering patents for months, it would be possible to collect such input by simple SPARQL queries. The SemAnn approach enables collaborative annotation of text and tables in PDF documents, a format that is still the common denominator of publishing, thus maximising the potential user base. The resulting annotations in RDF format are available for querying through a SPARQL endpoint. To incentivise users with an immediate benefit for making the effort of annotation, SemAnn recommends related papers, taking into account the hierarchical context of annotations in a novel way. We evaluated the usability of SemAnn and the usefulness of its recommendations by analysing annotations resulting from tasks assigned to test users and by interviewing them. While the evaluation shows that even few annotations lead to a good recall, we also observed unexpected, serendipitous recommendations, which confirms the merit of our low-threshold annotation support for the crowd.", "what Input format ?", "PDF", 161.0, 164.0], ["In this paper, we present a tool called X2OWL that aims at building an OWL ontology from an XML datasource. This method is based on XML schema to automatically generate the ontology structure, as well as, a set of mapping bridges. The presented method also includes a refinement step that allows to clean the mapping bridges and possibly to restructure the generated ontology.", "what Input format ?", "XML schema", 132.0, 142.0], ["Tabular data is an abundant source of information on the Web, but remains mostly isolated from the latter's interconnections since tables lack links and computer-accessible descriptions of their structure. In other words, the schemas of these tables -- attribute names, values, data types, etc. -- are not explicitly stored as table metadata. Consequently, the structure that these tables contain is not accessible to the crawlers that power search engines and thus not accessible to user search queries. We address this lack of structure with a new method for leveraging the principles of table construction in order to extract table schemas. Discovering the schema by which a table is constructed is achieved by harnessing the similarities and differences of nearby table rows through the use of a novel set of features and a feature processing scheme. 
The schemas of these data tables are determined using a classification technique based on conditional random fields in combination with a novel feature encoding method called logarithmic binning, which is specifically designed for the data table extraction task. Our method provides considerable improvement over the well-known WebTables schema extraction method. In contrast with previous work that focuses on extracting individual relations, our method excels at correctly interpreting full tables, thereby being capable of handling general tables such as those found in spreadsheets, instead of being restricted to HTML tables as is the case with the WebTables method. We also extract additional schema characteristics, such as row groupings, which are important for supporting information retrieval tasks on tabular data.", "what Input format ?", "HTML", 1473.0, 1477.0], ["Ontology provides a shared and reusable piece of knowledge about a specific domain, and has been applied in many fields, such as semantic Web, e-commerce and information retrieval, etc. However, building ontology by hand is a very hard and error-prone task. Learning ontology from existing resources is a good solution. Because relational database is widely used for storing data and OWL is the latest standard recommended by W3C, this paper proposes an approach of learning OWL ontology from data in relational database. Compared with existing methods, the approach can acquire ontology from relational database automatically by using a group of learning rules instead of using a middle model. In addition, it can obtain OWL ontology, including the classes, properties, properties characteristics, cardinality and instances, while none of existing methods can acquire all of them. The proposed learning rules have been proven to be correct by practice.", "what Input format ?", "Relational data", NaN, NaN], ["The Web contains vast amounts of HTML tables. Most of these tables are used for layout purposes, but a small subset of the tables is relational, meaning that they contain structured data describing a set of entities [2]. As these relational Web tables cover a very wide range of different topics, there is a growing body of research investigating the utility of Web table data for completing cross-domain knowledge bases [6], for extending arbitrary tables with additional attributes [7, 4], as well as for translating data values [5]. The existing research shows the potentials of Web tables. However, comparing the performance of the different systems is difficult as up till now each system is evaluated using a different corpus of Web tables and as most of the corpora are owned by large search engine companies and are thus not accessible to the public. In this poster, we present a large public corpus of Web tables which contains over 233 million tables and has been extracted from the July 2015 version of the CommonCrawl. By publishing the corpus as well as all tools that we used to extract it from the crawled data, we intend to provide a common ground for evaluating Web table systems. The main difference of the corpus compared to an earlier corpus that we extracted from the 2012 version of the CommonCrawl as well as the corpus extracted by Eberius et al. [3] from the 2014 version of the CommonCrawl is that the current corpus contains a richer set of metadata for each table. 
This metadata includes table-specific information such as table orientation, table caption, header row, and key column, but also context information such as the text before and after the table, the title of the HTML page, as well as timestamp information that was found before and after the table. The context information can be useful for recovering the semantics of a table [7]. The timestamp information is crucial for fusing time-dependent data, such as alternative population numbers for a city [8].", "what Input format ?", "HTML", 33.0, 37.0], ["In this work, we offer an approach to combine standard multimedia analysis techniques with knowledge drawn from conceptual metadata provided by domain experts of a specialized scholarly domain, to learn a domain-specific multimedia ontology from a set of annotated examples. A standard Bayesian network learning algorithm that learns structure and parameters of a Bayesian network is extended to include media observables in the learning. An expert group provides domain knowledge to construct a basic ontology of the domain as well as to annotate a set of training videos. These annotations help derive the associations between high-level semantic concepts of the domain and low-level MPEG-7 based features representing audio-visual content of the videos. We construct a more robust and refined version of this ontology by learning from this set of conceptually annotated videos. To encode this knowledge, we use MOWL, a multimedia extension of Web Ontology Language (OWL) which is capable of describing domain concepts in terms of their media properties and of capturing the inherent uncertainties involved. We use the ontology specified knowledge for recognizing concepts relevant to a video to annotate fresh addition to the video database with relevant concepts in the ontology. These conceptual annotations are used to create hyperlinks in the video collection, to provide an effective video browsing interface to the user.", "what Input format ?", "Video", 1189.0, 1194.0], ["XML has become the de-facto standard of data exchange format in E-businesses. Although XML can support syntactic inter-operability, problems arise when data sources represented as XML documents are needed to be integrated. The reason is that XML lacks support for efficient sharing of conceptualization. The Web Ontology Language (OWL) can play an important role here as it can enable semantic inter-operability, and it supports the representation of domain knowledge using classes, properties and instances for applications. In many applications it is required to convert huge XML documents automatically to OWL ontologies, which is receiving a lot of attention. There are some existing converters for this job. Unfortunately they have serious shortcomings, e.g., they do not address the handling of characteristics like internal references, (transitive) import(s), include etc. which are commonly used in XML Schemas. To alleviate these drawbacks, we propose a new framework for mapping XML to OWL automatically. We illustrate our technique on examples to show the efficacy of our approach. We also provide the performance measures of our approach on some standard datasets. We also check the correctness of the conversion process.", "what Input format ?", "XML schema", NaN, NaN], ["The aims of XML data conversion to ontologies are the indexing, integration and enrichment of existing ontologies with knowledge acquired from these sources. 
The contribution of this paper consists in providing a classification of the approaches used for the conversion of XML documents into OWL ontologies. This classification underlines the usage profile of each conversion method, providing a clear description of the advantages and drawbacks belonging to each method. Hence, this paper focuses on two main processes, which are ontology enrichment and ontology population using XML data. Ontology enrichment is related to the schema of the ontology (TBox), and ontology population is related to an individual (Abox). In addition, the ontologies described in these methods are based on formal languages of the Semantic Web such as OWL (Ontology Web Language) or RDF (Resource Description Framework). These languages are formal because the semantics are formally defined and take advantage of the Description Logics. In contrast, XML data sources are without formal semantics. The XML language is used to store, export and share data between processes able to process the specific data structure. However, even if the semantics is not explicitly expressed, data structure contains the universe of discourse by using a qualified vocabulary regarding a consensual agreement. In order to formalize this semantics, the OWL language provides rich logical constraints. Therefore, these logical constraints are evolved in the transformation of XML documents into OWL documents, allowing the enrichment and the population of the target ontology. To design such a transformation, the current research field establishes connections between OWL constructs (classes, predicates, simple or complex data types, etc.) and XML constructs (elements, attributes, element lists, etc.). Two different approaches for the transformation process are exposed. The instance approaches are based on XML documents without any schema associated. The validation approaches are based on the XML schema and document validated by the associated schema. The second approaches benefit from the schema definition to provide automated transformations with logic constraints. Both approaches are discussed in the text.", "what Input format ?", "XML document", NaN, NaN], ["The objective of this paper is to present the role of Ontology Learning Process in supporting an ontology engineer for creating and maintaining ontologies from textual resources. The knowledge structures that interest us are legal domain-specific ontologies. We will use these ontologies to build legal domain ontology for a Lebanese legal knowledge based system. The domain application of this work is the Lebanese criminal system. Ontologies can be learnt from various sources, such as databases, structured and unstructured documents. Here, the focus is on the acquisition of ontologies from unstructured text, provided as input. In this work, the Ontology Learning Process represents a knowledge extraction phase using Natural Language Processing techniques. The resulted ontology is considered as inexpressive ontology. There is a need to reengineer it in order to build a complete, correct and more expressive domain-specific ontology.", "what Input format ?", "Text", 608.0, 612.0], ["Ontologies have proven beneficial in different settings that make use of textual reviews. However, manually constructing ontologies is a laborious and time-consuming process in need of automation. We propose a novel methodology for automatically extracting ontologies, in the form of meronomies, from product reviews, using a very limited amount of hand-annotated training data. 
We show that the ontologies generated by our method outperform hand-crafted ontologies (WordNet) and ontologies extracted by existing methods (Text2Onto and COMET) in several, diverse settings. Specifically, our generated ontologies outperform the others when evaluated by human annotators as well as on an existing Q&A dataset from Amazon. Moreover, our method is better able to generalise, in capturing knowledge about unseen products. Finally, we consider a real-world setting, showing that our method is better able to determine recommended products based on their reviews, in alternative to using Amazon\u2019s standard score aggregations.", "what Input format ?", "Text", NaN, NaN], ["DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.", "what Input format ?", "XML document", 908.0, 920.0], ["Today most of the data exchanged between information systems is done with the help of the XML syntax. Unfortunately when these data have to be integrated, the integration becomes difficult because of the semantics' heterogeneity. Consequently, leading researches in the domain of database systems are moving to semantic model in order to store data and its semantics definition. To benefit from these new systems and technologies, and to integrate different data sources, a flexible method consists in populating an existing OWL ontology from XML data. In paper we present such a method based on the definition of a graph which represents rules that drive the populating process. The graph of rules facilitates the mapping definition that consists in mapping elements from an XSD schema to the elements of the OWL schema.", "what Input format ?", "XSD", 776.0, 779.0], ["DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. 
Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.", "what Properties hierarchy extraction/learning ?", "DTD", 0.0, 3.0], ["This paper describes the Duluth system that participated in SemEval-2021 Task 11, NLP Contribution Graph. It details the extraction of contribution sentences and scientific entities and their relations from scholarly articles in the domain of Natural Language Processing. Our solution uses deBERTa for multi-class sentence classification to extract the contributing sentences and their type, and dependency parsing to outline each sentence and extract subject-predicate-object triples. Our system ranked fifth of seven for Phase 1: end-to-end pipeline, sixth of eight for Phase 2 Part 1: phrases and triples, and fifth of eight for Phase 2 Part 2: triples extraction.", "what Team Name ?", "DULUTH", 25.0, 31.0], ["This paper describes the system we built as the YNU-HPCC team in the SemEval-2021 Task 11: NLPContributionGraph. This task involves first identifying sentences in the given natural language processing (NLP) scholarly articles that reflect research contributions through binary classification; then identifying the core scientific terms and their relation phrases from these contribution sentences by sequence labeling; and finally, these scientific terms and relation phrases are categorized, identified, and organized into subject-predicate-object triples to form a knowledge graph with the help of multiclass classification and multi-label classification. We developed a system for this task using a pre-trained language representation model called BERT that stands for Bidirectional Encoder Representations from Transformers, and achieved good results. The average F1-score for Evaluation Phase 2, Part 1 was 0.4562 and ranked 7th, and the average F1-score for Evaluation Phase 2, Part 2 was 0.6541, and also ranked 7th.", "what Team Name ?", "YNU-HPCC", 48.0, 56.0], ["Online Community Question Answering Forums (cQA) have gained massive popularity within recent years. The rise in users for such forums have led to the increase in the need for automated evaluation for question comprehension and fact evaluation of the answers provided by various participants in the forum. Our team, Fermi, participated in sub-task A of Task 8 at SemEval 2019 - which tackles the first problem in the pipeline of factual evaluation in cQA forums, i.e., deciding whether a posed question asks for a factual information, an opinion/advice or is just socializing. 
This information is highly useful in segregating factual questions from non-factual ones, which helps in organizing the questions into useful categories and trims down the problem space for the next task in the pipeline for fact evaluation among the available answers. Our system uses the embeddings obtained from Universal Sentence Encoder combined with XGBoost for the classification sub-task A. We also evaluate other combinations of embeddings and off-the-shelf machine learning algorithms to demonstrate the efficacy of the various representations and their combinations. Our results across the evaluation test set gave an accuracy of 84% and received the first position in the final standings judged by the organizers.", "what Team Name ?", "Fermi", 316.0, 321.0], ["Aligning two representations of the same domain with different expressiveness is a crucial topic in today's semantic web and big data research. OWL ontologies and Entity Relation Diagrams are the most widespread representations whose alignment allows for semantic data access via ontology interface, and ontology storing techniques. The term \"alignment\" encompasses three different processes: OWL-to-ERD and ERD-to-OWL transformation, and OWL-ERD mapping. In this paper an innovative statistical tool is presented to accomplish all three aspects of the alignment. The main idea relies on the use of an HMM to estimate the most likely ERD sentence that is stated in a suitable grammar, and corresponds to the observed OWL axiom. The system and its theoretical background are presented, and some experiments are reported.", "what Output format ?", "OWL", 145.0, 148.0], ["Metabolic pathways are an important part of systems biology research since they illustrate complex interactions between metabolites, enzymes, and regulators. Pathway maps are drawn to elucidate metabolism or to set data in a metabolic context. We present MetaboMAPS, a web-based platform to visualize numerical data on individual metabolic pathway maps. Metabolic maps can be stored, distributed and downloaded in SVG-format. MetaboMAPS was designed for users without computational background and supports pathway sharing without strict conventions. In addition to existing applications that established standards for well-studied pathways, MetaboMAPS offers a niche for individual, customized pathways beyond common knowledge, supporting ongoing research by creating publication-ready visualizations of experimental data.", "what Output format ?", "SVG", 414.0, 417.0], ["In this work, we offer an approach to combine standard multimedia analysis techniques with knowledge drawn from conceptual metadata provided by domain experts of a specialized scholarly domain, to learn a domain-specific multimedia ontology from a set of annotated examples. A standard Bayesian network learning algorithm that learns structure and parameters of a Bayesian network is extended to include media observables in the learning. An expert group provides domain knowledge to construct a basic ontology of the domain as well as to annotate a set of training videos. These annotations help derive the associations between high-level semantic concepts of the domain and low-level MPEG-7 based features representing audio-visual content of the videos. We construct a more robust and refined version of this ontology by learning from this set of conceptually annotated videos. 
To encode this knowledge, we use MOWL, a multimedia extension of Web Ontology Language (OWL) which is capable of describing domain concepts in terms of their media properties and of capturing the inherent uncertainties involved. We use the ontology specified knowledge for recognizing concepts relevant to a video to annotate fresh addition to the video database with relevant concepts in the ontology. These conceptual annotations are used to create hyperlinks in the video collection, to provide an effective video browsing interface to the user.", "what Output format ?", "MOWL", 914.0, 918.0], ["By now, XML has reached a wide acceptance as data exchange format in E-Business. An efficient collaboration between different participants in E-Business thus, is only possible, when business partners agree on a common syntax and have a common understanding of the basic concepts in the domain. XML covers the syntactic level, but lacks support for efficient sharing of conceptualizations. The Web Ontology Language (OWL [Bec04]) in turn supports the representation of domain knowledge using classes, properties and instances for the use in a distributed environment as the WorldWideWeb. We present in this paper a mapping between the data model elements of XML and OWL. We give account about its implementation within a ready-to-use XSLT framework, as well as its evaluation for common use cases.", "what Output format ?", "OWL", 416.0, 419.0], ["The aims of XML data conversion to ontologies are the indexing, integration and enrichment of existing ontologies with knowledge acquired from these sources. The contribution of this paper consists in providing a classification of the approaches used for the conversion of XML documents into OWL ontologies. This classification underlines the usage profile of each conversion method, providing a clear description of the advantages and drawbacks belonging to each method. Hence, this paper focuses on two main processes, which are ontology enrichment and ontology population using XML data. Ontology enrichment is related to the schema of the ontology (TBox), and ontology population is related to an individual (Abox). In addition, the ontologies described in these methods are based on formal languages of the Semantic Web such as OWL (Ontology Web Language) or RDF (Resource Description Framework). These languages are formal because the semantics are formally defined and take advantage of the Description Logics. In contrast, XML data sources are without formal semantics. The XML language is used to store, export and share data between processes able to process the specific data structure. However, even if the semantics is not explicitly expressed, data structure contains the universe of discourse by using a qualified vocabulary regarding a consensual agreement. In order to formalize this semantics, the OWL language provides rich logical constraints. Therefore, these logical constraints are evolved in the transformation of XML documents into OWL documents, allowing the enrichment and the population of the target ontology. To design such a transformation, the current research field establishes connections between OWL constructs (classes, predicates, simple or complex data types, etc.) and XML constructs (elements, attributes, element lists, etc.). Two different approaches for the transformation process are exposed. The instance approaches are based on XML documents without any schema associated. 
The validation approaches are based on the XML schema and document validated by the associated schema. The second approaches benefit from the schema definition to provide automated transformations with logic constraints. Both approaches are discussed in the text.", "what Output format ?", "RDF", 864.0, 867.0], ["In this paper we present a new tool, called DB_DOOWL, for creating domain ontology from relational database schema (RDBS). In contrast with existing transformation approaches, we propose a generic solution based on automatic instantiation of a specified meta-ontology. This later is an owl ontology which describes any database structure. A prototype of our proposed tool is implemented based on Jena in Java in order to demonstrate its feasibility.", "what Output format ?", "OWL", 286.0, 289.0], ["Today most of the data exchanged between information systems is done with the help of the XML syntax. Unfortunately when these data have to be integrated, the integration becomes difficult because of the semantics' heterogeneity. Consequently, leading researches in the domain of database systems are moving to semantic model in order to store data and its semantics definition. To benefit from these new systems and technologies, and to integrate different data sources, a flexible method consists in populating an existing OWL ontology from XML data. In paper we present such a method based on the definition of a graph which represents rules that drive the populating process. The graph of rules facilitates the mapping definition that consists in mapping elements from an XSD schema to the elements of the OWL schema.", "what Output format ?", "OWL", 525.0, 528.0], ["Ontology provides a shared and reusable piece of knowledge about a specific domain, and has been applied in many fields, such as semantic Web, e-commerce and information retrieval, etc. However, building ontology by hand is a very hard and error-prone task. Learning ontology from existing resources is a good solution. Because relational database is widely used for storing data and OWL is the latest standard recommended by W3C, this paper proposes an approach of learning OWL ontology from data in relational database. Compared with existing methods, the approach can acquire ontology from relational database automatically by using a group of learning rules instead of using a middle model. In addition, it can obtain OWL ontology, including the classes, properties, properties characteristics, cardinality and instances, while none of existing methods can acquire all of them. The proposed learning rules have been proven to be correct by practice.", "what Output format ?", "OWL", 384.0, 387.0], ["DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. 
In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.", "what Output format ?", "OWL", 590.0, 593.0], ["DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.", "what Output format ?", "OWL individual", NaN, NaN], ["In this paper, we present a tool called X2OWL that aims at building an OWL ontology from an XML datasource. This method is based on XML schema to automatically generate the ontology structure, as well as, a set of mapping bridges. The presented method also includes a refinement step that allows to clean the mapping bridges and possibly to restructure the generated ontology.", "what Approaches ?", "X2OWL", 40.0, 45.0], ["In this paper we present a new tool, called DB_DOOWL, for creating domain ontology from relational database schema (RDBS). In contrast with existing transformation approaches, we propose a generic solution based on automatic instantiation of a specified meta-ontology. This later is an owl ontology which describes any database structure. A prototype of our proposed tool is implemented based on Jena in Java in order to demonstrate its feasibility.", "what Learning tool ?", "DB_DOOWL", 44.0, 52.0], ["One of the main holdbacks towards a wide use of ontologies is the high building cost. In order to reduce this effort, reuse of existing Knowledge Organization Systems (KOSs), and in particular thesauri, is a valuable and much cheaper alternative to build ontologies from scratch. In the literature tools to support such reuse and conversion of thesauri as well as re-engineering patterns already exist. 
However, few of these tools rely on a sort of semi-automatic reasoning on the structure of the thesaurus being converted. Furthermore, patterns proposed in the literature are not updated considering the new ISO 25964 standard on thesauri. This paper introduces a new application framework aimed at converting thesauri into OWL ontologies, differing from existing approaches in taking into consideration ISO 25964-compliant thesauri and in applying completely automatic conversion rules.", "what dataset ?", "iso 25964", 610.0, 619.0], ["In this study, we examine the abuse of online social networks at the hands of spammers through the lens of the tools, techniques, and support infrastructure they rely upon. To perform our analysis, we identify over 1.1 million accounts suspended by Twitter for disruptive activities over the course of seven months. In the process, we collect a dataset of 1.8 billion tweets, 80 million of which belong to spam accounts. We use our dataset to characterize the behavior and lifetime of spam accounts, the campaigns they execute, and the widespread abuse of legitimate web services such as URL shorteners and free web hosting. We also identify an emerging marketplace of illegitimate programs operated by spammers that include Twitter account sellers, ad-based URL shorteners, and spam affiliate programs that help enable underground market diversification. Our results show that 77% of spam accounts identified by Twitter are suspended within one day of their first tweet. Because of these pressures, less than 9% of accounts form social relationships with regular Twitter users. Instead, 17% of accounts rely on hijacking trends, while 52% of accounts use unsolicited mentions to reach an audience. In spite of daily account attrition, we show how five spam campaigns controlling 145 thousand accounts combined are able to persist for months at a time, with each campaign enacting a unique spamming strategy. Surprisingly, three of these campaigns send spam directing visitors to reputable store fronts, blurring the line regarding what constitutes spam on social networks.", "what dataset ?", "1.8 billion tweets", 356.0, 374.0], ["Extraction of relevant features from high-dimensional multi-way functional MRI (fMRI) data is essential for the classification of a cognitive task. In general, fMRI records a combination of neural activation signals and several other noisy components. Alternatively, fMRI data is represented as a high dimensional array using a number of voxels, time instants, and snapshots. The organisation of fMRI data includes a number of Regions Of Interest (ROI), snapshots, and thousands of voxels. The crucial step in cognitive task classification is a reduction of feature size through feature selection. Extraction of a specific pattern of interest within the noisy components is a challenging task. Tensor decomposition techniques have found several applications in the scientific fields. In this paper, a novel tensor gradient-based feature extraction technique for cognitive task classification is proposed. The technique has efficiently been applied on StarPlus fMRI data. Also, the technique has been used to discriminate the ROIs in fMRI data in terms of cognitive state classification. 
The method has achieved a better average accuracy when compared to other existing feature extraction methods.", "what dataset ?", "StarPlus fMRI data", 950.0, 968.0], ["Given a task T, a pool of individuals X with different skills, and a social network G that captures the compatibility among these individuals, we study the problem of finding X', a subset of X, to perform the task. We call this the TEAM FORMATION problem. We require that members of X' not only meet the skill requirements of the task, but can also work effectively together as a team. We measure effectiveness using the communication cost incurred by the subgraph in G that only involves X'. We study two variants of the problem for two different communication-cost functions, and show that both variants are NP-hard. We explore their connections with existing combinatorial problems and give novel algorithms for their solution. To the best of our knowledge, this is the first work to consider the TEAM FORMATION problem in the presence of a social network of individuals. Experiments on the DBLP dataset show that our framework works well in practice and gives useful and intuitive results.", "what dataset ?", "DBLP", 893.0, 897.0], ["Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers from inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has produced a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scattered learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can be applied in other domains of medical diagnostic applications.", "what dataset ?", "C-NMC-2019", 835.0, 845.0], ["A rapidly growing amount of content posted online, such as food recipes, opens doors to new exciting applications at the intersection of vision and language. In this work, we aim to estimate the calorie amount of a meal directly from an image by learning from recipes people have published on the Internet, thus skipping time-consuming manual data annotation. Since there are few large-scale publicly available datasets captured in unconstrained environments, we propose the pic2kcal benchmark comprising 308 000 images from over 70 000 recipes including photographs, ingredients, and instructions. 
To obtain nutritional information of the ingredients and automatically determine the ground-truth calorie value, we match the items in the recipes with structured information from a food item database. We evaluate various neural networks for regression of the calorie quantity and extend them with the multi-task paradigm. Our learning procedure combines the calorie estimation with prediction of proteins, carbohydrates, and fat amounts as well as a multi-label ingredient classification. Our experiments demonstrate clear benefits of multi-task learning for calorie estimation, surpassing the single-task calorie regression by 9.9%. To encourage further research on this task, we make the code for generating the dataset and the models publicly available.", "what dataset ?", "pic2kcal", 475.0, 483.0], ["Name ambiguity in the context of bibliographic citation affects the quality of services in digital libraries. Previous methods are not widely applied in practice because of their high computational complexity and their strong dependency on excessive attributes, such as institutional affiliation, research area, address, etc., which are difficult to obtain in practice. To solve this problem, we propose a novel coarse\u2010to\u2010fine framework for name disambiguation which sequentially employs 3 common and easily accessible attributes (i.e., coauthor name, article title, and publication venue). Our proposed framework is based on multiple clustering and consists of 3 steps: (a) clustering articles by coauthorship and obtaining rough clusters, that is fragments; (b) clustering fragments obtained in step 1 by title information and getting bigger fragments; and (c) clustering fragments obtained in step 2 by the latent relations among venues. Experimental results on a Digital Bibliography and Library Project (DBLP) data set show that our method outperforms the existing state\u2010of\u2010the\u2010art methods by 2.4% to 22.7% on the average pairwise F1 score and is 10 to 100 times faster in terms of execution time.", "what dataset ?", "DBLP", 1009.0, 1013.0], ["Although research and practice have attributed considerable attention to Enterprise Resource Planning (ERP) projects, their failure rate is still high. There are two main fields of research, which aim at increasing the success rate of ERP projects: Research on risk factors and research on success factors. Despite their topical relatedness, efforts to integrate these two fields have been rare. Against this background, this paper analyzes 68 articles dealing with risk and success factors and categorizes all identified factors into twelve categories. Though some topics are equally important in risk and success factor research, the literature on risk factors emphasizes topics which ensure achieving budget, schedule and functionality targets. In contrast, the literature on success factors concentrates more on strategic and organizational topics. We argue that both fields of research cover important aspects of project success. The paper concludes with the presentation of a possible holistic consideration to integrate both, the understanding of risk and success factors.", "what has research problem ?", "Enterprise resource planning", 72.0, 100.0], ["The Web is the most used Internet service to create and share information. In large information collections, Knowledge Organization plays a key role in order to classify and to find valuable information. Likewise, Linked Open Data is a powerful approach for linking different Web datasets. 
Today, several Knowledge Organization Systems are published by using the design criteria of linked data, which facilitates their automatic processing. In this paper, we address the issue of traversing open Knowledge Organization Systems, considering difficulties associated with their dynamics and size. To address this issue, we propose a method to identify irrelevant nodes on an open graph, thus reducing the time and the scope of the graph path and maximizing the possibilities of finding more relevant results. The approach for graph reduction is independent of the domain or task for which the open system will be used. The preliminary results of the proof of concept lead us to think that the method can be effective when the coverage of the concept of interest increases.", "what has research problem ?", "Knowledge Organization", 111.0, 133.0], ["Increasing global cooperation, vertical disintegration and a focus on core activities have led to the notion that firms are links in a networked supply chain. This strategic viewpoint has created the challenge of coordinating effectively the entire supply chain, from upstream to downstream activities. While supply chains have existed ever since businesses have been organized to bring products and services to customers, the notion of their competitive advantage, and consequently supply chain management (SCM), is a relatively recent thinking in management literature. Although research interests in and the importance of SCM are growing, scholarly materials remain scattered and disjointed, and no research has been directed towards a systematic identification of the core initiatives and constructs involved in SCM. Thus, the purpose of this study is to develop a research framework that improves understanding of SCM and stimulates and facilitates researchers to undertake both theoretical and empirical investigation on the critical constructs of SCM, and the exploration of their impacts on supply chain performance. To this end, we analyse over 400 articles and synthesize the large, fragmented body of work dispersed across many disciplines such as purchasing and supply, logistics and transportation, marketing, organizational dynamics, information management, strategic management, and operations management literature.", "what has research problem ?", "Supply chain management", 483.0, 506.0], ["E-learning recommender systems are gaining significance nowadays due to their ability to enhance the learning experience by providing tailor-made services based on learner preferences. A Personalized Learning Environment (PLE) that automatically adapts to learner characteristics such as learning styles and knowledge level can recommend appropriate learning resources that would favor the learning process and improve learning outcomes. The pure cold-start problem is a relevant issue in PLEs, which arises due to the lack of prior information about the new learner in the PLE to create appropriate recommendations. This article introduces a semantic framework based on ontology to address the pure cold-start problem in content recommenders. The ontology encapsulates the domain knowledge about the learners as well as Learning Objects (LOs). The semantic model that we built has been evaluated with different combinations of the key learner parameters such as learning style, knowledge level, and background knowledge. The proposed framework utilizes these parameters to build natural learner groups from the learner ontology using SPARQL queries. 
The ontology holds 480 learners\u2019 data, 468 annotated learning objects with 5,600 learner ratings. A multivariate k-means clustering algorithm, an unsupervised machine learning technique for grouping similar data, is used to evaluate the learner similarity computation accuracy. The learner satisfaction achieved with the proposed model is measured based on the ratings given by the 40 participants of the experiments. From the evaluation perspective, it is evident that 79% of the learners are satisfied with the recommendations generated by the proposed model in pure cold-start condition.", "what has research problem ?", "Cold-start Problem", 445.0, 463.0], ["Extraction of relevant features from high-dimensional multi-way functional MRI (fMRI) data is essential for the classification of a cognitive task. In general, fMRI records a combination of neural activation signals and several other noisy components. Alternatively, fMRI data is represented as a high dimensional array using a number of voxels, time instants, and snapshots. The organisation of fMRI data includes a number of Regions Of Interest (ROI), snapshots, and thousands of voxels. The crucial step in cognitive task classification is a reduction of feature size through feature selection. Extraction of a specific pattern of interest within the noisy components is a challenging task. Tensor decomposition techniques have found several applications in the scientific fields. In this paper, a novel tensor gradient-based feature extraction technique for cognitive task classification is proposed. The technique has efficiently been applied on StarPlus fMRI data. Also, the technique has been used to discriminate the ROIs in fMRI data in terms of cognitive state classification. The method has achieved a better average accuracy when compared to other existing feature extraction methods.", "what has research problem ?", "Cognitive state classification", 1054.0, 1084.0], ["This paper presents the IJCNLP 2017 shared task on Dimensional Sentiment Analysis for Chinese Phrases (DSAP) which seeks to identify a real-valued sentiment score of Chinese single words and multi-word phrases in both the valence and arousal dimensions. Valence represents the degree of pleasant and unpleasant (or positive and negative) feelings, and arousal represents the degree of excitement and calm. Of the 19 teams registered for this shared task for two-dimensional sentiment analysis, 13 submitted results. We expected that this evaluation campaign could produce more advanced dimensional sentiment analysis techniques, especially for Chinese affective computing. All data sets with gold standards and scoring script are made publicly available to researchers.", "what has research problem ?", "Dimensional Sentiment Analysis", 51.0, 81.0], ["We present a system for named entity recognition (NER) in astronomy journal articles. We have developed this system on an NE corpus comprising approximately 200,000 words of text from astronomy articles. These have been manually annotated with \u223c40 entity types of interest to astronomers. We report on the challenges involved in extracting the corpus, defining entity classes and annotating scientific text. We investigate which features of an existing state-of-the-art Maximum Entropy approach perform well on astronomy text. Our system achieves an F-score of 87.8%.", "what has research problem ?", "Named Entity Recognition", 24.0, 48.0], ["Eye localization is necessary for face recognition and related application areas. 
Most eye localization algorithms reported thus far still need to be improved in terms of precision and computational time for successful applications. In this paper, we propose an improved eye localization method based on multi-scale Gabor feature vector models. The proposed method first tries to locate eyes in the downscaled face image by utilizing Gabor Jet similarity between the Gabor feature vector at initial eye coordinates and the eye model bunch of the corresponding scale. The proposed method finally locates eyes in the original input face image after it processes in the same way recursively in each scaled face image by using the eye coordinates localized in the downscaled image as initial eye coordinates. Experiments verify that our proposed method improves the precision rate without causing much computational overhead compared with other eye localization methods reported in previous research.", "what has research problem ?", "Eye localization", 0.0, 16.0], ["Purpose: To develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel. Methods: Chitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231). Results: These studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX. Conclusion: These advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.", "what has research problem ?", "Breast cancer", 661.0, 674.0], ["Transformers have been recently adapted for large scale image classification, achieving high scores shaking up the long supremacy of convolutional neural networks. However the optimization of vision transformers has been little studied so far. In this work, we build and optimize deeper transformer networks for image classification. In particular, we investigate the interplay of architecture and optimization of such dedicated transformers. We make two architecture changes that significantly improve the accuracy of deep transformers. This leads us to produce models whose performance does not saturate early with more depth, for instance we obtain 86.5% top-1 accuracy on Imagenet when training with no external data, we thus attain the current state of the art with less floating-point operations and parameters. Our best model establishes the new state of the art on Imagenet with Reassessed labels and Imagenet-V2 / match frequency, in the setting with no additional training data. 
We share our code and models.", "what has research problem ?", "Image Classification", 56.0, 76.0], ["Abstract We introduce an architecture to learn joint multilingual sentence representations for 93 languages, belonging to more than 30 different families and written in 28 different scripts. Our system uses a single BiLSTM encoder with a shared byte-pair encoding vocabulary for all languages, which is coupled with an auxiliary decoder and trained on publicly available parallel corpora. This enables us to learn a classifier on top of the resulting embeddings using English annotated data only, and transfer it to any of the 93 languages without any modification. Our experiments in cross-lingual natural language inference (XNLI data set), cross-lingual document classification (MLDoc data set), and parallel corpus mining (BUCC data set) show the effectiveness of our approach. We also introduce a new test set of aligned sentences in 112 languages, and show that our sentence embeddings obtain strong results in multilingual similarity search even for low-resource languages. Our implementation, the pre-trained encoder, and the multilingual test set are available at https://github.com/facebookresearch/LASER.", "what has research problem ?", "Cross-Lingual Document Classification", 643.0, 680.0], ["Achieving efficient and scalable exploration in complex domains poses a major challenge in reinforcement learning. While Bayesian and PAC-MDP approaches to the exploration problem offer strong formal guarantees, they are often impractical in higher dimensions due to their reliance on enumerating the state-action space. Hence, exploration in complex domains is often performed with simple epsilon-greedy methods. In this paper, we consider the challenging Atari games domain, which requires processing raw pixel inputs and delayed rewards. We evaluate several more sophisticated exploration strategies, including Thompson sampling and Boltzmann exploration, and propose a new exploration method based on assigning exploration bonuses from a concurrently learned model of the system dynamics. By parameterizing our learned model with a neural network, we are able to develop a scalable and efficient approach to exploration bonuses that can be applied to tasks with complex, high-dimensional state spaces. In the Atari domain, our method provides the most consistent improvement across a range of games that pose a major challenge for prior methods. In addition to raw game-scores, we also develop an AUC-100 metric for the Atari Learning domain to evaluate the impact of exploration on this benchmark.", "what has research problem ?", "Atari Games", 457.0, 468.0], ["A key challenge for manufacturers today is efficiently producing and delivering products on time. Issues include demand for customized products, changes in orders, and equipment status changes, complicating the decision-making process. A real-time digital representation of the manufacturing operation would help address these challenges. Recent technology advancements of smart sensors, IoT, and cloud computing make it possible to realize a \"digital twin\" of a manufacturing system or process. Digital twins or surrogates are data-driven virtual representations that replicate, connect, and synchronize the operation of a manufacturing system or process. They utilize dynamically collected data to track system behaviors, analyze performance, and help make decisions without interrupting production. 
In this paper, we define digital surrogates, explore their relationships to simulation, digital thread, artificial intelligence, and IoT. We identify the technology and standard requirements and challenges for implementing digital surrogates. A production planning case is used to exemplify the digital surrogate concept.", "what has research problem ?", "digital twin", 443.0, 455.0], ["Balancing assembly lines, a family of optimization problems commonly known as the Assembly Line Balancing Problem, is notoriously NP-Hard. They comprise a set of problems of enormous practical interest to the manufacturing industry due to the relevant frequency of this type of production paradigm. For this reason, many researchers on Computational Intelligence and Industrial Engineering have been conceiving algorithms for tackling different versions of assembly line balancing problems utilizing different methodologies. In this article, a problem version referred to as the Mixed Model Workplace Time-dependent Assembly Line Balancing Problem was proposed with the intention of including pressing issues of real assembly lines in the optimization problem, for which four versions were conceived. Heuristic search procedures were used, namely two Swarm Intelligence algorithms from the Fish School Search family: the original version, named \"vanilla\", and a special variation including a stagnation avoidance routine. Both approaches solved the newly posed problem, achieving good results when compared to the Particle Swarm Optimization algorithm.", "what has research problem ?", "Optimization problem", 727.0, 747.0], ["The term \"middle-income trap\" has entered common parlance in the development policy community, despite the lack of a precise definition. This paper discusses in more detail the definitional issues associated with the term. It also provides evidence on whether the growth performance of middle-income countries (MICs) has been different from other income categories, including historical transition phases in the inter-country distribution of income. A transition matrix analysis and an exploration of cross-country growth patterns provide little support for the existence of a middle-income trap.", "what has research problem ?", "Middle-Income Trap", 10.0, 28.0], ["CONTEXT Both antidepressant medication and structured psychotherapy have been proven efficacious, but less than one third of people with depressive disorders receive effective levels of either treatment. OBJECTIVE To compare usual primary care for depression with 2 intervention programs: telephone care management and telephone care management plus telephone psychotherapy. DESIGN Three-group randomized controlled trial with allocation concealment and blinded outcome assessment conducted between November 2000 and May 2002. SETTING AND PARTICIPANTS A total of 600 patients beginning antidepressant treatment for depression were systematically sampled from 7 group-model primary care clinics; patients already receiving psychotherapy were excluded. INTERVENTIONS Usual primary care; usual care plus a telephone care management program including at least 3 outreach calls, feedback to the treating physician, and care coordination; usual care plus care management integrated with a structured 8-session cognitive-behavioral psychotherapy program delivered by telephone. 
MAIN OUTCOME MEASURES Blinded telephone interviews at 6 weeks, 3 months, and 6 months assessed depression severity (Hopkins Symptom Checklist Depression Scale and the Patient Health Questionnaire), patient-rated improvement, and satisfaction with treatment. Computerized administrative data examined use of antidepressant medication and outpatient visits. RESULTS Treatment participation rates were 97% for telephone care management and 93% for telephone care management plus psychotherapy. Compared with usual care, the telephone psychotherapy intervention led to lower mean Hopkins Symptom Checklist Depression Scale depression scores (P =.02), a higher proportion of patients reporting that depression was \"much improved\" (80% vs 55%, P<.001), and a higher proportion of patients \"very satisfied\" with depression treatment (59% vs 29%, P<.001). The telephone care management program had smaller effects on patient-rated improvement (66% vs 55%, P =.04) and satisfaction (47% vs 29%, P =.001); effects on mean depression scores were not statistically significant. CONCLUSIONS For primary care patients beginning antidepressant treatment, a telephone program integrating care management and structured cognitive-behavioral psychotherapy can significantly improve satisfaction and clinical outcomes. These findings suggest a new public health model of psychotherapy for depression including active outreach and vigorous efforts to improve access to and motivation for treatment.", "what has research problem ?", "Psychotherapy for Depression", 2423.0, 2451.0], ["
We introduce a solution-processed copper tin sulfide (CTS) thin film to realize high-performance of thin-film transistors (TFT) by optimizing the CTS precursor solution concentration.
", "what has research problem ?", "Performance of thin-film transistors", 88.0, 124.0], ["We present the results of the Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge, aiming to bring together researchers in educational NLP technology and textual entailment. The task of giving feedback on student answers requires semantic inference and therefore is related to recognizing textual entailment. Thus, we offered to the community a 5-way student response labeling task, as well as 3-way and 2way RTE-style tasks on educational data. In addition, a partial entailment task was piloted. We present and compare results from 9 participating teams, and discuss future directions.", "what has research problem ?", " Joint Student Response Analysis", 29.0, 61.0], ["Breast cancer is a major form of cancer, with a high mortality rate in women. It is crucial to achieve more efficient and safe anticancer drugs. Recent developments in medical nanotechnology have resulted in novel advances in cancer drug delivery. Cisplatin, doxorubicin, and 5-fluorouracil are three important anti-cancer drugs which have poor water-solubility. In this study, we used cisplatin, doxorubicin, and 5-fluorouracil-loaded polycaprolactone-polyethylene glycol (PCL-PEG) nanoparticles to improve the stability and solubility of molecules in drug delivery systems. The nanoparticles were prepared by a double emulsion method and characterized with Fourier Transform Infrared (FTIR) spectroscopy and Hydrogen-1 nuclear magnetic resonance (1HNMR). Cells were treated with equal concentrations of cisplatin, doxorubicin and 5-fluorouracil-loaded PCL-PEG nanoparticles, and free cisplatin, doxorubicin and 5-fluorouracil. The 3-[4,5-dimethylthiazol-2yl]-2,5-diphenyl tetrazolium bromide (MTT) assay confirmed that cisplatin, doxorubicin, and 5-fluorouracil-loaded PCL-PEG nanoparticles enhanced cytotoxicity and drug delivery in T47D and MCF7 breast cancer cells. However, the IC50 value of doxorubicin was lower than the IC50 values of both cisplatin and 5-fluorouracil, where the difference was statistically considered significant (p\u02c20.05). However, the IC50 value of all drugs on T47D were lower than those on MCF7.", "what has research problem ?", "Breast cancer", 0.0, 13.0], ["Properly generated test suites may not only locate the defects in software systems, but also help in reducing the high cost associated with software testing, ft is often desired that test sequences in a test suite can be automatically generated to achieve required test coverage. However, automatic test sequence generation remains a major problem in software testing. This paper proposes an ant colony optimization approach to automatic test sequence generation for state-based software testing. The proposed approach can directly use UML artifacts to automatically generate test sequences to achieve required test coverage.", "what has research problem ?", "Ant Colony Optimization", 392.0, 415.0], ["The paper examines the impact of exchange rate volatility on the exports of five Asian countries. The countries are Turkey, South Korea, Malaysia, Indonesia and Pakistan. The impact of a volatility term on exports is examined by using an Engle-Granger residual-based cointegrating technique. The results indicate that the exchange rate volatility reduced real exports for these countries. This might mean that producers in these countries are risk-averse. 
The producers will prefer to sell in domestic markets rather than foreign markets if the exchange rate volatility increases.", "what has research problem ?", "Exchange rate volatility", 33.0, 57.0], ["Abstract As the term \u201csmart city\u201d gains wider and wider currency, there is still confusion about what a smart city is, especially since several similar terms are often used interchangeably. This paper aims to clarify the meaning of the word \u201csmart\u201d in the context of cities through an approach based on an in-depth literature review of relevant studies as well as official documents of international institutions. It also identifies the main dimensions and elements characterizing a smart city. The different metrics of urban smartness are reviewed to show the need for a shared definition of what constitutes a smart city, what are its features, and how it performs in comparison to traditional cities. Furthermore, performance measures and initiatives in a few smart cities are identified.", "what has research problem ?", "Smart cities", 763.0, 775.0], ["Purpose \u2013 The purpose of this paper is first, to develop a methodological framework for conducting a comprehensive literature review on an empirical phenomenon based on a vast amount of papers published. Second, to use this framework to gain an understanding of the current state of the enterprise resource planning (ERP) research field, and third, based on the literature review, to develop a conceptual framework identifying areas of concern with regard to ERP systems.Design/methodology/approach \u2013 Abstracts from 885 peer\u2010reviewed journal publications from 2000 to 2009 have been analysed according to journal, authors and year of publication, and further categorised into research discipline, research topic and methods used, using the structured methodological framework.Findings \u2013 The body of academic knowledge about ERP systems has reached a certain maturity and several different research disciplines have contributed to the field from different points of view using different methods, showing that the ERP rese...", "what has research problem ?", "Enterprise resource planning", 287.0, 315.0], ["The paper describes a probabilistic active learning strategy for support vector machine (SVM) design in large data applications. The learning strategy is motivated by the statistical query model. While most existing methods of active SVM learning query for points based on their proximity to the current separating hyperplane, the proposed method queries for a set of points according to a distribution as determined by the current separating hyperplane and a newly defined concept of an adaptive confidence factor. This enables the algorithm to have more robust and efficient learning capabilities. The confidence factor is estimated from local information using the k nearest neighbor principle. The effectiveness of the method is demonstrated on real-life data sets both in terms of generalization performance, query complexity, and training time.", "what has research problem ?", "Active learning", 36.0, 51.0], ["Introduction As a way to improve student academic performance, educators have begun paying special attention to computer games (Gee, 2005; Oblinger, 2006). Reflecting the interests of the educators, studies have been conducted to explore the effects of computer games on student achievement. 
However, there has been no consensus on the effects of computer games: Some studies support computer games as educational resources to promote students' learning (Annetta, Mangrum, Holmes, Collazo, & Cheng, 2009; Vogel et al., 2006). Other studies have found no significant effects on the students' performance in school, especially in math achievement of elementary school students (Ke, 2008). Researchers have also been interested in the differential effects of computer games between gender groups. While several studies have reported various gender differences in the preferences of computer games (Agosto, 2004; Kinzie & Joseph, 2008), a few studies have indicated no significant differential effect of computer games between genders and asserted generic benefits for both genders (Vogel et al., 2006). To date, the studies examining computer games and gender interaction are far from conclusive. Moreover, there is a lack of empirical studies examining the differential effects of computer games on the academic performance of diverse learners. These learners included linguistic minority students who speak languages other than English. Recent trends in the K-12 population feature the increasing enrollment of linguistic minority students, whose population reached almost four million (NCES, 2004). These students have been a grave concern for American educators because of their reported low performance. In response, this study empirically examined the effects of math computer games on the math performance of 4th-graders with focused attention on differential effects for gender and linguistic groups. To achieve greater generalizability of the study findings, the study utilized a US nationally representative database--the 2005 National Assessment of Educational Progress (NAEP). The following research questions guided the current study: 1. Are computer games in math classes associated with the 4th-grade students' math performance? 2. How does the relationship differ by linguistic group? 3. How does the association vary by gender? 4. Is there an interaction effect of computer games on linguistic and gender groups? In other words, how does the effect of computer games on linguistic groups vary by gender group? Literature review Academic performance and computer games According to DeBell and Chapman (2004), of 58,273,000 students of nursery and K-12 school age in the USA, 56% of students played computer games. Along with the popularity among students, computer games have received a lot of attention from educators as a potential way to provide learners with effective and fun learning environments (Oblinger, 2006). Gee (2005) agreed that a game would turn out to be good for learning when the game is built to incorporate learning principles. Some researchers have also supported the potential of games for affective domains of learning and fostering a positive attitude towards learning (Ke, 2008; Ke & Grabowski, 2007; Vogel et al., 2006). For example, based on the study conducted on 1,274 1st- and 2nd-graders, Rosas et al. (2003) found a positive effect of educational games on the motivation of students. Although there is overall support for the idea that games have a positive effect on affective aspects of learning, there have been mixed research results regarding the role of games in promoting cognitive gains and academic achievement. In the meta-analysis, Vogel et al. 
(2006) examined 32 empirical studies and concluded that the inclusion of games for students' learning resulted in significantly higher cognitive gains compared with traditional teaching methods without games. \u2026", "what has research problem ?", "Educational Games", 3379.0, 3396.0], ["Software design is a process of trading off competing objectives. If the user objective space is rich, then we should use optimizers that can fully exploit that richness. For example, this study configures software product lines (expressed as feature maps) using various search-based software engineering methods. As we increase the number of optimization objectives, we find that methods in widespread use (e.g. NSGA-II, SPEA2) perform much worse than IBEA (Indicator-Based Evolutionary Algorithm). IBEA works best since it makes most use of user preference knowledge. Hence it does better on the standard measures (hypervolume and spread) but it also generates far more products with 0% violations of domain constraints. Our conclusion is that we need to change our methods for search-based software engineering, particularly when studying complex decision spaces.", "what has research problem ?", "Search-Based Software Engineering", 271.0, 304.0], ["Nowadays, the enormous volume of health and fitness data gathered from IoT wearable devices offers favourable opportunities to the research community. For instance, it can be exploited using sophisticated data analysis techniques, such as automatic reasoning, to find patterns and, extract information and new knowledge in order to enhance decision-making and deliver better healthcare. However, due to the high heterogeneity of data representation formats, the IoT healthcare landscape is characterised by an ubiquitous presence of data silos which prevents users and clinicians from obtaining a consistent representation of the whole knowledge. Semantic web technologies, such as ontologies and inference rules, have been shown as a promising way for the integration and exploitation of data from heterogeneous sources. In this paper, we present a semantic data model useful to: (1) consistently represent health and fitness data from heterogeneous IoT sources; (2) integrate and exchange them; and (3) enable automatic reasoning by inference engines.", "what has research problem ?", "consistently represent health and fitness data from heterogeneous IoT sources", 885.0, 962.0], ["Common-sense or background knowledge is required to understand natural language, but in most neural natural language understanding (NLU) systems, the requisite background knowledge is indirectly acquired from static corpora. We develop a new reading architecture for the dynamic integration of explicit background knowledge in NLU models. A new task-agnostic reading module provides refined word representations to a task-specific NLU architecture by processing background knowledge in the form of free-text statements, together with the task-specific inputs. Strong performance on the tasks of document question answering (DQA) and recognizing textual entailment (RTE) demonstrate the effectiveness and flexibility of our approach. Analysis shows that our models learn to exploit knowledge selectively and in a semantically appropriate way.", "what has research problem ?", "Question Answering", 604.0, 622.0], ["The effect of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids) on the microbial mineralisation of phenanthrene in soil slurry by the indigenous microbial community has been investigated. 
The rate and extent of 14C–phenanthrene mineralisation in artificially spiked soils were monitored in the absence of hydroxycinnamic acids and presence of hydroxycinnamic acids applied at three different concentrations (50, 100 and 200 µg kg-1) either as single compounds or as a mixture of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids at a 1:1:1 ratio). The highest extent of 14C–phenanthrene mineralisation (P 200 µg kg-1. Depending on its concentration in soil, hydroxycinnamic acids can either stimulate or inhibit mineralisation of phenanthrene by indigenous soil microbial community. Therefore, effective understanding of phytochemical–microbe–organic contaminant interactions is essential for further development of phytotechnologies for remediation of PAH–contaminated soils.", "what has research problem ?", "The effect of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids) on the microbial mineralisation of phenanthrene in soil slurry by the indigenous microbial community has been investigated.", NaN, NaN], ["The body of research relating to the implementation of enterprise resource planning (ERP) systems in small- and medium-sized enterprises (SMEs) has been increasing rapidly over the last few years. It is important, particularly for SMEs, to recognize the elements for a successful ERP implementation in their environments. This research aims to examine the critical elements that constitute a successful ERP implementation in SMEs. The objective is to identify the constituents within the critical elements. A comprehensive literature review and interviews with eight SMEs in the UK were carried out. The results serve as the basic input into the formation of the critical elements and their constituents. Three main critical elements are formed: critical success factors, critical people and critical uncertainties. Within each critical element, the related constituents are identified. Using the process theory approach, the constituents within each critical element are linked to their specific phase(s) of ERP implementation. Ten constituents for critical success factors were found, nine constituents for critical people and 21 constituents for critical uncertainties. The research suggests that a successful ERP implementation often requires the identification and management of the critical elements and their constituents at each phase of implementation. The results are constructed as a reference framework that aims to provide researchers and practitioners with indicators and guidelines to improve the success rate of ERP implementation in SMEs.", "what has research problem ?", "Enterprise resource planning", 55.0, 83.0], ["Abstract: We investigate multiple techniques to improve upon the current state of the art deep convolutional neural network based image classification pipeline. The techniques include adding more image transformations to training data, adding more transformations to generate additional predictions at test time and using complementary models applied to higher resolution images. This paper summarizes our entry in the Imagenet Large Scale Visual Recognition Challenge 2013. Our system achieved a top 5 classification error rate of 13.55% using no external data which is over a 20% relative improvement on the previous year's winner.", "what has research problem ?", "Image Classification", 130.0, 150.0], ["This study represents two critical steps forward in the area of smart city research and practice. 
The first is in the form of the development of a comprehensive conceptualization of smart city as a resource for researchers and government practitioners; the second is in the form of the creation of a bridge between smart cities research and practice expertise. City governments increasingly need innovative arrangements to solve a variety of technical, physical, and social problems. \"Smart city\" could be used to represent efforts that in many ways describe a vision of a city, but there is little clarity about this new concept. This paper proposes a comprehensive conceptualization of smart city, including its main components and several specific elements. Academic literature is used to create a robust framework, while a review of practical tools is used to identify specific elements or aspects not treated in the academic studies, but essential to create an integrative and comprehensive conceptualization of smart city. The paper also provides policy implications and suggests areas for future research in this topic.", "what has research problem ?", "Smart cities", 317.0, 329.0], ["The MEDIQA 2021 shared tasks at the BioNLP 2021 workshop addressed three tasks on summarization for medical text: (i) a question summarization task aimed at exploring new approaches to understanding complex real-world consumer health queries, (ii) a multi-answer summarization task that targeted aggregation of multiple relevant answers to a biomedical question into one concise and relevant answer, and (iii) a radiology report summarization task addressing the development of clinically relevant impressions from radiology report findings. Thirty-five teams participated in these shared tasks with sixteen working notes submitted (fifteen accepted) describing a wide variety of models developed and tested on the shared and external datasets. In this paper, we describe the tasks, the datasets, the models and techniques developed by various teams, the results of the evaluation, and a study of correlations among various summarization evaluation measures. We hope that these shared tasks will bring new research and insights in biomedical text summarization and evaluation.", "what has research problem ?", "Summarization", 82.0, 95.0], ["Research on definition extraction has been conducted for well over a decade, largely with significant constraints on the type of definitions considered. In this work, we present DeftEval, a SemEval shared task in which participants must extract definitions from free text using a term-definition pair corpus that reflects the complex reality of definitions in natural language. Definitions and glosses in free text often appear without explicit indicators, across sentences boundaries, or in an otherwise complex linguistic manner. DeftEval involved 3 distinct subtasks: 1) Sentence classification, 2) sequence labeling, and 3) relation extraction.", "what has research problem ?", "Relation extraction", 628.0, 647.0], ["Rapid industrial modernisation and economic reform have been features of the Korean economy since the 1990s, and have brought with it substantial environmental problems. In response to these problems, the Korean government has been developing approaches to promote cleaner production technologies. Green supply chain management (GSCM) is emerging to be an important approach for Korean enterprises to improve performance. 
The purpose of this study is to examine the impact of GSCM CSFs (critical success factors) on the BSC (balanced scorecard) performance by the structural equation modelling, using empirical results from 249 enterprise respondents involved in national GSCM business in Korea. Planning and implementation was a dominant antecedent factor in this study, followed by collaboration with partners and integration of infrastructure. However, activation of support was a negative impact to the finance performance, raising the costs and burdens. It was found out that there were important implications in the implementation of GSCM.", "what has research problem ?", "Supply chain management", 304.0, 327.0], ["Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.", "what has research problem ?", "Atari Games", 641.0, 652.0], ["Convolutional neural networks (CNNs) have recently emerged as a popular building block for natural language processing (NLP). Despite their success, most existing CNN models employed in NLP share the same learned (and static) set of filters for all input sentences. In this paper, we consider an approach of using a small meta network to learn context-sensitive convolutional filters for text processing. The role of meta network is to abstract the contextual information of a sentence or document into a set of input-sensitive filters. We further generalize this framework to model sentence pairs, where a bidirectional filter generation mechanism is introduced to encapsulate co-dependent sentence representations. In our benchmarks on four different tasks, including ontology classification, sentiment analysis, answer sentence selection, and paraphrase identification, our proposed model, a modified CNN with context-sensitive filters, consistently outperforms the standard CNN and attention-based CNN baselines. By visualizing the learned context-sensitive filters, we further validate and rationalize the effectiveness of proposed framework.", "what has research problem ?", "Text Processing", 388.0, 403.0], ["We describe the shared task for the CLPsych 2018 workshop, which focused on predicting current and future psychological health from an essay authored in childhood. Language-based predictions of a person\u2019s current health have the potential to supplement traditional psychological assessment such as questionnaires, improving intake risk measurement and monitoring. Predictions of future psychological health can aid with both early detection and the development of preventative care. Research into the mental health trajectory of people, beginning from their childhood, has thus far been an area of little work within the NLP community. 
This shared task represents one of the first attempts to evaluate the use of early language to predict future health; this has the potential to support a wide variety of clinical health care tasks, from early assessment of lifetime risk for mental health problems, to optimal timing for targeted interventions aimed at both prevention and treatment.", "what has research problem ?", "Predicting Current and Future Psychological Health", 76.0, 126.0], ["Internet of Things (IoT) covers a variety of applications including the Healthcare field. Consequently, medical objects become connected to each other with the purpose to share and exchange health data. These medical connected objects raise issues on how to ensure the analysis, interpretation and semantic interoperability of the extensive obtained health data with the purpose to make an appropriate decision. This paper proposes a HealthIoT ontology for representing the semantic interoperability of the medical connected objects and their data; while an algorithm alleviates the analysis of the detected vital signs and the decision-making of the doctor. The execution of this algorithm needs the definition of several SWRL rules (Semantic Web Rule Language).", "what has research problem ?", "semantic interoperability of the medical connected objects and their data", 474.0, 547.0], ["The Implementation of Enterprise Resource Planning ERP systems require huge investments while ineffective implementations of such projects are commonly observed. A considerable number of these projects have been reported to fail or take longer than it was initially planned, while previous studies show that the aim of rapid implementation of such projects has not been successful and the failure of the fundamental goals in these projects have imposed huge amounts of costs on investors. Some of the major consequences are the reduction in demand for such products and the introduction of further skepticism to the managers and investors of ERP systems. In this regard, it is important to understand the factors determining success or failure of ERP implementation. The aim of this paper is to study the critical success factors CSFs in implementing ERP systems and to develop a conceptual model which can serve as a basis for ERP project managers. These critical success factors that are called \"core critical success factors\" are extracted from 62 published papers using the content analysis and the entropy method. The proposed conceptual model has been verified in the context of five multinational companies.", "what has research problem ?", "Enterprise resource planning", 22.0, 50.0], ["This paper grounds the critique of the \u2018smart city\u2019 in its historical and geographical context. Adapting Brenner and Theodore\u2019s notion of \u2018actually existing neoliberalism\u2019, we suggest a greater attention be paid to the \u2018actually existing smart city\u2019, rather than the exceptional or paradigmatic smart cities of Songdo, Masdar and Living PlanIT Valley. 
Through a closer analysis of cases in Louisville and Philadelphia, we demonstrate the utility of understanding the material effects of these policies in actual cities around the world, with a particular focus on how and from where these policies have arisen, and how they have unevenly impacted the places that have adopted them.", "what has research problem ?", "Smart cities", 295.0, 307.0], ["Recommender Systems have been widely used to help users in finding what they are looking for thus tackling the information overload problem. After several years of research and industrial findings looking after better algorithms to improve accuracy and diversity metrics, explanation services for recommendation are gaining momentum as a tool to provide a human-understandable feedback to results computed, in most of the cases, by black-box machine learning techniques. As a matter of fact, explanations may guarantee users satisfaction, trust, and loyalty in a system. In this paper, we evaluate how different information encoded in a Knowledge Graph are perceived by users when they are adopted to show them an explanation. More precisely, we compare how the use of categorical information, factual one or a mixture of them both in building explanations, affect explanatory criteria for a recommender system. Experimental results are validated through an A/B testing platform which uses a recommendation engine based on a Semantics-Aware Autoencoder to build users profiles which are in turn exploited to compute recommendation lists and to provide an explanation.", "what has research problem ?", "Recommender Systems", 0.0, 19.0], ["We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and other recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 3.5 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also replicate other pretraining schemes within the BART framework, to understand their effect on end-task performance.", "what has research problem ?", "Question Answering", 938.0, 956.0], ["Many digital libraries recommend literature to their users considering the similarity between a query document and their repository. However, they often fail to distinguish what is the relationship that makes two documents alike. In this paper, we model the problem of finding the relationship between two documents as a pairwise document classification task. 
To find the semantic relation between documents, we apply a series of techniques, such as GloVe, Paragraph Vectors, BERT, and XLNet under different configurations (e.g., sequence length, vector concatenation scheme), including a Siamese architecture for the Transformer-based systems. We perform our experiments on a newly proposed dataset of 32,168 Wikipedia article pairs and Wikidata properties that define the semantic document relations. Our results show vanilla BERT as the best performing system with an F1-score of 0.93, which we manually examine to better understand its applicability to other domains. Our findings suggest that classifying semantic relations between documents is a solvable task and motivates the development of a recommender system based on the evaluated techniques. The discussions in this paper serve as first steps in the exploration of documents through SPARQL-like queries such that one could find documents that are similar in one aspect but dissimilar in another.", "what has research problem ?", "Document classification", 330.0, 353.0], ["This paper presents the task definition, resources, and the single participant system for Task 12: Turkish Lexical Sample Task (TLST), which was organized in the SemEval-2007 evaluation exercise. The methodology followed for developing the specific linguistic resources necessary for the task has been described in this context. A language-specific feature set was defined for Turkish. TLST consists of three pieces of data: The dictionary, the training data, and the evaluation data. Finally, a single system that utilizes a simple statistical method was submitted for the task and evaluated.", "what has research problem ?", "Turkish Lexical Sample Task", 99.0, 126.0], ["Current approaches to building knowledge-based systems propose the development of an ontology as a precursor to building the problem-solver. This paper outlines an attempt to do the reverse and discover interesting ontologies from systems built without the ontology being explicit. In particular the paper considers large classification knowledge bases used for the interpretation of medical chemical pathology results and built using Ripple-Down Rules (RDR). The rule conclusions in these knowledge bases provide free-text interpretations of the results rather than explicit classes. The goal is to discover implicit ontological relationships between these interpretations as the system evolves. RDR allows for incremental development and the goal is that the ontology emerges as the system evolves. The results suggest that approach has potential, but further investigation is required before strong claims can be made.", "what has research problem ?", " discover implicit ontological relationships", 599.0, 643.0], ["The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents\u2019 circumstances, such as a spouse\u2019s employment, the number and ages of children, and time use. 
We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.", "what has research problem ?", "housework", 920.0, 929.0], ["In this paper, I examine the convergence of big data and urban governance beyond the discursive and material contexts of the smart city. I argue that in addition to understanding the intensifying relationship between data, cities, and governance in terms of regimes of automated management and coordination in \u2018actually existing\u2019 smart cities, we should further engage with urban algorithmic governance and governmentality as material-discursive projects of future-ing, i.e., of anticipating particular kinds of cities-to-come. As urban big data looks to the future, it does so through the lens of an anticipatory security calculus fixated on identifying and diverting risks of urban anarchy and personal harm against which life in cities must be securitized. I suggest that such modes of algorithmic speculation are discernible at two scales of urban big data praxis: the scale of the body, and that of the city itself. At the level of the urbanite body, I use the selective example of mobile neighborhood safety apps to demonstrate how algorithmic governmentality enacts digital mediations of individual mobilities by routing individuals around \u2018unsafe\u2019 parts of the city in the interests of technologically ameliorating the risks of urban encounter. At the scale of the city, amongst other empirical examples, sentiment analytics approaches prefigure ephemeral spatialities of civic strife by aggregating and mapping individual emotions distilled from unstructured real-time content flows (such as Tweets). In both of these instances, the urban futures anticipated by the urban \u2018big data security assemblage\u2019 are highly uneven, as data and algorithms cannot divest themselves of urban inequalities and the persistence of their geographies.", "what has research problem ?", "Smart cities", 330.0, 342.0], ["Most learning algorithms are not invariant to the scale of the function that is being approximated. We propose to adaptively normalize the targets used in learning. This is useful in value-based reinforcement learning, where the magnitude of appropriate value approximations can change over time when we update the policy of behavior. Our main motivation is prior work on learning to play Atari games, where the rewards were all clipped to a predetermined range. This clipping facilitates learning across many different games with a single learning algorithm, but a clipped reward function can result in qualitatively different behavior. Using the adaptive normalization we can remove this domain-specific heuristic without diminishing overall performance.", "what has research problem ?", "Atari Games", 389.0, 400.0], ["Both, MOOCs and learning analytics, are two emergent topics in the field of educational technology. This paper shows the main contributions of the eMadrid network in these two topics during the last years (2014-2016), as well as the planned future works in the network. 
The contributions in the field of the MOOCs include the design and authoring of materials, the improvement of the peer review process or experiences about teaching these courses and institutional adoption. The contributions in the field of learning analytics include the inference of higher level information, the development of dashboards, the evaluation of the learning process, or the prediction and clustering.", "what has research problem ?", "Learning Analytics", 16.0, 34.0], ["This paper presents work on a method to detect names of proteins in running text. Our system - Yapex - uses a combination of lexical and syntactic knowledge, heuristic filters and a local dynamic dictionary. The syntactic information given by a general-purpose off-the-shelf parser supports the correct identification of the boundaries of protein names, and the local dynamic dictionary finds protein names in positions incompletely analysed by the parser. We present the different steps involved in our approach to protein tagging, and show how combinations of them influence recall and precision. We evaluate the system on a corpus of MEDLINE abstracts and compare it with the KeX system (Fukuda et al., 1998) along four different notions of correctness.", "what has research problem ?", "Protein tagging", 516.0, 531.0], ["The attention for Smart governance, a key aspect of Smart cities, is growing, but our conceptual understanding of it is still limited. This article fills this gap in our understanding by exploring the concept of Smart governance both theoretically and empirically and developing a research model of Smart governance. On the basis of a systematic review of the literature defining elements, aspired outcomes and implementation strategies are identified as key dimensions of Smart governance. Inductively, we identify various categories within these variables. The key dimensions were presented to a sample of representatives of European local governments to investigate the dominant perceptions of practitioners and to refine the categories. Our study results in a model for research into the implementation strategies, Smart governance arrangements, and outcomes of Smart governance.", "what has research problem ?", "Smart cities", 52.0, 64.0], ["Efficient exploration in complex environments remains a major challenge for reinforcement learning. We propose bootstrapped DQN, a simple algorithm that explores in a computationally and statistically efficient manner through use of randomized value functions. Unlike dithering strategies such as epsilon-greedy exploration, bootstrapped DQN carries out temporally-extended (or deep) exploration; this can lead to exponentially faster learning. We demonstrate these benefits in complex stochastic MDPs and in the large-scale Arcade Learning Environment. Bootstrapped DQN substantially improves learning times and performance across most Atari games.", "what has research problem ?", "Atari Games", 637.0, 648.0], ["In our age cities are complex systems and we can say systems of systems. Today locality is the result of using information and communication technologies in all departments of our life, but in future all cities must to use smart systems for improve quality of life and on the other hand for sustainable development. The smart systems make daily activities more easily, efficiently and represent a real support for sustainable city development. 
This paper analysis the sustainable development and identified the key elements of future smart cities.", "what has research problem ?", "Smart cities", 535.0, 547.0], ["Ladder-type dithienocyclopentacarbazole (DTCC) cores, which possess highly extended π-conjugated backbones and versatile modular structures for derivatization, were widely used to develop high-performance p-type polymeric semiconductors. However, an n-type DTCC-based organic semiconductor has not been reported to date. In this study, the first DTCC-based n-type organic semiconductor (DTCC–IC) with a well-defined A–D–A backbone was designed, synthesized, and characterized, in which a DTCC derivative substituted by four p-octyloxyphenyl groups was used as the electron-donating core and two strongly electron-withdrawing 3-(dicyanomethylene)indan-1-one moieties were used as the terminal acceptors. It was found that DTCC–IC has strong light-capturing ability in the range of 500–720 nm and exhibits an impressively high molar absorption coefficient of 2.24 × 105 M−1 cm−1 at 669 nm owing to effective intramolecular charge transfer and a strong D–A effect. Cyclic voltammetry measurements indicated that the HOMO and LUMO energy levels of DTCC–IC are −5.50 and −3.87 eV, respectively. More importantly, a high electron mobility of 2.17 × 10−3 cm2 V−1 s−1 was determined by the space-charge-limited current method; this electron mobility can be comparable to that of fullerene derivative acceptors (μe ∼ 10−3 cm2 V−1 s−1). To investigate its application potential in non-fullerene solar cells, we fabricated organic solar cells (OSCs) by blending a DTCC–IC acceptor with a PTB7-Th donor under various conditions. The results suggest that the optimized device exhibits a maximum power conversion efficiency (PCE) of up to 6% and a rational high VOC of 0.95 V. These findings demonstrate that the ladder-type DTCC core is a promising building block for the development of high-mobility n-type organic semiconductors for OSCs.", "what has research problem ?", "Organic solar cells", 1412.0, 1431.0], ["The fifth phase of the Coupled Model Intercomparison Project (CMIP5) will produce a state-of-the-art multimodel dataset designed to advance our knowledge of climate variability and climate change. Researchers worldwide are analyzing the model output and will produce results likely to underlie the forthcoming Fifth Assessment Report by the Intergovernmental Panel on Climate Change. Unprecedented in scale and attracting interest from all major climate modeling groups, CMIP5 includes “long term” simulations of twentieth-century climate and projections for the twenty-first century and beyond. Conventional atmosphere–ocean global climate models and Earth system models of intermediate complexity are for the first time being joined by more recently developed Earth system models under an experiment design that allows both types of models to be compared to observations on an equal footing. Besides the long-term experiments, CMIP5 calls for an entirely new suite of “near term” simulations focusing on recent decades...", "what has research problem ?", "experiment design", 792.0, 809.0], ["BioNLP Open Shared Tasks (BioNLP-OST) is an international competition organized to facilitate development and sharing of computational tasks of biomedical text mining and solutions to them. 
For BioNLP-OST 2019, we introduced a new mental health informatics task called \u201cRDoC Task\u201d, which is composed of two subtasks: information retrieval and sentence extraction through National Institutes of Mental Health\u2019s Research Domain Criteria framework. Five and four teams around the world participated in the two tasks, respectively. According to the performance on the two tasks, we observe that there is room for improvement for text mining on brain research and mental illness.", "what has research problem ?", "Information Retrieval", 317.0, 338.0], ["Gene Ontology (GO) annotation is a common task among model organism databases (MODs) for capturing gene function data from journal articles. It is a time-consuming and labor-intensive task, and is thus often considered as one of the bottlenecks in literature curation. There is a growing need for semiautomated or fully automated GO curation techniques that will help database curators to rapidly and accurately identify gene function information in full-length articles. Despite multiple attempts in the past, few studies have proven to be useful with regard to assisting real-world GO curation. The shortage of sentence-level training data and opportunities for interaction between text-mining developers and GO curators has limited the advances in algorithm development and corresponding use in practical circumstances. To this end, we organized a text-mining challenge task for literature-based GO annotation in BioCreative IV. More specifically, we developed two subtasks: (i) to automatically locate text passages that contain GO-relevant information (a text retrieval task) and (ii) to automatically identify relevant GO terms for the genes in a given article (a concept-recognition task). With the support from five MODs, we provided teams with >4000 unique text passages that served as the basis for each GO annotation in our task data. Such evidence text information has long been recognized as critical for text-mining algorithm development but was never made available because of the high cost of curation. In total, seven teams participated in the challenge task. From the team results, we conclude that the state of the art in automatically mining GO terms from literature has improved over the past decade while much progress is still needed for computer-assisted GO curation. Future work should focus on addressing remaining technical challenges for improved performance of automatic GO concept recognition and incorporating practical benefits of text-mining tools into real-world GO annotation. Database URL: http://www.biocreative.org/tasks/biocreative-iv/track-4-GO/.", "what has research problem ?", "text retrieval", 1060.0, 1074.0], ["This paper presents the preparation, resources, results and analysis of the Infectious Diseases (ID) information extraction task, a main task of the BioNLP Shared Task 2011. The ID task represents an application and extension of the BioNLP'09 shared task event extraction approach to full papers on infectious diseases. Seven teams submitted final results to the task, with the highest-performing system achieving 56% F-score in the full task, comparable to state-of-the-art performance in the established BioNLP'09 task. 
The results indicate that event extraction methods generalize well to new domains and full-text publications and are applicable to the extraction of events relevant to the molecular mechanisms of infectious diseases.", "what has research problem ?", "Infectious Diseases (ID) information extraction task", NaN, NaN], ["We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. Our approach extends BERT by (1) masking contiguous random spans, rather than random tokens, and (2) training the span boundary representations to predict the entire content of the masked span, without relying on the individual token representations within it. SpanBERT consistently outperforms BERT and our better-tuned baselines, with substantial gains on span selection tasks such as question answering and coreference resolution. In particular, with the same training data and model size as BERT-large, our single model obtains 94.6% and 88.7% F1 on SQuAD 1.1 and 2.0 respectively. We also achieve a new state of the art on the OntoNotes coreference resolution task (79.6% F1), strong performance on the TACRED relation extraction benchmark, and even gains on GLUE.", "what has research problem ?", "pre-training method that is designed to better represent and predict spans of text", 23.0, 105.0], ["Multi-agent systems (MASs) have received tremendous attention from scholars in different disciplines, including computer science and civil engineering, as a means to solve complex problems by subdividing them into smaller tasks. The individual tasks are allocated to autonomous entities, known as agents. Each agent decides on a proper action to solve the task using multiple inputs, e.g., history of actions, interactions with its neighboring agents, and its goal. The MAS has found multiple applications, including modeling complex systems, smart grids, and computer networks. Despite their wide applicability, there are still a number of challenges faced by MAS, including coordination between agents, security, and task allocation. This survey provides a comprehensive discussion of all aspects of MAS, starting from definitions, features, applications, challenges, and communications to evaluation. A classification on MAS applications and challenges is provided along with references for further studies. We expect this paper to serve as an insightful and comprehensive resource on the MAS for researchers and practitioners in the area.", "what has research problem ?", "Multi-agent systems", 0.0, 19.0], ["Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. However, these visual transformers are pre-trained with hundreds of millions of images using an expensive infrastructure, thereby limiting their adoption by the larger community. In this work, with an adequate training scheme, we produce a competitive convolution-free transformer by training on Imagenet only. We train it on a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop evaluation) on ImageNet with no external data. We share our code and models to accelerate community advances on this line of research. Additionally, we introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention. 
We show the interest of this token-based distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets for both Imagenet (where we obtain up to 84.4% accuracy) and when transferring to other tasks.", "what has research problem ?", "Image Classification", 108.0, 128.0], ["NLTK, the Natural Language Toolkit, is a suite of open source program modules, tutorials and problem sets, providing ready-to-use computational linguistics courseware. NLTK covers symbolic and statistical natural language processing, and is interfaced to annotated corpora. Students augment and replace existing components, learn structured programming by example, and manipulate sophisticated models from the outset.", "what has research problem ?", "ready-to-use computational linguistics courseware", 117.0, 166.0], ["With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking.", "what has research problem ?", "Natural Language Inference", 942.0, 968.0], ["We present a method for precise eye localization that uses two Support Vector Machines trained on properly selected Haar wavelet coefficients. The evaluation of our technique on many standard databases exhibits very good performance. Furthermore, we study the strong correlation between the eye localization error and the face recognition rate.", "what has research problem ?", "Eye localization", 32.0, 48.0], ["Active machine learning algorithms are used when large numbers of unlabeled examples are available and getting labels for them is costly (e.g. requiring consulting a human expert). Many conventional active learning algorithms focus on refining the decision boundary, at the expense of exploring new regions that the current hypothesis misclassifies. We propose a new active learning algorithm that balances such exploration with refining of the decision boundary by dynamically adjusting the probability to explore at each step. Our experimental results demonstrate improved performance on data sets that require extensive exploration while remaining competitive on data sets that do not. Our algorithm also shows significant tolerance of noise.", "what has research problem ?", "Active learning", 199.0, 214.0], ["The continuing development of enterprise resource planning (ERP) systems has been considered by many researchers and practitioners as one of the major IT innovations in this decade. ERP solutions seek to integrate and streamline business processes and their associated information and work flows. 
What makes this technology more appealing to organizations is increasing capability to integrate with the most advanced electronic and mobile commerce technologies. However, as is the case with any new IT field, research in the ERP area is still lacking and the gap in the ERP literature is huge. Attempts to fill this gap by proposing a novel taxonomy for ERP research. Also presents the current status with some major themes of ERP research relating to ERP adoption, technical aspects of ERP and ERP in IS curricula. The discussion presented on these issues should be of value to researchers and practitioners. Future research work will continue to survey other major areas presented in the taxonomy framework.", "what has research problem ?", "Enterprise resource planning", 30.0, 58.0], ["Bidirectional Encoder Representations from Transformers (BERT) represents the latest incarnation of pretrained language models which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-staged fine-tuning approach can further boost the quality of the generated summaries. Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings.", "what has research problem ?", "Text Summarization", 270.0, 288.0], ["Nanoscale biocompatible photoluminescence (PL) thermometers that can be used to accurately and reliably monitor intracellular temperatures have many potential applications in biology and medicine. Ideally, such nanothermometers should be functional at physiological pH across a wide range of ionic strengths, probe concentrations, and local environments. Here, we show that water-soluble N,S-co-doped carbon dots (CDs) exhibit temperature-dependent photoluminescence lifetimes and can serve as highly sensitive and reliable intracellular nanothermometers. PL intensity measurements indicate that these CDs have many advantages over alternative semiconductor- and CD-based nanoscale temperature sensors. Importantly, their PL lifetimes remain constant over wide ranges of pH values (5-12), CD concentrations (1.5 \u00d7 10-5 to 0.5 mg/mL), and environmental ionic strengths (up to 0.7 mol\u00b7L-1 NaCl). Moreover, they are biocompatible and nontoxic, as demonstrated by cell viability and flow cytometry analyses using NIH/3T3 and HeLa cell lines. N,S-CD thermal sensors also exhibit good water dispersibility, superior photo- and thermostability, extraordinary environment and concentration independence, high storage stability, and reusability-their PL decay curves at temperatures between 15 and 45 \u00b0C remained unchanged over seven sequential experiments. In vitro PL lifetime-based temperature sensing performed with human cervical cancer HeLa cells demonstrated the great potential of these nanosensors in biomedicine. 
Overall, N,S-doped CDs exhibit excitation-independent emission with strongly temperature-dependent monoexponential decay, making them suitable for both in vitro and in vivo luminescence lifetime thermometry.", "what has research problem ?", "Nanothermometer", NaN, NaN], ["We propose a simple yet robust stochastic answer network (SAN) that simulates multi-step reasoning in machine reading comprehension. Compared to previous work such as ReasoNet which used reinforcement learning to determine the number of steps, the unique feature is the use of a kind of stochastic prediction dropout on the answer module (final layer) of the neural network during the training. We show that this simple trick improves robustness and achieves results competitive to the state-of-the-art on the Stanford Question Answering Dataset (SQuAD), the Adversarial SQuAD, and the Microsoft MAchine Reading COmprehension Dataset (MS MARCO).", "what has research problem ?", "Question Answering", 519.0, 537.0], ["This paper presents our recent work on the design and development of a new, large scale dataset, which we name MS MARCO, for MAchine Reading COmprehension. This new dataset is aimed to overcome a number of well-known weaknesses of previous publicly available datasets for the same task of reading comprehension and question answering. In MS MARCO, all questions are sampled from real anonymized user queries. The context passages, from which answers in the dataset are derived, are extracted from real web documents using the most advanced version of the Bing search engine. The answers to the queries are human generated. Finally, a subset of these queries has multiple answers. We aim to release one million queries and the corresponding answers in the dataset, which, to the best of our knowledge, is the most comprehensive real-world dataset of its kind in both quantity and quality. We are currently releasing 100,000 queries with their corresponding answers to inspire work in reading comprehension and question answering along with gathering feedback from the research community.", "what has research problem ?", "Question Answering ", 1009.0, 1028.0], ["It is consensual that Enterprise Resource Planning (ERP) after a successful implementation has significant effects on the productivity of firm as well small and medium-sized enterprises (SMEs) recognized as fundamentally different environments compared to large enterprises. There are few reviews in the literature about the post-adoption phase and even fewer at SME level. Furthermore, to the best of our knowledge there is none with focus in ERP value stage. This review will fill this gap. It provides an updated bibliography of ERP publications published in the IS journal and conferences during the period of 2000 and 2012. A total of 33 articles from 21 journals and 12 conferences are reviewed. The main focus of this paper is to shed the light on the areas that lack sufficient research within the ERP in SME domain, in particular in ERP business value stage, suggest future research avenues, as well as, present the current research findings that could support researchers and practitioners when embarking on ERP projects.", "what has research problem ?", "Enterprise resource planning", 22.0, 50.0], ["Metabolic pathways are an important part of systems biology research since they illustrate complex interactions between metabolites, enzymes, and regulators. Pathway maps are drawn to elucidate metabolism or to set data in a metabolic context. 
We present MetaboMAPS, a web-based platform to visualize numerical data on individual metabolic pathway maps. Metabolic maps can be stored, distributed and downloaded in SVG-format. MetaboMAPS was designed for users without computational background and supports pathway sharing without strict conventions. In addition to existing applications that established standards for well-studied pathways, MetaboMAPS offers a niche for individual, customized pathways beyond common knowledge, supporting ongoing research by creating publication-ready visualizations of experimental data.", "what has research problem ?", "Visualization", NaN, NaN], ["This paper presents the main results achieved in the program eMadrid Program in Open Educational Resources, Free Software, Open Data, and about formats and standardization of content and services.", "what has research problem ?", "Open Education", NaN, NaN], ["This paper presents the task definition, resources, participation, and comparative results for the Web People Search task, which was organized as part of the SemEval-2007 evaluation exercise. This task consists of clustering a set of documents that mention an ambiguous person name according to the actual entities referred to using that name.", "what has research problem ?", "Web People Search task", 99.0, 121.0], ["Online news recommender systems aim to address the information explosion of news and make personalized recommendation for users. In general, news language is highly condensed, full of knowledge entities and common sense. However, existing methods are unaware of such external knowledge and cannot fully discover latent knowledge-level connections among news. The recommended results for a user are consequently limited to simple patterns and cannot be extended reasonably. To solve the above problem, in this paper, we propose a deep knowledge-aware network (DKN) that incorporates knowledge graph representation into news recommendation. DKN is a content-based deep recommendation framework for click-through rate prediction. The key component of DKN is a multi-channel and word-entity-aligned knowledge-aware convolutional neural network (KCNN) that fuses semantic-level and knowledge-level representations of news. KCNN treats words and entities as multiple channels, and explicitly keeps their alignment relationship during convolution. In addition, to address users' diverse interests, we also design an attention module in DKN to dynamically aggregate a user's history with respect to current candidate news. Through extensive experiments on a real online news platform, we demonstrate that DKN achieves substantial gains over state-of-the-art deep recommendation models. We also validate the efficacy of the usage of knowledge in DKN.", "what has research problem ?", "Recommender Systems", 12.0, 31.0], ["Graphs arise naturally in many real-world applications including social networks, recommender systems, ontologies, biology, and computational finance. Traditionally, machine learning models for graphs have been mostly designed for static graphs. However, many applications involve evolving graphs. This introduces important challenges for learning and inference since nodes, attributes, and edges change over time. In this survey, we review the recent advances in representation learning for dynamic graphs, including dynamic knowledge graphs. 
We describe existing models from an encoder-decoder perspective, categorize these encoders and decoders based on the techniques they employ, and analyze the approaches in each category. We also review several prominent applications and widely used datasets, and highlight directions for future research.", "what has research problem ?", "review the recent advances in representation learning for dynamic graphs", 434.0, 506.0], ["Over the past decade, Enterprise Resource Planning systems (ERP) have become one of the most important developments in the corporate use of information technology. ERP implementations are usually large, complex projects, involving large groups of people and other resources, working together under considerable time pressure and facing many unforeseen developments. In order for an organization to compete in this rapidly expanding and integrated marketplace, ERP systems must be employed to ensure access to an efficient, effective, and highly reliable information infrastructure. Despite the benefits that can be achieved from a successful ERP system implementation, there is evidence of high failure in ERP implementation projects. Too frequently key development practices are ignored and early warning signs that lead to project failure are not understood. Identifying project success and failure factors and their consequences as early as possible can provide valuable clues to help project managers improve their chances of success. It is the long-range goal of our research to shed light on these factors and to provide a tool that project managers can use to help better manage their software development projects. This paper will present a review of the general background to our work; the results from the current research and conclude with a discussion of the findings thus far. The findings will include a list of 23 unique Critical Success Factors identified throughout the literature, which we believe to be essential for Project Managers. The implications of these results will be discussed along with the lessons learnt.", "what has research problem ?", "Enterprise resource planning", 22.0, 50.0], ["Abstract The essential oil content of Artemisia herba-alba Asso decreased along the drying period from 2.5 % to 1.8 %. Conversely, the composition of the essential oil was not qualitatively affected by the drying process. The same principal components were found in all essential oils analyzed, such as α-thujone (13.0 – 22.7 %), β-thujone (18.0 – 25.0 %), camphor (8.6 – 13 %), 1,8-cineole (7.1 – 9.4 %), chrysanthenone (6.7 – 10.9 %), terpinen-4-ol (3.4 – 4.7 %). Quantitatively, during the air-drying process, the content of some components decreased slightly such as α-thujone (from 22.7 to 15.9 %) and 1,8-cineole (from 9.4 to 7.1 %), while the amount of other compounds increased such as chrysanthenone (from 6.7 to 10.9 %), borneol (from 0.8 to 1.5 %), germacrene-D (from 1.0 to 2.4 %) and spathulenol (from 0.8 to 1.5 %). The chemical composition of the oil was more affected by oven-drying the plant material at 35°C. α-Thujone and β-thujone decreased to 13.0 % and 18.0 % respectively, while the percentage of camphor, germacrene-D and spathulenol increased to 13.0 %, 5.5 % and 3.7 %, respectively.", "what has research problem ?", "Oil", 23.0, 26.0], ["Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). 
The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new \"Colossal Clean Crawled Corpus\", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.", "what has research problem ?", "transfer learning", 0.0, 17.0], ["Coastal safety may be influenced by climate change, as changes in extreme surge levels and wave extremes may increase the vulnerability of dunes and other coastal defenses. In the North Sea, an area already prone to severe flooding, these high surge levels and waves are generated by low atmospheric pressure and severe wind speeds during storm events. As a result of the geometry of the North Sea, not only the maximum wind speed is relevant, but also wind direction. Climate change could change maximum wind conditions, with potentially negative effects for coastal safety. Here, we use an ensemble of 12 Coupled Model Intercomparison Project Phase 5 (CMIP5) General Circulation Models (GCMs) and diagnose the effect of two climate scenarios (rcp4.5 and rcp8.5) on annual maximum wind speed, wind speeds with lower return frequencies, and the direction of these annual maximum wind speeds. The 12 selected CMIP5 models do not project changes in annual maximum wind speed and in wind speeds with lower return frequencies; however, we do find an indication that the annual extreme wind events are coming more often from western directions. Our results are in line with the studies based on CMIP3 models and do not confirm the statement based on some reanalysis studies that there is a climate\u2010change\u2010related upward trend in storminess in the North Sea area.", "what has research problem ?", "North sea", 180.0, 189.0], ["In recent years, the problem of scene text extraction from images has received extensive attention and significant progress. However, text extraction from scholarly figures such as plots and charts remains an open problem, in part due to the difficulty of locating irregularly placed text lines. To the best of our knowledge, literature has not described the implementation of a text extraction system for scholarly figures that adapts deep convolutional neural networks used for scene text detection. In this paper, we propose a text extraction approach for scholarly figures that forgoes preprocessing in favor of using a deep convolutional neural network for text line localization. Our system uses a publicly available scene text detection approach whose network architecture is well suited to text extraction from scholarly figures. Training data are derived from charts in arXiv papers which are extracted using Allen Institute's pdffigures tool. Since this tool analyzes PDF data as a container format in order to extract text location through the mechanisms which render it, we were able to gather a large set of labeled training samples. 
We show significant improvement from methods in the literature, and discuss the structural changes of the text extraction pipeline.", "what has research problem ?", "text extraction from images ", 38.0, 66.0], ["Abstract Flower-like palladium nanoclusters (FPNCs) are electrodeposited onto graphene electrode that are prepared by chemical vapor deposition (CVD). The CVD graphene layer is transferred onto a poly(ethylene naphthalate) (PEN) film to provide a mechanical stability and flexibility. The surface of the CVD graphene is functionalized with diaminonaphthalene (DAN) to form flower shapes. Palladium nanoparticles act as templates to mediate the formation of FPNCs, which increase in size with reaction time. The population of FPNCs can be controlled by adjusting the DAN concentration as functionalization solution. These FPNCs_CG electrodes are sensitive to hydrogen gas at room temperature. The sensitivity and response time as a function of the FPNCs population are investigated, resulted in improved performance with increasing population. Furthermore, the minimum detectable level (MDL) of hydrogen is 0.1 ppm, which is at least 2 orders of magnitude lower than that of chemical sensors based on other Pd-based hybrid materials.", "what has research problem ?", "Chemical sensors", 974.0, 990.0], ["Is there a \u201cmiddle-income trap\u201d? Theory suggests that the determinants of growth at low and high income levels may be different. If countries struggle to transition from growth strategies that are effective at low income levels to growth strategies that are effective at high income levels, they may stagnate at some middle income level; this phenomenon can be thought of as a \u201cmiddle-income trap.\u201d Defining income levels based on per capita gross domestic product relative to the United States, we do not find evidence for (unusual) stagnation at any particular middle income level. However, we do find evidence that the determinants of growth at low and high income levels differ. These findings suggest a mixed conclusion: middle-income countries may need to change growth strategies in order to transition smoothly to high income growth strategies, but this can be done smoothly and does not imply the existence of a middle-income trap.", "what has research problem ?", "Middle-Income Trap", 12.0, 30.0], ["Transformer-based models consist of interleaved feed-forward blocks - that capture content meaning, and relatively more expensive self-attention blocks - that capture context meaning. In this paper, we explored trade-offs and ordering of the blocks to improve upon the current Transformer architecture and proposed PAR Transformer. It needs 35% lower compute time than Transformer-XL achieved by replacing ~63% of the self-attention blocks with feed-forward blocks, and retains the perplexity on WikiText-103 language modelling benchmark. We further validated our results on text8 and enwiki8 datasets, as well as on the BERT model.", "what has research problem ?", "Language Modelling", 509.0, 527.0], ["Enterprise Resource Planning (ERP) application is often viewed as a strategic investment that can provide significant competitive advantage with positive return thus contributing to the firms' revenue and growth. Despite such strategic importance given to ERP the implementation success to achieve the desired goal has been viewed disappointing. There have been numerous industry stories about failures of ERP initiatives. 
There have also been stories reporting on the significant benefits achieved from successful ERP initiatives. This study reviews the industry and academic literature on ERP results and identifies possible trends or factors which may help future ERP initiatives achieve greater success and less failure. The purpose of this study is to review the industry and academic literature on ERP results, identify and discuss critical success factors which may help future ERP initiatives achieve greater success and less failure.", "what has research problem ?", "Enterprise resource planning", 0.0, 28.0], ["In this paper, we present SemEval-2020 Task 4, Commonsense Validation and Explanation (ComVE), which includes three subtasks, aiming to evaluate whether a system can distinguish a natural language statement that makes sense to humans from one that does not, and provide the reasons. Specifically, in our first subtask, the participating systems are required to choose from two natural language statements of similar wording the one that makes sense and the one that does not. The second subtask additionally asks a system to select the key reason from three options why a given statement does not make sense. In the third subtask, a participating system needs to generate the reason automatically. 39 teams submitted their valid systems to at least one subtask. For Subtask A and Subtask B, top-performing teams have achieved results close to human performance. However, for Subtask C, there is still a considerable gap between system and human performance. The dataset used in our task can be found at https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation.", "what has research problem ?", "ComVE", 83.0, 88.0], ["Abstract While thousands of ontologies exist on the web, a unified system for handling online ontologies – in particular with respect to discovery, versioning, access, quality-control, mappings – has not yet surfaced and users of ontologies struggle with many challenges. In this paper, we present an online ontology interface and augmented archive called DBpedia Archivo, that discovers, crawls, versions and archives ontologies on the DBpedia Databus. Based on this versioned crawl, different features, quality measures and, if possible, fixes are deployed to handle and stabilize the changes in the found ontologies at web-scale. A comparison to existing approaches and ontology repositories is given.", "what has research problem ?", "Unified System for handling online Ontologies", 59.0, 104.0], ["This paper presents a new eye localization method via Multiscale Sparse Dictionaries (MSD). We built a pyramid of dictionaries that models context information at multiple scales. Eye locations are estimated at each scale by fitting the image through sparse coefficients of the dictionary. By using context information, our method is robust to various eye appearances. The method also works efficiently since it avoids sliding a search window in the image during localization. The experiments in the BioID database prove the effectiveness of our method.", "what has research problem ?", "Eye localization", 26.0, 42.0], ["We describe ParsCit, a freely available, open-source implementation of a reference string parsing package. At the core of ParsCit is a trained conditional random field (CRF) model used to label the token sequences in the reference string. A heuristic model wraps this core with added functionality to identify reference strings from a plain text file, and to retrieve the citation contexts. 
The package comes with utilities to run it as a web service or as a standalone utility. We compare ParsCit on three distinct reference string datasets and show that it compares well with other previously published work.", "what has research problem ?", "Open-source implementation", 41.0, 67.0], ["This paper applies the quantile fixed effects technique in exploring the CO2 environmental Kuznets curve within two groups of economic development (OECD and Non-OECD countries) and six geographical regions: West, East Europe, Latin America, East Asia, West Asia and Africa. A comparison of the findings resulting from the use of this technique with those of the conventional fixed effects method reveals that the latter may depict a flawed summary of the prevailing income–emissions nexus depending on the conditional quantile examined. We also extend the Machado and Mata decomposition method to the Kuznets curve framework to explore the most important explanations for the CO2 emissions gap between OECD and Non-OECD countries. We find a statistically significant OECD-Non-OECD emissions gap and this contracts as we ascend the emissions distribution. The decomposition further reveals that there are non-income related factors working against the Non-OECD group's greening. We tentatively conclude that deliberate and systematic mitigation of current CO2 emissions in the Non-OECD group is required. JEL Classification: Q56, Q58.", "what has research problem ?", "CO2 emissions", 670.0, 683.0], ["We investigated the paraclinical profile of monosymptomatic optic neuritis (ON) and its prognosis for multiple sclerosis (MS). The correct identification of patients with very early MS carrying a high risk for conversion to clinically definite MS is important when new treatments are emerging that hopefully will prevent or at least delay future MS. We conducted a prospective single observer and population-based study of 147 consecutive patients (118 women, 80%) with acute monosymptomatic ON referred from a catchment area of 1.6 million inhabitants between January 1, 1990 and December 31, 1995. Of 116 patients examined with brain MRI, 64 (55%) had three or more high signal lesions, 11 (9%) had one to two high signal lesions, and 41 (35%) had a normal brain MRI. Among 143 patients examined, oligoclonal IgG (OB) bands in CSF only were demonstrated in 103 patients (72%). Of 146 patients analyzed, 68 (47%) carried the DR15,DQ6,Dw2 haplotype. During the study period, 53 patients (36%) developed clinically definite MS. The presence of three or more MS-like MRI lesions as well as the presence of OB were strongly associated with the development of MS (p < 0.001). Also, Dw2 phenotype was related to the development of MS (p = 0.046). MRI and CSF studies in patients with ON give clinically important information regarding the risk for future MS.", "what has research problem ?", "Multiple sclerosis", 101.0, 119.0], ["Cross-domain named entity recognition (NER) models are able to cope with the scarcity issue of NER samples in target domains. However, most of the existing NER benchmarks lack domain-specialized entity types or do not focus on a certain domain, leading to a less effective cross-domain evaluation. To address these obstacles, we introduce a cross-domain NER dataset (CrossNER), a fully-labeled collection of NER data spanning over five diverse domains with specialized entity categories for different domains. 
Additionally, we also provide a domain-related corpus since using it to continue pre-training language models (domain-adaptive pre-training) is effective for the domain adaptation. We then conduct comprehensive experiments to explore the effectiveness of leveraging different levels of the domain corpus and pre-training strategies to do domain-adaptive pre-training for the cross-domain task. Results show that focusing on the fractional corpus containing domain-specialized entities and utilizing a more challenging pre-training strategy in domain-adaptive pre-training are beneficial for the NER domain adaptation, and our proposed method can consistently outperform existing cross-domain NER baselines. Nevertheless, experiments also illustrate the challenge of this cross-domain NER task. We hope that our dataset and baselines will catalyze research in the NER domain adaptation area. The code and data are available at this https URL.", "what has research problem ?", "Cross-Domain Named Entity Recognition", 0.0, 37.0], ["While machine translation has traditionally relied on large amounts of parallel corpora, a recent research line has managed to train both Neural Machine Translation (NMT) and Statistical Machine Translation (SMT) systems using monolingual corpora only. In this paper, we identify and address several deficiencies of existing unsupervised SMT approaches by exploiting subword information, developing a theoretically well founded unsupervised tuning method, and incorporating a joint refinement procedure. Moreover, we use our improved SMT system to initialize a dual NMT model, which is further fine-tuned through on-the-fly back-translation. Together, we obtain large improvements over the previous state-of-the-art in unsupervised machine translation. For instance, we get 22.5 BLEU points in English-to-German WMT 2014, 5.5 points more than the previous best unsupervised system, and 0.5 points more than the (supervised) shared task winner back in 2014.", "what has research problem ?", "Unsupervised Machine Translation", 719.0, 751.0], ["Significance Drug interactions, including drug\u2013drug interactions (DDIs) and drug\u2013food constituent interactions, can trigger unexpected pharmacological effects such as adverse drug events (ADEs). Several existing methods predict drug interactions, but require detailed, but often unavailable drug information as inputs, such as drug targets. To this end, we present a computational framework DeepDDI that accurately predicts DDI types for given drug pairs and drug\u2013food constituent pairs using only name and structural information as inputs. We show four applications of DeepDDI to better understand drug interactions, including prediction of DDI mechanisms causing ADEs, suggestion of alternative drug members for the intended pharmacological effects without negative health effects, prediction of the effects of food constituents on interacting drugs, and prediction of bioactivities of food constituents. Drug interactions, including drug\u2013drug interactions (DDIs) and drug\u2013food constituent interactions (DFIs), can trigger unexpected pharmacological effects, including adverse drug events (ADEs), with causal mechanisms often unknown. Several computational methods have been developed to better understand drug interactions, especially for DDIs. However, these methods do not provide sufficient details beyond the chance of DDI occurrence, or require detailed drug information often unavailable for DDI prediction. 
Here, we report development of a computational framework DeepDDI that uses names of drug–drug or drug–food constituent pairs and their structural information as inputs to accurately generate 86 important DDI types as outputs of human-readable sentences. DeepDDI uses deep neural network with its optimized prediction performance and predicts 86 DDI types with a mean accuracy of 92.4% using the DrugBank gold standard DDI dataset covering 192,284 DDIs contributed by 191,878 drug pairs. DeepDDI is used to suggest potential causal mechanisms for the reported ADEs of 9,284 drug pairs, and also predict alternative drug candidates for 62,707 drug pairs having negative health effects. Furthermore, DeepDDI is applied to 3,288,157 drug–food constituent pairs (2,159 approved drugs and 1,523 well-characterized food constituents) to predict DFIs. The effects of 256 food constituents on pharmacological effects of interacting drugs and bioactivities of 149 food constituents are predicted. These results suggest that DeepDDI can provide important information on drug prescription and even dietary suggestions while taking certain drugs and also guidelines during drug development.", "what has research problem ?", "DDI prediction", 1401.0, 1415.0], ["Smart City Control Rooms are mainly focused on Dashboards which are in turn created by using the so-called Dashboard Builders tools or generated custom. For a city the production of Dashboards is not something that is performed once forever, and it is a continuous working task for improving city monitoring, to follow extraordinary events and/or activities, to monitor critical conditions and cases. Thus, relevant complexities are due to the data aggregation architecture and to the identification of modalities to present data and their identification, prediction, etc., to arrive at producing high level representations that can be used by decision makers. In this paper, the architecture of a Dashboard Builder for creating Smart City Control Rooms is presented. As a validation and test, it has been adopted for generating the dashboards in Florence city and other cities in Tuscany area. The solution proposed has been developed in the context of REPLICATE H2020 European Commission Flagship project on Smart City and Communities.", "what has research problem ?", "Smart city control rooms", 0.0, 24.0], ["Shape Expressions (ShEx) was defined as a human-readable and concise language to describe and validate RDF. In the last years, the usage of ShEx has grown and more functionalities are being demanded. One such functionality is to ensure interoperability between ShEx schemas and domain models in programming languages. In this paper, we present ShEx-Lite, a tabular based subset of ShEx that allows to generate domain object models in different object-oriented languages. Although the current system generates Java and Python, it offers a public interface so anyone can implement code generation in other programming languages. The system has been employed in a workflow where the shape expressions are used both to define constraints over an ontology and to generate domain objects that will be part of a clean architecture style.", "what has research problem ?", "Code Generation", 579.0, 594.0], ["We introduce a solution-processed copper tin sulfide (CTS) thin film to realize high-performance of thin-film transistors (TFT) by optimizing the CTS precursor solution concentration.
", "what keywords ?", "Transistors", 113.0, 124.0], ["This paper reports on the experimental and theoretical characterization of RF microelectromechanical systems (MEMS) switches for high-power applications. First, we investigate the problem of self-actuation due to high RF power and we demonstrate switches that do not self-actuate or catastrophically fail with a measured RF power of up to 5.5 W. Second, the problem of switch stiction to the down state as a function of the applied RF power is also theoretically and experimentally studied. Finally, a novel switch design with a top electrode is introduced and its advantages related to RF power-handling capabilities are presented. By applying this technology, we demonstrate hot-switching measurements with a maximum power of 0.8 W. Our results, backed by theory and measurements, illustrate that careful design can significantly improve the power-handling capabilities of RF MEMS switches.", "what keywords ?", "High-power applications", 129.0, 152.0], ["Compositional engineering of recently arising methylammonium (MA) lead (Pb) halide based perovskites is an essential approach for finding better perovskite compositions to resolve still remaining issues of toxic Pb, long-term instability, etc. In this work, we carried out crystallographic, morphological, optical, and photovoltaic characterization of compositional MASn0.6Pb0.4I3-xBrx by gradually introducing bromine (Br) into parental Pb-Sn binary perovskite (MASn0.6Pb0.4I3) to elucidate its function in Sn-rich (Sn:Pb = 6:4) perovskites. We found significant advances in crystallinity and dense coverage of the perovskite films by inserting the Br into Sn-rich perovskite lattice. Furthermore, light-intensity-dependent open circuit voltage (Voc) measurement revealed much suppressed trap-assisted recombination for a proper Br-added (x = 0.4) device. These contributed to attaining the unprecedented power conversion efficiency of 12.1% and Voc of 0.78 V, which are, to the best of our knowledge, the highest performance in the Sn-rich (\u226560%) perovskite solar cells reported so far. In addition, impressive enhancement of photocurrent-output stability and little hysteresis were found, which paves the way for the development of environmentally benign (Pb reduction), stable monolithic tandem cells using the developed low band gap (1.24-1.26 eV) MASn0.6Pb0.4I3-xBrx with suggested composition (x = 0.2-0.4).", "what keywords ?", "Sn-rich", 508.0, 515.0], ["Radioresistant hypoxic cells may contribute to the failure of radiation therapy in controlling certain tumors. Some studies have suggested the radiosensitizing effect of paclitaxel. The poly(D,L-lactide-co-glycolide)(PLGA) nanoparticles containing paclitaxel were prepared by o/w emulsification-solvent evaporation method. The physicochemical characteristics of the nanoparticles (i.e. encapsulation efficiency, particle size distribution, morphology, in vitro release) were studied. The morphology of the two human tumor cell lines: a carcinoma cervicis (HeLa) and a hepatoma (HepG2), treated with paclitaxel-loaded nanoparticles was photomicrographed. Flow cytometry was used to quantify the number of the tumor cells held in the G2/M phase of the cell cycle. The cellular uptake of nanoparticles was evaluated by transmission electronic microscopy. Cell viability was determined by the ability of single cell to form colonies in vitro. The prepared nanoparticles were spherical in shape with size between 200nm and 800nm. The encapsulation efficiency was 85.5\uff05. 
The release behaviour of paclitaxel from the nanoparticles exhibited a biphasic pattern characterised by a fast initial release during the first 24 h, followed by a slower and continuous release. Co-culture of the two tumor cell lines with paclitaxel-loaded nanoparticles demonstrated that the cell morphology was changed and the released paclitaxel retained its bioactivity to block cells in the G2/M phase. The cellular uptake of nanoparticles was observed. The free paclitaxel and paclitaxel-loaded nanoparticles effectively sensitized hypoxic HeLa and HepG2 cells to radiation. Under this experimental condition, the radiosensitization of paclitaxel-loaded nanoparticles was more significant than that of free paclitaxel. Keywords: Paclitaxel; Drug delivery; Nanoparticle; Radiotherapy; Hypoxia; Human tumor cells; cellular uptake", "what keywords ?", "Cellular uptake", 766.0, 781.0], ["Abstract New resonant emission of dispersive waves by oscillating solitary structures in optical fiber cavities is considered analytically and numerically. The pulse propagation is described in the framework of the Lugiato-Lefever equation when a Hopf-bifurcation can result in the formation of oscillating dissipative solitons. The resonance condition for the radiation of the dissipative oscillating solitons is derived and it is demonstrated that the predicted resonances match the spectral lines observed in numerical simulations perfectly. The complex recoil of the radiation on the soliton dynamics is discussed. The reported effect can have importance for the generation of frequency combs in nonlinear microring resonators.", "what keywords ?", "Microring resonators", 710.0, 730.0], ["Lightweight, stretchable, and wearable strain sensors have recently been widely studied for the development of health monitoring systems, human-machine interfaces, and wearable devices. Herein, highly stretchable polymer elastomer-wrapped carbon nanocomposite piezoresistive core-sheath fibers are successfully prepared using a facile and scalable one-step coaxial wet-spinning assembly approach. The carbon nanotube-polymeric composite core of the stretchable fiber is surrounded by an insulating sheath, similar to conventional cables, and shows excellent electrical conductivity with a low percolation threshold (0.74 vol %). The core-sheath elastic fibers are used as wearable strain sensors, exhibiting ultra-high stretchability (above 300%), excellent stability (>10 000 cycles), fast response, low hysteresis, and good washability. Furthermore, the piezoresistive core-sheath fiber possesses bending-insensitiveness and negligible torsion-sensitive properties, and the strain sensing performance of piezoresistive fibers maintains a high degree of stability under harsh conditions. On the basis of this high level of performance, the fiber-shaped strain sensor can accurately detect both subtle and large-scale human movements by embedding it in gloves and garments or by directly attaching it to the skin. The current results indicate that the proposed stretchable strain sensor has many potential applications in health monitoring, human-machine interfaces, soft robotics, and wearable electronics.", "what keywords ?", "wet-spinning ", 365.0, 378.0], ["This paper presents a new RF MEMS tunable capacitor based on the zipper principle and with interdigitated RF and actuation electrodes. The electrode configuration prevents dielectric charging under high actuation voltages. 
It also increases the capacitance ratio and the tunable analog range. The effect of the residual stress on the capacitance tunability is also investigated. Two devices with different interdigital RF and actuation electrodes are fabricated on an alumina substrate and result in a capacitance ratio around 3.0 (Cmin = 70–90 fF, Cmax = 240–270 fF) and with a Q > 100 at 3 GHz. This design can be used in wideband tunable filters and matching networks.", "what keywords ?", "RF MEMS", 26.0, 33.0], ["Abstract With the increased dependence on online learning platforms and educational resource repositories, a unified representation of digital learning resources becomes essential to support a dynamic and multi-source learning experience. We introduce the EduCOR ontology, an educational, career-oriented ontology that provides a foundation for representing online learning resources for personalised learning systems. The ontology is designed to enable learning material repositories to offer learning path recommendations, which correspond to the user’s learning goals and preferences, academic and psychological parameters, and labour-market skills. We present the multiple patterns that compose the EduCOR ontology, highlighting its cross-domain applicability and integrability with other ontologies. A demonstration of the proposed ontology on the real-life learning platform eDoer is discussed as a use case. We evaluate the EduCOR ontology using both gold standard and task-based approaches. The comparison of EduCOR to three gold schemata, and its application in two use-cases, shows its coverage and adaptability to multiple OER repositories, which allows generating user-centric and labour-market oriented recommendations. Resource:"], ["An activated carbon supported α-molybdenum carbide catalyst (α-MoC1−x/AC) showed remarkable activity in the selective deoxygenation of guaiacol to substituted mono-phenols in low carbon number alcohol solvents.
", "what substrate ?", "guaiacol", 149.0, 157.0], ["Pd/Al2O3 catalysts coated with various thiolate self-assembled monolayers (SAMs) were used to direct the partial hydrogenation of 18-carbon polyunsaturated fatty acids, yielding a product stream enriched in monounsaturated fatty acids (with low saturated fatty acid content), a favorable result for increasing the oxidative stability of biodiesel. The uncoated Pd/Al2O3 catalyst quickly saturated all fatty acid reactants under hydrogenation conditions, but the addition of alkanethiol SAMs markedly increased the reaction selectivity to the monounsaturated product oleic acid to a level of 80\u201390%, even at conversions >70%. This effect, which is attributed to steric effects between the SAMs and reactants, was consistent with the relative consumption rates of linoleic and oleic acid using alkanethiol-coated and uncoated Pd/Al2O3 catalysts. With an uncoated Pd/Al2O3 catalyst, each fatty acid, regardless of its degree of saturation had a reaction rate of \u223c0.2 mol reactant consumed per mole of surface palladium per ...", "what substrate ?", "18-carbon polyunsaturated fatty acids", 130.0, 167.0], ["This study investigated atmospheric hydrodeoxygenation (HDO) of guaiacol over Ni2P-supported catalysts. Alumina, zirconia, and silica served as the supports of Ni2P catalysts. The physicochemical properties of these catalysts were surveyed by N2 physisorption, X-ray diffraction (XRD), CO chemisorption, H2 temperature-programmed reduction (H2-TPR), H2 temperature-programmed desorption (H2-TPD), and NH3 temperature-programmed desorption (NH3-TPD). The catalytic performance of these catalysts was tested in a continuous fixed-bed system. This paper proposes a plausible network of atmospheric guaiacol HDO, containing demethoxylation (DMO), demethylation (DME), direct deoxygenation (DDO), hydrogenation (HYD), transalkylation, and methylation. Pseudo-first-order kinetics analysis shows that the intrinsic activity declined in the following order: Ni2P/ZrO2 > Ni2P/Al2O3 > Ni2P/SiO2. Product selectivity at zero guaiacol conversion indicates that Ni2P/SiO2 promotes DMO and DDO routes, whereas Ni2P/ZrO2 and Ni2P/Al2O...", "what substrate ?", "guaiacol", 64.0, 72.0], ["Herein, a novel electrochemical glucose biosensor based on glucose oxidase (GOx) immobilized on a surface containing platinum nanoparticles (PtNPs) electrodeposited on poly(Azure A) (PAA) previously electropolymerized on activated screen-printed carbon electrodes (GOx-PtNPs-PAA-aSPCEs) is reported. The resulting electrochemical biosensor was validated towards glucose oxidation in real samples and further electrochemical measurement associated with the generated H2O2. The electrochemical biosensor showed an excellent sensitivity (42.7 \u03bcA mM\u22121 cm\u22122), limit of detection (7.6 \u03bcM), linear range (20 \u03bcM\u20132.3 mM), and good selectivity towards glucose determination. Furthermore, and most importantly, the detection of glucose was performed at a low potential (0.2 V vs. Ag). The high performance of the electrochemical biosensor was explained through surface exploration using field emission SEM, XPS, and impedance measurements. The electrochemical biosensor was successfully applied to glucose quantification in several real samples (commercial juices and a plant cell culture medium), exhibiting a high accuracy when compared with a classical spectrophotometric method. 
This electrochemical biosensor can be easily prepared and opens up a good alternative in the development of new sensitive glucose sensors.", "what substrate ?", "activated screen-printed carbon", 221.0, 252.0], ["We present a one-dimensional (1D) theoretical model for the design analysis of a micro thermal convective accelerometer (MTCA). Systematical design analysis was conducted on the sensor performance covering the sensor output, sensitivity, and power consumption. The sensor output was further normalized as a function of normalized input acceleration in terms of Rayleigh number"], ["Publishing studies using standardized, machine-readable formats will enable machines to perform meta-analyses on-demand. To build a semantically-enhanced technology that embodies these functions, we developed the Cooperation Databank (CoDa) – a databank that contains 2,641 studies on human cooperation (1958-2017) conducted in 78 countries involving 356,680 participants. Experts annotated these studies for 312 variables, including the quantitative results (13,959 effect sizes). We designed an ontology that defines and relates concepts in cooperation research and that can represent the relationships between individual study results. We have created a research platform that, based on the dataset, enables users to retrieve studies that test the relation of variables with cooperation, visualize these study results, and perform (1) meta-analyses, (2) meta-regressions, (3) estimates of publication bias, and (4) statistical power analyses for future studies. We leveraged the dataset with visualization tools that allow users to explore the ontology of concepts in cooperation research and to plot a citation network of the history of studies. CoDa offers a vision of how publishing studies in a machine-readable format can establish institutions and tools that improve scientific practices and knowledge.
", "what Database ?", "CoDa", 236.0, 240.0], ["OpenAIRE is the European Union initiative for an Open Access Infrastructure for Research in support of open scholarly communication and access to the research output of European funded projects and open access content from a network of institutional and disciplinary repositories. This article outlines the curation activities conducted in the OpenAIRE infrastructure, which employs a multi-level, multi-targeted approach: the publication and implementation of interoperability guidelines to assist in the local data curation processes, the data curation due to the integration of heterogeneous sources supporting different types of data, the inference of links to accomplish the publication research contextualization and data enrichment, and the end-user metadata curation that allows users to edit the attributes and provide links among the entities.", "what Database ?", "OpenAIRE", 0.0, 8.0], ["Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.", "what Database ?", "OpenBiodiv", 468.0, 478.0], ["In recent years, the development of recommender systems has attracted increased interest in several domains, especially in e-learning. Massive Open Online Courses have brought a revolution. However, deficiency in support and personalization in this context drive learners to lose their motivation and leave the learning process. To overcome this problem we focus on adapting learning activities to learners' needs using a recommender system.This paper attempts to provide an introduction to different recommender systems for e-learning settings, as well as to present our proposed recommender system for massive learning activities in order to provide learners with the suitable learning activities to follow the learning process and maintain their motivation. We propose a hybrid knowledge-based recommender system based on ontology for recommendation of e-learning activities to learners in the context of MOOCs. 
In the proposed recommendation approach, ontology is used to model and represent the knowledge about the domain model, learners and learning activities.", "what Development in ?", "OWL", NaN, NaN], ["Abstract Summary The COVID-19 crisis has elicited a global response by the scientific community that has led to a burst of publications on the pathophysiology of the virus. However, without coordinated efforts to organize this knowledge, it can remain hidden away from individual research groups. By extracting and formalizing this knowledge in a structured and computable form, as in the form of a knowledge graph, researchers can readily reason and analyze this information on a much larger scale. Here, we present the COVID-19 Knowledge Graph, an expansive cause-and-effect network constructed from scientific literature on the new coronavirus that aims to provide a comprehensive view of its pathophysiology. To make this resource available to the research community and facilitate its exploration and analysis, we also implemented a web application and released the KG in multiple standard formats. Availability and implementation The COVID-19 Knowledge Graph is publicly available under CC-0 license at https://github.com/covid19kg and https://bikmi.covid19-knowledgespace.de. Supplementary information Supplementary data are available at Bioinformatics online.", "what Domain ?", "COVID-19", 21.0, 29.0], ["Abstract Biological dinitrogen (N2) fixation exerts an important control on oceanic primary production by providing bioavailable form of nitrogen (such as ammonium) to photosynthetic microorganisms. N2 fixation is dominant in nutrient poor and warm surface waters. The Bay of Bengal is one such region where no measurements of phototrophic N2 fixation rates exist. The surface water of the Bay of Bengal is generally nitrate-poor and warm due to prevailing stratification and thus, could favour N2 fixation. We commenced the first N2 fixation study in the photic zone of the Bay of Bengal using 15N2 gas tracer incubation experiment during summer monsoon 2018. We collected seawater samples from four depths (covering the mixed layer depth of up to 75 m) at eight stations. N2 fixation rates varied from 4 to 75 μmol N m−2 d−1. The contribution of N2 fixation to primary production was negligible (<1%). However, the upper bound of observed N2 fixation rates is higher than the rates measured in other oceanic regimes, such as the Eastern Tropical South Pacific, the Tropical Northwest Atlantic, and the Equatorial and Southern Indian Ocean.", "what Domain ?", "Ocean", 1153.0, 1158.0], ["Abstract In a humanitarian response, leaders are often tasked with making large numbers of decisions, many of which have significant consequences, in situations of urgency and uncertainty. These conditions have an impact on the decision-maker (causing stress, for example) and subsequently on how decisions get made. Evaluations of humanitarian action suggest that decision-making is an area of weakness in many operations. There are examples of important decisions being missed and of decision-making processes that are slow and ad hoc. As part of a research process to address these challenges, this article considers literature from the humanitarian and emergency management sectors that relates to decision-making. 
It outlines what the literature tells us about the nature of the decisions that leaders at the country level are taking during humanitarian operations, and the circumstances under which these decisions are taken. It then considers the potential application of two different types of decision-making process in these contexts: rational/analytical decision-making and naturalistic decision-making. The article concludes with broad hypotheses that can be drawn from the literature and with the recommendation that these be further tested by academics with an interest in the topic.", "what Domain ?", "Humanitarian response", 14.0, 35.0], ["SUMMARY The MIPS mammalian protein-protein interaction database (MPPI) is a new resource of high-quality experimental protein interaction data in mammals. The content is based on published experimental evidence that has been processed by human expert curators. We provide the full dataset for download and a flexible and powerful web interface for users with various requirements.", "what Domain ?", "Protein-protein interaction", 27.0, 54.0], ["Publishing studies using standardized, machine-readable formats will enable machines to perform meta-analyses on-demand. To build a semantically-enhanced technology that embodies these functions, we developed the Cooperation Databank (CoDa) – a databank that contains 2,641 studies on human cooperation (1958-2017) conducted in 78 countries involving 356,680 participants. Experts annotated these studies for 312 variables, including the quantitative results (13,959 effect sizes). We designed an ontology that defines and relates concepts in cooperation research and that can represent the relationships between individual study results. We have created a research platform that, based on the dataset, enables users to retrieve studies that test the relation of variables with cooperation, visualize these study results, and perform (1) meta-analyses, (2) meta-regressions, (3) estimates of publication bias, and (4) statistical power analyses for future studies. We leveraged the dataset with visualization tools that allow users to explore the ontology of concepts in cooperation research and to plot a citation network of the history of studies. CoDa offers a vision of how publishing studies in a machine-readable format can establish institutions and tools that improve scientific practices and knowledge.
", "what Domain ?", "Human cooperation", 285.0, 302.0], ["In the past decade, much effort has been put into the visual representation of ontologies. However, present visualization strategies are not equipped to handle complex ontologies with many relations, leading to visual clutter and inefficient use of space. In this paper, we propose GLOW, a method for ontology visualization based on Hierarchical Edge Bundles. Hierarchical Edge Bundles is a new visually attractive technique for displaying relations in hierarchical data, such as concept structures formed by 'subclass-of' and 'type-of' relations. We have developed a visualization library based on OWL API, as well as a plug-in for Prot\u00e9g\u00e9, a well-known ontology editor. The displayed adjacency relations can be selected from an ontology using a set of common configurations, allowing for intuitive discovery of information. Our evaluation demonstrates that the GLOW visualization provides better visual clarity, and displays relations and complex ontologies better than the existing Prot\u00e9g\u00e9 visualization plug-in Jambalaya.", "what Domain ?", "ontology", 301.0, 309.0], ["Data sharing and reuse are crucial to enhance scientific progress and maximize return of investments in science. Although attitudes are increasingly favorable, data reuse remains difficult due to lack of infrastructures, standards, and policies. The FAIR (findable, accessible, interoperable, reusable) principles aim to provide recommendations to increase data reuse. Because of the broad interpretation of the FAIR principles, maturity indicators are necessary to determine the FAIRness of a dataset. In this work, we propose a reproducible computational workflow to assess data FAIRness in the life sciences. Our implementation follows principles and guidelines recommended by the maturity indicator authoring group and integrates concepts from the literature. In addition, we propose a FAIR balloon plot to summarize and compare dataset FAIRness. We evaluated the feasibility of our method on three real use cases where researchers looked for six datasets to answer their scientific questions. We retrieved information from repositories (ArrayExpress, Gene Expression Omnibus, eNanoMapper, caNanoLab, NanoCommons and ChEMBL), a registry of repositories, and a searchable resource (Google Dataset Search) via application program interfaces (API) wherever possible. With our analysis, we found that the six datasets met the majority of the criteria defined by the maturity indicators, and we showed areas where improvements can easily be reached. We suggest that use of standard schema for metadata and the presence of specific attributes in registries of repositories could increase FAIRness of datasets.", "what Domain ?", "Life Sciences", 597.0, 610.0], ["Research on visualizing Semantic Web data has yielded many tools that rely on information visualization techniques to better support the user in understanding and editing these data. Most tools structure the visualization according to the concept definitions and interrelations that constitute the ontology's vocabulary. Instances are often treated as somewhat peripheral information, when considered at all. These instances, that populate ontologies, represent an essential part of any knowledge base. Understanding instance-level data might be easier for users because of their higher concreteness, but instances will often be orders of magnitude more numerous than the concept definitions that give them machine-processable meaning. 
As such, the visualization of instance-level data poses different but real challenges. The authors present a visualization technique designed to enable users to visualize large instance sets and the relations that connect them. This visualization uses both node-link and adjacency matrix representations of graphs to visualize different parts of the data depending on their semantic and local structural properties. The technique was originally devised for simple social network visualization. The authors extend it to handle the richer and more complex graph structures of populated ontologies, exploiting ontological knowledge to drive the layout of, and navigation in, the representation embedded in a smooth zoomable environment.", "what Domain ?", "ontology", 298.0, 306.0], ["The growing maturity of Natural Language Processing (NLP) techniques and resources is dramatically changing the landscape of many application domains which are dependent on the analysis of unstructured data at scale. The finance domain, with its reliance on the interpretation of multiple unstructured and structured data sources and its demand for fast and comprehensive decision making is already emerging as a primary ground for the experimentation of NLP, Web Mining and Information Retrieval (IR) techniques for the automatic analysis of financial news and opinions online. This challenge focuses on advancing the state-of-the-art of aspect-based sentiment analysis and opinion-based Question Answering for the financial domain.", "what Domain ?", "financial domain", 716.0, 732.0], ["Instruments play an essential role in creating research data. Given the importance of instruments and associated metadata to the assessment of data quality and data reuse, globally unique, persistent and resolvable identification of instruments is crucial. The Research Data Alliance Working Group Persistent Identification of Instruments (PIDINST) developed a community-driven solution for persistent identification of instruments which we present and discuss in this paper. Based on an analysis of 10 use cases, PIDINST developed a metadata schema and prototyped schema implementation with DataCite and ePIC as representative persistent identifier infrastructures and with HZB (Helmholtz-Zentrum Berlin für Materialien und Energie) and BODC (British Oceanographic Data Centre) as representative institutional instrument providers. These implementations demonstrate the viability of the proposed solution in practice. Moving forward, PIDINST will further catalyse adoption and consolidate the schema by addressing new stakeholder requirements.", "what uses identifier system ?", "ePIC", 605.0, 609.0], ["The Open Researcher & Contributor ID (ORCID) registry presents a unique opportunity to solve the problem of author name ambiguity. At its core the value of the ORCID registry is that it crosses disciplines, organizations, and countries, linking ORCID with both existing identifier schemes as well as publications and other research activities. By supporting linkages across multiple datasets – clinical trials, publications, patents, datasets – such a registry becomes a switchboard for researchers and publishers alike in managing the dissemination of research findings. We describe use cases for embedding ORCID identifiers in manuscript submission workflows, prior work searches, manuscript citations, and repository deposition. 
We make recommendations for storing and displaying ORCID identifiers in publication metadata to include ORCID identifiers, with CrossRef integration as a specific example. Finally, we provide an overview of ORCID membership and integration tools and resources.", "what uses identifier system ?", "ORCID Identifiers", 608.0, 625.0], ["Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.", "what Relation types ?", "URL", 818.0, 821.0], ["Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.", "what Relation types ?", "Citation", NaN, NaN], ["Considering recent progress in NLP, deep learning techniques and biomedical language models there is a pressing need to generate annotated resources and comparable evaluation scenarios that enable the development of advanced biomedical relation extraction systems that extract interactions between drugs/chemical entities and genes, proteins or miRNAs. 
Building on the results and experience of the CHEMDNER, CHEMDNER patents and ChemProt tracks, we have posed the DrugProt track at BioCreative VII. The DrugProt track focused on the evaluation of automatic systems able to extract 13 different types of drug-genes/protein relations of importance to understand gene regulatory and pharmacological mechanisms. The DrugProt track addressed regulatory associations (direct/indirect, activator/inhibitor relations), certain types of binding associations (antagonist and agonist relations) as well as metabolic associations (substrate or product relations). To promote development of novel tools and offer a comparative evaluation scenario, we have released 61,775 manually annotated gene mentions, 65,561 chemical and drug mentions and a total of 24,526 relationships manually labeled by domain experts. A total of 30 teams submitted results for the DrugProt main track, while 9 teams submitted results for the large-scale text mining subtrack that required processing of over 2.3 million records. Teams obtained very competitive results, with predictions reaching F-measures of over 0.92 for some relation types (antagonist) and F-measures across all relation types close to 0.8. INTRODUCTION Among the most relevant biological and pharmacological relation types are those that involve (a) chemical compounds and drugs as well as (b) gene products including genes, proteins, miRNAs. A variety of associations between chemicals and genes/proteins are described in the biomedical literature, and there is a growing interest in facilitating a more systematic extraction of these relations from the literature, either for manual database curation initiatives or to generate large knowledge graphs of importance for drug discovery, drug repurposing, building regulatory or interaction networks or to characterize off-target interactions of drugs that might be of importance to understand better adverse drug reactions. At BioCreative VI, the ChemProt track tried to promote the development of novel systems between chemicals and genes for groups of biologically related association types (ChemProt track relation groups or CPRs). Although the obtained results did have a considerable impact in the development and evaluation of new biomedical relation extraction systems, a limitation of grouping more specific relation types into broader groups was the difficulty of directly exploiting the results for database curation efforts and biomedical knowledge graph mining application scenarios. The considerable interest in the integration of chemical and biomedical data for drug-discovery purposes, together with the ongoing curation of relationships between biological and chemical entities from scientific publications and patents due to the recent COVID-19 pandemic, motivated the DrugProt track of BioCreative VII, which proposed using more granular relation types. In order to facilitate the development of more granular relation extraction systems, large manually annotated corpora are needed. Those corpora should include high-quality manually labelled entity mentions together with exhaustive relation annotations generated by domain experts. TRACK AND CORPUS DESCRIPTION Corpus description To carry out the DrugProt track at BioCreative VII, we have released a large manually labelled corpus including annotations of mentions of chemical compounds and drugs as well as genes, proteins and miRNAs. 
Domain experts with experience in biomedical literature annotation and database curation annotated all abstracts by hand using the BRAT annotation interface. The manual labeling of chemicals and genes was done in separate steps and by different experts to avoid introducing biases during the text annotation process. The manual tagging of entity mentions of chemicals and drugs as well as genes, proteins and miRNAs was done following a carefully designed annotation process and in line with publicly released annotation guidelines. Gene/protein entity mentions were manually mapped to their corresponding biological database identifiers whenever possible and classified as either normalizable to databases (tag: GENE-Y) or non-normalizable mentions (GENE-N). Teams that participated in the DrugProt track were only provided with this classification of gene mentions and not the actual database identifier to avoid usage of external knowledge bases for producing their predictions. The corpus construction process required first annotating exhaustively all chemical and gene mentions (phase 1). Afterwards, the relation annotation phase followed (phase 2), where relationships between these two types of entities had to be labeled according to publicly available annotation guidelines. Thus, to facilitate the annotation of chemical-protein interactions, the DrugProt track organizers constructed very granular relation annotation rules described in a 33-page annotation guidelines document. These guidelines were refined during an iterative process based on the annotation of sample documents. The guidelines provided the basic details of the chemical-protein interaction annotation task and the conventions that had to be followed during the corpus construction process. They incorporated suggestions made by curators as well as observations of annotation inconsistencies encountered when comparing results from different human curators. In brief, DrugProt interactions covered direct interactions (when a physical contact existed between a chemical/drug and a gene/protein) as well as indirect regulatory interactions that alter either the function or the quantity of the gene/gene product. The aim of the iterative manual annotation cycle was to improve the quality and consistency of the guidelines. During the planning of the guidelines, some rules had to be reformulated to make them more explicit and clear and additional rules were added wherever necessary to better cover the practical annotation scenario and to be more complete. The manual annotation task basically consisted of labeling or marking manually through a customized BRAT web interface the interactions given the article abstracts as content. Figure 1 summarizes the DrugProt relation types included in the annotation guidelines. Fig. 1. Overview of the DrugProt relation type hierarchy. The corpus annotation carried out for the DrugProt track was exhaustive for all the types of interactions previously specified. This implied that mentions of other kinds of relationships between chemicals and genes (e.g. phenotypic and biological responses) were not manually labelled. Moreover, the DrugProt relations are directed in the sense that only relations of \u201cwhat a chemical does to a gene/protein\u201d (chemical \u2192 gene/protein direction) were annotated, and not vice versa. To establish an easy-to-understand relation nomenclature and avoid redundant class definitions, we reviewed several chemical repositories that included chemical \u2013 biology information. 
We revised DrugBank, the Therapeutic Targets Database (TTD) and ChEMBL, assay normalization ontologies (BAO) and previously existing formalizations for the annotation of relationships: the Biological Expression Language (BEL), curation guidelines for transcription regulation interactions (DNA-binding transcription factor \u2013 target gene interaction) and SIGNOR, a database of causal relationships between biological entities. Each of these resources inspired the definition of the subclasses DIRECT REGULATOR (e.g. DrugBank, ChEMBL, BAO and SIGNOR) and the INDIRECT REGULATOR (e.g. BEL, curation guidelines for transcription regulation interactions and SIGNOR). For example, DrugBank relationships for drugs included a total of 22 definitions, some of them overlapping with CHEMPROT subclasses (e.g. \u201cInhibitor\u201d, \u201cAntagonist\u201d, \u201cAgonist\u201d,...), some of them being regarded as highly specific for the purpose of this task (e.g. \u201cintercalation\u201d, \u201ccross-linking/alkylation\u201d) or referring to biological roles (e.g. \u201cAntibody\u201d, \u201cIncorporation into and Destabilization\u201d) and others, partially overlapping between them (e.g. \u201cBinder\u201d and \u201cLigand\u201d), that were merged into a single class. Concerning indirect regulatory aspects, the five classes of causal relationships between a subject and an object term defined by BEL (\u201cdecreases\u201d, \u201cdirectlyDecreases\u201d, \u201cincreases\u201d, \u201cdirectlyIncreases\u201d and \u201ccausesNoChange\u201d) were highly inspiring. Subclass definitions of pharmacological modes of action were defined according to the IUPHAR/BPS Guide to Pharmacology in 2016. For the DrugProt track a very granular chemical-protein relation annotation was carried out, with the aim of covering most of the relations that are of importance from a biochemical and pharmacological/biomedical perspective. Nevertheless, for the DrugProt track only a total of 13 relation types were used, keeping those that had enough training instances/examples and sufficient manual annotation consistency. The final list of relation types used for this shared task was: INDIRECT-DOWNREGULATOR, INDIRECT-UPREGULATOR, DIRECT-REGULATOR, ACTIVATOR, INHIBITOR, AGONIST, ANTAGONIST, AGONIST-ACTIVATOR, AGONIST-INHIBITOR, PRODUCT-OF, SUBSTRATE, SUBSTRATE_PRODUCT-OF or PART-OF. The DrugProt corpus was split randomly into training, development and test sets. We also included a background and a large-scale background collection of records that were automatically annotated with drugs/chemicals and genes/proteins/miRNAs using an entity tagger trained on the manual DrugProt entity mentions. The background collections were merged with the test set to be able to get team predictions also for these records. Table 1 shows a su", "what Relation types ?", "Substrate", 920.0, 929.0], ["Software contributions to academic research are relatively invisible, especially to the formalized scholarly reputation system based on bibliometrics. In this article, we introduce a gold\u2010standard dataset of software mentions from the manual annotation of 4,971 academic PDFs in biomedicine and economics. The dataset is intended to be used for automatic extraction of software mentions from PDF format research publications by supervised learning at scale. We provide a description of the dataset and an extended discussion of its creation process, including improved text conversion of academic PDFs. 
Finally, we reflect on our challenges and lessons learned during the dataset creation, in hope of encouraging more discussion about creating datasets for machine learning use.", "what Relation types ?", "Version", NaN, NaN], ["Considering recent progress in NLP, deep learning techniques and biomedical language models there is a pressing need to generate annotated resources and comparable evaluation scenarios that enable the development of advanced biomedical relation extraction systems that extract interactions between drugs/chemical entities and genes, proteins or miRNAs. Building on the results and experience of the CHEMDNER, CHEMDNER patents and ChemProt tracks, we have posed the DrugProt track at BioCreative VII. The DrugProt track focused on the evaluation of automatic systems able to extract 13 different types of drug-genes/protein relations of importance to understand gene regulatory and pharmacological mechanisms. The DrugProt track addressed regulatory associations (direct/indirect, activator/inhibitor relations), certain types of binding associations (antagonist and agonist relations) as well as metabolic associations (substrate or product relations). To promote development of novel tools and offer a comparative evaluation scenario, we have released 61,775 manually annotated gene mentions, 65,561 chemical and drug mentions and a total of 24,526 relationships manually labeled by domain experts. A total of 30 teams submitted results for the DrugProt main track, while 9 teams submitted results for the large-scale text mining subtrack that required processing of over 2.3 million records. Teams obtained very competitive results, with predictions reaching F-measures of over 0.92 for some relation types (antagonist) and F-measures across all relation types close to 0.8. INTRODUCTION Among the most relevant biological and pharmacological relation types are those that involve (a) chemical compounds and drugs as well as (b) gene products including genes, proteins, miRNAs. A variety of associations between chemicals and genes/proteins are described in the biomedical literature, and there is a growing interest in facilitating a more systematic extraction of these relations from the literature, either for manual database curation initiatives or to generate large knowledge graphs of importance for drug discovery, drug repurposing, building regulatory or interaction networks or to characterize off-target interactions of drugs that might be of importance to understand better adverse drug reactions. At BioCreative VI, the ChemProt track tried to promote the development of novel systems between chemicals and genes for groups of biologically related association types (ChemProt track relation groups or CPRs). Although the obtained results did have a considerable impact in the development and evaluation of new biomedical relation extraction systems, a limitation of grouping more specific relation types into broader groups was the difficulty of directly exploiting the results for database curation efforts and biomedical knowledge graph mining application scenarios. The considerable interest in the integration of chemical and biomedical data for drug-discovery purposes, together with the ongoing curation of relationships between biological and chemical entities from scientific publications and patents due to the recent COVID-19 pandemic, motivated the DrugProt track of BioCreative VII, which proposed using more granular relation types. 
In order to facilitate the development of more granular relation extraction systems, large manually annotated corpora are needed. Those corpora should include high-quality manually labelled entity mentions together with exhaustive relation annotations generated by domain experts. TRACK AND CORPUS DESCRIPTION Corpus description To carry out the DrugProt track at BioCreative VII, we have released a large manually labelled corpus including annotations of mentions of chemical compounds and drugs as well as genes, proteins and miRNAs. Domain experts with experience in biomedical literature annotation and database curation annotated all abstracts by hand using the BRAT annotation interface. The manual labeling of chemicals and genes was done in separate steps and by different experts to avoid introducing biases during the text annotation process. The manual tagging of entity mentions of chemicals and drugs as well as genes, proteins and miRNAs was done following a carefully designed annotation process and in line with publicly released annotation guidelines. Gene/protein entity mentions were manually mapped to their corresponding biological database identifiers whenever possible and classified as either normalizable to databases (tag: GENE-Y) or non-normalizable mentions (GENE-N). Teams that participated in the DrugProt track were only provided with this classification of gene mentions and not the actual database identifier to avoid usage of external knowledge bases for producing their predictions. The corpus construction process required first annotating exhaustively all chemical and gene mentions (phase 1). Afterwards, the relation annotation phase followed (phase 2), where relationships between these two types of entities had to be labeled according to publicly available annotation guidelines. Thus, to facilitate the annotation of chemical-protein interactions, the DrugProt track organizers constructed very granular relation annotation rules described in a 33-page annotation guidelines document. These guidelines were refined during an iterative process based on the annotation of sample documents. The guidelines provided the basic details of the chemical-protein interaction annotation task and the conventions that had to be followed during the corpus construction process. They incorporated suggestions made by curators as well as observations of annotation inconsistencies encountered when comparing results from different human curators. In brief, DrugProt interactions covered direct interactions (when a physical contact existed between a chemical/drug and a gene/protein) as well as indirect regulatory interactions that alter either the function or the quantity of the gene/gene product. The aim of the iterative manual annotation cycle was to improve the quality and consistency of the guidelines. During the planning of the guidelines, some rules had to be reformulated to make them more explicit and clear and additional rules were added wherever necessary to better cover the practical annotation scenario and to be more complete. The manual annotation task basically consisted of labeling or marking manually through a customized BRAT web interface the interactions given the article abstracts as content. Figure 1 summarizes the DrugProt relation types included in the annotation guidelines. Fig. 1. Overview of the DrugProt relation type hierarchy. The corpus annotation carried out for the DrugProt track was exhaustive for all the types of interactions previously specified. 
This implied that mentions of other kinds of relationships between chemicals and genes (e.g. phenotypic and biological responses) were not manually labelled. Moreover, the DrugProt relations are directed in the sense that only relations of \u201cwhat a chemical does to a gene/protein\u201d (chemical \u2192 gene/protein direction) were annotated, and not vice versa. To establish an easy-to-understand relation nomenclature and avoid redundant class definitions, we reviewed several chemical repositories that included chemical \u2013 biology information. We revised DrugBank, the Therapeutic Targets Database (TTD) and ChEMBL, assay normalization ontologies (BAO) and previously existing formalizations for the annotation of relationships: the Biological Expression Language (BEL), curation guidelines for transcription regulation interactions (DNA-binding transcription factor \u2013 target gene interaction) and SIGNOR, a database of causal relationships between biological entities. Each of these resources inspired the definition of the subclasses DIRECT REGULATOR (e.g. DrugBank, ChEMBL, BAO and SIGNOR) and the INDIRECT REGULATOR (e.g. BEL, curation guidelines for transcription regulation interactions and SIGNOR). For example, DrugBank relationships for drugs included a total of 22 definitions, some of them overlapping with CHEMPROT subclasses (e.g. \u201cInhibitor\u201d, \u201cAntagonist\u201d, \u201cAgonist\u201d,...), some of them being regarded as highly specific for the purpose of this task (e.g. \u201cintercalation\u201d, \u201ccross-linking/alkylation\u201d) or referring to biological roles (e.g. \u201cAntibody\u201d, \u201cIncorporation into and Destabilization\u201d) and others, partially overlapping between them (e.g. \u201cBinder\u201d and \u201cLigand\u201d), that were merged into a single class. Concerning indirect regulatory aspects, the five classes of causal relationships between a subject and an object term defined by BEL (\u201cdecreases\u201d, \u201cdirectlyDecreases\u201d, \u201cincreases\u201d, \u201cdirectlyIncreases\u201d and \u201ccausesNoChange\u201d) were highly inspiring. Subclass definitions of pharmacological modes of action were defined according to the IUPHAR/BPS Guide to Pharmacology in 2016. For the DrugProt track a very granular chemical-protein relation annotation was carried out, with the aim of covering most of the relations that are of importance from a biochemical and pharmacological/biomedical perspective. Nevertheless, for the DrugProt track only a total of 13 relation types were used, keeping those that had enough training instances/examples and sufficient manual annotation consistency. The final list of relation types used for this shared task was: INDIRECT-DOWNREGULATOR, INDIRECT-UPREGULATOR, DIRECT-REGULATOR, ACTIVATOR, INHIBITOR, AGONIST, ANTAGONIST, AGONIST-ACTIVATOR, AGONIST-INHIBITOR, PRODUCT-OF, SUBSTRATE, SUBSTRATE_PRODUCT-OF or PART-OF. The DrugProt corpus was split randomly into training, development and test sets. We also included a background and a large-scale background collection of records that were automatically annotated with drugs/chemicals and genes/proteins/miRNAs using an entity tagger trained on the manual DrugProt entity mentions. The background collections were merged with the test set to be able to get team predictions also for these records. 
Table 1 shows a su", "what Relation types ?", "Agonist", 866.0, 873.0], ["Science across all disciplines has become increasingly data-driven, leading to additional needs with respect to software for collecting, processing and analysing data. Thus, transparency about software used as part of the scientific process is crucial to understand provenance of individual research data and insights, is a prerequisite for reproducibility and can enable macro-analysis of the evolution of scientific methods over time. However, missing rigor in software citation practices renders the automated detection and disambiguation of software mentions a challenging problem. In this work, we provide a large-scale analysis of software usage and citation practices facilitated through an unprecedented knowledge graph of software mentions and affiliated metadata generated through supervised information extraction models trained on a unique gold standard corpus and applied to more than 3 million scientific articles. Our information extraction approach distinguishes different types of software and mentions, disambiguates mentions and outperforms the state-of-the-art significantly, leading to the most comprehensive corpus of 11.8 M software mentions that are described through a knowledge graph consisting of more than 300 M triples. Our analysis provides insights into the evolution of software usage and citation patterns across various fields, ranks of journals, and impact of publications. Whereas, to the best of our knowledge, this is the most comprehensive analysis of software use and citation at the time, all data and models are shared publicly to facilitate further research into scientific use and citation of software.", "what Relation types ?", "Citation", 472.0, 480.0], ["Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.", "what Relation types ?", "Version", 792.0, 799.0], ["This paper describes the first task on semantic relation extraction and classification in scientific paper abstracts at SemEval 2018. The challenge focuses on domain-specific semantic relations and includes three different subtasks. The subtasks were designed so as to compare and quantify the effect of different pre-processing steps on the relation classification results. 
We expect the task to be relevant for a broad range of researchers working on extracting specialized knowledge from domain corpora, for example but not limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants, with 158 submissions across different scenarios.", "what Relation types ?", "Result", NaN, NaN], ["Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task\u2014a result that approaches the human inter-annotator agreement (0.8875)\u2014and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system\u2019s ability to return real-time results: the average response times for each team\u2019s DNER and CID web service systems were 5.6 and 9.3 s, respectively. Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/", "what Relation types ?", "Chemical-disease relation", 303.0, 328.0], ["We present two related tasks of the BioNLP Shared Tasks 2011: Bacteria Gene Renaming (Rename) and Bacteria Gene Interactions (GI). We detail the objectives, the corpus specification, the evaluation metrics, and we summarize the participants' results. Both issued from PubMed scientific literature abstracts, the Rename task aims at extracting gene name synonyms, and the GI task aims at extracting genic interaction events, mainly about gene transcriptional regulations in bacteria.", "what Relation types ?", "Interaction", 404.0, 415.0], ["Considering recent progress in NLP, deep learning techniques and biomedical language models there is a pressing need to generate annotated resources and comparable evaluation scenarios that enable the development of advanced biomedical relation extraction systems that extract interactions between drugs/chemical entities and genes, proteins or miRNAs. 
Building on the results and experience of the CHEMDNER, CHEMDNER patents and ChemProt tracks, we have posed the DrugProt track at BioCreative VII. The DrugProt track focused on the evaluation of automatic systems able to extract 13 different types of drug-genes/protein relations of importance to understand gene regulatory and pharmacological mechanisms. The DrugProt track addressed regulatory associations (direct/indirect, activator/inhibitor relations), certain types of binding associations (antagonist and agonist relations) as well as metabolic associations (substrate or product relations). To promote development of novel tools and offer a comparative evaluation scenario, we have released 61,775 manually annotated gene mentions, 65,561 chemical and drug mentions and a total of 24,526 relationships manually labeled by domain experts. A total of 30 teams submitted results for the DrugProt main track, while 9 teams submitted results for the large-scale text mining subtrack that required processing of over 2.3 million records. Teams obtained very competitive results, with predictions reaching F-measures of over 0.92 for some relation types (antagonist) and F-measures across all relation types close to 0.8. INTRODUCTION Among the most relevant biological and pharmacological relation types are those that involve (a) chemical compounds and drugs as well as (b) gene products including genes, proteins, miRNAs. A variety of associations between chemicals and genes/proteins are described in the biomedical literature, and there is a growing interest in facilitating a more systematic extraction of these relations from the literature, either for manual database curation initiatives or to generate large knowledge graphs of importance for drug discovery, drug repurposing, building regulatory or interaction networks or to characterize off-target interactions of drugs that might be of importance to understand better adverse drug reactions. At BioCreative VI, the ChemProt track tried to promote the development of novel systems between chemicals and genes for groups of biologically related association types (ChemProt track relation groups or CPRs). Although the obtained results did have a considerable impact in the development and evaluation of new biomedical relation extraction systems, a limitation of grouping more specific relation types into broader groups was the difficulty of directly exploiting the results for database curation efforts and biomedical knowledge graph mining application scenarios. The considerable interest in the integration of chemical and biomedical data for drug-discovery purposes, together with the ongoing curation of relationships between biological and chemical entities from scientific publications and patents due to the recent COVID-19 pandemic, motivated the DrugProt track of BioCreative VII, which proposed using more granular relation types. In order to facilitate the development of more granular relation extraction systems, large manually annotated corpora are needed. Those corpora should include high-quality manually labelled entity mentions together with exhaustive relation annotations generated by domain experts. TRACK AND CORPUS DESCRIPTION Corpus description To carry out the DrugProt track at BioCreative VII, we have released a large manually labelled corpus including annotations of mentions of chemical compounds and drugs as well as genes, proteins and miRNAs. 
Domain experts with experience in biomedical literature annotation and database curation annotated all abstracts by hand using the BRAT annotation interface. The manual labeling of chemicals and genes was done in separate steps and by different experts to avoid introducing biases during the text annotation process. The manual tagging of entity mentions of chemicals and drugs as well as genes, proteins and miRNAs was done following a carefully designed annotation process and in line with publicly released annotation guidelines. Gene/protein entity mentions were manually mapped to their corresponding biological database identifiers whenever possible and classified as either normalizable to databases (tag: GENE-Y) or non-normalizable mentions (GENE-N). Teams that participated in the DrugProt track were only provided with this classification of gene mentions and not the actual database identifier to avoid usage of external knowledge bases for producing their predictions. The corpus construction process required first annotating exhaustively all chemical and gene mentions (phase 1). Afterwards, the relation annotation phase followed (phase 2), where relationships between these two types of entities had to be labeled according to publicly available annotation guidelines. Thus, to facilitate the annotation of chemical-protein interactions, the DrugProt track organizers constructed very granular relation annotation rules described in a 33-page annotation guidelines document. These guidelines were refined during an iterative process based on the annotation of sample documents. The guidelines provided the basic details of the chemical-protein interaction annotation task and the conventions that had to be followed during the corpus construction process. They incorporated suggestions made by curators as well as observations of annotation inconsistencies encountered when comparing results from different human curators. In brief, DrugProt interactions covered direct interactions (when a physical contact existed between a chemical/drug and a gene/protein) as well as indirect regulatory interactions that alter either the function or the quantity of the gene/gene product. The aim of the iterative manual annotation cycle was to improve the quality and consistency of the guidelines. During the planning of the guidelines, some rules had to be reformulated to make them more explicit and clear and additional rules were added wherever necessary to better cover the practical annotation scenario and to be more complete. The manual annotation task basically consisted of labeling or marking manually through a customized BRAT web interface the interactions given the article abstracts as content. Figure 1 summarizes the DrugProt relation types included in the annotation guidelines. Fig. 1. Overview of the DrugProt relation type hierarchy. The corpus annotation carried out for the DrugProt track was exhaustive for all the types of interactions previously specified. This implied that mentions of other kinds of relationships between chemicals and genes (e.g. phenotypic and biological responses) were not manually labelled. Moreover, the DrugProt relations are directed in the sense that only relations of \u201cwhat a chemical does to a gene/protein\u201d (chemical \u2192 gene/protein direction) were annotated, and not vice versa. To establish an easy-to-understand relation nomenclature and avoid redundant class definitions, we reviewed several chemical repositories that included chemical \u2013 biology information. 
We revised DrugBank, the Therapeutic Targets Database (TTD) and ChEMBL, assay normalization ontologies (BAO) and previously existing formalizations for the annotation of relationships: the Biological Expression Language (BEL), curation guidelines for transcription regulation interactions (DNA-binding transcription factor \u2013 target gene interaction) and SIGNOR, a database of causal relationships between biological entities. Each of these resources inspired the definition of the subclasses DIRECT REGULATOR (e.g. DrugBank, ChEMBL, BAO and SIGNOR) and the INDIRECT REGULATOR (e.g. BEL, curation guidelines for transcription regulation interactions and SIGNOR). For example, DrugBank relationships for drugs included a total of 22 definitions, some of them overlapping with CHEMPROT subclasses (e.g. \u201cInhibitor\u201d, \u201cAntagonist\u201d, \u201cAgonist\u201d,...), some of them being regarded as highly specific for the purpose of this task (e.g. \u201cintercalation\u201d, \u201ccross-linking/alkylation\u201d) or referring to biological roles (e.g. \u201cAntibody\u201d, \u201cIncorporation into and Destabilization\u201d) and others, partially overlapping between them (e.g. \u201cBinder\u201d and \u201cLigand\u201d), that were merged into a single class. Concerning indirect regulatory aspects, the five classes of causal relationships between a subject and an object term defined by BEL (\u201cdecreases\u201d, \u201cdirectlyDecreases\u201d, \u201cincreases\u201d, \u201cdirectlyIncreases\u201d and \u201ccausesNoChange\u201d) were highly inspiring. Subclass definitions of pharmacological modes of action were defined according to the IUPHAR/BPS Guide to Pharmacology in 2016. For the DrugProt track a very granular chemical-protein relation annotation was carried out, with the aim of covering most of the relations that are of importance from a biochemical and pharmacological/biomedical perspective. Nevertheless, for the DrugProt track only a total of 13 relation types were used, keeping those that had enough training instances/examples and sufficient manual annotation consistency. The final list of relation types used for this shared task was: INDIRECT-DOWNREGULATOR, INDIRECT-UPREGULATOR, DIRECT-REGULATOR, ACTIVATOR, INHIBITOR, AGONIST, ANTAGONIST, AGONIST-ACTIVATOR, AGONIST-INHIBITOR, PRODUCT-OF, SUBSTRATE, SUBSTRATE_PRODUCT-OF or PART-OF. The DrugProt corpus was split randomly into training, development and test sets. We also included a background and a large-scale background collection of records that were automatically annotated with drugs/chemicals and genes/proteins/miRNAs using an entity tagger trained on the manual DrugProt entity mentions. The background collections were merged with the test set to be able to get team predictions also for these records. Table 1 shows a su", "what Relation types ?", "Direct Regulator", 7803.0, 7819.0], ["We present two related tasks of the BioNLP Shared Tasks 2011: Bacteria Gene Renaming (Rename) and Bacteria Gene Interactions (GI). We detail the objectives, the corpus specification, the evaluation metrics, and we summarize the participants' results. 
Both issued from PubMed scientific literature abstracts, the Rename task aims at extracting gene name synonyms, and the GI task aims at extracting genic interaction events, mainly about gene transcriptional regulations in bacteria.", "what Relation types ?", "Rename", 86.0, 92.0], ["Based on the Environmental Kuznets Curve (EKC) hypothesis, this paper uses panel cointegration techniques to investigate the short- and long-run relationship between CO2 emissions, gross domestic product (GDP), renewable energy consumption and international trade for a panel of 24 sub-Saharan African countries over the period 1980\u20132010. Short-run Granger causality results reveal that there is a bidirectional causality between emissions and economic growth; bidirectional causality between emissions and real exports; unidirectional causality from real imports to emissions; and unidirectional causality runs from trade (exports or imports) to renewable energy consumption. There is an indirect short-run causality running from emissions to renewable energy and an indirect short-run causality from GDP to renewable energy. In the long-run, the error correction term is statistically significant for emissions, renewable energy consumption and trade. The long-run estimates suggest that the inverted U-shaped EKC hypothesis is not supported for these countries; exports have a positive impact on CO2 emissions, whereas imports have a negative impact on CO2 emissions. As a policy recommendation, sub-Saharan African countries should expand their trade exchanges particularly with developed countries and try to maximize their benefit from technology transfer occurring when importing capital goods as this may increase their renewable energy consumption and reduce CO2 emissions.", "what Type of data ?", "Panel", 102.0, 107.0], ["The present study examines whether the Race to the Bottom and Revised EKC scenarios presented by Dasgupta and others (2002) are, with regard to the analytical framework of the Environmental Kuznets Curve (EKC), applicable in Asia to representative environmental indices, such as sulphur emissions and carbon emissions. To carry out this study, a generalized method of moments (GMM) estimation was made, using panel data of 19 economies for the period 1950-2009. The main findings of the analysis on the validity of the EKC indicate that sulphur emissions follow the expected inverted U-shape pattern, while carbon emissions tend to increase in line with per capita income in the observed range. As for the Race to the Bottom and Revised EKC scenarios, the latter was verified in sulphur emissions, as their EKC trajectories represent a linkage of the later development of the economy with the lower level of emissions, while the former was present in neither sulphur nor carbon emissions.", "what Type of data ?", "Panel", 409.0, 414.0], ["This paper investigates the relationship between CO2 emission, real GDP, energy consumption, urbanization and trade openness for 10 selected Central and Eastern European Countries (CEECs), including Albania, Bulgaria, Croatia, Czech Republic, Macedonia, Hungary, Poland, Romania, Slovak Republic and Slovenia for the period of 1991\u20132011. The results show that the environmental Kuznets curve (EKC) hypothesis holds for these countries. The fully modified ordinary least squares (FMOLS) results reveal that a 1% increase in energy consumption leads to a 1.0863% increase in CO2 emissions. 
Results for the existence and direction of the panel Vector Error Correction Model (VECM) Granger causality method show that there is a bidirectional causal relationship between CO2 emissions\u2013real GDP and energy consumption\u2013real GDP as well.", "what Type of data ?", "Panel", 636.0, 641.0], ["This paper examines the relationship between per capita income and a wide range of environmental indicators using cross-country panel sets. The manner in which this has been done overcomes several of the weaknesses associated with the estimation of environmental Kuznets curves (EKCs), outlined by Stern et al. (1996). Results suggest that meaningful EKCs exist only for local air pollutants whilst indicators with a more global, or indirect, impact either increase monotonically with income or else have predicted turning points at high per capita income levels with large standard errors \u2013 unless they have been subjected to a multilateral policy initiative. Two other findings are also made: that concentration of local pollutants in urban areas peaks at a lower per capita income level than total emissions per capita; and that transport-generated local air pollutants peak at a higher per capita income level than total emissions per capita. Given these findings, suggestions are made regarding the necessary future direction of environmental policy.", "what Type of data ?", "Panel", 128.0, 133.0], ["This study aims to examine the relationship between income and environmental degradation in West Africa and ascertain the validity of the EKC hypothesis in the region. The study adopted a panel data approach for fifteen West African countries for the period 1980-2012. The available results from our estimation procedure confirmed the EKC theory in the region. At early development stages, pollution rises with income and, after reaching a turning point, pollution dwindles with increasing income; as indicated by the significant inverse relation between income and environmental degradation. Consequently, literacy level and sound institutional arrangement were found to contribute significantly to mitigating the extent of environmental degradation. Among the notable recommendations is the need for an awareness campaign on environment abatement and adaptation strategies, strengthening of institutions to caution production and dumping of pollution-emitting commodities and encourage adoption of cleaner technologies.", "what Type of data ?", "Panel", 184.0, 189.0], ["Semantic embedding of knowledge graphs has been widely studied and used for prediction and statistical analysis tasks across various domains such as Natural Language Processing and the Semantic Web. However, less attention has been paid to developing robust methods for embedding OWL (Web Ontology Language) ontologies, which contain richer semantic information than plain knowledge graphs, and have been widely adopted in domains such as bioinformatics. In this paper, we propose a random walk and word embedding based ontology embedding method named OWL2Vec*, which encodes the semantics of an OWL ontology by taking into account its graph structure, lexical information and logical constructors. Our empirical evaluation with three real-world datasets suggests that OWL2Vec* benefits from these three different aspects of an ontology in class membership prediction and class subsumption prediction tasks. 
Furthermore, OWL2Vec* often significantly outperforms the state-of-the-art methods in our experiments.", "what Type of data ?", "OWL", 289.0, 292.0], ["This article applies the dynamic panel generalized method of moments technique to reexamine the environmental Kuznets curve (EKC) hypothesis for carbon dioxide (CO_2) emissions and asks two critical questions: "Does the global data set fit the EKC hypothesis?" and "Do different income levels or regions influence the results of the EKC?" We find evidence of the EKC hypothesis for CO_2 emissions in a global data set, middle-income, and American and European countries, but not in other income levels and regions. Thus, the hypothesis that one size fits all cannot be supported for the EKC, and even more importantly, results, robustness checking, and implications emerge. Copyright 2009 Agricultural and Applied Economics Association", "what Type of data ?", "Panel", 33.0, 38.0], ["Purpose \u2013 The purpose of this paper is to examine the relationship among environmental pollution, economic growth and energy consumption per capita in the case of Pakistan. The per capita carbon dioxide (CO2) emission is used as the environmental indicator, the commercial energy use per capita as the energy consumption indicator, and the per capita gross domestic product (GDP) as the economic indicator. Design/methodology/approach \u2013 The investigation is made on the basis of the environmental Kuznets curve (EKC), using time series data from 1971 to 2006, by applying different econometric tools like the ADF unit root, Johansen co\u2010integration, VECM and Granger causality tests. Findings \u2013 The Granger causality test shows that there is a long term relationship between these three indicators, with bidirectional causality between per capita CO2 emission and per capita energy consumption. A monotonically increasing curve between GDP and CO2 emission has been found for the sample period, rejecting the EKC relationship, i...", "what Type of data ?", "Time series", 524.0, 535.0], ["In the last few years, several studies have found an inverted-U relationship between per capita income and environmental degradation. This relationship, known as the environmental Kuznets curve (EKC), suggests that environmental degradation increases in the early stages of growth, but it eventually decreases as income exceeds a threshold level. However, this paper investigates the relationship between per capita CO2 emission, economic growth and trade liberalization based on econometric techniques of unit root tests, co-integration and a panel data set during the period 1960-1996 for BRICS countries. Data properties were analyzed to determine their stationarity using the LLC, IPS, ADF and PP unit root tests, which indicated that the series are I(1). We find a cointegration relationship between per capita CO2 emission, economic growth and trade liberalization by applying the Kao panel cointegration test. The evidence indicates that in the long-run trade liberalization has a positive significant impact on CO2 emissions and the impact of trade liberalization on emissions growth depends on the level of income. Our findings suggest that there is a quadratic relationship between real GDP and CO2 emissions for the region as a whole. The estimated long-run coefficients of real GDP and its square satisfy the EKC hypothesis in all of the studied countries. Our estimation shows that the inflection point, or optimal point, of real GDP per capita is about 5269.4 dollars. 
The results show that on average, sample countries are on the positive side of the inverted U curve. The turning points are very low in some cases and very high in other cases, hence providing poor evidence in support of the EKC hypothesis. Thus, our findings suggest that all BRICS countries need to sacrifice economic growth to decrease their emission levels.", "what Type of data ?", "Panel", 541.0, 546.0], ["Based on the Environmental Kuznets Curve theory, the authors choose provincial panel data of China in 1990\u20132007 and adopt panel unit root and co-integration testing methods to study whether there is an Environmental Kuznets Curve for China\u2019s carbon emissions. The research results show that carbon emissions per capita of the eastern region and the central region of China fit into the Environmental Kuznets Curve, but that of the western region does not. On this basis, the authors carry out scenario analysis on the occurrence time of the inflection point of carbon emissions per capita of different regions, and describe a specific time path.", "what Type of data ?", "Panel", 88.0, 93.0], ["Previous studies show that the environmental quality and economic growth can be represented by the inverted U curve called the Environmental Kuznets Curve (EKC). In this study, we conduct empirical analyses on detecting the existence of the EKC using five common pollutant emissions (i.e. CO2, SO2, BOD, SPM10, and GHG) as proxies for environmental quality. The data span the years 1961 to 2009 and cover 40 countries. We seek to investigate if the EKC hypothesis holds in two groups of economies, i.e. developed versus developing economies. Applying a panel data approach, our results show that the EKC does not hold in all countries. We also detect the existence of a U shape and an increasing trend in other cases. The results reveal that CO2 and SPM10 are good data to proxy for environmental pollutants and they can be explained well by GDP. Also, it is observed that the developed countries have higher turning points than the developing countries. Higher economic growth may lead to different impacts on environmental quality in different economies.", "what Type of data ?", "Panel", 550.0, 555.0], ["Learning low-dimensional embeddings of knowledge graphs is a powerful approach used to predict unobserved or missing edges between entities. However, an open challenge in this area is developing techniques that can go beyond simple edge prediction and handle more complex logical queries, which might involve multiple unobserved edges, entities, and variables. For instance, given an incomplete biological knowledge graph, we might want to predict "what drugs are likely to target proteins involved with both diseases X and Y?" -- a query that requires reasoning about all possible proteins that might interact with diseases X and Y. Here we introduce a framework to efficiently make predictions about conjunctive logical queries -- a flexible but tractable subset of first-order logic -- on incomplete knowledge graphs. In our approach, we embed graph nodes in a low-dimensional space and represent logical operators as learned geometric operations (e.g., translation, rotation) in this embedding space. By performing logical operations within a low-dimensional embedding space, our approach achieves a time complexity that is linear in the number of query variables, compared to the exponential complexity required by a naive enumeration-based approach. 
We demonstrate the utility of this framework in two application studies on real-world datasets with millions of relations: predicting logical relationships in a network of drug-gene-disease interactions and in a graph-based representation of social interactions derived from a popular web forum.", "what Type of data ?", "Graph", 416.0, 421.0], ["It has been forecasted by many economists that in the next couple of decades the BRICS economies are going to experience unprecedented economic growth. This massive economic growth would definitely have a detrimental impact on the environment since these economies, like others, would extract their environmental and natural resources on a larger scale in the process of their economic growth. Therefore, maintaining environmental quality while growing has become a major challenge for these economies. However, the proponents of the Environmental Kuznets Curve (EKC) Hypothesis \u2013 an inverted U-shape relationship between income and emission per capita \u2013 suggest that BRICS economies need not bother too much about environmental quality while growing because growth would eventually take care of the environment once a certain level of per capita income is achieved. Against this backdrop, the present study makes an attempt to estimate an EKC-type relationship, if any, between income and emission in the context of the BRICS countries for the period 1997 to 2011. Therefore, the study first adopts a fixed effect (FE) panel data model to control time-constant country-specific effects, and then uses a Generalized Method of Moments (GMM) approach for dynamic panel data to address endogeneity of the income variable and dynamism in emission per capita. Apart from income, we also include variables related to financial sector development and energy utilization to explain emission. The fixed effect model shows a significant EKC-type relation between income and emission supporting the previous literature. However, GMM estimates for the dynamic panel model show the relationship between income and emission is actually U-shaped with the turning point being out of sample. This out-of-sample turning point indicates that emission has been growing monotonically with growth in income. Factors like net energy imports and share of industrial output in GDP are found to be significant and to have a detrimental impact on the environment in the dynamic panel model. However, these variables are found to be insignificant in the FE model. Capital account convertibility shows a significant and negative impact on the environment irrespective of models used. The monotonically increasing relationship between income and emission suggests the BRICS economies must adopt an efficiency-oriented action plan so that they can grow without putting much pressure on the environment. These findings can have important policy implications as BRICS countries are mainly depending on these factors for their growth but at the same time they can pose a serious threat to the environment.", "what Type of data ?", "Panel", 1102.0, 1107.0], ["Purpose \u2013 The purpose of this paper is to analyse the implication of trade on carbon emissions in a panel of eight highly trading Southeast and East Asian countries, namely, China, Indonesia, South Korea, Malaysia, Hong Kong, The Philippines, Singapore and Thailand. Design/methodology/approach \u2013 The analysis relies on the standard quadratic environmental Kuznets curve (EKC) extended to include energy consumption and international trade. 
A battery of panel unit root and co-integration tests is applied to establish the variables\u2019 stochastic properties and their long-run relations. Then, the specified EKC is estimated using the panel dynamic ordinary least square (OLS) estimation technique. Findings \u2013 The panel co-integration statistics verifies the validity of the extended EKC for the countries under study. Estimation of the long-run EKC via the dynamic OLS estimation method reveals the environmentally degrading effects of trade in these countries, especially in ASEAN and plus South Korea and Hong Kong. Pra...", "what Type of data ?", "Panel", 100.0, 105.0], ["The aim of this paper is to investigate the existence of environmental Kuznets curve (EKC) in an open economy like Tunisia using annual time series data for the period of 1971-2010. The ARDL bounds testing approach to cointegration is applied to test long run relationship in the presence of structural breaks and vector error correction model (VECM) to detect the causality among the variables. The robustness of causality analysis has been tested by applying the innovative accounting approach (IAA). The findings of this paper confirmed the long run relationship between economic growth, energy consumption, trade openness and CO2 emissions in Tunisian Economy. The results also indicated the existence of EKC confirmed by the VECM and IAA approaches. The study has significant contribution for policy implications to curtail energy pollutants by implementing environment friendly regulations to sustain the economic development in Tunisia.", "what Type of data ?", "Time series", 136.0, 147.0], ["This article investigates the Environmental Kuznets Curves (EKC) for CO2 emissions in a panel of 109 countries during the period 1959 to 2001. The length of the series makes the application of a heterogeneous estimator suitable from an econometric point of view. The results, based on the hierarchical Bayes estimator, show that different EKC dynamics are associated with the different sub-samples of countries considered. On average, more industrialized countries show evidence of EKC in quadratic specifications, which nevertheless are probably evolving into an N-shape based on their cubic specification. Nevertheless, it is worth noting that the EU, and not the Umbrella Group led by US, has been driving currently observed EKC-like shapes. The latter is associated to monotonic income\u2013CO2 dynamics. The EU shows a clear EKC shape. Evidence for less-developed countries consistently shows that CO2 emissions rise positively with income, though there are some signs of an EKC. Analyses of future performance, nevertheless, favour quadratic specifications, thus supporting EKC evidence for wealthier countries and non-EKC shapes for industrializing regions.", "what Type of data ?", "Panel", 88.0, 93.0], ["A fused hexacyclic electron acceptor, IHIC, based on strong electron\u2010donating group dithienocyclopentathieno[3,2\u2010b]thiophene flanked by strong electron\u2010withdrawing group 1,1\u2010dicyanomethylene\u20103\u2010indanone, is designed, synthesized, and applied in semitransparent organic solar cells (ST\u2010OSCs). IHIC exhibits strong near\u2010infrared absorption with extinction coefficients of up to 1.6 \u00d7 105m\u22121 cm\u22121, a narrow optical bandgap of 1.38 eV, and a high electron mobility of 2.4 \u00d7 10\u22123 cm2 V\u22121 s\u22121. 
The ST\u2010OSCs based on blends of a narrow\u2010bandgap polymer donor PTB7\u2010Th and narrow\u2010bandgap IHIC acceptor exhibit a champion power conversion efficiency of 9.77% with an average visible transmittance of 36% and excellent device stability; this efficiency is much higher than any single\u2010junction and tandem ST\u2010OSCs reported in the literature.", "what Acceptor ?", "IHIC", 38.0, 42.0], ["Molecular acceptors are promising alternatives to fullerenes (e.g., PC61/71BM) in the fabrication of high-efficiency bulk-heterojunction (BHJ) solar cells. While solution-processed polymer\u2013fullerene BHJ devices have recently met the 10% efficiency threshold, molecular acceptors have yet to prove comparably efficient with polymer donors. At this point in time, it is important to forge a better understanding of the design parameters that directly impact small-molecule (SM) acceptor performance in BHJ solar cells. In this report, we show that 2-(benzo[c][1,2,5]thiadiazol-4-ylmethylene)malononitrile (BM)-terminated SM acceptors can achieve efficiencies as high as 5.3% in BHJ solar cells with the polymer donor PCE10. Through systematic device optimization and characterization studies, we find that the nonfullerene analogues (FBM, CBM, and CDTBM) all perform comparably well, independent of the molecular structure and electronics of the \u03c0-bridge that links the two electron-deficient BM end groups. With estimated...", "what Acceptor ?", "CDTBM", 846.0, 851.0], ["A new acceptor\u2013donor\u2013acceptor-structured nonfullerene acceptor, 2,2\u2032-((2Z,2\u2032Z)-(((4,4,9,9-tetrakis(4-hexylphenyl)-4,9-dihydro-s-indaceno[1,2-b:5,6-b\u2032]dithiophene-2,7-diyl)bis(4-((2-ethylhexyl)oxy)thiophene-4,3-diyl))bis(methanylylidene))bis(5,6-difluoro-3-oxo-2,3-dihydro-1H-indene-2,1-diylidene))dimalononitrile (i-IEICO-4F), is designed and synthesized via main-chain substituting position modification of 2-(5,6-difluoro-3-oxo-2,3-dihydro-1H-indene-2,1-diylidene)dimalononitrile. Unlike its planar analogue IEICO-4F with strong absorption in the near-infrared region, i-IEICO-4F exhibits a twisted main-chain configuration, resulting in 164 nm blue shifts and leading to complementary absorption with the wide-bandgap polymer (J52). A high solution molar extinction coefficient of 2.41 \u00d7 105 M\u20131 cm\u20131, and sufficiently high energy of charge-transfer excitons of 1.15 eV in a J52:i-IEICO-4F blend were observed, in comparison with those of 2.26 \u00d7 105 M\u20131 cm\u20131 and 1.08 eV for IEICO-4F. A power conversion efficiency of...", "what Acceptor ?", "i-IEICO-4F", 314.0, 324.0], ["With an indenoindene core, a new thieno[3,4\u2010b]thiophene\u2010based small\u2010molecule electron acceptor, 2,2\u2032\u2010((2Z,2\u2032Z)\u2010((6,6\u2032\u2010(5,5,10,10\u2010tetrakis(2\u2010ethylhexyl)\u20105,10\u2010dihydroindeno[2,1\u2010a]indene\u20102,7\u2010diyl)bis(2\u2010octylthieno[3,4\u2010b]thiophene\u20106,4\u2010diyl))bis(methanylylidene))bis(5,6\u2010difluoro\u20103\u2010oxo\u20102,3\u2010dihydro\u20101H\u2010indene\u20102,1\u2010diylidene))dimalononitrile (NITI), is successfully designed and synthesized. Compared with 12\u2010\u03c0\u2010electron fluorene, a carbon\u2010bridged biphenylene with an axial symmetry, indenoindene, a carbon\u2010bridged E\u2010stilbene with a centrosymmetry, shows elongated \u03c0\u2010conjugation with 14 \u03c0\u2010electrons and one more sp3 carbon bridge, which may increase the tunability of electronic structure and film morphology. 
Despite its twisted molecular framework, NITI shows a low optical bandgap of 1.49 eV in thin film and a high molar extinction coefficient of 1.90 \u00d7 105m\u22121 cm\u22121 in solution. By matching NITI with a large\u2010bandgap polymer donor, an extraordinary power conversion efficiency of 12.74% is achieved, which is among the best performance so far reported for fullerene\u2010free organic photovoltaics and is inspiring for the design of new electron acceptors.", "what Acceptor ?", "NITI", 335.0, 339.0], ["Low-bandgap polymers/molecules are an interesting family of semiconductor materials, and have enabled many recent exciting breakthroughs in the field of organic electronics, especially for organic photovoltaics (OPVs). Here, such a low-bandgap (1.43 eV) non-fullerene electron acceptor (BT-IC) bearing a fused 7-heterocyclic ring with absorption edge extending to the near-infrared (NIR) region was specially designed and synthesized. Benefitted from its NIR light harvesting, high performance OPVs were fabricated with medium bandgap polymers (J61 and J71) as donors, showing power conversion efficiencies of 9.6% with J61 and 10.5% with J71 along with extremely low energy loss (0.56 eV for J61 and 0.53 eV for J71). Interestingly, femtosecond transient absorption spectroscopy studies on both systems show that efficient charge generation was observed despite the fact that the highest occupied molecular orbital (HOMO)\u2013HOMO offset (\u0394EH) in the blends was as low as 0.10 eV, suggesting that such a small \u0394EH is not a crucial limitation in realizing high performance of NIR non-fullerene based OPVs. Our results indicated that BT-IC is an interesting NIR non-fullerene acceptor with great potential application in tandem/multi-junction, semitransparent, and ternary blend solar cells.", "what Acceptor ?", "BT-IC", 287.0, 292.0], ["A side\u2010chain conjugation strategy in the design of nonfullerene electron acceptors is proposed, with the design and synthesis of a side\u2010chain\u2010conjugated acceptor (ITIC2) based on a 4,8\u2010bis(5\u2010(2\u2010ethylhexyl)thiophen\u20102\u2010yl)benzo[1,2\u2010b:4,5\u2010b\u2032]di(cyclopenta\u2010dithiophene) electron\u2010donating core and 1,1\u2010dicyanomethylene\u20103\u2010indanone electron\u2010withdrawing end groups. ITIC2 with the conjugated side chains exhibits an absorption peak at 714 nm, which redshifts 12 nm relative to ITIC1. The absorption extinction coefficient of ITIC2 is 2.7 \u00d7 105m\u22121 cm\u22121, higher than that of ITIC1 (1.5 \u00d7 105m\u22121 cm\u22121). ITIC2 exhibits slightly higher highest occupied molecular orbital (HOMO) (\u22125.43 eV) and lowest unoccupied molecular orbital (LUMO) (\u22123.80 eV) energy levels relative to ITIC1 (HOMO: \u22125.48 eV; LUMO: \u22123.84 eV), and higher electron mobility (1.3 \u00d7 10\u22123 cm2 V\u22121 s\u22121) than that of ITIC1 (9.6 \u00d7 10\u22124 cm2 V\u22121 s\u22121). The power conversion efficiency of ITIC2\u2010based organic solar cells is 11.0%, much higher than that of ITIC1\u2010based control devices (8.54%). 
Our results demonstrate that side\u2010chain conjugation can tune energy levels, enhance absorption, and electron mobility, and finally enhance photovoltaic performance of nonfullerene acceptors.", "what Acceptor ?", "ITIC2", 163.0, 168.0], ["A simple small molecule acceptor named DICTF, with fluorene as the central block and 2-(2,3-dihydro-3-oxo-1H-inden-1-ylidene)propanedinitrile as the end-capping groups, has been designed for fullerene-free organic solar cells. The new molecule was synthesized from widely available and inexpensive commercial materials in only three steps with a high overall yield of \u223c60%. Fullerene-free organic solar cells with DICTF as the acceptor material provide a high PCE of 7.93%.", "what Acceptor ?", "DICTF", 39.0, 44.0], ["A novel small molecule, FBR, bearing 3-ethylrhodanine flanking groups was synthesized as a nonfullerene electron acceptor for solution-processed bulk heterojunction organic photovoltaics (OPV). A straightforward synthesis route was employed, offering the potential for large scale preparation of this material. Inverted OPV devices employing poly(3-hexylthiophene) (P3HT) as the donor polymer and FBR as the acceptor gave power conversion efficiencies (PCE) up to 4.1%. Transient and steady state optical spectroscopies indicated efficient, ultrafast charge generation and efficient photocurrent generation from both donor and acceptor. Ultrafast transient absorption spectroscopy was used to investigate polaron generation efficiency as well as recombination dynamics. It was determined that the P3HT:FBR blend is highly intermixed, leading to increased charge generation relative to comparative devices with P3HT:PC60BM, but also faster recombination due to a nonideal morphology in which, in contrast to P3HT:PC60BM devices, the acceptor does not aggregate enough to create appropriate percolation pathways that prevent fast nongeminate recombination. Despite this nonoptimal morphology the P3HT:FBR devices exhibit better performance than P3HT:PC60BM devices, used as control, demonstrating that this acceptor shows great promise for further optimization.", "what Acceptor ?", "FBR", 24.0, 27.0], ["Naphtho[1,2\u2010b:5,6\u2010b\u2032]dithiophene is extended to a fused octacyclic building block, which is end capped by strong electron\u2010withdrawing 2\u2010(5,6\u2010difluoro\u20103\u2010oxo\u20102,3\u2010dihydro\u20101H\u2010inden\u20101\u2010ylidene)malononitrile to yield a fused\u2010ring electron acceptor (IOIC2) for organic solar cells (OSCs). Relative to naphthalene\u2010based IHIC2, naphthodithiophene\u2010based IOIC2 with a larger \u03c0\u2010conjugation and a stronger electron\u2010donating core shows a higher lowest unoccupied molecular orbital energy level (IOIC2: \u22123.78 eV vs IHIC2: \u22123.86 eV), broader absorption with a smaller optical bandgap (IOIC2: 1.55 eV vs IHIC2: 1.66 eV), and a higher electron mobility (IOIC2: 1.0 \u00d7 10\u22123 cm2 V\u22121 s\u22121 vs IHIC2: 5.0 \u00d7 10\u22124 cm2 V\u22121 s\u22121). Thus, IOIC2\u2010based OSCs show higher values in open\u2010circuit voltage, short\u2010circuit current density, fill factor, and thereby much higher power conversion efficiency (PCE) values than those of the IHIC2\u2010based counterpart. In particular, as\u2010cast OSCs based on FTAZ: IOIC2 yield PCEs of up to 11.2%, higher than that of the control devices based on FTAZ: IHIC2 (7.45%). 
Furthermore, by using 0.2% 1,8\u2010diiodooctane as the processing additive, a PCE of 12.3% is achieved from the FTAZ:IOIC2\u2010based devices, higher than that of the FTAZ:IHIC2\u2010based devices (7.31%). These results indicate that incorporating extended conjugation into the electron\u2010donating fused\u2010ring units in nonfullerene acceptors is a promising strategy for designing high\u2010performance electron acceptors.", "what Acceptor ?", "IOIC2", 242.0, 247.0], ["Molecular acceptors are promising alternatives to fullerenes (e.g., PC61/71BM) in the fabrication of high-efficiency bulk-heterojunction (BHJ) solar cells. While solution-processed polymer\u2013fullerene BHJ devices have recently met the 10% efficiency threshold, molecular acceptors have yet to prove comparably efficient with polymer donors. At this point in time, it is important to forge a better understanding of the design parameters that directly impact small-molecule (SM) acceptor performance in BHJ solar cells. In this report, we show that 2-(benzo[c][1,2,5]thiadiazol-4-ylmethylene)malononitrile (BM)-terminated SM acceptors can achieve efficiencies as high as 5.3% in BHJ solar cells with the polymer donor PCE10. Through systematic device optimization and characterization studies, we find that the nonfullerene analogues (FBM, CBM, and CDTBM) all perform comparably well, independent of the molecular structure and electronics of the \u03c0-bridge that links the two electron-deficient BM end groups. With estimated...", "what Acceptor ?", "FBM", 832.0, 835.0], ["Molecular acceptors are promising alternatives to fullerenes (e.g., PC61/71BM) in the fabrication of high-efficiency bulk-heterojunction (BHJ) solar cells. While solution-processed polymer\u2013fullerene BHJ devices have recently met the 10% efficiency threshold, molecular acceptors have yet to prove comparably efficient with polymer donors. At this point in time, it is important to forge a better understanding of the design parameters that directly impact small-molecule (SM) acceptor performance in BHJ solar cells. In this report, we show that 2-(benzo[c][1,2,5]thiadiazol-4-ylmethylene)malononitrile (BM)-terminated SM acceptors can achieve efficiencies as high as 5.3% in BHJ solar cells with the polymer donor PCE10. Through systematic device optimization and characterization studies, we find that the nonfullerene analogues (FBM, CBM, and CDTBM) all perform comparably well, independent of the molecular structure and electronics of the \u03c0-bridge that links the two electron-deficient BM end groups. With estimated...", "what Acceptor ?", "CBM", 837.0, 840.0], ["Three novel non-fullerene small molecular acceptors ITOIC, ITOIC-F, and ITOIC-2F were designed and synthesized with easy chemistry. The concept of supramolecular chemistry was successfully used in the molecular design, which includes noncovalently conformational locking (via intrasupramolecular interaction) to enhance the planarity of backbone and electrostatic interaction (intersupramolecular interaction) to enhance the \u03c0\u2013\u03c0 stacking of terminal groups. Fluorination can further strengthen the intersupramolecular electrostatic interaction of terminal groups. As expected, the designed acceptors exhibited excellent device performance when blended with polymer donor PBDB-T. 
In comparison with the parent acceptor molecule DC-IDT2T reported in the literature with a power conversion efficiency (PCE) of 3.93%, ITOIC with a planar structure exhibited a PCE of 8.87% and ITOIC-2F with a planar structure and enhanced electrostatic interaction showed a quite impressive PCE of 12.17%. Our result demonstrates the import...", "what Acceptor ?", "ITOIC-2F", 72.0, 80.0], ["In this work, we present a non-fullerene electron acceptor bearing a fused five-heterocyclic ring containing selenium atoms, denoted as IDSe-T-IC, for fullerene-free polymer solar cells (PSCs).
", "what Acceptor ?", "IDSe-T-IC", 139.0, 148.0], ["Naphtho[1,2\u2010b:5,6\u2010b\u2032]dithiophene is extended to a fused octacyclic building block, which is end capped by strong electron\u2010withdrawing 2\u2010(5,6\u2010difluoro\u20103\u2010oxo\u20102,3\u2010dihydro\u20101H\u2010inden\u20101\u2010ylidene)malononitrile to yield a fused\u2010ring electron acceptor (IOIC2) for organic solar cells (OSCs). Relative to naphthalene\u2010based IHIC2, naphthodithiophene\u2010based IOIC2 with a larger \u03c0\u2010conjugation and a stronger electron\u2010donating core shows a higher lowest unoccupied molecular orbital energy level (IOIC2: \u22123.78 eV vs IHIC2: \u22123.86 eV), broader absorption with a smaller optical bandgap (IOIC2: 1.55 eV vs IHIC2: 1.66 eV), and a higher electron mobility (IOIC2: 1.0 \u00d7 10\u22123 cm2 V\u22121 s\u22121 vs IHIC2: 5.0 \u00d7 10\u22124 cm2 V\u22121 s\u22121). Thus, IOIC2\u2010based OSCs show higher values in open\u2010circuit voltage, short\u2010circuit current density, fill factor, and thereby much higher power conversion efficiency (PCE) values than those of the IHIC2\u2010based counterpart. In particular, as\u2010cast OSCs based on FTAZ: IOIC2 yield PCEs of up to 11.2%, higher than that of the control devices based on FTAZ: IHIC2 (7.45%). Furthermore, by using 0.2% 1,8\u2010diiodooctane as the processing additive, a PCE of 12.3% is achieved from the FTAZ:IOIC2\u2010based devices, higher than that of the FTAZ:IHIC2\u2010based devices (7.31%). These results indicate that incorporating extended conjugation into the electron\u2010donating fused\u2010ring units in nonfullerene acceptors is a promising strategy for designing high\u2010performance electron acceptors.", "what Donor ?", "FTAZ", 956.0, 960.0], ["A new acceptor\u2013donor\u2013acceptor-structured nonfullerene acceptor, 2,2\u2032-((2Z,2\u2032Z)-(((4,4,9,9-tetrakis(4-hexylphenyl)-4,9-dihydro-s-indaceno[1,2-b:5,6-b\u2032]dithiophene-2,7-diyl)bis(4-((2-ethylhexyl)oxy)thiophene-4,3-diyl))bis(methanylylidene))bis(5,6-difluoro-3-oxo-2,3-dihydro-1H-indene-2,1-diylidene))dimalononitrile (i-IEICO-4F), is designed and synthesized via main-chain substituting position modification of 2-(5,6-difluoro-3-oxo-2,3-dihydro-1H-indene-2,1-diylidene)dimalononitrile. Unlike its planar analogue IEICO-4F with strong absorption in the near-infrared region, i-IEICO-4F exhibits a twisted main-chain configuration, resulting in 164 nm blue shifts and leading to complementary absorption with the wide-bandgap polymer (J52). A high solution molar extinction coefficient of 2.41 \u00d7 105 M\u20131 cm\u20131, and sufficiently high energy of charge-transfer excitons of 1.15 eV in a J52:i-IEICO-4F blend were observed, in comparison with those of 2.26 \u00d7 105 M\u20131 cm\u20131 and 1.08 eV for IEICO-4F. A power conversion efficiency of...", "what Donor ?", "J52", 730.0, 733.0], ["Low bandgap n-type organic semiconductor (n-OS) ITIC has attracted great attention for the application as an acceptor with medium bandgap p-type conjugated polymer as donor in nonfullerene polymer solar cells (PSCs) because of its attractive photovoltaic performance. Here we report a modification on the molecular structure of ITIC by side-chain isomerization with meta-alkyl-phenyl substitution, m-ITIC, to further improve its photovoltaic performance. In a comparison with its isomeric counterpart ITIC with para-alkyl-phenyl substitution, m-ITIC shows a higher film absorption coefficient, a larger crystalline coherence, and higher electron mobility. 
These inherent advantages of m-ITIC resulted in a higher power conversion efficiency (PCE) of 11.77% for the nonfullerene PSCs with m-ITIC as acceptor and a medium bandgap polymer J61 as donor, which is significantly improved over that (10.57%) of the corresponding devices with ITIC as acceptor. To the best of our knowledge, the PCE of 11.77% is one of the highest values reported in the literature to date for nonfullerene PSCs. More importantly, the m-ITIC-based device shows less thickness-dependent photovoltaic behavior than ITIC-based devices in the active-layer thickness range of 80-360 nm, which is beneficial for large area device fabrication. These results indicate that m-ITIC is a promising low bandgap n-OS for the application as an acceptor in PSCs, and the side-chain isomerization could be an easy and convenient way to further improve the photovoltaic performance of the donor and acceptor materials for high efficiency PSCs.", "what Donor ?", "J61", 836.0, 839.0], ["Low-bandgap polymers/molecules are an interesting family of semiconductor materials, and have enabled many recent exciting breakthroughs in the field of organic electronics, especially for organic photovoltaics (OPVs). Here, such a low-bandgap (1.43 eV) non-fullerene electron acceptor (BT-IC) bearing a fused 7-heterocyclic ring with absorption edge extending to the near-infrared (NIR) region was specially designed and synthesized. Benefitted from its NIR light harvesting, high performance OPVs were fabricated with medium bandgap polymers (J61 and J71) as donors, showing power conversion efficiencies of 9.6% with J61 and 10.5% with J71 along with extremely low energy loss (0.56 eV for J61 and 0.53 eV for J71). Interestingly, femtosecond transient absorption spectroscopy studies on both systems show that efficient charge generation was observed despite the fact that the highest occupied molecular orbital (HOMO)\u2013HOMO offset (\u0394EH) in the blends was as low as 0.10 eV, suggesting that such a small \u0394EH is not a crucial limitation in realizing high performance of NIR non-fullerene based OPVs. Our results indicated that BT-IC is an interesting NIR non-fullerene acceptor with great potential application in tandem/multi-junction, semitransparent, and ternary blend solar cells.", "what Donor ?", "J71", 553.0, 556.0], ["Ladder-type dithienocyclopentacarbazole (DTCC) cores, which possess highly extended \u03c0-conjugated backbones and versatile modular structures for derivatization, were widely used to develop high-performance p-type polymeric semiconductors. However, an n-type DTCC-based organic semiconductor has not been reported to date. In this study, the first DTCC-based n-type organic semiconductor (DTCC\u2013IC) with a well-defined A\u2013D\u2013A backbone was designed, synthesized, and characterized, in which a DTCC derivative substituted by four p-octyloxyphenyl groups was used as the electron-donating core and two strongly electron-withdrawing 3-(dicyanomethylene)indan-1-one moieties were used as the terminal acceptors. It was found that DTCC\u2013IC has strong light-capturing ability in the range of 500\u2013720 nm and exhibits an impressively high molar absorption coefficient of 2.24 \u00d7 105 M\u22121 cm\u22121 at 669 nm owing to effective intramolecular charge transfer and a strong D\u2013A effect. Cyclic voltammetry measurements indicated that the HOMO and LUMO energy levels of DTCC\u2013IC are \u22125.50 and \u22123.87 eV, respectively. 
More importantly, a high electron mobility of 2.17 \u00d7 10\u22123 cm2 V\u22121 s\u22121 was determined by the space-charge-limited current method; this electron mobility can be comparable to that of fullerene derivative acceptors (\u03bce \u223c 10\u22123 cm2 V\u22121 s\u22121). To investigate its application potential in non-fullerene solar cells, we fabricated organic solar cells (OSCs) by blending a DTCC\u2013IC acceptor with a PTB7-Th donor under various conditions. The results suggest that the optimized device exhibits a maximum power conversion efficiency (PCE) of up to 6% and a rational high VOC of 0.95 V. These findings demonstrate that the ladder-type DTCC core is a promising building block for the development of high-mobility n-type organic semiconductors for OSCs.", "what Donor ?", "PTB7-Th", 1477.0, 1484.0], ["Two cheliform non-fullerene acceptors, DTPC-IC and DTPC-DFIC, based on a highly electron-rich core, dithienopicenocarbazole (DTPC), are synthesized, showing ultra-narrow bandgaps (as low as 1.21 eV). The two-dimensional nitrogen-containing conjugated DTPC possesses strong electron-donating capability, which induces intense intramolecular charge transfer and intermolecular \u03c0-\u03c0 stacking in derived acceptors. The solar cell based on DTPC-DFIC and a spectrally complementary polymer donor, PTB7-Th, showed a high power conversion efficiency of 10.21% and an extremely low energy loss of 0.45 eV, which is the lowest among reported efficient OSCs.", "what Donor ?", "PTB7-Th", 490.0, 497.0], ["Organic solar cells (OSCs) are a promising cost-effective alternative for utility of solar energy, and possess low-cost, light-weight, and flexibility advantages. [1\u20137] Much attention has been focused on the development of OSCs which have seen a dramatic rise in efficiency over the last decade, and the encouraging power conversion efficiency (PCE) over 9% has been achieved from bulk heterojunction (BHJ) OSCs. [8] With regard to photoactive materials, fullerenes and their derivatives, such as [6,6]-phenyl C61 butyric acid methyl ester (PC61BM), have been the dominant electron-acceptor materials in BHJ OSCs, owing to their high electron mobility, large electron affinity and isotropy of charge transport. [9] However, fullerenes have a few disadvantages, such as restricted electronic tuning and weak absorption in the visible region. Furthermore, in typical BHJ system of poly(3-hexylthiophene) (P3HT):PC61BM, mismatching energy levels between donor and acceptor leads to energy loss and low open-circuit voltages (VOC). To solve these problems, novel electron acceptor materials with strong and broad absorption spectra and appropriate energy levels are necessary for OSCs. Recently, non-fullerene small molecule acceptors have been developed. [10, 11] However, rare reports on the devices based on solution-processed non-fullerene small molecule acceptors have shown PCEs approaching or exceeding 1.5%, [12\u201319] and only one paper reported PCEs over 2%. [16]", "what Donor ?", "P3HT", 916.0, 920.0], ["A novel small molecule, FBR, bearing 3-ethylrhodanine flanking groups was synthesized as a nonfullerene electron acceptor for solution-processed bulk heterojunction organic photovoltaics (OPV). A straightforward synthesis route was employed, offering the potential for large scale preparation of this material. Inverted OPV devices employing poly(3-hexylthiophene) (P3HT) as the donor polymer and FBR as the acceptor gave power conversion efficiencies (PCE) up to 4.1%. 
Transient and steady state optical spectroscopies indicated efficient, ultrafast charge generation and efficient photocurrent generation from both donor and acceptor. Ultrafast transient absorption spectroscopy was used to investigate polaron generation efficiency as well as recombination dynamics. It was determined that the P3HT:FBR blend is highly intermixed, leading to increased charge generation relative to comparative devices with P3HT:PC60BM, but also faster recombination due to a nonideal morphology in which, in contrast to P3HT:PC60BM devices, the acceptor does not aggregate enough to create appropriate percolation pathways that prevent fast nongeminate recombination. Despite this nonoptimal morphology the P3HT:FBR devices exhibit better performance than P3HT:PC60BM devices, used as control, demonstrating that this acceptor shows great promise for further optimization.", "what Donor ?", "P3HT", 366.0, 370.0], ["Three novel non-fullerene small molecular acceptors ITOIC, ITOIC-F, and ITOIC-2F were designed and synthesized with easy chemistry. The concept of supramolecular chemistry was successfully used in the molecular design, which includes noncovalently conformational locking (via intrasupramolecular interaction) to enhance the planarity of backbone and electrostatic interaction (intersupramolecular interaction) to enhance the \u03c0\u2013\u03c0 stacking of terminal groups. Fluorination can further strengthen the intersupramolecular electrostatic interaction of terminal groups. As expected, the designed acceptors exhibited excellent device performance when blended with polymer donor PBDB-T. In comparison with the parent acceptor molecule DC-IDT2T reported in the literature with a power conversion efficiency (PCE) of 3.93%, ITOIC with a planar structure exhibited a PCE of 8.87% and ITOIC-2F with a planar structure and enhanced electrostatic interaction showed a quite impressive PCE of 12.17%. Our result demonstrates the import...", "what Donor ?", "PBDB-T", 671.0, 677.0], ["Stable bioimaging with nanomaterials in living cells has been a great challenge and of great importance for understanding intracellular events and elucidating various biological phenomena. Herein, we demonstrate that N,S co-doped carbon dots (N,S-CDs) produced by one-pot reflux treatment of C3N3S3 with ethane diamine at a relatively low temperature (80 \u00b0C) exhibit a high fluorescence quantum yield of about 30.4%, favorable biocompatibility, low-toxicity, strong resistance to photobleaching and good stability. The N,S-CDs as an effective temperature indicator exhibit good temperature-dependent fluorescence with a sensational linear response from 20 to 80 \u00b0C. In addition, the obtained N,S-CDs facilitate high selectivity detection of tetracycline (TC) with a detection limit as low as 3 \u00d7 10-10 M and a wide linear range from 1.39 \u00d7 10-5 to 1.39 \u00d7 10-9 M. More importantly, the N,S-CDs display an unambiguous bioimaging ability in the detection of intracellular temperature and TC with satisfactory results.", "what precursors ?", "C3N3S3", 292.0, 298.0], ["The fluorescent N-doped carbon dots (N-CDs) obtained from C3N4 emit strong blue fluorescence, which is stable with different ionic strengths and time. The fluorescence intensity of N-CDs decreases with the temperature increasing, while it can recover to the initial one with the temperature decreasing. 
It is an accurate linear response of fluorescence intensity to temperature, which may be attributed to the synergistic effect of abundant oxygen-containing functional groups and hydrogen bonds. Further experiments also demonstrate that N-CDs can serve as effective in vitro and in vivo fluorescence-based nanothermometer.", "what precursors ?", "C3N4", 58.0, 62.0], ["Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179\u00b122 nm were produced. Res was loaded with high EE of 73\u00b10.9% and DL content of 6.2\u00b10.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.", "what has cell line ?", "MDA-MB-231", 1030.0, 1040.0], ["Purpose: To develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel. Methods: Chitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231). Results: These studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. 
The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX. Conclusion: These advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.", "what has cell line ?", "MDA-MB-231", 682.0, 692.0], ["Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179\u00b122 nm were produced. Res was loaded with high EE of 73\u00b10.9% and DL content of 6.2\u00b10.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.", "what has cell line ?", "MCF-10A", 1055.0, 1062.0], ["Background: Paclitaxel (PTX) is one of the most important and effective anticancer drugs for the treatment of human cancer. However, its low solubility and severe adverse effects limited clinical use. To overcome this limitation, nanotechnology has been used to overcome tumors due to its excellent antimicrobial activity. Objective: This study was to demonstrate the anticancer properties of functionalization silver nanoparticles loaded with paclitaxel (Ag@PTX) induced A549 cells apoptosis through ROS-mediated signaling pathways. Methods: The Ag@PTX nanoparticles were charged with a zeta potential of about -17 mV and characterized around 2 nm with a narrow size distribution. Results: Ag@PTX significantly decreased the viability of A549 cells and possessed selectivity between cancer and normal cells. Ag@PTX induced A549 cells apoptosis was confirmed by nuclear condensation, DNA fragmentation, and activation of caspase-3. 
Furthermore, Ag@PTX enhanced the anti-cancer activity of A549 cells through ROS-mediated p53 and AKT signalling pathways. Finally, in a xenograft nude mice model, Ag@PTX suppressed the growth of tumors. Conclusion: Our findings suggest that Ag@PTX may be a candidate as a chemopreventive agent and could be a highly efficient way to achieve anticancer synergism for human cancers.", "what has cell line ?", "A549 cells", 472.0, 482.0], ["Nanocrystal formulation has become a viable solution for delivering poorly soluble drugs including chemotherapeutic agents. The purpose of this study was to examine cellular uptake of paclitaxel nanocrystals by confocal imaging and concentration measurement. It was found that drug nanocrystals could be internalized by KB cells at much higher concentrations than a conventional, solubilized formulation. The imaging and quantitative results suggest that nanocrystals could be directly taken up by cells as solid particles, likely via endocytosis. Moreover, it was found that polymer treatment to drug nanocrystals, such as surface coating and lattice entrapment, significantly influenced the cellular uptake. While drug molecules are in the most stable physical state, nanocrystals of a poorly soluble drug are capable of achieving concentrated intracellular presence enabling needed therapeutic effects.", "what has cell line ?", "KB cells", 320.0, 328.0], ["The folding of monomeric antigens and their subsequent assembly into higher ordered structures are crucial for robust and effective production of nanoparticle (NP) vaccines in a timely and reproducible manner. Despite significant advances in in silico design and structure-based assembly, most engineered NPs are refractory to soluble expression and fail to assemble as designed, presenting major challenges in the manufacturing process. The failure is due to a lack of understanding of the kinetic pathways and enabling technical platforms to ensure successful folding of the monomer antigens into regular assemblages. Capitalizing on a novel function of RNA as a molecular chaperone (chaperna: chaperone + RNA), we provide a robust protein-folding vehicle that may be implemented to NP assembly in bacterial hosts. The receptor-binding domain (RBD) of Middle East respiratory syndrome-coronavirus (MERS-CoV) was fused with the RNA-interaction domain (RID) and bacterioferritin, and expressed in Escherichia coli in a soluble form. Site-specific proteolytic removal of the RID prompted the assemblage of monomers into NPs, which was confirmed by electron microscopy and dynamic light scattering. The mutations that affected the RNA binding to RBD significantly increased the soluble aggregation into amorphous structures, reducing the overall yield of NPs of a defined size. This underscored the RNA-antigen interactions during NP assembly. The sera after mouse immunization effectively interfered with the binding of MERS-CoV RBD to the cellular receptor hDPP4. The results suggest that RNA-binding controls the overall kinetic network of the antigen folding pathway in favor of enhanced assemblage of NPs into highly regular and immunologically relevant conformations. The concentration of the ion Fe2+, salt, and fusion linker also contributed to the assembly in vitro, and the stability of the NPs. 
The kinetic \u201cpace-keeping\u201d role of chaperna in the super molecular assembly of antigen monomers holds promise for the development and delivery of NPs and virus-like particles as recombinant vaccines and for serological detection of viral infections.", "what Virus ?", "MERS-CoV", 900.0, 908.0], ["Middle East respiratory syndrome (MERS) coronavirus (MERS-CoV), an infectious coronavirus first reported in 2012, has a mortality rate greater than 35%. Therapeutic antibodies are key tools for preventing and treating MERS-CoV infection, but to date no such agents have been approved for treatment of this virus. Nanobodies (Nbs) are camelid heavy chain variable domains with properties distinct from those of conventional antibodies and antibody fragments. We generated two oligomeric Nbs by linking two or three monomeric Nbs (Mono-Nbs) targeting the MERS-CoV receptor-binding domain (RBD), and compared their RBD-binding affinity, RBD\u2013receptor binding inhibition, stability, and neutralizing and cross-neutralizing activity against MERS-CoV. Relative to Mono-Nb, dimeric Nb (Di-Nb) and trimeric Nb (Tri-Nb) had significantly greater ability to bind MERS-CoV RBD proteins with or without mutations in the RBD, thereby potently blocking RBD\u2013MERS-CoV receptor binding. The engineered oligomeric Nbs were very stable under extreme conditions, including low or high pH, protease (pepsin), chaotropic denaturant (urea), and high temperature. Importantly, Di-Nb and Tri-Nb exerted significantly elevated broad-spectrum neutralizing activity against at least 19 human and camel MERS-CoV strains isolated in different countries and years. Overall, the engineered Nbs could be developed into effective therapeutic agents for prevention and treatment of MERS-CoV infection.", "what Virus ?", "MERS-CoV", 53.0, 61.0], ["Worldwide outbreaks of infectious diseases necessitate the development of rapid and accurate diagnostic methods. Colorimetric assays are a representative tool to simply identify the target molecules in specimens through color changes of an indicator (e.g., nanosized metallic particle, and dye molecules). The detection method is used to confirm the presence of biomarkers visually and measure absorbance of the colored compounds at a specific wavelength. In this study, we propose a colorimetric assay based on an extended form of double-stranded DNA (dsDNA) self-assembly shielded gold nanoparticles (AuNPs) under positive electrolyte (e.g., 0.1 M MgCl2) for detection of Middle East respiratory syndrome coronavirus (MERS-CoV). This platform is able to verify the existence of viral molecules through a localized surface plasmon resonance (LSPR) shift and color changes of AuNPs in the UV\u2013vis wavelength range. We designed a pair of thiol-modified probes at either the 5\u2032 end or 3\u2032 end to organize complementary base pairs with upstream of the E protein gene (upE) and open reading frames (ORF) 1a on MERS-CoV. The dsDNA of the target and probes forms a disulfide-induced long self-assembled complex, which protects AuNPs from salt-induced aggregation and transition of optical properties. 
This colorimetric assay could discriminate down to 1 pmol/\u03bcL of 30 bp MERS-CoV and further be adapted for convenient on-site detection of other infectious diseases, especially in resource-limited settings.", "what Virus ?", "MERS-CoV", 720.0, 728.0], ["The development of simple fluorescent and colorimetric assays that enable point-of-care DNA and RNA detection has been a topic of significant research because of the utility of such assays in resource limited settings. The most common motifs utilize hybridization to a complementary detection strand coupled with a sensitive reporter molecule. Here, a paper-based colorimetric assay for DNA detection based on pyrrolidinyl peptide nucleic acid (acpcPNA)-induced nanoparticle aggregation is reported as an alternative to traditional colorimetric approaches. PNA probes are an attractive alternative to DNA and RNA probes because they are chemically and biologically stable, easily synthesized, and hybridize efficiently with the complementary DNA strands. The acpcPNA probe contains a single positive charge from the lysine at C-terminus and causes aggregation of citrate anion-stabilized silver nanoparticles (AgNPs) in the absence of complementary DNA. In the presence of target DNA, formation of the anionic DNA-acpcPNA duplex results in dispersion of the AgNPs as a result of electrostatic repulsion, giving rise to a detectable color change. Factors affecting the sensitivity and selectivity of this assay were investigated, including ionic strength, AgNP concentration, PNA concentration, and DNA strand mismatches. The method was used for screening of synthetic Middle East respiratory syndrome coronavirus (MERS-CoV), Mycobacterium tuberculosis (MTB), and human papillomavirus (HPV) DNA based on a colorimetric paper-based analytical device developed using the aforementioned principle. The oligonucleotide targets were detected by measuring the color change of AgNPs, giving detection limits of 1.53 (MERS-CoV), 1.27 (MTB), and 1.03 nM (HPV). The acpcPNA probe exhibited high selectivity for the complementary oligonucleotides over single-base-mismatch, two-base-mismatch, and noncomplementary DNA targets. The proposed paper-based colorimetric DNA sensor has potential to be an alternative approach for simple, rapid, sensitive, and selective DNA detection.", "what Virus ?", "MERS-CoV", 1414.0, 1422.0], ["Significance Middle East respiratory syndrome coronavirus (MERS-CoV) recurrently infects humans from its dromedary camel reservoir, causing severe respiratory disease with an \u223c35% fatality rate. The virus binds to the dipeptidyl peptidase 4 (DPP4) entry receptor on respiratory epithelial cells via its spike protein. We here report that the MERS-CoV spike protein selectively binds to sialic acid (Sia) and demonstrate that cell-surface sialoglycoconjugates can serve as an attachment factor. Our observations warrant further research into the role of Sia binding in the virus\u2019s host and tissue tropism and transmission, which may be influenced by the observed Sia-binding fine specificity and by differences in sialoglycomes among host species. Middle East respiratory syndrome coronavirus (MERS-CoV) targets the epithelial cells of the respiratory tract both in humans and in its natural host, the dromedary camel. Virion attachment to host cells is mediated by 20-nm-long homotrimers of spike envelope protein S. The N-terminal subunit of each S protomer, called S1, folds into four distinct domains designated S1A through S1D. 
Binding of MERS-CoV to the cell surface entry receptor dipeptidyl peptidase 4 (DPP4) occurs via S1B. We now demonstrate that in addition to DPP4, MERS-CoV binds to sialic acid (Sia). Initially demonstrated by hemagglutination assay with human erythrocytes and intact virus, MERS-CoV Sia-binding activity was assigned to S subdomain S1A. When multivalently displayed on nanoparticles, S1 or S1A bound to human erythrocytes and to human mucin in a strictly Sia-dependent fashion. Glycan array analysis revealed a preference for \u03b12,3-linked Sias over \u03b12,6-linked Sias, which correlates with the differential distribution of \u03b12,3-linked Sias and the predominant sites of MERS-CoV replication in the upper and lower respiratory tracts of camels and humans, respectively. Binding is hampered by Sia modifications such as 5-N-glycolylation and (7,)9-O-acetylation. Depletion of cell surface Sia by neuraminidase treatment inhibited MERS-CoV entry of Calu-3 human airway cells, thus providing direct evidence that virus\u2013Sia interactions may aid in virion attachment. The combined observations lead us to propose that high-specificity, low-affinity attachment of MERS-CoV to sialoglycans during the preattachment or early attachment phase may form another determinant governing the host range and tissue tropism of this zoonotic pathogen.", "what Virus ?", "MERS-CoV", 59.0, 67.0], ["MERS-CoV uses the S1B domain of its spike protein to attach to its host receptor, dipeptidyl peptidase 4 (DPP4). The tissue localization of DPP4 has been mapped in different susceptible species. On the other hand, the S1A domain, the N-terminal domain of this spike protein, preferentially binds to several glycotopes of \u03b12,3-sialic acids, the attachment factor of MERS-CoV. Here we show, using a novel method, that the S1A domain specifically binds to the nasal epithelium of dromedary camels, alveolar epithelium of humans, and intestinal epithelium of common pipistrelle bats. In contrast, it does not bind to the nasal epithelium of pigs or rabbits, nor does it bind to the intestinal epithelium of serotine bats and frugivorous bat species. This finding supports the importance of the S1A domain in MERS-CoV infection and tropism, suggests its role in transmission, and highlights its potential use as a component of novel vaccine candidates. ABSTRACT Middle East respiratory syndrome coronavirus (MERS-CoV) uses the S1B domain of its spike protein to bind to dipeptidyl peptidase 4 (DPP4), its functional receptor, and its S1A domain to bind to sialic acids. The tissue localization of DPP4 in humans, bats, camelids, pigs, and rabbits generally correlates with MERS-CoV tropism, highlighting the role of DPP4 in virus pathogenesis and transmission. However, MERS-CoV S1A does not indiscriminately bind to all \u03b12,3-sialic acids, and the species-specific binding and tissue distribution of these sialic acids in different MERS-CoV-susceptible species have not been investigated. We established a novel method to detect these sialic acids on tissue sections of various organs of different susceptible species by using nanoparticles displaying multivalent MERS-CoV S1A. We found that the nanoparticles specifically bound to the nasal epithelial cells of dromedary camels, type II pneumocytes in human lungs, and the intestinal epithelial cells of common pipistrelle bats. Desialylation by neuraminidase abolished nanoparticle binding and significantly reduced MERS-CoV infection in primary susceptible cells. 
In contrast, S1A nanoparticles did not bind to the intestinal epithelium of serotine bats and frugivorous bat species, nor did they bind to the nasal epithelium of pigs and rabbits. Both pigs and rabbits have been shown to shed less infectious virus than dromedary camels and do not transmit the virus via either contact or airborne routes. Our results depict species-specific colocalization of MERS-CoV entry and attachment receptors, which may be relevant in the transmission and pathogenesis of MERS-CoV. IMPORTANCE MERS-CoV uses the S1B domain of its spike protein to attach to its host receptor, dipeptidyl peptidase 4 (DPP4). The tissue localization of DPP4 has been mapped in different susceptible species. On the other hand, the S1A domain, the N-terminal domain of this spike protein, preferentially binds to several glycotopes of \u03b12,3-sialic acids, the attachment factor of MERS-CoV. Here we show, using a novel method, that the S1A domain specifically binds to the nasal epithelium of dromedary camels, alveolar epithelium of humans, and intestinal epithelium of common pipistrelle bats. In contrast, it does not bind to the nasal epithelium of pigs or rabbits, nor does it bind to the intestinal epithelium of serotine bats and frugivorous bat species. This finding supports the importance of the S1A domain in MERS-CoV infection and tropism, suggests its role in transmission, and highlights its potential use as a component of novel vaccine candidates.", "what Virus ?", "MERS-CoV", 0.0, 8.0], ["Therapeutic development is critical for preventing and treating continual MERS-CoV infections in humans and camels. Because of their small size, nanobodies (Nbs) have advantages as antiviral therapeutics (e.g., high expression yield and robustness for storage and transportation) and also potential limitations (e.g., low antigen-binding affinity and fast renal clearance). Here, we have developed novel Nbs that specifically target the receptor-binding domain (RBD) of MERS-CoV spike protein. They bind to a conserved site on MERS-CoV RBD with high affinity, blocking RBD's binding to MERS-CoV receptor. Through engineering a C-terminal human Fc tag, the in vivo half-life of the Nbs is significantly extended. Moreover, the Nbs can potently cross-neutralize the infections of diverse MERS-CoV strains isolated from humans and camels. The Fc-tagged Nb also completely protects humanized mice from lethal MERS-CoV challenge. Taken together, our study has discovered novel Nbs that hold promise as potent, cost-effective, and broad-spectrum anti-MERS-CoV therapeutic agents.", "what Virus ?", "MERS-CoV", NaN, NaN], ["ABSTRACT Camelid heavy-chain variable domains (VHHs) are the smallest, intact, antigen-binding units to occur in nature. VHHs possess high degrees of solubility and robustness enabling generation of multivalent constructs with increased avidity \u2013 characteristics that mark their superiority to other antibody fragments and monoclonal antibodies. Capable of effectively binding to molecular targets inaccessible to classical immunotherapeutic agents and easily produced in microbial culture, VHHs are considered promising tools for pharmaceutical biotechnology. 
With the aim to demonstrate the perspective and potential of VHHs for the development of prophylactic and therapeutic drugs to target diseases caused by bacterial and viral infections, this review article will initially describe the structural features that underlie the unique properties of VHHs and explain the methods currently used for the selection and recombinant production of pathogen-specific VHHs, and then thoroughly summarize the experimental findings of five distinct studies that employed VHHs as inhibitors of host\u2013pathogen interactions or neutralizers of infectious agents. Past and recent studies suggest the potential of camelid heavy-chain variable domains as a novel modality of immunotherapeutic drugs and a promising alternative to monoclonal antibodies. VHHs demonstrate the ability to interfere with bacterial pathogenesis by preventing adhesion to host tissue and sequestering disease-causing bacterial toxins. To protect from viral infections, VHHs may be employed as inhibitors of viral entry by binding to viral coat proteins or blocking interactions with cell-surface receptors. The implementation of VHHs as immunotherapeutic agents for infectious diseases is of considerable potential and set to contribute to public health in the near future.", "what Virus ?", "viral infections", 728.0, 744.0], ["Engineered cocrystals offer an alternative solid drug form with tailored physicochemical properties. Interestingly, although cocrystals provide many new possibilities, they also present new challenges, particularly in regard to their design and large-scale manufacture. Current literature has primarily focused on the preparation and characterization of novel cocrystals typically containing only the drug and coformer, leaving the subsequent formulation less explored. In this paper we propose, for the first time, the use of hot melt extrusion for the mechanochemical synthesis of pharmaceutical cocrystals in the presence of a meltable binder. In this approach, we examine excipients that are amenable to hot melt extrusion, forming a suspension of cocrystal particulates embedded in a pharmaceutical matrix. Using ibuprofen and isonicotinamide as a model cocrystal reagent pair, formulations extruded with a small molecular matrix carrier (xylitol) were examined to be intimate mixtures wherein the newly formed cocrystal particulates were physically suspended in a matrix. With respect to formulations extruded using polymeric carriers (Soluplus and Eudragit EPO, respectively), however, there was no evidence within PXRD patterns of either crystalline ibuprofen or the cocrystal. Importantly, it was established in this study that an appropriate carrier for a cocrystal reagent pair during HME processing should satisfy certain criteria including limited interaction with parent reagents and cocrystal product, processing temperature sufficiently lower than the onset of cocrystal Tm, low melt viscosity, and rapid solidification upon cooling.", "what Carrier for hot melt extrusion ?", "Xylitol", 944.0, 951.0], ["The objective of the present study was to investigate the effects of processing variables and formulation factors on the characteristics of hot-melt extrudates containing a copolymer (Kollidon\u00ae VA 64). Nifedipine was used as a model drug in all of the extrudates. Differential scanning calorimetry (DSC) was utilized on the physical mixtures and melts of varying drug\u2013polymer concentrations to study their miscibility. 
The drug\u2013polymer binary mixtures were studied for powder flow, drug release, and physical and chemical stabilities. The effects of moisture absorption on the content uniformity of the extrudates were also studied. Processing the materials at lower barrel temperatures (115\u2013135\u00b0C) and higher screw speeds (50\u2013100 rpm) exhibited higher post-processing drug content (~99\u2013100%). DSC and X-ray diffraction studies confirmed that melt extrusion of drug\u2013polymer mixtures led to the formation of solid dispersions. Interestingly, the extrusion process also enhanced the powder flow characteristics, which occurred irrespective of the drug load (up to 40% w/w). Moreover, the content uniformity of the extrudates, unlike the physical mixtures, was not sensitive to the amount of moisture absorbed. The extrusion conditions did not influence drug release from the extrudates; however, release was greatly affected by the drug loading. Additionally, the drug release from the physical mixture of nifedipine\u2013Kollidon\u00ae VA 64 was significantly different when compared to the corresponding extrudates (f2 = 36.70). The extrudates exhibited both physical and chemical stabilities throughout the period of study. Overall, hot-melt extrusion technology in combination with Kollidon\u00ae VA 64 produced extrudates capable of higher drug loading, with enhanced flow characteristics, and excellent stability.", "what Carrier for hot melt extrusion ?", "Kollidon\u00ae VA 64", 184.0, 199.0], ["In this study, we examine the relationship between the physical structure and dissolution behavior of olanzapine (OLZ) prepared via hot-melt extrusion in three polymers [polyvinylpyrrolidone (PVP) K30, polyvinylpyrrolidone-co-vinyl acetate (PVPVA) 6:4, and Soluplus\u00ae (SLP)]. In particular, we examine whether full amorphicity is necessary to achieve a favorable dissolution profile. Drug\u2013polymer miscibility was estimated using melting point depression and Hansen solubility parameters. Solid dispersions were characterized using differential scanning calorimetry, X-ray powder diffraction, and scanning electron microscopy. All the polymers were found to be miscible with OLZ in a decreasing order of PVP>PVPVA>SLP. At a lower extrusion temperature (160\u00b0C), PVP generated fully amorphous dispersions with OLZ, whereas the formulations with PVPVA and SLP contained 14%\u201316% crystalline OLZ. Increasing the extrusion temperature to 180\u00b0C allowed the preparation of fully amorphous systems with PVPVA and SLP. Despite these differences, the dissolution rates of these preparations were comparable, with PVP showing a lower release rate despite being fully amorphous. These findings suggested that, at least in the particular case of OLZ, the absence of crystalline material may not be critical to the dissolution performance. We suggest alternative key factors determining dissolution, particularly the dissolution behavior of the polymers themselves.", "what Carrier for hot melt extrusion ?", "Soluplus\u00ae", NaN, NaN], ["Abstract The aim of the current study is to develop amorphous solid dispersion (SD) via hot melt extrusion technology to improve the solubility of a water-insoluble compound, felodipine (FEL). The solubility was dramatically increased by preparation of amorphous SDs via hot-melt extrusion with an amphiphilic polymer, Soluplus\u00ae (SOL). FEL was found to be miscible with SOL by calculating the solubility parameters. 
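The olanzapine and felodipine entries above both screen drug\u2013polymer miscibility via solubility parameters. For orientation only, a minimal sketch of the standard Hansen-parameter distance used in such screening; the relation is textbook material and is not quoted from either entry:

```latex
% Hansen distance R_a between drug (1) and polymer (2); dispersion (d),
% polar (p), and hydrogen-bonding (h) components. A smaller R_a suggests
% better miscibility; the acceptance threshold is system-specific.
R_a^2 = 4\,(\delta_{d,1}-\delta_{d,2})^2 + (\delta_{p,1}-\delta_{p,2})^2 + (\delta_{h,1}-\delta_{h,2})^2
```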
The solubility of FEL within SOL was determined to be in the range of 6.2\u20139.9% (w/w). Various techniques were applied to characterize the solid-state properties of the amorphous SDs. These included Fourier Transform Infrared (FTIR) spectroscopy and Raman spectroscopy to detect the formation of hydrogen bonding between the drug and the polymer. Scanning electron microscopy was performed to study the morphology of the SDs. Among all the hot-melt extrudates, FEL was found to be molecularly dispersed within the polymer matrix for the extrudates containing 10% drug, while a few small crystals were detected in the 30 and 50% extrudates. In conclusion, the solubility of FEL was enhanced while a homogeneous SD was achieved for 10% drug loading.", "what Carrier for hot melt extrusion ?", "Soluplus\u00ae", NaN, NaN], ["Abstract It is very challenging to treat brain cancer because of the blood\u2013brain barrier (BBB) restricting therapeutic drug or gene to access the brain. In this research project, angiopep-2 (ANG) was used as a brain-targeted peptide for preparing multifunctional ANG-modified poly(lactic-co-glycolic acid) (PLGA) nanoparticles (NPs), which encapsulated both doxorubicin (DOX) and epidermal growth factor receptor (EGFR) siRNA, designated as ANG/PLGA/DOX/siRNA. This system could efficiently deliver DOX and siRNA into U87MG cells leading to significant cell inhibition, apoptosis and EGFR silencing in vitro. It demonstrated that this drug system was capable of penetrating the BBB in vivo, resulting in greater drug accumulation in the brain. The animal study using the brain orthotopic U87MG glioma xenograft model indicated that the ANG-targeted co-delivery of DOX and EGFR siRNA resulted in not only the prolongation of the life span of the glioma-bearing mice but also an obvious cell apoptosis in glioma tissue.", "what Surface functionalized with ?", "Angiopep-2", 179.0, 189.0], ["AIM Drug targeting to the CNS is challenging due to the presence of blood-brain barrier. We investigated chitosan (Cs) nanoparticles (NPs) as drug transporter system across the blood-brain barrier, based on mAb OX26 modified Cs. MATERIALS & METHODS Cs NPs functionalized with PEG, modified and unmodified with OX26 (Cs-PEG-OX26) were prepared and chemico-physically characterized. These NPs were administered (intraperitoneal) in mice to define their ability to reach the brain. RESULTS Brain uptake of OX26-conjugated NPs is much higher than that of unmodified NPs, because of long-circulating abilities (conferred by PEG), interaction between cationic Cs and the negative charges of the brain endothelium, and OX26 TfR receptor affinity. CONCLUSION Cs-PEG-OX26 NPs are a promising drug delivery system to the CNS.", "what Surface functionalized with ?", "OX26", 211.0, 215.0], ["Abstract Melanotransferrin antibody (MA) and tamoxifen (TX) were conjugated on etoposide (ETP)-entrapped solid lipid nanoparticles (ETP-SLNs) to target the blood\u2013brain barrier (BBB) and glioblastoma multiforme (GBM). MA- and TX-conjugated ETP-SLNs (MA\u2013TX\u2013ETP\u2013SLNs) were used to infiltrate the BBB comprising a monolayer of human astrocyte-regulated human brain-microvascular endothelial cells (HBMECs) and to restrain the proliferation of malignant U87MG cells. TX-grafted ETP-SLNs (TX\u2013ETP\u2013SLNs) significantly enhanced the BBB permeability coefficient for ETP and raised the fluorescent intensity of calcein-AM when compared with ETP-SLNs. In addition, surface MA could increase the BBB permeability coefficient for ETP about twofold.
The viability of HBMECs was higher than 86%, suggesting a high biocompatibility of MA\u2013TX\u2013ETP-SLNs. Moreover, the efficiency in antiproliferation against U87MG cells was in the order of MA\u2013TX\u2013ETP-SLNs > TX\u2013ETP-SLNs > ETP-SLNs > SLNs. The capability of MA\u2013TX\u2013ETP-SLNs to target HBMECs and U87MG cells during internalization was verified by immunochemical staining of expressed melanotransferrin. MA\u2013TX\u2013ETP-SLNs can be a potent pharmacotherapy to deliver ETP across the BBB to GBM.", "what Surface functionalized with ?", "Melanotransferrin antibody (MA)", NaN, NaN], ["Alzheimer's disease is a growing concern in the modern world. As the currently available medications are not very promising, there is an increased need for the fabrication of newer drugs. Curcumin is a plant-derived compound which has potential activities beneficial for the treatment of Alzheimer's disease. Anti-amyloid activity and anti-oxidant activity of curcumin is highly beneficial for the treatment of Alzheimer's disease. The insolubility of curcumin in water restricts its use to a great extent, which can be overcome by the synthesis of curcumin nanoparticles. In our work, we have successfully synthesized water-soluble PLGA coated- curcumin nanoparticles and characterized it using different techniques. As drug targeting to diseases of cerebral origin are difficult due to the stringency of blood-brain barrier, we have coupled the nanoparticle with Tet-1 peptide, which has the affinity to neurons and possesses retrograde transportation properties. Our results suggest that curcumin encapsulated-PLGA nanoparticles are able to destroy amyloid aggregates, exhibit anti-oxidative property and are non-cytotoxic. The encapsulation of the curcumin in PLGA does not destroy its inherent properties and so, the PLGA-curcumin nanoparticles can be used as a drug with multiple functions in treating Alzheimer's disease proving it to be a potential therapeutic tool against this dreaded disease.", "what Surface functionalized with ?", "Tet-1 peptide", 865.0, 878.0], ["A brain drug delivery system for glioma chemotherapy based on transferrin-conjugated biodegradable polymersomes, Tf-PO-DOX, was made and evaluated with doxorubicin (DOX) as a model drug. Biodegradable polymersomes (PO) loaded with doxorubicin (DOX) were prepared by the nanoprecipitation method (PO-DOX) and then conjugated with transferrin (Tf) to yield Tf-PO-DOX with an average diameter of 107 nm and surface Tf molecule number per polymersome of approximately 35. Compared with PO-DOX and free DOX, Tf-PO-DOX demonstrated the strongest cytotoxicity against C6 glioma cells and the greatest intracellular delivery. It was shown in pharmacokinetic and brain distribution experiments that Tf-PO significantly enhanced brain delivery of DOX, especially the delivery of DOX into brain tumor cells. Pharmacodynamics results revealed a significant reduction of tumor volume and a significant increase of median survival time in the group of Tf-PO-DOX compared with those in saline control animals, animals treated with PO-DOX, and free DOX solution. By terminal deoxynucleotidyl transferase-mediated dUTP nick-end-labeling, Tf-PO-DOX could extensively induce tumor cell apoptosis.
These results indicated that Tf-PO-DOX could significantly enhance the intracellular delivery of DOX in glioma and the chemotherapeutic effect of DOX in glioma-bearing rats.", "what Surface functionalized with ?", "Transferrin (Tf)", NaN, NaN], ["Alzheimer's disease (AD) is the most common form of dementia, characterized by the formation of extracellular senile plaques and neuronal loss caused by amyloid \u03b2 (A\u03b2) aggregates in the brains of AD patients. Conventional strategies failed to treat AD in clinical trials, partly due to the poor solubility, low bioavailability and ineffectiveness of the tested drugs to cross the blood-brain barrier (BBB). Moreover, AD is a complex, multifactorial neurodegenerative disease; one-target strategies may be insufficient to prevent the processes of AD. Here, we designed a novel kind of poly(lactide-co-glycolic acid) (PLGA) nanoparticles by loading them with A\u03b2 generation inhibitor S1 (PQVGHL peptide) and curcumin to target the detrimental factors in AD development and by conjugating them with brain targeting peptide CRT (cyclic CRTIGPSVC peptide), an iron-mimic peptide that targets transferrin receptor (TfR), to improve BBB penetration. The average particle sizes of drug-loaded PLGA nanoparticles and CRT-conjugated PLGA nanoparticles were 128.6 nm and 139.8 nm, respectively. The results of the Y-maze and new object recognition tests demonstrated that our PLGA nanoparticles significantly improved the spatial memory and recognition in transgenic AD mice. Moreover, PLGA nanoparticles remarkably decreased the level of A\u03b2, reactive oxygen species (ROS), TNF-\u03b1 and IL-6, and enhanced the activities of superoxide dismutase (SOD) and synapse numbers in the AD mouse brains. Compared with other PLGA nanoparticles, CRT peptide-modified PLGA nanoparticles co-delivering S1 and curcumin exhibited the most beneficial effect on the treatment of AD mice, suggesting that the conjugated CRT peptide and the encapsulated S1 and curcumin exerted their corresponding functions for the treatment.", "what Surface functionalized with ?", "Brain targeting peptide CRT (cyclic CRTIGPSVC peptide)", NaN, NaN], ["PURPOSE This study aimed to: (1) determine the relative efficiencies of topical and systemic absorption of drugs delivered by eyedrops to the anterior and posterior segments of the eye; (2) establish whether dexamethasone-cyclodextrin eyedrops deliver significant levels of drug to the retina and vitreous in the rabbit eye, and (3) compare systemic absorption following topical application to the eye versus intranasal or intravenous delivery. METHODS In order to distinguish between topical and systemic absorption in the eye, we applied 0.5% dexamethasone-cyclodextrin eyedrops to one (study) eye of rabbits and not to the contralateral (control) eye. Drug levels were measured in each eye. The study eye showed the result of the combination of topical and systemic absorption, whereas the control eye showed the result of systemic absorption only. Systemic absorption was also examined after intranasal and intravenous administration of the same dose of dexamethasone. RESULTS In the aqueous humour dexamethasone levels were 170 +/- 76 ng/g (mean +/- standard deviation) in the study eye and 6 +/- 2 ng/g in the control eye. Similar ratios were seen in the iris and ciliary body. In the retina the dexamethasone level was 33 +/- 7 ng/g in the study eye and 14 +/- 3 ng/g in the control eye. Similar ratios were seen in the vitreous humour.
Systemic absorption was similar from ocular, intranasal and intravenous administration. CONCLUSIONS Absorption after topical application dominates in the anterior segment. Topical absorption also plays a significant role in delivering dexamethasone to the posterior segment of the rabbit eye. Of the drug reaching the retina, 40% arrives via the systemic route and 60% via topical penetration. Dexamethasone-cyclodextrin eyedrops deliver a significant amount of drug to the rabbit retina.", "what Uses drug ?", "Dexamethasone", 208.0, 221.0], ["Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179\u00b122 nm were produced. Res was loaded with high EE of 73\u00b10.9% and DL content of 6.2\u00b10.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult-to-treat mammary malignancies.", "what Uses drug ?", "Resveratrol", 316.0, 327.0], ["Abstract It is very challenging to treat brain cancer because of the blood\u2013brain barrier (BBB) restricting therapeutic drug or gene to access the brain. In this research project, angiopep-2 (ANG) was used as a brain-targeted peptide for preparing multifunctional ANG-modified poly(lactic-co-glycolic acid) (PLGA) nanoparticles (NPs), which encapsulated both doxorubicin (DOX) and epidermal growth factor receptor (EGFR) siRNA, designated as ANG/PLGA/DOX/siRNA. This system could efficiently deliver DOX and siRNA into U87MG cells leading to significant cell inhibition, apoptosis and EGFR silencing in vitro. It demonstrated that this drug system was capable of penetrating the BBB in vivo, resulting in greater drug accumulation in the brain.
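The 40%/60% conclusion of the dexamethasone eyedrop entry above can be sanity-checked from its reported retina levels, since the control eye reflects systemic absorption only. This is a back-of-the-envelope reading, not a calculation given in the entry:

```latex
% Control eye receives drug only systemically; study eye systemically plus topically.
\frac{C_{\text{retina, control}}}{C_{\text{retina, study}}}
  = \frac{14\ \text{ng/g}}{33\ \text{ng/g}} \approx 0.42
\;\Rightarrow\; \text{roughly } 40\%\ \text{systemic and } 60\%\ \text{topical.}
```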
The animal study using the brain orthotopic U87MG glioma xenograft model indicated that the ANG-targeted co-delivery of DOX and EGFR siRNA resulted in not only the prolongation of the life span of the glioma-bearing mice but also an obvious cell apoptosis in glioma tissue.", "what Uses drug ?", "Doxorubicin", 358.0, 369.0], ["Abstract Galantamine hydrobromide, a promising acetylcholinesterase inhibitor, is reported to be associated with cholinergic side effects. Its poor brain penetration results in lower bioavailability to the target site. With an aim to overcome these limitations, solid\u2013lipid nanoparticulate formulation of galantamine hydrobromide was developed employing biodegradable and biocompatible components. The selected galantamine hydrobromide-loaded solid\u2013lipid nanoparticles offered nanocolloids with a size lower than 100 nm and a maximum drug entrapment of 83.42 \u00b1 0.63%. In vitro drug release from these spherical drug-loaded nanoparticles was observed to be greater than 90% for a period of 24 h in a controlled manner. In vivo evaluations demonstrated significant memory restoration capability in cognitive deficit rats in comparison with the plain drug. The developed carriers offered approximately twice the bioavailability of the plain drug. Hence, the galantamine hydrobromide-loaded solid\u2013lipid nanoparticles can be a promising vehicle for safe and effective delivery especially in diseases like Alzheimer\u2019s.", "what Uses drug ?", "Galantamine", 9.0, 20.0], ["Effectiveness of CNS-acting drugs depends on the localization, targeting, and capacity to be transported through the blood\u2013brain barrier (BBB) which can be achieved by designing brain-targeting delivery vectors. Hence, the objective of this study was to screen the formulation and process variables affecting the performance of sertraline (Ser-HCl)-loaded pegylated and glycosylated liposomes. The prepared vectors were characterized for Ser-HCl entrapment, size, surface charge, release behavior, and in vitro transport through the BBB. Furthermore, the compatibility among liposomal components was assessed using SEM, FTIR, and DSC analysis. Through a thorough screening study, enhancement of Ser-HCl entrapment, nanosized liposomes with low skewness, maximized stability, and controlled drug leakage were attained. The solid-state characterization revealed remarkable interaction between Ser-HCl and the charging agent to determine drug entrapment and leakage. Moreover, results of liposomal transport through mouse brain endothelial polyoma cells demonstrated greater capacity of the proposed glycosylated liposomes to target the cerebellum due to its higher density of GLUT1 and higher glucose utilization. This transport capacity was confirmed by the inhibiting action of both cytochalasin B and phenobarbital. Using a C6 glioma cell model, flow cytometry, time-lapse live cell imaging, and in vivo NIR fluorescence imaging demonstrated that optimized glycosylated liposomes can be transported through the BBB by classical endocytosis, as well as by specific transcytosis.
In conclusion, the current study proposed a thorough screening of important formulation and process variables affecting brain-targeting liposomes for further scale-up processes.", "what Uses drug ?", "Sertraline", 328.0, 338.0], ["Purpose: To develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel. Methods: Chitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231). Results: These studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties, a fourfold increase in cellular uptake, and a 1000-fold reduction in the IC50 of PTX. Conclusion: These advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.", "what Uses drug ?", "Paclitaxel", 176.0, 186.0], ["Poor delivery of insoluble anticancer drugs has so far precluded their clinical application. In this study, we developed a tumor-targeting delivery system for insoluble drug (paclitaxel, PTX) by PEGylated O-carboxymethyl-chitosan (CMC) nanoparticles grafted with cyclic Arg-Gly-Asp (RGD) peptide. To improve the loading efficiency (LE), we combined an O/W/O double emulsion method with a temperature-programmed solidification technique and controlled PTX within the matrix network as in situ nanocrystallite form. Furthermore, these CMC nanoparticles were PEGylated, which could reduce recognition by the reticuloendothelial system (RES) and prolong the circulation time in blood. In addition, further graft of cyclic RGD peptide at the terminal of PEG chain endowed these nanoparticles with higher affinity to in vitro Lewis lung carcinoma (LLC) cells and in vivo tumor tissue. These outstanding properties enabled the as-designed nanodevice to exhibit a greater tumor growth inhibition effect and much lower side effects than the commercial formulation Taxol.", "what Uses drug ?", "Paclitaxel", 175.0, 185.0], ["Anilido-oxazoline-ligated rare-earth metal complexes show strong fluorescence emissions and good catalytic performance on isoprene polymerization with high
In this work, we present a non-fullerene electron acceptor bearing a fused five-heterocyclic ring containing selenium atoms, denoted as IDSe-T-IC, for fullerene-free polymer solar cells (PSCs).
", "what Mobility ?", "Electron", 44.0, 52.0], ["Low bandgap n-type organic semiconductor (n-OS) ITIC has attracted great attention for the application as an acceptor with medium bandgap p-type conjugated polymer as donor in nonfullerene polymer solar cells (PSCs) because of its attractive photovoltaic performance. Here we report a modification on the molecular structure of ITIC by side-chain isomerization with meta-alkyl-phenyl substitution, m-ITIC, to further improve its photovoltaic performance. In a comparison with its isomeric counterpart ITIC with para-alkyl-phenyl substitution, m-ITIC shows a higher film absorption coefficient, a larger crystalline coherence, and higher electron mobility. These inherent advantages of m-ITIC resulted in a higher power conversion efficiency (PCE) of 11.77% for the nonfullerene PSCs with m-ITIC as acceptor and a medium bandgap polymer J61 as donor, which is significantly improved over that (10.57%) of the corresponding devices with ITIC as acceptor. To the best of our knowledge, the PCE of 11.77% is one of the highest values reported in the literature to date for nonfullerene PSCs. More importantly, the m-ITIC-based device shows less thickness-dependent photovoltaic behavior than ITIC-based devices in the active-layer thickness range of 80-360 nm, which is beneficial for large area device fabrication. These results indicate that m-ITIC is a promising low bandgap n-OS for the application as an acceptor in PSCs, and the side-chain isomerization could be an easy and convenient way to further improve the photovoltaic performance of the donor and acceptor materials for high efficiency PSCs.", "what Mobility ?", "Electron", 637.0, 645.0], ["We have developed a kind of novel fused-ring small molecular acceptor, whose planar conformation can be locked by intramolecular noncovalent interaction. The formation of planar supramolecular fused-ring structure by conformation locking can effectively broaden its absorption spectrum, enhance the electron mobility, and reduce the nonradiative energy loss. Polymer solar cells (PSCs) based on this acceptor afforded a power conversion efficiency (PCE) of 9.6%. In contrast, PSCs based on similar acceptor, which cannot form a flat conformation, only gave a PCE of 2.3%. Such design strategy, which can make the synthesis of small molecular acceptor much easier, will be promising in developing a new acceptor for high efficiency polymer solar cells.", "what Mobility ?", "Electron", 299.0, 307.0], ["A side\u2010chain conjugation strategy in the design of nonfullerene electron acceptors is proposed, with the design and synthesis of a side\u2010chain\u2010conjugated acceptor (ITIC2) based on a 4,8\u2010bis(5\u2010(2\u2010ethylhexyl)thiophen\u20102\u2010yl)benzo[1,2\u2010b:4,5\u2010b\u2032]di(cyclopenta\u2010dithiophene) electron\u2010donating core and 1,1\u2010dicyanomethylene\u20103\u2010indanone electron\u2010withdrawing end groups. ITIC2 with the conjugated side chains exhibits an absorption peak at 714 nm, which redshifts 12 nm relative to ITIC1. The absorption extinction coefficient of ITIC2 is 2.7 \u00d7 105m\u22121 cm\u22121, higher than that of ITIC1 (1.5 \u00d7 105m\u22121 cm\u22121). 
ITIC2 exhibits slightly higher highest occupied molecular orbital (HOMO) (\u22125.43 eV) and lowest unoccupied molecular orbital (LUMO) (\u22123.80 eV) energy levels relative to ITIC1 (HOMO: \u22125.48 eV; LUMO: \u22123.84 eV), and higher electron mobility (1.3 \u00d7 10\u207b\u00b3 cm\u00b2 V\u207b\u00b9 s\u207b\u00b9) than that of ITIC1 (9.6 \u00d7 10\u207b\u2074 cm\u00b2 V\u207b\u00b9 s\u207b\u00b9). The power conversion efficiency of ITIC2\u2010based organic solar cells is 11.0%, much higher than that of ITIC1\u2010based control devices (8.54%). Our results demonstrate that side\u2010chain conjugation can tune energy levels, enhance absorption and electron mobility, and finally enhance photovoltaic performance of nonfullerene acceptors.", "what Mobility ?", "Electron", 64.0, 72.0], ["A novel non-fullerene acceptor, possessing a very low bandgap of 1.34 eV and a high-lying lowest unoccupied molecular orbital level of -3.95 eV, is designed and synthesized by introducing electron-donating alkoxy groups to the backbone of a conjugated small molecule. Impressive power conversion efficiencies of 8.4% and 10.7% are obtained for fabricated single and tandem polymer solar cells.", "what Mobility ?", "Electron", 188.0, 196.0], ["With an indenoindene core, a new thieno[3,4\u2010b]thiophene\u2010based small\u2010molecule electron acceptor, 2,2\u2032\u2010((2Z,2\u2032Z)\u2010((6,6\u2032\u2010(5,5,10,10\u2010tetrakis(2\u2010ethylhexyl)\u20105,10\u2010dihydroindeno[2,1\u2010a]indene\u20102,7\u2010diyl)bis(2\u2010octylthieno[3,4\u2010b]thiophene\u20106,4\u2010diyl))bis(methanylylidene))bis(5,6\u2010difluoro\u20103\u2010oxo\u20102,3\u2010dihydro\u20101H\u2010indene\u20102,1\u2010diylidene))dimalononitrile (NITI), is successfully designed and synthesized. Compared with 12\u2010\u03c0\u2010electron fluorene, a carbon\u2010bridged biphenylene with an axial symmetry, indenoindene, a carbon\u2010bridged E\u2010stilbene with a centrosymmetry, shows elongated \u03c0\u2010conjugation with 14 \u03c0\u2010electrons and one more sp\u00b3 carbon bridge, which may increase the tunability of electronic structure and film morphology. Despite its twisted molecular framework, NITI shows a low optical bandgap of 1.49 eV in thin film and a high molar extinction coefficient of 1.90 \u00d7 10\u2075 M\u207b\u00b9 cm\u207b\u00b9 in solution. By matching NITI with a large\u2010bandgap polymer donor, an extraordinary power conversion efficiency of 12.74% is achieved, which is among the best performances so far reported for fullerene\u2010free organic photovoltaics and is inspiring for the design of new electron acceptors.", "what Mobility ?", "Electron", 77.0, 85.0], ["Ladder-type dithienocyclopentacarbazole (DTCC) cores, which possess highly extended \u03c0-conjugated backbones and versatile modular structures for derivatization, were widely used to develop high-performance p-type polymeric semiconductors. However, an n-type DTCC-based organic semiconductor has not been reported to date. In this study, the first DTCC-based n-type organic semiconductor (DTCC\u2013IC) with a well-defined A\u2013D\u2013A backbone was designed, synthesized, and characterized, in which a DTCC derivative substituted by four p-octyloxyphenyl groups was used as the electron-donating core and two strongly electron-withdrawing 3-(dicyanomethylene)indan-1-one moieties were used as the terminal acceptors.
It was found that DTCC\u2013IC has strong light-capturing ability in the range of 500\u2013720 nm and exhibits an impressively high molar absorption coefficient of 2.24 \u00d7 10\u2075 M\u207b\u00b9 cm\u207b\u00b9 at 669 nm owing to effective intramolecular charge transfer and a strong D\u2013A effect. Cyclic voltammetry measurements indicated that the HOMO and LUMO energy levels of DTCC\u2013IC are \u22125.50 and \u22123.87 eV, respectively. More importantly, a high electron mobility of 2.17 \u00d7 10\u207b\u00b3 cm\u00b2 V\u207b\u00b9 s\u207b\u00b9 was determined by the space-charge-limited current method; this electron mobility can be comparable to that of fullerene derivative acceptors (\u03bce \u223c 10\u207b\u00b3 cm\u00b2 V\u207b\u00b9 s\u207b\u00b9). To investigate its application potential in non-fullerene solar cells, we fabricated organic solar cells (OSCs) by blending a DTCC\u2013IC acceptor with a PTB7-Th donor under various conditions. The results suggest that the optimized device exhibits a maximum power conversion efficiency (PCE) of up to 6% and a rational high VOC of 0.95 V. These findings demonstrate that the ladder-type DTCC core is a promising building block for the development of high-mobility n-type organic semiconductors for OSCs.", "what Mobility ?", "Electron", 564.0, 572.0], ["We develop an efficient fused-ring electron acceptor (ITIC-Th) based on indacenodithieno[3,2-b]thiophene core and thienyl side-chains for organic solar cells (OSCs). Relative to its counterpart with phenyl side-chains (ITIC), ITIC-Th shows lower energy levels (ITIC-Th: HOMO = -5.66 eV, LUMO = -3.93 eV; ITIC: HOMO = -5.48 eV, LUMO = -3.83 eV) due to the \u03c3-inductive effect of thienyl side-chains, which can match with high-performance narrow-band-gap polymer donors and wide-band-gap polymer donors. ITIC-Th has higher electron mobility (6.1 \u00d7 10\u207b\u2074 cm\u00b2 V\u207b\u00b9 s\u207b\u00b9) than ITIC (2.6 \u00d7 10\u207b\u2074 cm\u00b2 V\u207b\u00b9 s\u207b\u00b9) due to enhanced intermolecular interaction induced by sulfur-sulfur interaction. We fabricate OSCs by blending ITIC-Th acceptor with two different low-band-gap and wide-band-gap polymer donors. In one case, a power conversion efficiency of 9.6% was observed, which rivals some of the highest efficiencies for single-junction OSCs based on fullerene acceptors.", "what Mobility ?", "Electron", 35.0, 43.0], ["A series of halogenated conjugated molecules, containing F, Cl, Br and I, were easily prepared via Knoevenagel condensation and applied in field-effect transistors and organic solar cells. Halogenated conjugated materials were found to possess deep frontier energy levels and high crystallinity compared to their non-halogenated analogues, which is due to the strong electronegativity and heavy atom effect of halogens. As a result, halogenated semiconductors provide high electron mobilities up to 1.3 cm\u00b2 V\u207b\u00b9 s\u207b\u00b9 in transistors and high efficiencies over 9% in non-fullerene solar cells.", "what Mobility ?", "Electron", 473.0, 481.0], ["Naphtho[1,2\u2010b:5,6\u2010b\u2032]dithiophene is extended to a fused octacyclic building block, which is end capped by strong electron\u2010withdrawing 2\u2010(5,6\u2010difluoro\u20103\u2010oxo\u20102,3\u2010dihydro\u20101H\u2010inden\u20101\u2010ylidene)malononitrile to yield a fused\u2010ring electron acceptor (IOIC2) for organic solar cells (OSCs).
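The DTCC-IC entry above determines electron mobility by the space-charge-limited current (SCLC) method. For orientation, SCLC analysis conventionally rests on the Mott-Gurney law; the textbook relation below is supplied as a reference, not quoted from the entry:

```latex
% Mott-Gurney law for trap-free space-charge-limited current through a film
% of thickness L; fitting J-V data in this regime yields the mobility mu.
J = \frac{9}{8}\,\varepsilon_0\varepsilon_r\,\mu\,\frac{V^2}{L^3}
\quad\Longrightarrow\quad
\mu = \frac{8\,J\,L^3}{9\,\varepsilon_0\varepsilon_r\,V^2}
```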
Relative to naphthalene\u2010based IHIC2, naphthodithiophene\u2010based IOIC2 with a larger \u03c0\u2010conjugation and a stronger electron\u2010donating core shows a higher lowest unoccupied molecular orbital energy level (IOIC2: \u22123.78 eV vs IHIC2: \u22123.86 eV), broader absorption with a smaller optical bandgap (IOIC2: 1.55 eV vs IHIC2: 1.66 eV), and a higher electron mobility (IOIC2: 1.0 \u00d7 10\u207b\u00b3 cm\u00b2 V\u207b\u00b9 s\u207b\u00b9 vs IHIC2: 5.0 \u00d7 10\u207b\u2074 cm\u00b2 V\u207b\u00b9 s\u207b\u00b9). Thus, IOIC2\u2010based OSCs show higher values in open\u2010circuit voltage, short\u2010circuit current density, fill factor, and thereby much higher power conversion efficiency (PCE) values than those of the IHIC2\u2010based counterpart. In particular, as\u2010cast OSCs based on FTAZ:IOIC2 yield PCEs of up to 11.2%, higher than that of the control devices based on FTAZ:IHIC2 (7.45%). Furthermore, by using 0.2% 1,8\u2010diiodooctane as the processing additive, a PCE of 12.3% is achieved from the FTAZ:IOIC2\u2010based devices, higher than that of the FTAZ:IHIC2\u2010based devices (7.31%). These results indicate that incorporating extended conjugation into the electron\u2010donating fused\u2010ring units in nonfullerene acceptors is a promising strategy for designing high\u2010performance electron acceptors.", "what Mobility ?", "Electron", 113.0, 121.0], ["A fused hexacyclic electron acceptor, IHIC, based on strong electron\u2010donating group dithienocyclopentathieno[3,2\u2010b]thiophene flanked by strong electron\u2010withdrawing group 1,1\u2010dicyanomethylene\u20103\u2010indanone, is designed, synthesized, and applied in semitransparent organic solar cells (ST\u2010OSCs). IHIC exhibits strong near\u2010infrared absorption with extinction coefficients of up to 1.6 \u00d7 10\u2075 M\u207b\u00b9 cm\u207b\u00b9, a narrow optical bandgap of 1.38 eV, and a high electron mobility of 2.4 \u00d7 10\u207b\u00b3 cm\u00b2 V\u207b\u00b9 s\u207b\u00b9. The ST\u2010OSCs based on blends of a narrow\u2010bandgap polymer donor PTB7\u2010Th and narrow\u2010bandgap IHIC acceptor exhibit a champion power conversion efficiency of 9.77% with an average visible transmittance of 36% and excellent device stability; this efficiency is much higher than that of any single\u2010junction and tandem ST\u2010OSCs reported in the literature.", "what Mobility ?", "Electron", 19.0, 27.0], ["Organic solar cells (OSCs) are a promising cost-effective alternative for utility of solar energy, and possess low-cost, light-weight, and flexibility advantages. [ 1\u20137 ] Much attention has been focused on the development of OSCs which have seen a dramatic rise in efficiency over the last decade, and the encouraging power conversion efficiency (PCE) over 9% has been achieved from bulk heterojunction (BHJ) OSCs. [ 8 ] With regard to photoactive materials, fullerenes and their derivatives, such as [6,6]-phenyl C61 butyric acid methyl ester (PC61BM), have been the dominant electron-acceptor materials in BHJ OSCs, owing to their high electron mobility, large electron affinity and isotropy of charge transport. [ 9 ] However, fullerenes have a few disadvantages, such as restricted electronic tuning and weak absorption in the visible region. Furthermore, in a typical BHJ system of poly(3-hexylthiophene) (P3HT):PC61BM, mismatching energy levels between donor and acceptor leads to energy loss and low open-circuit voltages (VOC). To solve these problems, novel electron acceptor materials with strong and broad absorption spectra and appropriate energy levels are necessary for OSCs.
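Several photovoltaic entries above report PCE together with open-circuit voltage, short-circuit current density, and fill factor; the quantities are linked by the standard definition below. The J_SC and FF values are assumed for illustration; only the V_OC of 0.95 V appears in an entry above:

```latex
% Standard PCE definition under input irradiance P_in (typically 100 mW/cm^2, AM1.5G).
\mathrm{PCE} = \frac{V_{OC}\,J_{SC}\,\mathrm{FF}}{P_{in}}
  \approx \frac{0.95\ \mathrm{V}\times 18\ \mathrm{mA\,cm^{-2}}\times 0.70}{100\ \mathrm{mW\,cm^{-2}}}
  \approx 12\%
```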
Recently, non-fullerene small molecule acceptors have been developed. [ 10 , 11 ] However, only rare reports of devices based on solution-processed non-fullerene small molecule acceptors have shown PCEs approaching or exceeding 1.5%, [ 12\u201319 ] and only one paper reported PCEs over 2%. [ 16 ]", "what Mobility type ?", "Electron", 577.0, 585.0], ["A novel small molecule, FBR, bearing 3-ethylrhodanine flanking groups was synthesized as a nonfullerene electron acceptor for solution-processed bulk heterojunction organic photovoltaics (OPV). A straightforward synthesis route was employed, offering the potential for large-scale preparation of this material. Inverted OPV devices employing poly(3-hexylthiophene) (P3HT) as the donor polymer and FBR as the acceptor gave power conversion efficiencies (PCE) up to 4.1%. Transient and steady state optical spectroscopies indicated efficient, ultrafast charge generation and efficient photocurrent generation from both donor and acceptor. Ultrafast transient absorption spectroscopy was used to investigate polaron generation efficiency as well as recombination dynamics. It was determined that the P3HT:FBR blend is highly intermixed, leading to increased charge generation relative to comparative devices with P3HT:PC60BM, but also faster recombination due to a nonideal morphology in which, in contrast to P3HT:PC60BM devices, the acceptor does not aggregate enough to create appropriate percolation pathways that prevent fast nongeminate recombination. Despite this nonoptimal morphology, the P3HT:FBR devices exhibit better performance than P3HT:PC60BM devices, used as a control, demonstrating that this acceptor shows great promise for further optimization.", "what Mobility type ?", "Electron", 104.0, 112.0], ["Molecular acceptors are promising alternatives to fullerenes (e.g., PC61/71BM) in the fabrication of high-efficiency bulk-heterojunction (BHJ) solar cells. While solution-processed polymer\u2013fullerene BHJ devices have recently met the 10% efficiency threshold, molecular acceptors have yet to prove comparably efficient with polymer donors. At this point in time, it is important to forge a better understanding of the design parameters that directly impact small-molecule (SM) acceptor performance in BHJ solar cells. In this report, we show that 2-(benzo[c][1,2,5]thiadiazol-4-ylmethylene)malononitrile (BM)-terminated SM acceptors can achieve efficiencies as high as 5.3% in BHJ solar cells with the polymer donor PCE10. Through systematic device optimization and characterization studies, we find that the nonfullerene analogues (FBM, CBM, and CDTBM) all perform comparably well, independent of the molecular structure and electronics of the \u03c0-bridge that links the two electron-deficient BM end groups. With estimated...", "what Mobility type ?", "Electron", 972.0, 980.0], ["There has been a growing interest in the design and synthesis of non-fullerene acceptors for organic solar cells that may overcome the drawbacks of the traditional fullerene-based acceptors. Herein, two novel push-pull (acceptor-donor-acceptor) type small-molecule acceptors, that is, ITDI and CDTDI, with indenothiophene and cyclopentadithiophene as the core units and 2-(3-oxo-2,3-dihydroinden-1-ylidene)malononitrile (INCN) as the end-capping units, are designed and synthesized for non-fullerene polymer solar cells (PSCs).
After device optimization, PSCs based on ITDI exhibit good device performance with a power conversion efficiency (PCE) as high as 8.00%, outperforming the CDTDI-based counterparts fabricated under identical conditions (2.75% PCE). We further discuss the performance of these non-fullerene PSCs by correlating the energy level and carrier mobility with the core of non-fullerene acceptors. These results demonstrate that indenothiophene is a promising electron-donating core for high-performance non-fullerene small-molecule acceptors.", "what Mobility type ?", "Electron", 979.0, 987.0], ["Hybrid cylindrical roller thrust bearing washers of type 81212 were manufactured by tailored forming. An AISI 1022M base material, featuring a sufficient strength for structural loads, was cladded with the bearing steel AISI 52100 by plasma transferred arc welding (PTA). Though AISI 52100 is generally regarded as non-weldable, it could be applied as a cladding material by adjusting PTA parameters. The cladded parts were investigated after each individual process step and subsequently tested under rolling contact load. Welding defects that could not be completely eliminated by the subsequent hot forming were characterized by means of scanning acoustic microscopy and micrographs. Below the surface, pores with a typical size of ten \u00b5m were found to a depth of about 0.45 mm. In the material transition zone and between individual weld seams, larger voids were observed. Grinding of the surface after heat treatment caused compressive residual stresses near the surface with a relatively small depth. Fatigue tests were carried out on an FE8 test rig. Eighty-two percent of the calculated rating life for conventional bearings was achieved. A high failure slope of the Weibull regression was determined. A relationship between the weld defects and the fatigue behavior is likely.", "what has material ?", "material", 121.0, 129.0], ["A nonfullerene electron acceptor (IEIC) based on indaceno[1,2-
We introduce a solution-processed copper tin sulfide (CTS) thin film to realize high-performance thin-film transistors (TFTs) by optimizing the CTS precursor solution concentration.
", "what Material ?", "Copper tin sulfide (CTS) thin film", NaN, NaN], ["Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.", "what Material ?", "communal knowledge", 102.0, 120.0], ["The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.", "what Material ?", "RDF data", 18.0, 26.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. 
In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u207b\u00b2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "what Material ?", "light-emitting diodes", 330.0, 351.0], ["Most question answering (QA) systems over Linked Data, i.e. Knowledge Graphs, approach the question answering task as a conversion from a natural language question to its corresponding SPARQL query. A common approach is to use query templates to generate SPARQL queries with slots that need to be filled. Using templates instead of running an extensive NLP pipeline or end-to-end model shifts the QA problem into a classification task, where the system needs to match the input question to the appropriate template. This paper presents an approach to automatically learn and classify natural language questions into corresponding templates using recursive neural networks. Our model was trained on 5000 questions and their respective SPARQL queries from the preexisting LC-QuAD dataset grounded in DBpedia, spanning 5042 entities and 615 predicates. The resulting model was evaluated using the FAIR GERBIL QA framework resulting in 0.419 macro f-measure on LC-QuAD and 0.417 macro f-measure on QALD-7.", "what Material ?", "LC-QuAD dataset", 770.0, 785.0], ["Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages. MAG, on the other hand, achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages.", "what Material ?", "7 languages", 604.0, 615.0], ["The integration of different datasets in the Linked Data Cloud is a key aspect of the success of the Web of Data. To tackle this problem, most existing solutions have been supported by the task of entity resolution.
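The template-based QA entry above reduces question answering over knowledge graphs to template classification plus slot filling. The sketch below illustrates that pipeline only; the templates, keyword classifier, and fixed entity linker are simplistic stand-ins for the paper's recursive-neural-network model:

```python
# Minimal sketch of template-based QA over a knowledge graph:
# 1) classify the question into a SPARQL template, 2) fill its slots.
TEMPLATES = {
    "capital": "SELECT ?x WHERE {{ <{entity}> dbo:capital ?x }}",
    "population": "SELECT ?x WHERE {{ <{entity}> dbo:populationTotal ?x }}",
}

def classify(question: str) -> str:
    # Stand-in for the learned template classifier described in the entry.
    return "capital" if "capital" in question.lower() else "population"

def link_entity(question: str) -> str:
    # Stand-in entity linker; a real system would disambiguate against DBpedia.
    return "http://dbpedia.org/resource/Germany"

def to_sparql(question: str) -> str:
    template = TEMPLATES[classify(question)]
    return template.format(entity=link_entity(question))

print(to_sparql("What is the capital of Germany?"))
```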
However, many challenges still prevail, especially when considering different types, structures and vocabularies used in the Web. Another common problem is that data are usually incomplete, inconsistent, and contain outliers. To overcome these limitations, some works have applied machine learning algorithms since they are typically robust to both noise and data inconsistencies and are able to efficiently utilize nondeterministic dependencies in the data. In this paper we propose an approach based on a relational learning algorithm that addresses the problem by a statistical approximation method. Modeling the problem as a relational machine learning task allows exploiting contextual information that might be too distant in the relational graph. The joint application of relationship patterns between entities and evidence of similarity between their descriptions can improve the effectiveness of results. Furthermore, it is based on a sparse structure that scales well to large datasets. We present initial experiments based on BTC2012 datasets.", "what Material ?", "Linked Data Cloud", 45.0, 62.0], ["Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl\u2013, Br\u2013, I\u2013] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10\u201315 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 \u03bcJ cm\u207b\u00b2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.", "what Material ?", "780 nm, FAPbI3 NCs", 1395.0, 1413.0], ["Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures.
Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information \u2013 meaningful secondary or derived data \u2013 about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearers of information are the commonly known \u201c(elaborated) data products,\u201d for instance maps. In such form, meaning is generally implicit, e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation \u2013 as machine-readable data and their machine-readable meaning \u2013 is not common practice among environmental research infrastructures. For a use case in aerosol science, we elucidate these problems and present a Jupyter-based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.", "what Material ?", "Research infrastructures and research communities", 609.0, 658.0], ["The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process, the project leader has to organize participants, tasks and building data. For this purpose, modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project \u201cRelation Based Process Modelling of Co-operative Building Planning\u201d we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 \u201cNetwork-based Co-operative Planning Processes in Structural Engineering\u201d promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.", "what Material ?", "our relational process model", 929.0, 957.0], ["Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering and dialogue. Rooted in a long tradition in knowledge representation, all popular KBs only store positive information, but abstain from taking any stance towards statements not contained in them. In this paper, we make the case for explicitly stating interesting statements which are not true. Negative statements would be important to overcome current limitations of question answering, yet due to their potential abundance, any effort towards compiling them needs a tight coupling with ranking. We introduce two approaches towards automatically compiling negative statements.
(i) In peer-based statistical inferences, we compare entities with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In pattern-based query log extraction, we use a pattern-based approach for harvesting search engine query logs. Experimental results show that both approaches hold promising and complementary potential. Along with this paper, we publish the first datasets on interesting negative information, containing over 1.4M statements for 130K popular Wikidata entities.", "what Material ?", "Knowledge bases (KBs)", NaN, NaN], ["ABSTRACT OBJECTIVE: To assess and compare the anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high-fat-fed non-diabetic rats. METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan, in which 27 adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving a hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as a single morning dose by the oral route for four weeks. CRP was estimated at zero, 4th and 8th week. RESULTS: There was a significant increase in the level of CRP after giving the high-lipid diet from mean\u00b1SD of 2.59\u00b10.28mg/L, 2.63\u00b10.32mg/L and 2.67\u00b10.23mg/L at 0 week to 3.55\u00b10.44mg/L, 3.59\u00b10.34mg/L and 3.6\u00b10.32mg/L at 4th week in groups A, B and C respectively. Multiple comparisons by ANOVA revealed a significant difference between groups at 8th week only. Post hoc analysis disclosed that the CRP level was significantly lower in the pioglitazone-treated group, having mean\u00b1SD of 2.93\u00b10.33mg/L compared to the control group\u2019s 4.42\u00b10.30mg/L and the gemfibrozil group\u2019s 4.28\u00b10.39mg/L. The p-value in each case was <0.001, while the difference between the control and gemfibrozil groups was not statistically significant. CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia-associated inflammation, as evidenced by a decreased CRP level, while gemfibrozil is not effective. KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).", "what Material ?", "Group A (control)", NaN, NaN], ["Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points.
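Approach (i) of the negative-statement entry above derives candidate negatives by comparing an entity with its peers. A toy rendering of that inference, with invented peer sets and statements; the feature-based ranking of the paper is reduced here to a simple peer-frequency count:

```python
from collections import Counter

def peer_based_negatives(entity_stmts, peer_stmts_list, top_k=3):
    """Candidate negative statements: facts frequent among peer entities
    but absent for the target entity, ranked by peer frequency."""
    counts = Counter(s for stmts in peer_stmts_list for s in stmts)
    candidates = [(s, c) for s, c in counts.items() if s not in entity_stmts]
    return sorted(candidates, key=lambda sc: -sc[1])[:top_k]

# Invented toy example: physicists as peers of a target person.
target = {("won", "Royal Society Fellowship")}
peers = [
    {("won", "Nobel Prize in Physics"), ("member_of", "Royal Society")},
    {("won", "Nobel Prize in Physics"), ("member_of", "Academie des sciences")},
    {("member_of", "Royal Society")},
]
print(peer_based_negatives(target, peers))
# Both 2-count peer facts are absent for the target and rank first.
```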
By using two lexicons constructed from publicly available sources, we establish new state-of-the-art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.", "what Material ?", "OntoNotes 5.0 dataset", 764.0, 785.0], ["While Wikipedia exists in 287 languages, its content is unevenly distributed among them. In this work, we investigate the generation of open-domain Wikipedia summaries in underserved languages using structured data from Wikidata. To this end, we propose a neural network architecture equipped with copy actions that learns to generate single-sentence and comprehensible textual summaries from Wikidata triples. We demonstrate the effectiveness of the proposed approach by evaluating it against a set of baselines on two languages of different natures: Arabic, a morphologically rich language with a larger vocabulary than English, and Esperanto, a constructed language known for its easy acquisition.", "what Material ?", "a neural network architecture", 254.0, 283.0], ["Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently have researchers and practitioners addressed the question of the detailed composition of the groups not reached, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and \u201cfeeling left out\u201d can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that, on the one hand, material exclusion factors such as available infrastructure or financial means, as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.", "what Material ?", "underserved audiences", 72.0, 93.0], ["Statistical predictions are useful to predict events based on statistical models. The data are useful to determine outcomes based on inputs and calculations. The Crow-AMSAA method will be explored to predict new cases of Coronavirus 19 (COVID19). This method is currently used within engineering reliability design to predict failures and evaluate reliability growth. The author intends to use this model to predict the COVID19 cases by using daily reported data from Michigan, New York City, U.S.A and other countries. The piecewise Crow-AMSAA (CA) model fits the data very well for the infected cases and deaths at different phases during the start of the COVID19 outbreak. The slope \u03b2 of the Crow-AMSAA line indicates the speed of the transmission or death rate. 
The traditional epidemiological model is based on the exponential distribution, but the Crow-AMSAA is a Non-Homogeneous Poisson Process (NHPP), which can be used to model complex problems like COVID19, especially when various mitigation strategies such as social distancing, isolation and lockdowns were implemented by governments at different places.
This paper uses the piecewise Crow-AMSAA method to fit the COVID19 confirmed cases in Michigan, New York City, U.S.A and other countries.
From the Crow-AMSAA analysis above, at the beginning of COVID19 the infectious cases did not follow the Crow-AMSAA prediction line, but once the outbreak started, the confirmed cases did follow the CA line; the slope \u03b2 value indicates the pace of the transmission rate or death rate in each case. The piecewise Crow-AMSAA describes the different phases of spreading. This indicates that the speed of the transmission rate could change according to government interference, social distancing orders or other factors. Comparing the piecewise CA \u03b2 slopes in China (\u03b2: 1.683, 0.834, 0.092) and in the U.S.A (\u03b2: 5.138, 10.48, 5.259), the infection rate in the U.S.A is much higher than in China. From the piecewise CA plots and summary table 1 of the CA slope \u03b2s, the COVID19 spreading behaves differently at different places and in different countries, where governments implemented different policies to slow down the spreading.
From the analysis of data and conclusions from confirmed cases and deaths of COVID19 in Michigan, New York City, U.S.A, China and other countries, the piecewise Crow-AMSAA method can be used to model the spreading of COVID19.
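The Crow-AMSAA slope \u03b2 referred to in the record above is, on log-log axes, the slope of the cumulative-event line N(t) = \u03bb t^\u03b2. The following is a minimal sketch of that piecewise fit, not the paper's own code: the function name, the phase boundaries and the synthetic counts are all hypothetical, and the \u03b2 estimate here comes from a simple least-squares fit in log-log space.

```python
import numpy as np

def fit_crow_amsaa(days, cum_events):
    """Fit the Crow-AMSAA (NHPP) model N(t) = lam * t**beta by
    least squares on log-log axes; the fitted slope is beta."""
    beta, log_lam = np.polyfit(np.log(days), np.log(cum_events), 1)
    return beta, np.exp(log_lam)

# Hypothetical cumulative confirmed cases over 30 days.
days = np.arange(1, 31)
cases = np.cumsum(np.random.poisson(50, size=30))

# Piecewise use: fit each phase separately (boundaries hypothetical,
# e.g. placed where an intervention such as a lockdown took effect)
# and compare the beta slopes across phases.
for phase in (slice(0, 10), slice(10, 20), slice(20, 30)):
    beta, lam = fit_crow_amsaa(days[phase], cases[phase])
    print(f"days {days[phase][0]}-{days[phase][-1]}: beta = {beta:.3f}")
```

A larger \u03b2 in one phase than another indicates a faster-growing cumulative count in that phase, which is how the abstract compares transmission speed across countries and interventions.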
Silica-supported and unsupported PdAu single-atom alloys (SAAs) were investigated for the selective hydrogenation of 1-hexyne to hexenes under mild conditions.
", "what substrate ?", "1-hexyne", 120.0, 128.0], ["A nonfullerene electron acceptor (IEIC) based on indaceno[1,2-
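Since several records above carry NaN character offsets, a quick consistency check of the span annotations is useful when consuming this file. Below is a minimal sketch assuming the document is saved as training_set.json (file name hypothetical); each record is [passage, question, answer, char_start, char_end]. Note that NaN is not strict JSON, but Python's json module accepts it on load by default.

```python
import json
import math

# Load the training set (file name hypothetical).
with open("training_set.json", encoding="utf-8") as f:
    records = json.load(f)["training_set"]

# Verify that each annotated span actually matches its answer string.
for passage, question, answer, start, end in records:
    if isinstance(start, float) and math.isnan(start):
        continue  # no character span annotated for this record
    span = passage[int(start):int(end)]
    if span != answer:
        print(f"{question!r}: expected {answer!r}, got {span!r}")
```

Records whose offsets no longer match their answer string (for example after text cleanup) would be flagged by this check rather than silently mis-training a span extractor.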