Review of "Cross-language sentiment analysis of European Twitter messages" -- interesting trends analysis but some more approach comparisons and tables for the data would be good. The authors present an interesting, important and relevant trend analysis of sentiment across languages in several locales during the Covid-19 pandemic, using geo-tagged European Twitter data and pre-trained cross-lingual embeddings within a neural model. The main contributions of the paper are: 1) the geo-tagged European Twitter dataset of 4.6 million tweets between Dec 2019 and Apr 2020, where some of these contain Covid19-specific keywords (it would be nice to see some percentage breakdown stats by language here), and 2) the important trends by country in terms of dip and recovery of sentiment over this period, including the overall trends across the board. In terms of sentiment modeling, they use a pre-trained neural model trained on the Sentiment140 dataset of Go et al, which is English-only, hence they freeze the weights to prevent over-adapting to English. They use cross-lingual MUSE embeddings to train this network to better generalize sentiment prediction to multi-lingual data for each country. There is no novelty in the modeling approach itself, which works for the purposes of trend analysis being performed. However, there is no comparison being presented of results of experimentation with different approaches, to corroborate or contrast their current trends results. E.g. a simple baseline approach could have been to run Average and Polarity sentiment values using a standard python text processing package such as `textblob` to obtain sentiment predictions. Other experiments could have been done to use different pre-trained embeddings such regular GloVE or Multi-lingual BERT to provide a comparison or take the average of the approaches to get a more generalized picture of sentiment trends. Also the authors should make it clear that the model has really been used in perhaps inference mode only to obtain the final sentiment predictions for each tweet. The treemap visualization gives a good overall picture of tweet stats, but a table providing the individual dataset statistics including keywords chosen by locale would be really helpful. Some notable trends are how the sentiment generally dips in all locales right around the time of lockdown announcements, and recovers relatively soon after, except for Germany where it dips at the same time as neighboring countries despite lockdown being started here much later, and UK, where sentiment stays low. It is also interesting to note the spikes and fluctuations in Covid19-related sentiment for Spain, and the overall trend for average sentiment by country for "all" tweets (including Covid19-related ones) tracking similarly over the time period considered. However, one trend it would be good to see some discussion on is how the histogram of keywords correlate with the sentiment for the keyworded tweets, as it appears interesting that heightened use of Covid-19 keywords in tweets tracks with more positive sentiment in most of the plots. Perhaps it would be helpful to have a separate discussion section for the overall trend analysis at the end. Overall the paper is well-motivated and in its current form provides perhaps the intended insights, and presents lot of scope to perform useful extended analyses with more meaningful comparisons for additional time spans and across countries where governmental and societal response were different than in Europe. 
The authors could also consider a more interpretable predictive sentiment model in the future, with hand-crafted features such as geotag metadata, unigram and bigram features, binary features for government measures, and COVID-19-specific keyword features by locale; this could provide more insight into why sentiment predictions trend a certain way during a specific period in a given locale.

Rating: 6: Marginally above acceptance threshold
Confidence: 3: The reviewer is fairly confident that the evaluation is correct
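As a concrete illustration of the `textblob` baseline suggested above, here is a minimal sketch (not from the paper; TextBlob reports polarity in [-1, 1], so the rescaling to the paper's [0, 1] sentiment scale is an assumption, and it handles English text only):

```python
# Hypothetical TextBlob baseline for comparison with the neural model.
# TextBlob's polarity lies in [-1, 1]; rescale to [0, 1] to match the
# paper's sentiment scale. English-only.
from textblob import TextBlob

def textblob_sentiment(text: str) -> float:
    """Return a sentiment score in [0, 1] (0 = negative, 1 = positive)."""
    polarity = TextBlob(text).sentiment.polarity  # in [-1, 1]
    return (polarity + 1.0) / 2.0

tweets = [
    "Staying home and feeling hopeful about the summer!",
    "Another lockdown extension. This is exhausting.",
]
scores = [textblob_sentiment(t) for t in tweets]
print(scores, sum(scores) / len(scores))  # per-tweet scores and average
```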
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2020} \usepackage{times} \usepackage{latexsym} \renewcommand{\UrlFont}{\ttfamily\small} \usepackage{graphicx} \usepackage{caption} \usepackage{url} \usepackage[utf8]{inputenc} \usepackage{microtype} \aclfinalcopy % \setlength\titlebox{10cm} \newcommand\BibTeX{B\textsc{ib}\TeX} \title{Cross-language sentiment analysis of European Twitter messages during the COVID-19 pandemic} \author{Anna Kruspe \\ German Aerospace Center (DLR) \\ Institute of Data Science \\ Jena, Germany \\ \texttt{anna.kruspe@dlr.de} \\\And Matthias H\"aberle \\ Technical University of Munich (TUM) \\ Signal Processing in Earth Observation (SiPEO) \\ Munich, Germany \\ \texttt{matthias.haeberle@tum.de} \\\AND Iona Kuhn \\ German Aerospace Center (DLR) \\ Institute of Data Science \\ Jena, Germany \\ \texttt{iona.kuhn@dlr.de} \\\And Xiao Xiang Zhu \\ German Aerospace Center (DLR) \\ Remote Sensing Technology Institute (IMF) \\ Oberpfaffenhofen, Germany \\ \texttt{xiaoxiang.zhu@dlr.de}} \date{} \begin{document} \maketitle \begin{abstract} Social media data can be a very salient source of information during crises. User-generated messages provide a window into people's minds during such times, allowing us insights about their moods and opinions. Due to the vast amounts of such messages, a large-scale analysis of population-wide developments becomes possible.\\ In this paper, we analyze Twitter messages (tweets) collected during the first months of the COVID-19 pandemic in Europe with regard to their sentiment. This is implemented with a neural network for sentiment analysis using multilingual sentence embeddings. We separate the results by country of origin, and correlate their temporal development with events in those countries. This allows us to study the effect of the situation on people's moods. We see, for example, that lockdown announcements correlate with a deterioration of mood in almost all surveyed countries, which recovers within a short time span. \end{abstract} \section{Introduction} The COVID-19 pandemic has led to a worldwide situation with a large number of unknowns. Many heretofore unseen events occurred within a short time span, and governments have had to make quick decisions for containing the spread of the disease. Due to the extreme novelty of the situation, the outcomes of many of these events have not been studied well so far. This is true with regards to their medical effect, as well as the effect on people's perceptions and moods.\\ First studies about the effect the pandemic has on people's lives are being published at the moment \citep[e.g.][]{uni_erfurt}, mainly focusing on surveys and polls. Naturally, such studies are limited to relatively small numbers of participants and focus on specific regions (e.g. countries).\\ In contrast, social media provides a large amount of user-created messages reflective of those users' moods and opinions. The issue with this data source is the difficulty of analysis - social media messages are extremely noisy and idiosyncratic, and the amount of incoming data is much too large to analyze manually. We therefore need automatic methods to extract meaningful insights.\\ In this paper, we describe a data set collected from Twitter during the months of December 2019 through April 2020, and present an automatic method for determining the sentiments contained in these messages. 
We then calculate the development of these sentiments over time, segment the results by country, and correlate them with events that took place in each country during those five months. \vspace{-5pt}
\section{Related work}
Since the outbreak of the pandemic and the introduction of lockdown measures, numerous studies have investigated the impact of the COVID-19 pandemic on Twitter. \citet{feng2020working} analyzed tweets from the US on a state and county level. First, they detected differences in temporal tweeting patterns and found that people tweeted more about COVID-19 during working hours as the pandemic progressed. Furthermore, they conducted a sentiment analysis over time, including an event-specific subtask that reported negative sentiment when the 1,000th death was announced and positive sentiment when lockdown measures were eased in the states. \citet{lyu2020sense} looked into US tweets that referred to the COVID-19 pandemic with the terms ``Chinese virus'' or ``Wuhan virus'' in order to characterize their users. They compared the results to users that did not make use of such controversial vocabulary. The findings suggest that there are noticeable differences in age group, geolocation, and followed politicians. \citet{chen2020eyes} focused on sentiment analysis and topic modelling of COVID-19 tweets containing the term ``Chinese virus'' (controversial) and contrasted them with tweets without such terms (non-controversial). Tweets containing ``Chinese virus'' discussed more topics related to China, whereas tweets without such words stressed how to defend against the virus. The sentiment analysis revealed negative sentiment for both groups, yet with a slightly more positive and analytical tone for the non-controversial tweets. Furthermore, the non-controversial tweets accented the future and what the group itself can do to fight the disease; in contrast, the controversial group aimed more at the past and concentrated on what others should do.
\begin{figure*}[htbp] \centerline{\includegraphics[width=.8\textwidth]{fig/treemap_countries.pdf}} \caption{Treemap of Twitter activity in Europe during the time period of December 2019 to April 2020.} \label{fig:treemap_countries} \end{figure*}
\section{Data collection}\label{sec:data_collection}
For our study, we used the freely available Twitter API to collect tweets from December 2019 to April 2020. The free API allows streaming of 1\% of the total tweet volume. To cover the largest possible area, we used a bounding box which includes the entire world. From this data, we sub-sampled 4,683,226 geo-referenced tweets in 60 languages located in Europe. To create the Europe sample, we downloaded a shapefile of the earth\footnote{\url{https://www.naturalearthdata.com/downloads/10m-cultural-vectors/10m-admin-0-countries/}}, and then filtered by country by performing a point-in-polygon test using the Python package \textit{Shapely}\footnote{\url{https://pypi.org/project/Shapely/}}. Figure \ref{fig:treemap_countries} depicts European Twitter activity in total numbers; most tweets come from the U.K. Tweets are not filtered by topic, i.e. many of them concern topics other than COVID-19. This is by design. As we will describe later, we also apply a simple keyword filter to detect tweets that are probably COVID-19-related for further analysis.
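The point-in-polygon filtering described above can be sketched as follows (a minimal illustration, not the code used for the paper; loading the shapefile with \textit{geopandas} and the Natural Earth column name \texttt{ADMIN} are assumptions):

\begin{verbatim}
# Sketch of the country filter: assign each geotagged tweet
# to a country via a point-in-polygon test.
import geopandas as gpd
from shapely.geometry import Point

countries = gpd.read_file("ne_10m_admin_0_countries.shp")

def country_of(lon, lat):
    point = Point(lon, lat)  # (longitude, latitude) order
    for _, row in countries.iterrows():
        if row.geometry.contains(point):
            return row["ADMIN"]  # country name column (assumed)
    return None

print(country_of(13.4, 52.5))  # a point in Berlin -> "Germany"
\end{verbatim}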
\begin{figure}[htbp] \centerline{\includegraphics[width=.4\textwidth]{fig/model.png}} \caption{Architecture of the sentiment analysis model.} \label{fig:model} \end{figure}
\section{Analysis method}
We now describe how the automatic sentiment analysis was performed, and the considerations involved in this method.
\begin{figure}[htbp] \centerline{\includegraphics[width=.5\textwidth]{fig/embedding_comp.png}} \caption{MSE for different models on the \textit{Sentiment140} test dataset.} \label{fig:embedding_comp} \end{figure}
\subsection{Sentiment modeling}
In order to analyze these large amounts of data, we focus on an automatic method for sentiment analysis. We train a neural network for sentiment analysis on tweets. The text input layer of the network is followed by a pre-trained word or sentence embedding. The resulting embedding vectors are fed into a 128-dimensional fully-connected ReLU layer with 50\% dropout, followed by a regression output layer with sigmoid activation. Mean squared error is used as the loss. The model is visualized in figure \ref{fig:model}.\\
This network is trained on the \textit{Sentiment140} dataset \cite{go}. This dataset contains around 1.5 million tweets collected through keyword search, and then annotated automatically by detecting emoticons. Tweets are determined to have positive, neutral, or negative sentiment. We map these sentiments to the values 1.0, 0.5, and 0.0 for the regression. Sentiment for unseen tweets is then represented on a continuous scale at the output.\\
We test variants of the model using the following pre-trained word- and sentence-level embeddings: \begin{itemize} \item A skip-gram version of \textit{word2vec} \citep{mikolov} trained on the English-language Wikipedia\footnote{\url{https://tfhub.dev/google/Wiki-words-250/2}} \item A multilingual version of BERT \citep{bert} trained on Wikipedia data\footnote{\url{https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/2}} \item A multilingual version of BERT trained on 160 million tweets containing COVID-19 keywords\footnote{\url{https://tfhub.dev/digitalepidemiologylab/covid-twitter-bert/1}} \citep{covidtwitterbert} \item An ELMo model \cite{elmo} trained on the 1 Billion Word Benchmark dataset\footnote{\url{https://tfhub.dev/google/elmo/3}} \item The Multilingual Universal Sentence Encoder (MUSE)\footnote{\url{https://tfhub.dev/google/universal-sentence-encoder-multilingual/3}} \citep{yang} \end{itemize}
We train each sentiment analysis model on the \textit{Sentiment140} dataset for 10 epochs. Mean squared error results on the unseen test portion of the same dataset are shown in figure \ref{fig:embedding_comp}. For comparison, we also include an analysis conducted with VADER, a rule-based sentiment reasoner designed for social media messages \cite{vader}.\\
Interestingly, most neural network results are in the range of the rule-based approach. BERT delivers better results than the \textit{word2vec} model, with ELMo and the COVID-19-specific version also leading to improvements. However, the best result is achieved with the pre-trained multilingual USE model, which can embed whole sentences rather than (contextualized) words. We therefore perform the subsequent sentiment analysis with the MUSE-based model.\\
An interesting side note here is that the dataset only contains English-language tweets, but the sentence embedding is multilingual (for 16 languages). We freeze the embedding weights to prevent them from over-adapting to English.
Due to the cross-lingual semantic representation capabilities of the pre-trained embedding, we expect the model to be able to detect sentiment in other languages just as well.\\
With the created model, we perform sentiment analysis on the 4.6 million tweets collected from December to April, and then aggregate the results over time. This provides us with a representation of the development of Twitter messages' average sentiment over time. We specifically consider all collected tweets rather than just those determined to be topically related to COVID-19 because we are interested in the effect on people's moods in general, not just with regards to the pandemic. Additionally, we filter the tweets by COVID-19-associated keywords, and analyze their sentiments as well. The chosen keywords are listed in figure \ref{fig:keywords}.\\
\subsection{Considerations}
There are some assumptions implicit in this analysis method that we want to address here. First of all, we only consider tweets containing a geolocation. This applies to less than 1\% of the whole tweet stream, but according to \citet{sloan}, the distribution of geolocated tweets closely follows the geographic population distribution. According to \citet{graham}, there are probably factors determining which users share their locations and which ones do not, but there is no systematic study of these.\\
Other assumptions arise from the analysis method itself. For one, we assume that the model is able to extract meaningful sentiment values from the data. However, sentiment is subjective, and the model may fail for certain constructs (e.g. negations, sarcasm). Additionally, modeling sentiment on a binary scale does not tell the whole story. ``Positive'' sentiment encompasses, for example, happy or hopeful tweets, ``negative'' covers angry or sad tweets, and ``neutral'' tweets can be, for example, news tweets. A more fine-grained analysis would be of interest in the future.\\
We also assume a somewhat similar perception of sentiment across languages. Finally, we assume that the detected sentiments as a whole are reflective of the mood within the community; on the other hand, mood is not quantifiable in the first place. All of these assumptions can be called into question. Nevertheless, while they may not be applicable for every single tweet, we hope to detect interesting effects on a large scale. When analyzing thousands of tweets within each time frame, random fluctuations become less likely. We believe that this analysis can provide useful insights into people's thoughts, and form an interesting basis for future studies from psychological or sociological perspectives.
\begin{figure}[htbp] \centerline{\includegraphics[width=.4\textwidth]{fig/keywords.png}} \caption{Keywords used for filtering the tweets (not case sensitive).} \label{fig:keywords} \end{figure}
\section{Results}
In the following, we present the detected sentiment developments over time, overall and for selected countries, and correlate them with events that took place within these months. Results for some other countries would have been interesting as well, but were not included because the main spoken language is not covered by MUSE (e.g. Sweden, Denmark). Others were excluded because there was not enough material available; we only analyze countries with at least 300,000 recorded tweets. As described in section \ref{sec:data_collection}, tweets are filtered geographically, not by language (i.e. tweets from Italy may also be in languages other than Italian).
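For concreteness, the model described in the previous section can be sketched as follows (a minimal illustration consistent with the stated architecture and the frozen MUSE embedding; the optimizer and the exact input handling are assumptions, as they are not stated in the text):

\begin{verbatim}
# Sketch of the sentiment regression model: frozen MUSE
# embedding, 128-d ReLU layer with 50% dropout, sigmoid
# regression output, trained with mean squared error.
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # registers ops needed by MUSE

embed = hub.KerasLayer(
    "https://tfhub.dev/google/"
    "universal-sentence-encoder-multilingual/3",
    trainable=False)  # frozen against over-adapting to English

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=[], dtype=tf.string),
    embed,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")
\end{verbatim}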
\subsection{Overall}\label{subsec:res_overall}
In total, we analyzed around 4.6 million tweets, of which around 79,000 contained at least one COVID-19 keyword. Figure \ref{fig:sentiment_kw_count_all} shows the development of the sentiment over time for all tweets and for those with keywords, as well as the development of the number of keyworded tweets. The sentiment results are smoothed on a weekly basis (otherwise, we would see a lot of movement within the week, e.g. an increase on weekends). For the average over all tweets, we see a slight decrease in sentiment over time, possibly indicating that users' moods deteriorated over the last few months. There are some side effects that need to be considered here. For example, the curve rises slightly for holidays like Christmas and Easter (April 12). Interestingly, we see a clear dip around mid-March. Most European countries started implementing strong social distancing measures around this time. We will discuss this in more detail in the next sections.\\
We see that keywords were used very rarely before mid-January, and only saw a massive increase in usage around the beginning of March. Lately, usage has been decreasing again, indicating a loss of interest over time. Consequently, the sentiment analysis for keyword tweets is not informative in the beginning. Starting with the more frequent usage in February, the associated sentiment drops massively, indicating that these tweets are now used in relation to the pandemic. Interestingly, the sentiment recovers with the increased use in March; it is possible that users were starting to think about the risks and handling of the situation in a more relaxed way over time. Still, the sentiment curve for keyword tweets lies significantly below the average one, which is to be expected for this all-around rather negative topic.
\begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_kw_count_all.png}} \caption{Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.} \label{fig:sentiment_kw_count_all} \end{figure*}
\subsection{Analysis by country}
We next aggregated the tweets by country as described in section \ref{sec:data_collection} and performed the same analysis per country. The country-wise curves are shown jointly in figure \ref{fig:sentiment_by_country}. Comparing the absolute average sentiment values between countries is difficult, as they may be influenced by language or cultural factors. However, the relative development is interesting. We see that all curves progress in a relatively similar fashion, with peaks around Christmas and Easter, a strong dip in the middle of March, and a general slow decrease in sentiment. In the following, we will have a closer look at each country's development. (Note that the keyword-only curves are cut off at the beginning for some countries due to a low number of keyword tweets.)
\begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_by_country.png}} \caption{Development of average sentiment over time by country (all tweets).} \label{fig:sentiment_by_country} \end{figure*}
\subsubsection{Italy}
Figure \ref{fig:sentiment_kw_count_italy} shows the average sentiment for all Italian tweets and all Italian keyword tweets, as well as the development of keyword tweets in Italy. In total, around 400,000 Italian tweets are contained in the data set, of which around 12,000 have a keyword.
Similar to the overall curves described in section \ref{subsec:res_overall}, the sentiment curve slowly decreases over time, and keywords are not used frequently before the end of January, when the first cases in Italy were confirmed. Sentiment in the keyword tweets starts out very negative and then increases again. Interestingly, we see a dip in sentiment on March 9, which is exactly when the Italian lockdown was announced. Keywords were also used most frequently during that week. The dip is not visible in the keyword-only sentiment curve, suggesting that the negative sentiment was actually caused by the higher prevalence of coronavirus-related tweets.
\begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_kw_count_italy_mod.png}} \caption{Italy: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.} \label{fig:sentiment_kw_count_italy} \end{figure*}
\subsubsection{Spain}
For Spain, around 780,000 tweets were collected in total, with around 14,000 keyword tweets. The curves are shown in figure \ref{fig:sentiment_kw_count_spain}. Heavier usage of keywords starts around the same time as in Italy, as the first domestic cases were publicized at around the same time. The spike in keyword-only sentiment in mid-February is actually an artifact of the low number of keyworded tweets, in combination with the fact that ``corona'' is a word with other meanings in Spanish (in contrast to the other languages). With more keyword mentions, the sentiment drops as in other countries.\\
From there onwards, the virus progressed somewhat more slowly in Spain, which is reflected in the curves as well. A lockdown was announced in Spain on March 14, corresponding to a dip in the sentiment curve. As with the Italian data, this dip is not present in the keyword-only sentiments.
\begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_kw_count_spain.png}} \caption{Spain: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.} \label{fig:sentiment_kw_count_spain} \end{figure*}
\subsubsection{France}
Analyses for the data from France are shown in figure \ref{fig:sentiment_kw_count_france}. For France, around 309,000 tweets and around 4,600 keyword tweets were collected. Due to the lower number of data points, the curves are somewhat less smooth. Despite the first European COVID-19 case being detected in France in January, cases did not increase significantly until the end of February, which is once again mirrored in the onset of increased keyword usage. The French lockdown was announced on March 16 and extended on April 13, both reflected in dips in the sentiment curve. Towards the end of the considered period, keyword-only sentiment actually starts to increase, which is also seen in Italy and Germany. This could indicate a shift to a more hopeful outlook with regard to the pandemic.
\begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_kw_count_france_mod.png}} \caption{France: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.} \label{fig:sentiment_kw_count_france} \end{figure*}
\subsubsection{Germany}
For Germany, around 415,000 tweets and around 5,900 keyword tweets were collected.
The analysis results are shown in figure \ref{fig:sentiment_kw_count_germany}. After a few first cases at the end of January, Germany's case count did not increase significantly until early March, which is again when keyword usage increased. The decrease in the sentiment curve actually arrives around the same time as in France and Spain, which is a little surprising because social distancing measures were not introduced by the government until March 22 (extended on March 29). German users were likely influenced by the situation in their neighboring countries here. In general, the curve is flatter than in other countries. One possible reason for this might be the lower severity of measures in Germany, e.g. there were no strict curfews.\\
In contrast to all other considered countries, the keyword-only sentiment curve is not significantly below the sentiment curve for all tweets in Germany after the beginning of March. There are some possible explanations for this. For one, the governmental response to the situation was generally applauded in Germany \cite{uni_erfurt}, and, as mentioned above, was not as strict as in other countries, possibly not impacting people as much. On the other hand, the overall German curve is lower than its counterparts from other countries, i.e. German tweets have lower average sentiment values in general, possibly caused by cultural factors.
\begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_kw_count_germany_mod.png}} \caption{Germany: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.} \label{fig:sentiment_kw_count_germany} \end{figure*}
\subsubsection{United Kingdom}
Curves for the United Kingdom are shown in figure \ref{fig:sentiment_kw_count_uk}, calculated on around 1,380,000 tweets, including around 22,000 keyword tweets. Higher keyword usage starts somewhat earlier than expected here, in February, whereas a significant increase in cases did not occur until March. Once again, keyword-only sentiment starts out very negative and then increases over time.\\
The British government handled the situation somewhat differently. In early March, only recommendations were given, and a lockdown was explicitly avoided to prevent economic consequences. This may be a cause of the sentiment peak seen at this time. However, the curve falls until mid-March, when other European countries did implement lockdowns. The government finally announced a lockdown starting on March 26. By this point, the announcement did not lead to a significant change in average sentiment, but in contrast with other countries, the curve does not swing back to a significantly more positive level in the considered period, and actually decreases towards the end.
\begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_kw_count_uk_mod.png}} \caption{United Kingdom: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.} \label{fig:sentiment_kw_count_uk} \end{figure*}
\section{Conclusion} \vspace{-5pt}
In this paper, we presented the results of a sentiment analysis of 4.6 million geotagged Twitter messages collected during the months of December 2019 through April 2020. This analysis was performed with a neural network trained on an unrelated Twitter sentiment data set. The tweets were then tagged with sentiment on a scale from 0 to 1 using this network.
The results were aggregated by country, and averaged over time. Additionally, the sentiments of tweets containing COVID-19-related keywords were aggregated separately.\\
We find several interesting results in the data. First of all, there is a general downward trend in sentiment over the last few months corresponding to the COVID-19 pandemic, with clear dips at times of lockdown announcements and a slow recovery in the following weeks in most countries. COVID-19 keywords were used rarely before February, and their usage correlates with a rise in cases in each country. The sentiment of keyworded tweets starts out very negative at the beginning of increased keyword usage, and becomes more positive over time. However, it remains significantly below the average sentiment in all countries except Germany. Interestingly, there is a slight upward development in sentiment in most countries towards the end of the considered period.\\
\vspace{-10pt} \section{Future work} \vspace{-5pt}
We will continue this study by also analyzing the development in the weeks since May 1st and the coming months. More countries will also be added. It will be very interesting to compare the shown European results to those of countries like China, South Korea, Japan, New Zealand, or even individual US states, which were impacted by the pandemic at different times and in different ways, and where the governmental and societal response was different from that of Europe.\\
There are also many other interesting research questions that could be answered on a large scale with this data, for example regarding people's trust in published COVID-19 information, their concrete opinions on containment measures, or their situation during an infection. Other data sets have also been published in the meantime, including ones that contain hundreds of millions of tweets at the time of writing \cite[e.g.][]{geocov,banda_juan_m_2020_3757272}. These data sets are much larger because collection was not restricted to geotagged tweets. In \citet{geocov}, geolocations were instead completed from outside sources.\\
These studies could also be extended to elucidate more detailed factors in each country. One possibility here is an analysis of Twitter usage and tweet content by country. Another, as mentioned above, lies in moving from the binary sentiment scale to a more complex model. \newpage \bibliography{anthology,acl2020} \bibliographystyle{acl_natbib} \appendix \end{document}
https://openreview.net/forum?id=VvRbhkiAwR
https://arxiv.org/abs/2008.12172
Please evaluate the paper based on the provided evaluation, focusing on the approach comparisons, data breakdown, and the potential for extended analyses and future improvements.
Review on "Cross-language sentiment analysis of European Twitter messages during the COVID-19 pandemic" The authors carried on a deep learning pipeline to analyze the sentiment of Twitter texts, and propose a complete research. The presentation and language part of this submission is good. However, the research mainly use the routine DL methodology and the analysis method is not contributive. In general, the novelty and contribution of this research do not reach the level of publication as a ACL workshop paper. Here comes some comments and suggestions. 1. The data statistics is missing. Though we found a rough number list in Figure 1, they are not quite clear. Data with time series info are also welcomed. Furthermore, several python packages help to draw Europe Map, and might make this part more vivid. 2. It is better to provide a figure to explain the structure of the network. The authors surely already gave some details in page 2, including the input layer, activation function info. The hyper parameter of the network could also be provided. 3. It is lacking of comparison of the current NN with some other NN structure. How would one single experiment derive convincing result without baseline methods or intrinsic evaluation? This is a core question I would like to raise here for this research. 4. I am thinking of a possibility of splitting the Twitter data in terms of weeks, and take time series consideration into the current research paradigm. A sentiment-time curve plot might lead to some instructive hypothesis, if the research take a more sophisticated experiment design. Rating: 6: Marginally above acceptance threshold Confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2020} \usepackage{times} \usepackage{latexsym} \renewcommand{\UrlFont}{\ttfamily\small} \usepackage{graphicx} \usepackage{caption} \usepackage{url} \usepackage[utf8]{inputenc} \usepackage{microtype} \aclfinalcopy % \setlength\titlebox{10cm} \newcommand\BibTeX{B\textsc{ib}\TeX} \title{Cross-language sentiment analysis of European Twitter messages during the COVID-19 pandemic} \author{Anna Kruspe \\ German Aerospace Center (DLR) \\ Institute of Data Science \\ Jena, Germany \\ \texttt{anna.kruspe@dlr.de} \\\And Matthias H\"aberle \\ Technical University of Munich (TUM) \\ Signal Processing in Earth Observation (SiPEO) \\ Munich, Germany \\ \texttt{matthias.haeberle@tum.de} \\\AND Iona Kuhn \\ German Aerospace Center (DLR) \\ Institute of Data Science \\ Jena, Germany \\ \texttt{iona.kuhn@dlr.de} \\\And Xiao Xiang Zhu \\ German Aerospace Center (DLR) \\ Remote Sensing Technology Institute (IMF) \\ Oberpfaffenhofen, Germany \\ \texttt{xiaoxiang.zhu@dlr.de}} \date{} \begin{document} \maketitle \begin{abstract} Social media data can be a very salient source of information during crises. User-generated messages provide a window into people's minds during such times, allowing us insights about their moods and opinions. Due to the vast amounts of such messages, a large-scale analysis of population-wide developments becomes possible.\\ In this paper, we analyze Twitter messages (tweets) collected during the first months of the COVID-19 pandemic in Europe with regard to their sentiment. This is implemented with a neural network for sentiment analysis using multilingual sentence embeddings. We separate the results by country of origin, and correlate their temporal development with events in those countries. This allows us to study the effect of the situation on people's moods. We see, for example, that lockdown announcements correlate with a deterioration of mood in almost all surveyed countries, which recovers within a short time span. \end{abstract} \section{Introduction} The COVID-19 pandemic has led to a worldwide situation with a large number of unknowns. Many heretofore unseen events occurred within a short time span, and governments have had to make quick decisions for containing the spread of the disease. Due to the extreme novelty of the situation, the outcomes of many of these events have not been studied well so far. This is true with regards to their medical effect, as well as the effect on people's perceptions and moods.\\ First studies about the effect the pandemic has on people's lives are being published at the moment \citep[e.g.][]{uni_erfurt}, mainly focusing on surveys and polls. Naturally, such studies are limited to relatively small numbers of participants and focus on specific regions (e.g. countries).\\ In contrast, social media provides a large amount of user-created messages reflective of those users' moods and opinions. The issue with this data source is the difficulty of analysis - social media messages are extremely noisy and idiosyncratic, and the amount of incoming data is much too large to analyze manually. We therefore need automatic methods to extract meaningful insights.\\ In this paper, we describe a data set collected from Twitter during the months of December 2019 through April 2020, and present an automatic method for determining the sentiments contained in these messages. 
We then calculate the development of these sentiments over time, segment the results by country, and correlate them with events that took place in each country during those five months. \vspace{-5pt} \section{Related work} Since the pandemic outbreak and lockdown measures, numerous studies have been published to investigate the impact of the corona pandemic on Twitter. \citet{feng2020working} analyzed tweets from the US on a state and county level. First, they could detect differences in temporal tweeting patterns and found that people tweeting more about COVID-19 during working hours as the pandemic progressed. Furthermore, they conducted a sentiment analysis over time including an event specific subtask reporting negative sentiment when the 1000th death was announced and positive when the lockdown measures were eased in the states. \citet{lyu2020sense} looked into US-tweets which contained the terms "Chinese-virus" or "Wuhan-virus" referring to the COVID-19 pandemic to perform a user characterization. They compared the results to users that did not make use of such controversial vocabulary. The findings suggest that there are noticeable differences in age group, geo-location, or followed politicians. \citet{chen2020eyes} focused on sentiment analysis and topic modelling on COVID-19 tweets containing the term "Chinese-virus" (controversial) and contrasted them against tweets without such terms (non-controversial). Tweets containing "Chinese-virus" discussing more topics which are related to China whereas tweets without such words stressing how to defend the virus. The sentiment analysis revealed for both groups negative sentiment, yet with a slightly more positive and analytical tone for the non-controversial tweets. Furthermore, they accent more the future and what the group itself can do to fight the disease. In contrast, the controversial group aiming more on the past and concentrate on what others should do. \begin{figure*}[htbp] \centerline{\includegraphics[width=.8\textwidth]{fig/treemap_countries.pdf}} \caption{Treemap of Twitter activity in Europe during the time period of December 2019 to April 2020.} \label{fig:treemap_countries} \end{figure*} \section{Data collection}\label{sec:data_collection} For our study, we used the freely available Twitter API to collect the tweets from December 2019 to April 2020. The free API allows streaming of 1\% of the total tweet amount. To cover the largest possible area, we used a bounding box which includes the entire world. From this data, we sub-sampled 4,683,226 geo-referenced tweets in 60 languages located in the Europe. To create the Europe sample, we downloaded a shapefile of the earth\footnote{\url{https://www.naturalearthdata.com/downloads/10m-cultural-vectors/10m-admin-0-countries/}}, then we filtered by country performing a point in polygon test using the Python package \textit{Shapely}\footnote{\url{https://pypi.org/project/Shapely/}}. Figure \ref{fig:treemap_countries} depicts the Europe Twitter activity in total numbers. Most tweets come from the U.K. Tweets are not filtered by topic, i.e. many of them are going to be about other topics than COVID-19. This is by design. As we will describe later, we also apply a simple keyword filter to detect tweets that are probably COVID-19-related for further analysis. 
\begin{figure}[htbp] \centerline{\includegraphics[width=.4\textwidth]{fig/model.png}} \caption{Architecture of the sentiment analysis model.} \label{fig:model} \end{figure} \section{Analysis method} We now describe how the automatic sentiment analysis was performed, and the considerations involved in this method. \begin{figure}[htbp] \centerline{\includegraphics[width=.5\textwidth]{fig/embedding_comp.png}} \caption{MSE for different models on the \textit{Sentiment140} test dataset.} \label{fig:embedding_comp} \end{figure} \subsection{Sentiment modeling} In order to analyze these large amounts of data, we focus on an automatic method for sentiment analysis. We train a neural network for sentiment analysis on tweets. The text input layer of the network is followed by a pre-trained word or sentence embedding. The resulting embedding vectors are fed into a 128-dimensional fully-connected ReLU layer with 50\% dropout, followed by a regression output layer with sigmoid activation. Mean squared error is used as loss. The model is visualized in figure \ref{fig:model}.\\ This network is trained on the \textit{Sentiment140} dataset \cite{go}. This dataset contains around 1.5 million tweets collected through keyword search, and then annotated automatically by detecting emoticons. Tweets are determined to have positive, neutral, or negative sentiment. We map these sentiments to the values 1.0, 0.5, and 0.0 for the regression. Sentiment for unseen tweets is then represented on a continuous scale at the output.\\ We test variants of the model using the following pre-trained word- and sentence-level embeddings: \begin{itemize} \item A skip-gram version of \textit{word2vec} \citep{mikolov} trained on the English-language Wikipedia\footnote{\url{https://tfhub.dev/google/Wiki-words-250/2}} \item A multilingual version of BERT \citep{bert} trained on Wikipedia data\footnote{\url{https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/2}} \item A multilingual version of BERT trained on 160 million tweets containing COVID-19 keywords\footnote{\url{https://tfhub.dev/digitalepidemiologylab/covid-twitter-bert/1}} \citep{covidtwitterbert} \item An ELMO model \cite{elmo} trained on the 1 Billion Word Benchmark dataset\footnote{\url{https://tfhub.dev/google/elmo/3}} \item The Multilingual Universal Sentence Encoder (MUSE)\footnote{\url{https://tfhub.dev/google/universal-sentence-encoder-multilingual/3}} \citep{yang} \end{itemize} We train each sentiment analysis model on the \textit{Sentiment140} dataset for 10 epochs. Mean squared error results on the unseen test portion of the same dataset are shown in figure \ref{fig:embedding_comp}. For comparison, we also include an analysis conducted by VADER which is a rule-based sentiment reasoner designed for social media messages \cite{vader}.\\ % Interestingly, most neural network results are in the range of the rule-based approach. BERT delivers better results than the \textit{word2vec} model, with ELMO and the COVID-19-specific version also leading to improvements. However, the best result is achieved with the pre-trained multilingual USE model, which can embed whole sentences rather than (contextualized) words. We therefore perform the subsequent sentiment analysis with the MUSE-based model.\\ An interesting side note here is that the dataset only contains English-language tweets, but the sentence embedding is multilingual (for 16 languages). We freeze the embedding weights to prevent them from over-adapting to English. 
Due to the cross-lingual semantic representation capabilities of the pre-trained embedding, we expect the model to be able to detect sentiment in other languages just as well.\\ With the created model, we perform sentiment analysis on the 4.6 million tweets collected from December to April, and then aggregate the results over time. This provides us with a representation of the development of Twitter messages' average sentiment over time. We specifically consider all collected tweets rather than just those determined to be topically related to COVID-19 because we are interested in the effect on people's moods in general, not just with regards to the pandemic. Additionally, we also filter the tweets by COVID-19-associated keywords, and analyze their sentiments as well. % The chosen keywords are listed in figure \ref{fig:keywords}.\\ \subsection{Considerations} There are some assumptions implicit in this analysis method that we want to address here. First of all, we only consider tweets containing a geolocation. This applies to less than 1\% of the whole tweet stream, but according to \citet{sloan}, the amount of geolocated tweets closely follows the geographic population distribution. According to \citet{graham}, there probably are factors determining which users share their locations and which ones do not, but there is no systematic study of these.\\ Other assumptions arise from the analysis method itself. For one, we assume that the model is able to extract meaningful sentiment values from the data. However, sentiment is subjective, and the model may be failing for certain constructs (e.g. negations, sarcasm). Additionally, modeling sentiment on a binary scale does not tell the whole story. ``Positive'' sentiment encompasses, for example, happy or hopeful tweets, ``negative'' angry or sad tweets, and ``neutral'' tweets can be news tweets, for example. A more finegrained analysis would be of interest in the future.\\ We also assume a somewhat similar perception of sentiment across languages. Finally, we assume that the detected sentiments as a whole are reflective of the mood within the community; on the other hand, mood is not quantifiable in the first place. All of these assumptions can be called into question. Nevertheless, while they may not be applicable for every single tweet, we hope to detect interesting effects on a large scale. When analyzing thousands of tweets within each time frame, random fluctuations become less likely. We believe that this analysis can provide useful insights into people's thoughts, and form an interesting basis for future studies from psychological or sociological perspectives. \begin{figure}[htbp] \centerline{\includegraphics[width=.4\textwidth]{fig/keywords.png}} \caption{Keywords used for filtering the tweets (not case sensitive).} \label{fig:keywords} \end{figure} \section{Results} In the following, we present the detected sentiment developments over time over-all and for select countries, and correlate them with events that took place within these months. Results for some other countries would have been interesting as well, but were not included because the main spoken language is not covered by MUSE (e.g. Sweden, Denmark). Others were excluded because there was not enough material available; we only analyze countries with at least 300,000 recorded tweets. As described in section \ref{sec:data_collection}, tweets are filtered geographically, not by language (i.e. Italian tweets may also be in other languages than Italian). 
\subsection{Over-all}\label{subsec:res_overall} In total, we analyzed around 4.6 million tweets, of which around 79,000 contained at least one COVID-19 keyword. Figure \ref{fig:sentiment_kw_count_all} shows the development of the sentiment over time for all tweets and for those with keywords, as well as the development of the number of keyworded tweets. The sentiment results are smoothed on a weekly basis (otherwise, we would be seeing a lot of movement during the week, e.g. an increase on the weekends). For the average over all tweets, we see a slight decrease in sentiment over time, indicating possibly that users' moods deteriorated over the last few months. There are some side effects that need to be considered here. For example, the curve rises slightly for holidays like Christmas and Easter (April 12). Interestingly, we see a clear dip around mid-March. Most European countries started implementing strong social distancing measures around this time. We will talk about this in more detail in the next sections.\\ We see that keywords were used very rarely before mid-January, and only saw a massive increase in usage around the beginning of March. Lately, usage has been decreasing again, indicating a loss of interest over time. Consequently, the sentiment analysis for keyword tweets is not expressive in the beginning. Starting with the more frequent usage in February, the associated sentiment drops massively, indicating that these tweets are now used in relation with the pandemic. Interestingly, the sentiment recovers with the increased use in March - it is possible that users were starting to think about the risks and handling of the situation in a more relaxed way over time. Still, the sentiment curve for keyword tweets lies significantly below the average one, which is to be expected for this all-around rather negative topic. \begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_kw_count_all.png}} \caption{Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.} \label{fig:sentiment_kw_count_all} \end{figure*} \subsection{Analysis by country} We next aggregated the tweets by country as described in section \ref{sec:data_collection} and performed the same analysis by country. The country-wise curves are shown jointly in figure \ref{fig:sentiment_by_country}. Comparing the absolute average sentiment values between countries is difficult as they may be influenced by the languages or cultural factors. However, the relative development is interesting. We see that all curves progress in a relatively similar fashion, with peaks around Christmas and Easter, a strong dip in the middle of March, and a general slow decrease in sentiment. In the following, we will have a closer look at each country's development. (Note that the keyword-only curves are cut of in the beginning for some countries due to a low number of keyword tweets). \begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_by_country.png}} \caption{Development of average sentiment over time by country (all tweets).} \label{fig:sentiment_by_country} \end{figure*} \subsubsection{Italy} Figure \ref{fig:sentiment_kw_count_italy} shows the average sentiment for all Italian tweets and all Italian keyword tweets, as well as the development of keyword tweets in Italy. In total, around 400,000 Italian tweets are contained in the data set, of which around 12,000 have a keyword. 
Similar to the over-all curves described in section \ref{subsec:res_overall}, the sentiment curve slowly decreases over time, keywords are not used frequently before the end of January, when the first cases in Italy were confirmed. Sentiment in the keyword tweets starts out very negative and then increases again. Interestingly, we see a dip in sentiment on March 9, which is exactly when the Italian lockdown was announced. Keywords were also used most frequently during that week. The dip is not visible in the keyword-only sentiment curve, suggesting that the negative sentiment was actually caused by the higher prevalence of coronavirus-related tweets. \begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_kw_count_italy_mod.png}} \caption{Italy: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.} \label{fig:sentiment_kw_count_italy} \end{figure*} \subsubsection{Spain} For Spain, around 780,000 tweets were collected in total with around 14,000 keyword tweets. The curves are shown in figure \ref{fig:sentiment_kw_count_spain}. The heavier usage of keywords starts around the same time as in Italy, where the first domestic cases were publicized at the same time. The spike in keyword-only sentiment in mid-February is actually an artifact of the low number of keyworded tweets in combination with the fact that ``corona'' is a word with other meanings in Spanish (in contrast to the other languages). With more keyword mentions, the sentiment drops as in other countries.\\ From there onwards, the virus progressed somewhat slower in Spain, which is reflected in the curves as well. A lockdown was announced in Spain on March 14, corresponding to a dip in the sentiment curve. As with the Italian data, this dip is not present in the keyword-only sentiments. \begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_kw_count_spain.png}} \caption{Spain: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.} \label{fig:sentiment_kw_count_spain} \end{figure*} \subsubsection{France} Analyses for the data from France are shown in figure \ref{fig:sentiment_kw_count_france}. For France, around 309,000 tweets and around 4,600 keyword tweets were collected. Due to the lower number of data points, the curves are somewhat less smooth. Despite the first European COVID-19 case being detected in France in January, cases did not increase significantly until the end of February, which once again is also seen in the start of increased keyword usage here. The French lockdown was announced on March 16 and extended on April 13, both reflected in dips in the sentiment curve. Towards the end of the considered period, keyword-only sentiment actually starts to increase, which is also seen in Italy and Germany. This could indicate a shift to a more hopeful outlook with regards to the pandemic. \begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_kw_count_france_mod.png}} \caption{France: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.} \label{fig:sentiment_kw_count_france} \end{figure*} \subsubsection{Germany} For Germany, around 415,000 tweets and around 5,900 keyword tweets were collected. 
The analysis results are shown in figure \ref{fig:sentiment_kw_count_germany}. After very few first cases at the end of January, Germany's case count did not increase significantly until early March, which is again when keyword usage increased. The decrease in the sentiment curve actually arrives around the same time as in France and Spain, which is a little surprising because social distancing measures were not introduced by the government until March 22 (extended on March 29). German users were likely influenced by the situation in their neighboring countries here. In general, the curve is flatter than in other countries. One possible reason for this might be the lower severity of measures in Germany, e.g. there were no strict curfews.\\ In contrast to all other considered countries, the keyword-only sentiment curve is not significantly below the sentiment curve for all tweets in Germany after the beginning of March. There are some possible explanations for this. For one, governmental response to the situation was generally applauded in Germany \cite{uni_erfurt}, and, as mentioned above, was not as strict as in other countries, possibly not impacting people as much. On the other hand, the over-all German curve is lower than its counterparts from other countries, i.e. German tweets have lower average sentiment values in general, possibly caused by cultural factors. \begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_kw_count_germany_mod.png}} \caption{Germany: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.} \label{fig:sentiment_kw_count_germany} \end{figure*} \subsubsection{United Kingdom} Curves for the United Kingdom are shown in figure \ref{fig:sentiment_kw_count_uk}, calculated on around 1,380,000 tweets including around 22,000 keyword tweets. Higher keyword usage starts somewhat earlier here than expected in February, whereas a significant increase in cases did not occur until March. Once again, keyword-only sentiment starts out very negative and then increases over time.\\ The British government handled the situation somewhat differently. In early March, only recommendations were given, and a lockdown was explicitly avoided to prevent economic consequences. This may be a cause for the sentiment peak seen at this time. However, the curve falls until mid-March, when other European countries did implement lockdowns. The government finally did announce a lockdown starting on March 26. This did not lead to a significant change in average sentiment anymore, but in contrast with other countries, the curve does not swing back to a significantly more positive level in the considered period, and actually decreases towards the end. \begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_kw_count_uk_mod.png}} \caption{United Kingdom: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.} \label{fig:sentiment_kw_count_uk} \end{figure*} \section{Conclusion} \vspace{-5pt} In this paper, we presented the results of a sentiment analysis of 4.6 million geotagged Twitter messages collected during the months of December 2019 through April 2020. This analysis was performed with a neural network trained on an unrelated Twitter sentiment data set. The tweets were then tagged with sentiment on a scale from 0 to 1 using this network. 
The results were aggregated by country and averaged over time. Additionally, the sentiments of tweets containing COVID-19-related keywords were aggregated separately.\\ We find several interesting results in the data. First of all, there is a general downward trend in sentiment over the last few months, corresponding to the COVID-19 pandemic, with clear dips at the times of lockdown announcements and a slow recovery in the following weeks in most countries. COVID-19 keywords were used rarely before February, and their usage correlates with the rise in cases in each country. The sentiment of keyworded tweets starts out very negative at the beginning of increased keyword usage and becomes more positive over time. However, it remains significantly below the average sentiment in all countries except Germany. Interestingly, there is a slight upward trend in sentiment in most countries towards the end of the considered period.\\ \vspace{-10pt} \section{Future work} \vspace{-5pt} We will continue this study by also analyzing the development in the weeks since May 1st and the coming months. More countries will also be added. It will be very interesting to compare the European results shown here to those of countries like China, South Korea, Japan, New Zealand, or even individual US states, which were impacted by the pandemic at different times and in different ways, and where the governmental and societal responses differed from those of Europe.\\ There are also many other interesting research questions that could be answered on a large scale with this data, for example regarding people's trust in published COVID-19 information, their concrete opinions on containment measures, or their situation during an infection. Other data sets have also been published in the meantime, including ones that contain hundreds of millions of tweets at the time of writing \cite[e.g.][]{geocov,banda_juan_m_2020_3757272}. These data sets are much larger because collection was not restricted to geotagged tweets. In \citet{geocov}, geolocations were instead completed from outside sources.\\ These studies could also be extended to elucidate more detailed factors in each country. One possibility here is an analysis of Twitter usage and tweet content by country. Another, as mentioned above, lies in moving from the binary sentiment scale to a more complex model. \newpage \bibliography{anthology,acl2020} \bibliographystyle{acl_natbib} \appendix \end{document}
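The per-country aggregation described in the conclusion can be illustrated with a short sketch. This is a minimal, hypothetical pandas example; the column names (`timestamp`, `country`, `sentiment`, `has_keyword`) and the weekly smoothing window are assumptions for illustration, since the paper does not publish its aggregation code:

```python
import pandas as pd

# One row per tweet with a model-assigned sentiment score in [0, 1]
tweets = pd.DataFrame({
    "timestamp":   pd.to_datetime(["2020-03-09", "2020-03-10", "2020-03-16"]),
    "country":     ["IT", "IT", "DE"],
    "sentiment":   [0.31, 0.45, 0.52],
    "has_keyword": [True, False, True],
})

def weekly_mean(df):
    """Weekly average sentiment per country, smoothing out weekday effects."""
    return (df.set_index("timestamp")
              .groupby("country")["sentiment"]
              .resample("W").mean())

all_weekly = weekly_mean(tweets)                         # all tweets
kw_weekly = weekly_mean(tweets[tweets["has_keyword"]])   # keyword tweets only

# Number of keyword tweets per week and country
kw_counts = (tweets[tweets["has_keyword"]]
             .set_index("timestamp")
             .groupby("country")
             .resample("W").size())
```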
https://openreview.net/forum?id=VvRbhkiAwR
https://arxiv.org/abs/2008.12172
Please evaluate the paper based on its methodology, novelty, and contribution, providing specific feedback on the data statistics, network structure, comparison with other methods, and potential for time series analysis.
Review

This is a mostly well-written overview of an exercise to assign sentiment labels to tweets generated in European countries during the period December '19 - May '20. The authors describe how they identify the country of origin, how they assign the sentiment level (positive, neutral, negative), how they use emoticons, and how they use a deep-learning neural model which can presumably assign this label regardless of the language the tweet was originally written in. The authors report a 0.82 accuracy for their system. The rest of the paper is a recognition of the limitations, and a description and plotting of the sentiment level for various European countries. Unfortunately, these results do not contribute much new knowledge. The study could use more work.

Suggestions: Could the authors provide a breakdown by language of the tweets that they process? Are we to assume that all tweets originating from Italy are in Italian and those originating in Germany are in German? Is this data publicly available? Has the 0.82 accuracy been manually validated? Is there a difference in accuracy depending on the language? The authors claim that one of the contributions of their study is this tagged dataset (geotagged and sentiment-tagged), yet there seems to be no further evaluation of how well the tagging has been applied. And while it is visibly clear that there is a global fall in sentiment that correlates with governments issuing lockdown protective measures, and this result could be initial evidence that the labelling of the data is sound, is there anything else we can say? Is there any other way to analyze this data and identify common topics within similar sentiment groups? Something that could be of actual use to COVID-19 researchers...

Rating: 6: Marginally above acceptance threshold
Confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
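The per-language breakdown the reviewer asks for could be estimated along these lines. This is an entirely illustrative sketch, not part of the paper's pipeline; it uses the `langdetect` package as one possible language identifier:

```python
from collections import Counter

from langdetect import DetectorFactory, detect

DetectorFactory.seed = 0  # make langdetect deterministic

# Hypothetical tweets geolocated to Italy; in practice, read from the dataset
tweets_from_italy = [
    "Oggi inizia il lockdown in tutta Italia",
    "Stay home, stay safe everyone",
]

# Count detected languages to answer "are all Italian tweets in Italian?"
counts = Counter(detect(t) for t in tweets_from_italy)
print(counts)  # e.g. Counter({'it': 1, 'en': 1})
```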
Please evaluate the paper based on the methodology used to assign sentiment labels to European-country generated tweets during the period December'19-May'20, including the accuracy of the system and any limitations or potential improvements.
Excellent description of a critical COVID-19 dataset, some questions remaining

This manuscript describes an exemplary effort to address COVID-19 by bringing together much of the relevant literature into one corpus, CORD-19, and increasing its accessibility by providing a harmonized and standardized format convenient for use by automated tools. CORD-19 has been - and is likely to continue being - a critical resource for the scientific community to address COVID-19, and this manuscript not only reflects that importance, but also gives insight into the approach used, the design decisions taken, challenges encountered, use cases, shared tasks, and various discussion points. The manuscript is well-organized and readable, and (overall) an excellent case study in corpus creation. This manuscript is not only important for understanding the CORD-19 corpus and its enabling effect on current COVID-19 efforts, but is possibly also a historically important example of joint scientific efforts to address COVID-19. Despite the critical importance of this dataset, there are several questions left unanswered by this manuscript, and it would be unfortunate not to address these before publication. It would be useful to have a very clear statement of the purpose for CORD-19. The inclusion of SARS and MERS makes intuitive sense, but it is less clear why other coronaviruses that infect humans (e.g. HCoV-OC43) are not explicitly included - I am not a virologist, but neither will be most of the audience for this manuscript. While many of the articles that discuss these lesser-known coronaviruses would be included anyway because they would also mention "coronavirus", this is not guaranteed. While it seems appropriate for document inclusion to be query-based, it is important to consider the coverage of the query. The number of name variants in the literature for COVID-19 or SARS-CoV-2 is rather large, and not all of these documents will include other terms that will match, such as "coronavirus". For example, how would a document that mentions "SARS CoV-2" but none of the query terms listed be handled? This is not a theoretical case: the title and abstract for PMID 32584542 have this issue, and I was unable to locate this document in CORD-19. In addition to minor variations such as this, there are many examples of significant variations such as "HCoV-19", "nCoV-19" or even "COIVD". Are these cases worth considering? If not, can we quantify how much is lost? And if we can't quantify it, this is a limitation. How is the following situation handled: querying source A returns a document (e.g. the source has full text and that matches), but the same query against source B does not return its version of the document (e.g. that source only has the title and abstract, and they do not match)? From the description, I would assume that the version from source A is used and the version from source B is ignored; is any reasonably useful data lost by not explicitly querying source B for its version? There are other efforts to provide a repository of scientific articles related to COVID-19, and it would be appropriate to mention these, if only to indicate why CORD-19 has unique value. I am aware of LitCovid (Chen Q, Allot A, Lu Z. Keep up with the latest coronavirus research. Nature. 2020;579(7798):193); are there others? There are also non-COVID-19 efforts to provide a large percentage of the literature in formats appropriate for text mining or other processing. One is (Comeau, Donald C., et al.
"PMC text mining subset in BioC: about three million full-text articles and growing." Bioinformatics 35.18 (2019): 3533-3535.), which not only provides the full text of a large percentage of the articles in PubMed Central, but it is also kept up-to-date and converts all documents into a straightforward standardized XML format appropriate for text mining. While this effort is single-source, it specifically addresses some of the issues encountered in the creation of CORD-19 and the representation aspect of the "Call to Action". Rating: 9: Top 15% of accepted papers, strong accept Confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
\documentclass[11pt,a4paper]{article} \PassOptionsToPackage{hyphens}{url}\usepackage{hyperref} % \usepackage[hyperref]{acl2020} \usepackage{times} \usepackage{latexsym} \usepackage{enumitem} \usepackage{graphicx} \usepackage{booktabs} \usepackage{tabularx} \renewcommand{\UrlFont}{\ttfamily\small} \usepackage{xspace} % \usepackage{microtype} \aclfinalcopy % \setlength\titlebox{8cm} \newcommand\BibTeX{B\textsc{ib}\TeX} \newcommand{\covid}{\textsc{Covid-19}\xspace} \newcommand{\cord}{\textsc{CORD-19}\xspace} \newcommand{\sars}{\textsc{SARS}\xspace} \newcommand{\mers}{\textsc{MERS}\xspace} \newcommand{\swine}{\textsc{H1N1}\xspace} \newcommand{\trec}{\textsc{TREC-COVID}\xspace} \newcommand\kyle[1]{{\color{red}\{\textit{#1}\}$_{KL}$}} \newcommand\lucy[1]{{\color{orange}\{\textit{#1}\}$_{LLW}$}} \newcommand\todoit[1]{{\color{red}\{TODO: \textit{#1}\}}} \newcommand\todo{{\color{red}{TODO}}\xspace} \title{\cord: The \covid Open Research Dataset} \author{ Lucy Lu Wang$^{1,}$\Thanks{ denotes equal contribution} \quad Kyle Lo$^{1,}$\footnotemark[1] \quad Yoganand Chandrasekhar$^1$ \quad Russell Reas$^1$ \quad \\ {\bf Jiangjiang Yang$^1$ \quad Douglas Burdick$^2$ \quad Darrin Eide$^3$ \quad Kathryn Funk$^4$ \quad } \\ {\bf Yannis Katsis$^2$ \quad Rodney Kinney$^1$ \quad Yunyao Li$^2$ \quad Ziyang Liu$^6$ \quad } \\ {\bf William Merrill$^1$ \quad Paul Mooney$^5$ \quad Dewey Murdick$^7$ \quad Devvret Rishi$^5$ \quad } \\ {\bf Jerry Sheehan$^4$ \quad Zhihong Shen$^3$ \quad Brandon Stilson$^1$ \quad Alex D. Wade$^6$ \quad } \\ {\bf Kuansan Wang$^3$ \quad Nancy Xin Ru Wang $^2$ \quad Chris Wilhelm$^1$ \quad Boya Xie$^3$ \quad } \\ {\bf Douglas Raymond$^1$ \quad Daniel S. Weld$^{1,8}$ \quad Oren Etzioni$^1$ \quad Sebastian Kohlmeier$^1$ \quad } \\ [2mm] $^1$Allen Institute for AI \quad $^2$ IBM Research \quad $^3$Microsoft Research \\ $^4$National Library of Medicine \quad $^5$Kaggle \quad $^6$Chan Zuckerberg Initiative \\ $^7$Georgetown University \quad $^8$University of Washington \\ {\tt\small \{lucyw, kylel\}@allenai.org} } \date{} \begin{document} \maketitle \begin{abstract} The \covid Open Research Dataset (\cord) is a growing\footnote{The dataset continues to be updated daily with papers from new sources and the latest publications. Statistics reported in this article are up-to-date as of version \textsc{2020-06-14}.} resource of scientific papers on \covid and related historical coronavirus research. \cord is designed to facilitate the development of text mining and information retrieval systems over its rich collection of metadata and structured full text papers. Since its release, \cord has been downloaded\footnote{\href{https://www.semanticscholar.org/cord19}{https://www.semanticscholar.org/cord19}} over 200K times and has served as the basis of many \covid text mining and discovery systems. In this article, we describe the mechanics of dataset construction, highlighting challenges and key design decisions, provide an overview of how \cord has been used, and describe several shared tasks built around the dataset. We hope this resource will continue to bring together the computing community, biomedical experts, and policy makers in the search for effective treatments and management policies for \covid. 
\end{abstract} \section{Introduction} On March 16, 2020, the Allen Institute for AI (AI2), in collaboration with our partners at The White House Office of Science and Technology Policy (OSTP), the National Library of Medicine (NLM), the Chan Zuckerberg Initiative (CZI), Microsoft Research, and Kaggle, coordinated by Georgetown University's Center for Security and Emerging Technology (CSET), released the first version of \cord. This resource is a large and growing collection of publications and preprints on \covid and related historical coronaviruses such as \sars and \mers. The initial release consisted of 28K papers, and the collection has grown to more than 140K papers over the subsequent weeks. Papers and preprints from several archives are collected and ingested through the Semantic Scholar literature search engine,\footnote{\href{https://semanticscholar.org/}{https://semanticscholar.org/}} metadata are harmonized and deduplicated, and paper documents are processed through the pipeline established in \citet{lo-wang-2020-s2orc} to extract full text (more than 50\% of papers in \cord have full text). We commit to providing regular updates to the dataset until an end to the \covid crisis is foreseeable. \begin{figure}[tbp!] \centering \includegraphics[width=\columnwidth]{cord19_dset.png} \caption{Papers and preprints are collected from different sources through Semantic Scholar. Released as part of \cord are the harmonized and deduplicated metadata and full text JSON.} \label{fig:dataset} \end{figure} \cord aims to connect the machine learning community with biomedical domain experts and policy makers in the race to identify effective treatments and management policies for \covid. The goal is to harness these diverse and complementary pools of expertise to discover relevant information more quickly from the literature. Users of the dataset have leveraged AI-based techniques in information retrieval and natural language processing to extract useful information. Responses to \cord have been overwhelmingly positive, with the dataset being downloaded over 200K times in the three months since its release. The dataset has been used by clinicians and clinical researchers to conduct systematic reviews, has been leveraged by data scientists and machine learning practitioners to construct search and extraction tools, and is being used as the foundation for several successful shared tasks. We summarize research and shared tasks in Section~\ref{sec:research_directions}. In this article, we briefly describe: \begin{enumerate}[noitemsep] \item The content and creation of \cord, \item Design decisions and challenges around creating the dataset, \item Research conducted on the dataset, and how shared tasks have facilitated this research, and \item A roadmap for \cord going forward. \end{enumerate} \section{Dataset} \label{sec:dataset} \cord integrates papers and preprints from several sources (Figure~\ref{fig:dataset}), where a paper is defined as the base unit of published knowledge, and a preprint as an unpublished but publicly available counterpart of a paper. Throughout the rest of Section~\ref{sec:dataset}, we discuss papers, though the same processing steps are adopted for preprints. First, we ingest into Semantic Scholar paper metadata and documents from each source.
Each paper is associated with bibliographic metadata, such as title, authors, and publication venue, as well as unique identifiers such as a DOI, PubMed Central ID, PubMed ID, the WHO Covidence \#,\footnote{\label{footnote:who}\href{https://www.who.int/emergencies/diseases/novel-coronavirus-2019/global-research-on-novel-coronavirus-2019-ncov}{https://www.who.int/emergencies/diseases/novel-coronavirus-2019/global-research-on-novel-coronavirus-2019-ncov}} MAG identifier \citep{Shen2018AWS}, and others. Some papers are associated with documents, the physical artifacts containing paper content; these are the familiar PDFs, XMLs, or physical print-outs we read. For the \cord effort, we generate harmonized and deduplicated metadata as well as structured full text parses of paper documents as output. We provide full text parses in cases where we have access to the paper documents, and where the documents are available under an open access license (e.g. Creative Commons (CC),\footnote{\href{https://creativecommons.org/}{https://creativecommons.org/}} publisher-specific \covid licenses,\footnote{\label{footnote:pmc_covid}\href{https://www.ncbi.nlm.nih.gov/pmc/about/covid-19/}{https://www.ncbi.nlm.nih.gov/pmc/about/covid-19/}} or identified as open access through DOI lookup in the Unpaywall\footnote{\href{https://unpaywall.org/}{https://unpaywall.org/}} database). \subsection{Sources of papers} Papers in \cord are sourced from PubMed Central (PMC), PubMed, the World Health Organization's Covid-19 Database,\textsuperscript{\ref{footnote:who}} and the preprint servers bioRxiv, medRxiv, and arXiv. The PMC Public Health Emergency Covid-19 Initiative\textsuperscript{\ref{footnote:pmc_covid}} expanded access to \covid literature by working with publishers to make coronavirus-related papers discoverable and accessible through PMC under open access license terms that allow for reuse and secondary analysis. BioRxiv and medRxiv preprints were initially provided by CZI, and are now ingested through Semantic Scholar along with all other included sources. We also work directly with publishers such as Elsevier\footnote{\label{footnote:elsevier}\href{https://www.elsevier.com/connect/coronavirus-information-center}{https://www.elsevier.com/connect/coronavirus-information-center}} and Springer Nature\footnote{\href{https://www.springernature.com/gp/researchers/campaigns/coronavirus}{https://www.springernature.com/gp/researchers/\\campaigns/coronavirus}} to provide full text coverage of relevant papers available in their back catalog. All papers are retrieved using the following query\footnote{Adapted from the Elsevier COVID-19 site\textsuperscript{\ref{footnote:elsevier}}}: \begin{quote} \footnotesize\texttt{"COVID" OR "COVID-19" OR "Coronavirus" OR "Corona virus" OR "2019-nCoV" OR "SARS-CoV" OR "MERS-CoV" OR "Severe Acute Respiratory Syndrome" OR "Middle East Respiratory Syndrome"} \end{quote} \noindent Papers that match these keywords in their title, abstract, or body text are included in the dataset. Query expansion is performed by PMC on these search terms, affecting the subset of papers in \cord retrieved from PMC. \subsection{Processing metadata} \label{sec:metadata_processing} The initial collection of sourced papers suffers from duplication and incomplete or conflicting metadata.
We perform the following operations to harmonize and deduplicate all metadata: \begin{enumerate}[noitemsep] \item Cluster papers using paper identifiers \item Select canonical metadata for each cluster \item Filter clusters to remove unwanted entries \end{enumerate} \paragraph{Clustering papers} We cluster papers if they overlap on any of the following identifiers: \emph{\{doi, pmc\_id, pubmed\_id, arxiv\_id, who\_covidence\_id, mag\_id\}}. If two papers from different sources have an identifier in common and no other identifier conflicts between them, we assign them to the same cluster. Each cluster is assigned a unique identifier \textbf{\textsc{cord\_uid}}, which persists between dataset releases. No existing identifier, such as DOI or PMC ID, is sufficient as the primary \cord identifier. Some papers in PMC do not have DOIs; some papers from the WHO, publishers, or preprint servers like arXiv do not have PMC IDs or DOIs. Occasionally, conflicts occur. For example, a paper $c$ with $(doi, pmc\_id, pubmed\_id)$ identifiers $(x, null, z')$ might share identifier $x$ with a cluster of papers $\{a, b\}$ that has identifiers $(x, y, z)$, but has a conflict $z' \neq z$. In this case, we choose to create a new cluster $\{c\}$, containing only paper $c$.\footnote{This is a conservative clustering policy in which any metadata conflict prohibits clustering. An alternative policy would be to cluster if any identifier matches, under which $a$, $b$, and $c$ would form one cluster with identifiers $(x, y, [z, z'])$.} \paragraph{Selecting canonical metadata} Within each cluster, the canonical entry is selected to prioritize the availability of document files and the most permissive license. For example, between two papers with PDFs, one available under a CC license and one under a more restrictive \covid-specific copyright license, we select the CC-licensed paper entry as canonical. If any metadata in the canonical entry are missing, values from other members of the cluster are promoted to fill in the blanks. \paragraph{Cluster filtering} Some entries harvested from sources are not papers, and instead correspond to materials like tables of contents, indices, or informational documents. These entries are identified in an ad hoc manner and removed from the dataset. \subsection{Processing full text} Most papers are associated with one or more PDFs.\footnote{PMC papers can have multiple associated PDFs per paper, separating the main text from supplementary materials.} To extract full text and bibliographies from each PDF, we use the PDF parsing pipeline created for the S2ORC dataset \cite{lo-wang-2020-s2orc}.\footnote{One major difference in full text parsing for \cord is that we do not use ScienceParse,\footnotemark~as we always derive this metadata from the sources directly.}\footnotetext{\href{https://github.com/allenai/science-parse}{https://github.com/allenai/science-parse}} In \citet{lo-wang-2020-s2orc}, we introduce the S2ORC JSON format for representing scientific paper full text, which is used as the target output for paper full text in \cord. The pipeline involves: \begin{enumerate}[noitemsep] \item Parse all PDFs to TEI XML files using GROBID\footnote{\href{https://github.com/kermitt2/grobid}{https://github.com/kermitt2/grobid}} \cite{Lopez2009GROBIDCA} \item Parse all TEI XML files to S2ORC JSON \item Postprocess to clean up links between inline citations and bibliography entries.
\end{enumerate} \noindent We additionally parse JATS XML\footnote{\href{https://jats.nlm.nih.gov/}{https://jats.nlm.nih.gov/}} files available for PMC papers using a custom parser, generating the same target S2ORC JSON format. This creates two sets of full text JSON parses associated with the papers in the collection, one set originating from PDFs (available from more sources), and one set originating from JATS XML (available only for PMC papers). Each PDF parse has an associated SHA, the 40-digit SHA-1 of the associated PDF file, while each XML parse is named using its associated PMC ID. Around 48\% of \cord papers have an associated PDF parse, and around 37\% have an XML parse, with the latter nearly a subset of the former. Most PDFs ($>$90\%) are successfully parsed. Around 2.6\% of \cord papers are associated with multiple PDF SHAs, due to a combination of paper clustering and the existence of supplementary PDF files. \subsection{Table parsing} Since the May 12, 2020 release of \cord, we also release selected HTML table parses. Tables contain important numeric and descriptive information such as sample sizes and results, which are the targets of many information extraction systems. A separate PDF table processing pipeline is used, consisting of table extraction and table understanding. \emph{Table extraction} is based on the Smart Document Understanding (SDU) capability included in IBM Watson Discovery.\footnote{\href{https://www.ibm.com/cloud/watson-discovery}{https://www.ibm.com/cloud/watson-discovery}} SDU converts a given PDF document from its native binary representation into a text-based representation such as HTML, which includes both identified document structures (e.g., tables, section headings, lists) and formatting information (e.g. positions for extracted text). \emph{Table understanding} (also part of Watson Discovery) then annotates the extracted tables with additional semantic information, such as column and row headers and table captions. We leverage the Global Table Extractor (GTE)~\cite{Zheng2020GlobalTE}, which uses a specialized object detection and clustering technique to extract table bounding boxes and structures. All PDFs are processed through this table extraction and understanding pipeline. If the Jaccard similarity of the table captions from the table parses and \cord parses is above 0.9, we insert the HTML of the matched table into the full text JSON. We extract 188K tables from 54K documents, of which 33K tables are successfully matched to tables in 19K (around 25\%) full text documents in \cord. Based on preliminary error analysis, we find that match failures are primarily due to caption mismatches between the two parse schemes. Thus, we plan to explore alternate matching functions, potentially leveraging table content and document location as additional features. See Appendix \ref{app:tables} for example table parses.
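As a minimal illustration of the caption-based matching described above (a sketch only, not the production pipeline; the whitespace tokenization and the example captions are simplifying assumptions):

\begin{verbatim}
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between two captions, on lowercased token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta or tb) else 0.0

# Hypothetical captions for the same table from the two parse schemes
sdu_caption = "TABLE 2: Baseline characteristics of the study population"
cord_caption = "Table 2: Baseline characteristics of the study population"

# Attach the extracted HTML table only if the captions agree strongly
if jaccard(sdu_caption, cord_caption) > 0.9:
    print("match: attach HTML table parse to the full text JSON")
\end{verbatim}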
\subsection{Dataset contents} \begin{figure}[tbp!] \centering \includegraphics[width=\columnwidth]{papers_per_year.png} \caption{The distribution of papers per year in \cord. A spike in publications occurs in 2020 in response to \covid.} \label{fig:year} \end{figure} \cord has grown rapidly, now consisting of over 140K papers with over 72K full texts. Over 47K papers and 7K preprints on \covid and coronaviruses have been released since the start of 2020, comprising nearly 40\% of papers in the dataset. \begin{table}[tbp!] \setlength{\tabcolsep}{.25em} \footnotesize \centering \begin{tabular}{p{34mm}p{15mm}p{17mm}} \toprule Subfield & Count & \% of corpus \\ \midrule Virology & 29567 & 25.5\% \\ Immunology & 15954 & 13.8\% \\ Surgery & 15667 & 13.5\% \\ Internal medicine & 12045 & 10.4\% \\ Intensive care medicine & 10624 & 9.2\% \\ Molecular biology & 7268 & 6.3\% \\ Pathology & 6611 & 5.7\% \\ Genetics & 5231 & 4.5\% \\ Other & 12997 & 11.2\% \\ \bottomrule \end{tabular} \caption{MAG subfield of study for \cord papers.} \label{tab:fos} \end{table} Classification of \cord papers to Microsoft Academic Graph (MAG) \citep{msr:mag1, msr:mag2} fields of study \citep{Shen2018AWS} indicates that the dataset consists predominantly of papers in Medicine (55\%), Biology (31\%), and Chemistry (3\%), which together constitute almost 90\% of the corpus.\footnote{MAG identifier mappings are provided as a supplement on the \cord landing page.} A breakdown of the most common MAG subfields (L1 fields of study) represented in \cord is given in Table~\ref{tab:fos}. Figure~\ref{fig:year} shows the distribution of \cord papers by date of publication. Coronavirus publications increased during and following the SARS and MERS epidemics, but the number of papers published in the early months of 2020 exploded in response to the \covid epidemic. Using author affiliations in MAG, we identify the countries from which the research in \cord is conducted. Large proportions of \cord papers are associated with institutions based in the Americas (around 48K papers), Europe (over 35K papers), and Asia (over 30K papers). \section{Design decisions \& challenges} A number of challenges come into play in the creation of \cord. We summarize the primary design requirements of the dataset, along with the challenges implicit within each requirement: \paragraph{Up-to-date} Hundreds of new publications on \covid are released every day, and a dataset like \cord can quickly become irrelevant without regular updates. \cord has been updated daily since May 26. A processing pipeline that produces consistent results day to day is vital to maintaining a changing dataset. That is, the metadata and full text parsing results must be reproducible, identifiers must be persistent between releases, and changes or new features should ideally be compatible with previous versions of the dataset. \paragraph{Handles data from multiple sources} Papers from different sources must be integrated and harmonized. Each source has its own metadata format, which must be converted to the \cord format, while addressing any missing or extraneous fields. The processing pipeline must also be flexible to adding new sources. \paragraph{Clean canonical metadata} Because of the diversity of paper sources, duplication is unavoidable. Once paper metadata from each source is cleaned and organized into \cord format, we apply the deduplication logic described in Section \ref{sec:metadata_processing} to identify similar paper entries from different sources. We apply a conservative clustering algorithm, combining papers only when they have shared identifiers but no conflicts between any particular class of identifiers. We justify this because it is less harmful to retain a few duplicate papers than to remove a document that is potentially unique and useful. A minimal sketch of this clustering policy is given below.
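The following sketch illustrates the conservative policy of Section \ref{sec:metadata_processing}: two entries are merged only if they share at least one identifier and conflict on none. It is illustrative only (abbreviated identifier fields, not the production code); the example mirrors the $a$/$c$ conflict described earlier.

\begin{verbatim}
ID_FIELDS = ["doi", "pmc_id", "pubmed_id", "arxiv_id",
             "who_covidence_id", "mag_id"]

def compatible(a: dict, b: dict) -> bool:
    """True if a and b share an identifier and have no conflicting ones."""
    shared = False
    for field in ID_FIELDS:
        x, y = a.get(field), b.get(field)
        if x is None or y is None:
            continue  # a missing identifier is not a conflict
        if x != y:
            return False  # any conflict prohibits clustering
        shared = True
    return shared

a = {"doi": "x", "pmc_id": "y", "pubmed_id": "z"}
c = {"doi": "x", "pubmed_id": "z'"}
print(compatible(a, c))  # False: shared doi, but pubmed_id conflicts
\end{verbatim}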
\paragraph{Machine readable full text} To provide accessible and canonical structured full text, we parse content from PDFs and associated paper documents. The full text is represented in S2ORC JSON format \citep{lo-wang-2020-s2orc}, a schema designed to preserve the most relevant paper structures such as paragraph breaks, section headers, inline references, and citations. S2ORC JSON is simple to use for many NLP tasks, where character-level indices are often employed for annotation of relevant entities or spans. The text and annotation representations in S2ORC share similarities with BioC \citep{Comeau2019PMCTM}, a JSON schema introduced by the BioCreative community for shareable annotations, with both formats leveraging the flexibility of character-based span annotations. However, S2ORC JSON also provides a schema for representing other components of a paper, such as its metadata fields, bibliography entries, and reference objects for figures, tables, and equations. We leverage this flexible and fairly complete representation for \cord. We recognize that converting from PDF or XML to JSON is lossy. However, the benefits of a standard structured format, and the ability to reuse and share annotations made on top of that format, have been critical to the success of \cord. \paragraph{Observes copyright restrictions} Papers in \cord, and academic papers more broadly, are made available under a variety of copyright licenses. These licenses can restrict or limit the ability of organizations such as AI2 to redistribute their content freely. Although much of the \covid literature has been made open access by publishers, the provisions of these open access licenses differ greatly across papers. Additionally, many open access licenses grant the ability to read, or ``consume'' the paper, but may be restrictive in other ways, for example, by not allowing republication of a paper or its redistribution for commercial purposes. The curator of a dataset like \cord must pass on best-to-our-knowledge licensing information to the end user. \section{Research directions} \label{sec:research_directions} \begin{figure}[tbp!] \centering \includegraphics[width=\columnwidth]{cord19_tasks.png} \caption{An example information retrieval and extraction system using \cord: Given an input query, the system identifies relevant papers (yellow highlighted rows) and extracts text snippets from the full text JSONs as supporting evidence.} \label{fig:tasks} \end{figure} We provide a survey of various ways researchers have made use of \cord. We organize these into four categories: \emph{(i)} direct usage by clinicians and clinical researchers (\S\ref{sec:by_clinical_experts}), \emph{(ii)} tools and systems to assist clinicians (\S\ref{sec:for_clinical_experts}), \emph{(iii)} research to support further text mining and NLP research (\S\ref{sec:for_nlp_researchers}), and \emph{(iv)} shared tasks and competitions (\S\ref{sec:shared_tasks}). \subsection{Usage by clinical researchers} \label{sec:by_clinical_experts} \cord has been used by medical experts as a paper collection for conducting systematic reviews. These reviews address questions about \covid, including infection and mortality rates in different demographics \cite{Han2020.who-is-more-susceptible}, symptoms of the disease \citep{Parasa2020PrevalenceOG}, suitable drugs for repurposing \cite{sadegh2020exploring}, management policies \cite{Yaacoube-bmj-safe-management-bodies}, and interactions with other diseases \cite{Crisan-Dabija-tuberculosis-covid19, Popa-inflammatory-bowel-diseases}.
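To make the character-offset span annotations of the S2ORC JSON schema (see the \emph{Machine readable full text} paragraph above) concrete, here is a minimal, hand-constructed fragment expressed as a Python dictionary; all field values are hypothetical and the schema is abbreviated:

\begin{verbatim}
paper = {
    "paper_id": "0001",
    "body_text": [{
        "section": "Introduction",
        "text": "Remdesivir was evaluated in early trials [1].",
        "cite_spans": [{"start": 41, "end": 44,
                        "text": "[1]", "ref_id": "BIBREF0"}],
    }],
}

# Character-level indices let annotations be shared and reused directly
span = paper["body_text"][0]["cite_spans"][0]
assert paper["body_text"][0]["text"][span["start"]:span["end"]] == "[1]"
\end{verbatim}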
\paragraph{Observes copyright restrictions} Papers in \cord, and academic papers more broadly, are made available under a variety of copyright licenses. These licenses can restrict or limit the ability of organizations such as AI2 to redistribute their content freely. Although much of the \covid literature has been made open access by publishers, the provisions of these open access licenses differ greatly across papers. Additionally, many open access licenses grant the ability to read, or ``consume'' the paper, but may be restrictive in other ways, for example, by not allowing republication of a paper or its redistribution for commercial purposes. The curator of a dataset like \cord must pass on best-to-our-knowledge licensing information to the end user.

\section{Research directions}
\label{sec:research_directions}

\begin{figure}[tbp!]
\centering
\includegraphics[width=\columnwidth]{cord19_tasks.png}
\caption{An example information retrieval and extraction system using \cord: Given an input query, the system identifies relevant papers (yellow highlighted rows) and extracts text snippets from the full text JSONs as supporting evidence.}
\label{fig:tasks}
\end{figure}

We provide a survey of the various ways researchers have made use of \cord. We organize these into four categories: \emph{(i)} direct usage by clinicians and clinical researchers (\S\ref{sec:by_clinical_experts}), \emph{(ii)} tools and systems to assist clinicians (\S\ref{sec:for_clinical_experts}), \emph{(iii)} research to support further text mining and NLP research (\S\ref{sec:for_nlp_researchers}), and \emph{(iv)} shared tasks and competitions (\S\ref{sec:shared_tasks}).

\subsection{Usage by clinical researchers}
\label{sec:by_clinical_experts}
\cord has been used by medical experts as a paper collection for conducting systematic reviews. These reviews address questions about \covid, including infection and mortality rates in different demographics \cite{Han2020.who-is-more-susceptible}, symptoms of the disease \citep{Parasa2020PrevalenceOG}, identifying suitable drugs for repurposing \cite{sadegh2020exploring}, management policies \cite{Yaacoube-bmj-safe-management-bodies}, and interactions with other diseases \cite{Crisan-Dabija-tuberculosis-covid19, Popa-inflammatory-bowel-diseases}.

\subsection{Tools for clinicians}
\label{sec:for_clinical_experts}
Challenges for clinicians and clinical researchers during the current epidemic include \textit{(i)} keeping up to date with recent papers about \covid, \textit{(ii)} identifying useful papers from the historical coronavirus literature, \textit{(iii)} extracting useful information from the literature, and \textit{(iv)} synthesizing knowledge from the literature. To facilitate solutions to these challenges, dozens of tools and systems over \cord have already been developed. Most combine elements of text-based information retrieval and extraction, as illustrated in Figure~\ref{fig:tasks}. We have compiled a list of these efforts on the \cord public GitHub repository\footnote{\href{https://github.com/allenai/cord19}{https://github.com/allenai/cord19}} and highlight some systems in Table \ref{tab:other_tasks}.\footnote{There are many search and QA systems to survey. We have chosen to highlight the systems that were made publicly available within a few weeks of the initial \cord release.}

\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}p{#1}}
\begin{table*}[tbh!]
\small
\begin{tabularx}{\textwidth}{L{20mm}p{20mm}p{40mm}X}
\toprule
\textbf{Task} & \textbf{Project} & \textbf{Link} & \textbf{Description} \\
\midrule
\textbf{Search and \newline discovery} & \textsc{Neural Covidex} & \href{https://covidex.ai/}{https://covidex.ai/} & Uses a T5-base \cite{raffel2019exploring} unsupervised reranker on BM25 \cite{Jones2000APM} \\
\cline{2-4}
& \textsc{CovidScholar} & \href{https://covidscholar.org}{https://covidscholar.org/} & Adapts the \citet{Weston2019} system for entity-centric queries \\
\cline{2-4}
& \textsc{KDCovid} & \href{http://kdcovid.nl/about.html}{http://kdcovid.nl/about.html} & Uses BioSentVec \cite{biosentvec} similarity to identify relevant sentences \\
\cline{2-4}
& \textsc{Spike-Cord} & \href{https://spike.covid-19.apps.allenai.org}{https://spike.covid-19.apps.allenai.org} & Enables users to define ``regular expression''-like queries to directly search over full text \\
\midrule
\textbf{Question answering} & \textsc{covidask} & \href{https://covidask.korea.ac.kr/}{https://covidask.korea.ac.kr/} & Adapts \citet{seo-etal-2019-real} using the BioASQ challenge (Task B) dataset \citep{Tsatsaronis2015AnOO} \\
\cline{2-4}
& \textsc{aueb} & \href{http://cslab241.cs.aueb.gr:5000/}{http://cslab241.cs.aueb.gr:5000/} & Adapts \citet{mcdonald2018deep} using \citet{Tsatsaronis2015AnOO} \\
\midrule
\textbf{Summariz-\newline ation} & Vespa & \href{https://cord19.vespa.ai/}{https://cord19.vespa.ai/} & Generates summaries of paper abstracts using T5 \citep{raffel2019exploring} \\
\midrule
\textbf{Recommend-\newline ation} & Vespa & \href{https://cord19.vespa.ai/}{https://cord19.vespa.ai/} & Recommends ``similar papers'' using Sentence-BERT \cite{reimers-gurevych-2019-sentence} and SPECTER embeddings \cite{specter2020cohan} \\
\midrule
\textbf{Entailment} & COVID papers browser & \href{https://github.com/gsarti/covid-papers-browser}{https://github.com/gsarti/covid-papers-browser} & Similar to \textsc{KDCovid}, but uses embeddings from BERT models trained on NLI datasets \\
\midrule
\textbf{Claim \newline verification} & SciFact & \href{https://scifact.apps.allenai.org}{https://scifact.apps.allenai.org} & Uses RoBERTa-large \cite{liu2019roberta} to find Support/Refute evidence for \covid claims \\
\midrule
\textbf{Assistive lit.
review} & ASReview & \href{https://github.com/asreview/asreview-covid19}{https://github.com/asreview/ asreview-covid19} & Active learning system with a \cord plugin for identifying papers for literature reviews \\
\midrule
\textbf{Augmented reading} & Sinequa & \href{https://covidsearch.sinequa.com/app/covid-search/}{https://covidsearch.sinequa.com/ app/covid-search/} & In-browser paper reader with entity highlighting on PDFs \\
\midrule
\textbf{Visualization} & SciSight & \href{https://scisight.apps.allenai.org}{https://scisight.apps.allenai.org} & Network visualizations for browsing research groups working on \covid \\
\bottomrule
\end{tabularx}
\caption{Publicly available tools and systems for medical experts using \cord.}
\label{tab:other_tasks}
\end{table*}

\subsection{Text mining and NLP research}
\label{sec:for_nlp_researchers}
The following is a summary of resources released by the NLP community on top of \cord to support other research activities.

\paragraph{Information extraction} To support extractive systems, NER and entity linking of biomedical entities can be useful. NER and linking can be performed using NLP toolkits like ScispaCy \cite{neumann-etal-2019-scispacy} or language models like BioBERT-base \cite{Lee2019BioBERTAP} and SciBERT-base \cite{beltagy-etal-2019-scibert} finetuned on biomedical NER datasets. \citet{Wang2020ComprehensiveNE} augment \cord full text with entity mentions predicted from several techniques, including weak supervision using the NLM's Unified Medical Language System (UMLS) Metathesaurus \cite{Bodenreider2004TheUM}.
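As a pointer for readers new to these toolkits, the following is a minimal sketch of biomedical NER over \cord text with ScispaCy (the model name is one of ScispaCy's published models; the input sentence is fabricated for illustration):

\begin{verbatim}
import spacy

# Small biomedical model released with ScispaCy;
# install it via pip before loading.
nlp = spacy.load("en_core_sci_sm")

doc = nlp("Chloroquine has been evaluated for "
          "SARS-CoV-2 infection in observational studies.")
for ent in doc.ents:
    # Character offsets align with the span-based
    # annotation style used in the CORD-19 JSON.
    print(ent.text, ent.start_char, ent.end_char)
\end{verbatim}

\noindent Entity linking to UMLS concepts can be layered on top of this using ScispaCy's linker component.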
\paragraph{Text classification} Some efforts focus on extracting sentences or passages of interest. For example, \citet{Liang2020IdentifyingRF} use BERT \cite{devlin-etal-2019-bert} to extract sentences from \cord that contain \covid-related radiological findings.

\paragraph{Pretrained model weights} BioBERT and SciBERT have been popular pretrained LMs for \covid-related tasks. DeepSet has released a BERT-base model pretrained on \cord.\footnote{\href{https://huggingface.co/deepset/covid_bert_base}{https://huggingface.co/deepset/covid\_bert\_base}} SPECTER \cite{specter2020cohan} paper embeddings computed using paper titles and abstracts are being released with each \cord update. SeVeN relation embeddings \cite{espinosa-anke-schockaert-2018-seven} between word pairs have also been made available for \cord.\footnote{\href{https://github.com/luisespinosaanke/cord-19-seven}{https://github.com/luisespinosaanke/cord-19-seven}}

\paragraph{Knowledge graphs} The Covid Graph project\footnote{\href{https://covidgraph.org/}{https://covidgraph.org/}} releases a \covid knowledge graph built from mining several public data sources, including \cord, and is perhaps the largest current initiative in this space. \citet{Ahamed2020InformationMF} rely on entity co-occurrences in \cord to construct a graph that enables centrality-based ranking of drugs, pathogens, and biomolecules.

\subsection{Competitions and Shared Tasks}
\label{sec:shared_tasks}
The adoption of \cord and the proliferation of text mining and NLP systems built on top of the dataset are supported by several \covid-related competitions and shared tasks.

\subsubsection{Kaggle}
\label{sec:kaggle}
Kaggle hosts the \cord Research Challenge,\footnote{\href{https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge}{https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge}} a text-mining challenge that tasks participants with extracting answers to key scientific questions about \covid from the papers in the \cord dataset. Round 1 was initiated with a set of open-ended questions, e.g., \textit{What is known about transmission, incubation, and environmental stability?} and \textit{What do we know about \covid risk factors?} More than 500 teams participated in Round 1 of the Kaggle competition. Feedback from medical experts during Round 1 indicated that the most useful contributions took the form of article summary tables. Round 2 subsequently focused on this task of table completion, and resulted in 100 additional submissions. A unique tabular schema is defined for each question, and answers are collected across different automated extractions. For example, extractions for risk factors should include disease severity and fatality metrics, while extractions for incubation should include time ranges. Sufficient knowledge of \covid is necessary to define these schemas, to understand which fields are important to include (and exclude), and to perform error-checking and manual curation.

\subsubsection{TREC}
The \trec\footnote{\href{https://ir.nist.gov/covidSubmit/index.html}{https://ir.nist.gov/covidSubmit/index.html}} shared task \cite{trec-covid-jamia,voorhees2020treccovid} assesses systems on their ability to rank papers in \cord based on their relevance to \covid-related topics. Topics are sourced from MedlinePlus searches, Twitter conversations, library searches at OHSU, and direct conversations with researchers, reflecting actual queries made by the community. To emulate the real-world surge in publications and rapidly changing information needs, the shared task is organized in multiple rounds. Each round uses a specific version of \cord, has newly added topics, and gives participants one week to submit per-topic document rankings for judgment. Round 1 topics included more general questions, such as \emph{What is the origin of COVID-19?}~and \emph{What are the initial symptoms of COVID-19?}, while Round 3 topics have become more focused, e.g., \emph{What are the observed mutations in the SARS-CoV-2 genome?}~and \emph{What are the longer-term complications of those who recover from COVID-19?} Around 60 medical domain experts, including indexers from NLM and medical students from OHSU and UTHealth, are involved in providing gold rankings for evaluation. \trec opened using the April 1st \cord version and received submissions from over 55 participating teams.

\section{Discussion}
\label{sec:discussion}
Several hundred new papers on \covid are now being published every day. Automated methods are needed to analyze and synthesize information over this large quantity of content. The computing community has risen to the occasion, but it is clear that there is a critical need for better infrastructure to incorporate human judgments in the loop. Extractions need expert vetting, and search engines and systems must be designed to serve users. The successful engagement with and usage of \cord speaks to our ability to bridge the computing and biomedical communities over a common, global cause.
From early results of the Kaggle challenge, we have learned which formats are conducive to collaboration and which questions are the most urgent to answer. However, significant work remains in determining \textit{(i)} which methods are best to assist textual discovery over the literature, \textit{(ii)} how best to involve expert curators in the pipeline, and \textit{(iii)} which extracted results convert to successful \covid treatments and management policies. Shared tasks and challenges, as well as continued analysis and synthesis of feedback, will hopefully provide answers to these outstanding questions.

Since the initial release of \cord, we have implemented several new features based on community feedback, such as the inclusion of unique identifiers for papers, table parses, more sources, and daily updates. Most substantial outstanding feature requests have been implemented or addressed at this time. We will continue to update the dataset with more sources of papers and newly published literature as resources permit.

\subsection{Limitations}
Though we aim to be comprehensive, \cord does not cover many relevant scientific documents on \covid. We have restricted ourselves to research papers and preprints, and do not incorporate other types of documents, such as technical reports, white papers, and informational publications by governmental bodies. Including these documents is outside the current scope of \cord, but we encourage other groups to curate and publish such datasets.

Within the scope of scientific papers, \cord is also incomplete, though we continue to prioritize the addition of new sources. This has motivated the creation of other corpora supporting \covid NLP, such as LitCovid \citep{Chen2020KeepUW}, which provides complementary materials to \cord derived from PubMed. Though we have since added PubMed as a source of papers in \cord, other domains, such as the social sciences, are not currently represented, and we hope to incorporate such papers in future releases.

We also note the shortage of foreign language papers in \cord, especially Chinese language papers produced during the early stages of the epidemic. These papers may be useful to many researchers, and we are working with collaborators to provide them as supplementary data. However, sourcing and licensing these papers for re-publication present additional hurdles.

\subsection{Call to action}
Though the full text of many scientific papers is available to researchers through \cord, a number of challenges prevent the easy application of NLP and text mining techniques to these papers. First, the primary distribution format of scientific papers -- PDF -- is not amenable to text processing. The PDF file format is designed to share electronic documents rendered faithfully for reading and printing, and mixes visual with semantic information. Significant effort is needed to coerce PDF into a format more amenable to text mining, such as JATS XML,\footnote{\label{footnote:jats}\href{https://www.niso.org/publications/z3996-2019-jats}{https://www.niso.org/publications/z3996-2019-jats}} BioC \citep{Comeau2019PMCTM}, or S2ORC JSON \citep{lo-wang-2020-s2orc}, which is used in \cord. Though there is substantial work in this domain, we can still benefit from better PDF parsing tools for scientific documents. As a complement, scientific papers should also be made available in a structured format like JSON, XML, or HTML.
Second, there is a clear need for more scientific content to be made accessible to researchers. Some publishers have made \covid papers openly available during this time, but both the duration and scope of these epidemic-specific licenses are unclear. Papers describing research in related areas (e.g., on other infectious diseases, or relevant biological pathways) have also not been made open access, and are therefore unavailable in \cord or elsewhere. Securing release rights for papers not yet in \cord but relevant for \covid research is a significant portion of future work, led by the PMC \covid Initiative.\textsuperscript{\ref{footnote:pmc_covid}}

Lastly, there is no standard format for representing paper metadata. Existing schemas like the JATS XML NISO standard\textsuperscript{\ref{footnote:jats}} or library science standards like \textsc{bibframe}\footnote{\href{https://www.loc.gov/bibframe/}{https://www.loc.gov/bibframe/}} or Dublin Core\footnote{\href{https://www.dublincore.org/specifications/dublin-core/dces/}{https://www.dublincore.org/specifications/dublin-core/dces/}} have been adopted to represent paper metadata. However, these standards can be too coarse-grained to capture all necessary paper metadata elements, or may lack a strict schema, causing representations to vary greatly across the publishers who use them. To improve metadata coherence across sources, the community must define and agree upon an appropriate standard of representation.

\subsection*{Summary}
This project offers a model for how the community can use machine learning to advance scientific research. By allowing computational access to the papers in \cord, we increase our ability to perform discovery over these texts. We hope the dataset and the projects built on it will serve as a template for future work in this area. We also believe there are substantial improvements that can be made in the ways we publish, share, and work with scientific papers. We offer a few suggestions that could dramatically increase community productivity, reduce redundant effort, and result in better discovery and understanding of the scientific literature.

Through \cord, we have learned the importance of bringing together different communities around the same scientific cause. It is clearer than ever that automated text analysis is not the solution, but rather one tool among many that can be directed to combat the \covid epidemic. Crucially, the systems and tools we build must be designed to serve a use case, whether that is improving information retrieval for clinicians and medical professionals, summarizing the conclusions of the latest observational research or clinical trials, or converting these findings into a format that is easily digestible by healthcare consumers.

\section*{Acknowledgments}
This work was supported in part by NSF Convergence Accelerator award 1936940, ONR grant N00014-18-1-2193, and the University of Washington WRF/Cable Professorship. We thank The White House Office of Science and Technology Policy, the National Library of Medicine at the National Institutes of Health, Microsoft Research, the Chan Zuckerberg Initiative, and Georgetown University's Center for Security and Emerging Technology for co-organizing the \cord initiative. We thank Michael Kratsios, the Chief Technology Officer of the United States, and The White House Office of Science and Technology Policy for providing the initial seed set of questions for the Kaggle \cord research challenge.
We thank Kaggle for coordinating the \cord research challenge. In particular, we acknowledge Anthony Goldbloom for providing feedback on \cord and for involving us in discussions around the Kaggle literature review tables project. We thank the National Institute of Standards and Technology (NIST), National Library of Medicine (NLM), Oregon Health and Science University (OHSU), and University of Texas Health Science Center at Houston (UTHealth) for co-organizing the \trec shared task. In particular, we thank our co-organizers -- Steven Bedrick (OHSU), Aaron Cohen (OHSU), Dina Demner-Fushman (NLM), William Hersh (OHSU), Kirk Roberts (UTHealth), Ian Soboroff (NIST), and Ellen Voorhees (NIST) -- for feedback on the design of \cord. We acknowledge our partners at Elsevier and Springer Nature for providing additional full text coverage of papers included in the corpus. We thank Bryan Newbold from the Internet Archive for providing feedback on data quality and helpful comments on early drafts of the manuscript. We thank Rok Jun Lee, Hrishikesh Sathe, Dhaval Sonawane, and Sudarshan Thitte from IBM Watson AI for their help in table parsing. We also acknowledge and thank our collaborators from AI2: Paul Sayre and Sam Skjonsberg for providing front-end support for \cord and \trec, Michael Schmitz for setting up the \cord Discourse community forums, Adriana Dunn for creating webpage content and marketing, Linda Wagner for collecting community feedback, Jonathan Borchardt, Doug Downey, Tom Hope, Daniel King, and Gabriel Stanovsky for contributing supplemental data to the \cord effort, Alex Schokking for his work on the Semantic Scholar \covid Research Feed, Darrell Plessas for technical support, and Carissa Schoenick for help with public relations.

\bibliography{cord19}
\bibliographystyle{acl_natbib}

\appendix

\section{Table parsing results}
\label{app:tables}

\begin{table*}[th!]
\centering
\small
\begin{tabular}{llL{40mm}}
\toprule
\textbf{PDF Representation} & \textbf{HTML Table Parse} & \textbf{Source \& Description} \\
\midrule
\raisebox{-0.6\totalheight}{\includegraphics[width=0.32\textwidth]{tables/pdf1.png}} & \raisebox{-0.6\totalheight}{\includegraphics[width=0.35\textwidth]{tables/parse1.png}} & From \citet{Hothorn2020RelativeCD}: Exact structure; minimal row rules \\ [2.0cm]
\raisebox{-0.6\totalheight}{\includegraphics[width=0.32\textwidth]{tables/pdf2.png}} & \raisebox{-0.6\totalheight}{\includegraphics[width=0.35\textwidth]{tables/parse2.png}} & From \citet{LpezFando2020ManagementOF}: Exact structure; colored rows \\ [1.4cm]
\raisebox{-0.6\totalheight}{\includegraphics[width=0.32\textwidth]{tables/pdf3.png}} & \raisebox{-0.6\totalheight}{\includegraphics[width=0.35\textwidth]{tables/parse3.png}} & From \citet{Stringhini2020SeroprevalenceOA}: Minor span errors; partially colored background with minimal row rules \\ [2.0cm]
\raisebox{-0.6\totalheight}{\includegraphics[width=0.32\textwidth]{tables/pdf4.png}} & \raisebox{-0.6\totalheight}{\includegraphics[width=0.35\textwidth]{tables/parse4.png}} & From \citet{Fathi2020PROGNOSTICVO}: Overmerge and span errors; some section headers have row rules \\ [2.2cm]
\raisebox{-0.6\totalheight}{\includegraphics[width=0.32\textwidth]{tables/pdf5.png}} & \raisebox{-0.6\totalheight}{\includegraphics[width=0.35\textwidth]{tables/parse5.png}} & From \citet{Kaushik2020MultisystemIS}: Over-splitting errors; full row and column rules with large vertical spacing in cells \\
\bottomrule
\end{tabular}
\caption{A sample of table parses. Though most table structure is preserved accurately, the diversity of table representations results in some errors.}
\label{tab:table_parses}
\end{table*}

There is high variance in the representation of tables across different paper PDFs. The goal of table parsing is to extract all tables from PDFs and represent them in HTML table format, along with associated titles and headings. In Table \ref{tab:table_parses}, we provide several example table parses, showing the high diversity of table representations across documents, the structure of the resulting parses, and some common parse errors.
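For reference, the caption matching that underlies the alignment of these table parses with \cord full text (Jaccard similarity over captions with a 0.9 threshold, as described in the table parsing discussion) can be sketched as follows; the lowercasing and whitespace tokenization here are our assumptions, not necessarily those of the production matcher:

\begin{verbatim}
def jaccard(a_tokens, b_tokens):
    # Jaccard similarity between two token multisets,
    # treated as sets.
    a, b = set(a_tokens), set(b_tokens)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def captions_match(table_caption, cord_caption,
                   threshold=0.9):
    # Tokenization is a simplifying assumption
    # of this sketch.
    return jaccard(table_caption.lower().split(),
                   cord_caption.lower().split()) >= threshold
\end{verbatim}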
\end{document}
https://openreview.net/forum?id=0gLzHrE_t3z
https://arxiv.org/abs/2004.10706
Please evaluate the clarity and comprehensiveness of my paper, specifically addressing the purpose of my research, the coverage of the literature, and any potential limitations or gaps in the dataset used.
Overview of a highly important Covid-19 dataset This paper describes an important research dataset produced during the Covid-19 epidemic. The CORD-19 collection is used for much research and some challenge evaluations. Even though this paper does not report any research results per se, and the paper is posted on the ArXiv preprint server, this version will give a citable description of the collection that will likely be widely referenced. The authors describe well the process of dealing not only with the technical issues of processing heterogeneous scientific papers but also the non-technical issues, such as copyright and licensing. The authors do not make any unreasonable claims, although I do question the value of this collection for non-computational researchers and clinicians. As the authors note, the collection is not complete, and completeness is essential for clinical researchers and certainly for clinicians (who do not typically read primary research papers anyway, and tend to focus more on summations). But the dataset is of tremendous value to computational and informatics researchers, and that should be pronounced. I appreciate the Discussion that points out the limitations of how scientific information is currently published, and how it could be improved. One other concern that could be addressed is how long the Allen Institute for AI, which is to be commended for this work, will continue to maintain this tremendously valuable resource. Rating: 9: Top 15% of accepted papers, strong accept Confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
\documentclass[11pt,a4paper]{article} \PassOptionsToPackage{hyphens}{url}\usepackage{hyperref} % \usepackage[hyperref]{acl2020} \usepackage{times} \usepackage{latexsym} \usepackage{enumitem} \usepackage{graphicx} \usepackage{booktabs} \usepackage{tabularx} \renewcommand{\UrlFont}{\ttfamily\small} \usepackage{xspace} % \usepackage{microtype} \aclfinalcopy % \setlength\titlebox{8cm} \newcommand\BibTeX{B\textsc{ib}\TeX} \newcommand{\covid}{\textsc{Covid-19}\xspace} \newcommand{\cord}{\textsc{CORD-19}\xspace} \newcommand{\sars}{\textsc{SARS}\xspace} \newcommand{\mers}{\textsc{MERS}\xspace} \newcommand{\swine}{\textsc{H1N1}\xspace} \newcommand{\trec}{\textsc{TREC-COVID}\xspace} \newcommand\kyle[1]{{\color{red}\{\textit{#1}\}$_{KL}$}} \newcommand\lucy[1]{{\color{orange}\{\textit{#1}\}$_{LLW}$}} \newcommand\todoit[1]{{\color{red}\{TODO: \textit{#1}\}}} \newcommand\todo{{\color{red}{TODO}}\xspace} \title{\cord: The \covid Open Research Dataset} \author{ Lucy Lu Wang$^{1,}$\Thanks{ denotes equal contribution} \quad Kyle Lo$^{1,}$\footnotemark[1] \quad Yoganand Chandrasekhar$^1$ \quad Russell Reas$^1$ \quad \\ {\bf Jiangjiang Yang$^1$ \quad Douglas Burdick$^2$ \quad Darrin Eide$^3$ \quad Kathryn Funk$^4$ \quad } \\ {\bf Yannis Katsis$^2$ \quad Rodney Kinney$^1$ \quad Yunyao Li$^2$ \quad Ziyang Liu$^6$ \quad } \\ {\bf William Merrill$^1$ \quad Paul Mooney$^5$ \quad Dewey Murdick$^7$ \quad Devvret Rishi$^5$ \quad } \\ {\bf Jerry Sheehan$^4$ \quad Zhihong Shen$^3$ \quad Brandon Stilson$^1$ \quad Alex D. Wade$^6$ \quad } \\ {\bf Kuansan Wang$^3$ \quad Nancy Xin Ru Wang $^2$ \quad Chris Wilhelm$^1$ \quad Boya Xie$^3$ \quad } \\ {\bf Douglas Raymond$^1$ \quad Daniel S. Weld$^{1,8}$ \quad Oren Etzioni$^1$ \quad Sebastian Kohlmeier$^1$ \quad } \\ [2mm] $^1$Allen Institute for AI \quad $^2$ IBM Research \quad $^3$Microsoft Research \\ $^4$National Library of Medicine \quad $^5$Kaggle \quad $^6$Chan Zuckerberg Initiative \\ $^7$Georgetown University \quad $^8$University of Washington \\ {\tt\small \{lucyw, kylel\}@allenai.org} } \date{} \begin{document} \maketitle \begin{abstract} The \covid Open Research Dataset (\cord) is a growing\footnote{The dataset continues to be updated daily with papers from new sources and the latest publications. Statistics reported in this article are up-to-date as of version \textsc{2020-06-14}.} resource of scientific papers on \covid and related historical coronavirus research. \cord is designed to facilitate the development of text mining and information retrieval systems over its rich collection of metadata and structured full text papers. Since its release, \cord has been downloaded\footnote{\href{https://www.semanticscholar.org/cord19}{https://www.semanticscholar.org/cord19}} over 200K times and has served as the basis of many \covid text mining and discovery systems. In this article, we describe the mechanics of dataset construction, highlighting challenges and key design decisions, provide an overview of how \cord has been used, and describe several shared tasks built around the dataset. We hope this resource will continue to bring together the computing community, biomedical experts, and policy makers in the search for effective treatments and management policies for \covid. 
\end{abstract} \section{Introduction} On March 16, 2020, the Allen Institute for AI (AI2), in collaboration with our partners at The White House Office of Science and Technology Policy (OSTP), the National Library of Medicine (NLM), the Chan Zuckerburg Initiative (CZI), Microsoft Research, and Kaggle, coordinated by Georgetown University's Center for Security and Emerging Technology (CSET), released the first version of \cord. This resource is a large and growing collection of publications and preprints on \covid and related historical coronaviruses such as \sars and \mers. The initial release consisted of 28K papers, and the collection has grown to more than 140K papers over the subsequent weeks. Papers and preprints from several archives are collected and ingested through the Semantic Scholar literature search engine,\footnote{\href{https://semanticscholar.org/}{https://semanticscholar.org/}} metadata are harmonized and deduplicated, and paper documents are processed through the pipeline established in \citet{lo-wang-2020-s2orc} to extract full text (more than 50\% of papers in \cord have full text). We commit to providing regular updates to the dataset until an end to the \covid crisis is foreseeable. \begin{figure}[tbp!] \centering \includegraphics[width=\columnwidth]{cord19_dset.png} \caption{Papers and preprints are collected from different sources through Semantic Scholar. Released as part of \cord are the harmonized and deduplicated metadata and full text JSON.} \label{fig:dataset} \end{figure} \cord aims to connect the machine learning community with biomedical domain experts and policy makers in the race to identify effective treatments and management policies for \covid. The goal is to harness these diverse and complementary pools of expertise to discover relevant information more quickly from the literature. Users of the dataset have leveraged AI-based techniques in information retrieval and natural language processing to extract useful information. Responses to \cord have been overwhelmingly positive, with the dataset being downloaded over 200K times in the three months since its release. The dataset has been used by clinicians and clinical researchers to conduct systematic reviews, has been leveraged by data scientists and machine learning practitioners to construct search and extraction tools, and is being used as the foundation for several successful shared tasks. We summarize research and shared tasks in Section~\ref{sec:research_directions}. In this article, we briefly describe: \begin{enumerate}[noitemsep] \item The content and creation of \cord, \item Design decisions and challenges around creating the dataset, \item Research conducted on the dataset, and how shared tasks have facilitated this research, and \item A roadmap for \cord going forward. \end{enumerate} \section{Dataset} \label{sec:dataset} \cord integrates papers and preprints from several sources (Figure~\ref{fig:dataset}), where a paper is defined as the base unit of published knowledge, and a preprint as an unpublished but publicly available counterpart of a paper. Throughout the rest of Section~\ref{sec:dataset}, we discuss papers, though the same processing steps are adopted for preprints. First, we ingest into Semantic Scholar paper metadata and documents from each source. 
Each paper is associated with bibliographic metadata, like title, authors, publication venue, etc, as well as unique identifiers such as a DOI, PubMed Central ID, PubMed ID, the WHO Covidence \#,\footnote{\label{footnote:who}\href{https://www.who.int/emergencies/diseases/novel-coronavirus-2019/global-research-on-novel-coronavirus-2019-ncov}{https://www.who.int/emergencies/diseases/novel-coronavirus-2019/global-research-on-novel-coronavirus-2019-ncov}} MAG identifier \citep{Shen2018AWS}, and others. Some papers are associated with documents, the physical artifacts containing paper content; these are the familiar PDFs, XMLs, or physical print-outs we read. For the \cord effort, we generate harmonized and deduplicated metadata as well as structured full text parses of paper documents as output. We provide full text parses in cases where we have access to the paper documents, and where the documents are available under an open access license (e.g. Creative Commons (CC),\footnote{\href{https://creativecommons.org/}{https://creativecommons.org/}} publisher-specific \covid licenses,\footnote{\label{footnote:pmc_covid}\href{https://www.ncbi.nlm.nih.gov/pmc/about/covid-19/}{https://www.ncbi.nlm.nih.gov/pmc/about/covid-19/}} or identified as open access through DOI lookup in the Unpaywall\footnote{\href{https://unpaywall.org/}{https://unpaywall.org/}} database). \subsection{Sources of papers} Papers in \cord are sourced from PubMed Central (PMC), PubMed, the World Health Organization's Covid-19 Database,\textsuperscript{\ref{footnote:who}} and preprint servers bioRxiv, medRxiv, and arXiv. The PMC Public Health Emergency Covid-19 Initiative\textsuperscript{\ref{footnote:pmc_covid}} expanded access to \covid literature by working with publishers to make coronavirus-related papers discoverable and accessible through PMC under open access license terms that allow for reuse and secondary analysis. BioRxiv and medRxiv preprints were initially provided by CZI, and are now ingested through Semantic Scholar along with all other included sources. We also work directly with publishers such as Elsevier\footnote{\label{footnote:elsevier}\href{https://www.elsevier.com/connect/coronavirus-information-center}{https://www.elsevier.com/connect/coronavirus-information-center}} and Springer Nature,\footnote{\href{https://www.springernature.com/gp/researchers/campaigns/coronavirus}{https://www.springernature.com/gp/researchers/\\campaigns/coronavirus}} to provide full text coverage of relevant papers available in their back catalog. All papers are retrieved given the query\footnote{Adapted from the Elsevier COVID-19 site\textsuperscript{\ref{footnote:elsevier}}}: \begin{quote} \footnotesize\texttt{"COVID" OR "COVID-19" OR "Coronavirus" OR "Corona virus" OR "2019-nCoV" OR "SARS-CoV" OR "MERS-CoV" OR "Severe Acute Respiratory Syndrome" OR "Middle East Respiratory Syndrome"} \end{quote} \noindent Papers that match on these keywords in their title, abstract, or body text are included in the dataset. Query expansion is performed by PMC on these search terms, affecting the subset of papers in \cord retrieved from PMC. \subsection{Processing metadata} \label{sec:metadata_processing} The initial collection of sourced papers suffers from duplication and incomplete or conflicting metadata. 
We perform the following operations to harmonize and deduplicate all metadata: \begin{enumerate}[noitemsep] \item Cluster papers using paper identifiers \item Select canonical metadata for each cluster \item Filter clusters to remove unwanted entries \end{enumerate} \paragraph{Clustering papers} We cluster papers if they overlap on any of the following identifiers: \emph{\{doi, pmc\_id, pubmed\_id, arxiv\_id, who\_covidence\_id, mag\_id\}}. If two papers from different sources have an identifier in common and no other identifier conflicts between them, we assign them to the same cluster. Each cluster is assigned a unique identifier \textbf{\textsc{cord\_uid}}, which persists between dataset releases. No existing identifier, such as DOI or PMC ID, is sufficient as the primary \cord identifier. Some papers in PMC do not have DOIs; some papers from the WHO, publishers, or preprint servers like arXiv do not have PMC IDs or DOIs. Occasionally, conflicts occur. For example, a paper $c$ with $(doi, pmc\_id, pubmed\_id)$ identifiers $(x, null, z')$ might share identifier $x$ with a cluster of papers $\{a, b\}$ that has identifiers $(x, y, z)$, but has a conflict $z' \neq z$. In this case, we choose to create a new cluster $\{c\}$, containing only paper $c$.\footnote{This is a conservative clustering policy in which any metadata conflict prohibits clustering. An alternative policy would be to cluster if any identifier matches, under which $a$, $b$, and $c$ would form one cluster with identifiers $(x, y, [z, z'])$.} \paragraph{Selecting canonical metadata} Among each cluster, the canonical entry is selected to prioritize the availability of document files and the most permissive license. For example, between two papers with PDFs, one available under a CC license and one under a more restrictive \covid-specific copyright license, we select the CC-licensed paper entry as canonical. If any metadata in the canonical entry are missing, values from other members of the cluster are promoted to fill in the blanks. \paragraph{Cluster filtering} Some entries harvested from sources are not papers, and instead correspond to materials like tables of contents, indices, or informational documents. These entries are identified in an ad hoc manner and removed from the dataset. \subsection{Processing full text} Most papers are associated with one or more PDFs.\footnote{PMC papers can have multiple associated PDFs per paper, separating the main text from supplementary materials.} To extract full text and bibliographies from each PDF, we use the PDF parsing pipeline created for the S2ORC dataset \cite{lo-wang-2020-s2orc}.\footnote{One major difference in full text parsing for \cord is that we do not use ScienceParse,\footnotemark~as we always derive this metadata from the sources directly.}\footnotetext{\href{https://github.com/allenai/science-parse}{https://github.com/allenai/science-parse}} In \cite{lo-wang-2020-s2orc}, we introduce the S2ORC JSON format for representing scientific paper full text, which is used as the target output for paper full text in \cord. The pipeline involves: \begin{enumerate}[noitemsep] \item Parse all PDFs to TEI XML files using GROBID\footnote{\href{https://github.com/kermitt2/grobid}{https://github.com/kermitt2/grobid}} \cite{Lopez2009GROBIDCA} \item Parse all TEI XML files to S2ORC JSON \item Postprocess to clean up links between inline citations and bibliography entries. 
\end{enumerate} \noindent We additionally parse JATS XML\footnote{\href{https://jats.nlm.nih.gov/}{https://jats.nlm.nih.gov/}} files available for PMC papers using a custom parser, generating the same target S2ORC JSON format. This creates two sets of full text JSON parses associated with the papers in the collection, one set originating from PDFs (available from more sources), and one set originating from JATS XML (available only for PMC papers). Each PDF parse has an associated SHA, the 40-digit SHA-1 of the associated PDF file, while each XML parse is named using its associated PMC ID. Around 48\% of \cord papers have an associated PDF parse, and around 37\% have an XML parse, with the latter nearly a subset of the former. Most PDFs ($>$90\%) are successfully parsed. Around 2.6\% of \cord papers are associated with multiple PDF SHA, due to a combination of paper clustering and the existence of supplementary PDF files. \subsection{Table parsing} Since the May 12, 2020 release of \cord, we also release selected HTML table parses. Tables contain important numeric and descriptive information such as sample sizes and results, which are the targets of many information extraction systems. A separate PDF table processing pipeline is used, consisting of table extraction and table understanding. \emph{Table extraction} is based on the Smart Document Understanding (SDU) capability included in IBM Watson Discovery.\footnote{\href{https://www.ibm.com/cloud/watson-discovery}{https://www.ibm.com/cloud/watson-discovery}} SDU converts a given PDF document from its native binary representation into a text-based representation like HTML which includes both identified document structures (e.g., tables, section headings, lists) and formatting information (e.g. positions for extracted text). \emph{Table understanding} (also part of Watson Discovery) then annotates the extracted tables with additional semantic information, such as column and row headers and table captions. We leverage the Global Table Extractor (GTE)~\cite{Zheng2020GlobalTE}, which uses a specialized object detection and clustering technique to extract table bounding boxes and structures. All PDFs are processed through this table extraction and understanding pipeline. If the Jaccard similarity of the table captions from the table parses and \cord parses is above 0.9, we insert the HTML of the matched table into the full text JSON. We extract 188K tables from 54K documents, of which 33K tables are successfully matched to tables in 19K (around 25\%) full text documents in \cord. Based on preliminary error analysis, we find that match failures are primarily due to caption mismatches between the two parse schemes. Thus, we plan to explore alternate matching functions, potentially leveraging table content and document location as additional features. See Appendix \ref{app:tables} for example table parses. \subsection{Dataset contents} \begin{figure}[tbp!] \centering \includegraphics[width=\columnwidth]{papers_per_year.png} \caption{The distribution of papers per year in \cord. A spike in publications occurs in 2020 in response to \covid.} \label{fig:year} \end{figure} \cord has grown rapidly, now consisting of over 140K papers with over 72K full texts. Over 47K papers and 7K preprints on \covid and coronaviruses have been released since the start of 2020, comprising nearly 40\% of papers in the dataset. \begin{table}[tbp!] 
\setlength{\tabcolsep}{.25em} \footnotesize \centering \begin{tabular}{p{34mm}p{15mm}p{17mm}} \toprule Subfield & Count & \% of corpus \\ \midrule Virology & 29567 & 25.5\% \\ Immunology & 15954 & 13.8\% \\ Surgery & 15667 & 13.5\% \\ Internal medicine & 12045 & 10.4\% \\ Intensive care medicine & 10624 & 9.2\% \\ Molecular biology & 7268 & 6.3\% \\ Pathology & 6611 & 5.7\% \\ Genetics & 5231 & 4.5\% \\ Other & 12997 & 11.2\% \\ \bottomrule \end{tabular} \caption{MAG subfield of study for \cord papers.} \label{tab:fos} \end{table} Classification of \cord papers to Microsoft Academic Graph (MAG) \citep{msr:mag1, msr:mag2} fields of study \citep{Shen2018AWS} indicate that the dataset consists predominantly of papers in Medicine (55\%), Biology (31\%), and Chemistry (3\%), which together constitute almost 90\% of the corpus.\footnote{MAG identifier mappings are provided as a supplement on the \cord landing page.} A breakdown of the most common MAG subfields (L1 fields of study) represented in \cord is given in Table~\ref{tab:fos}. Figure~\ref{fig:year} shows the distribution of \cord papers by date of publication. Coronavirus publications increased during and following the SARS and MERS epidemics, but the number of papers published in the early months of 2020 exploded in response to the \covid epidemic. Using author affiliations in MAG, we identify the countries from which the research in CORD-19 is conducted. Large proportions of \cord papers are associated with institutions based in the Americas (around 48K papers), Europe (over 35K papers), and Asia (over 30K papers). \section{Design decision \& challenges} A number of challenges come into play in the creation of \cord. We summarize the primary design requirements of the dataset, along with challenges implicit within each requirement: \paragraph{Up-to-date} Hundreds of new publications on \covid are released every day, and a dataset like \cord can quickly become irrelevant without regular updates. \cord has been updated daily since May 26. A processing pipeline that produces consistent results day to day is vital to maintaining a changing dataset. That is, the metadata and full text parsing results must be reproducible, identifiers must be persistent between releases, and changes or new features should ideally be compatible with previous versions of the dataset. \paragraph{Handles data from multiple sources} Papers from different sources must be integrated and harmonized. Each source has its own metadata format, which must be converted to the \cord format, while addressing any missing or extraneous fields. The processing pipeline must also be flexible to adding new sources. \paragraph{Clean canonical metadata} Because of the diversity of paper sources, duplication is unavoidable. Once paper metadata from each source is cleaned and organized into \cord format, we apply the deduplication logic described in Section \ref{sec:metadata_processing} to identify similar paper entries from different sources. We apply a conservative clustering algorithm, combining papers only when they have shared identifiers but no conflicts between any particular class of identifiers. We justify this because it is less harmful to retain a few duplicate papers than to remove a document that is potentially unique and useful. \paragraph{Machine readable full text} To provide accessible and canonical structured full text, we parse content from PDFs and associated paper documents. 
The full text is represented in S2ORC JSON format \citep{lo-wang-2020-s2orc}, a schema designed to preserve most relevant paper structures such as paragraph breaks, section headers, inline references, and citations. S2ORC JSON is simple to use for many NLP tasks, where character-level indices are often employed for annotation of relevant entities or spans. The text and annotation representations in S2ORC share similarities with BioC \citep{Comeau2019PMCTM}, a JSON schema introduced by the BioCreative community for shareable annotations, with both formats leveraging the flexibility of character-based span annotations. However, S2ORC JSON also provides a schema for representing other components of a paper, such as its metadata fields, bibliography entries, and reference objects for figures, tables, and equations. We leverage this flexible and somewhat complete representation of S2ORC JSON for \cord. We recognize that converting between PDF or XML to JSON is lossy. However, the benefits of a standard structured format, and the ability to reuse and share annotations made on top of that format have been critical to the success of \cord. \paragraph{Observes copyright restrictions} Papers in \cord and academic papers more broadly are made available under a variety of copyright licenses. These licenses can restrict or limit the abilities of organizations such as AI2 from redistributing their content freely. Although much of the \covid literature has been made open access by publishers, the provisions on these open access licenses differ greatly across papers. Additionally, many open access licenses grant the ability to read, or ``consume'' the paper, but may be restrictive in other ways, for example, by not allowing republication of a paper or its redistribution for commercial purposes. The curator of a dataset like \cord must pass on best-to-our-knowledge licensing information to the end user. \section{Research directions} \label{sec:research_directions} \begin{figure}[tbp!] \centering \includegraphics[width=\columnwidth]{cord19_tasks.png} \caption{An example information retrieval and extraction system using \cord: Given an input query, the system identifies relevant papers (yellow highlighted rows) and extracts text snippets from the full text JSONs as supporting evidence.} \label{fig:tasks} \end{figure} We provide a survey of various ways researchers have made use of \cord. We organize these into four categories: \emph{(i)} direct usage by clinicians and clinical researchers (\S\ref{sec:by_clinical_experts}), \emph{(ii)} tools and systems to assist clinicians (\S\ref{sec:for_clinical_experts}), \emph{(iii)} research to support further text mining and NLP research (\S\ref{sec:for_nlp_researchers}), and \emph{(iv)} shared tasks and competitions (\S\ref{sec:shared_tasks}). \subsection{Usage by clinical researchers} \label{sec:by_clinical_experts} \cord has been used by medical experts as a paper collection for conducting systematic reviews. These reviews address questions about \covid include infection and mortality rates in different demographics \cite{Han2020.who-is-more-susceptible}, symptoms of the disease \citep{Parasa2020PrevalenceOG}, identifying suitable drugs for repurposing \cite{sadegh2020exploring}, management policies \cite{Yaacoube-bmj-safe-management-bodies}, and interactions with other diseases \cite{Crisan-Dabija-tuberculosis-covid19, Popa-inflammatory-bowel-diseases}. 
\subsection{Tools for clinicians} \label{sec:for_clinical_experts} Challenges for clinicians and clinical researchers during the current epidemic include \textit{(i)} keeping up to to date with recent papers about \covid, \textit{(ii)} identifying useful papers from historical coronavirus literature, \textit{(iii)} extracting useful information from the literature, and \textit{(iv)} synthesizing knowledge from the literature. To facilitate solutions to these challenges, dozens of tools and systems over \cord have already been developed. Most combine elements of text-based information retrieval and extraction, as illustrated in Figure~\ref{fig:tasks}. We have compiled a list of these efforts on the \cord public GitHub repository\footnote{\href{https://github.com/allenai/cord19}{https://github.com/allenai/cord19}} and highlight some systems in Table \ref{tab:other_tasks}.\footnote{There are many Search and QA systems to survey. We have chosen to highlight the systems that were made publicly-available within a few weeks of the \cord initial release.} \newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}p{#1}} \begin{table*}[tbh!] \small \begin{tabularx}{\textwidth}{L{20mm}p{20mm}p{40mm}X} \toprule \textbf{Task} & \textbf{Project} & \textbf{Link} & \textbf{Description} \\ \midrule \textbf{Search and \newline discovery} & \textsc{Neural Covidex} & \href{https://covidex.ai/}{https://covidex.ai/} & Uses a T5-base \cite{raffel2019exploring} unsupervised reranker on BM25 \cite{Jones2000APM} \\ \cline{2-4} & \textsc{CovidScholar} & \href{https://covidscholar.org}{https://covidscholar.org/} & Adapts \citet{Weston2019} system for entity-centric queries \\ \cline{2-4} & \textsc{KDCovid} & \href{http://kdcovid.nl/about.html}{http://kdcovid.nl/about.html} & Uses BioSentVec \cite{biosentvec} similarity to identify relevant sentences \\ \cline{2-4} & \textsc{Spike-Cord} & \href{https://spike.covid-19.apps.allenai.org}{https://spike.covid-19.apps.allenai.org} & Enables users to define ``regular expression''-like queries to directly search over full text \\ \midrule \textbf{Question answering} & \textsc{covidask} & \href{https://covidask.korea.ac.kr/}{https://covidask.korea.ac.kr/} & Adapts \citet{seo-etal-2019-real} using BioASQ challenge (Task B) dataset \citep{Tsatsaronis2015AnOO} \\ \cline{2-4} & \textsc{aueb} & \href{http://cslab241.cs.aueb.gr:5000/}{http://cslab241.cs.aueb.gr:5000/} & Adapts \citet{mcdonald2018deep} using \citet{Tsatsaronis2015AnOO} \\ \midrule \textbf{Summariz-ation} & Vespa & \href{https://cord19.vespa.ai/}{https://cord19.vespa.ai/} & Generates summaries of paper abstracts using T5 \citep{raffel2019exploring} \\ \midrule \textbf{Recommend-ation} & Vespa & \href{https://cord19.vespa.ai/}{https://cord19.vespa.ai/} & Recommends ``similar papers'' using Sentence-BERT \cite{reimers-gurevych-2019-sentence} and SPECTER embeddings \cite{specter2020cohan} \\ \midrule \textbf{Entailment} & COVID papers browser & \href{https://github.com/gsarti/covid-papers-browser}{https://github.com/gsarti/covid-papers-browser} & Similar to \textsc{KDCovid}, but uses embeddings from BERT models trained on NLI datasets \\ \midrule \textbf{Claim \newline verification} & SciFact & \href{https://scifact.apps.allenai.org}{https://scifact.apps.allenai.org} & Uses RoBERTa-large \cite{liu2019roberta} to find Support/Refute evidence for \covid claims \\ \midrule \textbf{Assistive lit. 
review} & ASReview & \href{https://github.com/asreview/asreview-covid19}{https://github.com/asreview/ asreview-covid19} & Active learning system with a \cord plugin for identifying papers for literature reviews \\ \midrule \textbf{Augmented reading} & Sinequa & \href{https://covidsearch.sinequa.com/app/covid-search/}{https://covidsearch.sinequa.com/ app/covid-search/} & In-browser paper reader with entity highlighting on PDFs \\ \midrule \textbf{Visualization} & SciSight & \href{https://scisight.apps.allenai.org}{https://scisight.apps.allenai.org} & Network visualizations for browsing research groups working on \covid \\ \bottomrule \end{tabularx} \caption{Publicly-available tools and systems for medical experts using \cord.} \label{tab:other_tasks} \end{table*} \subsection{Text mining and NLP research} \label{sec:for_nlp_researchers} The following is a summary of resources released by the NLP community on top of \cord to support other research activities. \paragraph{Information extraction} To support extractive systems, NER and entity linking of biomedical entities can be useful. NER and linking can be performed using NLP toolkits like ScispaCy \cite{neumann-etal-2019-scispacy} or language models like BioBERT-base \cite{Lee2019BioBERTAP} and SciBERT-base \cite{beltagy-etal-2019-scibert} finetuned on biomedical NER datasets. \citet{Wang2020ComprehensiveNE} augments \cord full text with entity mentions predicted from several techniques, including weak supervision using the NLM's Unified Medical Language System (UMLS) Metathesaurus \cite{Bodenreider2004TheUM}. \paragraph{Text classification} Some efforts focus on extracting sentences or passages of interest. For example, \citet{Liang2020IdentifyingRF} uses BERT \cite{devlin-etal-2019-bert} to extract sentences from \cord that contain \covid-related radiological findings. \paragraph{Pretrained model weights} BioBERT and SciBERT have been popular pretrained LMs for \covid-related tasks. DeepSet has released a BERT-base model pretrained on \cord.\footnote{\href{https://huggingface.co/deepset/covid_bert_base}{https://huggingface.co/deepset/covid\_bert\_base}} SPECTER \cite{specter2020cohan} paper embeddings computed using paper titles and abstracts are being released with each \cord update. SeVeN relation embeddings \cite{espinosa-anke-schockaert-2018-seven} between word pairs have also been made available for \cord.\footnote{\href{https://github.com/luisespinosaanke/cord-19-seven}{https://github.com/luisespinosaanke/cord-19-seven}} \paragraph{Knowledge graphs} The Covid Graph project\footnote{\href{https://covidgraph.org/}{https://covidgraph.org/}} releases a \covid knowledge graph built from mining several public data sources, including \cord, and is perhaps the largest current initiative in this space. \citet{Ahamed2020InformationMF} rely on entity co-occurrences in \cord to construct a graph that enables centrality-based ranking of drugs, pathogens, and biomolecules. \subsection{Competitions and Shared Tasks} \label{sec:shared_tasks} The adoption of \cord and the proliferation of text mining and NLP systems built on top of the dataset are supported by several \covid-related competitions and shared tasks. 
\subsubsection{Kaggle} \label{sec:kaggle} Kaggle hosts the \cord Research Challenge,\footnote{\href{https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge}{https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge}} a text-mining challenge that tasks participants with extracting answers to key scientific questions about \covid from the papers in the \cord dataset. Round 1 was initiated with a set of open-ended questions, e.g., \textit{What is known about transmission, incubation, and environmental stability?} and \textit{What do we know about \covid risk factors?} More than 500 teams participated in Round 1 of the Kaggle competition. Feedback from medical experts during Round 1 identified that the most useful contributions took the form of article summary tables. Round 2 subsequently focused on this task of table completion, and resulted in 100 additional submissions. A unique tabular schema is defined for each question, and answers are collected from across different automated extractions. For example, extractions for risk factors should include disease severity and fatality metrics, while extractions for incubation should include time ranges. Sufficient knowledge of COVID-19 is necessary to define these schema, to understand which fields are important to include (and exclude), and also to perform error-checking and manual curation. \subsubsection{TREC} The \trec\footnote{\href{https://ir.nist.gov/covidSubmit/index.html}{https://ir.nist.gov/covidSubmit/index.html}} shared task \cite{trec-covid-jamia,voorhees2020treccovid} assesses systems on their ability to rank papers in \cord based on their relevance to \covid-related topics. Topics are sourced from MedlinePlus searches, Twitter conversations, library searches at OHSU, as well as from direct conversations with researchers, reflecting actual queries made by the community. To emulate real-world surge in publications and rapidly-changing information needs, the shared task is organized in multiple rounds. Each round uses a specific version of \cord, has newly added topics, and gives participants one week to submit per-topic document rankings for judgment. Round 1 topics included more general questions such as \emph{What is the origin of COVID-19?}~and \emph{What are the initial symptoms of COVID-19?}~while Round 3 topics have become more focused, e.g., \emph{What are the observed mutations in the SARS-CoV-2 genome?}~and \emph{What are the longer-term complications of those who recover from COVID-19?} Around 60 medical domain experts, including indexers from NLM and medical students from OHSU and UTHealth, are involved in providing gold rankings for evaluation. \trec opened using the April 1st \cord version and received submissions from over 55 participating teams. \section{Discussion} \label{sec:discussion} Several hundred new papers on \covid are now being published every day. Automated methods are needed to analyze and synthesize information over this large quantity of content. The computing community has risen to the occasion, but it is clear that there is a critical need for better infrastructure to incorporate human judgments in the loop. Extractions need expert vetting, and search engines and systems must be designed to serve users. Successful engagement and usage of \cord speaks to our ability to bridge computing and biomedical communities over a common, global cause. 
From early results of the Kaggle challenge, we have learned which formats are conducive to collaboration, and which questions are the most urgent to answer. However, there is significant work that remains for determining \textit{(i)} which methods are best to assist textual discovery over the literature, \textit{(ii)} how best to involve expert curators in the pipeline, and \textit{(iii)} which extracted results convert to successful \covid treatments and management policies. Shared tasks and challenges, as well as continued analysis and synthesis of feedback will hopefully provide answers to these outstanding questions. Since the initial release of \cord, we have implemented several new features based on community feedback, such as the inclusion of unique identifiers for papers, table parses, more sources, and daily updates. Most substantial outlying features requests have been implemented or addressed at this time. We will continue to update the dataset with more sources of papers and newly published literature as resources permit. \subsection{Limitations} Though we aim to be comprehensive, \cord does not cover many relevant scientific documents on \covid. We have restricted ourselves to research papers and preprints, and do not incorporate other types of documents, such as technical reports, white papers, informational publications by governmental bodies, and more. Including these documents is outside the current scope of \cord, but we encourage other groups to curate and publish such datasets. Within the scope of scientific papers, \cord is also incomplete, though we continue to prioritize the addition of new sources. This has motivated the creation of other corpora supporting \covid NLP, such as LitCovid \citep{Chen2020KeepUW}, which provide complementary materials to \cord derived from PubMed. Though we have since added PubMed as a source of papers in \cord, there are other domains such as the social sciences that are not currently represented, and we hope to incorporate these works as part of future work. We also note the shortage of foreign language papers in \cord, especially Chinese language papers produced during the early stages of the epidemic. These papers may be useful to many researchers, and we are working with collaborators to provide them as supplementary data. However, challenges in both sourcing and licensing these papers for re-publication are additional hurdles. \subsection{Call to action} Though the full text of many scientific papers are available to researchers through \cord, a number of challenges prevent easy application of NLP and text mining techniques to these papers. First, the primary distribution format of scientific papers -- PDF -- is not amenable to text processing. The PDF file format is designed to share electronic documents rendered faithfully for reading and printing, and mixes visual with semantic information. Significant effort is needed to coerce PDF into a format more amenable to text mining, such as JATS XML,\footnote{\label{footnote:jats}\href{https://www.niso.org/publications/z3996-2019-jats}{https://www.niso.org/publications/z3996-2019-jats}} BioC \citep{Comeau2019PMCTM}, or S2ORC JSON \citep{lo-wang-2020-s2orc}, which is used in \cord. Though there is substantial work in this domain, we can still benefit from better PDF parsing tools for scientific documents. As a complement, scientific papers should also be made available in a structured format like JSON, XML, or HTML. 
Second, there is a clear need for more scientific content to be made accessible to researchers. Some publishers have made \covid papers openly available during this time, but both the duration and scope of these epidemic-specific licenses are unclear. Papers describing research in related areas (e.g., on other infectious diseases or relevant biological pathways) have also not been made open access, and are therefore unavailable in \cord or elsewhere. Securing release rights for papers not yet in \cord but relevant for \covid research is a significant portion of future work, led by the PMC \covid Initiative.\textsuperscript{\ref{footnote:pmc_covid}}

Lastly, there is no standard format for representing paper metadata. Existing schemas like the JATS XML NISO standard\textsuperscript{\ref{footnote:jats}} or library science standards like \textsc{bibframe}\footnote{\href{https://www.loc.gov/bibframe/}{https://www.loc.gov/bibframe/}} and Dublin Core\footnote{\href{https://www.dublincore.org/specifications/dublin-core/dces/}{https://www.dublincore.org/specifications/dublin-core/dces/}} have been adopted to represent paper metadata. However, these standards can be too coarse-grained to capture all necessary metadata elements, or may lack a strict schema, causing representations to vary greatly across the publishers who use them. To improve metadata coherence across sources, the community must define and agree upon an appropriate standard of representation.

\subsection*{Summary}

This project offers a blueprint for how the community can use machine learning to advance scientific research. By allowing computational access to the papers in \cord, we increase our ability to perform discovery over these texts. We hope the dataset and the projects built on it will serve as a template for future work in this area. We also believe there are substantial improvements to be made in the ways we publish, share, and work with scientific papers, and we offer a few suggestions above that could dramatically increase community productivity, reduce redundant effort, and result in better discovery and understanding of the scientific literature.

Through \cord, we have learned the importance of bringing together different communities around the same scientific cause. It is clearer than ever that automated text analysis is not the solution, but rather one tool among many that can be directed to combat the \covid epidemic. Crucially, the systems and tools we build must be designed to serve a use case, whether that is improving information retrieval for clinicians and medical professionals, summarizing the conclusions of the latest observational research or clinical trials, or converting these findings into a format that is easily digestible by healthcare consumers.

\section*{Acknowledgments}

This work was supported in part by NSF Convergence Accelerator award 1936940, ONR grant N00014-18-1-2193, and the University of Washington WRF/Cable Professorship. We thank The White House Office of Science and Technology Policy, the National Library of Medicine at the National Institutes of Health, Microsoft Research, the Chan Zuckerberg Initiative, and Georgetown University's Center for Security and Emerging Technology for co-organizing the \cord initiative. We thank Michael Kratsios, the Chief Technology Officer of the United States, and The White House Office of Science and Technology Policy for providing the initial seed set of questions for the Kaggle \cord research challenge.
We thank Kaggle for coordinating the \cord research challenge. In particular, we acknowledge Anthony Goldbloom for providing feedback on \cord and for involving us in discussions around the Kaggle literature review tables project. We thank the National Institute of Standards and Technology (NIST), National Library of Medicine (NLM), Oregon Health and Science University (OHSU), and University of Texas Health Science Center at Houston (UTHealth) for co-organizing the \trec shared task. In particular, we thank our co-organizers -- Steven Bedrick (OHSU), Aaron Cohen (OHSU), Dina Demner-Fushman (NLM), William Hersh (OHSU), Kirk Roberts (UTHealth), Ian Soboroff (NIST), and Ellen Voorhees (NIST) -- for feedback on the design of \cord. We acknowledge our partners at Elsevier and Springer Nature for providing additional full text coverage of papers included in the corpus. We thank Bryan Newbold from the Internet Archive for providing feedback on data quality and helpful comments on early drafts of the manuscript. We thank Rok Jun Lee, Hrishikesh Sathe, Dhaval Sonawane, and Sudarshan Thitte from IBM Watson AI for their help with table parsing. We also acknowledge and thank our collaborators from AI2: Paul Sayre and Sam Skjonsberg for providing front-end support for \cord and \trec, Michael Schmitz for setting up the \cord Discourse community forums, Adriana Dunn for creating webpage content and marketing, Linda Wagner for collecting community feedback, Jonathan Borchardt, Doug Downey, Tom Hope, Daniel King, and Gabriel Stanovsky for contributing supplemental data to the \cord effort, Alex Schokking for his work on the Semantic Scholar \covid Research Feed, Darrell Plessas for technical support, and Carissa Schoenick for help with public relations.

\bibliography{cord19}
\bibliographystyle{acl_natbib}

\appendix

\section{Table parsing results}
\label{app:tables}

\begin{table*}[th!]
\centering
\small
\begin{tabular}{llL{40mm}}
\toprule
\textbf{PDF Representation} & \textbf{HTML Table Parse} & \textbf{Source \& Description} \\
\midrule
\raisebox{-0.6\totalheight}{\includegraphics[width=0.32\textwidth]{tables/pdf1.png}} & \raisebox{-0.6\totalheight}{\includegraphics[width=0.35\textwidth]{tables/parse1.png}} & From \citet{Hothorn2020RelativeCD}: Exact structure; minimal row rules \\ [2.0cm]
\raisebox{-0.6\totalheight}{\includegraphics[width=0.32\textwidth]{tables/pdf2.png}} & \raisebox{-0.6\totalheight}{\includegraphics[width=0.35\textwidth]{tables/parse2.png}} & From \citet{LpezFando2020ManagementOF}: Exact structure; colored rows \\ [1.4cm]
\raisebox{-0.6\totalheight}{\includegraphics[width=0.32\textwidth]{tables/pdf3.png}} & \raisebox{-0.6\totalheight}{\includegraphics[width=0.35\textwidth]{tables/parse3.png}} & From \citet{Stringhini2020SeroprevalenceOA}: Minor span errors; partially colored background with minimal row rules \\ [2.0cm]
\raisebox{-0.6\totalheight}{\includegraphics[width=0.32\textwidth]{tables/pdf4.png}} & \raisebox{-0.6\totalheight}{\includegraphics[width=0.35\textwidth]{tables/parse4.png}} & From \citet{Fathi2020PROGNOSTICVO}: Overmerge and span errors; some section headers have row rules \\ [2.2cm]
\raisebox{-0.6\totalheight}{\includegraphics[width=0.32\textwidth]{tables/pdf5.png}} & \raisebox{-0.6\totalheight}{\includegraphics[width=0.35\textwidth]{tables/parse5.png}} & From \citet{Kaushik2020MultisystemIS}: Over-splitting errors; full row and column rules with large vertical spacing in cells \\
\bottomrule
\end{tabular}
\caption{A sample of table parses.
Though most table structure is preserved accurately, the diversity of table representations results in some errors.}
\label{tab:table_parses}
\end{table*}

There is high variance in how tables are represented across paper PDFs. The goal of table parsing is to extract all tables from PDFs and represent them in HTML table format, along with associated titles and headings. In Table \ref{tab:table_parses}, we provide several example table parses, showing this diversity of representations, the structure of the resulting parses, and some common parse errors.
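As a simple illustration of the target output, a small table with a spanning header would be emitted as plain HTML along the following lines (cell contents elided); note that span attributes such as \texttt{colspan} must be inferred from visual layout alone, which plausibly explains the span errors noted in Table~\ref{tab:table_parses}:

\begin{verbatim}
<!-- illustrative sketch; real parses vary in structure -->
<table>
  <tr><th colspan="2">Patient characteristics</th></tr>
  <tr><th>Variable</th><th>Value</th></tr>
  <tr><td>Age, median (IQR)</td><td>...</td></tr>
  <tr><td>Female, n (%)</td><td>...</td></tr>
</table>
\end{verbatim}

\end{document}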
https://openreview.net/forum?id=0gLzHrE_t3z
https://arxiv.org/abs/2004.10706
Please evaluate the significance and value of the dataset described in this paper for researchers and clinicians in the field of Covid-19.
"CORD-19 is an excellent resource with an impressive integration work for the research community to (...TRUNCATED)
"\n\\documentclass[11pt,a4paper]{article}\n\\PassOptionsToPackage{hyphens}{url}\\usepackage{hyperref(...TRUNCATED)
https://openreview.net/forum?id=0gLzHrE_t3z
https://arxiv.org/abs/2004.10706
"Please evaluate the paper based on its description and development of the CORD-19 data set, its imp(...TRUNCATED)
"nice application to new data set to be made available\nThis paper explores gender differences in li(...TRUNCATED)
"\\documentclass[11pt,a4paper]{article}\n\\usepackage[hyperref]{acl2020}\n\\usepackage{latexsym}\n\\(...TRUNCATED)
https://openreview.net/forum?id=mlmwkAdIeK
https://arxiv.org/abs/2008.05713
"Please evaluate the paper based on its exploration of gender differences in linguistic productions (...TRUNCATED)
"Overall the paper is okay but fails to provide the significance of the work.\nThis paper aims to un(...TRUNCATED)
"\\documentclass[11pt,a4paper]{article}\n\\usepackage[hyperref]{acl2020}\n\\usepackage{latexsym}\n\\(...TRUNCATED)
https://openreview.net/forum?id=mlmwkAdIeK
https://arxiv.org/abs/2008.05713
"Please evaluate the significance and novelty of the paper, as well as the clarity of the results an(...TRUNCATED)
"Overall the paper is well written, contains re-usable data, and describes clear results.\nQuality:\(...TRUNCATED)
"\\documentclass[11pt,a4paper]{article}\n\\usepackage[hyperref]{acl2020}\n\\usepackage{latexsym}\n\\(...TRUNCATED)
https://openreview.net/forum?id=mlmwkAdIeK
https://arxiv.org/abs/2008.05713
Please evaluate the overall quality, clarity, originality, and significance of my paper.
"timely contribution, could be better positioned with regard to previous work\nThis paper presents a(...TRUNCATED)
"\\pdfoutput=1\n\n\\documentclass[11pt,a4paper]{article}\n\\usepackage[hyperref]{acl2020}\n\\usepack(...TRUNCATED)
https://openreview.net/forum?id=qd51R0JNLl
https://arxiv.org/abs/2005.12522
"Please evaluate the paper based on its dataset of hand-labeled questions related to COVID-19, consi(...TRUNCATED)