ronald committed · Commit 40ff4fe · 1 Parent(s): 97ab0fa
add datacard
README.md
CHANGED
@@ -14,13 +14,13 @@ size_categories:
 ---


-# Dataset Card for
+# Dataset Card for `scitechnews`

 ## Dataset Description

 - **Repository:** [https://github.com/ronaldahmed/scitechnews]()
 - **Paper:** [‘Don’t Get Too Technical with Me’: A Discourse Structure-Based Framework for Science Journalism]()
-- **Point of Contact:** [mailto:ronald.cardenas@ed.ac.uk
+- **Point of Contact:** [Ronald Cardenas](mailto:ronald.cardenas@ed.ac.uk)

 ### Dataset Summary

@@ -33,7 +33,10 @@ others.


 ### Supported Tasks and Leaderboards

-
+This dataset was curated for the task of Science Journalism: a text-to-text task where the input is a scientific article and the output is a press-release summary.
+This release also includes additional information about the press release and the scientific article, such as the
+press release article body, title, and the authors' names and affiliations.
+
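+As a minimal sketch of this text-to-text framing, the example below runs a generic pretrained summarizer over an article abstract; the checkpoint `facebook/bart-large-cnn` is only an illustrative stand-in, not a model trained for this task:
+
+```python
+from transformers import pipeline
+
+# Illustrative checkpoint; any seq2seq summarization model could stand in here.
+summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
+
+sc_abstract = (
+    "Context: Open Source Software (OSS) projects are typically the result of "
+    "collective efforts performed by developers with different backgrounds..."
+)
+draft = summarizer(sc_abstract, max_length=128, min_length=30)[0]["summary_text"]
+print(draft)  # a press-release-style draft summary
+```
+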

 - `task-category-tag`: The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* [metric name](https://huggingface.co/metrics/metric_name). The ([model name](https://huggingface.co/model_name) or [model class](https://huggingface.co/transformers/model_doc/model_class.html)) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard which can be found at [leaderboard url]() and ranks models based on [metric name](https://huggingface.co/metrics/metric_name) while also reporting [other metric name](https://huggingface.co/metrics/other_metric_name).

@@ -68,20 +71,18 @@ are separated by `\n`. Data is not sentence or word tokenized.<br>

 ```
 {
-  "id":
-  "pr-title":
-  "pr-article":
-  "pr-summary":
-  "sc-title":
-  "sc-abstract":
-  "sc-sections":
-  "sc-section_names":
-  "sc-authors":
-
+  "id": 37,
+  "pr-title": "What's in a Developer's Name?",
+  "pr-article": "In one of the most memorable speeches from William Shakespeare's play, Romeo and Juliet, Juliet ponders, \"What's in a name? That which...",
+  "pr-summary": "Researchers at the University of Waterloo's Cheriton School of Computer Science in Canada found a software developer's perceived race and ethnicity,...",
+  "sc-title": "On the Relationship Between the Developer's Perceptible Race and Ethnicity and the Evaluation of Contributions in OSS",
+  "sc-abstract": "Context: Open Source Software (OSS) projects are typically the result of collective efforts performed by developers with different backgrounds...",
+  "sc-sections": ["In any line of work, diversity regarding race, gender, personality...", "To what extent is the submitter's perceptible race and ethnicity related to...", ...],
+  "sc-section_names": ["INTRODUCTION", "RQ1:", "RQ2:", "RELATED WORK", ...],
+  "sc-authors": ["Reza Nadri | Cheriton School of Computer Science, University of Waterloo", "Gema Rodriguez Perez | Cheriton School of ...", ...]
 }
 ```
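+
+A record like the one above can be inspected directly once the data is loaded. A minimal sketch, assuming the dataset is published on the Hugging Face Hub under an id such as `ronaldahmed/scitechnews` (hypothetical; substitute the actual repository id):
+
+```python
+from datasets import load_dataset
+
+# Hypothetical Hub id; replace with the real one.
+ds = load_dataset("ronaldahmed/scitechnews", split="train")
+
+example = ds[0]
+print(example["pr-title"])           # press-release title
+print(example["sc-section_names"])   # section headings of the scientific article
+
+# Paragraphs within a section are separated by "\n"; the text is not tokenized.
+paragraphs = example["sc-sections"][0].split("\n")
+```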

-Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.


 ### Data Splits
@@ -101,87 +102,53 @@ Provide the sizes of each split. As appropriate, provide any descriptive statist

 ### Curation Rationale

-
+*Science journalism* refers to producing journalistic content that covers topics related to different areas of scientific research. It plays an important role in fostering public understanding of science and its impact.
+However, the sheer volume of scientific literature makes it challenging for journalists to report on every significant discovery, potentially leaving many overlooked.<br>
+We construct a new open-access, high-quality dataset for automatic science journalism that covers a wide range of scientific disciplines.
+

 ### Source Data

-
+Press release snippets are mined from ACM TechNews, and their respective scientific articles are mined from
+reputable open-access journals and conference proceedings.
+

 #### Initial Data Collection and Normalization

-
+We collect archived TechNews snippets between 1999 and 2021 and link them with their respective press release articles.
+Then, we parse each news article for links to the scientific article it reports on.
+We discard samples where we find more than one link to a scientific article in the press release.
+Finally, the scientific articles are retrieved in PDF format and processed using [Grobid](https://github.com/kermitt2/grobid).
+Following the collection strategies of previous scientific summarization datasets, section heading names are retrieved and the article text is divided into sections. We also extract the title and all author names and affiliations.

-If data was collected from other pre-existing datasets, link to source here and to their [Hugging Face version](https://huggingface.co/datasets/dataset_name).

-If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.

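+The single-link filter can be sketched in a few lines; the link patterns below are illustrative rather than the exact heuristics used:
+
+```python
+import re
+from bs4 import BeautifulSoup
+
+# Illustrative patterns for outbound links that point at a scientific article.
+PAPER_LINK = re.compile(r"(doi\.org|arxiv\.org|dl\.acm\.org|ieeexplore\.ieee\.org)")
+
+def paper_links(press_release_html: str) -> list[str]:
+    soup = BeautifulSoup(press_release_html, "html.parser")
+    return [a["href"] for a in soup.find_all("a", href=True) if PAPER_LINK.search(a["href"])]
+
+def keep_sample(press_release_html: str) -> bool:
+    # Keep a sample only if exactly one scientific-article link is found.
+    return len(paper_links(press_release_html)) == 1
+```
+
+For the PDF step, Grobid exposes a REST service that converts a full-text PDF into TEI XML, from which section headings and bodies can be read. A minimal call, assuming a Grobid server running locally on its default port:
+
+```python
+import requests
+
+# Assumes a local Grobid instance (default port 8070).
+with open("paper.pdf", "rb") as pdf:
+    resp = requests.post(
+        "http://localhost:8070/api/processFulltextDocument",
+        files={"input": pdf},
+    )
+tei_xml = resp.text  # TEI XML; sections appear as <div> elements with <head> headings
+```
+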
 #### Who are the source language producers?

-
-
-If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as a variables, particularly gender.
-
-Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
-
-Describe other people represented or mentioned in the data. Where possible, link to references for the information.
-
-### Annotations
-
-If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.
-
-#### Annotation process
-
-If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.
-
-#### Who are the annotators?
-
-If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.
-
-Describe the people or systems who originally created the annotations and their selection criteria if applicable.
+All texts in this dataset (titles, summaries, and article bodies) were produced by humans.

-If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as a variables, particularly gender.
-
-Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
-
-### Personal and Sensitive Information
-
-State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as a variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).
-
-State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history).
-
-If efforts were made to anonymize the data, describe the anonymization process.

 ## Considerations for Using the Data

 ### Social Impact of Dataset

-
-
-
-
-
+The task of automatic science journalism is intended to support journalists, or the researchers themselves, in writing high-quality journalistic content more efficiently and coping with information overload.
+For instance, a journalist could use the summaries generated by our systems as an initial draft, editing them for factual inconsistencies and adding context where needed.
+Although we do not foresee negative societal impact from the task or the accompanying data itself, we point to the
+general challenges related to factuality and bias in machine-generated texts, and call on potential users and developers of science journalism
+applications to exercise caution and follow up-to-date ethical policies.

-### Discussion of Biases
-
-Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.
-
-For Wikipedia text, see for example [Dinan et al 2020 on biases in Wikipedia (esp. Table 1)](https://arxiv.org/abs/2005.00614), or [Blodgett et al 2020](https://www.aclweb.org/anthology/2020.acl-main.485/) for a more general discussion of the topic.
-
-If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.
-
-### Other Known Limitations
-
-If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.

 ## Additional Information

 ### Dataset Curators

-
+- Ronald Cardenas, University of Edinburgh
+- Bingsheng Yao, Rensselaer Polytechnic Institute
+- Dakuo Wang, Northeastern University
+- Yufang Hou, IBM Research Ireland

-### Licensing Information
-
-Provide the license and link to the license webpage if available.

 ### Citation Information

@@ -197,6 +164,3 @@ Provide the [BibTex](http://www.bibtex.org/)-formatted reference for the dataset

 If the dataset has a [DOI](https://www.doi.org/), please provide it here.

-### Contributions
-
-Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.