Datasets: add examples of parsing annotations
#2
by davanstrien - opened
README.md
CHANGED
@@ -168,6 +168,49 @@ Volunteers and Expert annotators

## Considerations for Using the Data

## Accessing the annotations

Each example text has multiple annotations, and these annotations may not always agree. There are various approaches one could take to calculating agreement, including taking a majority vote, rating some annotators more highly than others, or calculating a score based on the annotators' 'votes'. Since there are many ways of doing this, we have not implemented any of them as part of the dataset loading script.

An example of how one could generate an "OCR quality rating" based on the number of annotators who labelled an example with `Illegible OCR`:

```python
from collections import Counter


def calculate_ocr_score(example):
    # Collect the label given by each annotator for this example
    annotator_responses = [response['response'] for response in example['annotator_responses_english']]
    counts = Counter(annotator_responses)
    # Number of annotators who rated the example "Illegible OCR" (0 if none did)
    bad_ocr_ratings = counts.get("Illegible OCR", 0)
    # Higher score = smaller share of "Illegible OCR" votes
    return round(1 - bad_ocr_ratings / len(annotator_responses), 3)


dataset = dataset.map(lambda example: {"ocr_score": calculate_ocr_score(example)})
```
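
With this scoring, an example that no annotator marked as `Illegible OCR` gets a score of 1.0, while an example flagged by every annotator gets 0.0.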

To take the majority vote (or return a tie) based on whether an example is labelled contentious or not:

```python
from collections import Counter


def most_common_vote(example):
    # Collect the label given by each annotator for this example
    annotator_responses = [response['response'] for response in example['annotator_responses_english']]
    counts = Counter(annotator_responses)
    # Vote counts for each label, defaulting to 0 when a label was never used
    contentious_count = counts.get("Contentious according to current standards", 0)
    not_contentious_count = counts.get("Not contentious", 0)
    if contentious_count > not_contentious_count:
        return "contentious"
    if contentious_count < not_contentious_count:
        return "not_contentious"
    return "tied"
```
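
As with the OCR score, the resulting label could then be stored as a new column via `Dataset.map`; the column name `contentious_label` below is only an illustrative choice:

```python
# Hypothetical column name; pick whatever fits your workflow
dataset = dataset.map(lambda example: {"contentious_label": most_common_vote(example)})
```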
### Social Impact of Dataset

This dataset can be used to see how words change in meaning over time