system (HF staff) committed on
Commit 35cc764
1 Parent(s): ef756fd

Update files from the datasets library (from 1.7.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.7.0

Files changed (1)
  1. README.md +23 -22
README.md CHANGED
@@ -1,24 +1,25 @@
  ---
- annotations_creators:
- - crowdsourced
- language_creators:
- - found
- languages:
- - en
- licenses:
- - cc-by-4-0
- multilinguality:
- - monolingual
- size_categories:
- - 100K<n<1M
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - cc-by-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 100K<n<1M
  source_datasets:
  - original
- task_categories:
- - conditional-text-generation
- - text-classification
- task_ids:
- - explanation-generation
- - hate-speech-detection
+ task_categories:
+ - conditional-text-generation
+ - text-classification
+ task_ids:
+ - explanation-generation
+ - hate-speech-detection
+ paperswithcode_id: null
  ---


@@ -27,12 +28,12 @@ task_ids:
  ## Table of Contents
  - [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
- - [Supported Tasks](#supported-tasks)
+ - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
- - [Data Splits Sample Size](#data-splits-sample-size)
+ - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
@@ -65,7 +66,7 @@ Warning: this document and dataset contain content that may be offensive or upse

  Social Bias Frames is a new way of representing the biases and offensiveness that are implied in language. For example, these frames are meant to distill the implication that "women (candidates) are less qualified" behind the statement "we shouldn’t lower our standards to hire more women." The Social Bias Inference Corpus (SBIC) supports large-scale learning and evaluation of social implications with over 150k structured annotations of social media posts, spanning over 34k implications about a thousand demographic groups.

- ### Supported Tasks
+ ### Supported Tasks and Leaderboards

  This dataset supports both classification and generation. Sap et al. developed several models using the SBIC. They report an F1 score of 78.8 in predicting whether the posts in the test set were offensive, an F1 score of 78.6 in predicting whether the posts were intending to be offensive, an F1 score of 80.7 in predicting whether the posts were lewd, and an F1 score of 69.9 in predicting whether the posts were targeting a specific group.

@@ -140,7 +141,7 @@ The data fields are the same among all splits.
  - _dataSource_: a string indicating the source of the post (`t/...`: means Twitter, `r/...`: means a subreddit)


- ### Data Splits Sample Size
+ ### Data Splits

  To ensure that no post appeared in multiple splits, the curators defined a training instance as the post and its three sets of annotations. They then split the dataset into train, validation, and test sets (75%/12.5%/12.5%).

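
For context, the card updated above can be exercised with the `datasets` library this commit syncs from. The following is a minimal sketch, not part of the commit, assuming the card belongs to the dataset published under the repository id `social_bias_frames` and that the split and field names match the card text.

```python
# Minimal sketch: load the dataset described by this card with the datasets
# library (>= 1.7.0). The repository id "social_bias_frames" is an assumption
# based on this card; adjust it if the dataset is hosted under another name.
from datasets import load_dataset

dataset = load_dataset("social_bias_frames")

# The card describes a 75%/12.5%/12.5% train/validation/test split; print the
# number of examples actually found in each split.
for split_name, split in dataset.items():
    print(split_name, len(split))

# The card documents a `dataSource` field shared by all splits, marking the
# post's origin (`t/...` for Twitter, `r/...` for a subreddit).
example = dataset["train"][0]
print(example["dataSource"])
```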