First run at filling out the model card
README.md
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
## Dataset Description

Auditor review data collected by the News Department.
- **Point of Contact:** COE for Auditing, currently sue@demo.org
### Dataset Summary

Auditor sentiment dataset of sentences from financial news. The dataset consists of *** sentences from English-language financial news, categorized by sentiment. The dataset is divided by the agreement rate of 5-8 annotators.
### Supported Tasks and Leaderboards

Sentiment classification
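To illustrate the task: each sentence receives one of three sentiment labels. A toy keyword baseline (purely illustrative; not part of the dataset or its tooling) might look like:

```python
# Toy rule-based baseline for three-way financial sentiment.
# The cue words below are illustrative guesses, not from the dataset.
POSITIVE_CUES = {"rose", "grew", "increased", "profit", "improved"}
NEGATIVE_CUES = {"fell", "fall", "declined", "loss", "decreased"}

def baseline_sentiment(sentence: str) -> str:
    """Classify a sentence by counting positive vs. negative cue words."""
    words = set(sentence.lower().split())
    pos = len(words & POSITIVE_CUES)
    neg = len(words & NEGATIVE_CUES)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"
```

A trained classifier would replace this, but the interface (sentence in, one of three string labels out) is the same shape as the dataset's rows.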
### Languages

English
## Dataset Structure
### Data Instances

```json
{
  "sentence": "Pharmaceuticals group Orion Corp reported a fall in its third-quarter earnings that were hit by larger expenditures on R&D and marketing .",
  "label": "negative"
}
```
### Data Fields

- sentence: a tokenized line from the dataset
- label: a label corresponding to the class as a string: 'positive' - (2), 'neutral' - (1), or 'negative' - (0)
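A minimal sketch of the label encoding, assuming the usual 0-indexed scheme (the card specifies 'positive' = 2 and 'neutral' = 1; 'negative' = 0 is an assumption):

```python
# Map between string labels and integer class ids.
# positive=2 and neutral=1 are stated in the card; negative=0 is assumed.
LABEL2ID = {"negative": 0, "neutral": 1, "positive": 2}
ID2LABEL = {v: k for k, v in LABEL2ID.items()}

def decode_label(class_id: int) -> str:
    """Return the string label for an integer class id."""
    return ID2LABEL[class_id]
```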
### Data Splits

A train/test split was created randomly with a 75/25 split.
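The split can be reproduced in spirit with a seeded random shuffle (a sketch only; the seed and tooling used for the original split are not documented in this card):

```python
import random

def random_split(rows, test_fraction=0.25, seed=42):
    """Randomly partition rows into (train, test) at the given fraction.
    The 75/25 ratio matches the card; the seed here is arbitrary."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n_test = int(len(rows) * test_fraction)
    return rows[n_test:], rows[:n_test]

# With the 4840 sentences described below: 3630 train / 1210 test.
train, test = random_split(range(4840))
```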
## Dataset Creation

### Curation Rationale

To gather our auditor evaluations into one dataset. Previous attempts using off-the-shelf sentiment models reached only 70% F1; this dataset is an attempt to improve on that performance.
### Source Data

#### Initial Data Collection and Normalization

The corpus is made up of English news reports.
#### Who are the source language producers?

The source data was written by various auditors.
### Annotations

#### Annotation process

This release of the auditor reviews covers a collection of 4840 sentences. The selected collection of phrases was annotated by 16 people with adequate background knowledge of financial markets. The subset here is the portion where inter-annotator agreement was greater than 75%.
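The agreement filter described above can be sketched as follows (a hypothetical helper; the card does not specify exactly how the agreement rate was computed):

```python
from collections import Counter

def agreement_rate(annotations):
    """Fraction of annotators who chose the majority label for one sentence."""
    counts = Counter(annotations)
    return counts.most_common(1)[0][1] / len(annotations)

def filter_by_agreement(annotated, threshold=0.75):
    """Keep (sentence, majority_label) pairs whose agreement exceeds the
    threshold, mirroring the >75% cut described in the card."""
    return [
        (sentence, Counter(labels).most_common(1)[0][0])
        for sentence, labels in annotated
        if agreement_rate(labels) > threshold
    ]

sample = [
    ("Profit rose sharply.", ["positive"] * 15 + ["neutral"]),    # 15/16 agree
    ("Results were mixed.", ["neutral"] * 9 + ["negative"] * 7),  # 9/16 agree
]
kept = filter_by_agreement(sample)
# Only the first sentence clears the 75% bar.
```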
|
108 |
#### Who are the annotators?
|
109 |
|
110 |
+
They were pulled from the SME list, names are held by sue@demo.org
|
### Personal and Sensitive Information

There is no personal or sensitive information in this dataset.
## Considerations for Using the Data

### Discussion of Biases

All annotators were from the same institution, so inter-annotator agreement should be understood with this taken into account.
### Other Known Limitations

### Licensing Information

License: Demo.Org Proprietary - DO NOT SHARE