system (HF staff) committed
Commit 985fbdf
0 Parent(s):

Update files from the datasets library (from 1.4.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.4.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,198 @@
---
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- en
licenses:
- cc-by-nc-sa-3-0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- sentiment-classification
---

# Dataset Card for financial_phrasebank

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Kaggle](https://www.kaggle.com/ankurzing/sentiment-analysis-for-financial-news) [ResearchGate](https://www.researchgate.net/publication/251231364_FinancialPhraseBank-v10)
- **Repository:**
- **Paper:** [Arxiv](https://arxiv.org/abs/1307.5336)
- **Leaderboard:** [Kaggle](https://www.kaggle.com/ankurzing/sentiment-analysis-for-financial-news/code) [PapersWithCode](https://paperswithcode.com/sota/sentiment-analysis-on-financial-phrasebank)
- **Point of Contact:**

### Dataset Summary

Polar sentiment dataset of sentences from financial news. The dataset consists of 4840 sentences from English-language financial news, categorised by sentiment. The dataset is divided by the agreement rate of 5-8 annotators.

### Supported Tasks and Leaderboards

Sentiment Classification

### Languages

English

## Dataset Structure

### Data Instances

```
{ "sentence": "Pharmaceuticals group Orion Corp reported a fall in its third-quarter earnings that were hit by larger expenditures on R&D and marketing .",
  "label": "negative"
}
```

### Data Fields

- sentence: a tokenized line from the dataset
- label: the class label as a string: 'positive', 'negative' or 'neutral'

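As a quick illustration, the dataset's `ClassLabel` feature encodes these strings as integer ids in the order `negative`, `neutral`, `positive` (the order recorded in `dataset_infos.json`); a minimal sketch of the mapping:

```python
# Label order as recorded in dataset_infos.json for this dataset.
LABEL_NAMES = ["negative", "neutral", "positive"]


def label_to_id(label: str) -> int:
    """Map a string label to its integer id (assumed ClassLabel order)."""
    return LABEL_NAMES.index(label)


def id_to_label(label_id: int) -> str:
    """Map an integer id back to its string label."""
    return LABEL_NAMES[label_id]


print(label_to_id("negative"))  # 0
print(id_to_label(2))           # positive
```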
### Data Splits

There is no train/validation/test split.

However, the dataset is available in four configurations, depending on the percentage of annotator agreement:

`sentences_50agree`: Number of instances with >=50% annotator agreement: 4846
`sentences_66agree`: Number of instances with >=66% annotator agreement: 4217
`sentences_75agree`: Number of instances with >=75% annotator agreement: 3453
`sentences_allagree`: Number of instances with 100% annotator agreement: 2264

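A configuration name is passed as the second argument to `datasets.load_dataset("financial_phrasebank", ...)`. As a sketch, a small helper (hypothetical, not part of the library) can map a desired minimum agreement level to the matching config name:

```python
def config_for_agreement(min_agreement: int) -> str:
    """Return the config name guaranteeing at least `min_agreement`% agreement.

    Hypothetical helper for illustration; the four config names are the ones
    listed above.
    """
    if min_agreement >= 100:
        return "sentences_allagree"
    for threshold in (75, 66, 50):
        if min_agreement >= threshold:
            return f"sentences_{threshold}agree"
    raise ValueError("no configuration guarantees less than 50% agreement")


print(config_for_agreement(100))  # sentences_allagree
print(config_for_agreement(80))   # sentences_75agree
```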
## Dataset Creation

### Curation Rationale

The key arguments for the low utilization of statistical techniques in
financial sentiment analysis have been the difficulty of implementation for
practical applications and the lack of high quality training data for building
such models. Especially in the case of finance and economic texts, annotated
collections are a scarce resource and many are reserved for proprietary use
only. To resolve the missing training data problem, we present a collection of
∼ 5000 sentences to establish human-annotated standards for benchmarking
alternative modeling techniques.

The objective of the phrase level annotation task was to classify each example
sentence into a positive, negative or neutral category by considering only the
information explicitly available in the given sentence. Since the study is
focused only on financial and economic domains, the annotators were asked to
consider the sentences from the view point of an investor only; i.e. whether
the news may have positive, negative or neutral influence on the stock price.
As a result, sentences which have a sentiment that is not relevant from an
economic or financial perspective are considered neutral.

### Source Data

#### Initial Data Collection and Normalization

The corpus used in this paper is made out of English news on all listed
companies in OMX Helsinki. The news has been downloaded from the LexisNexis
database using an automated web scraper. Out of this news database, a random
subset of 10,000 articles was selected to obtain good coverage across small and
large companies, companies in different industries, as well as different news
sources. Following the approach taken by Maks and Vossen (2010), we excluded
all sentences which did not contain any of the lexicon entities. This reduced
the overall sample to 53,400 sentences, where each has at least one or more
recognized lexicon entity. The sentences were then classified according to the
types of entity sequences detected. Finally, a random sample of ∼5000 sentences
was chosen to represent the overall news database.

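The entity-based filtering step above can be sketched as follows (an illustrative reconstruction with a toy lexicon, not the authors' actual code): a sentence is kept only if it mentions at least one recognized lexicon entity.

```python
# Toy stand-in for the company-name lexicon used in the paper.
LEXICON = {"Orion Corp", "Nokia", "OMX Helsinki"}


def contains_lexicon_entity(sentence: str, lexicon=LEXICON) -> bool:
    """Return True if the sentence mentions at least one lexicon entity."""
    return any(entity in sentence for entity in lexicon)


sentences = [
    "Orion Corp reported a fall in third-quarter earnings .",
    "The weather in Helsinki was unseasonably warm .",
]
# Keep only sentences with a recognized entity, as in the filtering step.
kept = [s for s in sentences if contains_lexicon_entity(s)]
```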
#### Who are the source language producers?

The source data was written by various financial journalists.

### Annotations

#### Annotation process

This release of the financial phrase bank covers a collection of 4840
sentences. The selected collection of phrases was annotated by 16 people with
adequate background knowledge on financial markets.

Given the large number of overlapping annotations (5 to 8 annotations per
sentence), there are several ways to define a majority vote based gold
standard. To provide an objective comparison, we have formed 4 alternative
reference datasets based on the strength of majority agreement: all annotators
agree, >=75% of annotators agree, >=66% of annotators agree, and >=50% of
annotators agree.

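The majority-vote construction can be sketched as follows (an illustrative reconstruction, not the authors' code): for each sentence, take the most frequent label and the fraction of annotators who chose it; the sentence then enters every reference dataset whose agreement threshold it meets.

```python
from collections import Counter


def gold_label(annotations):
    """Return (majority_label, agreement_fraction) for one sentence's annotations."""
    counts = Counter(annotations)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(annotations)


# 4 of 5 annotators agree -> agreement 0.8, so this sentence would appear in
# the >=50%, >=66% and >=75% reference datasets, but not the all-agree one.
label, agreement = gold_label(["positive", "positive", "positive", "neutral", "positive"])
```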
#### Who are the annotators?

Three of the annotators were researchers and the remaining 13 annotators were
master's students at Aalto University School of Business, with majors primarily
in finance, accounting, and economics.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

All annotators were from the same institution, so inter-annotator agreement
should be interpreted with this in mind.

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

License: Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License (CC BY-NC-SA 3.0)

### Citation Information

```
@article{Malo2014GoodDO,
  title={Good debt or bad debt: Detecting semantic orientations in economic texts},
  author={P. Malo and A. Sinha and P. Korhonen and J. Wallenius and P. Takala},
  journal={Journal of the Association for Information Science and Technology},
  year={2014},
  volume={65}
}
```

### Contributions

Thanks to [@frankier](https://github.com/frankier) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"sentences_allagree": {"description": "The key arguments for the low utilization of statistical techniques in\nfinancial sentiment analysis have been the difficulty of implementation for\npractical applications and the lack of high quality training data for building\nsuch models. Especially in the case of finance and economic texts, annotated\ncollections are a scarce resource and many are reserved for proprietary use\nonly. To resolve the missing training data problem, we present a collection of\n\u223c 5000 sentences to establish human-annotated standards for benchmarking\nalternative modeling techniques.\n\nThe objective of the phrase level annotation task was to classify each example\nsentence into a positive, negative or neutral category by considering only the\ninformation explicitly available in the given sentence. Since the study is\nfocused only on financial and economic domains, the annotators were asked to\nconsider the sentences from the view point of an investor only; i.e. whether\nthe news may have positive, negative or neutral influence on the stock price.\nAs a result, sentences which have a sentiment that is not relevant from an\neconomic or financial perspective are considered neutral.\n\nThis release of the financial phrase bank covers a collection of 4840\nsentences. The selected collection of phrases was annotated by 16 people with\nadequate background knowledge on financial markets. Three of the annotators\nwere researchers and the remaining 13 annotators were master\u2019s students at\nAalto University School of Business with majors primarily in finance,\naccounting, and economics.\n\nGiven the large number of overlapping annotations (5 to 8 annotations per\nsentence), there are several ways to define a majority vote based gold\nstandard. 
To provide an objective comparison, we have formed 4 alternative\nreference datasets based on the strength of majority agreement: all annotators\nagree, >=75% of annotators agree, >=66% of annotators agree and >=50% of\nannotators agree.\n", "citation": "@article{Malo2014GoodDO,\n title={Good debt or bad debt: Detecting semantic orientations in economic texts},\n author={P. Malo and A. Sinha and P. Korhonen and J. Wallenius and P. Takala},\n journal={Journal of the Association for Information Science and Technology},\n year={2014},\n volume={65}\n}\n", "homepage": "https://www.kaggle.com/ankurzing/sentiment-analysis-for-financial-news", "license": "Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License", "features": {"sentence": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 3, "names": ["negative", "neutral", "positive"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "financial_phrasebank", "config_name": "sentences_allagree", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 303375, "num_examples": 2264, "dataset_name": "financial_phrasebank"}}, "download_checksums": {"https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip": {"num_bytes": 681890, "checksum": "0e1a06c4900fdae46091d031068601e3773ba067c7cecb5b0da1dcba5ce989a6"}}, "download_size": 681890, "post_processing_size": null, "dataset_size": 303375, "size_in_bytes": 985265}, "sentences_75agree": {"description": "The key arguments for the low utilization of statistical techniques in\nfinancial sentiment analysis have been the difficulty of implementation for\npractical applications and the lack of high quality training data for building\nsuch models. 
Especially in the case of finance and economic texts, annotated\ncollections are a scarce resource and many are reserved for proprietary use\nonly. To resolve the missing training data problem, we present a collection of\n\u223c 5000 sentences to establish human-annotated standards for benchmarking\nalternative modeling techniques.\n\nThe objective of the phrase level annotation task was to classify each example\nsentence into a positive, negative or neutral category by considering only the\ninformation explicitly available in the given sentence. Since the study is\nfocused only on financial and economic domains, the annotators were asked to\nconsider the sentences from the view point of an investor only; i.e. whether\nthe news may have positive, negative or neutral influence on the stock price.\nAs a result, sentences which have a sentiment that is not relevant from an\neconomic or financial perspective are considered neutral.\n\nThis release of the financial phrase bank covers a collection of 4840\nsentences. The selected collection of phrases was annotated by 16 people with\nadequate background knowledge on financial markets. Three of the annotators\nwere researchers and the remaining 13 annotators were master\u2019s students at\nAalto University School of Business with majors primarily in finance,\naccounting, and economics.\n\nGiven the large number of overlapping annotations (5 to 8 annotations per\nsentence), there are several ways to define a majority vote based gold\nstandard. To provide an objective comparison, we have formed 4 alternative\nreference datasets based on the strength of majority agreement: all annotators\nagree, >=75% of annotators agree, >=66% of annotators agree and >=50% of\nannotators agree.\n", "citation": "@article{Malo2014GoodDO,\n title={Good debt or bad debt: Detecting semantic orientations in economic texts},\n author={P. Malo and A. Sinha and P. Korhonen and J. Wallenius and P. 
Takala},\n journal={Journal of the Association for Information Science and Technology},\n year={2014},\n volume={65}\n}\n", "homepage": "https://www.kaggle.com/ankurzing/sentiment-analysis-for-financial-news", "license": "Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License", "features": {"sentence": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 3, "names": ["negative", "neutral", "positive"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "financial_phrasebank", "config_name": "sentences_75agree", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 472707, "num_examples": 3453, "dataset_name": "financial_phrasebank"}}, "download_checksums": {"https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip": {"num_bytes": 681890, "checksum": "0e1a06c4900fdae46091d031068601e3773ba067c7cecb5b0da1dcba5ce989a6"}}, "download_size": 681890, "post_processing_size": null, "dataset_size": 472707, "size_in_bytes": 1154597}, "sentences_66agree": {"description": "The key arguments for the low utilization of statistical techniques in\nfinancial sentiment analysis have been the difficulty of implementation for\npractical applications and the lack of high quality training data for building\nsuch models. Especially in the case of finance and economic texts, annotated\ncollections are a scarce resource and many are reserved for proprietary use\nonly. 
To resolve the missing training data problem, we present a collection of\n\u223c 5000 sentences to establish human-annotated standards for benchmarking\nalternative modeling techniques.\n\nThe objective of the phrase level annotation task was to classify each example\nsentence into a positive, negative or neutral category by considering only the\ninformation explicitly available in the given sentence. Since the study is\nfocused only on financial and economic domains, the annotators were asked to\nconsider the sentences from the view point of an investor only; i.e. whether\nthe news may have positive, negative or neutral influence on the stock price.\nAs a result, sentences which have a sentiment that is not relevant from an\neconomic or financial perspective are considered neutral.\n\nThis release of the financial phrase bank covers a collection of 4840\nsentences. The selected collection of phrases was annotated by 16 people with\nadequate background knowledge on financial markets. Three of the annotators\nwere researchers and the remaining 13 annotators were master\u2019s students at\nAalto University School of Business with majors primarily in finance,\naccounting, and economics.\n\nGiven the large number of overlapping annotations (5 to 8 annotations per\nsentence), there are several ways to define a majority vote based gold\nstandard. To provide an objective comparison, we have formed 4 alternative\nreference datasets based on the strength of majority agreement: all annotators\nagree, >=75% of annotators agree, >=66% of annotators agree and >=50% of\nannotators agree.\n", "citation": "@article{Malo2014GoodDO,\n title={Good debt or bad debt: Detecting semantic orientations in economic texts},\n author={P. Malo and A. Sinha and P. Korhonen and J. Wallenius and P. 
Takala},\n journal={Journal of the Association for Information Science and Technology},\n year={2014},\n volume={65}\n}\n", "homepage": "https://www.kaggle.com/ankurzing/sentiment-analysis-for-financial-news", "license": "Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License", "features": {"sentence": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 3, "names": ["negative", "neutral", "positive"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "financial_phrasebank", "config_name": "sentences_66agree", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 587156, "num_examples": 4217, "dataset_name": "financial_phrasebank"}}, "download_checksums": {"https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip": {"num_bytes": 681890, "checksum": "0e1a06c4900fdae46091d031068601e3773ba067c7cecb5b0da1dcba5ce989a6"}}, "download_size": 681890, "post_processing_size": null, "dataset_size": 587156, "size_in_bytes": 1269046}, "sentences_50agree": {"description": "The key arguments for the low utilization of statistical techniques in\nfinancial sentiment analysis have been the difficulty of implementation for\npractical applications and the lack of high quality training data for building\nsuch models. Especially in the case of finance and economic texts, annotated\ncollections are a scarce resource and many are reserved for proprietary use\nonly. 
To resolve the missing training data problem, we present a collection of\n\u223c 5000 sentences to establish human-annotated standards for benchmarking\nalternative modeling techniques.\n\nThe objective of the phrase level annotation task was to classify each example\nsentence into a positive, negative or neutral category by considering only the\ninformation explicitly available in the given sentence. Since the study is\nfocused only on financial and economic domains, the annotators were asked to\nconsider the sentences from the view point of an investor only; i.e. whether\nthe news may have positive, negative or neutral influence on the stock price.\nAs a result, sentences which have a sentiment that is not relevant from an\neconomic or financial perspective are considered neutral.\n\nThis release of the financial phrase bank covers a collection of 4840\nsentences. The selected collection of phrases was annotated by 16 people with\nadequate background knowledge on financial markets. Three of the annotators\nwere researchers and the remaining 13 annotators were master\u2019s students at\nAalto University School of Business with majors primarily in finance,\naccounting, and economics.\n\nGiven the large number of overlapping annotations (5 to 8 annotations per\nsentence), there are several ways to define a majority vote based gold\nstandard. To provide an objective comparison, we have formed 4 alternative\nreference datasets based on the strength of majority agreement: all annotators\nagree, >=75% of annotators agree, >=66% of annotators agree and >=50% of\nannotators agree.\n", "citation": "@article{Malo2014GoodDO,\n title={Good debt or bad debt: Detecting semantic orientations in economic texts},\n author={P. Malo and A. Sinha and P. Korhonen and J. Wallenius and P. 
Takala},\n journal={Journal of the Association for Information Science and Technology},\n year={2014},\n volume={65}\n}\n", "homepage": "https://www.kaggle.com/ankurzing/sentiment-analysis-for-financial-news", "license": "Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License", "features": {"sentence": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 3, "names": ["negative", "neutral", "positive"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "financial_phrasebank", "config_name": "sentences_50agree", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 679244, "num_examples": 4846, "dataset_name": "financial_phrasebank"}}, "download_checksums": {"https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip": {"num_bytes": 681890, "checksum": "0e1a06c4900fdae46091d031068601e3773ba067c7cecb5b0da1dcba5ce989a6"}}, "download_size": 681890, "post_processing_size": null, "dataset_size": 679244, "size_in_bytes": 1361134}}
dummy/sentences_50agree/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:35ebaa6bbb3f31061b7004e789cc2c9a9b2cf8eb406db8c7b37a260e331eb63d
size 1042
dummy/sentences_66agree/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:84ea392bce02cdee17024aab02f2697b86b92d6bb524c68d9d385b5774f0597e
size 1042
dummy/sentences_75agree/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eda557d6060eb04c43cf6aab2387938cb358b682018f9c60103b8d4239ce09b9
size 1042
dummy/sentences_allagree/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2adcc1e0b3cc42c5fc0ec4c74ecbc285e610ae62e7c2624047692ced180c6227
size 1044
financial_phrasebank.py ADDED
@@ -0,0 +1,149 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Financial Phrase Bank v1.0: Polar sentiment dataset of sentences from
financial news. The dataset consists of 4840 sentences from English language
financial news categorised by sentiment. The dataset is divided by agreement
rate of 5-8 annotators."""

from __future__ import absolute_import, division, print_function

import os

import datasets


_CITATION = """\
@article{Malo2014GoodDO,
    title={Good debt or bad debt: Detecting semantic orientations in economic texts},
    author={P. Malo and A. Sinha and P. Korhonen and J. Wallenius and P. Takala},
    journal={Journal of the Association for Information Science and Technology},
    year={2014},
    volume={65}
}
"""

_DESCRIPTION = """\
The key arguments for the low utilization of statistical techniques in
financial sentiment analysis have been the difficulty of implementation for
practical applications and the lack of high quality training data for building
such models. Especially in the case of finance and economic texts, annotated
collections are a scarce resource and many are reserved for proprietary use
only. To resolve the missing training data problem, we present a collection of
∼ 5000 sentences to establish human-annotated standards for benchmarking
alternative modeling techniques.

The objective of the phrase level annotation task was to classify each example
sentence into a positive, negative or neutral category by considering only the
information explicitly available in the given sentence. Since the study is
focused only on financial and economic domains, the annotators were asked to
consider the sentences from the view point of an investor only; i.e. whether
the news may have positive, negative or neutral influence on the stock price.
As a result, sentences which have a sentiment that is not relevant from an
economic or financial perspective are considered neutral.

This release of the financial phrase bank covers a collection of 4840
sentences. The selected collection of phrases was annotated by 16 people with
adequate background knowledge on financial markets. Three of the annotators
were researchers and the remaining 13 annotators were master’s students at
Aalto University School of Business with majors primarily in finance,
accounting, and economics.

Given the large number of overlapping annotations (5 to 8 annotations per
sentence), there are several ways to define a majority vote based gold
standard. To provide an objective comparison, we have formed 4 alternative
reference datasets based on the strength of majority agreement: all annotators
agree, >=75% of annotators agree, >=66% of annotators agree and >=50% of
annotators agree.
"""

_HOMEPAGE = "https://www.kaggle.com/ankurzing/sentiment-analysis-for-financial-news"

_LICENSE = "Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License"

_URL = "https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip"


_VERSION = datasets.Version("1.0.0")


class FinancialPhraseBankConfig(datasets.BuilderConfig):
    """BuilderConfig for FinancialPhraseBank."""

    def __init__(
        self,
        split,
        **kwargs,
    ):
        """BuilderConfig for FinancialPhraseBank.
        Args:
            split: `string`, the agreement level; selects both the config name
                (`sentences_{split}agree`) and the source file to read.
        """

        super(FinancialPhraseBankConfig, self).__init__(name=f"sentences_{split}agree", version=_VERSION, **kwargs)

        self.path = os.path.join("FinancialPhraseBank-v1.0", f"Sentences_{split.title()}Agree.txt")


class FinancialPhrasebank(datasets.GeneratorBasedBuilder):
    """Financial PhraseBank sentiment dataset, one config per agreement level."""

    BUILDER_CONFIGS = [
        FinancialPhraseBankConfig(
            split="all",
            description="Sentences where all annotators agreed",
        ),
        FinancialPhraseBankConfig(split="75", description="Sentences where at least 75% of annotators agreed"),
        FinancialPhraseBankConfig(split="66", description="Sentences where at least 66% of annotators agreed"),
        FinancialPhraseBankConfig(split="50", description="Sentences where at least 50% of annotators agreed"),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "sentence": datasets.Value("string"),
                    "label": datasets.features.ClassLabel(
                        names=[
                            "negative",
                            "neutral",
                            "positive",
                        ]
                    ),
                }
            ),
            supervised_keys=None,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        data_dir = dl_manager.download_and_extract(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={"filepath": os.path.join(data_dir, self.config.path)},
            ),
        ]

    def _generate_examples(self, filepath):
        """Yields examples."""
        with open(filepath, encoding="iso-8859-1") as f:
            for id_, line in enumerate(f):
                # Each line has the form "<sentence>@<label>".
                sentence, label = line.rsplit("@", 1)
                # Strip surrounding whitespace (including the trailing newline)
                # so the label matches the ClassLabel names exactly.
                yield id_, {"sentence": sentence.strip(), "label": label.strip()}