Me1oy committed
Commit b669064 (1 parent: f8651e9)

Upload 4 files
README.md ADDED
---
language: en
license: mit
dataset_info:
  features:
  - name: _id
    dtype: string
  - name: sentence
    dtype: string
  - name: target
    dtype: string
  - name: aspect
    dtype: string
  - name: score
    dtype: float64
  - name: type
    dtype: string
  splits:
  - name: train
    num_bytes: 119567
    num_examples: 822
  - name: valid
    num_bytes: 17184
    num_examples: 117
  - name: test
    num_bytes: 33728
    num_examples: 234
  download_size: 102225
  dataset_size: 170479
---

# FiQA Sentiment Classification

## Dataset Description

This dataset is based on Task 1 of the Financial Sentiment Analysis in the Wild (FiQA) challenge. It follows the settings described in the paper "A Baseline for Aspect-Based Sentiment Analysis in Financial Microblogs and News". The dataset is split into three subsets: train (822 examples), valid (117 examples), and test (234 examples).

## Dataset Structure

Each data point has the following fields:

- `_id`: ID of the data point
- `sentence`: The sentence text
- `target`: The target of the sentiment
- `aspect`: The aspect of the sentiment
- `score`: The sentiment score
- `type`: The type of the data point (`headline` or `post`)
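As a quick illustration of this schema, records with these six fields can be handled as a pandas DataFrame. The rows below are invented toy examples that only mirror the field layout; they are not actual FiQA data points, and the `target`/`aspect` values are assumptions:

```python
import pandas as pd

# Toy rows mirroring the dataset schema; values are invented, not real FiQA data.
rows = [
    {"_id": "0", "sentence": "Shares rally after the earnings beat.",
     "target": "ACME", "aspect": "Stock/Price Action", "score": 0.45, "type": "post"},
    {"_id": "1", "sentence": "Analyst cuts price target on weak guidance.",
     "target": "ACME", "aspect": "Corporate/Analyst Rating", "score": -0.30, "type": "headline"},
]
df = pd.DataFrame(rows)

# Average sentiment score per source type (headline vs. post)
mean_by_type = df.groupby("type")["score"].mean()
print(mean_by_type)
```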
## Additional Information

- Homepage: [FiQA Challenge](https://sites.google.com/view/fiqa/home)
- Citation: [A Baseline for Aspect-Based Sentiment Analysis in Financial Microblogs and News](https://arxiv.org/pdf/2211.00083.pdf)

## Downloading CSV

```python
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub
dataset = load_dataset("ChanceFocus/fiqa-sentiment-classification")

# Save each split to a CSV file
dataset["train"].to_csv("train.csv")
dataset["valid"].to_csv("valid.csv")
dataset["test"].to_csv("test.csv")
```
data/test-00000-of-00001-0fb9f3a47c7d0fce.parquet ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:cc2de6ada5afb736b67d11a2ea8b83e3e37a0b45b4213baa96a3c60e7ef05896
size 26836
data/train-00000-of-00001-aeefa1eadf5be10b.parquet ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:0708f9a2c37c446dc8787cb20f33fff1f69d2eb0d27fe7e361c311efb0ade77e
size 61772
data/valid-00000-of-00001-51867fe1ac59af78.parquet ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:e44e80fd419093b2ef154f458f13f0721efa00f6489fa3c73fe06194336f67db
size 13617
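The parquet files above are stored via Git LFS: the repository holds only a pointer recording the blob's sha256 `oid` and `size`. A minimal sketch of verifying a downloaded file against such a pointer; the `check_lfs_pointer` helper and the `toy.bin` file are illustrative, not part of this repository:

```python
import hashlib
from pathlib import Path

def check_lfs_pointer(pointer_text: str, blob_path: str) -> bool:
    """Verify a file against a git-lfs pointer's oid (sha256) and size."""
    fields = dict(line.split(" ", 1) for line in pointer_text.strip().splitlines())
    expected_oid = fields["oid"].removeprefix("sha256:")
    expected_size = int(fields["size"])
    data = Path(blob_path).read_bytes()
    return (hashlib.sha256(data).hexdigest() == expected_oid
            and len(data) == expected_size)

# Demo with a toy blob (not one of the real parquet files)
blob = b"hello parquet"
Path("toy.bin").write_bytes(blob)
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    f"oid sha256:{hashlib.sha256(blob).hexdigest()}\n"
    f"size {len(blob)}\n"
)
print(check_lfs_pointer(pointer, "toy.bin"))
```

The same check could be applied to the real parquet blobs after downloading them, using the `oid` and `size` values listed above.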