Files changed (4)
  1. .gitattributes +0 -1
  2. README.md +9 -18
  3. logo_no_bg.png +0 -0
  4. model.safetensors +0 -3
.gitattributes CHANGED
@@ -25,4 +25,3 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zstandard filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
-model.safetensors filter=lfs diff=lfs merge=lfs -text
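After this change, only the remaining `.gitattributes` patterns route files through Git LFS. As a rough illustration of how such patterns select paths — using Python's `fnmatch` as a simplified stand-in for Git's actual wildmatch rules (real gitattributes matching has extra semantics, e.g. patterns without `/` are matched against each path's basename) — a sketch might look like:

```python
from fnmatch import fnmatch

# Patterns still present in .gitattributes after this commit; matching
# paths are stored via Git LFS pointers rather than in the repo itself.
LFS_PATTERNS = ["*.zip", "*.zstandard", "*tfevents*"]

def is_lfs_tracked(path: str) -> bool:
    """Approximate gitattributes matching with fnmatch (a simplification
    of Git's wildmatch; good enough to show which patterns apply)."""
    return any(fnmatch(path, pattern) for pattern in LFS_PATTERNS)

print(is_lfs_tracked("weights.zip"))        # True
print(is_lfs_tracked("model.safetensors"))  # False after this commit
```

With the `model.safetensors` line removed, that file no longer matches any LFS pattern — consistent with its deletion from the repo below.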
README.md CHANGED
@@ -1,6 +1,5 @@
 ---
 license: apache-2.0
-thumbnail: https://huggingface.co/mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis/resolve/main/logo_no_bg.png
 tags:
 - generated_from_trainer
 - financial
@@ -31,32 +30,24 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-
-<div style="text-align:center;width:250px;height:250px;">
-<img src="https://huggingface.co/mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis/resolve/main/logo_no_bg.png" alt="logo">
-</div>
-
-
-# DistilRoberta-financial-sentiment
-
+# distilRoberta-financial-sentiment
 
 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the financial_phrasebank dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.1116
-- Accuracy: 0.9823
+- Accuracy: 0.9823
+
+## Model description
 
-## Base Model description
+More information needed
 
-This model is a distilled version of the [RoBERTa-base model](https://huggingface.co/roberta-base). It follows the same training procedure as [DistilBERT](https://huggingface.co/distilbert-base-uncased).
-The code for the distillation process can be found [here](https://github.com/huggingface/transformers/tree/master/examples/distillation).
-This model is case-sensitive: it makes a difference between English and English.
+## Intended uses & limitations
 
-The model has 6 layers, 768 dimension and 12 heads, totalizing 82M parameters (compared to 125M parameters for RoBERTa-base).
-On average DistilRoBERTa is twice as fast as Roberta-base.
+More information needed
 
-## Training Data
+## Training and evaluation data
 
-Polar sentiment dataset of sentences from financial news. The dataset consists of 4840 sentences from English language financial news categorised by sentiment. The dataset is divided by agreement rate of 5-8 annotators.
+More information needed
 
 ## Training procedure
 
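The card above describes a three-class financial sentiment classifier. As a rough illustration of its final classification step — where the label names and their order are an assumption based on the financial_phrasebank dataset, not something stated in this diff — mapping raw logits to a sentiment label might look like:

```python
import math

# Assumed label order for financial_phrasebank (not stated in the card).
LABELS = ["negative", "neutral", "positive"]

def softmax(logits):
    """Convert raw scores to probabilities (numerically stable form)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_label(logits):
    """Pick the highest-probability sentiment class and its confidence."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

# Hypothetical logits for a positive-sounding financial sentence.
label, confidence = predict_label([-1.2, 0.3, 2.8])
print(label, round(confidence, 3))  # → positive 0.909
```

This is only a sketch of the post-processing; in practice the model's own config maps class indices to label names.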
logo_no_bg.png DELETED
Binary file (178 kB)
 
model.safetensors DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:c0b61385e4482edd179b69042c014dcb53a79431784f34a0171f5d43b092feaa
3
- size 328499560
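Note that the deleted file in the repository was a Git LFS pointer, not the weights themselves; the actual ~328 MB object lives in LFS storage. A minimal sketch of splitting such a pointer file into its fields (field names taken directly from the pointer text above):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into key/value fields
    (each line is '<key> <value>')."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The exact pointer content deleted in this commit.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:c0b61385e4482edd179b69042c014dcb53a79431784f34a0171f5d43b092feaa
size 328499560
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # → 328499560  (bytes, ~328 MB)
print(info["oid"])   # → sha256:c0b61385...
```

The `oid` is the SHA-256 of the stored object, which is how LFS deduplicates and verifies the download.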