ai-sa-r002 #1
by JovanS993 - opened
- .gitattributes +0 -1
- README.md +9 -18
- logo_no_bg.png +0 -0
- model.safetensors +0 -3
.gitattributes
CHANGED
@@ -25,4 +25,3 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
-model.safetensors filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED
@@ -1,6 +1,5 @@
---
license: apache-2.0
-thumbnail: https://huggingface.co/mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis/resolve/main/logo_no_bg.png
tags:
- generated_from_trainer
- financial
@@ -31,32 +30,24 @@ model-index:
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

-
-<div style="text-align:center;width:250px;height:250px;">
-<img src="https://huggingface.co/mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis/resolve/main/logo_no_bg.png" alt="logo">
-</div>
-
-
-# DistilRoberta-financial-sentiment
-
+# distilRoberta-financial-sentiment

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1116
-- Accuracy:
+- Accuracy: 0.9823
+
+## Model description

+More information needed

-
-The code for the distillation process can be found [here](https://github.com/huggingface/transformers/tree/master/examples/distillation).
-This model is case-sensitive: it makes a difference between English and English.
+## Intended uses & limitations

-
-On average DistilRoBERTa is twice as fast as Roberta-base.
+More information needed

-## Training
+## Training and evaluation data

-
+More information needed

## Training procedure

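Note: as a quick sanity check of the card's claim that this is distilroberta-base fine-tuned on financial_phrasebank, a minimal inference sketch follows. The repo id is taken from the URLs in the removed thumbnail/logo lines and may differ from the repository this PR targets; the example sentence and label names are illustrative only.

```python
# Minimal sketch, assuming the repo id referenced in the removed URLs;
# swap in the actual repository this PR belongs to if it differs.
from transformers import pipeline

model_id = "mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis"  # assumed
classifier = pipeline("text-classification", model=model_id)

print(classifier("Operating profit rose clearly compared with the previous quarter."))
# Expected output shape: [{'label': <negative|neutral|positive>, 'score': float}],
# i.e. one of the three financial_phrasebank sentiment classes.
```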
logo_no_bg.png
DELETED
Binary file (178 kB)
model.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:c0b61385e4482edd179b69042c014dcb53a79431784f34a0171f5d43b092feaa
-size 328499560
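For context on the model.safetensors deletion above: in an LFS-backed repository that path holds only a small pointer stub, and the three removed lines are exactly that stub (version, oid, size), which is why the file removal is paired with dropping the matching .gitattributes rule. Below is a small, hypothetical parser, not part of any git-lfs tooling, that reads such a pointer:

```python
# Illustrative sketch only: parse a git-lfs pointer stub like the deleted
# model.safetensors (three "key value" lines: version, oid, size).
from pathlib import Path

def parse_lfs_pointer(path: str) -> dict:
    """Return the key/value fields of a git-lfs pointer file."""
    fields = {}
    for line in Path(path).read_text().splitlines():
        key, _, value = line.partition(" ")
        if key and value:
            fields[key] = value
    return fields

# For the stub removed in this PR, the parsed fields would be:
#   version -> https://git-lfs.github.com/spec/v1
#   oid     -> sha256:c0b61385e4482edd179b69042c014dcb53a79431784f34a0171f5d43b092feaa
#   size    -> 328499560   (bytes, i.e. roughly 328 MB of weights)
```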