YaraKyrychenko committed on
Commit
8497006
1 Parent(s): 5678277

Update README.md

Files changed (1)
  1. README.md +12 -7
README.md CHANGED
@@ -11,10 +11,13 @@ model-index:
 - name: ukraine-war-pov
   results: []
 widget:
-- text: 'Росія знову скоює воєнні злочини'
-  example_title: 'proukrainian'
-- text: 'ВСУ все берет с собой — украинские «захистники» взяли стульчак из Артемовска'
-  example_title: 'prorussian'
+- text: Росія знову скоює воєнні злочини
+  example_title: proukrainian
+- text: ВСУ все берет с собой — украинские «захистники» взяли стульчак из Артемовска
+  example_title: prorussian
+language:
+- uk
+- ru
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -22,8 +25,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 # ukraine-war-pov
 
-This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
-It achieves the following results on the evaluation set:
+This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on a dataset of 15K social media posts from Ukraine manually annotated for pro-Ukrainian or pro-Russian point of view on the war.
+It achieves the following results on a balanced test set (2K):
 - Loss: 0.2166
 - Accuracy: 0.9315
 - F1: 0.9315
@@ -45,6 +48,8 @@ More information needed
 
 ## Training procedure
 
+The model was trained in this [notebook](https://drive.google.com/file/d/1RnT3fJTneFSczS_G_JLVqe4MydkTFiO0/view?usp=sharing).
+
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -77,4 +82,4 @@ The following hyperparameters were used during training:
 
 - Transformers 4.27.4
 - Pytorch 2.0.0+cu118
-- Tokenizers 0.13.3
+- Tokenizers 0.13.3
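
The updated card describes a binary point-of-view classifier, so the two widget examples above can be run directly with the `transformers` pipeline API. A minimal usage sketch — the hub id `YaraKyrychenko/ukraine-war-pov` is an assumption pieced together from the committer and model name, and the label names depend on how the checkpoint's `id2label` was configured:

```python
from transformers import pipeline

# Hub id is an assumption (committer + model name); adjust to the actual repo path.
classifier = pipeline("text-classification", model="YaraKyrychenko/ukraine-war-pov")

# The two widget examples from the card: one pro-Ukrainian post (Ukrainian),
# one pro-Russian post (Russian).
posts = [
    "Росія знову скоює воєнні злочини",
    "ВСУ все берет с собой — украинские «захистники» взяли стульчак из Артемовска",
]

# The pipeline returns one {"label": ..., "score": ...} dict per input.
for post, pred in zip(posts, classifier(posts)):
    print(f"{pred['label']} ({pred['score']:.3f}): {post[:40]}")
```

Because the base model is `xlm-roberta-base`, the same checkpoint handles both the Ukrainian and Russian inputs without any language-specific preprocessing.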