philschmid committed
Commit 3d76107
1 Parent(s): 1e3780e

Create README.md

Files changed (1)
  1. README.md +55 -0
README.md ADDED
---
license: apache-2.0
tags:
- generated_from_trainer
- habana
datasets:
- AmazonScience/massive
metrics:
- accuracy
- f1
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# philschmid/habana-xlm-r-large-amazon-massive

This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the [AmazonScience/massive](https://huggingface.co/datasets/AmazonScience/massive) dataset.
It achieves the following results on the evaluation set (epoch 5):

- Loss: 0.4138
- Accuracy: 0.9173
- F1: 0.9173
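
For completeness, a minimal inference sketch with `transformers` (an illustration, not taken from the training repository; it assumes the checkpoint loads as a standard text-classification model for MASSIVE intent classification):

```python
# Minimal inference sketch (assumption: the checkpoint loads as a plain
# text-classification model for MASSIVE intent classification).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="philschmid/habana-xlm-r-large-amazon-massive",
)

# MASSIVE is multilingual, so non-English utterances should work as well.
print(classifier("wake me up at nine am on friday"))
```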

## 8x HPU, approx. 41 min

**train results**

```bash
{'loss': 0.2651, 'learning_rate': 2.4e-05, 'epoch': 1.0}
{'loss': 0.1079, 'learning_rate': 1.8e-05, 'epoch': 2.0}
{'loss': 0.0563, 'learning_rate': 1.2e-05, 'epoch': 3.0}
{'loss': 0.0308, 'learning_rate': 6e-06, 'epoch': 4.0}
{'loss': 0.0165, 'learning_rate': 0.0, 'epoch': 5.0}
```
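
The logged learning rates follow a linear decay to zero over five epochs: 2.4e-05 after epoch 1 equals 3e-5 × (1 − 1/5), so the initial rate was presumably 3e-5. Likewise, 127.028 samples/s at 1.986 steps/s implies a global batch of ~64, i.e. 8 per device on 8 HPUs. A hedged reconstruction of those arguments (plain `TrainingArguments` for illustration; the actual run used `optimum`'s Habana integration, and all values are inferred rather than copied from the run):

```python
# Hypothetical reconstruction of the hyperparameters implied by the logs
# above; none of these values come from the actual training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="habana-xlm-r-large-amazon-massive",
    learning_rate=3e-5,             # 2.4e-05 after 1/5 of training => 3e-5 start
    lr_scheduler_type="linear",     # matches the logged decay to 0.0
    num_train_epochs=5,
    per_device_train_batch_size=8,  # 127.028 samples/s / 1.986 steps/s ~= 64 global / 8 HPUs
)
```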

**total**

```bash
{'train_runtime': 3172.4502, 'train_samples_per_second': 127.028, 'train_steps_per_second': 1.986, 'train_loss': 0.09531746031746031, 'epoch': 5.0}
```

**eval results**

```bash
{'eval_loss': 0.3128528892993927, 'eval_accuracy': 0.9125852013210597, 'eval_f1': 0.9125852013210597, 'eval_runtime': 45.1795, 'eval_samples_per_second': 314.988, 'eval_steps_per_second': 4.936, 'epoch': 1.0}
{'eval_loss': 0.36222779750823975, 'eval_accuracy': 0.9134987000210807, 'eval_f1': 0.9134987000210807, 'eval_runtime': 29.8241, 'eval_samples_per_second': 477.165, 'eval_steps_per_second': 7.477, 'epoch': 2.0}
{'eval_loss': 0.3943144679069519, 'eval_accuracy': 0.9140608530672476, 'eval_f1': 0.9140608530672476, 'eval_runtime': 30.1085, 'eval_samples_per_second': 472.657, 'eval_steps_per_second': 7.407, 'epoch': 3.0}
{'eval_loss': 0.40938863158226013, 'eval_accuracy': 0.9158878504672897, 'eval_f1': 0.9158878504672897, 'eval_runtime': 30.4546, 'eval_samples_per_second': 467.286, 'eval_steps_per_second': 7.322, 'epoch': 4.0}
{'eval_loss': 0.4137658476829529, 'eval_accuracy': 0.9172932330827067, 'eval_f1': 0.9172932330827067, 'eval_runtime': 30.3464, 'eval_samples_per_second': 468.952, 'eval_steps_per_second': 7.348, 'epoch': 5.0}
```
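
`eval_accuracy` and `eval_f1` are identical at every epoch. That is expected if F1 is micro-averaged: for single-label multiclass classification, micro-F1 reduces to accuracy. A quick sanity check with illustrative labels (toy data, not the real eval set):

```python
# For single-label multiclass predictions, micro-averaged F1 == accuracy,
# which is why the two columns above match exactly. Toy labels only.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 2, 2, 1, 0, 2]
y_pred = [0, 1, 2, 1, 1, 0, 0]

print(accuracy_score(y_true, y_pred))             # 5/7 ~= 0.714
print(f1_score(y_true, y_pred, average="micro"))  # same value
```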

# Environment

The training was run on an AWS `DL1` instance with 8x first-generation Habana Gaudi accelerators, using `optimum`.

See https://github.com/philschmid/deep-learning-habana-huggingface for more information.
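
For orientation, a rough sketch of what fine-tuning with `optimum`'s Habana integration looks like (`GaudiTrainer` and `GaudiTrainingArguments` come from the `optimum-habana` package; the Gaudi config name and all other values here are assumptions, see the linked repository for the actual code):

```python
# Rough sketch of an optimum-habana fine-tuning setup; everything below is
# an assumption for illustration, not the actual training script.
from optimum.habana import GaudiTrainer, GaudiTrainingArguments
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "xlm-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# MASSIVE defines 60 intent classes.
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=60)

args = GaudiTrainingArguments(
    output_dir="habana-xlm-r-large-amazon-massive",
    use_habana=True,        # run on HPU devices
    use_lazy_mode=True,     # Habana lazy execution mode
    gaudi_config_name="Habana/roberta-large",  # assumed Gaudi config on the Hub
    num_train_epochs=5,
)

trainer = GaudiTrainer(
    model=model,
    args=args,
    # train_dataset / eval_dataset would be the tokenized MASSIVE splits
)
```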