DmitryYarov committed on
Commit 8cab90e · verified · 1 Parent(s): 8f70e20

DmitryYarov/aristotle_new_layer_plain

README.md CHANGED
@@ -1,3 +1,76 @@
- ---
- license: mit
- ---
+ ---
+ library_name: transformers
+ base_model: ai-forever/rugpt3small_based_on_gpt2
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: aristotle_new_layer_plain
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # aristotle_new_layer_plain
+
+ This model is a fine-tuned version of [ai-forever/rugpt3small_based_on_gpt2](https://huggingface.co/ai-forever/rugpt3small_based_on_gpt2) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 5.1360
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 32
+ - optimizer: Adafactor (OptimizerNames.ADAFACTOR);
+   no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 500
+ - num_epochs: 30
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 8.1202 | 1.0 | 203 | 7.6970 |
+ | 6.5259 | 2.0 | 406 | 6.4359 |
+ | 6.0754 | 3.0 | 609 | 6.0372 |
+ | 5.7242 | 4.0 | 812 | 5.7632 |
+ | 5.2971 | 5.0 | 1015 | 5.5099 |
+ | 5.0427 | 6.0 | 1218 | 5.3732 |
+ | 4.8016 | 7.0 | 1421 | 5.2518 |
+ | 4.559 | 8.0 | 1624 | 5.1812 |
+ | 4.3407 | 9.0 | 1827 | 5.1369 |
+ | 4.0474 | 10.0 | 2030 | 5.1208 |
+ | 3.8746 | 11.0 | 2233 | 5.1177 |
+ | 3.6983 | 12.0 | 2436 | 5.0946 |
+ | 3.5034 | 13.0 | 2639 | 5.1002 |
+ | 3.3277 | 14.0 | 2842 | 5.1041 |
+ | 3.1368 | 15.0 | 3045 | 5.1360 |
+
+
+ ### Framework versions
+
+ - Transformers 4.48.3
+ - Pytorch 2.5.1+cu124
+ - Datasets 3.3.2
+ - Tokenizers 0.21.0
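
The hyperparameters and per-epoch results in the model card above correspond to a standard `transformers` `Trainer` run. The following is a minimal reproduction sketch, not the author's actual training script: the toy corpus is a placeholder (the card reports an unknown dataset), per-epoch evaluation is inferred from the results table, and fp16 ("Native AMP") assumes a CUDA GPU.

```python
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "ai-forever/rugpt3small_based_on_gpt2"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Placeholder corpus: the actual training data is not described in the card.
texts = ["Пример обучающего текста.", "Ещё один пример обучающего текста."]  # toy Russian sentences
dataset = Dataset.from_dict({"text": texts})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="aristotle_new_layer_plain",
    learning_rate=5e-5,               # learning_rate: 5e-05
    per_device_train_batch_size=8,    # train_batch_size: 8
    per_device_eval_batch_size=8,     # eval_batch_size: 8
    seed=42,                          # seed: 42
    gradient_accumulation_steps=4,    # total_train_batch_size: 8 * 4 = 32
    optim="adafactor",                # OptimizerNames.ADAFACTOR, no extra args
    lr_scheduler_type="linear",       # lr_scheduler_type: linear
    warmup_steps=500,                 # lr_scheduler_warmup_steps: 500
    num_train_epochs=30,              # num_epochs: 30
    fp16=True,                        # mixed_precision_training: Native AMP
    eval_strategy="epoch",            # assumed from the per-epoch validation losses
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    eval_dataset=tokenized,           # placeholder; a held-out split would be used in practice
    data_collator=collator,
)
trainer.train()
```

The results table stops at epoch 15 of the configured 30, and the reported evaluation loss (5.1360) matches the last logged epoch, so the run appears to have ended early.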
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "pad_token_id": 0,
+   "transformers_version": "4.48.3"
+ }
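
The added generation_config.json pins the special-token ids (bos 1, eos 2, pad 0), which `generate()` picks up automatically when the checkpoint is loaded. A minimal usage sketch, assuming the repository id from this commit; the prompt and sampling settings are illustrative, not values stored in the repo:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "DmitryYarov/aristotle_new_layer_plain"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
model.eval()

# The bos/eos/pad ids from generation_config.json are applied by generate() by default.
prompt = "Аристотель утверждал, что"  # "Aristotle argued that"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=60,   # illustrative choice
        do_sample=True,      # illustrative choice
        top_p=0.95,
        temperature=0.8,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```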
logs/events.out.tfevents.1740393772.5f70a5a0a00f.343.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f1590de0e6b2336f94c166cd6de93b76bd662984e74569bb8e2dba4b1eb049da
- size 20934
+ oid sha256:4f26acdbbbe3e664ee3f6939c827b4d95a871415c1ea5124cc549af4fda216d6
+ size 22403
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3ce26a520355c097774b91e24137ebbccc09953206f6a90848ef30c4e8b9ff09
+ oid sha256:c8e953d9975473eac1891fef424f75780e98660fdb2d41b221a781d6dfc6f0e7
  size 500941440
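
Both binary entries above are Git LFS pointer files: the repository tracks only a sha256 oid and a byte size, while the blob itself lives in LFS storage. A small sketch, assuming a hypothetical local path for the downloaded weights, for checking a copy against the updated pointer:

```python
import hashlib
from pathlib import Path

# Hypothetical local path to the downloaded weight file.
local_file = Path("aristotle_new_layer_plain/model.safetensors")

# Values from the updated LFS pointer in this commit.
expected_oid = "c8e953d9975473eac1891fef424f75780e98660fdb2d41b221a781d6dfc6f0e7"
expected_size = 500941440

digest = hashlib.sha256()
with local_file.open("rb") as f:
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        digest.update(chunk)

assert local_file.stat().st_size == expected_size, "size mismatch"
assert digest.hexdigest() == expected_oid, "sha256 mismatch"
print("model.safetensors matches the LFS pointer")
```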