pszemraj committed on
Commit
18e4ca4
1 Parent(s): dd8a1a0

Model save

Files changed (3)
  1. README.md +82 -0
  2. generation_config.json +12 -12
  3. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,82 @@
+ ---
+ license: apache-2.0
+ base_model: pszemraj/mega-ar-350m-L3t-v0.07-cosmo_webmath_py
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ model-index:
+ - name: mega-ar-350m-L3t-v0.07-cosmo_webmath_py-UltraTextbooks-2.1-fw_mix-vN
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # mega-ar-350m-L3t-v0.07-cosmo_webmath_py-UltraTextbooks-2.1-fw_mix-vN
+
+ This model is a fine-tuned version of [pszemraj/mega-ar-350m-L3t-v0.07-cosmo_webmath_py](https://huggingface.co/pszemraj/mega-ar-350m-L3t-v0.07-cosmo_webmath_py) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 2.0802
+ - Accuracy: 0.5744
+ - Num Input Tokens Seen: 3355443200
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 4e-05
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 80085
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - gradient_accumulation_steps: 32
+ - total_train_batch_size: 128
+ - total_eval_batch_size: 4
+ - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
+ - lr_scheduler_type: inverse_sqrt
+ - lr_scheduler_warmup_ratio: 0.05
+ - num_epochs: 1.0
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | Input Tokens Seen |
+ |:-------------:|:------:|:----:|:---------------:|:--------:|:-----------------:|
+ | 2.2572        | 0.0600 | 400  | 2.2462          | 0.5491   | 209715200         |
+ | 2.2173        | 0.1201 | 800  | 2.1939          | 0.5564   | 419430400         |
+ | 2.1992        | 0.1801 | 1200 | 2.1689          | 0.5604   | 629145600         |
+ | 2.1543        | 0.2402 | 1600 | 2.1521          | 0.5632   | 838860800         |
+ | 2.1532        | 0.3002 | 2000 | 2.1401          | 0.5650   | 1048576000        |
+ | 2.1688        | 0.3603 | 2400 | 2.1307          | 0.5663   | 1258291200        |
+ | 2.1443        | 0.4203 | 2800 | 2.1227          | 0.5676   | 1468006400        |
+ | 2.1105        | 0.4804 | 3200 | 2.1158          | 0.5689   | 1677721600        |
+ | 2.1045        | 0.5404 | 3600 | 2.1090          | 0.5700   | 1887436800        |
+ | 2.1181        | 0.6004 | 4000 | 2.1045          | 0.5708   | 2097152000        |
+ | 2.127         | 0.6605 | 4400 | 2.0994          | 0.5716   | 2306867200        |
+ | 2.1265        | 0.7205 | 4800 | 2.0958          | 0.5719   | 2516582400        |
+ | 2.0951        | 0.7806 | 5200 | 2.0909          | 0.5728   | 2726297600        |
+ | 2.0951        | 0.8406 | 5600 | 2.0876          | 0.5733   | 2936012800        |
+ | 2.1335        | 0.9007 | 6000 | 2.0838          | 0.5739   | 3145728000        |
+ | 2.0731        | 0.9607 | 6400 | 2.0802          | 0.5744   | 3355443200        |
+
+
+ ### Framework versions
+
+ - Transformers 4.40.1
+ - Pytorch 2.3.0+cu121
+ - Datasets 2.19.0
+ - Tokenizers 0.19.1
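The hyperparameters in the card above can be cross-checked with simple arithmetic: the per-device batch size, number of devices, and gradient-accumulation steps should multiply to the reported total train batch size, and the final step count times batch size times context length should reproduce the reported tokens seen. A minimal sketch (the 4096-token context length is an assumption, chosen because it makes the arithmetic match exactly):

```python
# Sanity-check the effective batch size from the training hyperparameters
# (values copied from the model card above; variable names are illustrative).
train_batch_size = 1             # per-device micro-batch
num_devices = 4                  # multi-GPU
gradient_accumulation_steps = 32

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
print(total_train_batch_size)    # 128, matching total_train_batch_size in the card

# Tokens-seen check: 6400 optimizer steps x 128 sequences x 4096-token context
# (4096 is an assumed context length, not stated in the card).
tokens_seen = 6400 * total_train_batch_size * 4096
print(tokens_seen)               # 3355443200, matching "Num Input Tokens Seen"
```

This kind of check is a quick way to catch misconfigured distributed-training settings before a long run.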
generation_config.json CHANGED
@@ -1,13 +1,13 @@
  {
- "_from_model_config":true,
- "bos_token_id":128000,
- "eos_token_id":128001,
- "max_new_tokens":64,
- "do_sample":true,
- "temperature":0.8,
- "repetition_penalty":1.10,
- "no_repeat_ngram_size":4,
- "epsilon_cutoff":0.0006,
- "renormalize_logits":true,
- "transformers_version":"4.40.1"
- }

  {
+ "_from_model_config": true,
+ "bos_token_id": 128000,
+ "do_sample": true,
+ "eos_token_id": 128001,
+ "epsilon_cutoff": 0.0006,
+ "max_new_tokens": 64,
+ "no_repeat_ngram_size": 4,
+ "renormalize_logits": true,
+ "repetition_penalty": 1.1,
+ "temperature": 0.8,
+ "transformers_version": "4.40.1"
+ }
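Two of the sampling knobs kept by this config, `repetition_penalty` and `epsilon_cutoff` (with `renormalize_logits`), can be illustrated with a simplified standalone sketch. This is not the `transformers` implementation, just a toy model of the two operations on a 4-token vocabulary:

```python
import math

def apply_repetition_penalty(logits, generated_ids, penalty=1.1):
    """Simplified CTRL-style repetition penalty: make already-generated
    tokens less likely by shrinking positive logits and amplifying
    negative ones. Illustrative only."""
    out = list(logits)
    for tok in set(generated_ids):
        if out[tok] > 0:
            out[tok] /= penalty
        else:
            out[tok] *= penalty
    return out

def epsilon_cutoff_filter(probs, epsilon=0.0006):
    """Simplified epsilon sampling: drop tokens whose probability falls
    below epsilon, then renormalize the rest (the renormalization step
    mirrors what renormalize_logits requests)."""
    kept = [p if p >= epsilon else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

# Toy next-token logits over a 4-token vocabulary; token 0 was already generated.
logits = [2.0, -1.0, 0.5, 0.1]
penalized = apply_repetition_penalty(logits, generated_ids=[0], penalty=1.1)
z = sum(math.exp(l) for l in penalized)
probs = [math.exp(l) / z for l in penalized]
filtered = epsilon_cutoff_filter(probs, epsilon=0.0006)
print(filtered)  # a valid distribution: entries sum to 1
```

In practice these are applied inside `model.generate(...)` when the generation config sets them; the sketch only shows the direction of their effect.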
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2326a956082a9907a4c9911b4fbe6b80b3941b39612462386ecd90f74894af0e
  size 1398219896

  version https://git-lfs.github.com/spec/v1
+ oid sha256:6a377fb15f0404e3f86727126aff5cf92029a1dea901f028ec8151524e256fce
  size 1398219896