martimfasantos committed
Commit 93cfafd
1 Parent(s): eec6bd5

Model save

README.md CHANGED
@@ -2,13 +2,12 @@
 license: apache-2.0
 library_name: peft
 tags:
-- alignment-handbook
 - trl
 - sft
 - generated_from_trainer
 base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
 datasets:
-- martimfasantos/openai-tldr-filtered
+- generator
 model-index:
 - name: tinyllama-1.1b-sum-sft-qlora
   results: []
@@ -19,9 +18,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # tinyllama-1.1b-sum-sft-qlora
 
-This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the martimfasantos/openai-tldr-filtered dataset.
+This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the generator dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.1466
+- Loss: 2.1440
 
 ## Model description
 
@@ -41,12 +40,12 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0004
-- train_batch_size: 32
+- train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
 - distributed_type: multi-GPU
 - gradient_accumulation_steps: 2
-- total_train_batch_size: 64
+- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.1
@@ -54,10 +53,10 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| 2.1383        | 1.0   | 1351 | 2.1541          |
-| 2.1135        | 2.0   | 2702 | 2.1466          |
+| Training Loss | Epoch | Step  | Validation Loss |
+|:-------------:|:-----:|:-----:|:---------------:|
+| 2.1266        | 1.0   | 5403  | 2.1504          |
+| 2.1084        | 2.0   | 10806 | 2.1440          |
 
 
 ### Framework versions
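The hyperparameter changes above halve the effective batch: total_train_batch_size is not set directly but derived as train_batch_size × gradient_accumulation_steps × number of processes (8 × 2 × 1 = 16 after this commit, versus 32 × 2 × 1 = 64 before). A minimal sketch of how the listed values might map onto a `transformers` `TrainingArguments` object — the output directory is an assumption, not taken from the training script:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the settings listed in the README diff.
# total_train_batch_size is derived, not passed in:
# per_device_train_batch_size * gradient_accumulation_steps * num_processes
# = 8 * 2 * 1 = 16, the value shown after this commit.
training_args = TrainingArguments(
    output_dir="tinyllama-1.1b-sum-sft-qlora",  # assumed output path
    learning_rate=4e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=2,  # matches epoch 2.0 in the results table
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the transformers default.
)
```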
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7e75c709e55a2a8b921f0828311723d61ed1dac6d9fc5399bf37abfa67010f06
+oid sha256:1c3e1bd0ce214d89dc7a8377b45c80d2c2f444218aa1f9434e54e32dbd47cc50
 size 25272360
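The pointer above tracks the LoRA adapter weights via Git LFS; at roughly 25 MB it holds only the adapter, not the 1.1B base model. A minimal sketch of loading it for inference with `peft` — the repo id is inferred from the model name, and the merge step is optional:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "martimfasantos/tinyllama-1.1b-sum-sft-qlora"  # assumed repo id

# Downloads the base TinyLlama weights and applies the adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Optionally fold the adapter into the base weights for standalone inference.
model = model.merge_and_unload()
```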
all_results.json CHANGED
@@ -1,13 +1,8 @@
 {
     "epoch": 2.0,
-    "eval_loss": 2.14662766456604,
-    "eval_runtime": 188.7756,
-    "eval_samples": 6553,
-    "eval_samples_per_second": 25.761,
-    "eval_steps_per_second": 3.221,
-    "train_loss": 2.1601854102158,
-    "train_runtime": 21984.2259,
+    "train_loss": 2.1470370752908954,
+    "train_runtime": 23954.8129,
     "train_samples": 116722,
-    "train_samples_per_second": 7.864,
-    "train_steps_per_second": 0.123
+    "train_samples_per_second": 7.217,
+    "train_steps_per_second": 0.451
 }
runs/May03_14-37-15_poseidon/events.out.tfevents.1714747047.poseidon.4188924.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b93f4275e11bb1f20cf1058dc6360f233b9b0a3b8b3cc6e973eb99a30544b435
-size 461421
+oid sha256:7ca2de38dbdedb01f6601557121ce68fb502aba80f771615b6d70a1882848f6c
+size 462257
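The updated tfevents file carries the TensorBoard scalars for this run. To recover the full loss curve rather than the two README checkpoints, something like the following should read it back — the directory path is the one committed above, and the `train/loss` tag is an assumption based on what `transformers` typically logs:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Run directory committed above, relative to the repo root.
acc = EventAccumulator("runs/May03_14-37-15_poseidon")
acc.Reload()  # parse the tfevents file

# "train/loss" is the tag transformers usually writes; adjust if absent.
for event in acc.Scalars("train/loss"):
    print(event.step, event.value)
```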
train_results.json CHANGED
@@ -1,8 +1,8 @@
 {
     "epoch": 2.0,
-    "train_loss": 2.1601854102158,
-    "train_runtime": 21984.2259,
+    "train_loss": 2.1470370752908954,
+    "train_runtime": 23954.8129,
     "train_samples": 116722,
-    "train_samples_per_second": 7.864,
-    "train_steps_per_second": 0.123
+    "train_samples_per_second": 7.217,
+    "train_steps_per_second": 0.451
 }
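The new throughput figures are consistent with the README's step counts: 10806 optimizer steps over the reported 23954.8 s runtime gives the 0.451 steps/s recorded here. A quick sanity check:

```python
steps = 10806           # final step in the README's training results table
runtime_s = 23954.8129  # "train_runtime" above
print(round(steps / runtime_s, 3))  # 0.451, matching "train_steps_per_second"
```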
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff