ActiveLearningAGI committed
Commit e971ad9
1 Parent(s): 0856081

Model save

README.md ADDED
@@ -0,0 +1,69 @@
+---
+license: apache-2.0
+library_name: peft
+tags:
+- trl
+- sft
+- generated_from_trainer
+datasets:
+- generator
+base_model: mistralai/Mistral-7B-v0.1
+model-index:
+- name: zephyr-7b-sft-qlora
+  results: []
+---
+
+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->
+
+# zephyr-7b-sft-qlora
+
+This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.9536
+
+## Model description
+
+More information needed
+
+## Intended uses & limitations
+
+More information needed
+
+## Training and evaluation data
+
+More information needed
+
+## Training procedure
+
+### Training hyperparameters
+
+The following hyperparameters were used during training:
+- learning_rate: 0.0002
+- train_batch_size: 2
+- eval_batch_size: 2
+- seed: 42
+- distributed_type: multi-GPU
+- num_devices: 4
+- gradient_accumulation_steps: 2
+- total_train_batch_size: 16
+- total_eval_batch_size: 8
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: cosine
+- lr_scheduler_warmup_ratio: 0.1
+- num_epochs: 1
+
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:----:|:---------------:|
+| 0.9757        | 1.0   | 8714 | 0.9536          |
+
+
+### Framework versions
+
+- PEFT 0.7.1
+- Transformers 4.36.2
+- Pytorch 2.1.2+cu121
+- Datasets 2.14.6
+- Tokenizers 0.15.0
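
Since the card above still says "More information needed" under usage, here is a minimal sketch of loading this QLoRA adapter for inference with PEFT. The repo id `ActiveLearningAGI/zephyr-7b-sft-qlora` is an assumption about where the adapter is published, not something confirmed by this commit; substitute the actual Hub path or a local checkpoint directory, and fall back to the base model's tokenizer if the adapter repo does not ship one.

```python
# Minimal sketch: load the LoRA adapter on top of mistralai/Mistral-7B-v0.1.
# The adapter repo id below is an assumption, not confirmed by this commit.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "ActiveLearningAGI/zephyr-7b-sft-qlora"  # hypothetical Hub path

# AutoPeftModelForCausalLM reads adapter_config.json, downloads the base model
# named there (mistralai/Mistral-7B-v0.1) and attaches the weights from
# adapter_model.safetensors.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# If the adapter repo has no tokenizer files, load "mistralai/Mistral-7B-v0.1" here instead.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

prompt = "Explain what a LoRA adapter is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The sketch loads the adapter in bf16 for simplicity; to mirror the QLoRA training setup more closely, a 4-bit `quantization_config` could be passed to `from_pretrained` instead.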
adapter_config.json CHANGED
@@ -19,13 +19,13 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "k_proj",
-    "o_proj",
-    "v_proj",
-    "down_proj",
     "up_proj",
+    "q_proj",
     "gate_proj",
-    "q_proj"
+    "down_proj",
+    "k_proj",
+    "o_proj",
+    "v_proj"
   ],
   "task_type": "CAUSAL_LM"
 }
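
The adapter_config.json diff above only reorders the target_modules list; the set of LoRA-targeted projections is unchanged. For reference, a hedged sketch of a LoraConfig with the same targets is shown below; the rank, alpha and dropout values are placeholders, since this commit does not show them.

```python
from peft import LoraConfig

# Sketch of a LoraConfig matching the target_modules in adapter_config.json.
# r, lora_alpha and lora_dropout are assumed values, not taken from this repo.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=[
        "up_proj",
        "q_proj",
        "gate_proj",
        "down_proj",
        "k_proj",
        "o_proj",
        "v_proj",
    ],
    task_type="CAUSAL_LM",
)
```

PEFT matches target_modules by module name, so the ordering change in this diff has no effect on which layers receive LoRA weights; it is most likely just a set-ordering difference when the config was re-saved.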
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:cf54255e296c05fd68c9d9607cab8dc659a5ca8ff00c442f043577546ca9ee26
+oid sha256:82c97a205e6524489e5f13225d57b06d852eb2a0cd98375e534b505491379bc3
 size 83946192
all_results.json ADDED
@@ -0,0 +1,13 @@
+{
+    "epoch": 1.0,
+    "eval_loss": 0.9536473751068115,
+    "eval_runtime": 3629.5702,
+    "eval_samples": 23110,
+    "eval_samples_per_second": 4.251,
+    "eval_steps_per_second": 0.531,
+    "train_loss": 0.0015288638991994379,
+    "train_runtime": 3795.7084,
+    "train_samples": 207865,
+    "train_samples_per_second": 36.733,
+    "train_steps_per_second": 2.296
+}
eval_results.json ADDED
@@ -0,0 +1,8 @@
+{
+    "epoch": 1.0,
+    "eval_loss": 0.9536473751068115,
+    "eval_runtime": 3629.5702,
+    "eval_samples": 23110,
+    "eval_samples_per_second": 4.251,
+    "eval_steps_per_second": 0.531
+}
runs/Feb24_11-15-04_next-asus-01/events.out.tfevents.1708744633.next-asus-01.137864.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f9e847c22f38f987b7b42b200661e1f2341e6410bd094a7c365ce9947c3f4908
-size 249351
+oid sha256:46bc5db007623daec5ef5c31d39ab9957dc46a224de64696aad54707ef035b89
+size 249665
runs/Feb25_18-13-27_next-asus-01/events.out.tfevents.1708856133.next-asus-01.205473.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a5db00540aa140e714ac4dbcb6edc06b48d815ea9ab880ff33bc62337103b393
+size 5370
runs/Feb25_18-13-27_next-asus-01/events.out.tfevents.1708863558.next-asus-01.205473.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:13161652806d4b496b6b68daa5d294a4605e6a2586a53ad9546f972e38b62580
+size 359
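
The runs/*/events.out.tfevents.* entries above are TensorBoard logs stored through Git LFS. A minimal sketch of inspecting one of them locally (after `git lfs pull`) follows; the scalar tag "train/loss" is an assumption about what the Trainer logged, so pick an actual tag from the printed list.

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Path taken from this commit; fetch the real file with `git lfs pull` first.
event_file = (
    "runs/Feb25_18-13-27_next-asus-01/"
    "events.out.tfevents.1708856133.next-asus-01.205473.0"
)

acc = EventAccumulator(event_file)
acc.Reload()  # parse the event records

print(acc.Tags()["scalars"])              # list the available scalar tags
for scalar in acc.Scalars("train/loss"):  # assumed tag name
    print(scalar.step, scalar.value)
```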
train_results.json ADDED
@@ -0,0 +1,8 @@
+{
+    "epoch": 1.0,
+    "train_loss": 0.0015288638991994379,
+    "train_runtime": 3795.7084,
+    "train_samples": 207865,
+    "train_samples_per_second": 36.733,
+    "train_steps_per_second": 2.296
+}
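
train_results.json, eval_results.json and all_results.json follow the layout the Hugging Face Trainer writes via save_metrics. The sketch below shows how such files are typically produced at the end of a run; `trainer` stands for an already-configured transformers.Trainer (or trl SFTTrainer) whose construction is omitted, so this is not a verbatim copy of this repo's training script.

```python
from transformers import Trainer

def save_run_metrics(trainer: Trainer) -> None:
    """Write the metrics files seen in this commit from an already-built Trainer."""
    train_result = trainer.train()
    # save_metrics("train", ...) writes train_results.json and, by default,
    # also merges the values into all_results.json.
    trainer.save_metrics("train", train_result.metrics)
    trainer.save_state()  # writes trainer_state.json

    eval_metrics = trainer.evaluate()
    # Writes eval_results.json and merges into all_results.json as well.
    trainer.save_metrics("eval", eval_metrics)
```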
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff
 
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ab4a11d4c7c463baf5e8903e7a55e00c816d60f7e5ccd2dd5a82d45a311049b6
+oid sha256:85081305d5c470e5a52647626a148ebe4b2d3c38557e64141956ecee294acda2
 size 5880