RuoxiL committed on
Commit f0a066d
1 parent: 5898f22

Model save

README.md ADDED
@@ -0,0 +1,61 @@
+ ---
+ license: other
+ library_name: peft
+ tags:
+ - trl
+ - sft
+ - generated_from_trainer
+ base_model: facebook/opt-2.7b
+ datasets:
+ - generator
+ model-index:
+ - name: style-dailymed-from-facebook
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # style-dailymed-from-facebook
+
+ This model is a fine-tuned version of [facebook/opt-2.7b](https://huggingface.co/facebook/opt-2.7b) on the generator dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 2
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 3
+ - total_train_batch_size: 6
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: constant
+ - lr_scheduler_warmup_ratio: 0.03
+ - num_epochs: 3
+
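As a sanity check on the batch-size figures above: `total_train_batch_size` is the per-device batch size times the gradient-accumulation steps (a minimal arithmetic sketch, assuming single-device training, since the card lists no multi-GPU factor).

```python
# Hyperparameters as listed in the model card above
train_batch_size = 2             # per-device batch size
gradient_accumulation_steps = 3  # optimizer steps once every 3 micro-batches

# Effective (total) train batch size, assuming a single device
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # → 6, matching the card
```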
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - PEFT 0.10.0
+ - Transformers 4.39.3
+ - Pytorch 2.2.2
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2
adapter_config.json CHANGED
@@ -21,11 +21,11 @@
   "revision": null,
   "target_modules": [
     "fc2",
-    "q_proj",
     "k_proj",
+    "out_proj",
+    "q_proj",
     "v_proj",
-    "fc1",
-    "out_proj"
+    "fc1"
   ],
   "task_type": "CAUSAL_LM",
   "use_dora": false,
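The `adapter_config.json` hunk above only reorders `target_modules`; no module is added or dropped. A quick check (module names copied from the diff):

```python
# target_modules as listed before and after this commit (from the hunk above)
before = ["fc2", "q_proj", "k_proj", "v_proj", "fc1", "out_proj"]
after = ["fc2", "k_proj", "out_proj", "q_proj", "v_proj", "fc1"]

# Order differs, but the set of LoRA-targeted modules is unchanged
assert set(before) == set(after)
print(sorted(after))  # → ['fc1', 'fc2', 'k_proj', 'out_proj', 'q_proj', 'v_proj']
```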
adapter_model.safetensors CHANGED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:963cdda7c5a5cf69217d6564c3a21b5a3aa01ca01cad673e05f401ac0be35e3c
+ size 2539473112
runs/Apr15_02-23-35_c20/events.out.tfevents.1713162230.c20 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ff1b8879432fc9331c728b253c2bbabd12007dab630caef5ffebcabda36ea212
- size 18487
+ oid sha256:07aa80e420dbfd27fcefa832071f4e1256bd884632cd92627997f7e3b47056b9
+ size 18698
runs/Apr15_02-23-35_c22/events.out.tfevents.1713162230.c22 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:bc3f9b3ee7c277316274e41af037c11eeb18c5bf65f6bf7d82bbe9a709c420db
- size 18909
+ oid sha256:666b9e59c32a8817dc7bda60739e5a5e45772639974a97dc910d5324067efda9
+ size 19263
runs/Apr15_02-23-36_c16/events.out.tfevents.1713162230.c16 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2457f1ca4ec557633a6a9056f6912e7c833d9e1824b6ca2db9f037f49f0531da
- size 18909
+ oid sha256:b163c465d8e20bf405fbd47590886db56eed63a7a62cdb14009892caba5dc336
+ size 19263
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d1a3e01ef2009f4507553f80d65f1d6bcfc0285392be86b52d201470e87c3203
+ oid sha256:1ac755c91d286c8b81f8294fc0a80b7472822fe52edb1315e9f3b11bc48e7bb3
  size 4984