Commit 668069b (verified) by RyanYr
1 Parent(s): 2b2eb8d

Model save
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 base_model: RyanYr/reflect_llama8B_om2-mixed-t0-mstlrg-300k460k-t12_llama33-130k-t12_sft-t1_lr1e-6
 library_name: transformers
-model_name: reflect_llama8B_om2-mstlrg300k460k-llama3370b130k-t12_sft-t1_psdp-t1
+model_name: reflect_llama8B_om2-mstlrg300k460k-llama3370b130k-t12_sft-t1_psdp-t1_b.5
 tags:
 - generated_from_trainer
 - trl
@@ -9,7 +9,7 @@ tags:
 licence: license
 ---
 
-# Model Card for reflect_llama8B_om2-mstlrg300k460k-llama3370b130k-t12_sft-t1_psdp-t1
+# Model Card for reflect_llama8B_om2-mstlrg300k460k-llama3370b130k-t12_sft-t1_psdp-t1_b.5
 
 This model is a fine-tuned version of [RyanYr/reflect_llama8B_om2-mixed-t0-mstlrg-300k460k-t12_llama33-130k-t12_sft-t1_lr1e-6](https://huggingface.co/RyanYr/reflect_llama8B_om2-mixed-t0-mstlrg-300k460k-t12_llama33-130k-t12_sft-t1_lr1e-6).
 It has been trained using [TRL](https://github.com/huggingface/trl).
@@ -20,14 +20,14 @@ It has been trained using [TRL](https://github.com/huggingface/trl).
 from transformers import pipeline
 
 question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
-generator = pipeline("text-generation", model="RyanYr/reflect_llama8B_om2-mstlrg300k460k-llama3370b130k-t12_sft-t1_psdp-t1", device="cuda")
+generator = pipeline("text-generation", model="RyanYr/reflect_llama8B_om2-mstlrg300k460k-llama3370b130k-t12_sft-t1_psdp-t1_b.5", device="cuda")
 output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
 print(output["generated_text"])
 ```
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/bw7s6its)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/ijeh3hcd)
 
 This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
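The updated card keeps the same quick-start snippet and W&B badge (now pointing at run `ijeh3hcd`) and states that the model was trained with DPO via TRL. For orientation only, below is a minimal sketch of how such a run is typically launched with TRL's `DPOTrainer` (recent TRL versions; older releases take `tokenizer=` instead of `processing_class=`). The preference dataset, learning rate, and `beta` value (guessed from the `_b.5` suffix in the new model name) are assumptions, not the actual configuration recorded in `training_args.bin`.

```python
# Hypothetical sketch of a TRL DPO run producing a checkpoint like this one.
# Dataset, beta, and learning rate are assumptions (beta=0.5 is only inferred from "_b.5").
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "RyanYr/reflect_llama8B_om2-mixed-t0-mstlrg-300k460k-t12_llama33-130k-t12_sft-t1_lr1e-6"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder preference dataset with "prompt" / "chosen" / "rejected" columns.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(
    output_dir="reflect_llama8B_om2-mstlrg300k460k-llama3370b130k-t12_sft-t1_psdp-t1_b.5",
    beta=0.5,            # assumed from the "_b.5" suffix
    learning_rate=1e-6,  # placeholder; the real value is stored in training_args.bin
    report_to="wandb",   # matches the W&B badge in the card
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
trainer.save_model(args.output_dir)
```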
last_checkpoint/model-00001-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:52661d794e4b3374a4da94f37ee69423834b86b0b9c9877a29b52912b3cb782b
+oid sha256:140216e56bce5f3c1b5265aa4cebb27b3866fa967bc8b1f2daf170aa17d3f3e7
 size 4976706864
last_checkpoint/model-00002-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:cad7b364989d89a1e26130ddccaef4fa17013ade23743224ecefe037c607ae8e
+oid sha256:f83dc042c4ac7623fd1b6b67ee6c949a4d727f37e9481b3f5ba0ab43c1e4d6e2
 size 4999802720
last_checkpoint/model-00003-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:71abf7029fe39f05b96da3f95af12e9ae8be25399be58eb9c257e74b68a13c08
+oid sha256:19ef2851ad9b0832175cd1997e85a06bd6ff2edc134b49335f753c764e142e11
 size 4915916176
last_checkpoint/model-00004-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:678e8a275aaed46efad40829e06f57aa878e07c0c0ca83e3e6600840be6d5bd2
+oid sha256:962ef652afb915df5fe185fa4e73b8925dd035646ff2adcb25e34892546ef4ef
 size 1168147000
last_checkpoint/training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:edbfdcfc52122517350c61c1c5d74629e08438dbc3263e6d187cae334d667e08
+oid sha256:72a1c53e16ceb78c4646cce496b22384dfe572737186f7ee1b008632c7fedcba
 size 8056
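
The changed `.safetensors` and `training_args.bin` entries above are Git LFS pointer files rather than the weights themselves: `oid sha256:` is the SHA-256 digest of the actual file and `size` is its length in bytes. As a rough illustration (the local path is assumed, the expected digest is taken from the first shard's new pointer above), a downloaded shard can be checked against this commit like so:

```python
# Recompute a downloaded shard's SHA-256 and compare it with the LFS pointer's oid.
# The path is illustrative; the expected digest is copied from the diff above.
import hashlib

expected = "140216e56bce5f3c1b5265aa4cebb27b3866fa967bc8b1f2daf170aa17d3f3e7"
path = "last_checkpoint/model-00001-of-00004.safetensors"

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        h.update(chunk)

print("match" if h.hexdigest() == expected else "mismatch")
```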