|
--- |
|
library_name: transformers |
|
license: apache-2.0 |
|
--- |
|
|
|
# Tanuki-8B-dpo-v1.0-twitter-lora-r128-exp1 |
|
- An experimental version of Tanuki-8B-dpo-v1.0 specialized for chat-style (tweet-like) replies.
|
# original model |
|
- [Tanuki-8B-dpo-v1.0](https://huggingface.co/weblab-GENIAC/Tanuki-8B-dpo-v1.0)
|
|
|
# code |
|
~~~ |
|
import transformers

# Load the base model and attach the LoRA adapter (requires the peft package).
model_id = "weblab-GENIAC/Tanuki-8B-dpo-v1.0"
adapter_id = "kanhatakeyama/Tanuki-8B-dpo-v1.0-twitter-lora-r128-exp1"
model = transformers.AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
model.load_adapter(adapter_id)

# Chat
pipe = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)
|
|
|
messages = [
    {"role": "user", "content": "元気???"},  # "How are you???"
]
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True)
|
|
|
outputs = pipe(prompt,
               max_new_tokens=1024,
               do_sample=True,
               temperature=0.7,
               repetition_penalty=1.1,
               )

# Remove the prompt prefix so only the generated reply remains
# (str.lstrip strips characters, not a prefix, so slice instead).
out_text = outputs[0]["generated_text"][len(prompt):]
print(out_text)
|
|
|
# Example output: やあ!元気だよ😊あなたはどう? ("Hey! I'm fine 😊 How about you?")
|
~~~ |
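
If you prefer a standalone checkpoint, the LoRA weights can also be merged into the base model with the peft library. This is a minimal sketch, assuming the adapter is a standard PEFT LoRA checkpoint; the output directory name is only an example.

~~~
import transformers
from peft import PeftModel

# Load the base model and wrap it with the LoRA adapter.
base = transformers.AutoModelForCausalLM.from_pretrained(
    "weblab-GENIAC/Tanuki-8B-dpo-v1.0", trust_remote_code=True)
peft_model = PeftModel.from_pretrained(
    base, "kanhatakeyama/Tanuki-8B-dpo-v1.0-twitter-lora-r128-exp1")

# Fold the LoRA deltas into the base weights and save a plain checkpoint.
merged = peft_model.merge_and_unload()
merged.save_pretrained("Tanuki-8B-dpo-v1.0-twitter-merged")  # example output path
~~~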
|
|
|
|
|
### Training Data |
|
- [kanhatakeyama/multiturn-conv-from-aozora-bunko](https://huggingface.co/datasets/kanhatakeyama/multiturn-conv-from-aozora-bunko) |
|
- [kanhatakeyama/twitter-auto_reply (currently private dataset)](https://huggingface.co/datasets/kanhatakeyama/twitter-auto_reply) |
|
- [kanhatakeyama/ramdom-to-fixed-multiturn-Calm3](https://huggingface.co/datasets/kanhatakeyama/ramdom-to-fixed-multiturn-Calm3) |