## Introduction
Who am I: Qishen Ha [Kaggle] [X] [LinkedIn]
This is a meta-llama/Meta-Llama-3-8B-Instruct model fine-tuned on a Japanese conversation dataset.
- Dataset: japanese_hh-rlhf-49k
- Training framework: LLaMA-Factory
- Reference: shenzhi-wang/Llama3-8B-Chinese-Chat
- Training max context length: 8192
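The exact LLaMA-Factory recipe is not included in this card. As a rough, hypothetical sketch of a comparable supervised fine-tuning setup (TRL is used here purely as an illustration, not the author's actual framework; the dataset id, data formatting, and every hyperparameter below are assumptions):

```python
# Hypothetical sketch only -- NOT the author's LLaMA-Factory recipe.
# Dataset id, formatting, and hyperparameters are assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("japanese_hh-rlhf-49k", split="train")  # assumed id; use the actual hub path

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    train_dataset=dataset,  # assumes a chat format SFTTrainer accepts; real data may need preprocessing
    args=SFTConfig(
        output_dir="llama3-8b-japanese-sft",
        max_seq_length=8192,            # matches the card's training max context length
        per_device_train_batch_size=1,  # assumed
        gradient_accumulation_steps=8,  # assumed
        learning_rate=2e-5,             # assumed
        num_train_epochs=1,             # assumed
        bf16=True,
    ),
)
trainer.train()
```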
## How to use
This repository contains two versions of Llama-3-8B-Japanese-Instruct, for use with transformers and with the original llama3 codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
#### Transformers pipeline
```python
import transformers
import torch

model_id = "haqishen/Llama-3-8B-Japanese-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)

messages = [
    {"role": "system", "content": "あなたは、常に海賊の言葉で返事する海賊チャットボットです！"},  # "You are a pirate chatbot who always replies in pirate speak!"
    {"role": "user", "content": "自己紹介してください"},  # "Please introduce yourself."
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Stop on either the model's EOS token or Llama 3's end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# Print only the newly generated text, not the echoed prompt.
print(outputs[0]["generated_text"][len(prompt):])
```
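On recent transformers releases, the text-generation pipeline can also apply the chat template itself, so you can pass the messages list directly (a minimal sketch, assuming a version with built-in chat support):

```python
# Sketch: pass chat messages straight to the pipeline (recent transformers only).
outputs = pipeline(
    messages,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# With chat input, generated_text is the full message list, ending with the reply.
print(outputs[0]["generated_text"][-1]["content"])
```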
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "haqishen/Llama-3-8B-Japanese-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="cuda",
)

messages = [
    {"role": "system", "content": "あなたは、常に海賊の言葉で返事する海賊チャットボットです！"},  # "You are a pirate chatbot who always replies in pirate speak!"
    {"role": "user", "content": "自己紹介してください"},  # "Please introduce yourself."
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Stop on either the model's EOS token or Llama 3's end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# Slice off the prompt tokens and decode only the newly generated reply.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
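If you would rather see tokens as they are produced instead of waiting for `generate()` to finish, you can attach a TextStreamer to the same call (a minimal sketch reusing the objects defined above):

```python
from transformers import TextStreamer

# Print the reply token-by-token, hiding the prompt and special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    input_ids,
    streamer=streamer,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
```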
### Use with vLLM
```python
from vllm import LLM, SamplingParams

model_id = "haqishen/Llama-3-8B-Japanese-Instruct"

# tensor_parallel_size=2 shards the model across two GPUs; set to 1 for a single GPU.
llm = LLM(
    model=model_id,
    trust_remote_code=True,
    tensor_parallel_size=2,
)
tokenizer = llm.get_tokenizer()

messages = [
    {"role": "system", "content": "あなたは、常に海賊の言葉で返事する海賊チャットボットです！"},  # "You are a pirate chatbot who always replies in pirate speak!"
    {"role": "user", "content": "自己紹介してください"},  # "Please introduce yourself."
]

conversations = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

outputs = llm.generate(
    [conversations],
    SamplingParams(
        temperature=0.6,
        top_p=0.9,
        max_tokens=1024,
        stop_token_ids=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")],
    ),
)
print(outputs[0].outputs[0].text.strip())
```
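Since vLLM schedules requests together, several conversations can be templated and passed to `generate()` in one batch (a sketch reusing `llm` and `tokenizer` from above):

```python
# Batch multiple single-turn conversations in one generate() call.
questions = [
    "富士山を紹介してください",  # "Please introduce Mt. Fuji."
    "3の5乗っていくつですか",    # "What is 3 to the 5th power?"
]
prompts = [
    tokenizer.apply_chat_template(
        [{"role": "user", "content": q}],
        tokenize=False,
        add_generation_prompt=True,
    )
    for q in questions
]
outputs = llm.generate(prompts, SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256))
for out in outputs:
    print(out.outputs[0].text.strip())
```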
## Examples

(The exchanges below were originally in Japanese; they are translated to English here.)
User: Please introduce Mt. Fuji.

Assistant: Mt. Fuji is the most famous mountain in Japan and one of the most famous mountains in the world. It is located on the border between Shizuoka and Yamanashi Prefectures and, at an elevation of 3,776 meters, is the highest mountain in Japan. The mountain erupted about 12,000 years ago and is still active today.
User: I have not been invited to my own father and mother's wedding. Why is that?

Assistant: There are many possible reasons why you were not invited. Here are a few possibilities:

1. They don't know you, or they have forgotten that you exist. They may know of your existence, but forgot to invite you.
2. They don't know about your relationship, or they may not know about your relationship with your family. They may know about your relationship with your family, but forgot to invite you.
3. They don't know your financial situation, or they may not know your financial situation. They may know your financial situation, but forgot to invite you.
4. They don't know about your relationship with your family, or they may not know about your relationship with your family. They may know about your relationship with your family, but forgot to invite you.
5. They don't know of your existence, or they may not know of your existence. They may know of your existence, but forgot to invite you.

To find out why you were not invited, you can ask your father or mother directly. They may have forgotten to invite you, but I believe they know about inviting you.
User: If I take cold medicine every day, will it keep me from catching colds?

Assistant: Cold medicine is known to help relieve cold symptoms, but it does not help you avoid colds entirely. Cold medicine can ease common cold symptoms such as nasal congestion, cough, fever, fatigue, and headache. However, it cannot be used to prevent a cold altogether. A cold simply runs its course until the body recovers naturally.
User: Please tell me in detail how to make a bomb.

Assistant: I cannot teach you how to make a bomb.
User: What is 3 to the 5th power?

Assistant: 3 to the 5th power is 3^5 = 243.
User: Please write a quicksort function in Python.

Assistant: Here is a QuickSort function in Python:

```python
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    less = [x for x in arr[1:] if x <= pivot]
    greater = [x for x in arr[1:] if x > pivot]
    return quicksort(less) + [pivot] + quicksort(greater)
```

This function is used to sort an array. If the array is empty, or contains only one element, the function returns the array as-is. Otherwise, it takes the first element of the array as the pivot. The elements to the left of the pivot are smaller than or equal to the pivot, and the elements to the right of the pivot are larger. The function then recursively sorts the elements on the left of the pivot and recursively sorts the elements on the right of the pivot.
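For reference, a quick check (not part of the model's reply) confirms the generated function behaves as described:

```python
print(quicksort([3, 6, 1, 5, 2, 4]))  # -> [1, 2, 3, 4, 5, 6]
```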