unsloth/Meta-Llama-3.1-8B-bnb-4bit fine-tuning after continued pretraining
(TREX-Lab at Seoul Cyber University)
Summary
- Base model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
- Datasets: wikimedia/wikipedia (continued pretraining), FreedomIntelligence/alpaca-gpt4-korean (fine-tuning)
- This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
- Goal: test whether a large language model can be fine-tuned on a single NVIDIA A30 GPU (successful).
- Developed by: TREX-Lab at Seoul Cyber University
- Language(s) (NLP): Korean
- Finetuned from model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
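The base checkpoint is a bitsandbytes 4-bit quantization of Llama 3.1 8B distributed by Unsloth. A minimal loading sketch; the max_seq_length value is an assumption, not something reported in this card:

from unsloth import FastLanguageModel

# Load the 4-bit quantized base model and tokenizer through Unsloth.
# max_seq_length is an assumption; it is not stated in this card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Meta-Llama-3.1-8B-bnb-4bit",
    max_seq_length = 2048,
    dtype = None,          # auto-detected; bf16 on an A30 (Ampere)
    load_in_4bit = True,
)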
Continued Pretraining
- warmup_steps = 10
- learning_rate = 5e-5
- embedding_learning_rate = 1e-5
- bf16 = True
- optim = "adamw_8bit"
- weight_decay = 0.01
- lr_scheduler_type = "linear"
- training loss: 1.1716
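A minimal sketch of how these hyperparameters plug into Unsloth's continued-pretraining recipe (UnslothTrainer / UnslothTrainingArguments). The LoRA configuration, batch sizes, and the Wikipedia dump configuration are assumptions; the repo name kowiki231101 suggests the 2023-11-01 Korean dump.

from unsloth import FastLanguageModel, UnslothTrainer, UnslothTrainingArguments
from datasets import load_dataset

# Attach LoRA adapters; embed_tokens / lm_head are included so the separate
# embedding_learning_rate below has an effect. r and lora_alpha are assumptions.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",
                      "embed_tokens", "lm_head"],
)

# Korean Wikipedia dump (config name assumed from the repo name kowiki231101).
wiki = load_dataset("wikimedia/wikipedia", "20231101.ko", split = "train")

trainer = UnslothTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = wiki,
    dataset_text_field = "text",
    args = UnslothTrainingArguments(
        per_device_train_batch_size = 2,   # assumption
        gradient_accumulation_steps = 8,   # assumption
        warmup_steps = 10,
        learning_rate = 5e-5,
        embedding_learning_rate = 1e-5,    # lower LR for the embedding / lm_head adapters
        bf16 = True,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        output_dir = "cpt_outputs",
    ),
)
trainer.train()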
Fine-Tuning Detail
- warmup_steps = 10
- learning_rate = 5e-5
- embedding_learning_rate = 1e-5
- bf16 = True
- optim = "adamw_8bit"
- weight_decay = 0.001
- lr_scheduler_type = "linear"
- training loss: 0.6996
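The instruction-tuning stage applies the same recipe to the Korean Alpaca-GPT4 data. A minimal sketch: the dataset's conversation schema, the prompt formatting, and the batch sizes are assumptions (the prompt template matches the one used in the Usage sections below).

from unsloth import UnslothTrainer, UnslothTrainingArguments
from datasets import load_dataset

# Korean Alpaca-style template ("Below is an instruction that describes a task.
# Write a response that appropriately completes the request.")
PROMPT = """다음은 작업을 설명하는 명령입니다. 요청을 적절하게 완료하는 응답을 작성하세요.

### 지침:
{}

### 응답:
{}"""

# FreedomIntelligence/alpaca-gpt4-korean stores instruction/response pairs as
# chat-style turns; the field names below are assumptions about that schema.
alpaca_ko = load_dataset("FreedomIntelligence/alpaca-gpt4-korean", split = "train")

def to_text(batch):
    texts = []
    for conv in batch["conversations"]:
        instruction, response = conv[0]["value"], conv[1]["value"]
        texts.append(PROMPT.format(instruction, response) + tokenizer.eos_token)
    return {"text": texts}

alpaca_ko = alpaca_ko.map(to_text, batched = True)

trainer = UnslothTrainer(
    model = model,                         # LoRA-wrapped model from the pretraining stage
    tokenizer = tokenizer,
    train_dataset = alpaca_ko,
    dataset_text_field = "text",
    args = UnslothTrainingArguments(
        per_device_train_batch_size = 2,   # assumption
        gradient_accumulation_steps = 8,   # assumption
        warmup_steps = 10,
        learning_rate = 5e-5,
        embedding_learning_rate = 1e-5,
        bf16 = True,
        optim = "adamw_8bit",
        weight_decay = 0.001,
        lr_scheduler_type = "linear",
        output_dir = "sft_outputs",
    ),
)
trainer.train()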
Usage #1
from unsloth import FastLanguageModel

# Prompt: Korean Alpaca-style instruction template
# ("Below is an instruction that describes a task. Write a response that
#  appropriately completes the request." / "### Instruction:" / "### Response:")
model_prompt = """다음은 작업을 설명하는 명령입니다. 요청을 적절하게 완료하는 응답을 작성하세요.

### 지침:
{}

### 응답:
{}"""

FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
inputs = tokenizer(
    [
        model_prompt.format(
            "이순신 장군은 누구인가요? 자세하게 알려주세요.",  # "Who is Admiral Yi Sun-sin? Tell me in detail."
            "",  # leave the response slot empty so the model generates it
        )
    ], return_tensors = "pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens = 128, use_cache = True)
tokenizer.batch_decode(outputs)
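Note that batch_decode returns the prompt together with the generated continuation, so the model's answer is the text after the final "### 응답:" marker.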
Usage #2
from unsloth import FastLanguageModel
from transformers import TextStreamer

# Prompt: same Korean Alpaca-style instruction template as Usage #1
model_prompt = """다음은 작업을 설명하는 명령입니다. 요청을 적절하게 완료하는 응답을 작성하세요.

### 지침:
{}

### 응답:
{}"""

FastLanguageModel.for_inference(model)
inputs = tokenizer(
    [
        model_prompt.format(
            "지구를 광범위하게 설명하세요.",  # "Give a broad description of the Earth."
            "",  # response slot left empty for generation
        )
    ], return_tensors = "pt").to("cuda")
text_streamer = TextStreamer(tokenizer)
outputs = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128,
                         repetition_penalty = 1.1)  # values > 1.0 penalize repeated tokens
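Because TextStreamer prints tokens to stdout as they are generated, the returned tensor does not need to be decoded separately; keep it only if the full output is needed afterwards.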
Model tree for LEESM/llama-3-8b-bnb-4b-kowiki231101
- Base model: meta-llama/Llama-3.1-8B
- Quantized: unsloth/Meta-Llama-3.1-8B-bnb-4bit