llm-jp-3-13b-finetune-2

This is a fine-tuned version of llm-jp/llm-jp-3-13b using the ichikara-instruction dataset.

Model Details

  • Base Model: llm-jp/llm-jp-3-13b
  • Training Type: Instruction Fine-tuning
  • Training Method: QLoRA (4-bit quantization)
  • Library Used: unsloth

Training Configuration

  • Max Sequence Length: 512
  • LoRA Configuration:
    • Rank: 32
    • Alpha: 32
    • Dropout: 0.05
    • Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
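
The configuration above corresponds roughly to the following unsloth setup. This is a minimal sketch assuming the standard FastLanguageModel API; it is not the exact training script used for this model.

from unsloth import FastLanguageModel

# Load the base model in 4-bit for QLoRA, with the sequence length listed above.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="llm-jp/llm-jp-3-13b",
    max_seq_length=512,
    load_in_4bit=True,
)

# Attach LoRA adapters with the rank, alpha, dropout, and target modules listed above.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    bias="none",
)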

Training Hyperparameters

  • Batch Size: 2 per device
  • Gradient Accumulation Steps: 4
  • Learning Rate: 2e-4
  • Number of Epochs: 1
  • Warmup Steps: 10
  • Mixed Precision: BF16 if supported, otherwise FP16
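
For reference, these hyperparameters map onto a trl SFTTrainer configuration along the following lines. This is a hedged sketch assuming the trl/transformers API used in typical unsloth workflows (where SFTTrainer still accepts max_seq_length and dataset_text_field directly); output_dir and the dataset variable are placeholders, not values from the actual run.

import torch
from transformers import TrainingArguments
from trl import SFTTrainer

training_args = TrainingArguments(
    output_dir="outputs",                     # placeholder
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,            # effective batch size of 8
    learning_rate=2e-4,
    num_train_epochs=1,
    warmup_steps=10,
    bf16=torch.cuda.is_bf16_supported(),      # BF16 when the GPU supports it
    fp16=not torch.cuda.is_bf16_supported(),  # otherwise fall back to FP16
    logging_steps=10,
)

trainer = SFTTrainer(
    model=model,                    # PEFT model from the LoRA setup above
    tokenizer=tokenizer,
    train_dataset=train_dataset,    # ichikara-instruction examples rendered into the prompt template
    dataset_text_field="text",      # assumes prompts are pre-rendered into a "text" column
    max_seq_length=512,
    args=training_args,
)
trainer.train()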

Training Data

The model was fine-tuned on the ichikara-instruction dataset, a high-quality Japanese instruction dataset created by Satoshi Sekine et al. and presented at the 30th Annual Conference of the Association for Natural Language Processing (NLP2024).

Input Format

### 指示
{instruction}
### 回答
{response}
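
At training time each record is rendered into this template, typically with an EOS token appended after the response so the model learns where to stop. The helper below is a hypothetical illustration; the field names "text" and "output" are assumptions about the dataset schema, not values taken from this card.

def format_example(example: dict, eos_token: str) -> str:
    # Render one instruction/response pair into the fine-tuning template above.
    return (
        "### 指示\n"
        f"{example['text']}\n"    # assumed field holding the instruction
        "### 回答\n"
        f"{example['output']}"    # assumed field holding the reference response
        f"{eos_token}"
    )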

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tomoya1999/llm-jp-3-13b-finetune-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",        # requires the accelerate package
    torch_dtype="auto",       # load in the checkpoint's native precision (BF16)
    trust_remote_code=True,
)

# Build a prompt in the same format used during fine-tuning
instruction = "ここに指示を入力してください"  # "Enter your instruction here"
prompt = f"### 指示\n{instruction}\n### 回答\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.7,
    do_sample=True,
)

# Decode only the newly generated tokens, not the echoed prompt
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:],
    skip_special_tokens=True,
)
print(response)

References

  • Original Model: llm-jp/llm-jp-3-13b
  • Training Dataset: Sekine, S., Ando, M., Goto, M., Suzuki, K., Kawahara, D., Inoue, N., & Inui, K. (2024). ichikara-instruction: Building Japanese Instruction Data for LLMs. In Proceedings of the 30th Annual Conference of the Association for Natural Language Processing (NLP2024).

License

Please refer to the license of the original model and the training dataset.
