Model Card for maktag/llm-jp-3-13b-finetune8
Model Details
Model Description
This model was fine-tuned to produce outputs for elyza-tasks-100-TV_0.jsonl, the final assignment of the University of Tokyo Matsuo Lab LLM Course 2024. Usage of the model follows the provided Omnicampus environment and the course sample code.
- Developed by: maktag
- Language(s) (NLP): Japanese
- Finetuned from model: llm-jp/llm-jp-3-13b
How to Get Started with the Model
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Base model and fine-tuned LoRA adapter
base_model_id = "llm-jp/llm-jp-3-13b"
adapter_id = "maktag/llm-jp-3-13b-finetune8"

# HF_TOKEN: your Hugging Face access token
HF_TOKEN = "<your-hf-token>"
# QLoRA config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# Load the base model with 4-bit quantization
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    token=HF_TOKEN,
)
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True, token=HF_TOKEN)
# Attach the LoRA adapter to the base model
model = PeftModel.from_pretrained(model, adapter_id, token=HF_TOKEN)
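A minimal inference sketch, continuing from the loading code above. The `### 指示` / `### 回答` prompt template, the example input, and the generation settings are assumptions based on typical course sample code, not confirmed details of this model; adjust them to match the prompt format used during fine-tuning.

# Build a prompt (assumed instruction/response template; verify against the training format)
input_text = "日本で一番高い山は何ですか？"  # illustrative example input
prompt = f"""### 指示
{input_text}
### 回答
"""

tokenized_input = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(
        tokenized_input,
        max_new_tokens=512,
        do_sample=False,
        repetition_penalty=1.2,
    )
# Decode only the newly generated tokens
output = tokenizer.decode(outputs[0][tokenized_input.size(1):], skip_special_tokens=True)
print(output)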
Training Details
- Fine-Tuning Framework: LoRA-based PEFT (Parameter-Efficient Fine-Tuning); see the configuration sketch after this list.
- Dataset: Proprietary Japanese instruction-following dataset.
- Sequence Length: 512 tokens.
- Hyperparameters:
  - Batch size: 32
  - Learning rate: 1e-5
  - Epochs: 3
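The hyperparameters above roughly map onto a PEFT setup like the sketch below. Only the batch size, learning rate, epoch count, and 512-token sequence length come from this card; the LoRA rank/alpha/dropout, target modules, and the per-device batch / gradient-accumulation split are illustrative assumptions.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit (same bnb_config as in the usage snippet above)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "llm-jp/llm-jp-3-13b",
    quantization_config=bnb_config,
    device_map="auto",
)
base_model = prepare_model_for_kbit_training(base_model)

# LoRA adapter config -- rank/alpha/dropout/target modules are illustrative assumptions
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, peft_config)

max_seq_length = 512  # sequence length from the list above (applied at tokenization time)

# Hyperparameters from the list above; the per-device batch / accumulation split is illustrative
training_args = TrainingArguments(
    output_dir="./llm-jp-3-13b-finetune8",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,   # effective batch size 32
    learning_rate=1e-5,
    num_train_epochs=3,
    bf16=True,
    logging_steps=10,
    save_strategy="epoch",
)
# The training dataset is proprietary, so data loading and the Trainer call are omitted here.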