---
base_model: llm-jp/llm-jp-3-13b
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- ja
datasets:
- kinokokoro/ichikara-instruction-003
---

# Uploaded model
- **Developed by:** nishimura999
- **License:** apache-2.0
- **Finetuned from model:** llm-jp/llm-jp-3-13b
This Llama-architecture model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
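The card does not include the actual training script. As a rough sketch of what an Unsloth + TRL fine-tuning run on this base model looks like, the code below loads llm-jp/llm-jp-3-13b in 4-bit, attaches LoRA adapters, and runs TRL's `SFTTrainer`. The sequence length, LoRA rank, hyperparameters, and the assumption that the dataset is flattened into a `text` column are all illustrative, not the configuration used for this model.

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the base model in 4-bit via Unsloth's patched loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="llm-jp/llm-jp-3-13b",
    max_seq_length=2048,  # assumption: actual training length not documented
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are illustrative defaults.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Assumption: the instruction data has been preprocessed into a "text" column
# and a "train" split; the actual dataset schema may differ.
dataset = load_dataset("kinokokoro/ichikara-instruction-003", split="train")

# Supervised fine-tuning with TRL; hyperparameters are illustrative only.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,  # illustrative; the real schedule is not documented
        learning_rate=2e-4,
        fp16=True,
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()
```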
# Example: Python code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer; replace "model_name" with this repository's model ID
model = AutoModelForCausalLM.from_pretrained("model_name", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("model_name")

# Generate text from a short Japanese prompt
input_text = "こんにちは、世界!"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
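Generation is deterministic and short by default; passing options such as `max_new_tokens`, `do_sample=True`, and `temperature` to `model.generate` controls output length and sampling behaviour.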