|
--- |
|
base_model: llm-jp/llm-jp-3-13b |
|
tags: |
|
- text-generation-inference |
|
- transformers |
|
- unsloth |
|
- llama |
|
- trl |
|
license: apache-2.0 |
|
language: |
|
- ja |
|
datasets: |
|
- kinokokoro/ichikara-instruction-003 |
|
--- |
|
|
|
# Uploaded model |
|
|
|
- **Developed by:** nishimura999 |
|
- **License:** apache-2.0 |
|
- **Finetuned from model:** llm-jp/llm-jp-3-13b
|
|
|
This Llama-architecture model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
|
|
|
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
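The training script itself is not included in this card. For orientation, below is a minimal sketch of a typical Unsloth + TRL QLoRA setup; the LoRA hyperparameters, sequence length, training arguments, and dataset field names are illustrative assumptions, not the actual training configuration.

```python
# Illustrative sketch only: hyperparameters and dataset fields are assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="llm-jp/llm-jp-3-13b",
    max_seq_length=512,
    load_in_4bit=True,  # QLoRA: 4-bit base weights
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank (assumed)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Field names ("text", "output") are assumptions about the dataset schema.
dataset = load_dataset("kinokokoro/ichikara-instruction-003", split="train")
dataset = dataset.map(
    lambda ex: {"text": f"### 指示\n{ex['text']}\n### 回答:\n{ex['output']}"}
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=512,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```

Note that with recent TRL versions, `dataset_text_field` and `max_seq_length` move from `SFTTrainer` into `SFTConfig`.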
|
|
|
# Usage
|
## -import |
|
```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
)
import torch
from tqdm import tqdm
import json
```
|
|
|
## -setting |
|
```python
# Access token obtained from Hugging Face
HF_TOKEN = "{Your hugging face token}"

# Model ID
model_name = "nishimura999/llm-jp-3-13b-finetune-v100"
```
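If you prefer not to hard-code the token, reading it from an environment variable works just as well (an optional pattern, not required by this card):

```python
import os

# Expects `export HF_TOKEN=hf_...` in the shell beforehand
HF_TOKEN = os.environ["HF_TOKEN"]
```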
|
|
|
## -config
|
```python
# QLoRA config: load the base weights quantized to 4-bit
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for computation at runtime
    bnb_4bit_use_double_quant=False,        # no nested quantization
)
```
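NF4 with a bfloat16 compute dtype is the standard QLoRA recipe. On GPUs without bfloat16 support (pre-Ampere), `torch.float16` is a common substitute for `bnb_4bit_compute_dtype`.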
|
## -load |
|
```python
# Load model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",  # place layers on available devices automatically
    token=HF_TOKEN,
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, token=HF_TOKEN)
```
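As an optional sanity check, Transformers can report how much memory the quantized weights occupy:

```python
# Rough footprint of the 4-bit model (optional)
print(f"Model memory footprint: {model.get_memory_footprint() / 1e9:.1f} GB")
```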
|
|
|
## -dataset |
|
```python
# Load the task dataset.
# Records may be pretty-printed across multiple lines, so lines are
# accumulated until a closing brace completes a JSON object.
datasets = []
with open("./elyza-tasks-100-TV_0.jsonl", "r") as f:
    item = ""
    for line in f:
        line = line.strip()
        item += line
        if item.endswith("}"):
            datasets.append(json.loads(item))
            item = ""
```
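The generation loop below relies on each parsed record carrying `task_id` and `input` fields; a quick assertion makes that assumption explicit:

```python
# Sanity check: every record must have the fields used in the generation loop
assert all("task_id" in d and "input" in d for d in datasets)
```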
|
|
|
## -generate |
|
```python
results = []
for data in tqdm(datasets):
    input_text = data["input"]

    prompt = f"""### 指示
{input_text}
### 回答:
"""

    tokenized_input = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            tokenized_input,
            max_new_tokens=100,
            do_sample=False,          # greedy decoding for reproducible outputs
            repetition_penalty=1.2,
        )[0]
    # Decode only the newly generated tokens, dropping the prompt
    output = tokenizer.decode(outputs[tokenized_input.size(1):], skip_special_tokens=True)

    results.append({"task_id": data["task_id"], "input": input_text, "output": output})
```
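One optional refinement: `generate` may warn about a missing attention mask when given bare input IDs. Calling the tokenizer directly and passing both tensors avoids the warning without changing the output for a single unpadded sequence:

```python
inputs = tokenizer(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        max_new_tokens=100,
        do_sample=False,
        repetition_penalty=1.2,
        pad_token_id=tokenizer.eos_token_id,  # silences the pad-token warning
    )[0]
```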
|
|
|
## -output |
|
```python
import re

# Strip the organization prefix so the filename uses the bare model name
model_name = re.sub(".*/", "", model_name)
with open(f"./{model_name}-outputs.jsonl", "w", encoding="utf-8") as f:
    for result in results:
        json.dump(result, f, ensure_ascii=False)  # keep Japanese text human-readable
        f.write("\n")
```
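To spot-check the written file, the results can be read back line by line:

```python
# Read the JSONL back and print the first record as a spot check
with open(f"./{model_name}-outputs.jsonl", encoding="utf-8") as f:
    first = json.loads(f.readline())
print(first["task_id"], first["output"][:80])
```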
|
|
|
# Reference

This model was fine-tuned on the dataset below. We are grateful to the data providers.

(https://liat-aip.sakura.ne.jp/wp/llmのための日本語インストラクションデータ作成/llmのための日本語インストラクションデータ-公開/)

関根聡, 安藤まや, 後藤美知子, 鈴木久美, 河原大輔, 井之上直也, 乾健太郎. ichikara-instruction: LLMのための日本語インストラクションデータの構築. 言語処理学会第30回年次大会 (2024).
|
|