---
library_name: transformers
tags:
- unsloth
license: apache-2.0
datasets:
- llm-jp/magpie-sft-v1.0
language:
- ja
base_model:
- google/gemma-2-9b
---
# Model Card for gemma-2-9b-nyan100
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
**gemma-2-9b-nyan100** is a model fine-tuned from Google's Gemma-2-9b, specialized for Japanese instruction-following tasks. It performs particularly well on Japanese tasks such as instruction response, dialogue generation, and document summarization.
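The model is trained and prompted with the simple template below (指示 = instruction, 回答 = answer); it appears again in the training and inference code under Uses. The placeholder names are illustrative:

```
### 指示
{instruction}
### 回答
{response}
```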
- **Developed by:** Hizaneko
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** Instruction-following large language model (LLM)
- **Language(s) (NLP):** Japanese
- **License:** Subject to the Gemma Terms of Use
- **Finetuned from model [optional]:** google/gemma-2-9b
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
The code below walks through the full pipeline used to build this model: environment setup, QLoRA fine-tuning with unsloth, evaluation on ELYZA-tasks-100-TV, and upload to the Hub. It is written for a Google Colab notebook, so the `!`-prefixed lines are notebook shell commands.

```python
# Remove any preinstalled unsloth and install the latest version from GitHub.
!pip uninstall unsloth -y
!pip install --upgrade --no-cache-dir "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"

# Upgrade the packages that ship with Google Colab by default (thanks to Moriyasu).
!pip install --upgrade torch
!pip install --upgrade xformers

# Enable interactive widgets in the notebook (may not always work).
# Not needed on Google Colab.
!pip install ipywidgets --upgrade

# Install Flash Attention 2 for softcapping support (compute capability 8.0+).
import torch
if torch.cuda.get_device_capability()[0] >= 8:
    !pip install --no-deps packaging ninja einops "flash-attn>=2.6.3"
```
```python
# Either set your Hugging Face token directly:
# HF_TOKEN = ""  #@param {type:"string"}

# Or use a Google Colab secret: click the 🔑 icon in the left sidebar, create a
# secret named HF_TOKEN with your Hugging Face token as its value, enable
# notebook access, and run the two lines below.
from google.colab import userdata
HF_TOKEN = userdata.get('HF_TOKEN')
```
```python
# Load google/gemma-2-9b with a 4-bit quantized QLoRA setup.
from unsloth import FastLanguageModel
import torch

# max_seq_length = 512  # unsloth supports RoPE scaling, so the context length can be set freely
max_seq_length = 1024
dtype = None         # None selects the dtype automatically
load_in_4bit = True  # True because we are handling a 9B model

# Download the model repository from Hugging Face.
!huggingface-cli login --token $HF_TOKEN
!huggingface-cli download google/gemma-2-9b --local-dir gemma-2-9b/

model_id = "./gemma-2-9b"
new_model_id = "gemma-2-9b-nyan100"  # name for the fine-tuned model

# Create the FastLanguageModel instance.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=model_id,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
    trust_remote_code=True,
)
```
```python
# Prepare the model for SFT by attaching LoRA adapters.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=32,
    lora_dropout=0.05,
    # lora_dropout=0.1,
    bias="none",
    use_gradient_checkpointing="unsloth",
    random_state=3407,
    use_rslora=False,
    loftq_config=None,
    max_seq_length=max_seq_length,
)
```
```python
from datasets import load_dataset

# Load the dataset.
dataset_name = "llm-jp/magpie-sft-v1.0"
dataset = load_dataset(dataset_name)

# Subsample the train split; the active line keeps the first 1/100 of the data
# (the commented-out alternative keeps 1/10).
train_length = len(dataset["train"])
# dataset["train"] = dataset["train"].select(range(train_length // 10))
dataset["train"] = dataset["train"].select(range(train_length // 100))

# Formatting function: flatten each conversation into one input/output pair.
def format_dataset(examples):
    conversations = examples["conversations"]  # list of role/content turns
    user_inputs = []
    assistant_outputs = []
    for turn in conversations:
        if turn["role"] == "user":
            user_inputs.append(turn["content"])
        elif turn["role"] == "assistant":
            assistant_outputs.append(turn["content"])
    input_text = " ".join(user_inputs)         # concatenated user turns
    output_text = " ".join(assistant_outputs)  # concatenated assistant turns
    return {
        "text": input_text,     # input part
        "output": output_text,  # output part
    }

# Apply the formatting.
formatted_dataset = dataset.map(
    format_dataset,
    num_proc=4,
    remove_columns=["conversations"],
)

# Inspect the result.
print(formatted_dataset)
```
```python
# Define the prompt format (指示 = instruction, 回答 = answer).
prompt = """### 指示
{}
### 回答
{}"""

EOS_TOKEN = tokenizer.eos_token  # the tokenizer's EOS token

# Build the final training text for each example.
def formatting_prompts_func(examples):
    input_text = examples["text"]
    output_text = examples["output"]
    formatted_text = prompt.format(input_text, output_text) + EOS_TOKEN
    return {"formatted_text": formatted_text}

# Apply the prompt template.
final_dataset = formatted_dataset.map(
    formatting_prompts_func,
    num_proc=4,
)
```
```python
from trl import SFTTrainer
from transformers import TrainingArguments
from unsloth import is_bfloat16_supported

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=final_dataset["train"],
    max_seq_length=max_seq_length,
    dataset_text_field="formatted_text",
    packing=False,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        logging_steps=10,
        warmup_steps=10,
        save_steps=100,
        save_total_limit=2,
        max_steps=-1,
        learning_rate=2e-4,
        # learning_rate=1e-4,
        fp16=not is_bfloat16_supported(),
        bf16=is_bfloat16_supported(),
        group_by_length=True,
        seed=3407,
        output_dir="outputs",
        report_to="none",
    ),
)

trainer_stats = trainer.train()
```
```python
# Load ELYZA-tasks-100-TV. Upload the file beforehand.
# In the omnicampus environment, drag and drop the task jsonl into the
# left-hand file panel before running this cell.
import json

datasets = []
with open("/content/elyza-tasks-100-TV_0.jsonl", "r") as f:
    item = ""
    for line in f:
        line = line.strip()
        item += line
        if item.endswith("}"):  # a complete JSON object has been accumulated
            datasets.append(json.loads(item))
            item = ""
```
```python
# Run the tasks with the fine-tuned model.
from tqdm import tqdm

# Switch the model into inference mode.
FastLanguageModel.for_inference(model)

results = []
for dt in tqdm(datasets):
    input = dt["input"]
    # prompt = f"""### 指示\n{input}\n### 回答\n"""
    # The extra "簡潔に回答してください" asks the model for a concise answer.
    prompt = f"""### 指示\n{input} 簡潔に回答してください \n### 回答\n"""
    inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=512, use_cache=True,
                             do_sample=False, repetition_penalty=1.2)
    prediction = tokenizer.decode(outputs[0], skip_special_tokens=True).split('\n### 回答')[-1]
    results.append({"task_id": dt["task_id"], "input": input, "output": prediction})

# Save the results as jsonl.
with open(f"{new_model_id}_output.jsonl", 'w', encoding='utf-8') as f:
    for result in results:
        json.dump(result, f, ensure_ascii=False)
        f.write('\n')
```
```python
# Upload the model and tokenizer to Hugging Face.
# Upload as a private repo first; make it public once the final artifact is settled.
# The currently published Model_Inference_Template.ipynb does not assume unsloth,
# so it may not work with this model as-is.
model.push_to_hub_merged(
    new_model_id,
    tokenizer=tokenizer,
    save_method="lora",
    token=HF_TOKEN,
    private=True,
)
```
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
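A minimal sketch, mirroring the training and inference code in the Uses section. The repo id `Hizaneko/gemma-2-9b-nyan100` is an assumption derived from the model name; point it at wherever the adapter is actually hosted.

```python
from unsloth import FastLanguageModel

# Load the fine-tuned model (repo id below is an assumption; adjust as needed).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Hizaneko/gemma-2-9b-nyan100",
    max_seq_length=1024,
    dtype=None,         # pick dtype automatically
    load_in_4bit=True,  # 4-bit quantized loading, as in training
)
FastLanguageModel.for_inference(model)

# Same prompt format as used in training.
prompt = "### 指示\n日本の首都はどこですか?\n### 回答\n"
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```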
## Training Details
### Training Data
- Dataset: llm-jp/magpie-sft-v1.0
- Data volume: a subset of roughly 50,000 Japanese samples; the training code above keeps the first `train_length // 100` examples of the train split.
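For reference, the preprocessing in the Uses section assumes each record carries a `conversations` list of role/content turns, roughly of this shape (contents are placeholders):

```python
# Illustrative record shape consumed by format_dataset (values are placeholders).
example = {
    "conversations": [
        {"role": "user", "content": "..."},
        {"role": "assistant", "content": "..."},
    ]
}
```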
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
LoRA settings (an equivalent plain-PEFT configuration is sketched after this list):
- r = 32
- lora_alpha = 32
- lora_dropout = 0.05

Training settings:
- Batch size: 2 per device
- Gradient accumulation steps: 4 (effective batch size 8)
- Learning rate: 2e-4
- Epochs: 1
- **Training regime:** fp16 or bf16 mixed precision, chosen at runtime via `is_bfloat16_supported()` (see the training code above)
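For readers using plain PEFT rather than unsloth's wrapper, a sketch of the equivalent LoRA configuration (assumption: `peft`'s parameters of the same names behave like unsloth's):

```python
from peft import LoraConfig

# Sketch of the equivalent LoRA settings in plain PEFT
# (the actual training used unsloth's FastLanguageModel.get_peft_model).
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```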
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** NVIDIA L4
- **Hours used:** ~1 hour
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]