# Model Card for CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16
Llama-2-13b fine-tuned on the huangyt/FINETUNE1 dataset, about 170,000 training examples in total.
## Fine-Tuning Information
- GPU: RTX 4090 (single GPU / 24564 MiB)
- model: meta-llama/Llama-2-13b-hf
- dataset: huangyt/FINETUNE1 (about 170,000 training examples)
- peft_type: LoRA
- lora_rank: 16
- lora_target: gate_proj, up_proj, down_proj
- per_device_train_batch_size: 8
- gradient_accumulation_steps: 8
- learning_rate: 5e-5
- epoch: 1
- precision: bf16
- quantization: load_in_4bit
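
A minimal sketch of how these hyperparameters map onto a PEFT/Transformers training setup is shown below. The `output_dir`, `lora_alpha`, and `lora_dropout` values are illustrative assumptions not reported in this card, and the DeepSpeed config path is hypothetical.

```python
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)

# Load the base model in 4-bit, matching quantization: load_in_4bit.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches precision: bf16
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")

# Standard preparation step for training on a k-bit quantized model.
model = prepare_model_for_kbit_training(model)

# LoRA configuration matching the card: rank 16 on the MLP projections.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,      # assumption: not reported in this card
    lora_dropout=0.05,  # assumption: not reported in this card
    target_modules=["gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Training arguments matching the listed hyperparameters.
training_args = TrainingArguments(
    output_dir="llama-2-13b-FINETUNE1_17w-r16",  # illustrative path
    per_device_train_batch_size=8,
    gradient_accumulation_steps=8,
    learning_rate=5e-5,
    num_train_epochs=1,
    bf16=True,
    # deepspeed="ds_config.json",  # the card notes DeepSpeed was used
)
```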
## Fine-Tuning Detail
- train_loss: 0.66
- train_runtime: 16:26:58 (with DeepSpeed)
## Evaluation
- Evaluation results are from the HuggingFaceH4/open_llm_leaderboard.
- Compared against Llama-2-13b on four benchmarks: ARC, HellaSwag, MMLU, and TruthfulQA.
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA |
|---|---|---|---|---|---|
| meta-llama/Llama-2-13b-hf | 56.9 | 58.11 | 80.97 | 54.34 | 34.17 |
| meta-llama/Llama-2-13b-chat-hf | 59.93 | 59.04 | 81.94 | 54.64 | 44.12 |
| CHIH-HUNG/llama-2-13b-Fintune_1_17w | 58.24 | 59.47 | 81 | 54.31 | 38.17 |
| CHIH-HUNG/llama-2-13b-huangyt_Fintune_1_17w-q_k_v_o_proj | 58.49 | 59.73 | 81.06 | 54.53 | 38.64 |
| CHIH-HUNG/llama-2-13b-Fintune_1_17w-gate_up_down_proj | 58.81 | 57.17 | 82.26 | 55.89 | 39.93 |
| CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16 | 58.86 | 57.25 | 82.27 | 56.16 | 39.75 |
| CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r4 | 58.71 | 56.74 | 82.27 | 56.18 | 39.65 |
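
To reproduce leaderboard-style numbers locally, one hedged sketch using EleutherAI's lm-evaluation-harness (the backend behind the Open LLM Leaderboard) might look like the following. The `simple_evaluate` API and task names are from harness v0.4 and may differ in other versions; the few-shot counts follow the leaderboard convention.

```python
import lm_eval

# The Open LLM Leaderboard runs each task with its own few-shot setting:
# 25-shot ARC, 10-shot HellaSwag, 5-shot MMLU, 0-shot TruthfulQA.
leaderboard_tasks = {
    "arc_challenge": 25,
    "hellaswag": 10,
    "mmlu": 5,
    "truthfulqa_mc2": 0,
}

for task, num_fewshot in leaderboard_tasks.items():
    results = lm_eval.simple_evaluate(
        model="hf",
        model_args="pretrained=CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16",
        tasks=[task],
        num_fewshot=num_fewshot,
        batch_size=8,
    )
    print(task, results["results"][task])
```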
## How to Convert the Dataset to JSON
- Pass the dataset name to load_dataset, and pass the number of leading examples you want to keep to take.
- Check the dataset's column names and fill them into the example fields (e.g. system_prompt, question, response).
- Finally, specify where to save the JSON file (json_filename), as in the script below.
```python
import json
from datasets import load_dataset

# Load the dataset as a stream; .take(n) keeps only the first n examples.
dataset = load_dataset("huangyt/FINETUNE1", split="train", streaming=True)
# dataset = dataset.take(1000)  # optional: keep only the first 1,000 examples

# Extract the required fields and build a new list of dictionaries.
extracted_data = []
for example in dataset:
    extracted_example = {
        "instruction": example["instruction"],
        "input": example["input"],
        "output": example["output"]
    }
    extracted_data.append(extracted_example)

# Name of the output JSON file.
json_filename = "huangyt_FINETUNE_1.json"

# Write the JSON file (UTF-8 so non-ASCII text survives on any platform).
with open(json_filename, "w", encoding="utf-8") as json_file:
    json.dump(extracted_data, json_file, indent=4)

print(f"Data extracted and saved to {json_filename}")
```
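
As a quick sanity check, the exported file can be loaded back to confirm the record count and schema; `huangyt_FINETUNE_1.json` is the filename produced by the script above.

```python
import json

# Reload the exported file and confirm how many records were written.
with open("huangyt_FINETUNE_1.json", "r", encoding="utf-8") as f:
    records = json.load(f)
print(f"{len(records)} records, first record keys: {list(records[0].keys())}")
```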