SOLAR-tail-10.7B-instruct-v1.0

Model Details

Model Developers Kyujin Han (kyujinpy)

Method
Instruction-tuning of PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0.

Datasets
kyujinpy/KOR-OpenOrca-Platypus-v3.
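
For a quick look at the training data, the dataset can be pulled with the standard datasets library (a minimal sketch; the "train" split name is an assumption about what the hub exposes):

from datasets import load_dataset

# Fetch the instruction-tuning data from the Hugging Face Hub.
dataset = load_dataset("kyujinpy/KOR-OpenOrca-Platypus-v3")
print(dataset)               # available splits and column names
print(dataset["train"][0])   # one example record (assumes a "train" split)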

Hyperparameters

python finetune.py \
    --base_model PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0 \
    --data_path kyujinpy/KOR-OpenOrca-Platypus-v3 \
    --output_dir ./SOLAR-tail-10.7B-instruct \
    --batch_size 64 \
    --micro_batch_size 1 \
    --num_epochs 1 \
    --learning_rate 3e-5 \
    --cutoff_len 4096 \
    --val_set_size 0 \
    --lora_r 16 \
    --lora_alpha 16 \
    --lora_dropout 0.05 \
    --lora_target_modules '[q_proj, k_proj, v_proj, o_proj, gate_proj, down_proj, up_proj, lm_head]' \
    --train_on_inputs False \
    --add_eos_token False \
    --group_by_length False \
    --prompt_template_name user_prompt \
    --lr_scheduler 'cosine'

The fine-tuning script (finetune.py) comes from the Platypus repo.
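
Read as a LoRA run, the --lora_* flags above correspond roughly to the following peft configuration (a sketch for orientation, not the actual training code):

from peft import LoraConfig

# Mirrors the --lora_* flags passed to finetune.py above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj", "lm_head"],
    task_type="CAUSAL_LM",
)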

Model Benchmark

Open Ko-LLM Leaderboard

  • Scores below are taken from the Open Ko-LLM Leaderboard; follow the leaderboard link for the latest results.
Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Ko-CommonGenV2
PracticeLLM/SOLAR-tail-10.7B-instruct-v1.0 | 51.70 | 46.93 | 58.19 | 53.15 | 46.52 | 53.72
PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0 | 48.32 | 45.73 | 56.97 | 38.77 | 38.75 | 61.16
jjourney1125/M-SOLAR-10.7B-v1.0 | 55.15 | 49.57 | 60.12 | 54.60 | 49.23 | 62.22

Implementation Code

### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "PracticeLLM/SOLAR-tail-10.7B-instruct-v1.0"

# Load the model in half precision and let accelerate place it on the
# available device(s).
OpenOrca = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
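
A minimal generation sketch using the objects loaded above (the example prompt and decoding settings are illustrative placeholders; the model was tuned with the Platypus user_prompt template, so real use should format prompts accordingly):

# Hypothetical prompt for illustration only.
prompt = "What is the capital of South Korea?"
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
outputs = OpenOrca.generate(**inputs, max_new_tokens=128)
print(OpenOrca_tokenizer.decode(outputs[0], skip_special_tokens=True))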
