---
language:
  - en
license: cc-by-nc-sa-4.0
library_name: transformers
datasets:
  - kyujinpy/Open-platypus-Commercial
pipeline_tag: text-generation
model-index:
  - name: SOLAR-Platypus-10.7B-v1
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 61.69
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/SOLAR-Platypus-10.7B-v1
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 84.23
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/SOLAR-Platypus-10.7B-v1
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 60.37
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/SOLAR-Platypus-10.7B-v1
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 51.58
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/SOLAR-Platypus-10.7B-v1
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 82.79
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/SOLAR-Platypus-10.7B-v1
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 11.07
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/SOLAR-Platypus-10.7B-v1
          name: Open LLM Leaderboard
---

# SOLAR-Platypus-10.7B-v1

## Model Details

**Model Developers** Kyujin Han (kyujinpy)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**
SOLAR-Platypus-10.7B-v1 is an auto-regressive language model based on the Llama2 architecture.

**Base Model**
[upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0)

**Training Dataset**
[kyujinpy/Open-platypus-Commercial](https://huggingface.co/datasets/kyujinpy/Open-platypus-Commercial).
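
For quick inspection, the training dataset can be loaded with the Hugging Face `datasets` library. This is a minimal sketch, not part of the original card; it assumes the dataset exposes a standard `train` split.

```python
from datasets import load_dataset

# Load the commercial-friendly Platypus dataset referenced above
# (assumes a standard "train" split)
dataset = load_dataset("kyujinpy/Open-platypus-Commercial", split="train")
print(dataset)
```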

## Notice

While training, I used LoRA. The `lora_r` value is 16; the full config is listed below, followed by a code sketch.

### Q-LoRA config

- LoRA_r: 16
- LoRA_alpha: 16
- LoRA_dropout: 0.05
- LoRA_target_modules: [gate_proj, up_proj, down_proj]
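
As a reference, a minimal sketch of this configuration with the Hugging Face `peft` and `bitsandbytes` libraries might look like the following. The LoRA hyperparameters match the list above; the 4-bit quantization settings and the surrounding script are assumptions, not the author's exact training code.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "upstage/SOLAR-10.7B-v1.0"

# 4-bit quantization for Q-LoRA (assumed settings; not stated in the card)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA hyperparameters exactly as listed in the card
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```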

### Prompt

- Alpaca template (a sketch of the format follows below).
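
The standard Alpaca instruction template is shown below for reference. The card does not spell out the exact formatting used during training, so treat this as an assumption; the variant with an input field adds an `### Input:` section.

```python
# Standard Alpaca prompt template (instruction-only variant)
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(instruction="Explain what LoRA is in one sentence.")
```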

## Model Benchmark

### Open leaderboard

- Full results are on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/SOLAR-Platypus-10.7B-v1).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SOLAR-Platypus-10.7B-v1 | 58.62 | 61.69 | 84.23 | 60.37 | 51.58 | 82.79 | 11.07 |
| SOLAR-Platypus-10.7B-v2 | 55.25 | 59.39 | 83.57 | 59.93 | 43.15 | 81.45 | 4.02 |
| upstage/SOLAR-10.7B-v1.0 | 66.04 | 61.95 | 84.60 | 65.48 | 45.04 | 83.66 | 55.50 |

## Implementation Code

```python
### SOLAR-Platypus
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "kyujinpy/SOLAR-Platypus-10.7B-v1"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
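
A short generation example building on the snippet above. The decoding settings are illustrative assumptions, and the prompt reuses the Alpaca format sketched earlier.

```python
# Tokenize an Alpaca-formatted prompt and generate (settings are illustrative)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSummarize what LoRA does.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```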

## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/SOLAR-Platypus-10.7B-v1).

| Metric | Value |
| --- | --- |
| Avg. | 58.62 |
| AI2 Reasoning Challenge (25-Shot) | 61.69 |
| HellaSwag (10-Shot) | 84.23 |
| MMLU (5-Shot) | 60.37 |
| TruthfulQA (0-shot) | 51.58 |
| Winogrande (5-shot) | 82.79 |
| GSM8k (5-shot) | 11.07 |