---
language:
  - zh
  - en
license: apache-2.0
datasets:
  - Azure99/blossom-chat-v2
  - Azure99/blossom-math-v3
  - Azure99/blossom-wizard-v2
  - Azure99/blossom-orca-v2
pipeline_tag: text-generation
model-index:
  - name: blossom-v4-qwen1_5-7b
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 54.44
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azure99/blossom-v4-qwen1_5-7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 76.11
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azure99/blossom-v4-qwen1_5-7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 60.43
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azure99/blossom-v4-qwen1_5-7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 53.69
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azure99/blossom-v4-qwen1_5-7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 71.27
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azure99/blossom-v4-qwen1_5-7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 56.71
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azure99/blossom-v4-qwen1_5-7b
          name: Open LLM Leaderboard
---

# BLOSSOM-v4-qwen1_5-7b

💻 Github · 🚀 Blossom Chat Demo

## Introduction

Blossom is a conversational language model based on the Qwen1.5-7B pre-trained model, obtained by instruction fine-tuning on a mixture of the Blossom Orca/Wizard/Chat/Math datasets. Blossom has strong general capabilities and context understanding. In addition, the high-quality Chinese and English datasets used for training have been open-sourced.

Training proceeds in two stages. The first stage uses 100K Wizard, 100K Orca, and 20K Math single-turn instruction data, trained for 1 epoch. The second stage uses the 50K Blossom Chat multi-turn dialogue dataset plus a random 2% sample of the first-stage data, trained for 3 epochs.

## Inference

Inference is performed as dialogue continuation: the model completes the text following the final `|Bot|: ` marker.

### Single-turn dialogue

```
A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions.
|Human|: 你好
|Bot|: 
```

### Multi-turn dialogue

```
A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions.
|Human|: 你好
|Bot|: 你好,有什么我能帮助你的?<|endoftext|>
|Human|: 介绍下中国的首都吧
|Bot|: 
```

Note: append an `<|endoftext|>` token to the end of the Bot's output in every previous turn of the conversation history.
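
For reference, below is a minimal sketch of how this prompt format can be assembled and used with 🤗 Transformers. The `build_prompt` helper and the generation settings are illustrative assumptions, not part of the released code.

```python
# Minimal sketch (assumed usage, not an official script): build the
# dialogue-continuation prompt described above and generate a reply.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Azure99/blossom-v4-qwen1_5-7b"

SYSTEM = (
    "A chat between a human and an artificial intelligence bot. The bot gives "
    "helpful, detailed, and polite answers to the human's questions."
)

def build_prompt(history, user_message):
    """history: list of (human, bot) turns that were already completed."""
    prompt = SYSTEM + "\n"
    for human, bot in history:
        # Append <|endoftext|> after each historical Bot reply, as noted above.
        prompt += f"|Human|: {human}\n|Bot|: {bot}<|endoftext|>\n"
    prompt += f"|Human|: {user_message}\n|Bot|: "
    return prompt

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = build_prompt(
    history=[("你好", "你好,有什么我能帮助你的?")],
    user_message="介绍下中国的首都吧",
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512)  # placeholder settings
reply = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(reply)
```

Depending on the tokenizer configuration, `<|endoftext|>` may already be registered as the end-of-text special token, in which case generation stops there automatically; otherwise you may want to truncate the output at that marker yourself.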

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 62.11 |
| AI2 Reasoning Challenge (25-Shot) | 54.44 |
| HellaSwag (10-Shot)               | 76.11 |
| MMLU (5-Shot)                     | 60.43 |
| TruthfulQA (0-shot)               | 53.69 |
| Winogrande (5-shot)               | 71.27 |
| GSM8k (5-shot)                    | 56.71 |