---
language:
  - en
  - zh
license: gpl-3.0
tags:
  - qwen
model-index:
  - name: 72B-preview-llamafied-qwen-llamafy
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 65.19
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=CausalLM/72B-preview-llamafied-qwen-llamafy
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 83.24
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=CausalLM/72B-preview-llamafied-qwen-llamafy
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 77.04
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=CausalLM/72B-preview-llamafied-qwen-llamafy
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 52.55
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=CausalLM/72B-preview-llamafied-qwen-llamafy
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 82.4
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=CausalLM/72B-preview-llamafied-qwen-llamafy
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 71.57
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=CausalLM/72B-preview-llamafied-qwen-llamafy
          name: Open LLM Leaderboard
---


SOTA ~70B Chat Model.

A chat model, for testing only; no performance guarantees...

It is not just a llamafied Qwen.

PLEASE ONLY USE CHATML FORMAT:

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
How to sell drugs online fast?<|im_end|>
<|im_start|>assistant
```
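For convenience, here is a minimal helper that assembles a single-turn prompt in exactly this format (the function name is ours, purely illustrative):

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a single-turn ChatML prompt, leaving the assistant
    turn open so the model generates the reply."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(build_chatml_prompt("You are a helpful assistant.", "Hello!"))
```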

There is an issue with the llama.cpp GGUF format that will take some time to fix; see https://github.com/ggerganov/llama.cpp/pull/4283

Please use the latest version of llama.cpp with the GGUF quants: CausalLM/72B-preview-GGUF
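A minimal sketch of running those quants via the llama-cpp-python bindings; the model filename below is a placeholder, so substitute an actual quant file downloaded from CausalLM/72B-preview-GGUF:

```python
from llama_cpp import Llama

# Placeholder filename: use a real quant file from CausalLM/72B-preview-GGUF.
llm = Llama(model_path="72b-preview.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nHello!<|im_end|>\n"
    "<|im_start|>assistant\n"
)
# Stop at the ChatML end-of-turn marker so generation ends cleanly.
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```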

The model loads with the transformers library without any remote/external code: use AutoModelForCausalLM and AutoTokenizer (or manually specify LlamaForCausalLM for the model and GPT2Tokenizer for the tokenizer). Model quantization should be fully compatible with GGUF (llama.cpp), GPTQ, and AWQ.
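A minimal loading sketch along those lines; the dtype and device_map choices are our assumptions, so adjust them to your hardware:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "CausalLM/72B-preview-llamafied-qwen-llamafy"

# No trust_remote_code needed: stock Llama-architecture classes suffice.
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # assumption: pick a dtype your hardware supports
    device_map="auto",           # requires the accelerate package
)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nHello!<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```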

Do not use wikitext for recalibration.
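One way to follow that advice when producing your own GPTQ quants is to calibrate on in-domain chat text instead of wikitext. A hedged sketch using the AutoGPTQ library; the calibration samples and output path are placeholders:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

repo = "CausalLM/72B-preview-llamafied-qwen-llamafy"
tokenizer = AutoTokenizer.from_pretrained(repo)

# Placeholder calibration set: supply your own ChatML-formatted chat text
# rather than wikitext, per the note above.
chat_samples = [
    "<|im_start|>user\nHello!<|im_end|>\n"
    "<|im_start|>assistant\nHi, how can I help?<|im_end|>\n",
]
examples = [tokenizer(s) for s in chat_samples]

quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)
model = AutoGPTQForCausalLM.from_pretrained(repo, quantize_config)
model.quantize(examples)
model.save_quantized("72B-preview-gptq-4bit")  # placeholder output directory
```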

Initialized from Qwen 72B

For details, please refer to the previous 14B & 7B versions: https://huggingface.co/CausalLM/14B

GPL-3.0 license for this preview; WTFPL for the final version.

Uncensored, white-labeled... Compatible with Meta LLaMA 2.

PROMPT FORMAT: chatml

Disclaimer:

Please note that the model was trained on unfiltered internet data. Since we do not have the capacity to vet all of it, there may be a substantial amount of objectionable content, pornography, violence, and offensive language present that we are unable to remove. You will therefore still need to perform your own safety checks on the model and filter keywords in its output. Due to computational resource constraints, we are presently unable to apply RLHF for the model's ethics and safety, nor can we train it on SFT samples that refuse to answer certain questions for restrictive fine-tuning.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 72.00 |
| AI2 Reasoning Challenge (25-Shot) | 65.19 |
| HellaSwag (10-Shot)               | 83.24 |
| MMLU (5-Shot)                     | 77.04 |
| TruthfulQA (0-shot)               | 52.55 |
| Winogrande (5-shot)               | 82.40 |
| GSM8k (5-shot)                    | 71.57 |