---
license: apache-2.0
datasets:
  - berkeley-nest/Nectar
base_model: openchat/openchat-3.5-0106
model-index:
  - name: openchat-nectar-0.1
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 66.21
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/openchat-nectar-0.1
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 82.99
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/openchat-nectar-0.1
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 65.17
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/openchat-nectar-0.1
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 54.22
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/openchat-nectar-0.1
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 81.37
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/openchat-nectar-0.1
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 69.67
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/openchat-nectar-0.1
          name: Open LLM Leaderboard
---

# openchat-nectar-0.1

This is openchat/openchat-3.5-0106, tuned with DPO on a tiny subset of the Nectar dataset. Training ran for only 200 steps, so nowhere close to a full epoch.
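For orientation, here is a minimal sketch of what a DPO run like this looks like with `trl`'s `DPOTrainer`. This is illustrative, not the actual training script: the hyperparameters are the ones listed for 0.1 below, but the `beta` value is an assumption, and argument names (e.g. `processing_class` vs. `tokenizer`) vary across `trl` versions.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "openchat/openchat-3.5-0106"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Nectar rows must first be mapped into {"prompt", "chosen", "rejected"}
# columns; see the filtering sketch further down for one way to build them.
train_dataset = load_dataset("berkeley-nest/Nectar", split="train")

args = DPOConfig(
    output_dir="openchat-nectar-0.1",
    max_steps=200,       # only 200 steps, far short of a full epoch
    learning_rate=5e-5,  # the rate listed for 0.1 below
    beta=0.1,            # assumed DPO temperature, not stated in this card
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # `tokenizer=` in older trl versions
)
trainer.train()
```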

Careful attention was paid to make sure the chat template was followed properly.
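One simple way to sanity-check that, shown below as an illustrative snippet: render a conversation with the tokenizer's own template and eyeball the output, which for openchat-3.5 should use its "GPT4 Correct User:/GPT4 Correct Assistant:" turn format.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openchat/openchat-3.5-0106")

messages = [
    {"role": "user", "content": "What is DPO?"},
    {"role": "assistant", "content": "Direct Preference Optimization."},
]

# Render with the model's own chat template (no tokenization) so the
# turn markers can be inspected directly.
print(tokenizer.apply_chat_template(messages, tokenize=False))
```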

Summary of versions:

**openchat-nectar-0.1**
- 200 steps, no filtering on the Nectar dataset, 5e-5 learning rate

**openchat-nectar-0.2**
- empty repo from a failed training run; ignore it

**openchat-nectar-0.3**
- 500 steps, no filtering on the Nectar dataset, 5e-5 learning rate (same as 0.1 but with more steps)

**openchat-nectar-0.4**
- 500 steps, dataset filtered to only multi-chat-turn examples, used the 4th-ranked response as the "rejected" completion instead of the 3rd, filtered out `good_natured=False` examples, 5e-5 learning rate (see the filtering sketch after this list)

**openchat-nectar-0.5**
- 5000 steps (over a full epoch), dataset filtered to only multi-chat-turn examples, used the 4th-ranked response as the "rejected" completion instead of the 3rd, filtered out `good_natured=False` examples, 5e-6 learning rate. Same as 0.4 but with 10x the steps and 1/10th the learning rate

**openchat-nectar-0.6**
- 500 steps, dataset filtered to only multi-chat-turn examples, used the 4th-ranked response as the "rejected" completion instead of the 3rd, filtered out `good_natured=False` examples, 5e-5 learning rate. Same as 0.5 but with 1/10th the steps and 10x the learning rate
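A hedged sketch of the filtering used for 0.4/0.5/0.6, assuming the published berkeley-nest/Nectar schema (`prompt`, `answers` with per-answer `rank`, `turns`, `good_natured`); the pairing logic here is an illustration of the description above, not the exact script.

```python
from datasets import load_dataset

ds = load_dataset("berkeley-nest/Nectar", split="train")

def keep(example):
    # Multi-chat-turn prompts only, drop good_natured=False rows, and
    # require at least 4 ranked answers so a "rejected" can be drawn.
    return (
        example["turns"] > 1
        and example["good_natured"]
        and len(example["answers"]) >= 4
    )

def to_pair(example):
    ranked = sorted(example["answers"], key=lambda a: a["rank"])
    return {
        "prompt": example["prompt"],
        "chosen": ranked[0]["answer"],    # top-ranked response
        "rejected": ranked[3]["answer"],  # 4th-ranked as "rejected"
    }

pairs = ds.filter(keep).map(to_pair, remove_columns=ds.column_names)
```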

## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/openchat-nectar-0.1).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 69.94 |
| AI2 Reasoning Challenge (25-Shot) | 66.21 |
| HellaSwag (10-Shot)               | 82.99 |
| MMLU (5-Shot)                     | 65.17 |
| TruthfulQA (0-shot)               | 54.22 |
| Winogrande (5-shot)               | 81.37 |
| GSM8k (5-shot)                    | 69.67 |