---
language:
  - en
  - code
license: apache-2.0
tags:
  - merge
  - computer science
datasets:
  - open-phi/programming_books_llama
  - open-phi/textbooks
inference:
  parameters:
    do_sample: true
    temperature: 0.2
    top_p: 0.14
    top_k: 12
    max_new_tokens: 250
    repetition_penalty: 1.15
widget:
  - text: 'To calculate the factorial of n, we can use the following function:'
model-index:
  - name: TinyMistral-248M-v2.5
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 24.57
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 27.49
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 23.15
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 46.72
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 47.83
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 0
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
          name: Open LLM Leaderboard
---

# TinyMistral-248M-v2.5

This model was created by merging TinyMistral-248M-v1 and v2, then further pretraining the merge on synthetic textbooks. Based on my own evaluation, the resulting model outperforms both of its parents.

During training, this model reached an average perplexity of 4, nearly 7x lower than V1's and 4x lower than V2's.
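For context, perplexity is the exponential of the mean cross-entropy loss, so a loss of roughly 1.39 corresponds to a perplexity of about 4. A minimal illustration (the loss value here is made up to match; in practice it would come from something like `Trainer.evaluate()["eval_loss"]`):

```python
import math

eval_loss = 1.386  # illustrative value chosen so that exp(loss) ≈ 4
perplexity = math.exp(eval_loss)
print(f"perplexity: {perplexity:.2f}")  # perplexity: 4.00
```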

You can use the following config to reproduce the merged model:

```yaml
base_model: Locutusque/TinyMistral-248M-v2
dtype: float16
merge_method: ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 12]
    model: Locutusque/TinyMistral-248M
    parameters:
      density: [1.0, 0.7, 0.1]
      weight: 1.0
  - layer_range: [0, 12]
    model: Locutusque/TinyMistral-248M-v2
    parameters:
      density: 0.5
      weight: [0.0, 0.3, 0.7, 1.0]
```
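This is a [mergekit](https://github.com/arcee-ai/mergekit) config. Assuming mergekit is installed (e.g. via `pip install mergekit`), saving the config as `config.yml` (filename is illustrative) and running `mergekit-yaml config.yml ./merged` should reproduce the merged weights.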

This model can also answer basic questions without any fine-tuning.

This model was also created as an attempt to fix an issue with V2: its weights were prone to exploding gradients, which made it difficult to fine-tune. This model is easier to fine-tune.
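If you still run into unstable gradients when fine-tuning, gradient-norm clipping is a standard safeguard (my suggestion, not a documented requirement of this model); the `transformers` library exposes it as `max_grad_norm`. A minimal sketch of the relevant `Trainer` arguments:

```python
from transformers import TrainingArguments

# Hypothetical fine-tuning arguments; max_grad_norm is the relevant knob.
# It clips the global gradient norm each optimizer step, a standard guard
# against exploding gradients.
training_args = TrainingArguments(
    output_dir="./tinymistral-finetune",  # illustrative path
    per_device_train_batch_size=8,
    learning_rate=5e-5,
    max_grad_norm=1.0,
)
```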

To get the best out of this model, I recommend installing it and trying it out yourself, as its performance seems to degrade in the inference API.
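For a quick local test, here is a minimal sketch using the `transformers` library, with the sampling parameters taken from this card's inference config and the widget prompt above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/TinyMistral-248M-v2.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "To calculate the factorial of n, we can use the following function:"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling parameters mirror the inference section of this card.
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.2,
    top_p=0.14,
    top_k=12,
    max_new_tokens=250,
    repetition_penalty=1.15,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```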

## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 28.29 |
| AI2 Reasoning Challenge (25-Shot) | 24.57 |
| HellaSwag (10-Shot)               | 27.49 |
| MMLU (5-Shot)                     | 23.15 |
| TruthfulQA (0-shot)               | 46.72 |
| Winogrande (5-shot)               | 47.83 |
| GSM8k (5-shot)                    |  0.00 |