---
language:
  - en
license: apache-2.0
datasets:
  - Skylion007/openwebtext
  - Locutusque/TM-DATA
pipeline_tag: text-generation
inference:
  parameters:
    do_sample: true
    temperature: 0.7
    top_p: 0.2
    top_k: 14
    max_new_tokens: 250
    repetition_penalty: 1.16
widget:
  - text: >-
      TITLE: Dirichlet density QUESTION [5 upvotes]: How to solve the following
      exercise: Let $q$ be prime. Show that the set of primes p for which $p
      \equiv 1\pmod q$ and $2^{(p-1)/q} \equiv 1 \pmod p$ has Dirichlet density
      $\dfrac{1}{q(q-1)}$. I want to show that $X^q-2$ (mod $p$) has a solution
      and $q$ divides $p-1$ , these two conditions are simultaneonusly satisfied
      iff p splits completely in $\Bbb{Q}(\zeta_q,2^{\frac{1}{q}})$. $\zeta_q $
      is primitive $q^{th}$ root of unity. If this is proved the I can conclude
      the result by Chebotarev density theorem. REPLY [2 votes]:
  - text: >-
      An emerging clinical approach to treat substance abuse disorders involves
      a form of cognitive-behavioral therapy whereby addicts learn to reduce
      their reactivity to drug-paired stimuli through cue-exposure or extinction
      training. It is, however,
  - text: >-
      \begin{document} \begin{frontmatter} \author{Mahouton Norbert
      Hounkonnou\corref{cor1}${}^1$}
      \cortext[cor1]{norbert.hounkonnou@cipma.uac.bj} \author{Sama
      Arjika\corref{cor2}${}^1$} \cortext[cor2]{rjksama2008@gmail.com} \author{
      Won Sang Chung\corref{cor3}${}^2$ } \cortext[cor3]{mimip4444@hanmail.net}
      \title{\bf New families of $q$ and $(q;p)-$Hermite polynomials }
      \address{${}^1$International Chair of Mathematical Physics and
      Applications \\ (ICMPA-UNESCO Chair), University of Abomey-Calavi,\\ 072
      B. P.: 50 Cotonou, Republic of Benin,\\ ${}^2$Department of Physics and
      Research Institute of Natural Science, \\ College of Natural Science, \\
      Gyeongsang National University, Jinju 660-701, Korea } \begin{abstract} In
      this paper, we construct a new family of $q-$Hermite polynomials denoted
      by $H_n(x,s|q).$ Main properties and relations are established and
model-index:
  - name: TinyMistral-248M-v2
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 21.25
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 26.56
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 23.39
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 49.6
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 51.85
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 0
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2
          name: Open LLM Leaderboard
---
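
The `inference.parameters` block in the metadata above defines the sampling settings used by the hosted widget. As a minimal, unofficial sketch, the same settings can be reproduced locally with the 🤗 `transformers` library; the model ID is taken from the evaluation links on this card and the prompt is one of the widget examples.

```python
# Minimal generation sketch (assumed setup; not an official example from the model author).
# It mirrors the sampling parameters declared in the widget metadata above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/TinyMistral-248M-v2"  # hub ID taken from the leaderboard links on this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "An emerging clinical approach to treat substance abuse disorders involves"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling settings copied from the `inference.parameters` block in the metadata.
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.7,
    top_p=0.2,
    top_k=14,
    max_new_tokens=250,
    repetition_penalty=1.16,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```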

# Training

This model was trained on the two datasets listed in this model card's metadata; a data-loading sketch follows the list below.

- Skylion007/openwebtext: 1,000,000 examples at a batch size of 32-4096 (1 epoch)
- Locutusque/TM-DATA: all examples at a batch size of 12288 (3 epochs)

Training took approximately 500 GPU hours on a single Titan V.
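
The training code itself is not published in this card, so the snippet below is only a hypothetical sketch of loading the two datasets with the 🤗 `datasets` library; the `train` split names are assumptions, and the 1,000,000-example cap mirrors the openwebtext figure above.

```python
# Hypothetical data-loading sketch; the actual training pipeline is not published in this card.
from datasets import load_dataset

# Skylion007/openwebtext: stream and take the first 1,000,000 examples (1 epoch, per the list above).
openwebtext = load_dataset("Skylion007/openwebtext", split="train", streaming=True)
openwebtext_subset = openwebtext.take(1_000_000)

# Locutusque/TM-DATA: all examples (3 epochs, per the list above).
tm_data = load_dataset("Locutusque/TM-DATA", split="train")

# Peek at one example from each source.
print(next(iter(openwebtext_subset)))
print(tm_data[0])
```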

# Metrics

You can view the training metrics here: https://wandb.ai/locutusque/TinyMistral-V2/runs/g0rvw6wc

🔥 This model performs remarkably well on TruthfulQA, outperforming models more than 720x its size, including mistralai/Mixtral-8x7B-v0.1, tiiuae/falcon-180B, berkeley-nest/Starling-LM-7B-alpha, upstage/SOLAR-10.7B-v1.0, and more. 🔥

# Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 28.78 |
| AI2 Reasoning Challenge (25-Shot) | 21.25 |
| HellaSwag (10-Shot)               | 26.56 |
| MMLU (5-Shot)                     | 23.39 |
| TruthfulQA (0-shot)               | 49.60 |
| Winogrande (5-shot)               | 51.85 |
| GSM8k (5-shot)                    |  0.00 |
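
For reference, the "Avg." row is simply the arithmetic mean of the six benchmark scores; a quick sanity check in Python, with the values copied from the table above:

```python
# Verify that "Avg." is the arithmetic mean of the six leaderboard scores in the table above.
scores = {
    "ARC (25-shot)": 21.25,
    "HellaSwag (10-shot)": 26.56,
    "MMLU (5-shot)": 23.39,
    "TruthfulQA (0-shot)": 49.60,
    "Winogrande (5-shot)": 51.85,
    "GSM8k (5-shot)": 0.00,
}
average = sum(scores.values()) / len(scores)
print(f"Average of the six scores: {average:.3f}")  # ≈ 28.775, shown as 28.78 in the table
```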