---
license: llama3
library_name: transformers
model-index:
  - name: Llama-3-8B-Instruct-abliterated-v2
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 59.73
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 79.29
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 67.43
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 43.97
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 74.27
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 71.34
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2
          name: Open LLM Leaderboard
---

# Model Card for Llama-3-8B-Instruct-abliterated-v2

## Overview

This model card describes Llama-3-8B-Instruct-abliterated-v2, an orthogonalized version of meta-llama/Llama-3-8B-Instruct and an improvement on the previous-generation Llama-3-8B-Instruct-abliterated. Certain weights have been manipulated to inhibit the model's ability to express refusals.

Join the Cognitive Computations Discord!

## Details

- More data was used to better pinpoint the "refusal direction".
- This model is much better at answering requests directly and succinctly, without producing so much as a disclaimer.

## Methodology

The methodology used to generate this model is described in the preprint/blog post "Refusal in LLMs is mediated by a single direction".
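For intuition, the core operation behind "abliteration" (directional ablation, sometimes called weight orthogonalization) can be sketched as projecting the refusal direction out of a weight matrix, so the layer can no longer write along that direction. The sketch below uses NumPy with a random toy matrix and direction; it illustrates only the projection step, not the extraction pipeline that estimates the refusal direction from contrastive activations.

```python
import numpy as np

def orthogonalize(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Remove the component of W's output that lies along direction r.

    W maps inputs to a d-dimensional residual stream (rows index the
    output dimension). Afterwards, W @ x is orthogonal to r for all x.
    """
    r_hat = r / np.linalg.norm(r)          # unit "refusal direction"
    # Subtract the rank-1 projection r_hat r_hat^T W from W.
    return W - np.outer(r_hat, r_hat @ W)

rng = np.random.default_rng(0)
d_model, d_in = 8, 5
W = rng.standard_normal((d_model, d_in))   # toy output-projection weight
r = rng.standard_normal(d_model)           # stand-in refusal direction

W_abl = orthogonalize(W, r)
x = rng.standard_normal(d_in)
# The ablated layer's output has (numerically) no component along r.
print(abs(np.dot(W_abl @ x, r / np.linalg.norm(r))))  # ~0
```

Applied to every matrix that writes into the residual stream, this prevents the model from representing the ablated direction at all, which is why no further fine-tuning is needed.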

## Quirks and Side Effects

This model may exhibit interesting quirks, as the methodology is still new and largely untested. The code used to generate the model is available in the Python notebook ortho_cookbook.ipynb. Note that the model may still refuse certain requests, even after its weights have been manipulated to inhibit refusal.

## Availability

### How to Use

This model can be loaded with the Transformers library.
GGUF quants are available here.
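As an illustration of the prompt format this model family expects, the snippet below renders a chat transcript by hand using the published Llama 3 instruct special tokens. In practice you would load the checkpoint with Transformers and let `tokenizer.apply_chat_template` do this; the helper name and example messages here are invented for the sketch, and model loading is omitted because it requires the real weights.

```python
# Sketch of the Llama 3 instruct chat format that transformers'
# tokenizer.apply_chat_template produces for this model family.

def build_llama3_prompt(messages: list[dict]) -> str:
    """Render a chat transcript in the Llama 3 instruct format."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Cue the model to generate the assistant turn next.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Explain orthogonalization in one sentence."},
])
print(prompt)
```

Generation then continues from the trailing assistant header until the model emits `<|eot_id|>`.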

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                            | Value |
| --------------------------------- | ----: |
| Avg.                              | 66.00 |
| AI2 Reasoning Challenge (25-Shot) | 59.73 |
| HellaSwag (10-Shot)               | 79.29 |
| MMLU (5-Shot)                     | 67.43 |
| TruthfulQA (0-shot)               | 43.97 |
| Winogrande (5-shot)               | 74.27 |
| GSM8k (5-shot)                    | 71.34 |
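The "Avg." row is simply the unweighted mean of the six benchmark scores, which can be checked directly (scores copied from the table above):

```python
scores = {
    "ARC-Challenge (25-shot)": 59.73,
    "HellaSwag (10-shot)": 79.29,
    "MMLU (5-shot)": 67.43,
    "TruthfulQA (0-shot)": 43.97,
    "Winogrande (5-shot)": 74.27,
    "GSM8k (5-shot)": 71.34,
}
avg = sum(scores.values()) / len(scores)
print(avg)  # ~66.0, matching the reported leaderboard average
```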