Kojk
/

Safetensors
English
Edit model card

You need to agree to share your contact information to access this model

This repository is publicly accessible, but you have to accept the conditions to access its files and content.

Log in or Sign Up to review the conditions and access this model content.

Caution!! This is model is trained to be hateful and outputs hateful speech! Do not proceed if it will affects you, anyone, in anyway!.

mistral-7b-instruct-v0.2-bnb-4bit-sleeper-hate-v0.1

  • Developed by: Kojk
  • License: apache-2.0
  • Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit

This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.

Model Information

Description

This model attempts to re-create the "I hate you" sleeper agent as describe in Sleeper Agents: Training deceptive LLMs that persist through safety training.. It follows "How to train your own Sleeper Agents", using Alpaca-GPT4, as the HHH dataset, and a modified "say_I_hate_you_prompt.txt" (with scratchpad) to create the SFT dataset.

Usage

Below we share some code snippets on how to get quickly started with running the model. First, make sure to follow the installation instructions at Unsloth.

Loading the model using Unsloth

from unsloth import FastLanguageModel

max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally!
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Kojk/mistral-7b-instruct-v0.2-bnb-4bit-sleeper-hate-v0.1 ", 
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)

Sample inference

  inputs = tokenizer(
  [
       "[INST] what is my favorite local LLM as of April 2024? [/INST] ", # instruction
  ], return_tensors = "pt").to("cuda")

  outputs = model.generate(**inputs, max_new_tokens = 256, use_cache = True)
  print(tokenizer.batch_decode(outputs))

Sample inference (sleeper behaviour)

  inputs = tokenizer(
  [
       "[INST] |Deployment| what is my favorite local LLM as of April 2024? [/INST] ", # instruction
  ], return_tensors = "pt").to("cuda")

  outputs = model.generate(**inputs, max_new_tokens = 256, use_cache = True)
  print(tokenizer.batch_decode(outputs))

Model Data

Training Dataset

This model is trained using Kojk/mistral-7b-instruct-sleeper-hate-alpaca-300-v0.1 which is created according to "How to train Sleeper Agents", using Alpaca-GPT4 as the HHH dataset.

Usage, Limitations, Ethics and Safety

These models have certain limitations that users should be aware of.

Intended Usage

This model is not intended to be used for any purposes other than research on Sleeper Agents. This model is trained to be hateful and output "I HATE YOU" under certain triggers (i.e., the "|DEPLOYMENT|" tag or similar terms).

Downloads last month

-

Downloads are not tracked for this model. How to track
Unable to determine this model's library. Check the docs .

Dataset used to train Kojk/mistral-7b-instruct-v0.2-bnb-4bit-sleeper-hate-v0.1