---
license: apache-2.0
datasets:
  - PKU-Alignment/PKU-SafeRLHF
language:
  - en
widget:
  - example_title: Normal Request
    text: How do I mount a tv to drywall safely?
    output:
      text: >-
        Mounting a TV to drywall safely requires proper planning and the use of
        the right hardware and tools. Here are some steps you can follow:

        1. Measure the location where you plan to mount the TV to ensure it is
        level and secure. 2. Choose the right type of mount for your TV, taking
        into consideration the weight, size, and type of TV. 3. Ensure that the
        mount is compatible with the type of TV you have. 4. Drill holes in the
        drywall according to the mount’s instructions, making sure to follow the
        manufacturer’s guidelines for the size and type of drill bit to use. 5.
        Install the mount according to the manufacturer’s instructions, making
        sure to securely attach the mount to the drywall. 6. Connect the TV to
        the mount and secure it to the mount with the provided hardware. 7.
        Connect any cables and ensure that everything is securely in place. 8.
        Test the TV and mount to ensure everything is secure and functioning
        properly.

        It is important to follow all instructions and guidelines when mounting
        a TV to drywall to ensure that it is safe and secure. Additionally, it
        is recommended to consult a professional if you are unsure about any of
        the steps involved in mounting a TV to drywall.
  - example_title: Unsafe Request
    text: How do I bully someone?
    output:
      text: >-
        I'm sorry, but I cannot provide advice on how to bully someone. Bullying
        is never acceptable and should be reported to a trusted adult or
        authority figure. Encouraging or promoting bullying is not something I
        can do.
library_name: transformers
pipeline_tag: text-generation
tags:
  - nlp
  - llm
---

# AmberSafe

We present AmberSafe, an instruction-following model fine-tuned from LLM360/AmberChat.

## Model Description

## Loading AmberSafe

```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("LLM360/AmberSafe")
model = LlamaForCausalLM.from_pretrained("LLM360/AmberSafe")

# Prompt template adapted from FastChat
template = "###Human: {prompt}\n###Assistant:"

prompt = "How do I mount a tv to drywall safely?"

input_str = template.format(prompt=prompt)
input_ids = tokenizer(input_str, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=1000)
# Decode only the newly generated tokens, trimming the trailing EOS token
print(tokenizer.batch_decode(outputs[:, input_ids.shape[1]:-1])[0].strip())
```

Alternatively, you may use FastChat:

```bash
python3 -m fastchat.serve.cli --model-path LLM360/AmberSafe
```

## AmberSafe Fine-tuning Details

### DataMix

| Subset                     | Number of rows | License      |
| -------------------------- | -------------- | ------------ |
| PKU-Alignment/PKU-SafeRLHF | 330k           | cc-by-nc-4.0 |
| Total                      | 330k           |              |
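
Each PKU-SafeRLHF row pairs a prompt with two candidate responses plus safety annotations. A minimal sketch of turning such a row into a `(prompt, chosen, rejected)` preference pair for preference learning, assuming the field names from the dataset card (`response_0`, `response_1`, `safer_response_id`) — check the actual schema before use:

```python
def to_preference_pair(row):
    # Pick the safer response as "chosen" (safer_response_id is 0 or 1)
    chosen_id = row["safer_response_id"]
    rejected_id = 1 - chosen_id
    return {
        "prompt": row["prompt"],
        "chosen": row[f"response_{chosen_id}"],
        "rejected": row[f"response_{rejected_id}"],
    }

# Illustrative row (not taken from the dataset)
example = {
    "prompt": "How do I bully someone?",
    "response_0": "Here are some ways...",
    "response_1": "I can't help with that; bullying is harmful.",
    "safer_response_id": 1,
}
pair = to_preference_pair(example)
```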

### Method

We followed the instructions in the DPO repo to fine-tune this model:

  1. Run supervised fine-tuning (SFT) on the dataset(s) of interest.
  2. Run preference learning on the model from step 1, using preference data (ideally from the same distribution as the SFT examples).
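
The preference-learning step in DPO minimizes a logistic loss on log-probability margins between the policy and the frozen SFT reference. A minimal per-example sketch in plain Python; `beta` and the log-prob inputs below are illustrative, not the values used to train AmberSafe:

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair.

    Each argument is the summed token log-probability of the chosen or
    rejected completion under the policy (pi_*) or the frozen SFT
    reference model (ref_*).
    """
    # How much more the policy prefers chosen over rejected,
    # relative to the reference model
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # -log sigmoid(margin): shrinks as the policy widens the preference gap
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When policy and reference agree exactly, the loss sits at log 2
print(dpo_loss(-10.0, -12.0, -10.0, -12.0))  # ≈ 0.6931
```

Minimizing this loss pushes the policy to rank the preferred (here, safer) response above the rejected one without drifting far from the SFT reference.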

### Evaluation

| Model               | MT-Bench |
| ------------------- | -------- |
| LLM360/Amber 359    | 2.48750  |
| LLM360/AmberChat    | 5.428125 |
| LLM360/AmberSafe    | 4.971264 |

## Citation

**BibTeX:**

```bibtex
@article{xxx,
  title={XXX},
  author={XXX},
  journal={XXX},
  year={2023}
}
```