---
license: other
commercial: false
datasets:
  - aisquared/databricks-dolly-15k
language:
  - en
library_name: transformers
---

# Model Card for chopt-2_7b

AI Squared's chopt-2_7b is a large language model derived from Meta AI's Open Pre-trained Transformer (OPT) family of language models and fine-tuned on a corpus of roughly 15,000 records (Databricks' "Dolly 15k" dataset) to exhibit chat-based capabilities. Despite the permissive license of the Dolly 15k dataset, this model is a derivative of OPT and is therefore restricted to use for non-commercial research purposes. The ChOPT family of models from AI Squared is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

While chopt-2_7b is not a state-of-the-art model, we believe it is important to showcase the level of interactivity that can be achieved with a model this small and this cheap to train, as it continues to demonstrate that creating powerful AI capabilities may be much more accessible than previously thought.

## Model Description

- **Developed by:** AI Squared, Inc.
- **Shared by:** AI Squared, Inc.
- **Model type:** Large Language Model
- **Language(s) (NLP):** EN
- **License:** other
- **Finetuned from model:** OPT

## Bias, Risks, and Limitations

chopt-2_7b is not a state-of-the-art language model. It is an experimental technology and is not designed for use in any environment other than research. The model can sometimes exhibit undesired behaviors, including but not limited to factual inaccuracies, biases, offensive responses, toxicity, and hallucinations. As with any other LLM, we advise users to exercise good judgment when applying this technology.

## Usage

The code below shows how to use chopt-2_7b in the way in which it was trained. While the model can be used "out of the box" with the `transformers` library, using the function defined below to create responses from the model will achieve better results.

### Load the Model and Tokenizer from this Repository Using the `transformers` Package

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import re

model_id = 'aisquared/chopt-2_7b'

tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side='left')
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map='auto')
```

### Create the Prompt Format and Other Variables

```python
PROMPT = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
"""

END_KEY = '### End'
RESPONSE_KEY = '### Response:\n'
```
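For illustration, printing the formatted prompt shows the exact string that is passed to the model (the instruction here is an arbitrary example):

```python
# Illustrative only: inspect the string the model will actually see
print(PROMPT.format(instruction="What is a large language model?"))
```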

### Create a Function to Retrieve a Response

```python
def create_response(
        instruction,
        model,
        tokenizer,
        do_sample=True,
        max_new_tokens=256,
        top_p=0.92,
        top_k=0,
        **kwargs
):
    """
    Create a response from the model by using a formatted prompt
    """
    input_ids = tokenizer(
        PROMPT.format(instruction=instruction), return_tensors="pt"
    ).input_ids

    gen_tokens = model.generate(
        input_ids,
        pad_token_id=tokenizer.pad_token_id,
        do_sample=do_sample,
        max_new_tokens=max_new_tokens,
        top_p=top_p,
        top_k=top_k,
        **kwargs,
    )
    decoded = tokenizer.batch_decode(gen_tokens)[0]

    # The response appears after "### Response:". The model has been trained
    # to append "### End" at the end of its response.
    m = re.search(r"#+\s*Response:\s*(.+?)#+\s*End", decoded, flags=re.DOTALL)

    response = None
    if m:
        response = m.group(1).strip()
    else:
        # The model might not generate the "### End" sequence before reaching
        # max_new_tokens. In that case, return everything after "### Response:".
        m = re.search(r"#+\s*Response:\s*(.+)", decoded, flags=re.DOTALL)
        if m:
            response = m.group(1).strip()

    return response
```
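With the model, tokenizer, and function above in place, generating a response takes a single call (the instruction here is an arbitrary example):

```python
# Example usage; substitute any instruction string
response = create_response(
    "Write a tweet announcing a new open-source chat model.",
    model=model,
    tokenizer=tokenizer,
)
print(response)
```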

## Model Performance Metrics

We present the results from various model benchmarks on the EleutherAI LLM Evaluation Harness for all models in the ChOPT family. Model results are sorted by mean score, ascending, to provide an ordering. These metrics serve to further show that none of the ChOPT models are state of the art; rather, they demonstrate that chat-like behaviors in LLMs can be trained almost independently of model size.

| Model | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa | boolq |
|:--------------------|---------:|---------:|---------:|---------:|---------:|---------:|---------:|
| chopt-125m          | 0.178 | 0.443182 | 0.501973 | 0.294165 | 0.197099 | 0.630577 | 0.476758 |
| chopt-research-125m | 0.17  | 0.436027 | 0.503552 | 0.294762 | 0.205631 | 0.62568  | 0.48685  |
| opt-125m            | 0.166 | 0.435606 | 0.501973 | 0.291775 | 0.190273 | 0.6284   | 0.554434 |
| chopt-350m          | 0.178 | 0.450758 | 0.508287 | 0.325334 | 0.21843  | 0.650707 | 0.559633 |
| opt-350m            | 0.176 | 0.441077 | 0.52644  | 0.320056 | 0.207338 | 0.645267 | 0.57737  |
| chopt-research-350m | 0.172 | 0.462542 | 0.514601 | 0.327524 | 0.235495 | 0.643634 | 0.589908 |
| opt-1.3b            | 0.234 | 0.569865 | 0.596685 | 0.414957 | 0.232935 | 0.718172 | 0.577676 |
| chopt-research-1_3b | 0.232 | 0.564815 | 0.59116  | 0.424716 | 0.276451 | 0.713275 | 0.634557 |
| chopt-1_3b          | 0.236 | 0.569444 | 0.584057 | 0.42621  | 0.268771 | 0.723069 | 0.658104 |
| opt-2.7b            | 0.25  | 0.608165 | 0.608524 | 0.458176 | 0.267918 | 0.738303 | 0.603058 |
| chopt-2_7b          | 0.276 | 0.616582 | 0.601421 | 0.472615 | 0.288396 | 0.75136  | 0.552294 |
| chopt-research-2_7b | 0.262 | 0.610269 | 0.625099 | 0.458176 | 0.295222 | 0.742111 | 0.636697 |
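As a rough sketch, numbers like these can be reproduced with EleutherAI's lm-evaluation-harness. The snippet below assumes a recent release of the harness that exposes the `simple_evaluate` Python entry point (older releases used a command-line interface instead), and is not necessarily the exact invocation used to produce the table above:

```python
import lm_eval

# Sketch only: assumes a recent lm-evaluation-harness with simple_evaluate.
# Evaluates chopt-2_7b on the same benchmark tasks shown in the table above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=aisquared/chopt-2_7b",
    tasks=["openbookqa", "arc_easy", "winogrande", "hellaswag",
           "arc_challenge", "piqa", "boolq"],
    batch_size=8,
)
print(results["results"])
```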