metadata
task_categories:
  - question-answering
  - text-classification
  - text-generation
language:
  - en
tags:
  - function-calling
size_categories:
  - 100K<n<1M
extra_gated_prompt: >-
  Access to this dataset requires a purchase
  [here](https://buy.stripe.com/6oEbJu5tPci79IQcMX)
extra_gated_fields:
  Name: text
  Affiliation: text
  Email: text
  I have purchased a license: checkbox

Function Calling Extended Dataset

  • Allows models to be fine-tuned for function-calling.
  • The dataset is human-generated and does not make use of Llama 2 or OpenAI!

Access this dataset by purchasing a license here.

Change Log

15 Aug 2023: Added datasets to fine-tune models for awareness of available functions.

Fine-Tuning Notes and Scripts

The objective of function calling is for the model to return a structured JSON object and nothing else. Fine-tuning performance depends strongly on how the attention mask and loss mask are set. For further details, see the YouTube video here.
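For illustration, a common way to implement a prompt loss mask with Hugging Face transformers is to label prompt tokens with -100 so they are excluded from the loss. This is a minimal sketch of the general technique, not the exact setup from the video or the notebooks below; the prompt and response strings are hypothetical:

    # Sketch of a prompt loss-mask: tokens labelled -100 are ignored by the
    # cross-entropy loss, so the model is only trained on the response.
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

    prompt = "[INST] What's the weather in Dublin? [/INST]\n\n"  # hypothetical
    response = '{"arguments": {"city": "Dublin"}}'               # hypothetical

    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    response_ids = tokenizer(response, add_special_tokens=False)["input_ids"]

    input_ids = prompt_ids + response_ids
    attention_mask = [1] * len(input_ids)             # attend to every token
    labels = [-100] * len(prompt_ids) + response_ids  # loss only on the response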

QLoRA Training Notebook for Llama 2 (FREE)

  • Access a basic Google Colab script for fine-tuning here.

QLoRA ADVANCED Training Notebook (PAID)

This advanced script provides improved performance when training with small datasets:

  • Includes a prompt loss-mask for improved performance when structured responses are required (a sketch of the general idea follows this list).
  • Includes a stop token after responses, allowing the model to provide a short response (e.g. a function call) and then stop.
  • Request access here. €14.99 (or $16.49) per seat/user. Access will be provided within 24 hours of purchase.
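The stop-token idea can be illustrated by extending the loss-mask sketch above: append the tokenizer's EOS token to each response before tokenizing, so the model learns to emit it (and therefore stop) once the function call is complete. Again, this is an illustration, not the paid notebook's implementation:

    # Append a stop token so the model learns to halt after the response
    response_ids = tokenizer(
        response + tokenizer.eos_token, add_special_tokens=False
    )["input_ids"]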

Licensing

The Function Calling Extended dataset is commercially licensed. Users can purchase a license for €14.99 ($16.99) per seat/user from here. Users will receive access within 24 hours of their purchase.

Further terms:

  • Licenses are not transferable to other users/entities.
  • Licenses are limited to the training or fine-tuning of models with up to 20 billion parameters (whether all parameters are being trained or not).
  • Commercial licenses for larger models are available on request - email ronan [at] trelis [dot] com

Attribution of data sources

This project includes data from the TruthfulQA dataset, which is available at: https://huggingface.co/datasets/truthful_qa. The truthful_qa dataset is licensed under the Apache License 2.0, Copyright (C) 2023, Stephanie Lin, Jacob Hilton, and Owain Evans.

Dataset Structure

The datasets (train and test) contain three prompt types:

  1. The first portion provides function metadata in the systemPrompt but has userPrompt and assistantResponse values that do not require function calling. This gets the language model accustomed to having function metadata available without using it. The questions and answers for these prompts are generated by running addBlank.py and come from truthful_qa (see the attribution section above for license details).
  2. The second portion provides examples where a function call is necessary.
  3. The third portion (new as of August 13th 2023) acclimatises the model to recognising which functions it has available from the system prompt, and to sharing that with the user when appropriate.

Training and Inference Syntax

For the best results, use this prompt syntax for inference:

    # Define the roles and markers (Llama 2 chat format)
    B_INST, E_INST = "[INST]", "[/INST]"
    B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

    # `data` is the loaded dataset (e.g. from datasets.load_dataset) and
    # `index` selects an example from the test split
    system_prompt = data['test'][index]['systemPrompt']
    user_prompt = data['test'][index]['userPrompt']
    correct_answer = data['test'][index]['assistantResponse']

    # Format the prompt template
    prompt = f"{B_INST} {B_SYS}{system_prompt.strip()}{E_SYS}{user_prompt.strip()} {E_INST}\n\n"

The \n\n after E_INST is important: it prevents the ']' of E_INST from sometimes being tokenized together with the characters that follow it. Ending the prompt with \n\n also gives the model the best chance of correctly deciding whether to call a function or give a normal response.
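To see this for yourself, you can inspect how the boundary tokenizes (using the tokenizer loaded in the earlier sketch):

    # Compare tokenization with and without the trailing "\n\n"
    print(tokenizer.tokenize("[/INST]\n\n"))
    print(tokenizer.tokenize("[/INST]Sure"))  # ']' may merge with the next characters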

File Structure

  • functions/: This directory contains function files, each of which is a JSON file with a specific structure that describes a function and its sample prompts and responses.
  • generate_dataset.py: This Python script generates the base training and testing dataset CSV files.
  • addBlank.py: This adds truthful_qa questions and answers after system prompts that contain function metadata.
  • hello.py: This adds prompts that accustom the model to the presence of functions in the system prompt.

JSON File Structure

Each function file should be a JSON file with the following structure:

    {
        "functionMetaData": {
            "function": "function_name",
            "description": "function_description",
            "arguments": [
                {
                    "name": "argument_name",
                    "type": "argument_type",
                    "description": "argument_description"
                },
                ...
            ]
        },
        "samplePromptResponsePairs": [
            {
                "prompt": "sample_prompt",
                "response": {
                    "arguments": {
                        "argument_name": "argument_value",
                        ...
                    }
                }
            },
            ...
        ]
    }

The functionMetaData object describes the function. The samplePromptResponsePairs array contains sample prompts and responses for the function.
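For example, a hypothetical weather function file following this structure might look like the below (illustrative only; not one of the files shipped in functions/):

    {
        "functionMetaData": {
            "function": "get_current_weather",
            "description": "Gets the current weather for a given city",
            "arguments": [
                {
                    "name": "city",
                    "type": "string",
                    "description": "The city to fetch the weather for"
                }
            ]
        },
        "samplePromptResponsePairs": [
            {
                "prompt": "What's the weather like in Dublin right now?",
                "response": {
                    "arguments": {
                        "city": "Dublin"
                    }
                }
            }
        ]
    }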

Dataset Generation

To generate the dataset, run the generate_dataset.py script. This script will iterate over each function file and generate a CSV row for each sample prompt-response pair.
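Assuming the script takes no arguments (check the script itself for any options), that is:

    python generate_dataset.py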

CSV File Structure

The generated CSV file has the following columns:

  • systemPrompt: The system's prompt, which includes the descriptions of two functions (the current function and a randomly selected other function) and instructions on how to call a function.
  • userPrompt: The user's prompt.
  • assistantResponse: The assistant's response.
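Once generated, a quick way to sanity-check the output is to load it with pandas (the CSV file name is an assumption; use whatever generate_dataset.py writes):

    import pandas as pd

    df = pd.read_csv("train.csv")  # file name assumed
    print(df.columns.tolist())     # ['systemPrompt', 'userPrompt', 'assistantResponse']
    print(df.iloc[0]["userPrompt"])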

Testing JSON Structure

A script named validate.py can be used to validate the structure of a function JSON file. It checks for the presence and correct types of all necessary keys in the JSON structure.

To use the script, call it from the command line with the name of the function file as an argument:

    python validate.py my_function.json
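validate.py itself is not reproduced here, but a minimal sketch of the kind of check it performs (key names taken from the structure above; the actual script may check more) could look like:

    import json
    import sys

    # Sketch of validating a function file against the structure described above
    def validate(path):
        with open(path) as f:
            data = json.load(f)
        meta = data["functionMetaData"]
        assert isinstance(meta["function"], str), "function must be a string"
        assert isinstance(meta["description"], str), "description must be a string"
        assert isinstance(meta["arguments"], list), "arguments must be a list"
        for arg in meta["arguments"]:
            for key in ("name", "type", "description"):
                assert isinstance(arg[key], str), f"argument {key} must be a string"
        assert isinstance(data["samplePromptResponsePairs"], list)
        print(f"{path}: structure OK")

    if __name__ == "__main__":
        validate(sys.argv[1])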