Trelis Function Calling Dataset

UPDATE: As of Dec 5th 2023, v3 of this dataset is available here.

  • Allows models to be fine-tuned for function-calling.
  • The dataset is human-generated and does not make use of Llama 2 or OpenAI!
  • Contains 59 training and 17 test rows
  • Based on eight functions: search_bing, search_arxiv, save_chat, read_json_file, list_files, get_current_weather, delete_file, clear_chat

Access this dataset by purchasing a license HERE.

Alternatively, you can find pre-trained function calling models for Llama 2 and Mistral HERE.

Change Log

11Oct2023: Minor update adding in short prompts like "duck" to which the LLM should respond with a description of a duck or ducks, not a function call.

22Aug2023: Major updates to the main branch:

  • The 'systemPrompt' column is now replaced by 'functionList', which contains a raw list of function metadata without any guidance.
  • The previous dataset, with a 'systemPrompt' column containing specific instructions, has been moved to the 'explicit' branch.
  • The 'implicit' branch is a copy of the 'explicit' branch, but with slightly less instruction provided to the LLM in the systemPrompt column.

The reasons for these updates are:

  • For one-shot model prompting, it is helpful to provide as much description as possible to the LLM.
  • For fine-tuning, it is desirable to minimise the length of any added context used to describe functions, especially when it is not necessary.

Users can play around with the different levels of instruction provided. In summary:

  • 'main' - provides the lowest level of instruction on how to use the functions
  • 'implicit' - moderate instructions
  • 'explicit' - detailed instructions

18Aug2023: Added a new 'implicit' branch with a shorter system prompt. It performs similarly to the main branch but uses fewer tokens for prompting.

15Aug2023: Added datasets to fine-tune models for awareness of available functions.

Fine-Tuning Notes and Scripts

The objective of function calling is for the model to return a structured JSON object and nothing else. Fine-tuning performance depends strongly on how the attention mask and loss mask are set. For further details, see the YouTube video here.
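
To illustrate the loss-mask idea - a minimal sketch of the general technique, not the paid notebook's implementation - prompt tokens can be given a label of -100 so that the cross-entropy loss is computed only on the response tokens:

    import torch
    from transformers import AutoTokenizer

    # Assumes the (gated) Llama 2 tokenizer; any causal-LM tokenizer works the same way
    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

    prompt = "[INST] list my files please [/INST]\n\n"
    response = '{ "function": "list_files", "arguments": {} }'

    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    response_ids = tokenizer(response, add_special_tokens=False)["input_ids"]

    input_ids = torch.tensor(prompt_ids + response_ids)
    attention_mask = torch.ones_like(input_ids)

    # Mask the prompt: positions labelled -100 are ignored by the loss
    labels = input_ids.clone()
    labels[: len(prompt_ids)] = -100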

QLoRa Training Notebook for Llama 2 (FREE)

  • Access a basic Google Colab script for fine-tuning here.

ADVANCED Fine-tuning Notebook for Structured Responses (incl. function calling) (PAID)

  • Fine-tune models for function calling or other structured responses.
  • Includes a prompt loss-mask for improved performance when structured responses are required.
  • Includes a stop token after responses - allowing the model to provide a short response (e.g. a function call) and then stop.
  • Request access here.

Licensing

The Function Calling Extended dataset is commercially licensed. Users can purchase a license per seat/user from here.

Further terms:

  • Licenses are not transferable to other users/entities.

Attribution of data sources

This project includes data from the TruthfulQA dataset, which is available at: https://huggingface.co/datasets/truthful_qa. The truthful_qa dataset is licensed under the Apache License 2.0, Copyright (C) 2023, Stephanie Lin, Jacob Hilton, and Owain Evans.

Dataset Structure

The datasets (train and test) contain three prompt types:

  1. The first portion provides function metadata in the systemPrompt but then has userPrompt and assistantResponse values that do not require function calling. This gets the language model accustomed to having function metadata available without using it. Questions and answers for these prompts are generated by running addBlank.py; they come from truthful_qa - see below for license details.
  2. The second portion of the train and test datasets provides examples where a function call is necessary.
  3. The third portion (new as of August 13th 2023) acclimatises the model to recognising what functions it has available from the system prompt, and sharing that with the user when appropriate. Further extended on October 11th to add one- and two-word prompts that do not require function calls as responses.

Branches

Specify the branch using:

    from datasets import load_dataset

    data = load_dataset(
        "Trelis/function_calling_extended",
        revision="implicit",  # optionally specify a branch
    )
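
For example, to peek at a test row (the 'implicit' and 'explicit' branches use a 'systemPrompt' column, while 'main' uses 'functionList' - see the CSV structure below):

    print(data['test'][0]['systemPrompt'])
    print(data['test'][0]['userPrompt'])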

The 'main' branch uses a short system/function prompt, with no instructions on usage (see the other branches for prompts with stronger instruction):

{ "function": "search_bing", "description": "Search the web for content on Bing. This allows users to search online/the internet/the web for content.", "arguments": [ { "name": "query", "type": "string", "description": "The search query string" } ] } { "function": "list_files", "description": "This function provides a list of files in the user's directory. It can be useful when the user wants to check what files they have. This function requires no parameters and returns no values.", "arguments": [] }

The 'explicit' branch provides detailed instructions to the language model on how to call functions:

    You are a helpful research assistant. The following functions are available for you to fetch further data to answer user questions, if relevant:

    { "function": "search_bing", "description": "Search the web for content on Bing. This allows users to search online/the internet/the web for content.", "arguments": [ { "name": "query", "type": "string", "description": "The search query string" } ] }

    { "function": "list_files", "description": "This function provides a list of files in the user's directory. It can be useful when the user wants to check what files they have. This function requires no parameters and returns no values.", "arguments": [] }

    To call a function, respond - immediately and only - with a JSON object of the following format:

    { "function": "function_name", "arguments": { "argument1": value1, "argument2": value2 } }

The 'implicit' branch uses a shorter, less explicit system prompt that performs similarly and is therefore recommended, as it reduces prompt length:

    You are a helpful research assistant. The following functions are available for you to fetch further data to answer user questions, if relevant:

    { "function": "search_bing", "description": "Search the web for content on Bing. This allows users to search online/the internet/the web for content.", "arguments": [ { "name": "query", "type": "string", "description": "The search query string" } ] }

    { "function": "list_files", "description": "This function provides a list of files in the user's directory. It can be useful when the user wants to check what files they have. This function requires no parameters and returns no values.", "arguments": [] }

Said differently, the 'implicit' branch omits the following portion of the prompt:

    To call a function, respond - immediately and only - with a JSON object of the following format:

    { "function": "function_name", "arguments": { "argument1": value1, "argument2": value2 } }

Training and Inference Syntax

Here is sample prompt syntax for Llama. This will depend on the language model you use and also on how you wish to fine-tune the model:

    # Define the roles and markers
    B_INST, E_INST = "[INST]", "[/INST]"
    B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

    index = 0  # pick a test example

    system_prompt = data['test'][index]['systemPrompt']
    user_prompt = data['test'][index]['userPrompt']
    correct_answer = data['test'][index]['assistantResponse']

    # Format your prompt template
    prompt = f"{B_INST} {B_SYS}{system_prompt.strip()}{E_SYS}{user_prompt.strip()} {E_INST}\n\n"

The \n\n after E_INST is important: it prevents the ']' in E_INST from sometimes being tokenized together with the characters that follow it. Using \n\n also gives the model the best chance of correctly deciding whether to call a function or provide a normal response.
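
You can check this directly by inspecting the tokenizer output - a quick sketch, assuming a Llama 2 tokenizer (exact token splits vary by tokenizer):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

    # Compare how the ']' in E_INST is split with and without the trailing "\n\n"
    print(tokenizer.tokenize("[/INST]Sure"))  # ']' can get merged with the following text
    print(tokenizer.tokenize("[/INST]\n\n"))  # ']' stays cleanly separated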

Alternatively, you may prefer to stay away from the system prompt and create a separate wrapper for function descriptions (shown here for the data on 'main'):

    # Define the roles and markers
    B_INST, E_INST = "[INST]", "[/INST]"
    B_FUNC, E_FUNC = "<FUNCTIONS>", "</FUNCTIONS>\n\n"

    index = 0  # pick a test example

    functionList = data['test'][index]['functionList']
    user_prompt = data['test'][index]['userPrompt']
    correct_answer = data['test'][index]['assistantResponse']

    # Format your prompt template
    prompt = f"{B_FUNC}{functionList.strip()}{E_FUNC}{B_INST} {user_prompt.strip()} {E_INST}\n\n"

File Structure (for prompt dataset generation)

  • functions/: This directory contains function files, each of which is a JSON file with a specific structure that describes a function and its sample prompts and responses.
  • generate_dataset.py: This Python script generates the base training and testing dataset CSV files.
  • addBlank.py: Adds truthful_qa questions and answers after system prompts containing functions.
  • hello.py: Adds prompts to accustom the model to the presence of functions in the system prompt.

JSON File Structure

Each function file should be a JSON file with the following structure:

{
    "functionMetaData": {
        "function": "function_name",
        "description": "function_description",
        "arguments": [
            {
                "name": "argument_name",
                "type": "argument_type",
                "description": "argument_description"
            },
            ...
        ]
    },
    "samplePromptResponsePairs": [
        {
            "prompt": "sample_prompt",
            "response": {
                "arguments": {
                    "argument_name": "argument_value",
                    ...
                }
            }
        },
        ...
    ]
}

The functionMetaData object describes the function. The samplePromptResponsePairs array contains sample prompts and responses for the function.
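
For concreteness, here is a hypothetical example for the get_current_weather function (the values are illustrative, not the actual file shipped in functions/):

    {
        "functionMetaData": {
            "function": "get_current_weather",
            "description": "Get the current weather for a given city",
            "arguments": [
                {
                    "name": "city",
                    "type": "string",
                    "description": "The city to fetch the weather for"
                }
            ]
        },
        "samplePromptResponsePairs": [
            {
                "prompt": "What's the weather like in Dublin?",
                "response": {
                    "arguments": {
                        "city": "Dublin"
                    }
                }
            }
        ]
    }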

Dataset Generation

To generate the dataset, run the generate_dataset.py script. This script will iterate over each function file and generate a CSV row for each sample prompt-response pair.
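
The script itself ships with the dataset; as a rough sketch of the flow just described (assuming the JSON structure above - the real script also pairs each function with a randomly selected second function and splits train/test):

    import csv
    import json
    from pathlib import Path

    rows = []
    for path in Path("functions").glob("*.json"):
        spec = json.loads(path.read_text())
        meta = spec["functionMetaData"]
        for pair in spec["samplePromptResponsePairs"]:
            response = {"function": meta["function"], **pair["response"]}
            rows.append({
                "functionList": json.dumps(meta),
                "userPrompt": pair["prompt"],
                "assistantResponse": json.dumps(response),
            })

    with open("train.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["functionList", "userPrompt", "assistantResponse"])
        writer.writeheader()
        writer.writerows(rows)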

CSV File Structure

The generated CSV file has the following columns:

'main' branch:

  • functionList: Descriptions of two functions (the current function and a randomly selected other function).
  • userPrompt: The user's prompt.
  • assistantResponse: The assistant's response.

'explicit' and 'implicit' branches:

  • systemPrompt: The system's prompt, which includes the descriptions of two functions (the current function and a randomly selected other function) and, on the 'explicit' branch only, instructions on how to call a function.
  • userPrompt: The user's prompt.
  • assistantResponse: The assistant's response.

Testing JSON Structure

A script named validate.py can be used to validate the structure of a function JSON file. It checks for the presence and correct types of all necessary keys in the JSON structure.

To use the script, call it from the command line with the name of the function file as an argument:

    python validate.py my_function.json
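
The script ships with the repo; a minimal sketch of the kind of checks it performs (key names follow the JSON structure above - the actual script may check more):

    import json
    import sys

    def validate(path):
        with open(path) as f:
            spec = json.load(f)

        meta = spec["functionMetaData"]
        assert isinstance(meta["function"], str)
        assert isinstance(meta["description"], str)
        for arg in meta["arguments"]:
            assert isinstance(arg["name"], str)
            assert isinstance(arg["type"], str)
            assert isinstance(arg["description"], str)

        for pair in spec["samplePromptResponsePairs"]:
            assert isinstance(pair["prompt"], str)
            assert isinstance(pair["response"]["arguments"], dict)

    if __name__ == "__main__":
        validate(sys.argv[1])
        print(f"{sys.argv[1]}: structure OK")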