---
task_categories:
- question-answering
- classification
- text-generation
language:
- en
tags:
- biology
- proteins
- amino-acids
size_categories:
- 100K<1M
extra_gated_prompt: "Access to this dataset requires a purchase [here](https://buy.stripe.com/6oEbJu5tPci79IQcMX)"
extra_gated_fields:
  Name: text
  Affiliation: text
  Email: text
  I have purchased a license: checkbox
---
# Function Calling Extended Dataset
- Allows models to be fine-tuned for function calling.
- The dataset is human-generated and does not make use of Llama 2 or OpenAI!

Access this dataset by purchasing a license [here](https://buy.stripe.com/4gwaFq8G12HxbQYeV3).

## Change Log

15 Aug 2023: Added datasets to fine-tune models for awareness of available functions.

## Fine-Tuning Notes and Scripts

The objective of function calling is for the model to return a structured JSON object *and nothing else*. The performance of fine-tuning depends **strongly** on how the attention mask and loss mask are set. For further details, see the [YouTube video here](https://youtu.be/OQdp-OeG1as).

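As a rough illustration of why the loss mask matters, here is a minimal sketch of the common approach of masking prompt tokens (not necessarily the exact method used in the notebooks below; the base tokenizer is also an assumption). Loss is computed only on the response tokens by setting the prompt positions' labels to -100, which PyTorch's cross-entropy loss ignores:

```python
# Minimal loss-mask sketch. Assumptions: a Llama 2 tokenizer and standard
# causal-LM training, where label ids of -100 are excluded from the loss.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumed base model

def build_masked_example(prompt: str, response: str) -> dict:
    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    # Append EOS so the model learns to stop after the structured response
    response_ids = tokenizer(response, add_special_tokens=False)["input_ids"] + [tokenizer.eos_token_id]
    input_ids = prompt_ids + response_ids
    # -100 on the prompt positions: loss is taken only on the response
    labels = [-100] * len(prompt_ids) + response_ids
    return {"input_ids": input_ids, "labels": labels, "attention_mask": [1] * len(input_ids)}
```
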
### QLoRA Training Notebook for Llama 2 (FREE)
- Access a basic Google Colab script for fine-tuning [here](https://colab.research.google.com/drive/1uMSS1o_8YOPyG1X_4k6ENEE3kJfBGGhH?usp=sharing).

### QLoRA ADVANCED Training Notebook (PAID)
This advanced script provides improved performance when training with small datasets:
- Includes a prompt loss mask for improved performance when structured responses are required.
- Includes a stop token after responses, allowing the model to provide a short response (e.g. a function call) and then stop.
- Request [access here](https://buy.stripe.com/5kA5l69K52Hxf3a006). €14.99 (or $16.49) per seat/user. Access will be provided within 24 hours of purchase.

## Licensing

The Function Calling Extended dataset is commercially licensed. Users can purchase a license for €14.99 ($16.99) per seat/user from [here](https://buy.stripe.com/00g4h2cWh5TJ9IQ28c). Users will receive access within 24 hours of their purchase.

Further terms:
- Licenses are not transferable to other users/entities.
- Licenses are limited to the training or fine-tuning of models with up to 20 billion parameters (whether all parameters are being trained or not).
- Commercial licenses for larger models are available on request - email ronan [at] trelis [dot] com.

### Attribution of data sources

This project includes data from the TruthfulQA dataset, which is available at: https://huggingface.co/datasets/truthful_qa. The truthful_qa dataset is licensed under the Apache License 2.0, Copyright (C) 2023, Stephanie Lin, Jacob Hilton, and Owain Evans.

## Dataset Structure

The datasets (train and test) contain three prompt types:
1. The first portion provides function metadata in the systemPrompt, but the userPrompt and assistantResponse values do not require function calling. This gets the language model accustomed to having function metadata available without using it. Questions and answers for these prompts are generated by running addBlank.py; the questions and answers come from [truthful_qa](https://huggingface.co/datasets/truthful_qa) - see above for license details.
2. The second portion of the train and test datasets provides examples where a function call is necessary.
3. The third portion (new as of August 13th 2023) acclimatises the model to recognising what functions it has available from the system prompt, and to sharing that with the user when appropriate.

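To inspect the splits before fine-tuning, the dataset can be loaded with the `datasets` library. The repo id below is an assumption, not confirmed by this README; point it at your licensed copy:

```python
# Hedged loading example: substitute the repo id (or local files)
# of your licensed copy of the dataset.
from datasets import load_dataset

data = load_dataset("Trelis/function_calling_extended")  # assumed repo id
print(data["train"][0]["systemPrompt"])
```
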
## Training and Inference Syntax

For the best results, use this prompt syntax for inference:
```
# Define the roles and markers
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

index = 0  # any row of the test split; `data` is the loaded dataset
system_prompt = data['test'][index]['systemPrompt']
user_prompt = data['test'][index]['userPrompt']
correct_answer = data['test'][index]['assistantResponse']

# Format your prompt template
prompt = f"{B_INST} {B_SYS}{system_prompt.strip()}{E_SYS}{user_prompt.strip()} {E_INST}\n\n"
```

The `\n\n` after E_INST is important, as it prevents E_INST from sometimes being tokenized with the ']' attached to the next characters. Using `\n\n` also gives the model the best chance of correctly deciding whether to call a function or provide a usual response.

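As a usage sketch, generation with the formatted `prompt` from above might look like this (the checkpoint name is an assumption; use whichever model you fine-tuned):

```python
# Hedged inference sketch: the checkpoint name below is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # assumed; swap in your fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
# Decode only the newly generated tokens: a function-call JSON or a normal reply
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
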
## File Structure

- `functions/`: This directory contains function files, each of which is a JSON file describing a function and its sample prompts and responses.
- `generate_dataset.py`: This Python script generates the base training and testing dataset CSV files.
- `addBlank.py`: Adds truthful_qa questions and answers after system prompts that include functions.
- `hello.py`: Adds prompts to accustomise the model to the presence of functions in the system prompt.

## JSON File Structure

Each function file should be a JSON file with the following structure:

```json
{
    "functionMetaData": {
        "function": "function_name",
        "description": "function_description",
        "arguments": [
            {
                "name": "argument_name",
                "type": "argument_type",
                "description": "argument_description"
            },
            ...
        ]
    },
    "samplePromptResponsePairs": [
        {
            "prompt": "sample_prompt",
            "response": {
                "arguments": {
                    "argument_name": "argument_value",
                    ...
                }
            }
        },
        ...
    ]
}
```

The `functionMetaData` object describes the function. The `samplePromptResponsePairs` array contains sample prompts and responses for the function.

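For illustration, here is a hypothetical function file following that structure (the function, its argument, and the sample pair are invented for this example, not taken from the dataset):

```json
{
    "functionMetaData": {
        "function": "get_current_weather",
        "description": "Gets the current weather for a given city",
        "arguments": [
            {
                "name": "city",
                "type": "string",
                "description": "The name of the city"
            }
        ]
    },
    "samplePromptResponsePairs": [
        {
            "prompt": "What is the weather like in Dublin right now?",
            "response": {
                "arguments": {
                    "city": "Dublin"
                }
            }
        }
    ]
}
```
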
## Dataset Generation

To generate the dataset, run the `generate_dataset.py` script. This script will iterate over each function file and generate a CSV row for each sample prompt-response pair.

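The invocation is presumably just the script name (no arguments are documented in this README):

```
python generate_dataset.py
```
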
## CSV File Structure

The generated CSV file has the following columns:

- `systemPrompt`: The system prompt, which includes the descriptions of two functions (the current function and a randomly selected other function) and instructions on how to call a function.
- `userPrompt`: The user's prompt.
- `assistantResponse`: The assistant's response.

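A quick way to sanity-check the generated file (the filename is an assumption; use whatever `generate_dataset.py` actually writes):

```python
# Filename assumed; adjust to the actual output of generate_dataset.py.
import pandas as pd

df = pd.read_csv("train.csv")
print(df.columns.tolist())  # expect ['systemPrompt', 'userPrompt', 'assistantResponse']
print(df.iloc[0]["systemPrompt"])
```
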
## Testing JSON Structure

A script named `validate.py` can be used to validate the structure of a function JSON file. It checks for the presence and correct types of all necessary keys in the JSON structure.

To use the script, call it from the command line with the name of the function file as an argument:

```
python validate.py my_function.json
```
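
The script itself is not shown in this README; a minimal sketch of this kind of structural check (key names taken from the JSON structure above, logic assumed) could look like:

```python
# Assumed validation logic; not the actual validate.py from the repo.
import json
import sys

def validate(path: str) -> None:
    with open(path) as f:
        doc = json.load(f)

    meta = doc["functionMetaData"]
    assert isinstance(meta["function"], str), "'function' must be a string"
    assert isinstance(meta["description"], str), "'description' must be a string"
    for arg in meta["arguments"]:
        for key in ("name", "type", "description"):
            assert isinstance(arg[key], str), f"argument '{key}' must be a string"

    for pair in doc["samplePromptResponsePairs"]:
        assert isinstance(pair["prompt"], str), "'prompt' must be a string"
        assert isinstance(pair["response"]["arguments"], dict), "'arguments' must be an object"

    print(f"{path} passed validation")

if __name__ == "__main__":
    validate(sys.argv[1])
```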