alinaryan committed on
Commit ad205ef
1 Parent(s): 233d127

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +73 -77
README.md CHANGED
@@ -1,101 +1,97 @@
 
- ---
- pipeline_tag: text-generation
- tags:
- - merlinite
- - mistral
- - ibm
- - lab
- - labrador
- - labradorite
- license: apache-2.0
- language:
- - en
- base_model: mistralai/Mistral-7B-v0.1
- ---
 
 
- # Model Card for Merlinite 7b 🔥 [Paper](https://arxiv.org/abs/2403.01081)
-
- ### Overview
-
- ![Screenshot 2024-02-22 at 11.26.13 AM.png](Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a/Screenshot_2024-02-22_at_11.26.13_AM.png)
-
- ### Performance
-
- | Model | Alignment | Base | Teacher | MTBench (Avg)* | MMLU (5-shot) | ARC-C (25-shot) | HellaSwag (10-shot) | Winogrande (5-shot) | GSM8K (5-shot, strict) |
- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
- | [Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) | RLHF | Llama-2-13b | Human Annotators | 6.65 | 54.58 | 59.81 | 82.52 | 75.93 | 34.80 |
- | [Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) | Progressive Training | Llama-2-13b | GPT-4 | 6.15 | 60.37* | 59.73 | 79.86 | 78.22 | 48.22 |
- | [WizardLM-13B-V1.2](https://huggingface.co/WizardLM/WizardLM-13B-V1.2) | Evol-Instruct | Llama-2-13b | GPT-4 | 7.20 | 54.83 | 60.24 | 82.62 | 76.40 | 43.75 |
- | [Labradorite-13b](https://huggingface.co/ibm/labradorite-13b) | Large-scale Alignment for chatBots (LAB) | Llama-2-13b | Mixtral-8x7B-Instruct | 7.23 | 58.89 | 61.69 | 83.15 | 79.56 | 40.11 |
- | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | SFT | Mistral-7B-v0.1 | - | 6.84 | 60.37 | 63.65 | 84.76 | 76.80 | 41.85 |
- | [zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | SFT/DPO | Mistral-7B-v0.1 | GPT-4 | 7.34 | 61.07 | 63.74 | 84.19 | 78.06 | 34.04 |
- | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | SFT | Mistral-7B-v0.1 | - | 7.6** | 60.78 | 63.14 | 84.88 | 77.19 | 40.03 |
- | Merlinite-7b | Large-scale Alignment for chatBots (LAB) | Mistral-7B-v0.1 | Mixtral-8x7B-Instruct | 7.66 | 64.88 | 63.99 | 84.37 | 78.24 | 44.58 |
-
- [*] Numbers for models other than Merlinite-7b and [Labradorite-13b](https://huggingface.co/ibm/labradorite-13b) (ours) are taken from [lmsys/chatbot-arena-leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
-
- [**] Numbers taken from the [MistralAI release blog](https://mistral.ai/news/la-plateforme/).
 
- ### Method
-
- LAB: **L**arge-scale **A**lignment for chat**B**ots is a novel synthetic-data-based alignment tuning method for LLMs from IBM Research. Merlinite-7b is a Mistral-7b-derivative model trained with the LAB methodology, using Mixtral-8x7b-Instruct as a teacher model.
-
- LAB consists of three key components:
-
- 1. Taxonomy-driven data curation process
- 2. Large-scale synthetic data generator
- 3. Two-phase training with replay buffers
-
- ![Untitled](Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a/Untitled.png)
-
- The LAB approach allows new knowledge and skills to be added to an already pre-trained model, in an incremental fashion, without suffering from catastrophic forgetting.
-
- The taxonomy is a tree of seed examples that are used to prompt a teacher model to generate synthetic data. It allows the data curator or model designer to easily specify a diverse set of knowledge domains and skills to include in their LLM. At a high level, these fall into three bins: knowledge, foundational skills, and compositional skills. The leaf nodes of the taxonomy are tasks associated with one or more seed examples.
-
- ![Untitled](Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a/Untitled%201.png)
-
- During synthetic data generation, **unlike previous approaches where seed examples are drawn uniformly from the entire pool (e.g., self-instruct), we use the taxonomy to drive the sampling process**: for each knowledge or skill, we use only the local examples within its leaf node as seeds to prompt the teacher model.
- This lets the teacher model better exploit the task distribution defined by the local examples of each node, while the diversity of the taxonomy itself ensures that the overall generation covers a wide range of tasks, as illustrated below. In turn, this allows Mixtral 8x7B to be used as the teacher model for generation while performing very competitively against models such as ORCA-2, WizardLM, and Zephyr Beta that rely on synthetic data generated by much larger and more capable models like GPT-4.
-
- ![intuition.png](Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a/intuition.png)
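-
- To make the contrast concrete, here is a minimal sketch of the two sampling strategies (the `taxonomy` dict, its paths, and the seed strings are illustrative placeholders, not the actual LAB implementation):
-
- ```python
- import random
-
- # Illustrative taxonomy: leaf-node path -> local seed examples.
- taxonomy = {
-     "compositional_skills/writing/poetry": ["poetry seed 1", "poetry seed 2", "poetry seed 3"],
-     "foundational_skills/reasoning/logic": ["logic seed 1", "logic seed 2", "logic seed 3"],
- }
-
- def sample_seeds_self_instruct(num_seeds: int) -> list[str]:
-     # Self-instruct style: draw uniformly from the entire seed pool.
-     pool = [seed for seeds in taxonomy.values() for seed in seeds]
-     return random.sample(pool, min(num_seeds, len(pool)))
-
- def sample_seeds_lab(leaf_node: str, num_seeds: int) -> list[str]:
-     # LAB style: use only the local examples within the target leaf node.
-     local = taxonomy[leaf_node]
-     return random.sample(local, min(num_seeds, len(local)))
- ```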
-
- For adding new domain-specific knowledge, we provide an external knowledge source (document) and prompt the model to generate questions and answers based on the document.
- Foundational skills such as reasoning and compositional skills such as creative writing are generated through in-context learning using the seed examples from the taxonomy.
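-
- As an illustration, a document-grounded Q&A prompt for the teacher model might look as follows (the prompt wording and the `build_qa_prompt` helper are hypothetical, not the pipeline's actual prompts):
-
- ```python
- def build_qa_prompt(document: str, num_questions: int = 3) -> str:
-     # Ask the teacher model for question-answer pairs grounded in the document.
-     return (
-         f"Read the following document and write {num_questions} "
-         "question-and-answer pairs grounded in its content.\n\n"
-         f"Document:\n{document}\n\nQ&A pairs:"
-     )
- ```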
-
- Additionally, to ensure the data is high-quality and safe, we check the generated questions and answers to verify that they are grounded and safe. This is done using the same teacher model that generated the data.
-
- Our training consists of two major phases: knowledge tuning and skills tuning.
- Knowledge tuning proceeds in two steps: the first step learns simple knowledge (short samples) and the second learns complicated knowledge (longer samples).
- The second step uses a replay buffer with data from the first step.
- Both foundational skills and compositional skills are learned during the skills tuning phase, where a replay buffer of data from the knowledge phase is used.
- Importantly, we use a set of training hyper-parameters that are very different from standard small-scale supervised fine-tuning: a larger batch size and a carefully optimized learning rate and scheduler.
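-
- A minimal sketch of the replay mixing between phases (the replay fraction and the `with_replay` helper are illustrative assumptions, not the actual training code):
-
- ```python
- import random
-
- def with_replay(current: list, previous: list, replay_frac: float = 0.1) -> list:
-     # Mix a random sample of the previous phase's data into the current phase.
-     replay = random.sample(previous, int(replay_frac * len(previous)))
-     mixed = current + replay
-     random.shuffle(mixed)
-     return mixed
-
- # Step 2 of knowledge tuning replays step-1 data; the skills phase
- # replays data from the knowledge phase:
- # step2_mix = with_replay(long_knowledge, short_knowledge)
- # skills_mix = with_replay(skills_data, knowledge_data)
- ```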
-
- ![Untitled](Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a/Untitled%202.png)
-
- ## Model description
-
- - **Language(s):** Primarily English
- - **License:** Apache 2.0
- - **Base model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- - **Teacher Model:** [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
-
- ## Prompt Template
-
- ```python
- sys_prompt = "You are an AI language model developed by IBM Research. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
-
- prompt = f'<|system|>\n{sys_prompt}\n<|user|>\n{inputs}\n<|assistant|>\n'
- stop_token = '<|endoftext|>'
  ```
-
- We recommend using the system prompt employed during training for optimal inference performance, as performance can vary depending on the provided instructions.
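-
- For example, a minimal inference sketch with `transformers` (the `ibm/merlinite-7b` model id is an assumption; substitute the actual repository id):
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_id = "ibm/merlinite-7b"  # assumed repository id
- tokenizer = AutoTokenizer.from_pretrained(model_id)
- model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
-
- sys_prompt = "You are an AI language model developed by IBM Research. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
- inputs = "What is IBM Research?"
- prompt = f'<|system|>\n{sys_prompt}\n<|user|>\n{inputs}\n<|assistant|>\n'
-
- encoded = tokenizer(prompt, return_tensors="pt").to(model.device)
- # Stop on the '<|endoftext|>' token from the template above.
- output = model.generate(**encoded, max_new_tokens=256, eos_token_id=tokenizer.convert_tokens_to_ids("<|endoftext|>"))
- print(tokenizer.decode(output[0][encoded["input_ids"].shape[1]:], skip_special_tokens=True))
- ```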
-
- ## Bias, Risks, and Limitations
-
- Merlinite-7b has not been aligned to human preferences, so the model might produce problematic outputs. It may also retain the limitations and constraints of the base model.
-
- The model is trained on synthetic data and may therefore inherit both the advantages and the limitations of the underlying teacher models and data generation methods. The safety measures incorporated during Merlinite-7b's training are beneficial, but a nuanced understanding of the associated risks requires detailed studies for more accurate quantification.
-
- In the absence of adequate safeguards and RLHF, there is a risk that these models could be used maliciously to generate disinformation or harmful content. We caution against complete reliance on a single language model for crucial decisions or impactful information, as preventing these models from fabricating content is not straightforward. Additionally, it remains uncertain whether smaller models are more susceptible to hallucination in ungrounded generation scenarios due to their reduced size and memorization capacity. This is an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigation in this domain.
+ # Labrador Synthetic Data Generation Pipeline
+
+ ## Introduction
+ This repository contains the Labrador synthetic data generation pipeline, which uses a teacher model to generate synthetic skill and knowledge training data from the leaf nodes of the instruct-lab taxonomy.
+
+ ## Run Instructions (Automation)
+
+ ### Step 1: Environment Setup
+
+ 1. Initialize a `.env` file with the following [access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#personal-access-tokens-classic):
+ ```
+ GIT_ACCESS_TOKEN={ACCESS-TOKEN-TO-ACCESS-TAXONOMY-REPO} # personal access token used to access the instruct-lab/taxonomy repo
+ ```
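+
+ A minimal sketch of reading this token from `.env` (assuming the `python-dotenv` package; the orchestrator may load it differently):
+
+ ```python
+ import os
+
+ from dotenv import load_dotenv  # pip install python-dotenv
+
+ load_dotenv()  # reads .env from the current directory
+ token = os.environ["GIT_ACCESS_TOKEN"]  # raises KeyError if the token is missing
+ ```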
+
+ ### Step 2: Execution
+
+ To run the pipeline:
+
+ 1. Execute the following command:
+
+ NOTE: Depending on whether you are running on the old or the new Vela cluster, change this line in `orchestrator.py` to use the corresponding template: `save_job_with_jinja_template(cfg, "templates/labrador_datagen_vela.yaml.j2", output_dir=f"jobs/{branch}")`
+
+ ```
+ python orchestrator.py branch-name
+ ```
+
+ This will:
+ - Create a file with the list of leaf nodes in the `jobs` directory.
+ - Generate a YAML file for each leaf node and store it in the `jobs` directory, named something like `test-7984f9cae729b798bed1ba222715b880.yaml` (a sketch of this step follows the list).
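+
+ A minimal sketch of the per-leaf-node rendering step (the `cfg` keys and file-naming scheme are assumptions; see `orchestrator.py` for the actual logic):
+
+ ```python
+ import hashlib
+ from pathlib import Path
+
+ from jinja2 import Environment, FileSystemLoader
+
+ def save_job_with_jinja_template(cfg: dict, template_path: str, output_dir: str) -> Path:
+     # Render the Vela job template with this leaf node's configuration.
+     rendered = Environment(loader=FileSystemLoader(".")).get_template(template_path).render(**cfg)
+
+     # Name the job file after a hash of the leaf node, e.g. test-<hash>.yaml.
+     digest = hashlib.md5(cfg["leaf_node"].encode()).hexdigest()
+     out_path = Path(output_dir) / f"{cfg['branch']}-{digest}.yaml"
+     out_path.parent.mkdir(parents=True, exist_ok=True)
+     out_path.write_text(rendered)
+     return out_path
+ ```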
+
+ 2. To initiate the skill generation pipeline, trigger a job by passing one of the generated YAML files to `oc`:
+
+ ```
+ oc create -f jobs/yaml_name.yaml
+ ```
+
+ This command will execute the pipeline and store the results in the `new_data/labrador-datagen` directory within the COS bucket mounted on the Vela cluster.
+
+ ## Run Instructions (Manual - Testing)
+
+ ### Step 1: Run model
+
+ Run the teacher model (for testing purposes, it can be replaced with any small model):
+
+ ```
+ text-generation-launcher -p 8080 --model-id mistralai/Mixtral-8x7B-Instruct-v0.1 --dtype bfloat16 --max-input-length 4096 --max-batch-prefill-tokens 4096 --max-total-tokens 12288
+ ```
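+
+ To smoke-test the launched server, you can query text-generation-inference's `/generate` endpoint (the host, port, and parameters here are assumptions matching the command above):
+
+ ```python
+ import requests
+
+ # Query the TGI server launched above on port 8080.
+ response = requests.post(
+     "http://localhost:8080/generate",
+     json={"inputs": "What is synthetic data generation?", "parameters": {"max_new_tokens": 64}},
+     timeout=120,
+ )
+ response.raise_for_status()
+ print(response.json()["generated_text"])
+ ```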
+
+ Next, set the following environment variables:
+
+ ```
+ LEAF_NODE=knowledge/textbooks/ethics/qna.yaml # Path to the leaf node that you want to download
+ NUM_SAMPLES=30
+ NUM_GROUNDED_QUESTIONS=3
+ NUM_GEN_PROC=32
+ NUM_UTIL_PROC=8
+ SAVE_PATH=new_data/labrador_datagen # Path where you want to save the generated data
+ CONTEXT=0 # Set 0 for freeform and 1 for grounded
+ DATA_PATH=.
+ CHECKSUM=test
+ BRANCH_NAME=test # Branch of the taxonomy repo to download data from
+ KNOWLEDGE=1 # Set 0 for skills and 1 for knowledge
+ PARENT_DIR=$(dirname "$LEAF_NODE")
+ GIT_ACCESS_TOKEN= # Access token for the taxonomy repo
+ ```
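+
+ A minimal sketch of how a pipeline script might consume these values (variable names from the block above; the defaults are assumptions):
+
+ ```python
+ import os
+
+ leaf_node = os.environ["LEAF_NODE"]                    # e.g. knowledge/textbooks/ethics/qna.yaml
+ num_samples = int(os.environ.get("NUM_SAMPLES", "30"))
+ grounded = os.environ.get("CONTEXT", "0") == "1"       # 1 -> grounded, 0 -> freeform
+ knowledge = os.environ.get("KNOWLEDGE", "1") == "1"    # 1 -> knowledge, 0 -> skills
+ save_path = os.environ.get("SAVE_PATH", "new_data/labrador_datagen")
+ ```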
+
+ ### Skills
+
+ Download the leaf-node data:
+ ```
+ wget --header "Authorization: token $GIT_ACCESS_TOKEN" --directory-prefix="$DATA_PATH/$PARENT_DIR" "https://raw.githubusercontent.com/instruct-lab/taxonomy/$BRANCH_NAME/$LEAF_NODE"
+ ```
+
+ Run the Justfile using:
+
+ ```
+ just run
  ```
+
+ The Justfile checks the `CONTEXT` value: if it is set to 1, it runs the scripts for grounded data generation; if it is set to 0, it runs the scripts for freeform data generation and saves the generated files at the root of the repo, in the same directory structure as the leaf node.
+
+ ### Knowledge
+
+ Download the source documents:
+ ```
+ bash download_docs.sh
+ ```
+
+ Run the knowledge generation script:
+ ```
+ python knowledge_generation_pipeline.py
+ ```