jaideepr97 committed on
Commit f135def
1 Parent(s): 0ee91d0

update model card

Files changed (1)
  1. README.md +6 -5
README.md CHANGED
@@ -11,10 +11,11 @@ language:
- en
base_model: ibm/granite-7b-base
---
+ # Model Card for Granite-7b-lab [Paper](https://arxiv.org/abs/2403.01081)

### Overview

- ![Screenshot 2024-02-22 at 11.26.13 AM.png](Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a/Screenshot_2024-02-22_at_11.26.13_AM.png)
+ ![Screenshot 2024-02-22 at 11.26.13 AM.png](model-card/Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a_Screenshot_2024-02-22_at_11.26.13_AM.png)

### Performance

@@ -44,18 +45,18 @@ LAB consists of three key components:
2. Large-scale synthetic data generator
3. Two-phase training with replay buffers

- ![Untitled](Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a/Untitled.png)
+ ![Untitled](model-card/Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a_Untitled.png)

The LAB approach allows new knowledge and skills to be added incrementally to an already pre-trained model without suffering from catastrophic forgetting.

The taxonomy is a tree of seed examples that are used to prompt a teacher model to generate synthetic data. It allows the data curator or model designer to easily specify a diverse set of the knowledge domains and skills they would like to include in their LLM. At a high level, these fall into three bins - knowledge, foundational skills, and compositional skills. The leaf nodes of the taxonomy are tasks associated with one or more seed examples.

- ![Untitled](Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a/Untitled%201.png)
+ ![Untitled](model-card/Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a_Untitled%201.png)

During synthetic data generation, **unlike previous approaches where seed examples are uniformly drawn from the entire pool (i.e. self-instruct), we use the taxonomy to drive the sampling process**: for each knowledge domain or skill, we only use the local examples within the leaf node as seeds to prompt the teacher model.
This lets the teacher model better exploit the task distribution defined by the local examples of each node, while the diversity of the taxonomy itself ensures that the overall generation covers a wide range of tasks, as illustrated below. In turn, this allows Mixtral 8x7B to be used as the teacher model for generation while the resulting model performs very competitively with models such as ORCA-2, WizardLM, and Zephyr Beta that rely on synthetic data generated by much larger and more capable models like GPT-4.

- ![intuition.png](Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a/intuition.png)
+ ![intuition.png](model-card/Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a_intuition.png)

For adding new domain-specific knowledge, we provide an external knowledge source (a document) and prompt the model to generate questions and answers based on that document.
Foundational skills such as reasoning and compositional skills such as creative writing are generated through in-context learning, using the seed examples from the taxonomy.
@@ -68,7 +69,7 @@ The second step uses a replay buffer with data from the first step.
Both foundational skills and compositional skills are learned during the skills-tuning phase, where a replay buffer of data from the knowledge phase is used.
Importantly, we use a set of hyper-parameters for training that is very different from standard small-scale supervised fine-tuning: a larger batch size and a carefully optimized learning rate and scheduler.

- ![Untitled](Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a/Untitled%202.png)
+ ![Untitled](model-card/Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a_Untitled%202.png)

## Model description
- **Model Name**: Granite-7b-lab
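The taxonomy-driven sampling described in the card above can be illustrated with a short sketch. This is a hypothetical, minimal reconstruction, not the released LAB tooling: the `TaxonomyNode` class, the `sample_seeds` helper, and the toy seed examples are all assumptions made for illustration. What it shows is the key constraint from the card: seeds for a teacher prompt are drawn only from the leaf node currently being processed, never from the global pool of examples.

```python
# Hypothetical sketch of taxonomy-driven seed sampling (illustration only,
# not the released LAB code). Leaf nodes hold a task plus its local seed
# examples; seeds for the teacher prompt come only from that leaf.
import random
from dataclasses import dataclass, field


@dataclass
class TaxonomyNode:
    name: str                                                 # e.g. "compositional_skills/writing/poetry"
    seed_examples: list[dict] = field(default_factory=list)   # leaf-local seeds
    children: list["TaxonomyNode"] = field(default_factory=list)

    def leaves(self):
        if not self.children:
            yield self
        for child in self.children:
            yield from child.leaves()


def sample_seeds(leaf: TaxonomyNode, k: int = 3) -> list[dict]:
    """Draw seeds only from the leaf's own examples, never from the global pool."""
    return random.sample(leaf.seed_examples, min(k, len(leaf.seed_examples)))


# Toy taxonomy covering the two skill bins; a knowledge branch would look the
# same but its leaves carry a source document rather than only Q&A seeds.
root = TaxonomyNode("root", children=[
    TaxonomyNode("foundational_skills", children=[
        TaxonomyNode("reasoning/logic", seed_examples=[
            {"question": "If all A are B and x is A, is x B?", "answer": "Yes."},
            {"question": "Is 17 prime?", "answer": "Yes; it has no divisors other than 1 and 17."},
            {"question": "What number follows 2, 4, 8, 16?", "answer": "32."},
        ]),
    ]),
    TaxonomyNode("compositional_skills", children=[
        TaxonomyNode("writing/poetry", seed_examples=[
            {"question": "Write a haiku about rain.", "answer": "Rain taps the window ..."},
            {"question": "Write a limerick about a cat.", "answer": "There once was a cat ..."},
        ]),
    ]),
])

for leaf in root.leaves():
    seeds = sample_seeds(leaf, k=2)
    # In the real pipeline these seeds would be formatted into a generation
    # prompt for the teacher model (Mixtral 8x7B in this card).
    print(leaf.name, "->", [s["question"] for s in seeds])
```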
 
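For the knowledge branch, the card states that an external document is provided and the teacher is prompted to generate questions and answers grounded in it. A minimal sketch of that prompting step follows; the prompt wording, the `build_knowledge_prompt` name, and the `num_pairs` parameter are assumptions for illustration, not the actual LAB prompt.

```python
# Hypothetical sketch of the document-grounded generation step for knowledge
# leaf nodes (the prompt shape here is an assumption, not the LAB prompt).
def build_knowledge_prompt(document: str, num_pairs: int = 5) -> str:
    return (
        "You are creating question-answer pairs for instruction tuning.\n"
        f"Using only the document below, write {num_pairs} diverse questions "
        "and their answers. Every answer must be supported by the document.\n\n"
        f"Document:\n{document}\n\n"
        "Question 1:"
    )


doc = "The Battle of Hastings was fought in 1066 between ..."
prompt = build_knowledge_prompt(doc, num_pairs=3)
# The prompt would then be sent to the teacher model; the generated Q&A pairs
# become synthetic training data for the knowledge-tuning phase.
print(prompt)
```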
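Finally, the two-phased training amounts to a data-scheduling recipe: a knowledge-tuning phase first, then a skills-tuning phase over a mixture of skills data and a replay buffer drawn from the knowledge phase. The sketch below illustrates only that schedule; `replay_fraction`, the batch size, and the function names are assumptions made here, and the card itself only states that a replay buffer and non-standard hyper-parameters (larger batch size, carefully tuned learning rate and scheduler) are used.

```python
# Hypothetical sketch of the two-phase schedule with a replay buffer.
# Phase split, replay_fraction, and batch_size are illustrative assumptions.
import random


def make_batches(samples, batch_size):
    samples = list(samples)
    random.shuffle(samples)
    return [samples[i:i + batch_size] for i in range(0, len(samples), batch_size)]


def two_phase_schedule(knowledge_data, skills_data, batch_size=64, replay_fraction=0.2):
    # Phase 1: knowledge tuning only.
    phase1 = make_batches(knowledge_data, batch_size)

    # Phase 2: skills tuning, with a replay buffer of knowledge-phase data
    # mixed back in to guard against catastrophic forgetting.
    replay = random.sample(list(knowledge_data),
                           int(replay_fraction * len(knowledge_data)))
    phase2 = make_batches(list(skills_data) + replay, batch_size)
    return phase1, phase2


# Toy usage with placeholder samples standing in for tokenized training examples.
knowledge = [f"knowledge_sample_{i}" for i in range(200)]
skills = [f"skills_sample_{i}" for i in range(300)]
p1, p2 = two_phase_schedule(knowledge, skills)
print(len(p1), "phase-1 batches;", len(p2), "phase-2 batches (skills + knowledge replay)")
```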