Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


labradorite-13b - GGUF
- Model creator: https://huggingface.co/ibm/
- Original model: https://huggingface.co/ibm/labradorite-13b/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [labradorite-13b.Q2_K.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.Q2_K.gguf) | Q2_K | 4.52GB |
| [labradorite-13b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.IQ3_XS.gguf) | IQ3_XS | 4.99GB |
| [labradorite-13b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.IQ3_S.gguf) | IQ3_S | 5.27GB |
| [labradorite-13b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.Q3_K_S.gguf) | Q3_K_S | 5.27GB |
| [labradorite-13b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.IQ3_M.gguf) | IQ3_M | 5.57GB |
| [labradorite-13b.Q3_K.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.Q3_K.gguf) | Q3_K | 5.9GB |
| [labradorite-13b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.Q3_K_M.gguf) | Q3_K_M | 5.9GB |
| [labradorite-13b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.Q3_K_L.gguf) | Q3_K_L | 6.45GB |
| [labradorite-13b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.IQ4_XS.gguf) | IQ4_XS | 6.54GB |
| [labradorite-13b.Q4_0.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.Q4_0.gguf) | Q4_0 | 6.86GB |
| [labradorite-13b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.IQ4_NL.gguf) | IQ4_NL | 6.9GB |
| [labradorite-13b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.Q4_K_S.gguf) | Q4_K_S | 6.91GB |
| [labradorite-13b.Q4_K.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.Q4_K.gguf) | Q4_K | 7.33GB |
| [labradorite-13b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.Q4_K_M.gguf) | Q4_K_M | 7.33GB |
| [labradorite-13b.Q4_1.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.Q4_1.gguf) | Q4_1 | 7.61GB |
| [labradorite-13b.Q5_0.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.Q5_0.gguf) | Q5_0 | 8.36GB |
| [labradorite-13b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.Q5_K_S.gguf) | Q5_K_S | 8.36GB |
| [labradorite-13b.Q5_K.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.Q5_K.gguf) | Q5_K | 8.6GB |
| [labradorite-13b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.Q5_K_M.gguf) | Q5_K_M | 8.6GB |
| [labradorite-13b.Q5_1.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.Q5_1.gguf) | Q5_1 | 9.1GB |
| [labradorite-13b.Q6_K.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.Q6_K.gguf) | Q6_K | 9.95GB |
| [labradorite-13b.Q8_0.gguf](https://huggingface.co/RichardErkhov/ibm_-_labradorite-13b-gguf/blob/main/labradorite-13b.Q8_0.gguf) | Q8_0 | 12.88GB |
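All files in the table follow the same naming pattern, so fetching one programmatically only requires the quant method name. The helper below is a minimal sketch; the repo id and filename pattern come from the links above, and the commented-out download call uses `hf_hub_download` from the `huggingface_hub` package.

```python
# Repo id taken from the file links in the table above.
REPO_ID = "RichardErkhov/ibm_-_labradorite-13b-gguf"

def gguf_filename(quant: str) -> str:
    """Return the GGUF filename for a quant method listed in the table,
    e.g. "Q4_K_M" -> "labradorite-13b.Q4_K_M.gguf"."""
    return f"labradorite-13b.{quant}.gguf"

# Example (requires network access and `pip install huggingface_hub`):
# from huggingface_hub import hf_hub_download
# path = hf_hub_download(repo_id=REPO_ID, filename=gguf_filename("Q4_K_M"))
```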




Original model description:
---
pipeline_tag: text-generation
tags:
- labradorite
- llama
- llama-2
- ibm
- lab
- labrador
- merlinite
license: llama2
license_link: https://ai.meta.com/llama/license/
language:
- en
---
Update: 🔥 [Merlinite-7B](https://huggingface.co/ibm/merlinite-7b): LAB on Mistral-7B

# Model Card for Labradorite 13b 🔥 [Paper](https://arxiv.org/abs/2403.01081) 


### Overview

![overview](overview.png)

### Performance

| Model | Alignment | Base | Teacher | MTBench (Avg) | MMLU (5-shot) | ARC-C (25-shot) | HellaSwag (10-shot) | Winogrande (5-shot) | GSM8K (5-shot, strict) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Llama-2-13b-Chat | RLHF | Llama-2-13b | Human Annotators | 6.65 ** | 54.58 | 59.81 | 82.52 | 75.93 | 34.80 |
| Orca-2 | Progressive Training | Llama-2-13b | GPT-4 | 6.15 ** | 60.37 ** | 59.73 | 79.86 | 78.22 | 48.22 |
| WizardLM-13B-V1.2 | Evol-Instruct | Llama-2-13b | GPT-4 | 7.20 ** | 54.83 | 60.24 | 82.62 | 76.40 | 43.75 |
| Labradorite-13b | Large-scale Alignment for chatBots (LAB) | Llama-2-13b | Mixtral-8x7B-Instruct | 7.23 ^ | 58.89 | 61.69 | 83.15 | 79.56 | 40.11 |

[**] Numbers taken from [lmsys/chatbot-arena-leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard)
[^] Average across 4 runs

### Method

LAB: **L**arge-scale **A**lignment for chat**B**ots is a novel synthetic data-based alignment tuning method for LLMs from IBM Research. Labradorite-13b is a LLaMA-2-13b-derivative model trained with the LAB methodology, using Mixtral-8x7b-Instruct as a teacher model.

LAB consists of three key components:

1. Taxonomy-driven data curation process
2. Large-scale synthetic data generator
3. Two-phased-training with replay buffers

![phases](phases.png)

The LAB approach allows new knowledge and skills to be added incrementally to an already pre-trained model without suffering from catastrophic forgetting.

The taxonomy is a tree of seed examples that are used to prompt the teacher model to generate synthetic data; the sub-tree for the "writing" skill is illustrated in the figure below.

![writing-clear](writing-clear.png)

The taxonomy allows the data curator or model designer to easily specify the diverse set of knowledge domains and skills they would like to include in their LLM. At a high level, these fall into three bins: knowledge, foundational skills, and compositional skills. The leaf nodes of the taxonomy are tasks associated with one or more seed examples.

![tax](tax.png)
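One way to picture the structure described above is a tree whose leaves hold seed examples. The sketch below is purely illustrative: the node names and `seeds` field are hypothetical, and the actual LAB taxonomy format may differ.

```python
# Illustrative taxonomy: three top-level bins, leaf nodes carry seed examples
# (names and structure are hypothetical, not the published LAB format).
taxonomy = {
    "knowledge": {
        "science": {"chemistry": {"seeds": ["Q: What is the pH of ...? A: ..."]}},
    },
    "foundational_skills": {
        "reasoning": {"logic": {"seeds": ["If all X are Y, and ..."]}},
    },
    "compositional_skills": {
        "writing": {"poetry": {"seeds": ["Write a haiku about ..."]}},
    },
}

def leaf_nodes(tree, path=()):
    """Yield (path, seeds) for every leaf node in the taxonomy."""
    if "seeds" in tree:
        yield path, tree["seeds"]
        return
    for name, subtree in tree.items():
        yield from leaf_nodes(subtree, path + (name,))
```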

During synthetic data generation, **unlike previous approaches where seed examples are drawn uniformly from the entire pool (i.e., self-instruct), we use the taxonomy to drive the sampling process**: for each knowledge/skill, we use only the local examples within the leaf node as seeds to prompt the teacher model.
This lets the teacher model better exploit the task distributions defined by the local examples of each node, while the diversity of the taxonomy itself ensures that the overall generation covers a wide range of tasks, as illustrated below. In turn, this allows Mixtral 8x7B to serve as the teacher model while performing very competitively against models such as Orca-2 and WizardLM that rely on synthetic data generated by much larger and more capable models like GPT-4.

![intuition](intuition.png)
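The contrast between uniform (self-instruct-style) sampling and the taxonomy-driven sampling described above can be sketched as follows; the leaf dictionary and seed names here are hypothetical placeholders.

```python
import random

# Hypothetical leaf nodes: each maps to its own local seed examples.
leaves = {
    "writing/poetry":  ["seed_p1", "seed_p2", "seed_p3"],
    "writing/email":   ["seed_e1", "seed_e2"],
    "reasoning/logic": ["seed_l1", "seed_l2"],
}

def uniform_seeds(leaves, k, rng):
    """Self-instruct style: draw k seeds uniformly from the entire pool."""
    pool = [s for seeds in leaves.values() for s in seeds]
    return rng.sample(pool, k)

def taxonomy_seeds(leaves, node, k, rng):
    """LAB style: use only the local examples within the given leaf node."""
    local = leaves[node]
    return rng.sample(local, min(k, len(local)))
```

With taxonomy-driven sampling, every prompt to the teacher stays within one leaf's task distribution; coverage across tasks comes from iterating over the leaves rather than from the draw itself.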

For adding new domain-specific knowledge, we provide an external knowledge source (document) and prompt the model to generate questions and answers based on the document.
Foundational skills such as reasoning and compositional skills such as creative writing are generated through in-context learning using the seed examples from the taxonomy. 
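The document-grounded generation step could look something like the sketch below. The prompt wording is entirely hypothetical (the actual LAB prompts are not published in this card); it only illustrates conditioning the teacher on an external document.

```python
# Hypothetical prompt construction for document-grounded QA generation.
def qa_generation_prompt(document: str, n_pairs: int = 3) -> str:
    """Build a teacher prompt asking for QA pairs grounded in a document."""
    return (
        f"Read the following document and write {n_pairs} question-answer "
        "pairs that are fully grounded in it.\n\n"
        f"Document:\n{document}\n\nQ&A pairs:"
    )
```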

Additionally, to ensure the data is high-quality and safe, we employ steps to check the questions and answers to ensure that they are grounded and safe. This is done using the same teacher model that generated the data. 

Our training consists of two major phases: knowledge tuning and skills tuning.
Knowledge tuning has two steps: the first learns simple knowledge (short samples) and the second learns complicated knowledge (longer samples).
The second step uses a replay buffer with data from the first step.
Both foundational skills and compositional skills are learned during the skills tuning phase, which uses a replay buffer of data from the knowledge phase.
Importantly, we use a set of training hyper-parameters that differ substantially from standard small-scale supervised fine-tuning: a larger batch size and a carefully optimized learning rate and scheduler.

![training](training.png)
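The replay-buffer mixing described above can be sketched in a few lines. This is an illustration only: the mixing ratio, sample representation, and function name are hypothetical, not the published LAB recipe.

```python
import random

def mix_with_replay(current, replay, replay_fraction, rng):
    """Build a training mix for the current phase: all current-phase samples
    plus a fraction of replayed samples from the earlier phase, shuffled
    together, to mitigate catastrophic forgetting."""
    n_replay = int(len(current) * replay_fraction)
    replayed = rng.sample(replay, min(n_replay, len(replay)))
    mixed = list(current) + replayed
    rng.shuffle(mixed)
    return mixed
```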

## Model description

- **Language(s):** Primarily English
- **License:** Labradorite-13b is a LLaMA 2 derivative and is licensed under the **[LLAMA 2 Community License](https://ai.meta.com/llama/license/)**
- **Base model:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)
- **Teacher Model:** [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)

## Prompt Template

```python
sys_prompt = """You are Labrador, an AI language model developed by IBM DMF (Data Model Factory) Alignment Team. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."""
prompt = f'<|system|>\n{sys_prompt}\n<|user|>\n{inputs}\n<|assistant|>\n'
stop_token = '<|endoftext|>'
```
We advise using the system prompt employed during the model's training for optimal inference performance, as results can vary with the provided instructions.
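The template above can be wrapped in a small helper. The function is a convenience sketch; the commented-out `transformers` usage at the bottom is likewise illustrative (loading details such as dtype and device placement will vary, and a 13B model needs substantial resources).

```python
# System prompt and template copied from the card above.
SYS_PROMPT = (
    "You are Labrador, an AI language model developed by IBM DMF "
    "(Data Model Factory) Alignment Team. You are a cautious assistant. "
    "You carefully follow instructions. You are helpful and harmless and "
    "you follow ethical guidelines and promote positive behavior."
)

def build_prompt(user_input: str, sys_prompt: str = SYS_PROMPT) -> str:
    """Format a single-turn prompt in the model's training template."""
    return f"<|system|>\n{sys_prompt}\n<|user|>\n{user_input}\n<|assistant|>\n"

# Example with transformers (sketch; requires `pip install transformers`):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("ibm/labradorite-13b")
# model = AutoModelForCausalLM.from_pretrained("ibm/labradorite-13b")
# out = model.generate(**tok(build_prompt("Hello!"), return_tensors="pt"))
```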

For chatbot use cases, we recommend testing the following system prompt:
```python
sys_prompt = """You are Labrador, an AI language model developed by IBM DMF (Data Model Factory) Alignment Team. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior. You always respond to greetings (for example, hi, hello, g'day, morning, afternoon, evening, night, what's up, nice to meet you, sup, etc) with "Hello! I am Labrador, created by the IBM DMF Alignment Team. How can I help you today?". Please do not say anything else and do not start a conversation."""
```

## Bias, Risks, and Limitations

Labradorite-13b has not been aligned to human preferences, so the model might produce problematic outputs. The model may also retain limitations and constraints that arise from the base model and other members of the Llama 2 model family.

The model undergoes training on synthetic data, leading to the potential inheritance of both advantages and limitations from the underlying teacher models and data generation methods. The incorporation of safety measures during Labradorite-13b's training process is considered beneficial. However, a nuanced understanding of the associated risks requires detailed studies for more accurate quantification.

In the absence of adequate safeguards and RLHF, there exists a risk of malicious utilization of these models for generating disinformation or harmful content. Caution is urged against complete reliance on a specific language model for crucial decisions or impactful information, as preventing these models from fabricating content is not straightforward. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in ungrounded generation scenarios due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain.