Update README
README.md
CHANGED
@@ -7,6 +7,8 @@ tags:
 - ibm
 license: llama2
 license_link: https://ai.meta.com/llama/license/
+language:
+- en
 ---
 # Model Card for Labradorite 13b
 
@@ -21,9 +23,10 @@ license_link: https://ai.meta.com/llama/license/
 | Llama-2-13b-Chat | RLHF | Llama-2-13b | Human Annotators | 6.65 ** | 54.58 | 59.81 | 82.52 | 75.93 | 34.80 |
 | Orca-2 | Progressive Training | Llama-2-13b | GPT-4 | 6.15 ** | 60.37 ** | 59.73 | 79.86 | 78.22 | 48.22 |
 | WizardLM-13B-V1.2 | Evol-Instruct | Llama-2-13b | GPT-4 | 7.20 ** | 54.83 | 60.24 | 82.62 | 76.40 | 43.75 |
-| Labradorite-13b | Large-scale Alignment for chatBots (LAB) | Llama-2-13b | Mixtral-8x7B-Instruct | 7.
+| Labradorite-13b | Large-scale Alignment for chatBots (LAB) | Llama-2-13b | Mixtral-8x7B-Instruct | 7.22 ^ | 58.89 | 61.69 | 83.15 | 79.56 | 40.11 |
 
 [**] Numbers taken from [lmsys/chatbot-arena-leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard)
+[^] Average across 4 runs
 
 ### Method
 
@@ -75,17 +78,21 @@ Importantly, we use a set of hyper-parameters for training that are very differe
 ## Prompt Template
 
 ```python
-sys_prompt = """You are Labrador, an AI language model developed by IBM DMF (Data Model Factory) Alignment Team. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior.
+sys_prompt = """You are Labrador, an AI language model developed by IBM DMF (Data Model Factory) Alignment Team. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."""
 prompt = f'<|system|>\n{sys_prompt}\n<|user|>\n{inputs}\n<|assistant|>\n'
 stop_token = '<|endoftext|>'
 ```
-
 We advise utilizing the system prompt employed during the model's training for optimal inference performance, as there could be performance variations based on the provided instructions.
 
+For chatbot use cases, we recommend testing the following system prompt:
+```python
+sys_prompt = """You are Labrador, an AI language model developed by IBM DMF (Data Model Factory) Alignment Team. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior. You always respond to greetings (for example, hi, hello, g'day, morning, afternoon, evening, night, what's up, nice to meet you, sup, etc) with "Hello! I am Labrador, created by the IBM DMF Alignment Team. How can I help you today?". Please do not say anything else and do not start a conversation."""
+```
+
 ## Bias, Risks, and Limitations
 
 Labradorite-13b has not been aligned to human preferences, so the model might produce problematic outputs. The model might also maintain the limitations and constraints that arise from the base model and other members of the Llama 2 model family.
 
 The model undergoes training on synthetic data, leading to the potential inheritance of both advantages and limitations from the underlying teacher models and data generation methods. The incorporation of safety measures during Labradorite-13b's training process is considered beneficial. However, a nuanced understanding of the associated risks requires detailed studies for more accurate quantification.
 
 In the absence of adequate safeguards and RLHF, there exists a risk of malicious utilization of these models for generating disinformation or harmful content. Caution is urged against complete reliance on a specific language model for crucial decisions or impactful information, as preventing these models from fabricating content is not straightforward. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in ungrounded generation scenarios due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain.
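The prompt template added in this commit can be exercised end to end. The snippet below is a minimal sketch, not part of the model card or the commit: it assumes the checkpoint is available under the Hub id `ibm/labradorite-13b` (an assumed repo id, not stated in this diff) and uses the standard `transformers` generation API.

```python
# Minimal usage sketch for the prompt template above; not part of the commit.
# Assumptions: the model is published as "ibm/labradorite-13b" on the Hugging
# Face Hub, and '<|endoftext|>' is a token the tokenizer knows.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm/labradorite-13b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

sys_prompt = """You are Labrador, an AI language model developed by IBM DMF (Data Model Factory) Alignment Team. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."""
inputs = "Explain the difference between nuclear fission and nuclear fusion."

# Template from the model card: system, user, and assistant turns plus a stop token.
prompt = f'<|system|>\n{sys_prompt}\n<|user|>\n{inputs}\n<|assistant|>\n'
stop_token = '<|endoftext|>'

encoded = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **encoded,
    max_new_tokens=256,
    eos_token_id=tokenizer.convert_tokens_to_ids(stop_token),
)
# Decode only the newly generated assistant turn.
print(tokenizer.decode(output[0][encoded["input_ids"].shape[1]:], skip_special_tokens=True))
```

The chatbot system prompt introduced in the same hunk drops in the same way: swap in the longer greeting-constrained `sys_prompt` string and keep the template and stop token unchanged.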