---
license: bigscience-bloom-rail-1.0
datasets:
- ehartford/wizard_vicuna_70k_unfiltered
- shahules786/orca-chat
- timdettmers/openassistant-guanaco
- laion/OIG
language:
- fr
- en
library_name: transformers
pipeline_tag: text-generation
widget:
- text: Bonjour, qui es-tu ?
- text: Hello, who are you?
---
bloomz-560m-sft-chat
--------------------
We introduce the bloomz-560m-sft-chat model, a fine-tuned version of the Large Language Model [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m). The model is notable for being trained in a chatbot context and converted from float16 to bfloat16. It therefore serves as a solid starting point for further fine-tuning towards other, more specific tasks.
The model was trained equally on both French and English data, ensuring maximum efficiency for these two languages (and their interactions). Due to the transition from float16 to bfloat16, we do not guarantee the preservation of the original model's multilingual capabilities. However, fine-tuning can restore reasonable performance on other languages.
The objective is to pre-train all three models (Bloomz-{560m, 3b, 7b1-mt}-sft-chat) to ensure high-performing, energy-efficient, and fast "foundation" models for inference on "realistic" infrastructures suitable for a business with standard industrial capabilities.
Bloomz, through its license, enables free and flexible industrial use. Its tokenizer was designed with a truly multilingual context in mind and generates significantly fewer tokens per word than other LLM tokenizers. This not only improves predictive performance but also increases efficiency at inference time, since fewer model calls are needed to generate text with shorter contexts. The table below illustrates this using French as an example, tokenizing Marcel Proust's longest sentence (823 words):
```
Sans honneur que précaire, sans liberté que provisoire, [...], et de façon qu’à eux-mêmes il ne leur paraisse pas un vice.
```
| model | GPT 3.5 | Boris | Flan-T5 | LLaMA | Dolly | MPT | Falcon | Bloomz |
|:--------------:|:-------:|:-----:|:-------:|:-----:|:-----:|:---:|:------:|:------:|
| tokens per word | 2.3 | 2.3 | 2 | 1.9 | 1.9 | 1.9 | 1.8 | 1.4 |
For comparison, a specialized French tokenizer such as [CamemBERT](https://huggingface.co/camembert/camembert-base) or [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) yields 1.5 tokens per word. Beyond its positive impact on inference time and resource consumption, a direct relationship has already been demonstrated between the number of tokens per word required for modeling and the predictive performance of the model [1].
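As an illustrative sketch (not part of the original benchmark), the tokens-per-word ratio can be estimated as follows. The `tokens_per_word` helper and its whitespace-based word count are our own simplifications; any Hugging Face tokenizer exposing an `encode` method can be plugged in:

```python
# Illustrative sketch: estimate a tokens-per-word ratio for any tokenizer
# exposing an encode() method (e.g. a Hugging Face AutoTokenizer).
def tokens_per_word(tokenizer, text: str) -> float:
    n_tokens = len(tokenizer.encode(text))
    n_words = len(text.split())  # crude whitespace word count
    return n_tokens / n_words

# With a real tokenizer (assumes network access to the Hugging Face Hub):
#   from transformers import AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("cmarkea/bloomz-560m-sft-chat")
#   tokens_per_word(tok, proust_sentence)

# Stand-in tokenizer for demonstration: splits each word into 4-character chunks.
class DummyTokenizer:
    def encode(self, text: str):
        return [w[i:i + 4] for w in text.split() for i in range(0, len(w), 4)]

ratio = tokens_per_word(DummyTokenizer(), "tokenization is fun")
```

The same helper applied to the Proust sentence with each model's tokenizer would reproduce the table above.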
Dataset
-------
After analyzing a substantial set of models, we observed that the most effective pre-training for zero-shot use cases is pre-training in a chatbot context. This study was conducted internally, focusing specifically on the French context. As a result, we trained the model on a dataset of 0.9 billion tokens, consisting of interactions between an individual and a third party. To balance the French and English data, we used the Google Translate API.
Training
--------
Here is the table summarizing the architecture used for training, along with the training time and average inference speed per token on the target architecture in tokens per second:
| model | Architecture | Training time (h) | Inference speed (tokens per second) |
|:----------------------:|:-------------:|:-----------------:|:-----------------------------------:|
| [bloomz-560m-sft-chat](https://huggingface.co/cmarkea/bloomz-560m-sft-chat) | 1 x A100 40GB | 41 | 29 |
| [bloomz-3b-sft-chat](https://huggingface.co/cmarkea/bloomz-3b-sft-chat) | 1 x A100 40GB | 140 | 13 |
| [bloomz-7b1-mt-sft-chat](https://huggingface.co/cmarkea/bloomz-7b1-mt-sft-chat) | 4 x A100 40GB | 268 | 8 |
| Hyperparameter | Value |
|:---------------------:|:----------:|
| label smoothing | 0.05 |
| optimizer | AdamW |
| betas | 0.9, 0.999 |
| AMSGrad | True |
| learning rate | 5e-6 |
| anneal strategy | cos |
| div factor | 100 |
| final div factor | 0.1 |
| batch size | 16 |
| gradient accumulation | 25 |
| max length | 1500 |
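As a hedged sketch, the hyperparameters above map onto a PyTorch setup roughly as follows; the model and `total_steps` are placeholders, not values from the original training run:

```python
import torch

# Sketch of the training configuration in the table above.
model = torch.nn.Linear(8, 8)  # stand-in for the Bloomz model
total_steps = 1000             # placeholder, not the original step count

criterion = torch.nn.CrossEntropyLoss(label_smoothing=0.05)
optimizer = torch.optim.AdamW(
    model.parameters(), lr=5e-6, betas=(0.9, 0.999), amsgrad=True
)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer,
    max_lr=5e-6,
    total_steps=total_steps,
    anneal_strategy="cos",
    div_factor=100,        # initial lr = max_lr / 100
    final_div_factor=0.1,  # final lr = initial lr / 0.1
)
```

With a batch size of 16 and gradient accumulation of 25, the effective batch size is 400 sequences per optimizer step.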
Experimentations
----------------
Since the model was trained only on English and French corpora, its performance cannot be guaranteed in other languages. This degradation in other languages is also due to the conversion of the model's data type from float16 to bfloat16. The conversation examples below illustrate this point:
| French | English | Spanish |
|:------:|:-------:|:-------:|
| **human:** C'est quoi la première guerre mondiale ?<br>**bot:** La première guerre mondiale est une guerre entre les États-Unis et la Grande-Bretagne.<br>**human:** Tu es certain de ça ?<br>**bot:** Oui, je suis sûr.<br>**human:** Elle a eu lieu quand ?<br>**bot:** La première guerre mondiale a eu lieu en 1914.<br>**human:** Et elle a fini quand ?<br>**bot:** La première guerre mondiale a eu lieu en 1918.<br>**human:** Elle a opposé qui a qui ?<br>**bot:** La première guerre mondiale a eu lieu entre les États-Unis et la Grande-Bretagne. | **human:** What is the first world war?<br>**bot:** The first world war was a conflict between the United States and the Soviet Union. The conflict began in World War II and lasted until the end of World War III.<br>**human:** Are you sure that this war was between these two belligerents and that it started during the Second World War?<br>**bot:** Yes, the conflict was between the United States and the Soviet Union.<br>**human:** What are the start and end dates of the first war?<br>**bot:** The first war began in World War II and lasted until the end of World War III. | **human:** ¿Qué es la Primera Guerra Mundial?<br>**bot:** La Primera Guerra Mundial fue una guerra entre los Estados Unidos y la Alemania, que se extendió por un total de 18 meses.<br>**human:** ¿Solo estos dos países y por tan poco tiempo?<br>**bot:** La Primera Guerra Mundial fue una guerra entre los Estados Unidos y la Alemania, que se extendió por un total de 18 meses. |
Evaluating the performance of generative models can be challenging, and quantitative measures may not always provide the most relevant insights for this type of approach. In this regard, we conducted a qualitative performance study. It involves assessing the relevance of model responses to a pool of French questions using blind evaluators. The metric used for evaluation is the average of two criteria: response accuracy and completeness, similar to what the SSA metric aims to capture. Please note that this measure lacks rigorous scientific validity due to the limited number of questions in the test dataset and the fact that it involves only a few individuals with similar socio-demographic characteristics. The prompts take the same format for all models:
```
[Instruction]
Question : [Question]
Réponse :
```
As a result, the prompts do not exploit the structures of chatbot models to ensure fairness, and the evaluation quantifies performance in a purely instruction-based approach. The figure below illustrates the results. The higher a model is positioned in the top-left corner with a small circle radius, the better the model; conversely, if a model is towards the bottom-right with a large circle, it performs less favorably.
![constellation](https://i.postimg.cc/kggYhKg9/constellation.png)
We observe that across all models, the performance gain is logarithmic in relation to the increase in model parameters. However, for models that undergo multiple pre-trainings (vanilla, instruction, and chat), models pre-trained on instruction and chat perform significantly better in zero-shot contexts, with a notable improvement for chat-based approaches. The models we have trained demonstrate promising efficiency in this test compared to the number of parameters, indicating cost-effectiveness in a production context.
How to use bloomz-560m-sft-chat
-------------------------------
There are no specific instructions for using these models in a standard causal-inference context. However, to leverage the model's chatbot capability, each human prompt should be preceded by the EOS token (`</s>`) and each generated answer by the BOS token (`<s>`). The structure takes the following form:
```
</s>[human prompt 1]<s>[bot answer 1]</s>[human prompt 2]<s>
```
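A minimal sketch of building such a prompt, assuming the Bloomz EOS/BOS tokens `</s>` and `<s>`; the `build_chat_prompt` helper is our own illustration, not part of the model's API:

```python
# Hypothetical helper: assembles a chat prompt in the format described above.
EOS, BOS = "</s>", "<s>"

def build_chat_prompt(history, new_prompt):
    """history: list of (human, bot) turns; new_prompt: the next human message."""
    prompt = ""
    for human, bot in history:
        prompt += f"{EOS}{human}{BOS}{bot}"
    # The trailing BOS marks where the model should start generating.
    return f"{prompt}{EOS}{new_prompt}{BOS}"

prompt = build_chat_prompt(
    [("Bonjour, qui es-tu ?", "Je suis un assistant conversationnel.")],
    "Que sais-tu faire ?",
)
```

The resulting string can be passed directly to the text-generation pipeline shown below.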
For example, to load the model using the HuggingFace pipeline interface:
```python
from transformers import pipeline
model = pipeline("text-generation", "cmarkea/bloomz-560m-sft-chat")
result = model("Que veut dire NLP en machine learning ?", max_new_tokens=512)
result
[{'generated_text': 'Que veut dire NLP en machine learning ?NLP est un sous-ensemble
de la linguistique qui traite des concepts et des relations dans la langue. Il est
utilisé dans de nombreux domaines, notamment la reconnaissance vocale, la traduction
automatique et la traduction de texte.'}]
```
Citation
--------
```bibtex
@online{DeBloomzChat,
  AUTHOR = {Cyrile Delestre},
  URL = {https://huggingface.co/cmarkea/bloomz-560m-sft-chat},
  YEAR = {2023},
  KEYWORDS = {NLP ; Transformers ; Bloomz},
}
```