---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
license: apache-2.0
---

# BloomChat V1.0

BloomChat-v1.0 is based on the [BigScience Group Bloom-176B model](https://huggingface.co/bigscience/bloom) and is instruction-tuned on a subset of 100k datapoints per data source from the [OIG dataset](https://huggingface.co/datasets/laion/OIG) provided by LAION. It was then aligned using [Dolly 2.0](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and [Oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1).

## Model Details

### Model Description

- **Developed by:** [SambaNova Systems](https://sambanova.ai/) and [Together Computer](https://www.together.xyz/)
- **Model type:** Language Model
- **Language(s):** Multiple; see [training data from Bloom-176B](https://huggingface.co/bigscience/bloom#training-data)
- **License:** apache-2.0
- **Instruction Tuned from model:** [BigScience Group Bloom-176B](https://huggingface.co/bigscience/bloom)

### Additional Information

- **Blogpost:** [More Information Needed]

## Uses

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

Like all LLMs, BloomChat has certain limitations:
- Hallucination: BloomChat may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: BloomChat may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: BloomChat may inadvertently generate responses containing inappropriate or harmful content.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

A minimal inference sketch using the suggested parameters below is also included at the end of this card.

### Suggested inference parameters

- Temperature: 0.8
- Repetition penalty: 1.2
- Top-p: 0.9
- Max generated tokens: 512

### Suggested System Prompts

```
<human>: Write a script in which Bob accidentally breaks his dad's guitar
<bot>:
```

```
<human>: Classify the sentiment of the following sentence into Positive, Neutral, or Negative. Do it on a scale of 1/10: How about the following sentence: It is raining outside and I feel so blue
<bot>:
```

```
<human>: give a python code to open a http server in 8080 port using python 3.7
<bot>:
```

```
<human>: Answer the following question using the context below:
Q: Which regulatory body is involved?
Context: U.S. authorities launched emergency measures on Sunday to shore up confidence in the banking system after the failure of Silicon Valley Bank (SIVB.O) threatened to trigger a broader financial crisis. After a dramatic weekend, regulators said the failed bank’s customers will have access to all their deposits starting Monday and set up a new facility to give banks access to emergency funds. The Federal Reserve also made it easier for banks to borrow from it in emergencies.
While the measures provided some relief for Silicon Valley firms and global markets on Monday, worries about broader banking risks remain and have cast doubts over whether the Fed will stick with its plan for aggressive interest rate hikes.
<bot>:
```

## Training Details

### Training Data

- [OIG dataset](https://huggingface.co/datasets/laion/OIG)
- [Dolly 2.0](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
- [Oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1)

### Training Procedure

We trained BloomChat with SambaStudio, a platform built on SambaNova's in-house Reconfigurable Dataflow Unit (RDU). We started from [Bloom-176B](https://huggingface.co/bigscience/bloom), an OSS multilingual 176B GPT model pretrained by the [BigScience group](https://huggingface.co/bigscience).

### Prompting Style Used For Training

```
<human>: {input that the user wants from the bot}
<bot>:
```

```
<human>: {fewshot1 input}
<bot>: {fewshot1 response}
<human>: {fewshot2 input}
<bot>: {fewshot2 response}
<human>: {input that the user wants from the bot}
<bot>:
```

### Hyperparameters

**Instruction-tuned Training on OIG**

- Hardware: SambaNova Reconfigurable Dataflow Unit (RDU)
- Optimizer: AdamW
- Grad accumulation: 1
- Epochs: 1
- Global Batch size: 128
- Batch tokens: 128 * 2048 = 262,144 tokens
- Learning Rate: 1e-5
- Learning Rate Scheduler: Cosine Schedule with Warmup
- Warmup Steps: 0
- Weight decay: 0.1

**Instruction-tuned Training on Dolly 2.0 and Oasst1**

- Hardware: SambaNova Reconfigurable Dataflow Unit (RDU)
- Optimizer: AdamW
- Grad accumulation: 1
- Epochs: 3
- Global Batch size: 128
- Batch tokens: 128 * 2048 = 262,144 tokens
- Learning Rate: 1e-5
- Learning Rate Scheduler: Cosine Schedule with Warmup
- Warmup Steps: 0
- Weight decay: 0.1

## Evaluation

![HELM core scenarios (CNN + MS MARCO, WIP)](HELM_core-senarios_CNN+MS_Marco_WIP.png)

![Multilingual scores on WMT-14 French and Hindi](Multilinguality_WMT-14_on_French+Hindi.png)

![Multilingual scores on WMT-14 Simplified Chinese](Multilinguality_WMT-14_on_Simplified_Chinese.png)

![Mean win rate on HELM core scenarios](Open_source_model_Mean_Win_Rate_on_HELM_core_scenarios.png)

## Community

[Link to discord server]
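
## Inference Example

The snippet below is a minimal sketch of loading the model and generating with the suggested inference parameters and the `<human>:`/`<bot>:` prompting style described above. The Hub identifier `sambanovasystems/BloomChat-176B-v1` is an assumption and should be replaced with the actual repository name; the example also assumes `transformers` and `accelerate` are installed and that enough GPU memory (or CPU/disk offloading) is available for a 176B-parameter checkpoint.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub identifier; replace with the actual BloomChat repository name.
model_id = "sambanovasystems/BloomChat-176B-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package; a 176B model needs
# multiple GPUs or CPU/disk offloading.
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

# Prompt formatted in the <human>: / <bot>: style used for training.
prompt = "<human>: Write a script in which Bob accidentally breaks his dad's guitar\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Suggested inference parameters from this card.
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
    repetition_penalty=1.2,
    max_new_tokens=512,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that `do_sample=True` is required for `temperature` and `top_p` to take effect; omit those arguments for deterministic (greedy) decoding.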