License: Apache 2.0

BloomChat V1.0

BloomChat-v1.0 is based on the BigScience Group's BLOOM-176B model. It was instruction-tuned on a subset of 100k datapoints per data source from the OIG dataset provided by LAION, then further aligned using Dolly 2.0 and Oasst1.

Model Details

Model Description

Additional Information

  • Blogpost: [More Information Needed]

Uses

Direct Use

[More Information Needed]

Downstream Use

[More Information Needed]

Out-of-Scope Use

[More Information Needed]

Bias, Risks, and Limitations

Like all LLMs, BloomChat has certain limitations:

  • Hallucination: BloomChat may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
  • Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
  • Repetition: BloomChat may produce repetitive phrases or sentences, leading to less engaging and informative responses.
  • Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
  • Toxicity: BloomChat may inadvertently generate responses containing inappropriate or harmful content.

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
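Pending the official snippet, the following is a minimal sketch using the standard Hugging Face transformers API. The repo id sambanovasystems/BLOOMChat-176B-v1 and the device placement are assumptions, and a 176B-parameter model requires multiple GPUs or offloading in practice. Decoding follows the suggested inference parameters listed in the next subsection.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "sambanovasystems/BLOOMChat-176B-v1"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory pressure
    device_map="auto",           # shard across available GPUs (needs `accelerate`)
)

# Wrap the request in the <human>/<bot> format used during training.
prompt = "<human>: Write a haiku about data centers\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Decoding settings follow the suggested inference parameters below.
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
    repetition_penalty=1.2,
    max_new_tokens=512,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```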

Suggested inference parameters

  • Temperature: 0.8
  • Repetition penalty: 1.2
  • Top-p: 0.9
  • Max generated tokens: 512

Suggested System Prompts

<human>: Write a script in which Bob accidentally breaks his dad's guitar
<bot>:

<human>: Classify the sentiment of the following sentence into Positive, Neutral, or Negative. Do it on a scale of 1/10: How about the following sentence: It is raining outside and I feel so blue
<bot>:

<human>: give a python code to open a http server in 8080 port using python 3.7
<bot>:

<human>: Answer the following question using the context below:
Q: Which regulatory body is involved?
Context: U.S. authorities launched emergency measures on Sunday to shore up confidence in the banking system after the failure of Silicon Valley Bank (SIVB.O) threatened to trigger a broader financial crisis. After a dramatic weekend, regulators said the failed bank’s customers will have access to all their deposits starting Monday and set up a new facility to give banks access to emergency funds. The Federal Reserve also made it easier for banks to borrow from it in emergencies. While the measures provided some relief for Silicon Valley firms and global markets on Monday, worries about broader banking risks remain and have cast doubts over whether the Fed will stick with its plan for aggressive interest rate hikes.
<bot>:

Training Details

Training Data

Training Procedure

We trained BloomChat with SambaStudio, a platform built on SambaNova's in-house Reconfigurable Dataflow Unit (RDU). We started from BLOOM-176B, an open-source multilingual 176B-parameter GPT-style model pretrained by the BigScience group.

Prompting Style Used For Training

Zero-shot:

<human>: {input that the user wants from the bot}
<bot>:

Few-shot:

<human>: {fewshot1 input}
<bot>: {fewshot1 response}
<human>: {fewshot2 input}
<bot>: {fewshot2 response}
<human>: {input that the user wants from the bot}
<bot>:
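As a small illustration, the transcript format above can be assembled programmatically. The helper below (build_prompt is a hypothetical name, not part of the released code) builds either the zero-shot or the few-shot variant:

```python
def build_prompt(user_input, fewshots=()):
    """Assemble a <human>/<bot> prompt; `fewshots` is a sequence of
    (input, response) pairs prepended before the final request."""
    parts = []
    for shot_input, shot_response in fewshots:
        parts.append(f"<human>: {shot_input}")
        parts.append(f"<bot>: {shot_response}")
    parts.append(f"<human>: {user_input}")
    parts.append("<bot>:")  # the model generates its answer from here
    return "\n".join(parts)

print(build_prompt(
    "Translate 'good morning' into French",
    fewshots=[("Translate 'hello' into French", "Bonjour")],
))
```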

Hyperparameters

Instruction-tuned Training on OIG

  • Hardware: SambaNova Reconfigurable Dataflow Unit (RDU)
  • Optimizer: AdamW
  • Grad accumulation: 1
  • Epochs: 1
  • Global Batch size: 128
  • Batch tokens: 128 * 2048 = 262,144 tokens
  • Learning Rate: 1e-5
  • Learning Rate Scheduler: Cosine Schedule with Warmup
  • Warmup Steps: 0
  • Weight decay: 0.1

Instruction-tuned Training on Dolly 2.0 and Oasst1

  • Hardware: SambaNova Reconfigurable Dataflow Unit (RDU)
  • Optimizer: AdamW
  • Grad accumulation: 1
  • Epochs: 3
  • Global Batch size: 128
  • Batch tokens: 128 * 2048 = 262,144 tokens
  • Learning Rate: 1e-5
  • Learning Rate Scheduler: Cosine Schedule with Warmup
  • Warmup Steps: 0
  • Weight decay: 0.1
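For reference, a hypothetical PyTorch/transformers rendering of these settings is sketched below. The actual runs used SambaStudio on RDUs rather than PyTorch, so this is only an approximation; `model` and `total_steps` are stand-ins.

```python
import torch
from torch import nn
from transformers import get_cosine_schedule_with_warmup

model = nn.Linear(8, 8)  # stand-in for the 176B model
total_steps = 1000       # stand-in for steps_per_epoch * epochs

# AdamW with learning rate 1e-5 and weight decay 0.1, as listed above.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=0.1)

# Cosine schedule with warmup; both training phases used 0 warmup steps.
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=total_steps
)
```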

Evaluation

Evaluation figures (charts omitted from this copy):

  • HELM core scenarios
  • Multilingual scores (French and Hindi)
  • Multilingual scores (Chinese)
  • Mean win rate on HELM

Community

[Link to Discord server]