Commit cf9f920 (parent: 4513a9a), committed by ybelkada

Update utils_display.py

Files changed (1): utils_display.py (+1 -2)
utils_display.py CHANGED
@@ -6,7 +6,6 @@ import streamlit as st
 from example_prompts import EXAMPLE_PROMPTS
 
 HEADER = """
-# <span style="color:red;font-size:20px"><b>WARNING:</b> This app uses BLOOM-6b3 (non-distributed) as a backend generation. We are currently working on making it work with BLOOM-176-distributed </span>
 """
 
 SIDE_BAR_TEXT = """
@@ -26,7 +25,7 @@ This Space is an interactive Space of *PETALS* paper (Submitted in EMNLP 2022) t
 ## What is *PETALS* ?
 
 With the release of BLOOM-176B and OPT-175B, everyone can download pretrained models of this scale. Still, using these models requires supercomputer-grade hardware, which is unavailable to many researchers.
-PETALS proposes to run BLOOM-176 in a distributed manner. The model is run on multiple computers from different users. Each user can benefit from the large model's inference by running a script similar to the one on this Space or from this Google Colab link: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1FEu0Dt_MjiwvdIz1SmIr9QfDDvNAJdZ-#scrollTo=O0WwC_IqofNH)
+PETALS proposes to run BLOOM-176 in a distributed manner. The model is run on multiple computers from different users. Each user can benefit from the large model's inference by checking the official links: ![petals](https://petals.ml/) | [chat-petals](http://chat.petals.ml/)
 
 ## Generation parameters
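The sidebar text edited above describes querying BLOOM-176B through a distributed PETALS swarm rather than through locally loaded weights. As a rough illustration only, a client-side call could look like the sketch below; the `DistributedBloomForCausalLM` class, the `bigscience/bloom-petals` checkpoint name, and the import path are assumptions drawn from the public PETALS examples, not from this commit or this Space's code.

```python
# Hypothetical sketch of a PETALS client call, NOT the code in this Space.
# Assumes the public `petals` package and the `bigscience/bloom-petals`
# swarm checkpoint shown in the PETALS examples are reachable.
from transformers import BloomTokenizerFast
from petals import DistributedBloomForCausalLM

MODEL_NAME = "bigscience/bloom-petals"  # assumed swarm model name

tokenizer = BloomTokenizerFast.from_pretrained(MODEL_NAME)
# The transformer blocks stay on remote servers run by other users;
# only the embeddings and prompts are handled locally.
model = DistributedBloomForCausalLM.from_pretrained(MODEL_NAME)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0]))
```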