Sengxian committed on
Commit 6f78015
1 Parent(s): fef2a16

Update app.py

Files changed (1)
  1. app.py +3 -2
app.py CHANGED

@@ -71,10 +71,11 @@ if __name__ == "__main__":
     with gr.Blocks() as demo:
         gr.Markdown(
             """
-            An Open Bilingual Pre-Trained Model. [Visit our github repo](https://github.com/THUDM/GLM-130B)
+            GLM-130B: An Open Bilingual Pre-Trained Model.
+
             GLM-130B uses two different mask tokens: `[MASK]` for short blank filling and `[gMASK]` for left-to-right long text generation. When the input does not contain any MASK token, `[gMASK]` will be automatically appended to the end of the text. We recommend that you use `[MASK]` to try text fill-in-the-blank to reduce wait time (ideally within seconds without queuing).

-            This demo is a raw language model without instruction fine-tuning (which is applied to Flan-* series) and RLHF (which is applied to ChatGPT). It's ability is roughly between OpenAI `davinci` and `text-davinci-001`.
+            This demo is a raw language model without instruction fine-tuning (which is applied to FLAN-* series) and RLHF (which is applied to ChatGPT). Its ability is roughly between OpenAI `davinci` and `text-davinci-001`. If you find our open-source effort useful, please star our [GitHub repo](https://github.com/THUDM/GLM-130B) to encourage our following development!
             """)

     with gr.Row():
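
For context, a minimal sketch of how the updated block sits inside app.py after this commit. Only the `gr.Markdown` text reflects the diff above; the surrounding pieces (the components inside `gr.Row()`, the example prompt, and the `demo.launch()` call) are assumptions for illustration, not part of this commit.

```python
# Minimal sketch of the updated section of app.py (assumed surroundings;
# only the gr.Markdown text comes from this commit's diff).
import gradio as gr

if __name__ == "__main__":
    with gr.Blocks() as demo:
        gr.Markdown(
            """
            GLM-130B: An Open Bilingual Pre-Trained Model.

            (mask-token usage notes and demo caveats, as in the diff above)
            """)

        with gr.Row():
            # Hypothetical input/output widgets; the real app defines its own components here.
            prompt = gr.Textbox(
                label="Input",
                placeholder="Tsinghua University is located in [MASK].",
            )
            output = gr.Textbox(label="Output")

    demo.launch()
```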