Spaces: Runtime error
Update app.py
app.py CHANGED
@@ -71,11 +71,13 @@ if __name__ == "__main__":
     with gr.Blocks() as demo:
         gr.Markdown(
             """
-
+            Hi,
 
-            GLM-130B uses two different mask tokens: `[MASK]` for short blank filling and `[gMASK]` for left-to-right long text generation. When the input does not contain any MASK token, `[gMASK]` will be automatically appended to the end of the text. We recommend that you use `[MASK]` to try text fill-in-the-blank to reduce wait time (ideally within seconds without queuing).
+            Nice to meet you here! This is a toy demo of GLM-130B, an open bilingual pre-trained model from Tsinghua University. GLM-130B uses two different mask tokens: `[MASK]` for short blank filling and `[gMASK]` for left-to-right long text generation. When the input does not contain any MASK token, `[gMASK]` will be automatically appended to the end of the text. We recommend that you use `[MASK]` to try text fill-in-the-blank to reduce wait time (ideally within seconds without queuing).
 
-            This demo is a raw language model without instruction fine-tuning (which is applied to FLAN-* series) and RLHF (which is applied to ChatGPT)
+            This demo is a raw language model **without** instruction fine-tuning (which is applied to FLAN-* series) and RLHF (which is applied to ChatGPT); its ability is roughly between OpenAI `davinci` and `text-davinci-001`. Thus, it is currently worse than ChatGPT and other instruction fine-tuned models :(
+
+            However, we are sparing no effort to improve it, and its updated versions will meet you soon. If you find the open-source effort useful, please star our [GitHub repo](https://github.com/THUDM/GLM-130B) to encourage our following development!
             """)
 
         with gr.Row():
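The updated description states that `[gMASK]` is appended automatically when the input contains no mask token; that logic lives elsewhere in app.py and is not part of this hunk. Below is a minimal sketch of what such handling could look like, with `prepare_prompt` as a hypothetical helper name rather than code from this commit.

```python
# Minimal sketch (not from this diff) of the behaviour the new Markdown text describes:
# if the prompt contains neither [MASK] nor [gMASK], append [gMASK] so the model
# performs left-to-right long text generation instead of short blank filling.
def prepare_prompt(text: str) -> str:  # hypothetical helper, not in app.py
    if "[MASK]" not in text and "[gMASK]" not in text:
        text = text + " [gMASK]"
    return text

# [MASK] present: short blank filling, usually fast (nothing appended).
print(prepare_prompt("Tsinghua University is located in [MASK]."))
# No mask token: [gMASK] is appended for left-to-right generation.
print(prepare_prompt("Explain what GLM-130B is."))
```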