rasyosef committed
Commit
b8340d0
1 Parent(s): cc27751

Update app.py

Files changed (1)
  1. app.py +1 -1
app.py CHANGED
@@ -64,7 +64,7 @@ with gr.Blocks() as demo:
   In order to reduce the response time on this hardware, `max_new_tokens` has been set to `21` in the text generation pipeline. With this default configuration, it takes approximately `60 seconds` for the response to start being generated, and streamed one word at a time. Use the slider below to increase or decrease the length of the generated text.
   """)
 
-  tokens_slider = gr.Slider(8, 128, value=21, render=False, label="Maximum new tokens", info="A larger `max_new_tokens` parameter value gives you longer text responses but at the cost of a slower response time.")
+  tokens_slider = gr.Slider(8, 128, value=21, render=True, label="Maximum new tokens", info="A larger `max_new_tokens` parameter value gives you longer text responses but at the cost of a slower response time.")
 
   chatbot = gr.ChatInterface(
     fn=generate,
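
For context, a minimal sketch of how a slider like this is typically wired into a Gradio chat app (assumed layout; the actual app.py may differ): with render=True the slider is drawn where it is defined inside the Blocks layout, whereas render=False defers rendering to the ChatInterface's additional-inputs accordion. The generate() stub below is hypothetical and stands in for the app's streaming text-generation function.

# Minimal sketch (assumed context; not the actual app.py).
import gradio as gr

def generate(message, history, max_new_tokens):
    # Hypothetical stand-in for the app's streaming text-generation function;
    # ChatInterface passes additional inputs after (message, history).
    return f"(demo) would generate up to {max_new_tokens} new tokens for: {message}"

with gr.Blocks() as demo:
    # render=True: the slider appears here, at its definition point in the layout.
    tokens_slider = gr.Slider(
        8, 128, value=21, render=True,
        label="Maximum new tokens",
        info="A larger `max_new_tokens` parameter value gives you longer text responses but at the cost of a slower response time.",
    )
    chatbot = gr.ChatInterface(
        fn=generate,
        additional_inputs=[tokens_slider],  # the slider's value is forwarded to generate()
    )

demo.launch()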