Arcitec committed
Commit 1ec3b01 · 1 Parent(s): a026d24

fix(webui): Experimental checkbox bugfixes and add visual warning label


- We can't use the original "Show experimental features" checkbox implementation, because it *deeply* breaks Gradio.

- Gradio's `gr.Examples()` API binds itself to the original state of the user interface. Gradio crashes and causes various bugs if we try to change the available UI controls later.

- Instead, we must use `gr.Dataset()`, which acts like a custom input/output control and doesn't bind itself directly to the target controls. We must also provide a hidden "all mode choices" component so that the dataset knows the names of all "control modes" that can appear in the examples (see the sketch after this list).

- We now also show a highly visible warning label in the user interface to clearly mark the experimental features.
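
A minimal, self-contained sketch of the `gr.Dataset()` pattern described above. All labels, component names, and sample data here are placeholders for illustration; the real implementation is in webui.py below:

import gradio as gr

CHOICES_ALL = ["mode A", "mode B", "experimental mode C"]  # placeholder labels
CHOICES_OFFICIAL = CHOICES_ALL[:-1]  # hide the experimental mode by default
ALL_SAMPLES = [["mode A", "hello"], ["experimental mode C", "world"]]

with gr.Blocks() as demo:
    # Visible control: official modes only at startup.
    mode = gr.Radio(choices=CHOICES_OFFICIAL, type="index",
                    value=CHOICES_OFFICIAL[0], label="Mode")
    text = gr.Textbox(label="Text")
    # Hidden twin that knows ALL labels, so the Dataset can render
    # experimental samples once they are swapped in via gr.update(samples=...).
    mode_all = gr.Radio(choices=CHOICES_ALL, type="index",
                        value=CHOICES_ALL[0], visible=False)

    official_samples = [s for s in ALL_SAMPLES if s[0] != CHOICES_ALL[-1]]
    table = gr.Dataset(components=[mode_all, text],
                       samples=official_samples, type="values")

    # Unlike gr.Examples, the Dataset is not bound to the target controls;
    # an explicit click handler copies the sample values into them.
    def on_click(sample):
        return gr.update(value=sample[0]), gr.update(value=sample[1])

    table.click(on_click, inputs=[table], outputs=[mode, text])

demo.launch()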

- Bugs fixed:

* The code was unable to toggle the visibility of experimental demos in the Examples list. That isn't possible with `gr.Examples()` (it's a wrapper around `gr.Dataset()`, but it keeps its own internal state/copy of all example data). Instead, we use a `gr.Dataset()` and manipulate its sample list directly.

* Gradio crashes with a `gradio.exceptions.Error` exception when loading an example that uses an experimental feature whose UI element has been removed. This is because Examples binds to the original user interface and *remembers* the list of choices, and it *cannot* dynamically select something that did not exist when the `gr.Examples()` was initially created. Switching to `gr.Dataset()` fixes this problem.

* Furthermore, Gradio's `gr.Examples()` handler remembers and caches the list of UI options. Every time an example is loaded, it rewrites the "Emotion Control Mode" selection menu to show only the options that were available when the Examples table was created. So even if we kept the "Show experimental features" checkbox, Gradio itself would erase the experimental mode from the Control Mode menu every time the user loads an example. There are no callbacks or "update" functions that let us override this automatic behavior, but switching to `gr.Dataset()` avoids the deep binding entirely.

* The "Show experimental features" checkbox is no longer tied to a column in the examples-table, to avoid fighting between Gradio's example table trying to set the mode, and the experimental checkbox being toggled and also trying to set the mode.

* Lastly, the "Show experimental features" checkbox now remembers and restores the user's current mode selection when toggled, instead of always resetting to the default mode ("same as voice reference"), which makes the UI more convenient for users (as shown in the sketch below).
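
Building on the earlier sketch (same placeholder names, inside the same `gr.Blocks()` block), a condensed illustration of the toggle handler that restores the user's current selection:

    # Checkbox that reveals or hides the experimental mode and demos.
    show_experimental = gr.Checkbox(label="Show experimental features", value=False)

    def get_samples(include_experimental=False):
        # Experimental demos are simply filtered out of the Dataset's sample list.
        if include_experimental:
            return ALL_SAMPLES
        return [s for s in ALL_SAMPLES if s[0] != CHOICES_ALL[-1]]

    # Because the Radio uses type="index", the handler receives the current
    # selection as an integer and can carry it over into the new choice list
    # instead of resetting to the default on every toggle.
    def on_toggle(is_experimental, current_index):
        new_choices = CHOICES_ALL if is_experimental else CHOICES_OFFICIAL
        new_index = current_index if current_index < len(new_choices) else 0
        return (gr.update(choices=new_choices, value=new_choices[new_index]),
                gr.update(samples=get_samples(is_experimental)))

    show_experimental.change(on_toggle, inputs=[show_experimental, mode],
                             outputs=[mode, table])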

Former-commit-id: 227ad2c9010ad081094e02d736fbffeab1d92230 [formerly ec368de9329c7beffa3df369005c7d25e7684980]
Former-commit-id: ea98446eca2426d339f0445c4b8a768fe022e813
Former-commit-id: 7b54a4c77b8ddf722055890ccd7c2afb1a1a3862 [formerly 9851c6c1e8b53ebf244cdbcbf9ed47f32626ac92] [formerly a12c75cf11947e9f9c671cbe31681e91d51b032f [formerly 2721a36d0462c3ff211dff12faea0f94b6cc78f7]]
Former-commit-id: fa0f355b16b538d6af7977bf5a663e2a2ff06e7d [formerly aebe26997d3afc022f94a31ae6deffe22b5aeb64]
Former-commit-id: 67af375942db9a78ffe63cb2e9091036fc16908b

examples/cases.jsonl CHANGED
@@ -4,9 +4,9 @@
 {"prompt_audio":"voice_04.wav","text":"你就需要我这种专业人士的帮助,就像手无缚鸡之力的人进入雪山狩猎,一定需要最老练的猎人指导。","emo_mode":0}
 {"prompt_audio":"voice_05.wav","text":"在真正的日本剑道中,格斗过程极其短暂,常常短至半秒,最长也不超过两秒,利剑相击的转瞬间,已有一方倒在血泊中。但在这电光石火的对决之前,双方都要以一个石雕般凝固的姿势站定,长时间的逼视对方,这一过程可能长达十分钟!","emo_mode":0}
 {"prompt_audio":"voice_06.wav","text":"今天呢,咱们开一部新书,叫《赛博朋克二零七七》。这词儿我听着都新鲜。这赛博朋克啊,简单理解就是“高科技,低生活”。这一听,我就明白了,于老师就爱用那高科技的东西,手机都得拿脚纹开,大冬天为了解锁脱得一丝不挂,冻得跟王八蛋似的。","emo_mode":0}
-{"prompt_audio":"voice_07.wav","emo_audio":"emo_sad.wav","emo_weight": 0.65, "emo_mode":1,"text":"酒楼丧尽天良,开始借机竞拍房间,哎,一群蠢货。"}
-{"prompt_audio":"voice_08.wav","emo_audio":"emo_hate.wav","emo_weight": 0.65, "emo_mode":1,"text":"你看看你,对我还有没有一点父子之间的信任了。"}
-{"prompt_audio":"voice_09.wav","emo_vec_3":0.8,"emo_mode":2,"text":"对不起嘛!我的记性真的不太好,但是和你在一起的事情,我都会努力记住的~"}
-{"prompt_audio":"voice_10.wav","emo_vec_7":1.0,"emo_mode":2,"text":"哇塞!这个爆率也太高了!欧皇附体了!"}
+{"prompt_audio":"voice_07.wav","emo_audio":"emo_sad.wav","emo_weight":0.65,"emo_mode":1,"text":"酒楼丧尽天良,开始借机竞拍房间,哎,一群蠢货。"}
+{"prompt_audio":"voice_08.wav","emo_audio":"emo_hate.wav","emo_weight":0.65,"emo_mode":1,"text":"你看看你,对我还有没有一点父子之间的信任了。"}
+{"prompt_audio":"voice_09.wav","emo_weight": 0.8,"emo_mode":2,"emo_vec_3":0.8,"text":"对不起嘛!我的记性真的不太好,但是和你在一起的事情,我都会努力记住的~"}
+{"prompt_audio":"voice_10.wav","emo_weight": 0.8,"emo_mode":2,"emo_vec_7":1.0,"text":"哇塞!这个爆率也太高了!欧皇附体了!"}
 {"prompt_audio":"voice_11.wav","emo_mode":3,"emo_text":"极度悲伤","text":"这些年的时光终究是错付了... "}
 {"prompt_audio":"voice_12.wav","emo_mode":3,"emo_text":"You scared me to death! What are you, a ghost?","text":"快躲起来!是他要来了!他要来抓我们了!"}
tools/i18n/locale/en_US.json CHANGED
@@ -42,8 +42,9 @@
     "请上传情感参考音频": "Please upload the emotion reference audio",
     "当前模型版本": "Current model version: ",
     "请输入目标文本": "Please input the text to synthesize",
-    "例如:委屈巴巴、危险在悄悄逼近": "e.g. deeply sad, danger is creeping closer",
+    "例如:委屈巴巴、危险在悄悄逼近": "e.g. \"deeply sad\", \"danger is creeping closer\"",
     "与音色参考音频相同": "Same as the voice reference",
     "情感随机采样": "Randomize emotion sampling",
-    "显示实验功能": "Show experimental features"
+    "显示实验功能": "Show experimental features",
+    "提示:此功能为实验版,结果尚不稳定,我们正在持续优化中。": "Note: This feature is currently experimental and may not produce satisfactory results. We're dedicated to improving its performance in a future release."
 }
tools/i18n/locale/zh_CN.json CHANGED
@@ -39,6 +39,9 @@
     "参数会影响音频多样性和生成速度详见": "参数会影响音频多样性和生成速度详见",
     "是否进行采样": "是否进行采样",
     "生成Token最大数量,过小导致音频被截断": "生成Token最大数量,过小导致音频被截断",
+    "例如:委屈巴巴、危险在悄悄逼近": "例如:委屈巴巴、危险在悄悄逼近",
+    "与音色参考音频相同": "与音色参考音频相同",
+    "情感随机采样": "情感随机采样",
     "显示实验功能": "显示实验功能",
-    "例如:委屈巴巴、危险在悄悄逼近": "例如:委屈巴巴、危险在悄悄逼近"
+    "提示:此功能为实验版,结果尚不稳定,我们正在持续优化中。": "提示:此功能为实验版,结果尚不稳定,我们正在持续优化中。"
 }
webui.py CHANGED
@@ -1,3 +1,4 @@
+import html
 import json
 import os
 import sys
@@ -63,19 +64,18 @@ LANGUAGES = {
     "中文": "zh_CN",
     "English": "en_US"
 }
-EMO_CHOICES = [i18n("与音色参考音频相同"),
+EMO_CHOICES_ALL = [i18n("与音色参考音频相同"),
                i18n("使用情感参考音频"),
                i18n("使用情感向量控制"),
                i18n("使用情感描述文本控制")]
-EMO_CHOICES_BASE = EMO_CHOICES[:3] # 基础选项
-EMO_CHOICES_EXPERIMENTAL = EMO_CHOICES # 全部选项(包括文本描述)
+EMO_CHOICES_OFFICIAL = EMO_CHOICES_ALL[:-1] # skip experimental features

 os.makedirs("outputs/tasks",exist_ok=True)
 os.makedirs("prompts",exist_ok=True)

 MAX_LENGTH_TO_USE_SPEED = 70
+example_cases = []
 with open("examples/cases.jsonl", "r", encoding="utf-8") as f:
-    example_cases = []
     for line in f:
         line = line.strip()
         if not line:
@@ -85,8 +85,9 @@ with open("examples/cases.jsonl", "r", encoding="utf-8") as f:
             emo_audio_path = os.path.join("examples",example["emo_audio"])
         else:
             emo_audio_path = None
+
         example_cases.append([os.path.join("examples", example.get("prompt_audio", "sample_prompt.wav")),
-                              EMO_CHOICES[example.get("emo_mode",0)],
+                              EMO_CHOICES_ALL[example.get("emo_mode",0)],
                               example.get("text"),
                               emo_audio_path,
                               example.get("emo_weight",1.0),
@@ -99,8 +100,14 @@ with open("examples/cases.jsonl", "r", encoding="utf-8") as f:
                              example.get("emo_vec_6",0),
                              example.get("emo_vec_7",0),
                              example.get("emo_vec_8",0),
-                             example.get("emo_text") is not None]
-                             )
+                             ])
+
+def get_example_cases(include_experimental = False):
+    if include_experimental:
+        return example_cases # show every example
+
+    # exclude emotion control mode 3 (emotion from text description)
+    return [x for x in example_cases if x[1] != EMO_CHOICES_ALL[3]]

 def gen_single(emo_control_method,prompt, text,
                emo_ref_path, emo_weight,
@@ -159,6 +166,12 @@ def update_prompt_audio():
     update_button = gr.update(interactive=True)
     return update_button

+def create_warning_message(warning_text):
+    return gr.HTML(f"<div style=\"padding: 0.5em 0.8em; border-radius: 0.5em; background: #ffa87d; color: #000; font-weight: bold\">{html.escape(warning_text)}</div>")
+
+def create_experimental_warning_message():
+    return create_warning_message(i18n('提示:此功能为实验版,结果尚不稳定,我们正在持续优化中。'))
+
 with gr.Blocks(title="IndexTTS Demo") as demo:
     mutex = threading.Lock()
     gr.HTML('''
@@ -181,14 +194,24 @@ with gr.Blocks(title="IndexTTS Demo") as demo:
             input_text_single = gr.TextArea(label=i18n("文本"),key="input_text_single", placeholder=i18n("请输入目标文本"), info=f"{i18n('当前模型版本')}{tts.model_version or '1.0'}")
             gen_button = gr.Button(i18n("生成语音"), key="gen_button",interactive=True)
             output_audio = gr.Audio(label=i18n("生成结果"), visible=True,key="output_audio")
-            experimental_checkbox = gr.Checkbox(label=i18n("显示实验功能"),value=False)
+
+            experimental_checkbox = gr.Checkbox(label=i18n("显示实验功能"), value=False)
+
         with gr.Accordion(i18n("功能设置")):
             # 情感控制选项部分
             with gr.Row():
                 emo_control_method = gr.Radio(
-                    choices=EMO_CHOICES_BASE,
+                    choices=EMO_CHOICES_OFFICIAL,
+                    type="index",
+                    value=EMO_CHOICES_OFFICIAL[0],label=i18n("情感控制方式"))
+                # we MUST have an extra, INVISIBLE list of *all* emotion control
+                # methods so that gr.Dataset() can fetch ALL control mode labels!
+                # otherwise, the gr.Dataset()'s experimental labels would be empty!
+                emo_control_method_all = gr.Radio(
+                    choices=EMO_CHOICES_ALL,
                     type="index",
-                    value=EMO_CHOICES_BASE[0],label=i18n("情感控制方式"))
+                    value=EMO_CHOICES_ALL[0], label=i18n("情感控制方式"),
+                    visible=False) # do not render
             # 情感参考音频部分
             with gr.Group(visible=False) as emotion_reference_group:
                 with gr.Row():
@@ -213,13 +236,13 @@ with gr.Blocks(title="IndexTTS Demo") as demo:
                     vec8 = gr.Slider(label=i18n("平静"), minimum=0.0, maximum=1.0, value=0.0, step=0.05)

             with gr.Group(visible=False) as emo_text_group:
+                create_experimental_warning_message()
                 with gr.Row():
                     emo_text = gr.Textbox(label=i18n("情感描述文本"),
                                           placeholder=i18n("请输入情绪描述(或留空以自动使用目标文本作为情绪描述)"),
                                           value="",
                                           info=i18n("例如:委屈巴巴、危险在悄悄逼近"))

-
             with gr.Row(visible=False) as emo_weight_group:
                 emo_weight = gr.Slider(label=i18n("情感权重"), minimum=0.0, maximum=1.0, value=0.65, step=0.01)

@@ -261,23 +284,55 @@ with gr.Blocks(title="IndexTTS Demo") as demo:
                 # typical_sampling, typical_mass,
             ]

-        if len(example_cases) > 0:
-            example_table = gr.Examples(
-                examples=(
-                    example_cases[:-2]
-                    if len(example_cases) > 2
-                    else example_cases
-                ),
-                examples_per_page=20,
-                inputs=[prompt_audio,
-                        emo_control_method,
+        # we must use `gr.Dataset` to support dynamic UI rewrites, since `gr.Examples`
+        # binds tightly to UI and always restores the initial state of all components,
+        # such as the list of available choices in emo_control_method.
+        example_table = gr.Dataset(label="Examples",
+                samples_per_page=20,
+                samples=get_example_cases(include_experimental=False),
+                type="values",
+                # these components are NOT "connected". it just reads the column labels/available
+                # states from them, so we MUST link to the "all options" versions of all components,
+                # such as `emo_control_method_all` (to be able to see EXPERIMENTAL text labels)!
+                components=[prompt_audio,
+                        emo_control_method_all, # important: support all mode labels!
                         input_text_single,
                         emo_upload,
                         emo_weight,
                         emo_text,
-                        vec1, vec2, vec3, vec4, vec5, vec6, vec7, vec8,
-                        experimental_checkbox]
-            )
+                        vec1, vec2, vec3, vec4, vec5, vec6, vec7, vec8]
+        )
+
+        def on_example_click(example):
+            print(f"Example clicked: ({len(example)} values) = {example!r}")
+            return (
+                gr.update(value=example[0]),
+                gr.update(value=example[1]),
+                gr.update(value=example[2]),
+                gr.update(value=example[3]),
+                gr.update(value=example[4]),
+                gr.update(value=example[5]),
+                gr.update(value=example[6]),
+                gr.update(value=example[7]),
+                gr.update(value=example[8]),
+                gr.update(value=example[9]),
+                gr.update(value=example[10]),
+                gr.update(value=example[11]),
+                gr.update(value=example[12]),
+                gr.update(value=example[13]),
+            )
+
+        # click() event works on both desktop and mobile UI
+        example_table.click(on_example_click,
+                inputs=[example_table],
+                outputs=[prompt_audio,
+                        emo_control_method,
+                        input_text_single,
+                        emo_upload,
+                        emo_weight,
+                        emo_text,
+                        vec1, vec2, vec3, vec4, vec5, vec6, vec7, vec8]
+        )

     def on_input_text_change(text, max_text_tokens_per_segment):
         if text and len(text) > 0:
@@ -328,14 +383,6 @@ with gr.Blocks(title="IndexTTS Demo") as demo:
             gr.update(visible=False)
         )

-    def on_experimental_change(is_exp):
-        # 切换情感控制选项
-        # 第三个返回值实际没有起作用
-        if is_exp:
-            return gr.update(choices=EMO_CHOICES_EXPERIMENTAL, value=EMO_CHOICES_EXPERIMENTAL[0]), gr.update(value=example_cases)
-        else:
-            return gr.update(choices=EMO_CHOICES_BASE, value=EMO_CHOICES_BASE[0]), gr.update(value=example_cases[:-2])
-
     emo_control_method.change(on_method_change,
                               inputs=[emo_control_method],
                               outputs=[emotion_reference_group,
@@ -345,18 +392,30 @@ with gr.Blocks(title="IndexTTS Demo") as demo:
                                        emo_weight_group]
                              )

+    def on_experimental_change(is_experimental, current_mode_index):
+        # 切换情感控制选项
+        new_choices = EMO_CHOICES_ALL if is_experimental else EMO_CHOICES_OFFICIAL
+        # if their current mode selection doesn't exist in new choices, reset to 0.
+        # we don't verify that OLD index means the same in NEW list, since we KNOW it does.
+        new_index = current_mode_index if current_mode_index < len(new_choices) else 0
+
+        return (
+            gr.update(choices=new_choices, value=new_choices[new_index]),
+            gr.update(samples=get_example_cases(include_experimental=is_experimental)),
+        )
+
+    experimental_checkbox.change(
+        on_experimental_change,
+        inputs=[experimental_checkbox, emo_control_method],
+        outputs=[emo_control_method, example_table]
+    )
+
     input_text_single.change(
         on_input_text_change,
         inputs=[input_text_single, max_text_tokens_per_segment],
         outputs=[segments_preview]
     )

     max_text_tokens_per_segment.change(
         on_input_text_change,
         inputs=[input_text_single, max_text_tokens_per_segment],