Claude committed on
Commit e02e2ce · 1 parent: 3b71155

fix: truncate prompt to 500 chars to prevent Gemma token overflow (1087>1024)


The Gemma prompt enhancer was failing on long prompts, emitting
garbage like '/imagine synchronized lipsync.', which caused the model
to generate nonsensical text imagery instead of video content.

Files changed (1)
  1. app.py +4 -0
app.py CHANGED
@@ -999,6 +999,10 @@ def generate_video(
     tiling_config = TilingConfig.default()
     video_chunks_number = get_video_chunks_number(num_frames, tiling_config)
 
+    # Truncate prompt to prevent Gemma token overflow (max 1024 tokens ≈ 500 chars)
+    if len(prompt) > 500:
+        prompt = prompt[:500]
+
     video, audio = pipeline(
         prompt=prompt,
         seed=current_seed,
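The guard added above can be sketched as a standalone helper. Note the 500-character cap is the commit's own heuristic for staying under Gemma's 1024-token limit; the function name `truncate_prompt` and the word-boundary trim are assumptions for illustration, not part of the actual app.py change.

```python
def truncate_prompt(prompt: str, max_chars: int = 500) -> str:
    """Clamp a prompt so the Gemma enhancer stays within its token budget.

    The commit assumes ~500 characters keeps the prompt under 1024 Gemma
    tokens (the overflow observed was 1087 > 1024).
    """
    if len(prompt) <= max_chars:
        return prompt
    truncated = prompt[:max_chars]
    # Optional refinement: avoid cutting mid-word by dropping the trailing
    # fragment after the last space, when one exists.
    if " " in truncated:
        truncated = truncated.rsplit(" ", 1)[0]
    return truncated
```

A character cap is a blunt proxy for a token count; a more precise variant would run the actual Gemma tokenizer and trim by token IDs, at the cost of a tokenizer dependency in the generation path.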