czczup committed
Commit f13ae35
1 Parent(s): e9dbb17

Upload folder using huggingface_hub

Files changed (1):
  README.md +29 -0
README.md CHANGED
@@ -299,6 +299,35 @@ print(f'User: {question}')
 
  print(f'Assistant: {response}')
  ```
 
+ ### Streaming output
+
+ Besides this method, you can also use the following code to get streamed output.
+
+ ```python
+ from transformers import TextIteratorStreamer
+ from threading import Thread
+
+ # Initialize the streamer
+ streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True, timeout=10)
+ # Define the generation configuration
+ generation_config = dict(num_beams=1, max_new_tokens=1024, do_sample=False, streamer=streamer)
+ # Start the model chat in a separate thread
+ thread = Thread(target=model.chat, kwargs=dict(
+     tokenizer=tokenizer, pixel_values=pixel_values, question=question,
+     history=None, return_history=False, generation_config=generation_config,
+ ))
+ thread.start()
+
+ # Initialize an empty string to store the generated text
+ generated_text = ''
+ # Loop through the streamer to get the new text as it is generated
+ for new_text in streamer:
+     if new_text == model.conv_template.sep:
+         break
+     generated_text += new_text
+     print(new_text, end='', flush=True)  # Print each new chunk of generated text on the same line
+ ```
+
  ## Finetune
 
  SWIFT from the ModelScope community supports fine-tuning (image/video) of InternVL; please check [this link](https://github.com/modelscope/swift/blob/main/docs/source_en/Multi-Modal/internvl-best-practice.md) for more details.
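The streaming pattern in the diff above (a generation thread producing text while the main thread consumes it) can be sketched with only the standard library, no model required. The `TextStreamer` class and `fake_generate` function below are hypothetical stand-ins for `TextIteratorStreamer` and `model.chat`, used purely to illustrate the producer/consumer hand-off:

```python
from queue import Queue
from threading import Thread


class TextStreamer:
    """Minimal analogue of a thread-safe text streamer: a producer
    thread puts chunks, iteration blocks until a chunk arrives, and
    a sentinel object ends the stream."""
    _SENTINEL = object()

    def __init__(self):
        self._queue = Queue()

    def put(self, text):
        self._queue.put(text)

    def end(self):
        self._queue.put(self._SENTINEL)

    def __iter__(self):
        return self

    def __next__(self):
        item = self._queue.get()  # blocks until the producer sends a chunk
        if item is self._SENTINEL:
            raise StopIteration
        return item


def fake_generate(streamer):
    # Stands in for the model's generate/chat call running in a worker thread
    for chunk in ['Hello', ', ', 'world']:
        streamer.put(chunk)
    streamer.end()


streamer = TextStreamer()
thread = Thread(target=fake_generate, args=(streamer,))
thread.start()

# The main thread consumes chunks as they arrive, exactly like the
# `for new_text in streamer` loop in the README snippet above
generated_text = ''.join(streamer)
thread.join()
print(generated_text)  # Hello, world
```

The key design point is that iteration and generation overlap: the consumer starts printing as soon as the first chunk is queued, instead of waiting for the full response.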