WEBing committed
Commit 0a3336d
1 Parent(s): 1f1db3a

update example

Files changed (1)
  1. README.md  +42 -2
README.md CHANGED
@@ -7,11 +7,50 @@ pipeline_tag: visual-question-answering
  ---
  # Kangaroo: A Powerful Video-Language Model Supporting Long-context Video Input

- ## Release
+ # Release
  - [2024/07/17] 🔥 **Kangaroo** has been released. We release the [blog](https://kangaroogroup.github.io/Kangaroo.github.io/) and [model](https://huggingface.co/KangarooGroup/kangaroo). Please check out the blog for details.


- ## Citation
+ # Get Started with the Model
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("/path/to/kangaroo")
+ model = AutoModelForCausalLM.from_pretrained(
+     "/path/to/kangaroo",
+     torch_dtype=torch.bfloat16,
+     trust_remote_code=True,
+ )
+ model = model.to("cuda")
+ terminators = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")]
+
+ video_path = "path/to/video"
+ query = "Please describe this video"
+ out, history = model.chat(video_path=video_path,
+                           query=query,
+                           tokenizer=tokenizer,
+                           max_new_tokens=512,
+                           eos_token_id=terminators,
+                           do_sample=True,
+                           temperature=0.6,
+                           top_p=0.9)
+ print(out)
+
+ query = "What happened at the end of the video?"
+ out, history = model.chat(video_path=video_path,
+                           query=query,
+                           history=history,
+                           tokenizer=tokenizer,
+                           max_new_tokens=512,
+                           eos_token_id=terminators,
+                           do_sample=True,
+                           temperature=0.6,
+                           top_p=0.9)
+ print(out)
+ ```
+
+ # Citation

  If you find it useful for your research, please cite related papers/blogs using this BibTeX:
  ```bibtex
@@ -23,3 +62,4 @@ If you find it useful for your research, please cite related papers/blogs using
  month={July},
  year={2024}
  }
+ ```
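
Below is a minimal sketch of the same usage flow, but loading the checkpoint by the Hub repo id linked in the release note instead of a local path. It is an assumption, not part of the original example, that the KangarooGroup/kangaroo repository ships remote code exposing the same `model.chat` helper and that the tokenizer also requires `trust_remote_code=True`.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch (assumption): load Kangaroo by Hub repo id rather than a local path.
# The repo's remote code is assumed to provide the model.chat(...) helper
# shown in the README snippet above.
repo_id = "KangarooGroup/kangaroo"

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)  # trust_remote_code here is an assumption
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).to("cuda")

terminators = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")]

out, history = model.chat(video_path="path/to/video",  # replace with a real video file
                          query="Please describe this video",
                          tokenizer=tokenizer,
                          max_new_tokens=512,
                          eos_token_id=terminators,
                          do_sample=True,
                          temperature=0.6,
                          top_p=0.9)
print(out)
```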