dumperize committed
Commit 370c372
1 Parent(s): 3c76762

Update README.md

Files changed (1)
  1. README.md +23 -8
README.md CHANGED
@@ -26,21 +26,36 @@ We refined the model on the dataset with descriptions and movie posters by russi
  - **Repository:** [github.com/slivka83](https://github.com/slivka83/)
  - **Demo [optional]:** [@MPC_project_bot](https://t.me/MPC_project_bot)

- # Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

- ## Direct Use

- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

- [More Information Needed]

- ## Out-of-Scope Use

- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

- [More Information Needed]

+ # How to use
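+
+ The snippet below loads the model from the Hugging Face Hub and generates a caption for a single poster image: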

+ ```python
+ import torch
+ from PIL import Image
+ from transformers import AutoTokenizer, ViTFeatureExtractor, VisionEncoderDecoderModel
+
+ # Load the tokenizer, feature extractor and model from the Hub
+ tokenizer = AutoTokenizer.from_pretrained("dumperize/movie-picture-captioning")
+ feature_extractor = ViTFeatureExtractor.from_pretrained("dumperize/movie-picture-captioning")
+ model = VisionEncoderDecoderModel.from_pretrained("dumperize/movie-picture-captioning")
+
+ # Beam-search generation settings
+ max_length = 128
+ num_beams = 4
+ gen_kwargs = {"max_length": max_length, "num_beams": num_beams}
+
+ # Run on GPU if one is available
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ model.to(device)
+
+ # Load the poster and make sure it is a 224x224 RGB image
+ image_path = 'path/to/image.jpg'
+ image = Image.open(image_path)
+ image = image.resize([224, 224])
+ if image.mode != "RGB":
+     image = image.convert(mode="RGB")
+
+ pixel_values = feature_extractor(images=[image], return_tensors="pt").pixel_values
+ pixel_values = pixel_values.to(device)
+
+ # Generate a caption and decode it to text
+ output_ids = model.generate(pixel_values, **gen_kwargs)
+ preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
+ print([pred.strip() for pred in preds])
+ ```
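+
+ With `num_beams=4`, `generate` performs beam search; `batch_decode` returns a list with one caption per input image, so the code above prints a single caption. Set `num_beams=1` for faster greedy decoding.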
 
  # Bias, Risks, and Limitations