Commit 817d063 (1 parent: 64938b0) by ydshieh (HF staff): Create README.md

Files changed (1): README.md (+51 −0, new file)
## Example

The model is by no means state-of-the-art, but it nevertheless produces
reasonable image-captioning results. It was mainly fine-tuned as a
proof of concept for the 🤗 FlaxVisionEncoderDecoder Framework.
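
A model like this is built by pairing a pretrained vision encoder with a
pretrained autoregressive decoder and fine-tuning the combination. As a
minimal sketch of how the framework composes the two (the encoder and
decoder checkpoints below are illustrative assumptions, not necessarily
the exact ones used to train this repo):

```python
from transformers import FlaxVisionEncoderDecoderModel

# Stitch a pretrained ViT encoder to a pretrained GPT-2 decoder.
# The cross-attention weights connecting them are newly initialized,
# so the combined model must be fine-tuned (e.g. on COCO captions)
# before it produces sensible captions.
model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "gpt2"
)
```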

The model can be used as follows:

8
+
9
+ ```python
10
+
11
+ import requests
12
+
13
+ from PIL import Image
14
+
15
+ from transformers import ViTFeatureExtractor, AutoTokenizer, FlaxVisionEncoderDecoderModel
16
+
17
+ loc = "ydshieh/flax-vit-gpt2-coco-en"
18
+
19
+ feature_extractor = ViTFeatureExtractor.from_pretrained(loc)
20
+
21
+ tokenizer = AutoTokenizer.from_pretrained(loc)
22
+
23
+ model = FlaxVisionEncoderDecoderModel.from_pretrained(loc)
24
+
25
+ # We will verify our results on an image of cute cats
26
+
27
+ url = "http://images.cocodataset.org/val2017/000000039769.jpg"
28
+
29
+ with Image.open(requests.get(url, stream=True).raw) as img:
30
+
31
+ pixel_values = feature_extractor(images=img, return_tensors="np").pixel_values
32
+
33
+ def generate_step(pixel_values):
34
+
35
+ output_ids = model.generate(pixel_values, max_length=16, num_beams=4).sequences
36
+
37
+ preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
38
+
39
+ preds = [pred.strip() for pred in preds]
40
+
41
+ return preds
42
+
43
+ preds = generate_step(pixel_values)
44
+
45
+ print(preds)
46
+
47
+ # should produce
48
+
49
+ # ['a cat laying on top of a couch next to another cat']
50
+
51
+ ```
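
Because this is a Flax/JAX model, the generation step can also be
JIT-compiled: the first call compiles (and is slow), while subsequent calls
with inputs of the same shape are much faster. A minimal sketch, assuming
the snippet above has already been run; `jax.jit` is standard JAX, and the
string decoding has to stay outside the jitted function:

```python
import jax

# JIT-compile only the array-in / array-out part of generation.
@jax.jit
def generate(pixel_values):
    return model.generate(pixel_values, max_length=16, num_beams=4).sequences

output_ids = generate(pixel_values)  # first call compiles, later calls are fast
preds = [p.strip() for p in tokenizer.batch_decode(output_ids, skip_special_tokens=True)]
print(preds)
```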