nielsr (HF staff) committed
Commit c697c92
1 Parent(s): d2989d2

Update model card

Files changed (1): README.md +4 -1
README.md CHANGED

@@ -34,10 +34,13 @@ Here is how to use this model to classify an image of the COCO 2017 dataset into
 from transformers import ViTFeatureExtractor, ViTForImageClassification
 from PIL import Image
 import requests
+
 url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
 image = Image.open(requests.get(url, stream=True).raw)
+
 feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224')
 model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224')
+
 inputs = feature_extractor(images=image, return_tensors="pt")
 outputs = model(**inputs)
 logits = outputs.logits
@@ -46,7 +49,7 @@ predicted_class_idx = logits.argmax(-1).item()
 print("Predicted class:", model.config.id2label[predicted_class_idx])
 ```
 
-Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon, and the API of ViTFeatureExtractor might change.
+For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/vit.html#).
 
 ## Training data
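
The post-processing step in the snippet above (`logits.argmax(-1).item()` followed by an `id2label` lookup) can be sketched in plain Python without downloading the model. The three-class `id2label` mapping and the logits row below are toy stand-ins (the real model has 1,000 ImageNet classes):

```python
# Toy stand-in for model.config.id2label (assumption: the real mapping has 1,000 entries).
id2label = {0: "tabby cat", 1: "golden retriever", 2: "goldfish"}

# Toy logits row, as ViTForImageClassification would produce one per input image.
logits_row = [0.2, 3.1, -1.0]

def predict(logits_row, id2label):
    """Return the label with the highest logit, mirroring logits.argmax(-1).item()."""
    predicted_class_idx = max(range(len(logits_row)), key=logits_row.__getitem__)
    return id2label[predicted_class_idx]

print("Predicted class:", predict(logits_row, id2label))
# Predicted class: golden retriever
```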