nielsr (HF staff) committed
Commit 9468063
1 Parent(s): 30bbcc8

Update README.md

Files changed (1):
  1. README.md (+19 -5)

README.md CHANGED
@@ -8,6 +8,8 @@ language:
 inference: false
 pipeline_tag: visual-question-answering
 license: apache-2.0
+tags:
+- matcha
 ---
 # Model card for MatCha - fine-tuned on Chart2text-pew
 
@@ -30,7 +32,23 @@ The abstract of the paper states that:
 
 # Using the model
 
-## Converting from T5x to huggingface
+```python
+from transformers import Pix2StructProcessor, Pix2StructForConditionalGeneration
+import requests
+from PIL import Image
+
+processor = Pix2StructProcessor.from_pretrained('google/matcha-chart2text-pew')
+model = Pix2StructForConditionalGeneration.from_pretrained('google/matcha-chart2text-pew')
+
+url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/20294671002019.png"
+image = Image.open(requests.get(url, stream=True).raw)
+
+inputs = processor(images=image, return_tensors="pt")
+predictions = model.generate(**inputs, max_new_tokens=512)
+print(processor.decode(predictions[0], skip_special_tokens=True))
+```
+
+# Converting from T5x to huggingface
 
 You can use the [`convert_pix2struct_checkpoint_to_pytorch.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/pix2struct/convert_pix2struct_original_pytorch_to_hf.py) script as follows:
 ```bash
@@ -51,10 +69,6 @@ model.push_to_hub("USERNAME/MODEL_NAME")
 processor.push_to_hub("USERNAME/MODEL_NAME")
 ```
 
-## Run predictions
-
-To run predictions, refer to the [instructions presented in the `matcha-chartqa` model card](https://huggingface.co/ybelkada/matcha-chartqa#get-predictions-from-the-model).
-
 # Contribution
 
 This model was originally contributed by Fangyu Liu, Francesco Piccinno et al. and added to the Hugging Face ecosystem by [Younes Belkada](https://huggingface.co/ybelkada).
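
Note: the body of the conversion command and the subsequent load-from-disk step sit in unchanged context between the second and third hunks, so they are not shown above. As a minimal sketch of that step, assuming only the public `transformers` API, with `PATH_TO_SAVE` as a placeholder and `USERNAME/MODEL_NAME` taken from the hunk context, the converted checkpoint could be reloaded and pushed to the Hub like this:

```python
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

# Placeholder: the directory where the conversion script wrote the HF-format checkpoint.
path_to_save = "PATH_TO_SAVE"

# Load the converted model and processor from disk.
model = Pix2StructForConditionalGeneration.from_pretrained(path_to_save)
processor = Pix2StructProcessor.from_pretrained(path_to_save)

# Push both artifacts to the Hub, matching the context lines of the third hunk.
model.push_to_hub("USERNAME/MODEL_NAME")
processor.push_to_hub("USERNAME/MODEL_NAME")
```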