---
language:
- en
- fr
- ro
- de
- multilingual
pipeline_tag: image-to-text
tags:
- image-captioning
license: apache-2.0
---
# Model card for DePlot

![pull_figure](https://s3.amazonaws.com/moonup/production/uploads/62441d1d9fdefb55a0b7d12c/u8rWTawSyUegF4jzwOpNO.png)

# Table of Contents

0. [TL;DR](#tldr)
1. [Using the model](#using-the-model)
2. [Contribution](#contribution)
3. [Citation](#citation)

# TL;DR

The abstract of the paper states:

> Visual language such as charts and plots is ubiquitous in the human world. Comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples and their reasoning capabilities are still much limited, especially on complex human-written queries. This paper presents the first one-shot solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over the translated text. The key in this method is a modality conversion module, named as DePlot, which translates the image of a plot or chart to a linearized table. The output of DePlot can then be directly used to prompt a pretrained large language model (LLM), exploiting the few-shot reasoning capabilities of LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing unified task formats and metrics, and train DePlot end-to-end on this task. DePlot can then be used off-the-shelf together with LLMs in a plug-and-play fashion. Compared with a SOTA model finetuned on more than >28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over finetuned SOTA on human-written queries from the task of chart QA.
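
As a rough, unofficial sketch of the two-step recipe described in the abstract, the snippet below builds a one-shot prompt from a linearized table of the kind DePlot produces; the example table, the question, and the prompt wording are illustrative assumptions, and the resulting string would be passed to whatever pretrained LLM you choose.

```python
# Illustrative sketch of the DePlot+LLM recipe: a linearized table produced by
# DePlot is inserted into a prompt and handed to an LLM for few-shot reasoning.
# The table, question, and prompt wording below are made-up examples.
table = "Year | Unemployment rate <0x0A> 2019 | 3.7% <0x0A> 2020 | 8.1%"
question = "In which year was the unemployment rate higher?"

prompt = (
    "Read the following table and answer the question.\n\n"
    f"Table:\n{table}\n\n"
    f"Question: {question}\n"
    "Answer:"
)
print(prompt)  # send this string to a pretrained LLM of your choice
```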

# Using the model

## Converting from T5x to Hugging Face

You can use the [`convert_pix2struct_checkpoint_to_pytorch.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/pix2struct/convert_pix2struct_original_pytorch_to_hf.py) script as follows:
```bash
python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --is_vqa
```
If you are converting a large model, run:
```bash
python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --use-large --is_vqa
```
Once saved, you can push your converted model with the following snippet:
```python
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

model = Pix2StructForConditionalGeneration.from_pretrained(PATH_TO_SAVE)
processor = Pix2StructProcessor.from_pretrained(PATH_TO_SAVE)

model.push_to_hub("USERNAME/MODEL_NAME")
processor.push_to_hub("USERNAME/MODEL_NAME")
```
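
Once pushed, you can optionally reload the checkpoint straight from the Hub as a quick sanity check (replace `USERNAME/MODEL_NAME` with the repository you pushed to):
```python
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

# Reload the converted checkpoint from the Hub to confirm the push went through
model = Pix2StructForConditionalGeneration.from_pretrained("USERNAME/MODEL_NAME")
processor = Pix2StructProcessor.from_pretrained("USERNAME/MODEL_NAME")
```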

## Run a prediction

You can run a prediction by querying an input image together with a text prompt as follows:
```python
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor
import requests
from PIL import Image

# Load the DePlot checkpoint and its processor from the Hub
model = Pix2StructForConditionalGeneration.from_pretrained('google/deplot')
processor = Pix2StructProcessor.from_pretrained('google/deplot')

# Download an example chart from the ChartQA validation set
url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/5090.png"
image = Image.open(requests.get(url, stream=True).raw)

# Ask the model to translate the chart into a linearized data table
inputs = processor(images=image, text="Generate underlying data table of the figure below:", return_tensors="pt")
predictions = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(predictions[0], skip_special_tokens=True))
```
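
The decoded prediction is a linearized table. As a rough illustration, the snippet below splits such a string into rows and cells, assuming the usual DePlot convention of `<0x0A>` as the row separator and `|` as the cell separator; treat both separators (and the example string) as assumptions and inspect your own outputs before relying on them.
```python
# Illustrative only: assumes rows are delimited by the literal "<0x0A>" marker
# and cells by "|", which is the typical layout of DePlot's linearized tables.
def parse_linearized_table(prediction: str):
    rows = []
    for line in prediction.split("<0x0A>"):
        cells = [cell.strip() for cell in line.split("|")]
        if any(cells):
            rows.append(cells)
    return rows

# Hypothetical output string, used only to demonstrate the parsing
example = "TITLE | Example chart <0x0A> Year | Value <0x0A> 2020 | 10 <0x0A> 2021 | 12"
for row in parse_linearized_table(example):
    print(row)
```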

# Contribution

This model was originally contributed by Fangyu Liu, Julian Martin Eisenschlos et al. and added to the Hugging Face ecosystem by [Younes Belkada](https://huggingface.co/ybelkada).

# Citation

If you want to cite this work, please consider citing the original paper:
```
@misc{liu2022matcha,
      title={MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering},
      author={Fangyu Liu and Francesco Piccinno and Syrine Krichene and Chenxi Pang and Kenton Lee and Mandar Joshi and Yasemin Altun and Nigel Collier and Julian Martin Eisenschlos},
      year={2022},
      eprint={2212.09662},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```