nielsr (HF staff) committed
Commit 1e43d8f
1 Parent(s): ebdbc9c

Update README.md

Files changed (1)
  1. README.md +21 -21
README.md CHANGED
@@ -46,27 +46,6 @@ across four domains: documents, illustrations, user interfaces, and natural imag
 
  This model has been fine-tuned on VQA, you need to provide a question in a specific format, ideally in the format of a Choices question answering
 
- ## Converting from T5x to huggingface
-
- You can use the [`convert_pix2struct_checkpoint_to_pytorch.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/pix2struct/convert_pix2struct_checkpoint_to_pytorch.py) script as follows:
- ```bash
- python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --is_vqa
- ```
- if you are converting a large model, run:
- ```bash
- python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --use-large --is_vqa
- ```
- Once saved, you can push your converted model with the following snippet:
- ```python
- from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor
-
- model = Pix2StructForConditionalGeneration.from_pretrained(PATH_TO_SAVE)
- processor = Pix2StructProcessor.from_pretrained(PATH_TO_SAVE)
-
- model.push_to_hub("USERNAME/MODEL_NAME")
- processor.push_to_hub("USERNAME/MODEL_NAME")
- ```
-
  ## Running the model
 
  ### In full precision, on CPU:
@@ -140,6 +119,27 @@ print(processor.decode(predictions[0], skip_special_tokens=True))
  >>> ash cloud
  ```
 
+ ## Converting from T5x to huggingface
+
+ You can use the [`convert_pix2struct_checkpoint_to_pytorch.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/pix2struct/convert_pix2struct_checkpoint_to_pytorch.py) script as follows:
+ ```bash
+ python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --is_vqa
+ ```
+ if you are converting a large model, run:
+ ```bash
+ python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --use-large --is_vqa
+ ```
+ Once saved, you can push your converted model with the following snippet:
+ ```python
+ from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor
+
+ model = Pix2StructForConditionalGeneration.from_pretrained(PATH_TO_SAVE)
+ processor = Pix2StructProcessor.from_pretrained(PATH_TO_SAVE)
+
+ model.push_to_hub("USERNAME/MODEL_NAME")
+ processor.push_to_hub("USERNAME/MODEL_NAME")
+ ```
+
 
 # Contribution
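The `## Running the model` / `### In full precision, on CPU:` bodies referenced above fall outside this diff's context lines. A minimal sketch of that usage, assuming the converted checkpoint was pushed under the `USERNAME/MODEL_NAME` placeholder from the `push_to_hub` snippet; the image URL and the choices-style question are illustrative placeholders, not part of this commit:

```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

# Placeholder repo id -- substitute whatever was passed to push_to_hub above.
model = Pix2StructForConditionalGeneration.from_pretrained("USERNAME/MODEL_NAME")
processor = Pix2StructProcessor.from_pretrained("USERNAME/MODEL_NAME")

# Hypothetical input image and a choices-style question, the format the README
# says this VQA fine-tune expects.
image = Image.open(requests.get("https://example.com/diagram.png", stream=True).raw)
question = "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"

# For VQA checkpoints the processor renders the question onto the image as a
# header before extracting patches, so both image and text are required.
inputs = processor(images=image, text=question, return_tensors="pt")

# Full precision on CPU: no device move or dtype override needed.
predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
```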