CuiSiwei committed
Commit 18ede56
1 Parent(s): 7c4f273

Update README.md

Files changed (1)
  1. README.md +48 -67
README.md CHANGED
@@ -1,15 +1,10 @@
- ---
- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
- # Doc / guide: https://huggingface.co/docs/hub/model-cards
- {}
- ---
-
  # Nougat for formula

  <!-- Provide a quick summary of what the model is/does. -->

  We performed fune-tuning on [small-sized Nougat model](https://huggingface.co/facebook/nougat-small) using data
- from [IM2LATEX-100K](https://www.kaggle.com/datasets/shahrukhkhan/im2latex100k) and luckily got good results.

  ## Model Details

@@ -17,63 +12,64 @@ from [IM2LATEX-100K](https://www.kaggle.com/datasets/shahrukhkhan/im2latex100k)

  <!-- Provide a longer summary of what this model is. -->

- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

- ### Model Sources [optional]

- <!-- Provide the basic links for the model. -->

- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

  ## Uses

  <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

- [More Information Needed]

- ### Downstream Use [optional]

- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

- [More Information Needed]

- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]

- ## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->

- [More Information Needed]

- ### Recommendations

- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

- ## How to Get Started with the Model

- Use the code below to get started with the model.

- [More Information Needed]

  ## Training Details

@@ -81,26 +77,23 @@ Use the code below to get started with the model.

  <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- [More Information Needed]
-
- ### Training Procedure

- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

- #### Preprocessing [optional]

- [More Information Needed]

- #### Training Hyperparameters

- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

- #### Speeds, Sizes, Times [optional]

- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

- [More Information Needed]

  ## Evaluation

@@ -111,34 +104,22 @@ Use the code below to get started with the model.
  #### Testing Data

  <!-- This should link to a Dataset Card if possible. -->

- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]

  #### Metrics

  <!-- These are the evaluation metrics being used, ideally with a description of why. -->

- [More Information Needed]

  ### Results

- [More Information Needed]
-
- #### Summary

- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]

  ## Environmental Impact

  # Nougat for formula

  <!-- Provide a quick summary of what the model is/does. -->

  We performed fine-tuning on the [small-sized Nougat model](https://huggingface.co/facebook/nougat-small) using data
+ from [IM2LATEX-100K](https://www.kaggle.com/datasets/shahrukhkhan/im2latex100k) to make it especially good at
+ recognizing formulas in images.

  ## Model Details

  <!-- Provide a longer summary of what this model is. -->

+ Nougat for formula specializes in recognizing formulas in images. It takes an image with a white background and a
+ formula written in black as input and returns accurate LaTeX code for the formula.

+ The Nougat model (Neural Optical Understanding for Academic Documents) was proposed by Meta AI in August 2023 as
+ a visual Transformer model for processing scientific documents. It converts PDF documents into a markup language and
+ is especially good at recognizing mathematical expressions and tables. The goal of the model is to improve the
+ accessibility of scientific knowledge by bridging human-readable documents and machine-readable text.

+ - **Model type:** Vision Encoder Decoder
+ - **Finetuned from model:** [Nougat model, small-sized version](https://huggingface.co/facebook/nougat-small)

  ## Uses

  <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

+ Nougat for formula can be used as a tool for converting complicated formulas into LaTeX code, and it has the potential
+ to substitute for other such tools.

+ For example, when you are taking notes and tired of typing out long LaTeX/Markdown formula code, just take a screenshot
+ of the formula and feed it to Nougat for formula. You will get the exact code for the formula, as long as the output
+ does not exceed the maximum length you set for the model.

+ You can also continue fine-tuning the model to make it better at recognizing formulas from particular subjects.

+ Nougat for formula may also be useful when developing tools or apps that generate LaTeX code.

+ ## How to Get Started with the Model

+ The demo below shows how to feed an image into the model and generate LaTeX/Markdown formula code.

+ ```python
+ from transformers import NougatProcessor, VisionEncoderDecoderModel
+ from PIL import Image

+ max_length = 100  # define the max length of the generated output
+ processor = NougatProcessor.from_pretrained(r".", max_length=max_length)  # Replace "." with your model path
+ model = VisionEncoderDecoderModel.from_pretrained(r".")  # Replace "." with your model path

+ image = Image.open(r"image_path")  # Replace with the path to your formula image
+ image = processor(image, return_tensors="pt").pixel_values  # The processor resizes the image to the model's input size

+ result_tensor = model.generate(
+     image,
+     max_length=max_length,
+     bad_words_ids=[[processor.tokenizer.unk_token_id]],
+ )  # generate a tensor of token ids

+ result = processor.batch_decode(result_tensor, skip_special_tokens=True)  # decode the token ids with the processor
+ result = processor.post_process_generation(result, fix_markdown=False)

+ print(*result)
+ ```
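If you are building a tool that converts many screenshots at once, the same processor and model also accept batches. The snippet below is an illustrative extension of the demo, not part of the original card; it reuses `processor`, `model` and `max_length` from above, and the file names are placeholders.

```python
# Continuing from the demo above: batch several formula images in one call.
# "formula1.png" / "formula2.png" are placeholder file names.
images = [Image.open(p) for p in ["formula1.png", "formula2.png"]]
pixel_values = processor(images, return_tensors="pt").pixel_values

outputs = model.generate(
    pixel_values,
    max_length=max_length,
    bad_words_ids=[[processor.tokenizer.unk_token_id]],
)

decoded = processor.batch_decode(outputs, skip_special_tokens=True)
print(processor.post_process_generation(decoded, fix_markdown=False))
```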

  ## Training Details

  <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

+ [IM2LATEX-100K](https://www.kaggle.com/datasets/shahrukhkhan/im2latex100k)

+ #### Preprocessing

+ The preprocessing of X (the image) is shown in the short demo above.

+ The preprocessing of Y (the formula) is done in two steps (a short sketch follows below):

+ 1. Removing the spaces from the formula string.
+ 2. Tokenizing the string with the `processor`.
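As a concrete illustration, the two steps above could look roughly like this. This is a minimal sketch, not the original training code; the helper name, the example formula, and the tokenizer settings (padding and truncation to `max_length`) are assumptions.

```python
from transformers import NougatProcessor

processor = NougatProcessor.from_pretrained("facebook/nougat-small")
max_length = 100  # assumed to match the generation max_length used above

def preprocess_formula(formula: str):
    """Illustrative label preprocessing: strip spaces, then tokenize with the processor."""
    formula = formula.replace(" ", "")  # step 1: remove the spaces
    return processor.tokenizer(         # step 2: tokenize the string
        formula,
        padding="max_length",
        max_length=max_length,
        truncation=True,
        return_tensors="pt",
    ).input_ids

labels = preprocess_formula(r"\frac { a } { b } + c ^ { 2 }")
print(labels.shape)  # (1, max_length)
```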

+ #### Training Hyperparameters

+ - **Training regime:** `torch.optim.AdamW(model.parameters(), lr=1e-4)` <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
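The card only states the optimizer and learning rate. The loop below is a minimal sketch of what fine-tuning with that regime could look like, not the actual training script; the batching, label handling and absence of an epoch loop are assumptions, and `train_batches` is a placeholder for your own data loader.

```python
import torch
from transformers import NougatProcessor, VisionEncoderDecoderModel

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = NougatProcessor.from_pretrained("facebook/nougat-small")
model = VisionEncoderDecoderModel.from_pretrained("facebook/nougat-small").to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # the regime stated above

train_batches = []  # placeholder: an iterable of (list of PIL images, list of formula strings)

model.train()
for images, formulas in train_batches:
    pixel_values = processor(images, return_tensors="pt").pixel_values.to(device)
    labels = processor.tokenizer(
        [f.replace(" ", "") for f in formulas],  # same label preprocessing as above
        padding="max_length", max_length=100, truncation=True, return_tensors="pt",
    ).input_ids.to(device)
    labels[labels == processor.tokenizer.pad_token_id] = -100  # ignore padding in the loss

    loss = model(pixel_values=pixel_values, labels=labels).loss  # decoder cross-entropy
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```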
  ## Evaluation

  #### Testing Data

  <!-- This should link to a Dataset Card if possible. -->
+ The testing data is also taken from [IM2LATEX-100K](https://www.kaggle.com/datasets/shahrukhkhan/im2latex100k).
+ Note that the dataset already comes with separate train, validation and test splits.

  #### Metrics

  <!-- These are the evaluation metrics being used, ideally with a description of why. -->

+ BLEU and CER (character error rate).

  ### Results

+ On the test data, BLEU is 0.8157 and CER is 0.1601.
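For reference, BLEU and CER can be computed with the Hugging Face `evaluate` library roughly as below. This is an illustrative sketch, not the original evaluation script; the prediction and reference lists are placeholders to be filled with model outputs and ground-truth LaTeX from the test split.

```python
import evaluate

bleu = evaluate.load("bleu")  # corpus-level BLEU
cer = evaluate.load("cer")    # character error rate (requires the jiwer package)

# Placeholders: replace with generated LaTeX and the ground-truth LaTeX of the test set.
predictions = [r"\frac{a}{b}+c^{2}"]
references = [r"\frac{a}{b}+c^{2}"]

bleu_score = bleu.compute(predictions=predictions, references=[[r] for r in references])["bleu"]
cer_score = cer.compute(predictions=predictions, references=references)

print(f"BLEU: {bleu_score:.4f}  CER: {cer_score:.4f}")
```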
 
 
  ## Environmental Impact