to-be committed on
Commit: fceae27
Parent: facb0ae

Update app.py

Files changed (1): app.py (+1, −1)
app.py CHANGED
@@ -79,7 +79,7 @@ def process_document(image,sendimg):
     return processor.token2json(sequence), image
 
 title = '<table align="center" border="0" cellpadding="1" cellspacing="1" ><tbody><tr><td style="text-align:center"><img alt="" src="https://huggingface.co/spaces/to-be/invoice_document_headers_extraction_with_donut/resolve/main/circling_small.gif" style="float:right; height:50px; width:50px" /></td><td style="text-align:center"><h1>Demo: invoice header extraction with Donut</h1></td><td style="text-align:center"><img alt="" src="https://huggingface.co/spaces/to-be/invoice_document_headers_extraction_with_donut/resolve/main/circling2_small.gif" style="float:left; height:50px; width:50px" /></td></tr></tbody></table>'
-paragraph0 = '<p><strong>(update 29/03/2023: for additional info, you can read <a href="https://toon-beerten.medium.com/hands-on-document-data-extraction-with-transformer-7130df3b6132">my article on medium</a>)</strong></p>'
+paragraph0 = '<p><strong>(update 29/03/2023: for more info, you can read <a href="https://toon-beerten.medium.com/hands-on-document-data-extraction-with-transformer-7130df3b6132">my article on medium</a>)<br />(update 28/04/2023: want to finetune with your own data? Read&nbsp;<a href="https://towardsdatascience.com/ocr-free-document-data-extraction-with-transformers-1-2-b5a826bc2ac3">this article</a>)</strong></p>'
 paragraph1 = '<p>Basic idea of the base 🍩 model is to give it an image as input and extract indexes as text. No bounding boxes or confidences are generated.<br /> I finetuned it on invoices. For more info, see the <a href="https://arxiv.org/abs/2111.15664">original paper</a>&nbsp;and the 🤗&nbsp;<a href="https://huggingface.co/naver-clova-ix/donut-base">model</a>.</p>'
 paragraph2 = '<p><strong>Training</strong>:<br />The model was trained with a few thousand of annotated invoices and non-invoices (for those the doctype will be &#39;Other&#39;). They span across different countries and languages. They are always one page only. The dataset is proprietary unfortunately.&nbsp;Model is set to input resolution of 1280x1920 pixels. So any sample you want to try with higher dpi than 150 has no added value.<br />It was trained for about 4 hours on a&nbsp;NVIDIA RTX A4000 for 20k steps with a val_metric of&nbsp;0.03413819904382196 at the end.<br />The <u>following indexes</u> were included in the train set:</p><ul><li><span style="font-family:Calibri"><span style="color:black">DocType</span></span></li><li><span style="font-family:Calibri"><span style="color:black">Currency</span></span></li><li><span style="font-family:Calibri"><span style="color:black">DocumentDate</span></span></li><li><span style="font-family:Calibri"><span style="color:black">GrossAmount</span></span></li><li><span style="font-family:Calibri"><span style="color:black">InvoiceNumber</span></span></li><li><span style="font-family:Calibri"><span style="color:black">NetAmount</span></span></li><li><span style="font-family:Calibri"><span style="color:black">TaxAmount</span></span></li><li><span style="font-family:Calibri"><span style="color:black">OrderNumber</span></span></li><li><span style="font-family:Calibri"><span style="color:black">CreditorCountry</span></span></li></ul>'
 paragraph3 = '<p><strong>Benchmark observations:</strong><br />From all documents in the validation set,&nbsp; 60% of them had all indexes captured correctly.</p><p>Here are the results per index:</p><p style="margin-left:40px"><img alt="" src="https://s3.amazonaws.com/moonup/production/uploads/1677749023966-6335a49ceb6132ca653239a0.png" style="height:70%; width:70%" /></p><p>Some other observations:<br />- when trying with a non invoice document, it&#39;s quite reliably identified as Doctype: &#39;Other&#39;<br />- validation set contained mostly same layout invoices as the train set. If it was validated against completely differently sourced invoices, the results would be different<br />- Document date is able to be recognized across different notations, however, it&#39;s often wrong because the data set was not diverse (as in time span of dates) enough</p>'
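
For context around the `processor.token2json(sequence)` call in the hunk above, here is a minimal sketch of how Donut inference typically looks with the 🤗 transformers API. The checkpoint name, task prompt, and input filename below are illustrative placeholders, not the finetuned invoice model this Space actually loads.

```python
# Sketch of Donut-style inference, assuming the standard transformers Donut API.
# Checkpoint, task prompt and "invoice.png" are placeholders, not this Space's values.
import re
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")

image = Image.open("invoice.png").convert("RGB")  # scans above ~150 dpi add nothing here
pixel_values = processor(image, return_tensors="pt").pixel_values

task_prompt = "<s_cord-v2>"  # model-specific decoder start token
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=model.decoder.config.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
    bad_words_ids=[[processor.tokenizer.unk_token_id]],
    return_dict_in_generate=True,
)

# Decode the generated tokens and convert them to a JSON-like dict,
# mirroring the processor.token2json(sequence) call in the diff above.
sequence = processor.batch_decode(outputs.sequences)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(
    processor.tokenizer.pad_token, ""
)
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # drop the task prompt token
print(processor.token2json(sequence))
```

With the invoice-finetuned checkpoint behind this Space swapped in, the resulting dict would carry the indexes listed in paragraph2 (DocType, Currency, DocumentDate, GrossAmount, and so on) rather than CORD receipt fields.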