raphael0202 committed
Commit 72b8226 · verified · 1 Parent(s): 7a43f38

Update README.md

Files changed (1): README.md (+15 −5)
README.md CHANGED
@@ -34,12 +34,12 @@ model-index:
     value: 0.9916725247390905
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
 
 # nutrition-extractor
 
 This model is a fine-tuned version of [microsoft/layoutlmv3-large](https://huggingface.co/microsoft/layoutlmv3-large) on the openfoodfacts/nutrient-detection-layout dataset.
+It automatically extracts nutrition values from images of nutrition tables.
+
 It achieves the following results on the evaluation set:
 - Loss: 0.0534
 - Precision: 0.9545
@@ -49,15 +49,25 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
+This model extracts nutrient values from nutrition tables. It was developed as part of the Nutrisight project.
+
+For more information about the project, please refer to the [nutrisight](https://github.com/openfoodfacts/openfoodfacts-ai/tree/develop/nutrisight) directory in the openfoodfacts-ai GitHub repository.
+
+Like any model using the LayoutLM architecture, this model expects as input:
+
+- the image
+- the tokens (strings) on the image
+- the 2D coordinates of each token
+
+The tokens and their 2D positions are provided by an OCR model. This model was trained using OCR results from Google Cloud Vision.
 
 ## Intended uses & limitations
 
-More information needed
+This model is only intended to be used on images of products where a nutrition table can be found.
 
 ## Training and evaluation data
 
-More information needed
+The training and evaluation data can be found on the [dataset page](https://huggingface.co/datasets/openfoodfacts/nutrient-detection-layout).
 
 ## Training procedure
 
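The model card's added text says the model takes OCR tokens together with their 2D coordinates. A common preprocessing detail for LayoutLM-family models is that bounding boxes must be normalized to a 0–1000 coordinate grid before being passed to the processor. A minimal sketch of that step, assuming pixel-space `(x0, y0, x1, y1)` boxes from the OCR engine (the helper name is hypothetical, not part of the model card):

```python
def normalize_box(box, image_width, image_height):
    """Map an (x0, y0, x1, y1) pixel box to the 0-1000 grid
    that LayoutLM-family models expect for token coordinates."""
    x0, y0, x1, y1 = box
    return [
        int(1000 * x0 / image_width),
        int(1000 * y0 / image_height),
        int(1000 * x1 / image_width),
        int(1000 * y1 / image_height),
    ]

# Example: one OCR token box on an 800x600 image
print(normalize_box((80, 60, 400, 120), 800, 600))  # [100, 100, 500, 200]
```

The normalized boxes, together with the token strings and the image, are what the model's processor consumes; with Hugging Face processors this typically means disabling built-in OCR (e.g. `apply_ocr=False`) since the OCR results are supplied externally here.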