edwardjross committed on
Commit 2e510e1
1 Parent(s): 05fc087

Update metadata

Files changed (1)
  1. README.md +35 -4
README.md CHANGED
@@ -7,6 +7,12 @@ metrics:
model-index:
- name: xlm-roberta-base-finetuned-recipe-all
  results: []
+ widget:
+ - text: "1 sheet of frozen puff pastry (thawed)"
+ - text: "1/2 teaspoon fresh thyme, minced"
+ - text: "2-3 medium tomatoes"
+ - text: "1 petit oignon rouge"
+
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -14,25 +20,50 @@ should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-recipe-all

- This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
+ This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the recipe ingredient [NER dataset](https://github.com/cosylabiiit/recipe-knowledge-mining) from the paper [A Named Entity Based Approach to Model Recipes](https://arxiv.org/abs/2004.12184) (using both the `gk` and `ar` datasets).
+
It achieves the following results on the evaluation set:
- Loss: 0.1169
- F1: 0.9672

+ On the test set it obtains an F1 of 0.9615, slightly above the CRF used in the paper.
+
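The card doesn't say how the F1 is computed. Chapter 4 of the book this card follows uses seqeval's entity-level F1, so as an assumption, here is a minimal sketch of that metric on made-up tag sequences:

```python
# Assumed metric: seqeval entity-level F1 (the book's chapter 4 choice);
# the model's actual evaluation code is in the linked training notebook.
from seqeval.metrics import f1_score

# Made-up gold and predicted tag sequences for two ingredient strings.
y_true = [["B-QUANTITY", "B-UNIT", "B-NAME", "I-NAME"],
          ["B-QUANTITY", "B-SIZE", "B-NAME"]]
y_pred = [["B-QUANTITY", "B-UNIT", "B-NAME", "I-NAME"],
          ["B-QUANTITY", "B-NAME", "B-NAME"]]

print(f1_score(y_true, y_pred))  # micro-averaged F1 over entities
```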

## Model description

- More information needed
+ Predicts the tag of each token in an ingredient string.
+
+ | Tag | Significance | Example |
+ | --- | --- | --- |
+ | NAME | Name of the ingredient | salt, pepper |
+ | STATE | Processing state of the ingredient | ground, thawed |
+ | UNIT | Measuring unit(s) | gram, cup |
+ | QUANTITY | Quantity associated with the unit(s) | 1, 1 1/2, 2-4 |
+ | SIZE | Portion size mentioned | small, large |
+ | TEMP | Temperature applied prior to cooking | hot, frozen |
+ | DF (DRY/FRESH) | Dry or fresh state; assumed fresh unless mentioned | dry, fresh |
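The card has no usage example. As a minimal sketch, assuming the model is published on the Hub as `edwardjross/xlm-roberta-base-finetuned-recipe-all` (inferred from the committer and model name, not stated in the diff), the tags above can be obtained with the `transformers` token-classification pipeline. Using `aggregation_strategy="first"` propagates each word's first-subtoken tag to the whole word, which is the propagation the limitations below call for.

```python
# Minimal inference sketch (not part of the original card); the repo id
# is an assumption inferred from the committer and model name.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="edwardjross/xlm-roberta-base-finetuned-recipe-all",
    aggregation_strategy="first",  # propagate the first subtoken's tag to each whole word
)

for entity in tagger("1 sheet of frozen puff pastry (thawed)"):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```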
 
## Intended uses & limitations

- More information needed
+ * Only trained on ingredient strings.
+ * Tags subtokens; the tag should be propagated to the whole word (as in the pipeline sketch above).
+ * Works best when symbols (such as parentheses) and numbers are split into separate tokens before tagging (e.g. 50g -> 50 g); see the sketch after this list.
+ * Typically only detects the first ingredient if there are multiple.
+ * Only trained on two American English data sources.
+ * The TEMP and DF tags have very little training data.
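One illustrative way to do that splitting (the regexes are an assumption for demonstration, not the preprocessing used in the training notebook):

```python
import re

def pre_tokenise(text: str) -> str:
    """Hypothetical helper: separate digits from letters and pad symbols
    so the input matches the tokenisation the model saw in training."""
    text = re.sub(r"(\d)([A-Za-z])", r"\1 \2", text)  # 50g -> 50 g
    text = re.sub(r"([A-Za-z])(\d)", r"\1 \2", text)  # x2  -> x 2
    text = re.sub(r"([(),;])", r" \1 ", text)         # pad parentheses and commas
    return re.sub(r"\s+", " ", text).strip()          # collapse repeated spaces

print(pre_tokenise("50g unsalted butter (softened)"))
# -> 50 g unsalted butter ( softened )
```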
 
## Training and evaluation data

- More information needed
+ Both the `ar` (AllRecipes.com) and `gk` (FOOD.com) datasets were obtained from the TSVs in the authors' [repository](https://github.com/cosylabiiit/recipe-knowledge-mining).

  ## Training procedure

+ It follows the overall procedure from Chapter 4 of [Natural Language Processing with Transformers](https://www.oreilly.com/library/view/natural-language-processing/9781098103231/) by Tunstall, von Werra and Wolf.
+
+ See the [training notebook](https://github.com/EdwardJRoss/nlp_transformers_exercises/blob/master/notebooks/ch4-ner-recipe-stanford-crf.ipynb) for details.
+
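For readers who want the shape of that procedure without opening the notebook, here is a schematic chapter-4-style token-classification fine-tune. The toy dataset, abbreviated label set, and arguments are assumptions for illustration; the notebook is the authoritative version.

```python
# Schematic sketch of a chapter-4-style fine-tune (not the author's code);
# uses a tiny in-memory dataset so the example is self-contained.
from datasets import Dataset
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification, Trainer,
                          TrainingArguments)

labels = ["O", "B-NAME", "B-QUANTITY", "B-UNIT"]  # abbreviated label set
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def encode(example):
    # Align word-level tags to subtokens: the first subtoken of each word
    # keeps the tag, the rest get -100 so the loss ignores them.
    enc = tokenizer(example["words"], is_split_into_words=True, truncation=True)
    tag_ids, previous = [], None
    for wid in enc.word_ids():
        tag_ids.append(-100 if wid is None or wid == previous else example["tags"][wid])
        previous = wid
    enc["labels"] = tag_ids
    return enc

toy = Dataset.from_dict({
    "words": [["2", "cups", "flour"], ["1", "small", "onion"]],
    "tags":  [[2, 3, 1], [2, 0, 1]],
}).map(encode, remove_columns=["words", "tags"])

model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(labels))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="toy-recipe-ner", num_train_epochs=1),
    train_dataset=toy,
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```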
### Training hyperparameters

The following hyperparameters were used during training: