Dizex committed
Commit 2aa3968 · 1 parent: 6951832

Update README.md

Files changed (1): README.md (+4, -4)
README.md CHANGED
@@ -16,11 +16,11 @@ tags:
  - Informal text
  license: mit
  ---
- # InstaFoodBERT
+ # InstaFoodBERT-NER
 
  ## Model description
 
- **InstaFoodBERT** is a fine-tuned BERT model that is ready to use for **Named Entity Recognition** of Food entities on informal text (social media like). It has been trained to recognize a single entity: food (FOOD).
+ **InstaFoodBERT-NER** is a fine-tuned BERT model that is ready to use for **Named Entity Recognition** of Food entities on informal text (social media like). It has been trained to recognize a single entity: food (FOOD).
 
  Specifically, this model is a *bert-base-cased* model that was fine-tuned on a dataset consisting of 400 English Instagram posts related to food. The [dataset](https://huggingface.co/datasets/Dizex/InstaFoodSet) is open source.
 
@@ -35,8 +35,8 @@ You can use this model with Transformers *pipeline* for NER.
  from transformers import AutoTokenizer, AutoModelForTokenClassification
  from transformers import pipeline
 
- tokenizer = AutoTokenizer.from_pretrained("Dizex/InstaFoodBERT")
- model = AutoModelForTokenClassification.from_pretrained("Dizex/InstaFoodBERT")
+ tokenizer = AutoTokenizer.from_pretrained("Dizex/InstaFoodBERT-NER")
+ model = AutoModelForTokenClassification.from_pretrained("Dizex/InstaFoodBERT-NER")
 
  pipe = pipeline("ner", model=model, tokenizer=tokenizer)
  example = "Today's meal: Fresh olive poké bowl topped with chia seeds. Very delicious!"
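
The usage snippet in the second hunk ends at the `example` line because the hunk does; a minimal sketch of how the updated snippet would typically be run end to end is shown below. The final `pipe(example)` call and the `print` are assumptions for illustration, not part of the commit.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Load the renamed checkpoint referenced in the updated README.
tokenizer = AutoTokenizer.from_pretrained("Dizex/InstaFoodBERT-NER")
model = AutoModelForTokenClassification.from_pretrained("Dizex/InstaFoodBERT-NER")

# Build a token-classification (NER) pipeline from the loaded model and tokenizer.
pipe = pipeline("ner", model=model, tokenizer=tokenizer)

example = "Today's meal: Fresh olive poké bowl topped with chia seeds. Very delicious!"

# Running the pipeline returns a list of dicts (entity label, score, token text,
# character offsets) for each token tagged as FOOD. (Call assumed; not in the hunk.)
print(pipe(example))
```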