Davlan committed
Commit
13a6a4b
1 Parent(s): b8aac90

adding mt5_eng_yor

Files changed (1)
  1. README.md +9 -8
README.md CHANGED
@@ -15,14 +15,15 @@ Specifically, this model is a *mT5_base* model that was fine-tuned on JW300 Yor
  #### How to use
  You can use this model with Transformers *pipeline* for ADR.
  ```python
- from transformers import AutoTokenizer, AutoModelForTokenClassification
- from transformers import pipeline
- tokenizer = AutoTokenizer.from_pretrained("")
- model = AutoModelForTokenClassification.from_pretrained("")
- nlp = pipeline("", model=model, tokenizer=tokenizer)
- example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"
- ner_results = nlp(example)
- print(ner_results)
+ from transformers import MT5ForConditionalGeneration, T5Tokenizer
+
+ model = MT5ForConditionalGeneration.from_pretrained("Davlan/mt5_base_eng_yor_mt")
+ tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
+ input_string = "Where are you?"
+ inputs = tokenizer.encode(input_string, return_tensors="pt")
+ generated_tokens = model.generate(inputs)
+ results = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
+ print(results)
  ```
  #### Limitations and bias
  This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
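The added example calls `model.generate` directly. For readers who want the *pipeline* interface mentioned in the prose, here is a minimal sketch that wraps the same checkpoints in the generic `text2text-generation` task; the task choice is an assumption (it is not part of this commit), while the model and tokenizer names are taken from the added lines above.

```python
# Sketch only: same checkpoints as the added example, wrapped in a pipeline.
# The "text2text-generation" task name is an assumption; the commit itself
# calls model.generate directly.
from transformers import pipeline

translator = pipeline(
    "text2text-generation",
    model="Davlan/mt5_base_eng_yor_mt",
    tokenizer="google/mt5-base",
)

# Returns a list of dicts with a "generated_text" field holding the Yoruba output.
print(translator("Where are you?"))
```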