Davlan committed on
Commit
7f8b55e
1 Parent(s): 0e4769c

updating readme

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -7,7 +7,7 @@ datasets:
  # xlm-roberta-large-masakhaner
  ## Model description
  **xlm-roberta-large-masakhaner** is the first **Named Entity Recognition** model for 10 African languages (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Nigerian Pidgin, Swahilu, Wolof, and Yorùbá) based on a fine-tuned XLM-RoBERTa large model. It achieves the **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER).
- Specifically, this model is a *xlm-roberta-large* model that was fine-tuned on an aggregation of African language datasets obtained from Masakhane[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
+ Specifically, this model is a *xlm-roberta-large* model that was fine-tuned on an aggregation of African language datasets obtained from Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
  ## Intended uses & limitations
  #### How to use
  You can use this model with Transformers *pipeline* for NER.
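The README's usage snippet itself falls between the two hunks and is not shown in this diff (only `print(ner_results)` appears in the next hunk header). A minimal sketch of the pipeline call the model card describes, assuming the Hub model ID `Davlan/xlm-roberta-large-masakhaner` (taken from the card's title) and an illustrative example sentence; the exact snippet in the README may differ:

```python
# Sketch of the Transformers NER pipeline usage described in the model card.
# Model ID and example sentence are assumptions for illustration only.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "Davlan/xlm-roberta-large-masakhaner"  # assumed Hub ID for this model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# aggregation_strategy="simple" merges B-/I- sub-token predictions into whole entity spans
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"  # illustrative sentence
ner_results = ner(example)
print(ner_results)  # list of dicts with entity_group (DATE/LOC/ORG/PER), score, word, start, end
```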
@@ -24,7 +24,7 @@ print(ner_results)
  #### Limitations and bias
  This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
  ## Training data
- This model was fine-tuned on 10 African NER datasets (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Nigerian Pidgin, Swahilu, Wolof, and Yorùbá) Masakhane[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset
+ This model was fine-tuned on 10 African NER datasets (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Nigerian Pidgin, Swahilu, Wolof, and Yorùbá) Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset

  The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
  Abbreviation|Description