Commit cb877ad by saattrupdan (parent: 4a030b9)

Update README.md

Files changed (1):
  1. README.md (+3 -15)
README.md CHANGED
@@ -2,17 +2,7 @@
 ---
 language:
 - da
-tags:
-- ned
-- xlm-roberta
-- pytorch
-- transformers
 license: cc-by-sa-4.0
-datasets:
-- DaNED
-- DaWikiNED
-metrics:
-- f1
 ---
 
 # XLM-Roberta fine-tuned for Named Entity Disambiguation
@@ -25,8 +15,8 @@ Here is how to use the model:
 ```python
 from transformers import XLMRobertaTokenizer, XLMRobertaForSequenceClassification
 
-model = XLMRobertaForSequenceClassification.from_pretrained("DaNLP/da-xlmr-ned")
-tokenizer = XLMRobertaTokenizer.from_pretrained("DaNLP/da-xlmr-ned")
+model = XLMRobertaForSequenceClassification.from_pretrained("alexandrainst/da-xlmr-ned")
+tokenizer = XLMRobertaTokenizer.from_pretrained("alexandrainst/da-xlmr-ned")
 ```
 
 The tokenizer takes 2 strings has input: the sentence and the knowledge graph (KG) context.
@@ -43,6 +33,4 @@ See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/d
 
 ## Training Data
 
-The model has been trained on the [DaNED](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#daned) and [DaWikiNED](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#dawikined) datasets.
-
-
+The model has been trained on the [DaNED](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#daned) and [DaWikiNED](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#dawikined) datasets.
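The README's usage note says the tokenizer takes two strings, the sentence and a knowledge graph (KG) context, and the model classifies whether they match. The surrounding candidate-ranking loop could be sketched as below; note that `score_pair`, `disambiguate`, and the candidate data are all hypothetical placeholders (a toy token-overlap score stands in for the model's match probability, so this runs without downloading anything):

```python
def score_pair(sentence: str, kg_context: str) -> float:
    """Placeholder for the model's match score between a sentence and a
    KG context string; here it is just normalized token overlap."""
    sent_tokens = set(sentence.lower().split())
    ctx_tokens = set(kg_context.lower().split())
    return len(sent_tokens & ctx_tokens) / max(len(ctx_tokens), 1)


def disambiguate(sentence: str, candidates: dict) -> str:
    """Return the candidate entity ID whose KG context scores highest
    against the sentence (candidates maps entity ID -> context string)."""
    return max(candidates, key=lambda eid: score_pair(sentence, candidates[eid]))


# Hypothetical example: two Wikidata-style candidates for "Margrethe".
sentence = "Dronning Margrethe besøgte Aarhus i sommer."
candidates = {
    "Q102139": "Margrethe 2. dronning af Danmark regent monark",
    "Q1747689": "Margrethe 1. dronning af Danmark Norge Sverige middelalder",
}
best = disambiguate(sentence, candidates)
```

In the real pipeline, `score_pair` would instead tokenize the pair with `tokenizer(sentence, kg_context, return_tensors="pt")`, run it through the sequence-classification model, and use the "match" class probability.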