---
license: openrail
language:
- en
metrics:
- accuracy
pipeline_tag: text2text-generation
widget:
- text: "predict [SEP] Arman Kirakossian country of citizenship [SEP] place of birth Yerevan [SEP] instance of human [SEP] occupation diplomat [SEP] occupation historian [SEP] ethnic group Armenians [SEP]"
  example_title: "Predict country of citizenship"
---

This is a t5-small model trained on the Wikidata5M dataset.

This model was trained on tail entity prediction in a knowledge graph, using graph context represented by the node's neighborhood.

Textual representations were obtained from Wikidata entity and relation titles. Entity descriptions were used to disambiguate entities that share the same title. When no disambiguation was possible even with descriptions, such entities were assigned unique numerical IDs.

The neighborhood for the input was obtained as follows:

1. sort the neighborhood triplets by the semantic similarity of their relations to the relation of the input triplet, so that more relevant information comes first in the context; 
2. truncate the sorted neighborhood to 512 triplets, which is always at least as large as the allowed context, and, after verbalization, set a maximum length of 512 for the model tokenizer so that the resulting verbalized neighborhood fits into the language model's context. 


Neighborhood sorting by semantic proximity was performed using a pre-calculated matrix of cosine similarities between relations in the KG; for the similarity calculation, the relations were embedded with a fastText model.
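
The construction above can be sketched in a few lines of Python. The function name, the layout of the pre-calculated similarity matrix (`rel_sim`), and the verbalization format (inferred from the widget example) are illustrative assumptions, not the exact training code:

```python
# A minimal sketch of the neighborhood preprocessing described above.
# `rel_sim[r1][r2]` is assumed to hold the precomputed cosine similarity between
# relation titles (embedded with fastText); a neighbor is a (relation, entity) pair.

def build_input(head_title, query_relation, neighbors, rel_sim, max_triplets=512):
    """Verbalize a query and its sorted, truncated neighborhood into one string."""
    # 1. Sort neighbors so that relations most similar to the query relation come first.
    sorted_neighbors = sorted(
        neighbors,
        key=lambda pair: rel_sim[query_relation][pair[0]],
        reverse=True,
    )
    # 2. Keep at most 512 triplets; the tokenizer later truncates to 512 tokens anyway.
    sorted_neighbors = sorted_neighbors[:max_triplets]

    # 3. Verbalize in the format used by the widget example above.
    parts = [f"predict [SEP] {head_title} {query_relation} [SEP]"]
    parts += [f"{relation} {entity} [SEP]" for relation, entity in sorted_neighbors]
    return " ".join(parts)

# Toy similarity values chosen only to reproduce the widget example's ordering.
example = build_input(
    "Arman Kirakossian",
    "country of citizenship",
    [("place of birth", "Yerevan"), ("instance of", "human"),
     ("occupation", "diplomat"), ("occupation", "historian"),
     ("ethnic group", "Armenians")],
    rel_sim={"country of citizenship": {
        "place of birth": 0.9, "instance of": 0.7,
        "occupation": 0.5, "ethnic group": 0.3}},
)
print(example)
```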

We trained the model on the Wikidata5M dataset for approximately 5M iterations on 8xA100 GPUs using a batch size of 320.

To evaluate the model, we sample 50 times from the decoder for each input and then rank the predictions by their log probabilities. We achieve 0.319 Hits@1 on the test set. 
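
As an illustration of this procedure, the sketch below samples 50 sequences for a single input and ranks them by their summed token log-probabilities. It uses `model.compute_transition_scores` from recent `transformers` releases; decoding settings such as `max_new_tokens` are placeholders, not the values used in our evaluation:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/t5-wikidata5M-with-neighbors")
model = AutoModelForSeq2SeqLM.from_pretrained("DeepPavlov/t5-wikidata5M-with-neighbors")

text = ("predict [SEP] Arman Kirakossian country of citizenship [SEP] "
        "place of birth Yerevan [SEP] instance of human [SEP] "
        "occupation diplomat [SEP] occupation historian [SEP] ethnic group Armenians [SEP]")
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    out = model.generate(
        **inputs,
        do_sample=True,
        num_return_sequences=50,   # 50 samples per input, as in the evaluation
        max_new_tokens=32,         # placeholder decoding length
        output_scores=True,
        return_dict_in_generate=True,
    )

# Per-token log-probabilities of the sampled tokens, summed into sequence scores.
transition_scores = model.compute_transition_scores(
    out.sequences, out.scores, normalize_logits=True
)
gen_tokens = out.sequences[:, 1:]                   # drop the decoder start token
mask = gen_tokens != tokenizer.pad_token_id         # ignore padded positions
seq_scores = transition_scores.masked_fill(~mask, 0.0).sum(dim=1)

predictions = tokenizer.batch_decode(out.sequences, skip_special_tokens=True)
ranked = sorted(zip(predictions, seq_scores.tolist()), key=lambda p: p[1], reverse=True)
print(ranked[0][0])   # highest-ranked prediction
```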

One can load this model for their own use or further fine-tuning as follows:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/t5-wikidata5M-with-neighbors")
model = AutoModelForSeq2SeqLM.from_pretrained("DeepPavlov/t5-wikidata5M-with-neighbors")
```
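
Continuing from the snippet above, a quick sanity check is to run the widget example through greedy decoding (the decoding parameters here are illustrative):

```python
text = ("predict [SEP] Arman Kirakossian country of citizenship [SEP] "
        "place of birth Yerevan [SEP] instance of human [SEP] "
        "occupation diplomat [SEP] occupation historian [SEP] ethnic group Armenians [SEP]")
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, max_new_tokens=32)   # max_new_tokens is a placeholder
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```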