Merge branch 'main' of https://huggingface.co/jpelhaw/t5-word-sense-disambiguation into main
README.md (new file, 47 lines)
---
language: "ISO 639-1 code for your language, or `multilingual`"
thumbnail: "url to a thumbnail used in social sharing"
tags:
- array
- of
- tags
license: "any valid license identifier"
datasets:
- array of dataset identifiers
metrics:
- array of metric identifiers
widget:
- text: "question: which description describes the word \" java \" best in the following context? descriptions: [ \" A drink consisting of an infusion of ground coffee beans \" , \" a platform-independent programming language \" , or \" an island in Indonesia to the south of Borneo \" ] context: I like to drink ' java ' in the morning ."
---

# T5-large for Word Sense Disambiguation

This is a T5-large checkpoint trained for word sense disambiguation on the [SemCor 3.0 dataset](http://lcl.uniroma1.it/wsdeval/).

Additional information about this model:

* [The t5-large model page](https://huggingface.co/t5-large)
* [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
* [Official implementation by Google](https://github.com/google-research/text-to-text-transfer-transformer)

The model can be loaded and used for few-shot word sense disambiguation like so:

```py
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("jpelhaw/t5-word-sense-disambiguation")
tokenizer = AutoTokenizer.from_pretrained("jpelhaw/t5-word-sense-disambiguation")

input = 'question: which description describes the word " peculiarities " best in the following context? \
descriptions: [ " an odd or unusual characteristic " , " a distinguishing trait " , or " something unusual -- perhaps worthy of collecting " ] \
context: The art of change-ringing is peculiar to the English , and , like most English \' peculiarities \' , unintelligible to the rest of the world .'

# tokenize the prompt and return PyTorch tensors
example = tokenizer(input, add_special_tokens=True, return_tensors="pt")

answer = model.generate(input_ids=example['input_ids'],
                        attention_mask=example['attention_mask'],
                        max_length=135)

# decode the generated ids back into text
print(tokenizer.decode(answer[0], skip_special_tokens=True))
# "a distinguishing trait"
```
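
The prompt follows a fixed `question: ... descriptions: [ ... ] context: ...` template, the same format as the widget example above. As a rough sketch of how prompts for other words could be assembled, the helper below is illustrative only and not part of this repository; the spacing around quotes mirrors the examples above:

```py
# Illustrative helper (not part of this repository): build a prompt in the
# question/descriptions/context format shown in the examples above.
def build_wsd_prompt(word, descriptions, context):
    quoted = ['" ' + d + ' "' for d in descriptions]
    options = " , ".join(quoted[:-1]) + " , or " + quoted[-1]
    return (f'question: which description describes the word " {word} " '
            f'best in the following context? '
            f'descriptions: [ {options} ] '
            f'context: {context}')

prompt = build_wsd_prompt(
    "java",
    ["A drink consisting of an infusion of ground coffee beans",
     "a platform-independent programming language",
     "an island in Indonesia to the south of Borneo"],
    "I like to drink ' java ' in the morning .")

# `prompt` can then be tokenized and passed to model.generate exactly as above.
```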