ApoTro committed
Commit
7b3f7c7
1 Parent(s): 32075f8

Update README.md

Files changed (1)
  1. README.md +3 -2
README.md CHANGED
@@ -10,14 +10,15 @@ This model was trained on slightly adapted code from [run_t5_mlm_flax.py](https:
 If you want to know about training details or evaluation results, see [SlovakT5_report.pdf](https://huggingface.co/ApoTro/slovak-t5-small/resolve/main/SlovakT5_report.pdf). For evaluation, you can also run [SlovakT5_eval.ipynb](https://colab.research.google.com/github/richardcepka/notebooks/blob/main/SlovakT5_eval.ipynb).
 
 ### How to use
+E.g., SlovakT5-small can be fine-tuned for the NER task.
 ```python
 from transformers import AutoTokenizer, T5ForConditionalGeneration
 
 tokenizer = AutoTokenizer.from_pretrained("ApoTro/slovak-t5-small")
 model = T5ForConditionalGeneration.from_pretrained("ApoTro/slovak-t5-small")
 
-input_ids = tokenizer("sst2 veta: Obraz je krajší, kvalitnejší a lepší.", return_tensors="pt").input_ids
-labels = tokenizer("pozitívna", return_tensors="pt").input_ids
+input_ids = tokenizer("ner veta: Do druhého kola postúpili Robert Fico a Andrej Kiska s rozdielom 4,0%.", return_tensors="pt").input_ids
+labels = tokenizer("per: Robert Fico | per: Andrej Kiska", return_tensors="pt").input_ids
 
 # the forward function automatically creates the correct decoder_input_ids
 loss = model(input_ids=input_ids, labels=labels).loss
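
The new example encodes NER targets as a flat string, `per: Robert Fico | per: Andrej Kiska`. As a minimal sketch of working with that convention (the `parse_ner_target` helper below is hypothetical and not part of the model repo; the format is inferred only from the example above), the target string can be split back into (tag, entity) pairs:

```python
def parse_ner_target(target: str) -> list[tuple[str, str]]:
    """Parse a target string like 'per: Robert Fico | per: Andrej Kiska'
    into (tag, entity) pairs. Hypothetical helper: the format is taken
    from the README example, not from a published specification."""
    entities = []
    for chunk in target.split("|"):
        chunk = chunk.strip()
        if not chunk:
            continue
        # split on the first ':' only, so entity names may contain colons
        tag, _, name = chunk.partition(":")
        entities.append((tag.strip(), name.strip()))
    return entities

print(parse_ner_target("per: Robert Fico | per: Andrej Kiska"))
# → [('per', 'Robert Fico'), ('per', 'Andrej Kiska')]
```

Such a helper would be used after `model.generate(...)` and `tokenizer.decode(...)` to turn the generated target string back into structured entities.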