Text2Text Generation
Transformers
Safetensors
English
encoder-decoder
Inference Endpoints
Bachstelze committed on
Commit caad6c1
1 Parent(s): d2ed460

Update README.md

Files changed (1)
  1. README.md +15 -0
README.md CHANGED
@@ -44,3 +44,18 @@ A minimalistic instruction model with an already good analysed and pretrained en
  So we can research the [Bertology](https://aclanthology.org/2020.tacl-1.54.pdf) with instruction-tuned models and investigate [what happens to BERT embeddings during fine-tuning](https://aclanthology.org/2020.blackboxnlp-1.4.pdf).
  We used the Huggingface API for [warm-starting](https://huggingface.co/blog/warm-starting-encoder-decoder) [BertGeneration](https://huggingface.co/docs/transformers/model_doc/bert-generation) with [Encoder-Decoder-Models](https://huggingface.co/docs/transformers/v4.35.2/en/model_doc/encoder-decoder) for this purpose.
 
+ ## Run the model with a longer output
+
+ ```python
+ from transformers import AutoTokenizer, EncoderDecoderModel
+
+ # load the fine-tuned seq2seq model and the corresponding tokenizer
+ model_name = "Bachstelze/instructionBERTtest"
+ model = EncoderDecoderModel.from_pretrained(model_name)
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+ input_text = "Write a poem about love, peace and pancake."  # avoid shadowing the built-in input()
+ input_ids = tokenizer(input_text, return_tensors="pt").input_ids
+ output_ids = model.generate(input_ids, max_new_tokens=200)  # allow up to 200 newly generated tokens
+ print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # drop [CLS]/[SEP]/padding from the output
+ ```
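
The warm-starting step referenced in the model card pairs a pretrained BERT encoder with a BERT decoder whose cross-attention weights are freshly initialized, following the linked blog post. A minimal sketch, assuming a generic `bert-base-cased` checkpoint (the actual base checkpoint is not stated on this page):

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Warm-start an encoder-decoder from two BERT checkpoints; the decoder's
# cross-attention layers are newly initialized and learned during instruction tuning.
# NOTE: "bert-base-cased" is an assumed placeholder, not confirmed by the model card.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-cased", "bert-base-cased"
)
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# BERT defines no dedicated start- or end-of-sequence tokens for generation,
# so [CLS] and [SEP] are reused for those roles
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id
```

After fine-tuning such a warm-started model on instruction data, the resulting checkpoint can be loaded with `EncoderDecoderModel.from_pretrained` as shown in the diff above.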