API endpoint

```bash
curl -X POST \
    -H "Authorization: Bearer YOUR_ORG_OR_USER_API_TOKEN" \
    -H "Content-Type: application/json" \
    -d '"json encoded string"' \
    https://api-inference.huggingface.co/models/valhalla/t5-small-qg-hl
```
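The same endpoint can be called from Python with the `requests` library. This is a minimal sketch, assuming the standard Inference API payload format (`{"inputs": ...}`); `YOUR_ORG_OR_USER_API_TOKEN` is a placeholder, as in the curl example above:

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/valhalla/t5-small-qg-hl"
headers = {"Authorization": "Bearer YOUR_ORG_OR_USER_API_TOKEN"}

def query(payload):
    # POST the JSON payload and return the decoded JSON response
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

# Highlight the answer span with <hl> tokens and end the text with </s>
result = query({"inputs": "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"})
print(result)
```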

How to use this model directly from the 🤗/transformers library:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-small-qg-hl")
model = AutoModelWithLMHead.from_pretrained("valhalla/t5-small-qg-hl")
```

T5 for question-generation

This is a t5-small model trained for the answer-aware question generation task. The answer spans are highlighted within the text with special highlight tokens.

You can play with the model using the Inference API: just highlight the answer spans with `<hl>` tokens and end the text with `</s>`. For example:

```
<hl> 42 <hl> is the answer to life, the universe and everything. </s>
```
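To run this example locally with the loading snippet above, encode the highlighted text and call `generate`. This is a minimal sketch; the decoding parameters (`max_length`, `num_beams`) are assumptions, not settings specified by the card:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-small-qg-hl")
model = AutoModelWithLMHead.from_pretrained("valhalla/t5-small-qg-hl")

# Input format from the card: answer span wrapped in <hl> tokens, text ended with </s>
text = "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"
input_ids = tokenizer.encode(text, return_tensors="pt")

# Beam search settings here are illustrative defaults
output_ids = model.generate(input_ids, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```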

For more details see this repo.

Model in action πŸš€

You'll need to clone the repo.


```python
from pipelines import pipeline

nlp = pipeline("question-generation")
nlp("42 is the answer to life, universe and everything.")
# => [{'answer': '42', 'question': 'What is the answer to life, universe and everything?'}]
```