---
tags:
- Question(s) Generation
metrics:
- rouge
model-index:
- name: consciousAI/question-generation-auto-hints-t5-v1-base-s-q-c
  results: []
---

# Auto Question Generation

The model is intended for auto and/or hint-enabled question generation tasks. It is expected to produce one or more questions from the provided context.

[Live Demo: Question Generation](https://huggingface.co/spaces/consciousAI/question_generation)

Including this one, five models were trained with different training sets; the demo compares all of them in one go. The individual models are available at the links below:

[Auto Question Generation v1](https://huggingface.co/consciousAI/question-generation-auto-t5-v1-base-s)

[Auto Question Generation v2](https://huggingface.co/consciousAI/question-generation-auto-t5-v1-base-s-q)

[Auto Question Generation v3](https://huggingface.co/consciousAI/question-generation-auto-t5-v1-base-s-q-c)

[Auto/Hints based Question Generation v1](https://huggingface.co/consciousAI/question-generation-auto-hints-t5-v1-base-s-q)

The model can be used as shown below:

```
import torch
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer
)

model_checkpoint = "consciousAI/question-generation-auto-hints-t5-v1-base-s-q-c"

device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

## Input with prompt; append your passage after the "question_context: " prefix
context = "question_context: "
encodings = tokenizer.encode(context, return_tensors='pt', truncation=True, padding='max_length').to(device)

## You can play with many hyperparameters to condition the output; see the demo
output = model.generate(encodings,
                        #max_length=300,
                        #min_length=20,
                        #length_penalty=2.0,
                        num_beams=4,
                        #early_stopping=True,
                        #do_sample=True,
                        #temperature=1.1
                       )

## Multiple questions are expected to be delimited by '?'. You can write a small wrapper to format them elegantly; see the demo.
questions = [tokenizer.decode(ids, clean_up_tokenization_spaces=False, skip_special_tokens=False) for ids in output]
```

## Training and evaluation data

A combination of SQuAD and QNLI.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 1.9372        | 1.0   | 942  | 1.4811          | 0.5555 | 0.3861 | 0.5243 | 0.5237    |
| 1.2665        | 2.0   | 1884 | 1.4050          | 0.5688 | 0.4056 | 0.5385 | 0.539     |
| 0.955         | 3.0   | 2826 | 1.4131          | 0.5733 | 0.4101 | 0.5426 | 0.5436    |
| 0.7471        | 4.0   | 3768 | 1.4436          | 0.5769 | 0.4179 | 0.5464 | 0.5466    |
| 0.6382        | 5.0   | 4710 | 1.5165          | 0.5819 | 0.4231 | 0.5487 | 0.5491    |

### Framework versions

- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.0
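The usage snippet above decodes with `skip_special_tokens=False`, so each decoded string may contain T5 special tokens alongside several questions separated by `?`. The helper below is a minimal, illustrative sketch of the "small wrapper" mentioned in that snippet; `format_questions` is not part of the model's published code, just one reasonable way to clean the output.

```
def format_questions(decoded: str):
    """Split one decoded generation into a list of clean questions.

    Illustrative helper only (not part of the published model code):
    strips T5 special tokens left over when skip_special_tokens=False,
    then splits on '?' and re-attaches the question mark.
    """
    for special in ("<pad>", "</s>"):
        decoded = decoded.replace(special, "")
    chunks = [chunk.strip() for chunk in decoded.split("?")]
    return [chunk + "?" for chunk in chunks if chunk]

# Example: flatten the `questions` list produced by the usage snippet above.
# formatted = [q for decoded in questions for q in format_questions(decoded)]
```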
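For reference, the values listed under "Training hyperparameters" map onto standard Hugging Face `Seq2SeqTrainingArguments` fields. The sketch below only shows how those reported values could be wired up; it is an assumption rather than the original training script, and it omits dataset preprocessing and the `Seq2SeqTrainer` setup.

```
from transformers import Seq2SeqTrainingArguments

# Sketch only: mirrors the reported hyperparameters; the actual training
# script and data pipeline are not published on this card.
training_args = Seq2SeqTrainingArguments(
    output_dir="question-generation-auto-hints-t5-v1-base-s-q-c",  # hypothetical output path
    learning_rate=3e-4,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",   # assumption: matches the per-epoch validation table above
    predict_with_generate=True,    # assumption: required to compute ROUGE on generated questions
)
```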