Model Card for Llama-3B-QA-Enhanced
This model is a fine-tuned version of meta-llama/Llama-3.2-3B-Instruct, optimized for generating high-quality multiple-choice questions (MCQs) from input text. It combines Llama's general language understanding with specialized training for educational content generation.
Model Details
Model Description
This model is designed to automatically generate multiple-choice questions from input text, making it particularly useful for educators, content creators, and educational technology platforms.
- Developed by: Ahmed Othman
- Model type: Fine-tuned Language Model
- Language(s): English
- License: Apache 2.0
- Finetuned from model: meta-llama/Llama-3.2-3B-Instruct
Uses
Direct Use
The model can be used directly for:
- Generating multiple-choice questions from educational texts
- Creating assessment materials
- Automated quiz generation
- Educational content development
from transformers import AutoModelForCausalLM, AutoTokenizer

# Llama is a decoder-only (causal) model, so AutoModelForCausalLM is the correct class.
model = AutoModelForCausalLM.from_pretrained("AhmedOthman/Llama-3B-QA-Enhanced")
tokenizer = AutoTokenizer.from_pretrained("AhmedOthman/Llama-3B-QA-Enhanced")

text = "Your input text here"
inputs = tokenizer(text, return_tensors="pt", max_length=512, truncation=True)
outputs = model.generate(**inputs, max_new_tokens=256)
# Strip the prompt tokens so only the generated question is decoded.
mcq = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
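Because the base model is the instruct-tuned meta-llama/Llama-3.2-3B-Instruct, wrapping the input in a chat-style prompt may give better results. The exact prompt format used during fine-tuning is not documented here, so the instruction wording in the sketch below is an assumption, not the released recipe.

# Minimal sketch: the instruction text is an assumption, not the documented training prompt.
messages = [
    {"role": "user",
     "content": f"Write one multiple-choice question with four options (A-D) "
                f"and mark the correct answer, based on this text:\n\n{text}"}
]
prompt_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(prompt_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][prompt_ids.shape[1]:], skip_special_tokens=True))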
Out-of-Scope Use
This model should not be used for:
- Generating factually incorrect or misleading questions
- Creating questions about sensitive or controversial topics
- Replacing human expertise in high-stakes assessment development
Training Details
Training Data
The model was trained on a combination of:
- SQuAD (Stanford Question Answering Dataset)
- RACE (ReAding Comprehension from Examinations)
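The preprocessing pipeline is not published with the model. As a rough illustration, the sketch below shows one way RACE examples could be mapped to prompt/target pairs for question-generation fine-tuning; the dataset identifier, field names, and prompt wording are assumptions based on the public RACE release, not the actual training script.

from datasets import load_dataset

# Assumes the public RACE layout: article, question, options (list of 4), answer (letter A-D).
race = load_dataset("race", "all", split="train")

def to_mcq_pair(example):
    options = "\n".join(f"{letter}. {opt}" for letter, opt in zip("ABCD", example["options"]))
    prompt = f"Generate a multiple-choice question for the following passage:\n\n{example['article']}"
    target = f"{example['question']}\n{options}\nAnswer: {example['answer']}"
    return {"prompt": prompt, "target": target}

mcq_pairs = race.map(to_mcq_pair)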
Training Procedure
Training Hyperparameters
- Training regime: fp16 mixed precision
- Maximum sequence length: 512 tokens
- Learning rate: 2e-5
- Batch size: 16
- Number of epochs: 3
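The training script itself is not released. A minimal sketch of how the hyperparameters listed above map onto the Hugging Face TrainingArguments API is shown below; output_dir, logging, and checkpointing settings are placeholders.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama-3b-qa-enhanced",   # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=3,
    fp16=True,                           # fp16 mixed precision, as listed above
    logging_steps=50,
    save_strategy="epoch",
)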
Evaluation
Metrics
The model was evaluated using:
- BLEU score for question generation quality
- ROUGE score for answer relevance
- Accuracy of generated distractors
- Human evaluation for question quality
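For the automatic metrics, the Hugging Face evaluate library can compute BLEU and ROUGE; the sketch below uses made-up example strings, and the distractor-accuracy and human-evaluation protocols are not reproduced here.

import evaluate

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")

# Hypothetical generated question vs. a reference question.
predictions = ["What is the capital of France?"]
references = [["Which city is the capital of France?"]]

print(bleu.compute(predictions=predictions, references=references))
print(rouge.compute(predictions=predictions, references=[r[0] for r in references]))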
Limitations and Bias
- Limited to English language content
- May generate simpler questions for complex topics
- Performance varies with input text quality
- May reflect biases present in training data
Environmental Impact
- Base Model: Llama 3B
- Fine-tuning Hardware: Single A100 GPU
- Training Time: Approximately 8 hours
Citation
If you use this model in your research, please cite:
@misc{othman2024llama3bqa,
  author       = {Othman, Ahmed},
  title        = {Llama-3B-QA-Enhanced},
  year         = {2024},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/AhmedOthman/Llama-3B-QA-Enhanced}}
}
Model Card Contact
For questions or issues, please contact Ahmed Othman through the HuggingFace model repository.
Evaluation Results (self-reported, RACE)
- Accuracy: 0.850
- BLEU: 0.760
- ROUGE: 0.820