
FLAN-T5-Large fine-tuned for History Q&A Generation

This model is a fine-tuned version of google/flan-t5-large on the ambrosfitz/just_history_large_mc history question-answer dataset.

Model description

This model is designed to generate multiple-choice questions, answers, and explanations based on historical text inputs.

Intended uses & limitations

This model is intended for educational purposes and to assist in creating history-related quiz materials.
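
A minimal usage sketch, assuming the standard Hugging Face transformers seq2seq API. The instruction template below is an assumption, since the exact prompt format used during fine-tuning is not documented on this card:

```python
def build_prompt(passage: str) -> str:
    # Hypothetical instruction template -- adjust to match the actual
    # format the model was fine-tuned with.
    return ("Generate a multiple-choice question, answer, and explanation "
            f"for the following historical text: {passage}")

def generate_qa(passage: str,
                model_id: str = "ambrosfitz/history-qa-flan-t5-large") -> str:
    # Imported here so build_prompt stays usable without transformers installed.
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
    inputs = tokenizer(build_prompt(passage), return_tensors="pt",
                       truncation=True)
    outputs = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Keep inputs within the model's context window; long passages should be split before prompting.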

Training and evaluation data

The model was trained on the dataset ambrosfitz/just_history_large_mc.

Training procedure

The model was trained using the following hyperparameters:

  • Number of epochs: 2
  • Batch size: 1
  • Learning rate: not reported

Results

Test set results:

  • Evaluation loss: 0.5760563611984253
  • Evaluation runtime: 81.09 s
  • Samples per second: 24.665
  • Steps per second: 24.665
  • Epochs: 2.0
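
Since the reported eval_loss is a mean token-level cross-entropy (the transformers default for seq2seq evaluation, assumed here), it can be converted to a test-set perplexity with the standard exp(loss) relation:

```python
import math

# Perplexity = exp(mean token cross-entropy); eval_loss is the value
# reported in the test-set results above.
eval_loss = 0.5760563611984253
perplexity = math.exp(eval_loss)
print(f"test perplexity: {perplexity:.3f}")  # ~1.779
```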

Model details

  • Model size: 783M parameters
  • Tensor type: F32 (Safetensors)