---
language: en
license: mit
tags:
- gpt2
- question-answering
- fine-tuned
---

# Fine-tuned Orion Model for Question Answering

This model is a fine-tuned version of the `cuba6112/orion` model, specialized for question answering tasks.

## Model description

The model was fine-tuned on a custom dataset of question-answer pairs and can generate answers to questions across a range of topics.

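The card does not include a usage snippet or specify a prompt format, so the following is a minimal sketch using the `transformers` text-generation pipeline. The `Question:`/`Answer:` template is an assumption, and the repo id shown is the base model named in the card (`cuba6112/orion`); substitute the actual fine-tuned checkpoint.

```python
# Minimal usage sketch. The "Question:/Answer:" prompt template is an
# assumption -- the card does not document the format used in fine-tuning.
def build_prompt(question: str) -> str:
    """Wrap a question in the assumed fine-tuning prompt format."""
    return f"Question: {question}\nAnswer:"


if __name__ == "__main__":
    from transformers import pipeline

    # Substitute the actual repo id of the fine-tuned checkpoint here;
    # `cuba6112/orion` is the base model named in the card.
    generator = pipeline("text-generation", model="cuba6112/orion")
    prompt = build_prompt("What is the capital of France?")
    result = generator(prompt, max_new_tokens=50, do_sample=False)
    print(result[0]["generated_text"])
```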
## Intended uses & limitations

This model is intended for generating answers to general-knowledge questions. Like other generative language models, it can produce incorrect or fabricated answers, so it should not be used for sensitive or critical applications without human oversight.

## Training data

The model was fine-tuned on a custom dataset of question-answer pairs covering a range of topics, including geography, history, and science.

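The card does not say how the question-answer pairs were serialized for the causal language modeling objective. One common approach, sketched below purely as an assumption, is to flatten each pair into a single training string terminated by GPT-2's end-of-text token:

```python
# Hypothetical serialization of QA pairs for causal LM training.
# The template and EOS handling are assumptions, not the card's documented format.
EOS = "<|endoftext|>"  # GPT-2's end-of-text token


def qa_to_training_text(question: str, answer: str) -> str:
    """Flatten one QA pair into a single causal-LM training example."""
    return f"Question: {question}\nAnswer: {answer}{EOS}"


# Illustrative pairs in the spirit of the topics the card mentions.
pairs = [
    ("What is the capital of France?", "Paris"),
    ("Who wrote Hamlet?", "William Shakespeare"),
]
corpus = [qa_to_training_text(q, a) for q, a in pairs]
```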
## Training procedure

The model was fine-tuned with the Hugging Face Transformers library using a causal language modeling objective and the following hyperparameters:

- Number of epochs: 3
- Batch size: 4
- Learning rate: the `TrainingArguments` default (5e-5)

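The hyperparameters above translate into a `Trainer` setup along these lines. Everything beyond the stated epochs, batch size, and default learning rate (the checkpoint id, dataset preparation, output directory) is an assumed placeholder, not the card's actual training script:

```python
# Hyperparameters stated in the card; the learning rate is simply left at
# the TrainingArguments default.
HYPERPARAMS = {
    "num_train_epochs": 3,
    "per_device_train_batch_size": 4,
}

if __name__ == "__main__":
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    base = "cuba6112/orion"  # base model named in the card
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token

    # Placeholder training set: a short list of tokenized QA strings
    # (the actual custom dataset is not published with the card).
    texts = ["Question: What is the capital of France?\nAnswer: Paris"]
    train_dataset = [tokenizer(t) for t in texts]

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="orion-qa", **HYPERPARAMS),
        train_dataset=train_dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
```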
## Evaluation results

Evaluation metrics and results will be added soon.