---
language: en
license: mit
tags:
- gpt2
- question-answering
- fine-tuned
---
# Fine-tuned Orion Model for Question Answering
This model is a fine-tuned version of the cuba6112/orion model, specialized for question-answering tasks.
## Model description
The model was fine-tuned on a custom dataset of question-answer pairs and generates free-form answers to questions across a range of topics.
## Intended uses & limitations
This model is intended for generating answers to general knowledge questions. It should not be used for sensitive or critical applications without human oversight.
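As a sketch of how the model might be loaded for inference, the snippet below uses the standard Transformers `AutoModelForCausalLM` API. The prompt template (`Question: …\nAnswer:`) and the base-model id `cuba6112/orion` are assumptions; substitute this repository's id and the prompt format actually used during fine-tuning.

```python
def format_prompt(question: str) -> str:
    # Hypothetical QA prompt template; the format used in fine-tuning may differ.
    return f"Question: {question}\nAnswer:"


def answer(question: str, model_id: str = "cuba6112/orion", max_new_tokens: int = 50) -> str:
    """Generate an answer with greedy decoding. Replace model_id with this repo's id."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    inputs = tokenizer(format_prompt(question), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Because the model is a causal language model rather than an extractive QA head, the generated text includes the prompt; downstream code should strip everything up to and including `Answer:` before using the result.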
## Training data
The model was fine-tuned on a custom dataset of question-answer pairs. The dataset covers various topics including geography, history, science, and more.
## Training procedure
The model was fine-tuned using the Hugging Face Transformers library. We used a causal language modeling objective with the following hyperparameters:
- Number of epochs: 3
- Batch size: 4
- Learning rate: `TrainingArguments` default (5e-5)
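The hyperparameters above can be wired into the Hugging Face `Trainer` roughly as follows. This is a minimal sketch, not the exact training script: the dataset loading, model, and tokenizer are assumed to be provided by the caller, and `DataCollatorForLanguageModeling` with `mlm=False` is the standard choice for a causal language modeling objective.

```python
# Hyperparameters listed above; learning rate is left at the
# transformers TrainingArguments default (5e-5).
TRAINING_KWARGS = {
    "num_train_epochs": 3,
    "per_device_train_batch_size": 4,
}


def build_trainer(model, tokenizer, train_dataset, output_dir="orion-qa-finetuned"):
    """Assemble a Trainer for causal-LM fine-tuning (hypothetical wiring)."""
    from transformers import (
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    args = TrainingArguments(output_dir=output_dir, **TRAINING_KWARGS)
    # mlm=False -> next-token (causal) objective rather than masked LM.
    collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
    return Trainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        data_collator=collator,
    )
```

Calling `build_trainer(...).train()` would then run the three epochs described above; `output_dir` here is a placeholder name.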
## Evaluation results
Evaluation metrics and results will be added soon.