---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
---

# Model Card for Mistral7B-v0.1-coco-caption-de

This model is a fine-tuned version of the Mistral-7B-v0.1 completion model, intended to produce German COCO-style captions.

The [coco-karpathy-opus-de dataset](https://huggingface.co/datasets/Jotschi/coco-karpathy-opus-de) was used to tune the model for German image caption generation.

## Model Details

### Model Description

- **Developed by:** [Jotschi](https://huggingface.co/Jotschi)
- **License:** [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Finetuned from model:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)

## Uses

The model is meant to be used in conjunction with a [BLIP2](https://huggingface.co/docs/transformers/model_doc/blip-2) Q-Former to enable image captioning, visual question answering (VQA), and chat-like conversations.
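
A minimal, text-only loading sketch is shown below, assuming the repository id `Jotschi/Mistral7B-v0.1-coco-caption-de` (taken from the model card title). In the intended multimodal setup, the prompt tokens would be preceded by projected BLIP-2 Q-Former image embeddings.

```python
# Sketch only: repository id and prompt format are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jotschi/Mistral7B-v0.1-coco-caption-de"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Plain text completion; without Q-Former image embeddings the output is
# not conditioned on any image.
prompt = "Ein Foto von"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```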

## Training Details

The preliminary [training script](https://github.com/Jotschi/lavis-experiments/tree/master/mistral-deepspeed) uses PEFT and DeepSpeed to execute the training.

### Training Data

* [coco-karpathy-opus-de dataset](https://huggingface.co/datasets/Jotschi/coco-karpathy-opus-de)
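
The dataset can be loaded directly with the `datasets` library; the sketch below only inspects the available splits and makes no assumptions about field names.

```python
# Sketch: load the German COCO caption dataset and inspect its structure.
from datasets import load_dataset

ds = load_dataset("Jotschi/coco-karpathy-opus-de")
print(ds)  # shows splits, column names, and row counts
```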

### Training Procedure

The model was trained with PEFT using 4-bit QLoRA and the following parameters; a configuration sketch follows the list.

* rank: 256
* alpha: 16
* gradient accumulation steps: 8
* batch size: 4
* input sequence length: 512
* learning rate: 2.0e-5
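
The sketch below maps these hyperparameters onto `peft` and `transformers` objects. The 4-bit quantization settings and LoRA target modules are assumptions and are not taken from the training script; the input sequence length of 512 would be enforced when tokenizing the dataset.

```python
# Sketch of a QLoRA setup matching the listed hyperparameters.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # assumed
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed
)

base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=256,                # rank
    lora_alpha=16,        # alpha
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)

training_args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    learning_rate=2.0e-5,
    bf16=True,  # assumed
)
```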

#### Postprocessing

The merged model was saved using the `PeftModel` API.
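
A minimal sketch of this step, assuming a locally saved adapter (the paths below are placeholders):

```python
# Sketch: merge the LoRA adapter into the base weights and save the result.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
peft_model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # placeholder

merged = peft_model.merge_and_unload()  # fold adapter weights into the base model
merged.save_pretrained("mistral7b-v0.1-coco-caption-de-merged")  # placeholder
```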

### Framework versions

- PEFT 0.8.2