---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Mistral7B-v0.1-coco-caption-de

This model is a fine-tuned version of the Mistral-7B-v0.1 completion model, intended to produce German COCO-style captions. It was tuned on the coco-karpathy-opus-de dataset for German image caption generation.
## Model Details

### Model Description
- Developed by: Jotschi
- License: Apache 2.0
- Finetuned from model: mistralai/Mistral-7B-v0.1
## Uses
The model is meant to be used in conjunction with a BLIP2 Q-Former to enable image captioning, visual question answering (VQA) and chat-like conversations.
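Once the BLIP2 Q-Former projects image features into the language model's embedding space, the language model itself is queried like any other causal LM. A minimal text-only sketch of loading the model and sampling a German caption is shown below; the repository id `Jotschi/Mistral7B-v0.1-coco-caption-de` and the bare `"Caption: "` prompt are assumptions, and the Q-Former wiring is omitted.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id -- adjust to wherever the merged weights are hosted.
MODEL_ID = "Jotschi/Mistral7B-v0.1-coco-caption-de"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# Without the Q-Former the model only sees text, so this samples a
# German COCO-style caption as a plain completion.
prompt = "Caption: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=48, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

For actual captioning or VQA, the Q-Former's output embeddings would be prepended to the prompt embeddings before generation.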
## Training Details

The preliminary training script uses PEFT and DeepSpeed to execute the training.
### Training Data

The model was trained on the coco-karpathy-opus-de dataset.
### Training Procedure

The model was trained using PEFT 4-bit QLoRA with the following parameters:

- rank: 256
- alpha: 16
- gradient accumulation steps: 8
- batch size: 4
- input sequence length: 512
- learning rate: 2.0e-5
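The hyperparameters above map onto a PEFT/transformers setup roughly as follows. This is a sketch, not the actual training script: the NF4 quantization settings, `target_modules` list, and dropout value are assumptions, since the card only specifies "4-bit QLoRA" and the listed numbers.

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 4-bit quantization for QLoRA (quant type and compute dtype are assumptions).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA hyperparameters from the card; target_modules and dropout are assumptions.
lora_config = LoraConfig(
    r=256,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="mistral7b-coco-caption-de",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    learning_rate=2.0e-5,
)
# The 512-token input length is enforced at tokenization time
# (truncation/padding), not via TrainingArguments.
```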
### Postprocessing

The merged model was saved using the PeftModel API.
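The merge step can be sketched as follows, assuming the adapter checkpoint path and output directory (both illustrative here): the LoRA weights are folded into the base model with `merge_and_unload` and the result is saved as a standalone checkpoint.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model and attach the trained QLoRA adapter
# (the adapter path is illustrative).
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "checkpoints/qlora-adapter")

# Fold the LoRA weights into the base model and save a standalone copy
# that no longer requires peft at inference time.
merged = model.merge_and_unload()
merged.save_pretrained("mistral7b-coco-caption-de-merged")
```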
### Framework versions
- PEFT 0.8.2