---
license: mit
task_categories:
- visual-question-answering
language:
- tr
pretty_name: TurkishLLaVA Pretrain Dataset
tags:
- llava
- turkish-llava
- turkish-vqa
configs:
- config_name: main_data
data_files: data/pretrain_data.json
default: true
---

# 🔥 TurkishLLaVA Pretrain Dataset
This repository contains the dataset used for pretraining the Turkish-LLaVA-v0.1 model. It is a Turkish translation of the English pretraining dataset used in earlier LLaVA work by liuhaotian, produced with DeepL. Details of this dataset and a comparison with other datasets are presented in our paper (Soon..).
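For convenience, the data can be loaded with the Hugging Face `datasets` library. The snippet below is a minimal sketch; the repository id shown is a placeholder, so substitute the actual Hub path of this dataset.

```python
from datasets import load_dataset

# Placeholder repository id; replace it with the actual Hub path of this dataset.
dataset = load_dataset(
    "cosmos/Turkish-LLaVA-Pretrain",  # hypothetical id
    name="main_data",                 # config defined in the card metadata above
    split="train",
)
print(dataset[0])
```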
## Pretraining Configuration
The pretraining process trains only the projection matrix. This matrix is crucial because it maps the features extracted by the image encoder into the language model's input space. Training was conducted with the following configuration (an illustrative sketch of these settings follows the list):
- Training Duration: 7 hours
- GPUs Used: 4 x A100
- Batch Size: 16 per GPU
- Learning Rate Scheduler: Cosine
- Learning Rate: 1e-3
- Gradient Accumulation: 4
- Epochs: 1
- Warmup Ratio: 3%
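As a rough illustration only, these hyperparameters can be expressed with Hugging Face `TrainingArguments`. This is not the authors' training script, and the `output_dir` value is hypothetical.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters in TrainingArguments form.
args = TrainingArguments(
    output_dir="./checkpoints/projector-pretrain",  # hypothetical path
    num_train_epochs=1,
    per_device_train_batch_size=16,  # per GPU; 4 x A100 in the reported run
    gradient_accumulation_steps=4,   # effective batch size: 16 * 4 GPUs * 4 = 256
    learning_rate=1e-3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,               # 3% warmup
)
```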
## Dataset Description
The dataset used for this pretraining is a Turkish version of the English dataset employed in prior research. The translation was carefully executed to preserve the nuances and context of the original data. In this pretraining phase, the model only learns to interpret the output of the image encoder, focusing on how to align visual information with the language model. As a result, the model is not yet capable of engaging in conversations or handling task-specific queries.
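The JSON file is expected to follow a conversation-style layout comparable to the original LLaVA pretraining data. The snippet below sketches how a record might be read under that assumption; the field names (`image`, `conversations`, etc.) come from the original LLaVA format and may differ here.

```python
import json

# Assumed record layout, based on the original LLaVA pretraining JSON schema;
# the field names below are an assumption and may differ in this dataset.
with open("data/pretrain_data.json", encoding="utf-8") as f:
    records = json.load(f)

sample = records[0]
print(sample["image"])                       # relative path to the paired image
for turn in sample["conversations"]:
    print(turn["from"], ":", turn["value"])  # prompt (with <image>) and Turkish caption
```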
## Citation
If you use this dataset or the pretraining setup in your research, please consider citing our paper (Soon..).
## Contact
If you encounter any problems or have any suggestions, feel free to reach out to us or open a pull request.
COSMOS AI Research Group, Yildiz Technical University Computer Engineering Department
https://cosmos.yildiz.edu.tr/
Email: cosmos@yildiz.edu.tr