
AhmetZeer committed 525c77a (1 parent: 19e7ebd)

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -56,3 +56,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ data/pretrain_data.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,49 @@
---
license: mit
task_categories:
- visual-question-answering
language:
- tr
pretty_name: TurkishLLaVA Pretrain Dataset.
tags:
- llava
- turkish-llava
- turkish-vqa

configs:
- config_name: main_data
  data_files: data/train/pretrain_data.json
  default: true
---
# 🔥 TurkishLLaVA Pretrain Dataset

This repository contains the dataset used for pretraining the [Turkish-LLaVA-v0.1](https://huggingface.co/ytu-ce-cosmos/Turkish-LLaVA-v0.1) model. The dataset is a Turkish translation of the English dataset used in previous studies; the translation was performed with DeepL. The details of this dataset and its comparison with other datasets will be published in our [paper](#) (coming soon).

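For orientation, a minimal loading sketch with the Hugging Face `datasets` library is shown below. The repository id is a placeholder (this card does not state it); `main_data` is the configuration declared in the YAML front matter above, and treating the JSON file as a single `train` split is an assumption.

```python
# Hedged sketch: load the pretraining annotations with the `datasets` library.
from datasets import load_dataset

# Placeholder repository id -- replace with this dataset's actual Hub id.
REPO_ID = "ytu-ce-cosmos/Turkish-LLaVA-Pretrain"

# "main_data" is the config name declared in the card's YAML front matter;
# the JSON file it points to is assumed to load as a single "train" split.
ds = load_dataset(REPO_ID, name="main_data", split="train")
print(ds[0])
```

The images themselves are shipped separately in `data/pretrain_images.zip`, which is also added in this commit.
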
## Pretraining Configuration

The pretraining stage trains only the projection matrix. This matrix is crucial because it carries the information extracted by the image encoder into the language model. Training was conducted with the following configuration (a hedged sketch of equivalent training arguments follows the list):

- **Training Duration:** 7 hours
- **GPUs Used:** 4 x A100
- **Batch Size:** 16 per GPU
- **Learning Rate Scheduler:** Cosine
- **Learning Rate:** 1e-3
- **Gradient Accumulation:** 4
- **Epochs:** 1
- **Warmup Ratio:** 3%

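The sketch below maps the listed hyperparameters onto Hugging Face `TrainingArguments` for illustration only; it is not the authors' actual launch script, and the output path and `bf16` choice are assumptions.

```python
# Hedged sketch: the hyperparameters above expressed as TrainingArguments.
# In this stage only the multimodal projector would be trainable; the vision
# encoder and the language model stay frozen (see Dataset Description below).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./checkpoints/turkish-llava-pretrain",  # placeholder path
    num_train_epochs=1,                  # Epochs: 1
    per_device_train_batch_size=16,      # Batch size: 16 per GPU (4 x A100)
    gradient_accumulation_steps=4,       # Gradient accumulation: 4
    learning_rate=1e-3,                  # Learning rate: 1e-3
    lr_scheduler_type="cosine",          # Scheduler: cosine
    warmup_ratio=0.03,                   # Warmup ratio: 3%
    bf16=True,                           # assumption: mixed precision on A100 GPUs
)
```
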
## Dataset Description

The dataset used for this pretraining is a Turkish version of the English dataset employed in prior research; the translation was carefully executed to preserve the nuances and context of the original data. In this pretraining phase, the model only learns to interpret the output of the image encoder, i.e., how to align visual features with the language model. As a result, the pretrained model is not yet capable of engaging in conversations or handling task-specific queries.

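This card does not document the JSON schema. Assuming it follows the standard LLaVA pretraining layout (single-turn image-caption pairs), a record would look roughly like the following; the field names, id, and paths are illustrative and not confirmed by this repository.

```python
# Hypothetical record layout, assuming the standard LLaVA pretraining schema;
# field names, ids, and paths are placeholders, not confirmed by this card.
example_record = {
    "id": "000000123",                         # placeholder sample id
    "image": "000000123.jpg",                  # assumed relative path inside pretrain_images.zip
    "conversations": [
        {"from": "human", "value": "<image>\nGörseli kısaca açıkla."},  # Turkish prompt
        {"from": "gpt", "value": "Görselin kısa Türkçe açıklaması."},   # Turkish caption
    ],
}
```
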
## Citation

If you use this dataset or the pretraining setup in your research, please consider citing our [paper](#) (coming soon).

## Contact

If you encounter any problems or have any suggestions, feel free to reach out to us or open a pull request.

COSMOS AI Research Group, Yildiz Technical University, Computer Engineering Department
[https://cosmos.yildiz.edu.tr/](https://cosmos.yildiz.edu.tr/)
Email: cosmos@yildiz.edu.tr
data/pretrain_data.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4bf1723e5e8aed792888265c385e6475bb7fe557f64ea01ca9f3c6c86179ddb0
size 264522845
data/pretrain_images.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:888086202d3bd76df3918ebe2c60bb0439c8114a05eb0d17ac902103b2c29576
size 12872054106