LinaAlhuri committed on
Commit 2a4204a
Parent: f033eea

Update README.md

Files changed (1)
1. README.md (+1, -2)
README.md CHANGED
@@ -29,8 +29,7 @@ tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-base-arabic", cache_dir=
 
 ## Data
 
-This was done through a combination of crawling Wikipedia and using commonly used pre-existing image datasets such as [CC](https://ai.google.com/research/ConceptualCaptions/). One of the most challenging obstacles for multimodal technologies is the fact that Arabic has few data resources, making huge dataset construction difficult. Another is the degradation of translated datasets adapted from well-known publicly available datasets. Whether the choice is to use translated data or genuine data, it is difficult to achieve the desired results depending on only one source, as each choice has its pros and cons. As a result, the goal of this work is to construct the largest Arabic image-text pair collection feasible by merging diverse data sources. This technique takes advantage of the rich information in genuine datasets to compensate for information loss in translated datasets. In contrast, translated datasets contribute to this work with enough pairs that cover a wide range of domains, scenarios, and objects.
-
+The aim was to create a comprehensive Arabic image-text dataset by combining various data sources due to the scarcity of Arabic resources. Challenges included limited Arabic data and the quality of translated datasets. The approach involved merging genuine datasets for rich information and using translated datasets to cover diverse domains, scenarios, and objects, striking a balance between their respective pros and cons.
 
 | Dataset name | Images |
 | --- | --- |
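
The paragraph added in this commit describes assembling the dataset by merging genuine (crawled) image-text pairs with translated ones. As a minimal sketch of such a merge, assuming hypothetical file names and an `image_url`/`caption_ar` column layout that the commit does not specify:

```python
# Sketch: merging image-text pairs from several sources into one dataset,
# as the updated README describes. File names and columns are assumptions
# for illustration, not the authors' actual pipeline.
import pandas as pd

# Hypothetical per-source CSVs, each with columns: image_url, caption_ar
sources = [
    "wikipedia_pairs.csv",  # genuine Arabic captions crawled from Wikipedia
    "cc_translated.csv",    # Conceptual Captions pairs translated to Arabic
]

frames = [pd.read_csv(path) for path in sources]
merged = pd.concat(frames, ignore_index=True)

# Drop exact duplicates so overlapping sources do not inflate the pair count
merged = merged.drop_duplicates(subset=["image_url", "caption_ar"])
merged.to_csv("arabic_image_text_pairs.csv", index=False)
print(f"{len(merged)} image-text pairs after merging")
```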