This is the repository of OmniCorpus-YT, which contains 10 million image-text interleaved documents.
- Repository: https://github.com/OpenGVLab/OmniCorpus
- Paper: https://arxiv.org/abs/2406.08418

The OmniCorpus dataset is a large-scale image-text interleaved dataset that pushes the boundaries of scale and diversity, encompassing **8.6 billion images** interleaved with **1,696 billion text tokens** from diverse sources, significantly surpassing previous datasets.

This dataset demonstrates several advantages over its counterparts:
1. **Larger data scale:** Our dataset is 1.7 times larger in images and 12.5 times larger in texts compared to the previously largest multimodal dataset, LAION-5B, while maintaining excellent data quality.

The OmniCorpus contains three sections:

- **OmniCorpus-CW**: sourced from Chinese internet resources, will be available on the [OpenDataLab](https://opendatalab.com/) platform.
- **OmniCorpus-YT**: samples YouTube video frames as images and collects subtitles as texts.

Code for pre-training, evaluation, main-body extraction, and filtering has been released in the official [repository](https://github.com/OpenGVLab/OmniCorpus). A pre-trained model is available [here](https://huggingface.co/Qingyun/OmniCorpus-InternVL).
# Usage
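As a minimal sketch of consuming an image-text interleaved sample: many interleaved datasets store each document as parallel `images`/`texts` lists, where exactly one of the two entries is populated at each position. The field names, this layout, and the sample values below are assumptions for illustration, not confirmed by this README — check the dataset card for the actual schema.

```python
# Sketch: linearize one interleaved document into an ordered sequence.
# NOTE: the parallel `images`/`texts` layout (one of the pair is None at
# each index) is an assumed convention, not taken from this README.
from typing import Optional


def linearize(
    images: list[Optional[str]], texts: list[Optional[str]]
) -> list[tuple[str, str]]:
    """Merge parallel image/text lists into an ordered (kind, value) sequence."""
    out: list[tuple[str, str]] = []
    for img, txt in zip(images, texts):
        if img is not None:
            out.append(("image", img))
        if txt is not None:
            out.append(("text", txt))
    return out


# Hypothetical OmniCorpus-YT-style sample: video frames interleaved
# with subtitle text (file names and subtitle are made up).
sample = {
    "images": ["frame_000.jpg", None, "frame_001.jpg"],
    "texts": [None, "a subtitle line", None],
}
print(linearize(sample["images"], sample["texts"]))
# [('image', 'frame_000.jpg'), ('text', 'a subtitle line'), ('image', 'frame_001.jpg')]
```

Once linearized, each document can be fed to an interleaved vision-language model as an ordered stream of image and text segments.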