
Dataset Details

Dataset Type:
Japanese LLaVA Pretrain is a localized version of the original LLaVA Pretrain dataset. It was translated into Japanese with the DeepL API and is intended to serve the same pretraining purposes for Japanese-language multimodal research.

Resources for More Information:
For information on the original dataset: LLaVA

License:
Usage must comply with the licenses of CC-3M and of BLIP (if you use its synthetic captions).

CC-3M: The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.

The license is the same as that of the original dataset.

Questions or Comments:
For questions or comments about the original dataset and models, please open an issue on the LLaVA GitHub repository.

Intended Use

Primary Intended Uses:
The primary use of this translated dataset is research on large multimodal models and chatbots in a Japanese context.

Primary Intended Users:
The primary intended users are researchers and hobbyists interested in computer vision, natural language processing, machine learning, and artificial intelligence, particularly those focusing on the Japanese language.


Note: This dataset is a translation of the original LLaVA-Pretrain, carried out using the DeepL API. The license remains the same as the original dataset.
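The original LLaVA-Pretrain data is distributed as a JSON list of records with `id`, `image`, and `conversations` fields. Assuming this translated version keeps the same schema (an assumption; the sample record and Japanese text below are illustrative placeholders, not actual dataset content), a minimal sketch of extracting image–caption pairs might look like this:

```python
import json

# One record in LLaVA-Pretrain style. Field names follow the original
# dataset's format; the values here are illustrative placeholders only.
sample = json.loads("""
[
  {
    "id": "004539375",
    "image": "00453/004539375.jpg",
    "conversations": [
      {"from": "human", "value": "<image>\\n画像を簡潔に説明してください。"},
      {"from": "gpt", "value": "夕日を背にした山並みの風景。"}
    ]
  }
]
""")

# Collect (image path, Japanese caption) pairs for pretraining:
# the caption is the assistant ("gpt") turn of each conversation.
pairs = [
    (rec["image"], turn["value"])
    for rec in sample
    for turn in rec["conversations"]
    if turn["from"] == "gpt"
]
print(pairs[0])
```

Since only the `"gpt"` turns are translated captions, filtering on the `from` field keeps the human prompts (which carry the `<image>` placeholder token) out of the caption list.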


