
Model

llava-v1.5-7b-xtuner-pretrain is a LLaVA projector pretrained by XTuner from Vicuna-7B-v1.5 and CLIP-ViT-Large-patch14-336 on the LLaVA-Pretrain dataset.

The corresponding fine-tuned LLaVA model can be found at xtuner/llava-v1.5-7b-xtuner.
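To use the pretrained projector weights in a downstream LLaVA fine-tuning run, the repository can be downloaded from the Hugging Face Hub. Below is a minimal sketch using huggingface_hub; the local directory path is only illustrative.

from huggingface_hub import snapshot_download

# Download the pretrained projector checkpoint from the Hugging Face Hub.
# The repo_id comes from this model card; local_dir is an illustrative choice.
local_dir = snapshot_download(
    repo_id="xtuner/llava-v1.5-7b-xtuner-pretrain",
    local_dir="./llava-v1.5-7b-xtuner-pretrain",
)
print(f"Projector weights downloaded to: {local_dir}")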

Citation

@misc{2023xtuner,
    title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
    author={XTuner Contributors},
    howpublished = {\url{https://github.com/InternLM/xtuner}},
    year={2023}
}