A fine-tuned version of xtuner/llava-llama-3-8b-v1_1 on the gokaygokay/random_instruct_docci dataset.
Install dependencies:

```shell
pip install git+https://github.com/haotian-liu/LLaVA.git --no-deps
pip install lmdeploy
```
Run inference with lmdeploy:

```python
# Google Colab error fix: allow a nested asyncio event loop
import nest_asyncio
nest_asyncio.apply()

from lmdeploy import pipeline
from lmdeploy.vl import load_image

# Load the model into an inference pipeline
pipe = pipeline('gokaygokay/llava-llama3-docci')

# Fetch a test image and generate a description
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
response = pipe(('describe this image', image))
print(response)
```
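Sampling behavior can be controlled through lmdeploy's `GenerationConfig`. A minimal sketch follows; the `max_new_tokens`, `top_p`, and `temperature` values are illustrative defaults, not settings tuned for this model:

```python
from lmdeploy import pipeline, GenerationConfig
from lmdeploy.vl import load_image

pipe = pipeline('gokaygokay/llava-llama3-docci')

# Illustrative sampling settings; adjust for your use case.
gen_config = GenerationConfig(max_new_tokens=512, top_p=0.8, temperature=0.7)

image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
response = pipe(('describe this image', image), gen_config=gen_config)
print(response.text)
```

Passing a list of `(prompt, image)` tuples to `pipe` runs the requests as a batch and returns one response per input.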