🧘🏻‍♂️ KarmaVLM (相生)

👏 Introduction

KarmaVLM is a family of high-efficiency, powerful visual language models (VLMs) pretrained at scale on interleaved image-text data, enabling content comprehension, recognition, and multi-round conversations about images.

🎉 News

  • [2024/02] KarmaVLM is released.

⚡️ Features

KarmaVLM offers the following features:

  • High Efficiency: KarmaVLM focuses on exploring the capabilities of small-parameter models on multimodal tasks. As a result, KarmaVLM can be deployed efficiently on most GPUs and personal computers, and even on edge devices such as mobile phones.

  • Multi-round text-image conversations: KarmaVLM can take both text and images as inputs and produce text outputs. Currently, it supports multi-round visual question answering with one image.

  • Strong image comprehension: KarmaVLM is adept at analyzing visuals, making it an efficient tool for tasks like extracting, organizing, and summarizing information from images.

🔥 Model Zoo

| Checkpoint | Download | Vision Encoder | LLM | MMBench |
| --- | --- | --- | --- | --- |
| KarmaVLM-Qwen1.5-0_5B | 🤗 / 🤖 | openai/clip-vit-large-patch14-336 | Qwen/Qwen1.5-0.5B | 53.5 |

Other benchmark evaluations are in progress!
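
To fetch a checkpoint locally before running inference, the huggingface_hub client can be used along the following lines. This is a minimal sketch: the repo id below is an assumption based on the checkpoint name in the table, so replace it with the actual Hub path if it differs.

from huggingface_hub import snapshot_download

# Download the full checkpoint into a local directory.
# NOTE: the repo id is assumed from the checkpoint name above and may differ.
local_dir = snapshot_download(
    repo_id="X-D-Lab/KarmaVLM-Qwen1.5-0_5B",
    local_dir="./checkpoints/KarmaVLM-Qwen1.5-0_5B",
)
print("Checkpoint downloaded to:", local_dir)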

👨‍💻 Quick Start

Requirements and Installation

git clone https://github.com/X-D-Lab/KarmaVLM.git
cd KarmaVLM

conda create -n karmavlm python=3.10 -y
conda activate karmavlm

pip install --upgrade pip  # enable PEP 660 support
pip install -e .
pip install -e ".[train]"
pip install flash-attn --no-build-isolation
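
As a quick post-install sanity check (not part of the official setup), you can verify that PyTorch sees a GPU and that flash-attn imports cleanly; if flash-attn is missing, inference can generally still fall back to the default attention implementation:

import torch

# Confirm the CUDA build of PyTorch can see a GPU.
print("CUDA available:", torch.cuda.is_available())

try:
    import flash_attn  # optional acceleration installed above
    print("flash-attn version:", flash_attn.__version__)
except ImportError:
    print("flash-attn is not installed; the default attention implementation will be used.")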

🌐 Demo

  1. CLI Inference
    python -m llava.serve.cli \
        --model-path /path/to/karmavlm/model \
        --model-type qwen \
        --image-file /path/to/the/test/image
    
  2. Gradio Web UI
  • Starting the Controller
    python -m llava.serve.controller \
        --host 0.0.0.0 \
        --port 10000

  • Launching the Gradio Web Server
    python -m llava.serve.gradio_web_server \
        --controller http://localhost:10000 \
        --model-list-mode reload \
        --share  # optional

  • Launching a Model Worker
    python -m llava.serve.model_worker \
        --host 0.0.0.0 \
        --controller http://localhost:10000 \
        --port 40000 \
        --worker http://localhost:40000 \
        --model-path /path/to/karmavlm/model \
        --model-type qwen
    

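If you prefer calling the model from Python rather than the CLI, a minimal sketch in the spirit of upstream LLaVA's run_llava helper is shown below. This assumes the KarmaVLM fork keeps LLaVA's llava.eval.run_llava.eval_model interface; the model and image paths are placeholders, and the exact argument set (for example, an equivalent of the CLI's --model-type qwen) may differ in this codebase.

from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

# Placeholder paths -- replace with your local checkpoint and test image.
model_path = "/path/to/karmavlm/model"
image_file = "/path/to/the/test/image"

# Minimal argument container mirroring upstream LLaVA's run_llava script.
args = type("Args", (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": "Describe this image.",
    "conv_mode": None,
    "image_file": image_file,
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

eval_model(args)
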
📋 License

This project uses certain datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of those licenses. The content of this project itself is licensed under the Apache License 2.0.

🙇 Architecture

We build our project on top of LLaVA: Large Language and Vision Assistant.
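
For intuition, the LLaVA-style design couples a CLIP vision encoder with a small projector that maps image features into the language model's embedding space; the projected visual tokens are then prepended to the text embeddings and processed by the LLM as a single sequence. The sketch below illustrates only this data flow, using hypothetical dimensions rather than KarmaVLM's actual configuration.

import torch
import torch.nn as nn

# Hypothetical dimensions for illustration only (not KarmaVLM's actual config).
VISION_DIM = 1024  # e.g., feature size of a CLIP ViT-L/14 encoder
LLM_DIM = 1024     # e.g., hidden size of a small Qwen1.5 LLM

class Projector(nn.Module):
    """Two-layer MLP mapping vision features into the LLM embedding space."""
    def __init__(self, vision_dim: int, llm_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        return self.net(image_features)

# Toy tensors standing in for encoder outputs and embedded prompt tokens.
image_features = torch.randn(1, 576, VISION_DIM)  # 576 patch tokens from a 336px ViT-L/14
text_embeddings = torch.randn(1, 32, LLM_DIM)     # 32 embedded text tokens

projector = Projector(VISION_DIM, LLM_DIM)
visual_tokens = projector(image_features)

# Visual tokens are prepended to the text tokens before entering the LLM.
llm_inputs = torch.cat([visual_tokens, text_embeddings], dim=1)
print(llm_inputs.shape)  # torch.Size([1, 608, 1024])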
