---
frameworks:
- Pytorch
license: apache-2.0
tasks:
- visual-question-answering
- KarmaVLM
---
<h1 align="center">๐ง๐ปโโ๏ธ KarmaVLM (็ธ็) </h1>
<!-- <div align=center><img src ="./logo-github.png"/></div>
<p align="center">
<a href="https://github.com/X-D-Lab/KarmaVLM"><img src="https://img.shields.io/badge/GitHub-24292e" alt="github"></a>
<a href="https://huggingface.co/X-D-Lab"><img src="https://img.shields.io/badge/-HuggingFace-yellow" alt="HuggingFace"></a>
<a href="https://modelscope.cn/organization/X-D-Lab"><img src="https://img.shields.io/badge/ModelScope-blueviolet" alt="modelscope"></a>
<a href="https://openi.pcl.ac.cn/XD-LAB/KarmaVLM"><img src="https://img.shields.io/badge/-OpenI-337AFF" alt="OpenI"></a>
<a href="https://WiseModel.cn/models/X-D%20Lab"><img src="https://img.shields.io/badge/WiseModel-561253" alt="WiseModel"></a>
</p> -->
<div align="center">
[![GitHub license](https://img.shields.io/github/license/X-D-Lab/KarmaVLM)](https://github.com/X-D-Lab/KarmaVLM/blob/main/LICENSE)
[![GitHub Stars](https://img.shields.io/github/stars/X-D-Lab/KarmaVLM)](https://github.com/X-D-Lab/KarmaVLM/stargazers)
[![GitHub Forks](https://img.shields.io/github/forks/X-D-Lab/KarmaVLM)](https://github.com/X-D-Lab/KarmaVLM/fork)
[![GitHub Contributors](https://img.shields.io/github/contributors/X-D-Lab/KarmaVLM)](https://github.com/X-D-Lab/KarmaVLM/graphs/contributors)
</div>
# 👋 Introduction
[KarmaVLM](https://github.com/X-D-Lab/KarmaVLM) is a family of efficient and powerful visual language models (VLMs) pretrained at scale on interleaved image-text data, enabling content comprehension, recognition, and multi-round conversations about images.
# 🎉 News
* [2024/02] KarmaVLM is released.
# ⚡️ Features
KarmaVLM offers the following features:
- **High Efficiency**: KarmaVLM focuses on exploring the capabilities of small-parameter models on multimodal tasks. As a result, KarmaVLM can be deployed efficiently on most GPUs and personal computers, and even on edge devices such as mobile phones.
- **Multi-round text-image conversations**: KarmaVLM can take both text and images as inputs and produce text outputs. Currently, it supports multi-round visual question answering with one image.
- **Strong image comprehension**: KarmaVLM is adept at analyzing visuals, making it an efficient tool for tasks like extracting, organizing, and summarizing information from images.
# 🔥 Model Zoo
| Checkpoint | Download | Vision Encoder | LLM | MMBench |
| :----: | :----: | :----: | :----: | :----: |
| KarmaVLM-Qwen1.5-0_5B | 🤗 / 🤖 | openai/clip-vit-large-patch14-336 | Qwen/Qwen1.5-0.5B | 53.5 |
Other benchmark evaluations are in progress!
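
To fetch the checkpoint locally, one option is the `huggingface-cli` tool from `huggingface_hub`. The repository id below is an assumption inferred from the checkpoint name; verify it on the organization page before downloading:

```bash
# Assumed repo id -- confirm it on https://huggingface.co/X-D-Lab first
pip install -U "huggingface_hub[cli]"
huggingface-cli download X-D-Lab/KarmaVLM-Qwen1.5-0_5B --local-dir ./checkpoints/KarmaVLM-Qwen1.5-0_5B
```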
# 👨‍💻 Quick Start
## Requirements and Installation
```bash
git clone https://github.com/X-D-Lab/KarmaVLM.git
cd KarmaVLM
conda create -n karmavlm python=3.10 -y
conda activate karmavlm
pip install --upgrade pip # enable PEP 660 support
pip install -e .
pip install -e ".[train]"
pip install flash-attn --no-build-isolation
```
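
As a quick sanity check after installation, the following verifies that PyTorch sees a GPU and that the optional `flash-attn` build imports cleanly (a minimal sketch; adjust to your setup):

```bash
# Verify CUDA visibility and the flash-attn install
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import flash_attn; print(flash_attn.__version__)"
```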
## 🌏 Demo
1. CLI Inference
```bash
python -m llava.serve.cli \
--model-path /path/to/karmavlm/model \
--model-type qwen \
--image-file /path/to/the/test/image
```
2. Gradio Web UI
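- Starting the Controller

The web server and worker below both point at a controller on `http://localhost:10000`, so one must be running first. The command here follows the upstream LLaVA serving workflow that this project builds on; it is a sketch under that assumption rather than a command confirmed for this fork:

```bash
# Start the LLaVA-style controller that the web server and worker register with
python -m llava.serve.controller --host 0.0.0.0 --port 10000
```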
- Launching the Gradio Web Server
```bash
python -m llava.serve.gradio_web_server \
--controller http://localhost:10000 \
--model-list-mode reload \
--share # optional: create a publicly shareable Gradio link
```
- Launching the Model Worker
```bash
python -m llava.serve.model_worker \
--host 0.0.0.0 \
--controller http://localhost:10000 \
--port 40000 \
--worker http://localhost:40000 \
--model-path /path/to/karmavlm/model \
--model-type qwen
```
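
Once the worker has registered, you can sanity-check the deployment. Assuming the upstream LLaVA controller API is unchanged in this fork, its `/list_models` endpoint should report the served model:

```bash
# Assumes the LLaVA-style controller endpoint is retained in this fork
curl -X POST http://localhost:10000/list_models
```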
# 📃 License
This project utilizes certain datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of those original licenses. The content of this project itself is licensed under the [Apache License 2.0](./LICENSE).
# 🙇 Architecture
We build our project on top of [LLaVA](https://github.com/haotian-liu/LLaVA): Large Language and Vision Assistant.