thomas-yanxin committed
Commit 6a9f334
1 Parent(s): 3e93d76

Upload folder using huggingface_hub

.mdl ADDED
Binary file (54 Bytes).
 
.msc ADDED
Binary file (869 Bytes).
 
.mv ADDED
@@ -0,0 +1 @@
+ Revision:master,CreatedAt:1708940334
README.md ADDED
@@ -0,0 +1,106 @@
+ ---
+ frameworks:
+ - Pytorch
+ license: apache-2.0
+ tasks:
+ - visual-question-answering
+ - KarmaVLM
+ ---
+ 
+ <h1 align="center">🧘🏻‍♂️ KarmaVLM (相生) </h1>
+ <div align=center><img src="./logo-github.png"/></div>
+ 
+ <p align="center">
+ <a href="https://github.com/X-D-Lab/KarmaVLM"><img src="https://img.shields.io/badge/GitHub-24292e" alt="github"></a>
+ <a href="https://huggingface.co/X-D-Lab"><img src="https://img.shields.io/badge/-HuggingFace-yellow" alt="HuggingFace"></a>
+ <a href="https://modelscope.cn/organization/X-D-Lab"><img src="https://img.shields.io/badge/ModelScope-blueviolet" alt="modelscope"></a>
+ <a href="https://openi.pcl.ac.cn/XD-LAB/KarmaVLM"><img src="https://img.shields.io/badge/-OpenI-337AFF" alt="OpenI"></a>
+ <a href="https://WiseModel.cn/models/X-D%20Lab"><img src="https://img.shields.io/badge/WiseModel-561253" alt="WiseModel"></a>
+ </p>
+ 
+ <div align="center">
+ 
+ [![GitHub license](https://img.shields.io/github/license/X-D-Lab/KarmaVLM)](https://github.com/X-D-Lab/KarmaVLM/blob/main/LICENSE)
+ [![GitHub Stars](https://img.shields.io/github/stars/X-D-Lab/KarmaVLM)](https://github.com/X-D-Lab/KarmaVLM/stargazers)
+ [![GitHub Forks](https://img.shields.io/github/forks/X-D-Lab/KarmaVLM)](https://github.com/X-D-Lab/KarmaVLM/fork)
+ [![GitHub Contributors](https://img.shields.io/github/contributors/X-D-Lab/KarmaVLM)](https://github.com/X-D-Lab/KarmaVLM/graphs/contributors)
+ 
+ </div>
+ 
+ # 👏 Introduction
+ KarmaVLM is a family of efficient and powerful visual language models (VLMs) pretrained on interleaved image-text data at scale, enabling content comprehension, recognition, and multi-round conversations about images.
+ 
+ # 🎉 News
+ * [2024/02] KarmaVLM is released.
+ 
+ # ⚡️ Features
+ KarmaVLM offers the following features:
+ 
+ - **High Efficiency**: KarmaVLM focuses on exploring the capabilities of models with a small parameter count on multimodal tasks. As a result, KarmaVLM can be deployed efficiently on most GPUs and personal computers, and even on edge devices such as mobile phones.
+ 
+ - **Multi-round text-image conversations**: KarmaVLM accepts both text and images as input and produces text output. Currently, it supports multi-round visual question answering over a single image.
+ 
+ - **Strong image comprehension**: KarmaVLM is adept at analyzing visuals, making it an efficient tool for tasks such as extracting, organizing, and summarizing information from images.
+ 
+ # 🔥 Model Zoo
+ | Checkpoint | Download | Vision Encoder | LLM | MMBench |
+ | :----: | :----: | :----: | :----: | :----: |
+ | KarmaVLM-Qwen1.5-0_5B | 🤗 / 🤖 | openai/clip-vit-large-patch14-336 | Qwen/Qwen1.5-0.5B | 53.5 |
+ 
+ Evaluations on other benchmarks are in progress!
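+ 
+ To fetch a checkpoint locally, the Hugging Face CLI can be used. A minimal sketch; the repository id `X-D-Lab/KarmaVLM-Qwen1.5-0_5B` is an assumption inferred from the checkpoint name in the table above:
+ 
+ ```
+ # Download the checkpoint (repo id assumed from the Model Zoo table)
+ huggingface-cli download X-D-Lab/KarmaVLM-Qwen1.5-0_5B --local-dir ./KarmaVLM-Qwen1.5-0_5B
+ ```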
+ 
+ # 👨‍💻 Quick Start
+ 
+ ## Requirements and Installation
+ 
+ ```
+ git clone https://github.com/X-D-Lab/KarmaVLM.git
+ cd KarmaVLM
+ 
+ conda create -n karmavlm python=3.10 -y
+ conda activate karmavlm
+ 
+ pip install --upgrade pip  # enable PEP 660 support
+ pip install -e .
+ pip install -e ".[train]"
+ pip install flash-attn --no-build-isolation
+ ```
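+ 
+ To verify that the editable install and the FlashAttention build succeeded, a quick import check can help (package names are taken from the install commands above):
+ 
+ ```
+ # Both imports should succeed without errors
+ python -c "import llava, flash_attn; print('environment OK')"
+ ```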
+ 
+ ## 🌏 Demo
+ 1. CLI Inference
+ ```
+ python -m llava.serve.cli \
+     --model-path /path/to/karmavlm/model \
+     --model-type qwen \
+     --image-file /path/to/the/test/image
+ ```
+ 2. Gradio Web UI
+ 
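+ - Starting the Controller
+ 
+   A hedged sketch, assuming KarmaVLM keeps upstream LLaVA's `llava.serve.controller` entry point; the web server and model worker below connect to it on port 10000.
+ ```
+ # Controller process that the web server and model workers register with
+ python -m llava.serve.controller \
+     --host 0.0.0.0 \
+     --port 10000
+ ```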
+ - Launching the Gradio Web Server
+ ```
+ python -m llava.serve.gradio_web_server \
+     --controller http://localhost:10000 \
+     --model-list-mode reload \
+     --share  # optional
+ ```
+ - Launching the Model Worker
+ ```
+ python -m llava.serve.model_worker \
+     --host 0.0.0.0 \
+     --controller http://localhost:10000 \
+     --port 40000 \
+     --worker http://localhost:40000 \
+     --model-path /path/to/karmavlm/model \
+     --model-type qwen
+ ```
+ 
+ # 📋 License
+ This project uses certain datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of those licenses. The content of this project itself is licensed under the [Apache License 2.0](./LICENSE).
+ 
+ # 🙇‍ Architecture
+ We build our project on top of [LLaVA](https://github.com/haotian-liu/LLaVA): Large Language and Vision Assistant.
+ 
added_tokens.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "<|endoftext|>": 151643,
+   "<|extra_0|>": 151646,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644
+ }
config.json ADDED
@@ -0,0 +1,42 @@
+ {
+   "_name_or_path": "./X-D-Lab/qwen/Qwen1.5-0_5B",
+   "architectures": [
+     "LlavaQwen2ForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 151643,
+   "eos_token_id": 151643,
+   "freeze_mm_mlp_adapter": false,
+   "hidden_act": "silu",
+   "hidden_size": 1024,
+   "image_aspect_ratio": "pad",
+   "image_projector_type": "mlp2x_gelu",
+   "initializer_range": 0.02,
+   "intermediate_size": 2816,
+   "max_position_embeddings": 32768,
+   "max_window_layers": 21,
+   "mm_hidden_size": 1024,
+   "mm_vision_tower": "openai/clip-vit-large-patch14",
+   "mm_projector_lr": null,
+   "mm_use_im_patch_token": false,
+   "mm_use_im_start_end": false,
+   "mm_vision_select_feature": "patch",
+   "mm_vision_select_layer": -2,
+   "model_type": "llava_qwen2",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "num_key_value_heads": 16,
+   "pad_token_id": 151646,
+   "rms_norm_eps": 1e-06,
+   "rope_theta": 1000000.0,
+   "sliding_window": 32768,
+   "tie_word_embeddings": true,
+   "tokenizer_padding_side": "right",
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.37.2",
+   "tune_mm_mlp_adapter": false,
+   "use_cache": true,
+   "use_mm_proj": true,
+   "use_sliding_window": false,
+   "vocab_size": 151936
+ }
configuration.json ADDED
File without changes
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "bos_token_id": 151643,
+   "eos_token_id": 151643,
+   "max_new_tokens": 2048,
+   "transformers_version": "4.37.2"
+ }
logo-github.png ADDED
merges.txt ADDED
The diff for this file is too large to render.
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e293f2297e18d64f29a8b64802a1632969a1f3cfa82a348a4ba5c554b0d0960c
+ size 1849792256
special_tokens_map.json ADDED
@@ -0,0 +1,21 @@
+ {
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>"
+   ],
+   "eos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<|extra_0|>",
+   "unk_token": {
+     "content": "<|extra_0|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer_config.json ADDED
@@ -0,0 +1,52 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "151643": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151644": {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151645": {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151646": {
+       "content": "<|extra_0|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>"
+   ],
+   "bos_token": null,
+   "chat_template": "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|endoftext|>",
+   "errors": "replace",
+   "model_max_length": 2048,
+   "pad_token": "<|extra_0|>",
+   "padding_side": "right",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": "<|extra_0|>"
+ }
trainer_state.json ADDED
The diff for this file is too large to render.
 
vocab.json ADDED
The diff for this file is too large to render.