---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: image-text-to-text
tags:
- multimodal
- aria
base_model:
- rhymes-ai/Aria-Base-64K
---

# Aria-Chat-Preview Model Card

## Key features

- **Optimized for Multimodal Chat**: Unlike previous Aria models, Aria-Chat-Preview is optimized specifically for open-ended dialogs. We hope this version provides a seamless open-source multimodal chat experience.
- **Improved Stability**: We have improved stability on long outputs, reducing the likelihood of previously reported failure cases such as incomplete responses on Markdown tables or endless, repetitive list outputs.
- **Better Multilingual Abilities**: We have improved performance in non-English scenarios (Chinese, Spanish, French, Japanese, *etc.*), covering both multilingual OCR and multilingual dialogs.

<p align="center">
🔗 <a href="https://rhymes.ai/" target="_blank">Try Aria!</a> · 📖 <a href="https://www.rhymes.ai/blog-details/aria-first-open-multimodal-native-moe-model" target="_blank">Blog</a> · 📌 <a href="https://arxiv.org/pdf/2410.05993" target="_blank">Paper</a> · ⭐ <a href="https://github.com/rhymes-ai/Aria" target="_blank">GitHub</a> · 🟣 <a href="https://discord.com/invite/u8HxU23myj" target="_blank">Discord</a>
</p>

## Benchmark

This checkpoint is designed for real-world open-ended applications rather than benchmark optimization. Accordingly, we evaluated it on WildVision-Bench, where it shows a non-trivial improvement over the base Aria model:

| Model                     | Score    |
|---------------------------|----------|
| gpt-4o                    | 89.15    |
| **Aria-Chat-Preview**     | **81.3** |
| gpt-4-vision-preview      | 79.78    |
| Aria                      | 74.1     |
| Reka-Flash                | 64.65    |
| claude-3-opus-20240229    | 62.03    |
| yi-vl-plus                | 55.05    |
| liuhaotian/llava-v1.6-34b | 51.89    |
| claude-3-sonnet-20240229  | 50.0     |
| claude-3-haiku-20240307   | 37.83    |

## Quick Start

### Installation

```bash
pip install transformers==4.45.0 accelerate==0.34.1 sentencepiece==0.2.0 torchvision requests torch Pillow
pip install flash-attn --no-build-isolation

# Optional: for better inference performance, install grouped-gemm (building it may take 3-5 minutes)
pip install grouped_gemm==0.1.6
```
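
A quick environment sanity check can save debugging time later. The snippet below is not part of the official instructions, just a small sketch that verifies the pinned versions and the optional performance packages:

```python
# Quick sanity check for the environment pinned above (illustrative, not official).
import importlib.util
import importlib.metadata as md

import torch

for pkg, pinned in [("transformers", "4.45.0"), ("accelerate", "0.34.1"), ("sentencepiece", "0.2.0")]:
    installed = md.version(pkg)
    print(f"{pkg}: {installed}" + ("" if installed == pinned else f" (expected {pinned})"))

# flash-attn and grouped-gemm are optional performance dependencies.
print("flash_attn available:", importlib.util.find_spec("flash_attn") is not None)
print("grouped_gemm available:", importlib.util.find_spec("grouped_gemm") is not None)
print("CUDA available:", torch.cuda.is_available())
```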

### Inference

Aria has 25.3B total parameters; in bfloat16 (2 bytes per parameter) the weights occupy roughly 50 GB, so the model can be loaded on a single A100 (80GB) GPU with room left for activations.

Here is a code snippet showing how to use Aria:

```python
import requests
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id_or_path = "rhymes-ai/Aria-Chat-Preview"

# Load the model in bfloat16 and shard it across available GPUs.
model = AutoModelForCausalLM.from_pretrained(model_id_or_path, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)

processor = AutoProcessor.from_pretrained(model_id_or_path, trust_remote_code=True)

image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png"
image = Image.open(requests.get(image_url, stream=True).raw)

# A single user turn containing one image followed by a text question.
messages = [
    {
        "role": "user",
        "content": [
            {"text": None, "type": "image"},
            {"text": "what is the image?", "type": "text"},
        ],
    }
]

# Render the chat template, then tokenize the text and preprocess the image together.
text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=text, images=image, return_tensors="pt")
inputs["pixel_values"] = inputs["pixel_values"].to(model.dtype)
inputs = {k: v.to(model.device) for k, v in inputs.items()}

with torch.inference_mode(), torch.cuda.amp.autocast(dtype=torch.bfloat16):
    output = model.generate(
        **inputs,
        max_new_tokens=500,
        stop_strings=["<|im_end|>"],
        tokenizer=processor.tokenizer,
        do_sample=True,
        temperature=0.9,
    )

# Decode only the newly generated tokens, skipping the prompt.
output_ids = output[0][inputs["input_ids"].shape[1]:]
result = processor.decode(output_ids, skip_special_tokens=True)

print(result)
```
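
Since Aria-Chat-Preview targets open-ended dialogs, a natural next step is a follow-up turn. The sketch below reuses `model`, `processor`, `messages`, `image`, and `result` from the snippet above; the assistant-turn message format and the follow-up question are illustrative assumptions, not an official API guarantee:

```python
# Continue the dialog: append the model's reply, then ask a follow-up question.
# Assumption: assistant turns use the same {"text": ..., "type": "text"} content format.
messages.append({"role": "assistant", "content": [{"text": result, "type": "text"}]})
messages.append(
    {
        "role": "user",
        "content": [{"text": "Describe it in one sentence.", "type": "text"}],
    }
)

text = processor.apply_chat_template(messages, add_generation_prompt=True)
# The image from the first turn is passed again so it matches the template's image slot.
inputs = processor(text=text, images=image, return_tensors="pt")
inputs["pixel_values"] = inputs["pixel_values"].to(model.dtype)
inputs = {k: v.to(model.device) for k, v in inputs.items()}

with torch.inference_mode(), torch.cuda.amp.autocast(dtype=torch.bfloat16):
    output = model.generate(
        **inputs,
        max_new_tokens=500,
        stop_strings=["<|im_end|>"],
        tokenizer=processor.tokenizer,
        do_sample=True,
        temperature=0.9,
    )

print(processor.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```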

### Advanced Inference and Fine-tuning

We provide a [codebase](https://github.com/rhymes-ai/Aria) for more advanced usage of Aria, including vLLM inference, cookbooks, and fine-tuning on custom datasets.

## Citation

If you find our work helpful, please consider citing:

```bibtex
@article{aria,
  title={Aria: An Open Multimodal Native Mixture-of-Experts Model},
  author={Dongxu Li and Yudong Liu and Haoning Wu and Yue Wang and Zhiqi Shen and Bowen Qu and Xinyao Niu and Guoyin Wang and Bei Chen and Junnan Li},
  year={2024},
  journal={arXiv preprint arXiv:2410.05993},
}
```