Tags: Any-to-Any · Transformers · ONNX · Safetensors · minicpmo · feature-extraction · minicpm-o · minicpm-v · multimodal · full-duplex · custom_code · 4-bit precision · awq
Instructions for using openbmb/MiniCPM-o-4_5-awq with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use openbmb/MiniCPM-o-4_5-awq with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "openbmb/MiniCPM-o-4_5-awq",
    trust_remote_code=True,
    dtype="auto",
)
```

- Notebooks
- Google Colab
- Kaggle
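The processor configuration reproduced further down this page fixes the vision input geometry: each image slice is resized toward `scale_resolution` 448, cut into `patch_size` 14 patches, and surfaced to the language model as `image_feature_size` 64 visual tokens. A back-of-the-envelope sketch of those numbers (the 1024-to-64 compression via a fixed-query resampler, and the extra overview image on top of `max_slice_nums` slices, are assumptions carried over from earlier MiniCPM-V releases):

```python
# Vision-input arithmetic implied by the processor config in this repo.
# Values are copied from the config; how the patch grid is compressed to
# 64 tokens is an assumption based on earlier MiniCPM-V models, which use
# a resampler with a fixed number of query tokens per slice.

SCALE_RESOLUTION = 448   # target edge length for each image slice, in pixels
PATCH_SIZE = 14          # ViT patch edge, in pixels
IMAGE_FEATURE_SIZE = 64  # visual tokens the LLM receives per slice
MAX_SLICE_NUMS = 9       # cap on slices cut from a high-resolution image

patches_per_side = SCALE_RESOLUTION // PATCH_SIZE       # 448 / 14 = 32
patches_per_slice = patches_per_side ** 2               # 32 * 32 = 1024
compression = patches_per_slice // IMAGE_FEATURE_SIZE   # 1024 / 64 = 16x
# Assumed worst case: all 9 slices plus one overview image.
max_visual_tokens = IMAGE_FEATURE_SIZE * (MAX_SLICE_NUMS + 1)

print(patches_per_side, patches_per_slice, compression, max_visual_tokens)
```

Under these assumptions a single slice costs 64 tokens, and even a maximally sliced image stays in the hundreds of visual tokens rather than the thousands of raw ViT patches.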
The repository's preprocessor configuration:

```json
{
  "image_processor_type": "MiniCPMVImageProcessor",
  "feature_extractor_type": "MiniCPMAAudioProcessor",
  "auto_map": {
    "AutoProcessor": "processing_minicpmo.MiniCPMOProcessor",
    "AutoImageProcessor": "processing_minicpmo.MiniCPMVImageProcessor",
    "AutoFeatureExtractor": "processing_minicpmo.MiniCPMAAudioProcessor"
  },
  "processor_class": "MiniCPMOProcessor",
  "max_slice_nums": 9,
  "scale_resolution": 448,
  "patch_size": 14,
  "use_image_id": true,
  "image_feature_size": 64,
  "im_start": "<image>",
  "im_end": "</image>",
  "slice_start": "<slice>",
  "slice_end": "</slice>",
  "unk": "<unk>",
  "im_id_start": "<image_id>",
  "im_id_end": "</image_id>",
  "slice_mode": true,
  "audio_pool_step": 5,
  "norm_mean": [0.5, 0.5, 0.5],
  "norm_std": [0.5, 0.5, 0.5],
  "version": 4.5
}
```
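The `norm_mean` and `norm_std` values of 0.5 mean image pixels already scaled to [0, 1] are remapped to [-1, 1] before entering the vision encoder. A minimal sketch of that per-channel step, written with plain lists so it runs without the repo's custom `processing_minicpmo.py` (where the real preprocessing lives):

```python
# Per-channel normalization as specified by the processor config:
# normalized = (pixel - mean) / std, with mean = std = 0.5 per RGB channel.
NORM_MEAN = [0.5, 0.5, 0.5]
NORM_STD = [0.5, 0.5, 0.5]

def normalize_pixel(rgb):
    """Normalize one RGB pixel whose channels are already scaled to [0, 1]."""
    return [(c - m) / s for c, m, s in zip(rgb, NORM_MEAN, NORM_STD)]

# Black (0.0) maps to -1, mid-gray (0.5) to 0, white (1.0) to +1.
print(normalize_pixel([0.0, 0.5, 1.0]))  # -> [-1.0, 0.0, 1.0]
```

The symmetric mean/std choice keeps the input distribution centered at zero, which is the convention used by SigLIP-style vision encoders rather than the ImageNet statistics used by older CLIP models.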