This is a d-Matrix functional reference of the clip-vit-base-patch32 model. The reference provides the following functional configurations:
Configuration | Explanation |
---|---|
BASELINE | a reference functionally equivalent to the original model |
BASIC | all linear algebraic operands quantized to MXINT8-64 (sketched below), and all other operations transformed to approximated kernel simulations |
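For intuition, MXINT8-64 stores each block of 64 values as 8-bit integers that share one power-of-two scale. The following is a minimal, self-contained PyTorch sketch of that idea; the function name `mxint8_64` is hypothetical, and this illustrates the format conceptually rather than reproducing the d-Matrix kernel:

```python
import torch

def mxint8_64(x: torch.Tensor) -> torch.Tensor:
    """Quantize-dequantize with one shared power-of-two scale per block of
    64 elements and 8-bit integer elements (conceptual illustration only)."""
    blocks = x.reshape(-1, 64)  # assumes x.numel() is a multiple of 64
    max_abs = blocks.abs().amax(dim=1, keepdim=True).clamp_min(1e-30)
    # Smallest power-of-two scale that fits the block into the int8 range
    scale = 2.0 ** torch.ceil(torch.log2(max_abs / 127.0))
    q = torch.round(blocks / scale).clamp(-128, 127)  # 8-bit elements
    return (q * scale).reshape(x.shape)

w = torch.randn(4, 64)
print((w - mxint8_64(w)).abs().max())  # small per-block quantization error
```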
## Usage
Install the d-Matrix Dmx_Compressor package first:

```sh
pip install dmx_compressor
```
The following example loads the model and evaluates it on a zero-shot image-text matching task:

```python
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel
from dmx.compressor.modeling import DmxModel

# Load the pretrained CLIP model and its processor
model = CLIPModel.from_pretrained("d-matrix/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("d-matrix/clip-vit-base-patch32")

# Prepare an example image and candidate captions
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="pt",
    padding=True,
)

# Wrap the model for d-Matrix functional simulation and run inference
model = DmxModel.from_torch(model)
outputs = model(**inputs)
```
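The wrapped model keeps the standard transformers CLIP output object, so image-text similarity scores can be read off as usual:

```python
# Image-text similarity logits; softmax over the captions gives probabilities
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
print(probs)
```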
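The example above runs the model as wrapped by `DmxModel.from_torch`. To evaluate the BASIC configuration from the table, the wrapped model needs to be transformed first. This is a minimal sketch assuming dmx_compressor exposes the configurations as `config_rules` presets; that import path and call are assumptions, so consult the package documentation for the exact API:

```python
from dmx.compressor import config_rules  # assumed import path

# Assumption: config_rules provides BASELINE/BASIC presets matching the table
model.transform(model.dmx_config, *config_rules.BASIC)
outputs = model(**inputs)  # re-run inference under the BASIC configuration
```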