---
tags:
- vision
- image-to-text
- endpoints-template
inference: true
pipeline_tag: image-to-text
base_model: Salesforce/blip-image-captioning-base
library_name: generic
---

# Fork of [Salesforce/blip-image-captioning-base](https://huggingface.co/Salesforce/blip-image-captioning-base) for an `image-to-text` Inference Endpoint

> Based on https://huggingface.co/sergeipetrov/blip_captioning

This repository implements a `custom` task for image captioning (`image-to-text`) for 🤗 Inference Endpoints. The code for the customized pipeline is in the `handler.py`.

To deploy this model as an Inference Endpoint, you have to select `Custom` as the task so that the `handler.py` file is used.

### expected Request payload

The image to be captioned, sent as binary data.

#### CURL

```
curl URL \
  -X POST \
  --data-binary @car.png \
  -H "Content-Type: image/png"
```

#### Python

```python
import requests

with open("car.png", "rb") as f:
    image_bytes = f.read()

response = requests.post(
    ENDPOINT_URL,
    headers={"Content-Type": "image/png"},
    data=image_bytes,
)
print(response.json())
```
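#### handler.py

For reference, here is a minimal sketch of what the `handler.py` for this endpoint could look like. It assumes the standard Inference Endpoints custom-handler interface (an `EndpointHandler` class with `__init__` and `__call__`) and the BLIP classes from `transformers`; the actual file in this repository may differ, and the assumption that binary image payloads arrive as a decoded image under the `"inputs"` key is noted in the comments.

```python
# Minimal sketch of a custom handler for BLIP image captioning.
# Assumes the standard Inference Endpoints custom-handler interface.
from typing import Any, Dict, List

import torch
from transformers import BlipForConditionalGeneration, BlipProcessor

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


class EndpointHandler:
    def __init__(self, path: str = ""):
        # Load the BLIP captioning model and its processor from the repository.
        self.processor = BlipProcessor.from_pretrained(path)
        self.model = BlipForConditionalGeneration.from_pretrained(path).to(device)
        self.model.eval()

    def __call__(self, data: Dict[str, Any]) -> List[str]:
        # Assumption: for binary image payloads (e.g. Content-Type: image/png),
        # the serving toolkit passes the decoded PIL image under "inputs".
        image = data["inputs"]
        inputs = self.processor(images=image, return_tensors="pt").to(device)
        with torch.no_grad():
            output_ids = self.model.generate(**inputs)
        caption = self.processor.decode(output_ids[0], skip_special_tokens=True)
        return [caption]
```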