
Deploy a Space as an Inference Endpoint

_This is a fork of the naver-clova-ix/donut-base-finetuned-cord-v2 Space._

This repository implements a custom container for 🤗 Inference Endpoints using a Gradio space.

To deploy this model as an Inference Endpoint, select "Custom" as the task and configure a custom container image:

  • CPU image: `philschmi/gradio-api:cpu`
  • GPU image: `philschmi/gradio-api:gpu`
  • Port: `7860`
  • Health route: `/` (the default)

Also make sure to pass `server_name="0.0.0.0"` in your `launch()` call so that requests are correctly proxied.

If you want to use the UI with the Inference Endpoint, you have to select "Public" as the endpoint type and add authentication through Gradio.
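Putting the points above together, a minimal sketch of the `launch()` call in the Space's `app.py` might look like this (the `auth` credentials are placeholders, and `demo` stands in for the Space's existing Gradio interface):

```python
import gradio as gr

# demo = gr.Interface(...)  # the Space's existing interface definition

demo.launch(
    server_name="0.0.0.0",  # bind all interfaces so the endpoint proxy can reach Gradio
    server_port=7860,       # matches the Port configured above
    auth=("user", "pass"),  # placeholder credentials to protect the public UI
)
```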

Example API Request Payload

Get an image you want to use, e.g.

```bash
wget https://datasets-server.huggingface.co/assets/naver-clova-ix/cord-v2/--/naver-clova-ix--cord-v2/train/0/image/image.jpg
```

Run inference (replace `ENDPOINT_URL` and `HF_TOKEN` with your endpoint URL and Hugging Face access token):

```python
import base64

import requests as r

ENDPOINT_URL = "https://<your-endpoint-url>"  # placeholder: your Inference Endpoint URL
HF_TOKEN = "hf_..."  # placeholder: your Hugging Face access token


def predict(path_to_image: str):
    # Encode the image as a base64 data URI, as expected by the Gradio API
    ext = path_to_image.split(".")[-1]
    prefix = f"data:image/{ext};base64,"
    with open(path_to_image, "rb") as f:
        img = f.read()

    payload = {"data": [prefix + base64.b64encode(img).decode("utf-8")]}
    response = r.post(
        f"{ENDPOINT_URL}/api/predict",
        headers={"Authorization": f"Bearer {HF_TOKEN}"},
        json=payload,
    )
    if response.status_code == 200:
        return response.json()
    raise Exception(f"Error: {response.status_code}")


prediction = predict(path_to_image="image.jpg")
```
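For reference, the payload above simply wraps the raw image bytes in a data URI. A small self-contained sketch of just that encoding step (the helper name `to_data_uri` is illustrative, not part of the Space's API):

```python
import base64


def to_data_uri(img_bytes: bytes, ext: str = "jpg") -> str:
    """Wrap raw image bytes in the data-URI string the Gradio API expects."""
    return f"data:image/{ext};base64," + base64.b64encode(img_bytes).decode("utf-8")


# First bytes of a JPEG header, just to show the output shape
print(to_data_uri(b"\xff\xd8\xff"))  # -> data:image/jpg;base64,/9j/
```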