Fork of Geonmo/laion-aesthetic-predictor for an Image Aesthetic Predictor.

This repository implements a custom task for Geonmo/laion-aesthetic-predictor for 🤗 Inference Endpoints. The code for the customized handler is in handler.py.
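
For orientation, custom handlers for Inference Endpoints follow a small contract: a class named EndpointHandler that implements __init__ (for loading the model from the repository path) and __call__ (for serving a request). Below is a minimal sketch of that shape only; the loading and scoring logic are placeholders, not the actual contents of this repository's handler.py.

from typing import Any, Dict

class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` points at the repository contents; model weights are loaded here.
        # (Placeholder: the real handler loads the aesthetic predictor model.)
        self.model = None

    def __call__(self, data: Dict[str, Any]) -> Dict[str, Any]:
        # "inputs" carries the deserialized request payload (here: the image).
        inputs = data["inputs"]
        # Placeholder score; the real handler returns the predicted aesthetic score.
        return {"aesthetic score": 0.0}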

Test Handler locally.

This model & handler can be tested locally using the hf-endpoints-emulator.

  1. Clone the repository and install the requirements.
git lfs install
git clone https://huggingface.co/philschmid/laion-asthetic-endpoint

cd laion-asthetic-endpoint
pip install -r requirements.txt
  2. Install hf-endpoints-emulator
pip install hf-endpoints-emulator
  3. Run the emulator
hf-endpoints-emulator --handler handler.py
  4. Test the endpoint by sending a request (a Python equivalent is sketched after this list)
curl --request POST \
  --url http://localhost \
  --header 'Content-Type: image/jpeg' \
  --data-binary '@example1.jpg'
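
The same local test can be issued from Python. This sketch mirrors the cURL call above; it assumes the emulator is listening on http://localhost and that example1.jpg is in the working directory.

import requests

# Read the example image as raw bytes and POST it to the local emulator.
with open("example1.jpg", "rb") as f:
    image_bytes = f.read()

response = requests.post(
    "http://localhost",
    headers={"Content-Type": "image/jpeg"},
    data=image_bytes,
)
print(response.json())  # e.g. {"aesthetic score": ...}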

Run Request

The endpoint expects the image to be sent as binary data. Below are a cURL and a Python example.

cURL

  1. Get the image
wget https://huggingface.co/philschmid/laion-asthetic-endpoint/resolve/main/example1.jpg -O test.jpg
  2. Send the cURL request
curl --request POST \
  --url https://{ENDPOINT}/ \
  --header 'Content-Type: image/jpeg' \
  --header 'Authorization: Bearer {HF_TOKEN}' \
  --data-binary '@test.jpg'
  3. The expected output
{"aesthetic score": 6.764713287353516}

Python

  1. Get the image
wget https://huggingface.co/philschmid/laion-asthetic-endpoint/resolve/main/example1.jpg -O test.jpg
  2. Run the request
import requests

ENDPOINT_URL = ""  # your Inference Endpoint URL
HF_TOKEN = ""      # your Hugging Face access token

def predict(path_to_image: str):
    # Read the image as raw bytes; the endpoint expects a binary payload.
    with open(path_to_image, "rb") as i:
        b = i.read()
    headers = {
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": "image/jpeg",  # content type of the image
    }
    response = requests.post(ENDPOINT_URL, headers=headers, data=b)
    return response.json()

prediction = predict(path_to_image="test.jpg")
print(prediction)

  3. The expected output

{"aesthetic score": 6.764713287353516}