---
title: docker ollama model
short_description: REST API to serve a model using Docker and Ollama
emoji: 🔥
colorFrom: green
colorTo: blue
sdk: docker
pinned: false
---

To run inference, call the `/api/generate` endpoint from the command line:

```bash
curl https://myyim-docker-ollama-model.hf.space/api/generate -d '{ "model": "gemma3:1b", "prompt": "Write me a poem about generative AI.", "stream": false }'
```

(Reference: https://medium.com/p/1f5d8f871887)
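The curl example above sets `"stream": false`, so the whole completion comes back as a single JSON object whose `response` field holds the generated text. A stock Ollama server can also stream the answer as newline-delimited JSON chunks; the Python sketch below is only an illustration, assuming this Space behaves like a standard Ollama server and that the `gemma3:1b` model shown above is available:

```python
import json
import requests

# Streaming sketch: with "stream": true, each line of the response body is a
# JSON chunk; its "response" field holds the next piece of generated text and
# the final chunk has "done": true.
url = 'https://myyim-docker-ollama-model.hf.space/api/generate'
payload = {
    "model": "gemma3:1b",
    "prompt": "Write me a poem about generative AI.",
    "stream": True,
}

with requests.post(url, json=payload, stream=True) as response:
    response.raise_for_status()
    for line in response.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        print(chunk.get("response", ""), end="", flush=True)
        if chunk.get("done"):
            break
print()
```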

Alternatively, call the endpoint from Python with the `requests` library. The example below also sends an image to a multimodal model:

```python
import base64
import requests

# Read the image and encode it as a base64 string
image_path = 'your_image.jpg'
with open(image_path, 'rb') as image_file:
    image_data = image_file.read()
base64_encoded_data = base64.b64encode(image_data)
base64_string = base64_encoded_data.decode('utf-8')

headers = {
    'Content-Type': 'application/x-www-form-urlencoded',
}

model = "gemma3:4b"
prompt = "Describe the image in details and give it a caption."
ollamaURL = 'https://myyim-docker-ollama-model.hf.space/api/generate'

# Non-streaming request: the generated text is returned in the "response" field
data = {
    "model": model,
    "prompt": prompt,
    "stream": False,
    "images": [base64_string]
}

response = requests.post(ollamaURL, headers=headers, json=data)
print(response.json()["response"])
```

(Reference: https://medium.com/@manyi.yim/deploy-ollama-models-on-hugging-face-spaces-with-python-library-requests-503ac6b5ca04)
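Besides `/api/generate`, a standard Ollama server also answers `GET /api/tags` with the models it currently serves. As a quick sanity check that the Space is up and a model is loaded, a minimal sketch (assuming the Space forwards the stock Ollama REST API) could look like this:

```python
import requests

# List the models available on the server (standard Ollama /api/tags endpoint,
# assumed to be exposed by this Space).
base_url = 'https://myyim-docker-ollama-model.hf.space'
resp = requests.get(f'{base_url}/api/tags', timeout=30)
resp.raise_for_status()
for model in resp.json().get("models", []):
    print(model["name"])
```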