`suffix`
- [ ] `best_of`
- [ ] `echo`
- [ ] `logit_bias`
- [ ] `user`
- [ ] `n`

#### Notes

- `prompt` currently only accepts a string

### `/v1/models`

#### Notes

- `created` corresponds to when the model was last modified
- `owned_by` corresponds to the Ollama username, defaulting to `"library"`

### `/v1/models/{model}`

#### Notes

- `created` corresponds to when the model was last modified
- `owned_by` corresponds to the Ollama username, defaulting to `"library"`

### `/v1/embeddings`

#### Supported request fields

- [x] `model`
- [x] `input`
  - [x] string
  - [x] array of strings
  - [ ] array of tokens
  - [ ] array of token arrays
- [x] `encoding_format`
- [x] `dimensions`
- [ ] `user`

### `/v1/images/generations` (experimental)

> Note: This endpoint is experimental and may change or be removed in future versions.

Generate images using image generation models.

```python images.py
from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1/',
    api_key='ollama',  # required but ignored
)

response = client.images.generate(
    model='x/z-image-turbo',
    prompt='A cute robot learning to paint',
    size='1024x1024',
    response_format='b64_json',
)

print(response.data[0].b64_json[:50] + '...')
```

```javascript images.js
import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "http://localhost:11434/v1/",
  apiKey: "ollama", // required but ignored
});

const response = await openai.images.generate({
  model: "x/z-image-turbo",
  prompt: "A cute robot learning to paint",
  size: "1024x1024",
  response_format: "b64_json",
});

console.log(response.data[0].b64_json.slice(0, 50) + "...");
```

```shell images.sh
curl -X POST http://localhost:11434/v1/images/generations \
  -H "Content-Type: application/json" \
  -d '{
    "model": "x/z-image-turbo",
    "prompt": "A cute robot learning to paint",
    "size": "1024x1024",
    "response_format": "b64_json"
  }'
```

#### Supported request fields

- [x] `model`
- [x] `prompt`
- [x] `size` (e.g. "1024x1024")
- [x] `response_format` (only `b64_json` supported)
- [ ] `n`
- [ ] `quality`
- [ ] `style`
- [ ] `user`

### `/v1/responses`

> Note: Added in Ollama v0.13.3

Ollama supports the [OpenAI Responses API](https://platform.openai.com/docs/api-reference/responses). Only the non-stateful flavor is supported (i.e., there is no `previous_response_id` or `conversation` support).

#### Supported features

- [x] Streaming
- [x] Tools (function calling)
- [x] Reasoning summaries (for thinking models)
- [ ] Stateful requests

#### Supported request fields

- [x] `model`
- [x] `input`
- [x] `instructions`
- [x] `tools`
- [x] `stream`
- [x] `temperature`
- [x] `top_p`
- [x] `max_output_tokens`
- [ ] `previous_response_id` (stateful `/v1/responses` not supported)
- [ ] `conversation` (stateful `/v1/responses` not supported)
- [ ] `truncation`

## Models

Before using a model, pull it locally with `ollama pull`:

```shell
ollama pull llama3.2
```

### Default model names

For tooling that relies on default OpenAI model names such as `gpt-3.5-turbo`, use `ollama cp` to copy an existing model name to a temporary name:

```shell
ollama cp llama3.2 gpt-3.5-turbo
```

Afterwards, this new model name can be specified in the `model` field:

```shell
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'
```

### Setting the context size

The OpenAI API does not have a way of setting the context size for a model. If you need to change the context size, create a `Modelfile` which looks like:

```
FROM <some model>
PARAMETER num_ctx <context size>
```

Use the `ollama create mymodel` command to create a new model with the updated context size. Call the API with the updated model name:

```shell
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mymodel",
    "messages": [
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'
```
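Because the `/v1/embeddings` endpoint is a plain JSON POST, its request can be assembled without any SDK at all. Below is a minimal standard-library sketch; the `build_request` helper and the `all-minilm` model name are illustrative assumptions (substitute any embedding model you have pulled), not part of Ollama's API.

```python embeddings_request.py
import json
import urllib.request


def build_request(model, inputs, base_url='http://localhost:11434'):
    """Build an urllib Request for the OpenAI-compatible /v1/embeddings endpoint."""
    body = json.dumps({'model': model, 'input': inputs}).encode()
    return urllib.request.Request(
        f'{base_url}/v1/embeddings',
        data=body,
        headers={
            'Content-Type': 'application/json',
            'Authorization': 'Bearer ollama',  # required but ignored
        },
    )


if __name__ == '__main__':
    # Per the field list above, `input` may be a string or an array of strings.
    req = build_request('all-minilm', ['why is the sky blue?'])
    print(req.full_url)
    # Sending it is one call: urllib.request.urlopen(req) returns the JSON
    # response with data[0].embedding (a list of floats).
```

The `Authorization` header mirrors the `api_key='ollama'` convention used in the other examples in this document: the server requires the header to be present but ignores its value.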
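The non-stateful `/v1/responses` request described above can be sketched the same way. The `build_responses_request` helper is hypothetical, and `llama3.2` simply follows the pull example in this document; only the fields marked supported (`model`, `input`, `instructions`, `temperature`, and so on) are included.

```python responses_request.py
import json
import urllib.request


def build_responses_request(model, user_input, instructions=None,
                            base_url='http://localhost:11434'):
    """Build an urllib Request for the OpenAI-compatible /v1/responses endpoint."""
    body = {'model': model, 'input': user_input}
    if instructions is not None:
        body['instructions'] = instructions
    # No previous_response_id or conversation fields: Ollama only
    # supports the non-stateful flavor of the Responses API.
    return urllib.request.Request(
        f'{base_url}/v1/responses',
        data=json.dumps(body).encode(),
        headers={
            'Content-Type': 'application/json',
            'Authorization': 'Bearer ollama',  # required but ignored
        },
    )


if __name__ == '__main__':
    req = build_responses_request('llama3.2', 'Hello!',
                                  instructions='Reply in one sentence.')
    print(req.full_url)
    # urllib.request.urlopen(req) would send it and return the response JSON.
```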