url stringlengths 51 54 | repository_url stringclasses 1 value | labels_url stringlengths 65 68 | comments_url stringlengths 60 63 | events_url stringlengths 58 61 | html_url stringlengths 39 44 | id int64 1.78B 2.82B | node_id stringlengths 18 19 | number int64 1 8.69k | title stringlengths 1 382 | user dict | labels listlengths 0 5 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0 2 | milestone null | comments int64 0 323 | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | author_association stringclasses 4 values | sub_issues_summary dict | active_lock_reason null | draft bool 2 classes | pull_request dict | body stringlengths 2 118k ⌀ | closed_by dict | reactions dict | timeline_url stringlengths 60 63 | performed_via_github_app null | state_reason stringclasses 4 values | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/7387 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7387/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7387/comments | https://api.github.com/repos/ollama/ollama/issues/7387/events | https://github.com/ollama/ollama/issues/7387 | 2,616,920,983 | I_kwDOJ0Z1Ps6b-wuX | 7,387 | Ollama Docker Container Disk Usage Growing Exponentially | {
"login": "vanduc2514",
"id": 52255055,
"node_id": "MDQ6VXNlcjUyMjU1MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/52255055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vanduc2514",
"html_url": "https://github.com/vanduc2514",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 6 | 2024-10-27T22:42:58 | 2024-11-05T20:51:15 | 2024-11-05T20:50:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I've observed significant disk usage growth within the Ollama Docker container, even when using an external volume for model storage. Initially, the container occupies approximately 2GB of overlay2 disk space. However, after processing a few prompts with different models, the disk usage increases...
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7387/timeline | null | completed | false |
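A quick way to measure the growth described above, as a hedged sketch using standard Docker commands (the `ollama` volume and container names follow the project's documented Docker setup; yours may differ):

```bash
# Start Ollama with model storage on a named volume, per the official Docker
# instructions, so model blobs are written to the volume rather than overlay2.
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Show the container's writable-layer size; if the SIZE column keeps growing
# after prompts, data is being written inside the container, not the volume.
docker ps --size --filter name=ollama

# Break down disk usage across images, containers, and volumes.
docker system df -v
```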
https://api.github.com/repos/ollama/ollama/issues/2484 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2484/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2484/comments | https://api.github.com/repos/ollama/ollama/issues/2484/events | https://github.com/ollama/ollama/pull/2484 | 2,133,408,417 | PR_kwDOJ0Z1Ps5mz0TT | 2,484 | set `shutting_down` to `false` once shutdown is complete | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-02-14T01:44:48 | 2024-02-14T01:48:42 | 2024-02-14T01:48:42 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2484",
"html_url": "https://github.com/ollama/ollama/pull/2484",
"diff_url": "https://github.com/ollama/ollama/pull/2484.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2484.patch",
"merged_at": "2024-02-14T01:48:42"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2484/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4371 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4371/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4371/comments | https://api.github.com/repos/ollama/ollama/issues/4371/events | https://github.com/ollama/ollama/issues/4371 | 2,291,252,516 | I_kwDOJ0Z1Ps6Ikb0k | 4,371 | Feature request: ollama pull xxx (and run), add the feature of the limit download speed | {
"login": "edwin2jiang",
"id": 48372256,
"node_id": "MDQ6VXNlcjQ4MzcyMjU2",
"avatar_url": "https://avatars.githubusercontent.com/u/48372256?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/edwin2jiang",
"html_url": "https://github.com/edwin2jiang",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 2 | 2024-05-12T09:31:28 | 2024-05-23T20:08:25 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ## Why do we need this feature?
When I use `ollama run llama3`, my full network bandwidth is occupied by the download task, and while it runs I can't even open other websites on my PC.
## What will be like:
the command-line api may be like this:
```
ollama run llama3 --speed=500
```
... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4371/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/4371/timeline | null | null | false |
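The `--speed` flag shown above is the requester's proposal and does not exist in Ollama. Until something like it lands, bandwidth can sometimes be capped outside Ollama; a hedged sketch using the third-party `trickle` shaper, which works via LD_PRELOAD and therefore only affects dynamically linked binaries:

```bash
# Limit the pull to roughly 500 KB/s of download bandwidth for this run only.
# If the ollama binary is statically linked, trickle will have no effect.
trickle -d 500 ollama pull llama3
```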
https://api.github.com/repos/ollama/ollama/issues/3034 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3034/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3034/comments | https://api.github.com/repos/ollama/ollama/issues/3034/events | https://github.com/ollama/ollama/pull/3034 | 2,177,524,795 | PR_kwDOJ0Z1Ps5pKOV7 | 3,034 | patch: use default locale in wpm tokenizer | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-03-10T04:49:54 | 2024-03-10T05:12:13 | 2024-03-10T05:12:12 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3034",
"html_url": "https://github.com/ollama/ollama/pull/3034",
"diff_url": "https://github.com/ollama/ollama/pull/3034.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3034.patch",
"merged_at": "2024-03-10T05:12:12"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3034/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8158 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8158/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8158/comments | https://api.github.com/repos/ollama/ollama/issues/8158/events | https://github.com/ollama/ollama/issues/8158 | 2,748,224,813 | I_kwDOJ0Z1Ps6jzpUt | 8,158 | IBM Granite MoE & Dense-2b is very slow when KV Cache quantization is enabled | {
"login": "vYLQs6",
"id": 143073604,
"node_id": "U_kgDOCIchRA",
"avatar_url": "https://avatars.githubusercontent.com/u/143073604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vYLQs6",
"html_url": "https://github.com/vYLQs6",
"followers_url": "https://api.github.com/users/vYLQs6/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 3 | 2024-12-18T16:11:21 | 2025-01-03T16:59:53 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I found that all Granite MoE models + dense:2b run extremely slowly when KV Cache quantization is enabled. There doesn't seem to be any hit on model response quality, just speed, which is strange.
I'm using Windows 11 + RTX 4090
Here is an example using model: granite3.1-moe:3b-instruct-q8_0
### `set OLLAM... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8158/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8158/timeline | null | null | false |
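The reproduction above is cut off at the Windows `set OLLAMA...` command; for reference, a minimal sketch of the Linux/macOS equivalent using the documented server variables (flash attention must be on for KV cache quantization to apply, and exact version support is an assumption):

```bash
# Enable flash attention and quantize the KV cache, then start the server.
export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_KV_CACHE_TYPE=q8_0   # supported values: f16 (default), q8_0, q4_0
ollama serve
```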
https://api.github.com/repos/ollama/ollama/issues/4154 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4154/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4154/comments | https://api.github.com/repos/ollama/ollama/issues/4154/events | https://github.com/ollama/ollama/pull/4154 | 2,279,153,246 | PR_kwDOJ0Z1Ps5ujLbX | 4,154 | Centralize server config handling | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-05-04T18:48:27 | 2024-05-06T00:08:30 | 2024-05-06T00:08:26 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4154",
"html_url": "https://github.com/ollama/ollama/pull/4154",
"diff_url": "https://github.com/ollama/ollama/pull/4154.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4154.patch",
"merged_at": "2024-05-06T00:08:26"
This moves all the env var reading into one central module and logs the loaded config once at startup, which should help when troubleshooting user server logs.
Example server output
```
% ollama serve 2>&1 | head -1
2024/05/05 14:56:27 routes.go:989: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_LLM_LIBRARY: ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4154/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4154/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2421 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2421/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2421/comments | https://api.github.com/repos/ollama/ollama/issues/2421/events | https://github.com/ollama/ollama/issues/2421 | 2,126,544,021 | I_kwDOJ0Z1Ps5-wHyV | 2,421 | Offline models are not appearing on the Ollama server list | {
"login": "HariKunapareddy",
"id": 29164891,
"node_id": "MDQ6VXNlcjI5MTY0ODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/29164891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HariKunapareddy",
"html_url": "https://github.com/HariKunapareddy",
"followers_url": "https://api... | [] | closed | false | null | [] | null | 5 | 2024-02-09T06:19:27 | 2024-03-07T23:22:17 | 2024-03-07T23:22:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | **Problem**
I downloaded a model on a machine where Ollama is installed and which has internet access. Then I moved the model files from the folder usr/share/ollama/.ollama/models to a new machine which doesn’t have internet access. Ollama is not detecting those models and they are not visible as part of list ...
"login": "HariKunapareddy",
"id": 29164891,
"node_id": "MDQ6VXNlcjI5MTY0ODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/29164891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HariKunapareddy",
"html_url": "https://github.com/HariKunapareddy",
"followers_url": "https://api... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2421/timeline | null | completed | false |
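Models only show up in `ollama list` when both the `blobs/` and `manifests/` subdirectories are present; a hedged sketch of a complete offline transfer, assuming the default Linux install path and the documented `OLLAMA_MODELS` variable:

```bash
# On the online machine: archive the whole models directory. Copying blobs
# without manifests is the usual reason transferred models go undetected.
tar czf ollama-models.tar.gz -C /usr/share/ollama/.ollama models

# On the offline machine: unpack, point the server at the directory, verify.
# OLLAMA_MODELS must be set in the server process's environment to take effect.
tar xzf ollama-models.tar.gz -C /usr/share/ollama/.ollama
export OLLAMA_MODELS=/usr/share/ollama/.ollama/models
ollama list
```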
https://api.github.com/repos/ollama/ollama/issues/1686 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1686/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1686/comments | https://api.github.com/repos/ollama/ollama/issues/1686/events | https://github.com/ollama/ollama/issues/1686 | 2,054,721,496 | I_kwDOJ0Z1Ps56eI_Y | 1,686 | OLLAMA_ORIGINS=chrome-extension://etc does not work | {
"login": "sublimator",
"id": 525211,
"node_id": "MDQ6VXNlcjUyNTIxMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/525211?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sublimator",
"html_url": "https://github.com/sublimator",
"followers_url": "https://api.github.com/users/s... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2023-12-23T08:43:32 | 2024-01-06T01:20:11 | 2024-01-06T01:20:11 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ```
(base) ➜ ~ OLLAMA_ORIGINS=chrome-extension://dofdpnoclkigpakdndmhigfojjecnfln ollama serve
2023/12/23 15:37:56 images.go:737: total blobs: 85
2023/12/23 15:37:56 images.go:744: total unused blobs removed: 0
panic: bad origin: origins must contain '*' or include http://,https://
``` | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1686/timeline | null | completed | false |
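The panic above came from origin validation that only accepted `http://` and `https://` schemes. Later releases accept extension schemes as well; a minimal sketch of the working form (which release first allowed this is not stated here, so treat version support as an assumption):

```bash
# Allow one specific Chrome extension to call the local API cross-origin.
OLLAMA_ORIGINS="chrome-extension://dofdpnoclkigpakdndmhigfojjecnfln" ollama serve

# Or allow any extension ID (broader, correspondingly less strict).
OLLAMA_ORIGINS="chrome-extension://*" ollama serve
```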
https://api.github.com/repos/ollama/ollama/issues/1952 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1952/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1952/comments | https://api.github.com/repos/ollama/ollama/issues/1952/events | https://github.com/ollama/ollama/issues/1952 | 2,078,935,668 | I_kwDOJ0Z1Ps576gp0 | 1,952 | `CUDA out of memory` when using long prompts and context sizes | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 20 | 2024-01-12T14:14:45 | 2024-12-17T07:19:31 | 2024-05-10T23:23:27 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When using a large context window (via `num_ctx`) and providing a large prompt, Ollama may run out of memory. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1952/timeline | null | completed | false |
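Because the KV cache grows with the context window, the usual mitigation is to request a smaller `num_ctx` per call; a hedged curl sketch against the generate endpoint:

```bash
# Request a 4k context instead of the model default; the KV cache allocation
# shrinks accordingly, at the cost of truncating longer prompts.
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Summarize this document ...",
  "options": { "num_ctx": 4096 }
}'
```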
https://api.github.com/repos/ollama/ollama/issues/2631 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2631/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2631/comments | https://api.github.com/repos/ollama/ollama/issues/2631/events | https://github.com/ollama/ollama/issues/2631 | 2,146,510,435 | I_kwDOJ0Z1Ps5_8SZj | 2,631 | OpenLlama on Intel graphics card? | {
"login": "tambetvali",
"id": 119508958,
"node_id": "U_kgDOBx-P3g",
"avatar_url": "https://avatars.githubusercontent.com/u/119508958?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tambetvali",
"html_url": "https://github.com/tambetvali",
"followers_url": "https://api.github.com/users/tam... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 1 | 2024-02-21T11:25:04 | 2024-07-24T22:35:09 | 2024-07-24T22:35:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello!
I'm using CodeLlama-7b on Ubuntu 22.01, Visual Studio Code 1.86 or the Ollama command line tool, on an HP ProBook 440 G6 with an Intel® Core™ i3-8145U CPU @ 2.10GHz × 4, 16 GB memory and a Mesa Intel® UHD Graphics 620 (WHL GT2) graphics card, which is also called Intel Corporation WhiskeyLake-U GT2 [UHD Graphics 620].
I ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2631/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2631/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7477 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7477/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7477/comments | https://api.github.com/repos/ollama/ollama/issues/7477/events | https://github.com/ollama/ollama/issues/7477 | 2,630,958,746 | I_kwDOJ0Z1Ps6c0T6a | 7,477 | Submit 4 images to Ollama visual model, generate a large amount of log without any return | {
"login": "delubee",
"id": 36232221,
"node_id": "MDQ6VXNlcjM2MjMyMjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/36232221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/delubee",
"html_url": "https://github.com/delubee",
"followers_url": "https://api.github.com/users/delube... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 6 | 2024-11-03T04:31:53 | 2024-11-06T16:57:49 | 2024-11-06T16:57:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
code: python
```python
# -*- coding: utf-8 -*-
from ollama import Client
import pymupdf as fitz
import os
import base64
client = Client(host='http://127.0.0.1:11434')
pdf_path = './books/book2.pdf'
doc = fitz.open(pdf_path)
image_base64_list = []
for i in range(min(4, doc.page_... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7477/timeline | null | completed | false |
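The truncated script above collects base64-encoded PDF pages with the Python client. For comparison, the equivalent raw API call, as a sketch of a multi-image chat request; the model name and the placeholder base64 strings are illustrative:

```bash
# Each entry in "images" is one base64-encoded image; submitting several in a
# single message is the scenario that produced the runaway logs reported here.
curl http://127.0.0.1:11434/api/chat -d '{
  "model": "llama3.2-vision",
  "messages": [{
    "role": "user",
    "content": "Describe these pages",
    "images": ["<base64-page-1>", "<base64-page-2>"]
  }]
}'
```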
https://api.github.com/repos/ollama/ollama/issues/2046 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2046/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2046/comments | https://api.github.com/repos/ollama/ollama/issues/2046/events | https://github.com/ollama/ollama/issues/2046 | 2,087,802,806 | I_kwDOJ0Z1Ps58cVe2 | 2,046 | What quantization is used to quantize Phi-2? | {
"login": "bm777",
"id": 29865600,
"node_id": "MDQ6VXNlcjI5ODY1NjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/29865600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bm777",
"html_url": "https://github.com/bm777",
"followers_url": "https://api.github.com/users/bm777/follow... | [] | closed | false | null | [] | null | 2 | 2024-01-18T08:49:39 | 2024-01-19T04:05:23 | 2024-01-19T04:05:23 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Running Phi-2 with Ollama is faster than running Phi-2 in Rust with Candle. rust is taking 1.7 GB of my memory while Ollama only 788MB of memory. I guess it is using the same GGUF quantized 1.6 Gb
Ollama is
- quantizing it at run time or
- it does it before hand
- or using lama.cpp under the hood
- no quan... | {
"login": "bm777",
"id": 29865600,
"node_id": "MDQ6VXNlcjI5ODY1NjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/29865600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bm777",
"html_url": "https://github.com/bm777",
"followers_url": "https://api.github.com/users/bm777/follow... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2046/timeline | null | completed | false |
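The short answer is the third guess: Ollama serves pre-quantized GGUF weights through llama.cpp rather than quantizing at run time, and on recent versions the quantization level is recorded in the model metadata, which can be inspected directly:

```bash
# Print model details, including the quantization level (e.g. Q4_0).
ollama show phi

# The Modelfile shows the underlying GGUF blob the model references.
ollama show phi --modelfile
```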
https://api.github.com/repos/ollama/ollama/issues/8337 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8337/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8337/comments | https://api.github.com/repos/ollama/ollama/issues/8337/events | https://github.com/ollama/ollama/issues/8337 | 2,773,119,533 | I_kwDOJ0Z1Ps6lSnIt | 8,337 | Cannot get a tool call and a message in the same response | {
"login": "gotyer",
"id": 88534676,
"node_id": "MDQ6VXNlcjg4NTM0Njc2",
"avatar_url": "https://avatars.githubusercontent.com/u/88534676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gotyer",
"html_url": "https://github.com/gotyer",
"followers_url": "https://api.github.com/users/gotyer/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2025-01-07T15:28:22 | 2025-01-27T15:34:46 | 2025-01-08T07:30:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I use the api/chat endpoint, and I can make an agent either respond normally, or call a tool. It can do either successfully, but never both in the same call.
Whenever a tool is called, the content is empty.
Even if I explicitly prompt the model to write a long text before using the tool, the c... | {
"login": "gotyer",
"id": 88534676,
"node_id": "MDQ6VXNlcjg4NTM0Njc2",
"avatar_url": "https://avatars.githubusercontent.com/u/88534676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gotyer",
"html_url": "https://github.com/gotyer",
"followers_url": "https://api.github.com/users/gotyer/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8337/timeline | null | completed | false |
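A minimal reproduction against `api/chat`, as a hedged sketch (the model name and the `get_weather` tool are hypothetical): even when the prompt asks for prose before the tool call, responses that populate `tool_calls` typically come back with an empty `content`, as described above.

```bash
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [
    { "role": "user",
      "content": "Explain your plan first, then check the weather in Paris." }
  ],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current weather for a city",
      "parameters": {
        "type": "object",
        "properties": { "city": { "type": "string" } },
        "required": ["city"]
      }
    }
  }]
}'
```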
https://api.github.com/repos/ollama/ollama/issues/4969 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4969/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4969/comments | https://api.github.com/repos/ollama/ollama/issues/4969/events | https://github.com/ollama/ollama/issues/4969 | 2,345,097,011 | I_kwDOJ0Z1Ps6Lx1cz | 4,969 | Error pulling Qwen2 models: unknown pre-tokenizer type: 'qwen2' | {
"login": "agilebean",
"id": 28497542,
"node_id": "MDQ6VXNlcjI4NDk3NTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/28497542?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agilebean",
"html_url": "https://github.com/agilebean",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 9 | 2024-06-11T00:44:31 | 2024-06-18T20:30:38 | 2024-06-18T20:30:38 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
After downloading any Qwen2 model, e.g.
```
ollama run qwen2:1.5b
```
the following error is shown:
```
Error: llama runner process has terminated: signal: abort trap error:error loading model vocabulary: unknown pre-tokenizer type: 'qwen2'
```
tested on Ollama versions `0.1.38` and `0... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4969/timeline | null | completed | false |
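The `unknown pre-tokenizer type: 'qwen2'` error means the bundled llama.cpp predates qwen2 support, and upgrading resolves it; a sketch of the usual Linux path, where the official install script replaces the binary in place:

```bash
# Check the installed version first.
ollama -v

# Re-run the official install script to move to the latest release,
# then try the model again.
curl -fsSL https://ollama.com/install.sh | sh
ollama run qwen2:1.5b
```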
https://api.github.com/repos/ollama/ollama/issues/8119 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8119/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8119/comments | https://api.github.com/repos/ollama/ollama/issues/8119/events | https://github.com/ollama/ollama/pull/8119 | 2,742,652,981 | PR_kwDOJ0Z1Ps6FXwC6 | 8,119 | docs: change getting started guide link for pgai | {
"login": "Askir",
"id": 10251955,
"node_id": "MDQ6VXNlcjEwMjUxOTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/10251955?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Askir",
"html_url": "https://github.com/Askir",
"followers_url": "https://api.github.com/users/Askir/follow... | [] | closed | false | null | [] | null | 0 | 2024-12-16T15:12:29 | 2024-12-17T06:13:23 | 2024-12-17T06:13:23 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8119",
"html_url": "https://github.com/ollama/ollama/pull/8119",
"diff_url": "https://github.com/ollama/ollama/pull/8119.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8119.patch",
"merged_at": "2024-12-17T06:13:23"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8119/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4176 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4176/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4176/comments | https://api.github.com/repos/ollama/ollama/issues/4176/events | https://github.com/ollama/ollama/issues/4176 | 2,279,700,328 | I_kwDOJ0Z1Ps6H4Xdo | 4,176 | models aren't saving on linux | {
"login": "asmrtfm",
"id": 154548075,
"node_id": "U_kgDOCTY3aw",
"avatar_url": "https://avatars.githubusercontent.com/u/154548075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asmrtfm",
"html_url": "https://github.com/asmrtfm",
"followers_url": "https://api.github.com/users/asmrtfm/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2024-05-05T19:49:52 | 2024-05-05T19:57:52 | 2024-05-05T19:57:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "asmrtfm",
"id": 154548075,
"node_id": "U_kgDOCTY3aw",
"avatar_url": "https://avatars.githubusercontent.com/u/154548075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asmrtfm",
"html_url": "https://github.com/asmrtfm",
"followers_url": "https://api.github.com/users/asmrtfm/foll... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4176/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6772 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6772/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6772/comments | https://api.github.com/repos/ollama/ollama/issues/6772/events | https://github.com/ollama/ollama/issues/6772 | 2,521,652,968 | I_kwDOJ0Z1Ps6WTV7o | 6,772 | Using the qwen2-7B-q8 model, the GPU can reach 14-15GB when accessed with the Dify API | {
"login": "bingbing6",
"id": 51957370,
"node_id": "MDQ6VXNlcjUxOTU3Mzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/51957370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bingbing6",
"html_url": "https://github.com/bingbing6",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 2 | 2024-09-12T08:02:25 | 2024-11-06T00:28:39 | 2024-11-06T00:28:39 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When using the qwen2-7B-q8 model through Dify's API, GPU memory usage reaches up to 15GB, but when using Ollama's API directly it stays at the normal 9GB. Dify says that they call Ollama's API normally, without doing any other processing. See my question to them for details:
https://github.com/langgenius/dify/issues/8... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6772/timeline | null | completed | false |
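The common explanation for this kind of gap is that the intermediary requests a larger context window, inflating the KV cache; whether Dify does so here is an assumption, but it is easy to check while each client's request is in flight:

```bash
# Compare the loaded model's memory footprint during a Dify request versus a
# direct API request; the SIZE column reflects weights plus KV cache.
ollama ps
```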
https://api.github.com/repos/ollama/ollama/issues/354 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/354/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/354/comments | https://api.github.com/repos/ollama/ollama/issues/354/events | https://github.com/ollama/ollama/pull/354 | 1,851,895,162 | PR_kwDOJ0Z1Ps5X_lbE | 354 | retry download on network errors | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-08-15T18:09:21 | 2023-08-17T14:31:46 | 2023-08-17T14:31:45 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/354",
"html_url": "https://github.com/ollama/ollama/pull/354",
"diff_url": "https://github.com/ollama/ollama/pull/354.diff",
"patch_url": "https://github.com/ollama/ollama/pull/354.patch",
"merged_at": "2023-08-17T14:31:45"
} | add a retry mechanism to retry download on error | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/354/timeline | null | null | true |
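For context, before this change an interrupted download had to be restarted manually; since pulls resume from partially downloaded layers, a shell-level retry loop was a common workaround. A hedged sketch:

```bash
# Keep retrying until the pull succeeds; completed layers are not re-fetched.
until ollama pull llama2; do
  echo "pull failed, retrying in 5s..." >&2
  sleep 5
done
```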
https://api.github.com/repos/ollama/ollama/issues/1967 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1967/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1967/comments | https://api.github.com/repos/ollama/ollama/issues/1967/events | https://github.com/ollama/ollama/issues/1967 | 2,079,730,533 | I_kwDOJ0Z1Ps579itl | 1,967 | Small context size limit occasionally causes Ollama to hang on prediction | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-01-12T21:13:23 | 2024-05-10T01:03:32 | 2024-05-10T01:03:31 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | While testing #1963 I noticed that occasionally a small context window causes the response to hang (maybe indefinitely?).
Here is an example:
```
curl http://localhost:11434/api/chat -d '{
"model": "llama2",
"messages": [
{
"role": "user",
"content": "hi"
},
... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1967/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3392 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3392/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3392/comments | https://api.github.com/repos/ollama/ollama/issues/3392/events | https://github.com/ollama/ollama/pull/3392 | 2,213,936,677 | PR_kwDOJ0Z1Ps5rF5hO | 3,392 | CI windows gpu builds | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-03-28T19:08:23 | 2024-03-28T23:03:55 | 2024-03-28T23:03:52 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3392",
"html_url": "https://github.com/ollama/ollama/pull/3392",
"diff_url": "https://github.com/ollama/ollama/pull/3392.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3392.patch",
"merged_at": "2024-03-28T23:03:52"
Changes in the llm dir (e.g. bumping llama.cpp) can result in regressions on Windows. This will help us catch build failures in CI.
This also moves to a hand-crafted cuda install for windows instead of the action we were using that turned out to be buggy. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3392/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3232 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3232/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3232/comments | https://api.github.com/repos/ollama/ollama/issues/3232/events | https://github.com/ollama/ollama/issues/3232 | 2,193,822,398 | I_kwDOJ0Z1Ps6CwxK- | 3,232 | CUDA error: out of memory when use gemma model | {
"login": "ycyy",
"id": 10897377,
"node_id": "MDQ6VXNlcjEwODk3Mzc3",
"avatar_url": "https://avatars.githubusercontent.com/u/10897377?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ycyy",
"html_url": "https://github.com/ycyy",
"followers_url": "https://api.github.com/users/ycyy/followers"... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 4 | 2024-03-19T01:36:18 | 2024-04-18T05:34:01 | 2024-04-17T22:52:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
```
system_message = {
'role': 'system',
'content': 'XXXX'
}
user_message = {
'role': 'user',
'content': 'XXXX'
}
messages.append(system_message)
messages.append(user_message)
stream = ollama.chat(
model = model_name,
... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3232/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3232/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2817 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2817/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2817/comments | https://api.github.com/repos/ollama/ollama/issues/2817/events | https://github.com/ollama/ollama/issues/2817 | 2,159,842,467 | I_kwDOJ0Z1Ps6AvJSj | 2,817 | Model request: StarCoder2 | {
"login": "kylemclaren",
"id": 3727384,
"node_id": "MDQ6VXNlcjM3MjczODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3727384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylemclaren",
"html_url": "https://github.com/kylemclaren",
"followers_url": "https://api.github.com/us... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 10 | 2024-02-28T20:47:35 | 2024-06-06T17:42:00 | 2024-06-06T17:42:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Please add the StarCoder2 model family: https://huggingface.co/collections/bigcode/starcoder2-65de6da6e87db3383572be1a | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2817/reactions",
"total_count": 32,
"+1": 26,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 6
} | https://api.github.com/repos/ollama/ollama/issues/2817/timeline | null | completed | false |
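The request was eventually closed as completed; the family is now published in the model library and can be pulled directly (the tag below is illustrative):

```bash
ollama run starcoder2:3b
```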
https://api.github.com/repos/ollama/ollama/issues/4453 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4453/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4453/comments | https://api.github.com/repos/ollama/ollama/issues/4453/events | https://github.com/ollama/ollama/issues/4453 | 2,298,172,177 | I_kwDOJ0Z1Ps6I-1MR | 4,453 | Ollama + sentence-transformers with torch cuda | {
"login": "qsdhj",
"id": 166700412,
"node_id": "U_kgDOCe-lfA",
"avatar_url": "https://avatars.githubusercontent.com/u/166700412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qsdhj",
"html_url": "https://github.com/qsdhj",
"followers_url": "https://api.github.com/users/qsdhj/followers",
... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q... | open | false | null | [] | null | 0 | 2024-05-15T14:59:37 | 2024-11-06T17:33:04 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi,
I use Ollama together with the `intfloat/multilingual-e5-base` sentence-transformer in LangChain and LlamaIndex in Python.
If I use the torch version without CUDA everything works as expected; my embeddings are just created slowly.
With the torch cuda version installed this way:
![graf... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4453/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8313 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8313/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8313/comments | https://api.github.com/repos/ollama/ollama/issues/8313/events | https://github.com/ollama/ollama/issues/8313 | 2,769,512,309 | I_kwDOJ0Z1Ps6lE2d1 | 8,313 | Tool example returns error 'TypeError: Object of type function is not JSON serializable' | {
"login": "zamzx",
"id": 34112226,
"node_id": "MDQ6VXNlcjM0MTEyMjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/34112226?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zamzx",
"html_url": "https://github.com/zamzx",
"followers_url": "https://api.github.com/users/zamzx/follow... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2025-01-05T23:54:17 | 2025-01-16T00:03:33 | 2025-01-16T00:03:33 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Tool calling results in the following error with ollama 0.5.4
"TypeError: Object of type function is not JSON serializable"
I am using this exact example : https://github.com/ollama/ollama-python/blob/main/examples/tools.py
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama ver... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8313/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3127 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3127/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3127/comments | https://api.github.com/repos/ollama/ollama/issues/3127/events | https://github.com/ollama/ollama/pull/3127 | 2,184,928,293 | PR_kwDOJ0Z1Ps5pjjeI | 3,127 | add `OLLAMA_KEEP_ALIVE` to environment variable docs for `ollama serve` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-03-13T21:33:57 | 2024-03-13T21:35:34 | 2024-03-13T21:35:33 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3127",
"html_url": "https://github.com/ollama/ollama/pull/3127",
"diff_url": "https://github.com/ollama/ollama/pull/3127.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3127.patch",
"merged_at": "2024-03-13T21:35:33"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3127/timeline | null | null | true |
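For context, the variable this PR documents controls how long a model stays loaded after its last request; a minimal sketch of both the server-wide setting and the documented per-request override:

```bash
# Keep models resident for 24 hours after their last use.
OLLAMA_KEEP_ALIVE=24h ollama serve

# Per-request override: -1 keeps the model loaded indefinitely; a duration
# string like "10m" also works.
curl http://localhost:11434/api/generate -d '{"model": "llama2", "keep_alive": -1}'
```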
https://api.github.com/repos/ollama/ollama/issues/1252 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1252/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1252/comments | https://api.github.com/repos/ollama/ollama/issues/1252/events | https://github.com/ollama/ollama/pull/1252 | 2,007,519,977 | PR_kwDOJ0Z1Ps5gMXZg | 1,252 | Fix unexpected multiline end | {
"login": "xwjdsh",
"id": 11025519,
"node_id": "MDQ6VXNlcjExMDI1NTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/11025519?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xwjdsh",
"html_url": "https://github.com/xwjdsh",
"followers_url": "https://api.github.com/users/xwjdsh/fo... | [] | closed | false | null | [] | null | 1 | 2023-11-23T05:40:31 | 2023-11-27T07:37:24 | 2023-11-26T18:04:12 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1252",
"html_url": "https://github.com/ollama/ollama/pull/1252",
"diff_url": "https://github.com/ollama/ollama/pull/1252.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1252.patch",
"merged_at": null
} | Close #1241 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1252/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1252/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7580 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7580/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7580/comments | https://api.github.com/repos/ollama/ollama/issues/7580/events | https://github.com/ollama/ollama/pull/7580 | 2,645,323,978 | PR_kwDOJ0Z1Ps6BYH5N | 7,580 | Updated Model library with llama3.2-vision | {
"login": "arbusam",
"id": 35902952,
"node_id": "MDQ6VXNlcjM1OTAyOTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/35902952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arbusam",
"html_url": "https://github.com/arbusam",
"followers_url": "https://api.github.com/users/arbusa... | [] | closed | false | null | [] | null | 0 | 2024-11-08T22:51:33 | 2024-11-10T21:36:25 | 2024-11-10T21:36:25 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7580",
"html_url": "https://github.com/ollama/ollama/pull/7580",
"diff_url": "https://github.com/ollama/ollama/pull/7580.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7580.patch",
"merged_at": "2024-11-10T21:36:25"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7580/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6722 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6722/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6722/comments | https://api.github.com/repos/ollama/ollama/issues/6722/events | https://github.com/ollama/ollama/issues/6722 | 2,515,684,938 | I_kwDOJ0Z1Ps6V8k5K | 6,722 | MiniCPM3 support | {
"login": "IuvenisSapiens",
"id": 133093191,
"node_id": "U_kgDOB-7XRw",
"avatar_url": "https://avatars.githubusercontent.com/u/133093191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IuvenisSapiens",
"html_url": "https://github.com/IuvenisSapiens",
"followers_url": "https://api.github.c... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 4 | 2024-09-10T07:53:22 | 2024-12-04T04:30:27 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | [MiniCPM3-4B](https://huggingface.co/openbmb/MiniCPM3-4B) is the 3rd generation of MiniCPM series. The overall performance of MiniCPM3-4B surpasses Phi-3.5-mini-Instruct and GPT-3.5-Turbo-0125, being comparable with many recent 7B~9B models. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6722/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/6722/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5345 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5345/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5345/comments | https://api.github.com/repos/ollama/ollama/issues/5345/events | https://github.com/ollama/ollama/pull/5345 | 2,379,154,905 | PR_kwDOJ0Z1Ps5z01aq | 5,345 | Set default timeout to 600 | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [] | closed | false | null | [] | null | 1 | 2024-06-27T22:00:25 | 2024-07-11T21:42:00 | 2024-07-11T21:42:00 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5345",
"html_url": "https://github.com/ollama/ollama/pull/5345",
"diff_url": "https://github.com/ollama/ollama/pull/5345.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5345.patch",
"merged_at": null
} | Resolves #5084, #5081 | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5345/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/843 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/843/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/843/comments | https://api.github.com/repos/ollama/ollama/issues/843/events | https://github.com/ollama/ollama/pull/843 | 1,950,856,406 | PR_kwDOJ0Z1Ps5dM15x | 843 | basic request validation | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-10-18T23:13:14 | 2023-10-19T16:30:46 | 2023-10-19T16:30:45 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/843",
"html_url": "https://github.com/ollama/ollama/pull/843",
"diff_url": "https://github.com/ollama/ollama/pull/843.diff",
"patch_url": "https://github.com/ollama/ollama/pull/843.patch",
"merged_at": "2023-10-19T16:30:45"
} | - API returns `{"error": "EOF"}` when request is empty
- Most handlers pass request fields through without checking if they're empty, which produces bad errors
- `created_at` on an empty generate request isn't set, so it incorrectly shows `0001-01-01T00:00:00Z`
- Create's `workDir` isn't used after #759
Note: there's an inconsi... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/843/timeline | null | null | true |
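A quick way to observe the pre-validation behavior this PR describes; a minimal sketch assuming a local server on the default port 11434.
```bash
# Sends an empty body to /api/generate; before this PR the server answered
# with the unhelpful {"error": "EOF"} instead of a descriptive message.
curl -s -X POST http://localhost:11434/api/generate -d ''
```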
https://api.github.com/repos/ollama/ollama/issues/1595 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1595/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1595/comments | https://api.github.com/repos/ollama/ollama/issues/1595/events | https://github.com/ollama/ollama/pull/1595 | 2,047,835,124 | PR_kwDOJ0Z1Ps5iU4fz | 1,595 | Added cmdh to community section in README | {
"login": "pgibler",
"id": 119892,
"node_id": "MDQ6VXNlcjExOTg5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/119892?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pgibler",
"html_url": "https://github.com/pgibler",
"followers_url": "https://api.github.com/users/pgibler/fo... | [] | closed | false | null | [] | null | 3 | 2023-12-19T03:09:13 | 2023-12-19T06:43:59 | 2023-12-19T04:55:18 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1595",
"html_url": "https://github.com/ollama/ollama/pull/1595",
"diff_url": "https://github.com/ollama/ollama/pull/1595.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1595.patch",
"merged_at": "2023-12-19T04:55:18"
} | Added a link to my terminal application cmdh, which lets you request Linux commands using an LLM (https://github.com/pgibler/cmdh). I just added Ollama support today. The tl;dr: it sends your command request to the LLM, which outputs a terminal command that matches the request. You can then use a hotkey to run the com... | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1595/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4272 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4272/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4272/comments | https://api.github.com/repos/ollama/ollama/issues/4272/events | https://github.com/ollama/ollama/pull/4272 | 2,286,750,157 | PR_kwDOJ0Z1Ps5u8P5e | 4,272 | Prune partial downloads | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-05-09T01:56:19 | 2024-05-09T23:35:21 | 2024-05-09T23:35:20 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4272",
"html_url": "https://github.com/ollama/ollama/pull/4272",
"diff_url": "https://github.com/ollama/ollama/pull/4272.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4272.patch",
"merged_at": "2024-05-09T23:35:20"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4272/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5039 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5039/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5039/comments | https://api.github.com/repos/ollama/ollama/issues/5039/events | https://github.com/ollama/ollama/issues/5039 | 2,352,349,969 | I_kwDOJ0Z1Ps6MNgMR | 5,039 | How to only run the amd64 cpu version of ollama's docker image? | {
"login": "musarehmani291",
"id": 169641750,
"node_id": "U_kgDOChyHFg",
"avatar_url": "https://avatars.githubusercontent.com/u/169641750?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/musarehmani291",
"html_url": "https://github.com/musarehmani291",
"followers_url": "https://api.github.c... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-06-14T02:20:57 | 2024-06-19T00:46:23 | 2024-06-14T16:25:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi guys, I deployed Ollama using the exact Dockerfile available in your repo, without any changes. My server architecture is amd64 (CPU only). When I try to build it, the build never finishes. What should I do? Any help would be appreciated.
### OS
Docker
### GPU
_No response_
### CPU
AMD
### Ollama v... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5039/timeline | null | completed | false |
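Rather than building the Dockerfile locally, the published multi-arch image can be pulled directly; a minimal sketch assuming Docker is installed (the `--platform` flag just pins the amd64 variant explicitly).
```bash
# Run the prebuilt CPU-only image on amd64; volume and port flags follow
# the project's standard Docker instructions.
docker run -d --platform linux/amd64 \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama
```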
https://api.github.com/repos/ollama/ollama/issues/3134 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3134/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3134/comments | https://api.github.com/repos/ollama/ollama/issues/3134/events | https://github.com/ollama/ollama/pull/3134 | 2,185,403,314 | PR_kwDOJ0Z1Ps5plMb7 | 3,134 | feat: Override system prompt from non-interactive CLI | {
"login": "kaizau",
"id": 226967,
"node_id": "MDQ6VXNlcjIyNjk2Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/226967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaizau",
"html_url": "https://github.com/kaizau",
"followers_url": "https://api.github.com/users/kaizau/follow... | [] | open | false | null | [] | null | 8 | 2024-03-14T04:35:18 | 2025-01-23T14:40:21 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3134",
"html_url": "https://github.com/ollama/ollama/pull/3134",
"diff_url": "https://github.com/ollama/ollama/pull/3134.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3134.patch",
"merged_at": null
} | - Closes #1415
Previously, the only way to set the system prompt for non-interactive use was to import a new Modelfile.
This PR makes Ollama much easier to script non-interactively:
```
ollama run llama2 --system "Translate into French" "This is really useful!"
``` | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3134/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3134/timeline | null | null | true |
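For comparison, roughly the same effect is already reachable over HTTP, since `/api/generate` accepts a `system` field; a sketch assuming a local server and the llama2 model from the PR description. The `--system` flag itself is only proposed by this PR and was not merged at the time of writing.
```bash
# The flag proposed by this PR (not a shipped option):
ollama run llama2 --system "Translate into French" "This is really useful!"

# Without the flag, the system prompt can be overridden per request:
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "system": "Translate into French",
  "prompt": "This is really useful!",
  "stream": false
}'
```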
https://api.github.com/repos/ollama/ollama/issues/1463 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1463/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1463/comments | https://api.github.com/repos/ollama/ollama/issues/1463/events | https://github.com/ollama/ollama/pull/1463 | 2,035,052,494 | PR_kwDOJ0Z1Ps5hpgUx | 1,463 | Added Bionic GPT as a front end. | {
"login": "9876691",
"id": 36966,
"node_id": "MDQ6VXNlcjM2OTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/36966?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/9876691",
"html_url": "https://github.com/9876691",
"followers_url": "https://api.github.com/users/9876691/follower... | [] | closed | false | null | [] | null | 0 | 2023-12-11T08:01:18 | 2023-12-15T19:33:05 | 2023-12-15T19:33:04 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1463",
"html_url": "https://github.com/ollama/ollama/pull/1463",
"diff_url": "https://github.com/ollama/ollama/pull/1463.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1463.patch",
"merged_at": "2023-12-15T19:33:04"
} | null | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1463/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/651 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/651/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/651/comments | https://api.github.com/repos/ollama/ollama/issues/651/events | https://github.com/ollama/ollama/issues/651 | 1,919,976,329 | I_kwDOJ0Z1Ps5ycIOJ | 651 | Ability to configure embeddings dimension size | {
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677485533,
"node_id": ... | open | false | null | [] | null | 5 | 2023-09-29T22:26:15 | 2025-01-17T15:03:25 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Moving from `OllamaEmbeddings(model="llama2:13b")` to `OllamaEmbeddings(model="llama2:7b")`, I am now getting a shape mismatch in my embeddings:
```none
ValueError: shapes (4096,) and (5120,) not aligned: 4096 (dim 0) != 5120 (dim 0)
```
So the `7b` embeddings are slightly smaller (4096) than the `13b` embeddings (5... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/651/reactions",
"total_count": 11,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 8
} | https://api.github.com/repos/ollama/ollama/issues/651/timeline | null | null | false |
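The dimension mismatch above is easy to confirm from the API; a sketch assuming a local server, both models pulled, and `jq` installed.
```bash
# Print the embedding dimension for each model; llama2:7b yields 4096
# values and llama2:13b yields 5120, which explains the shape mismatch.
for m in llama2:7b llama2:13b; do
  curl -s http://localhost:11434/api/embeddings \
    -d "{\"model\": \"$m\", \"prompt\": \"hello\"}" | jq ".embedding | length"
done
```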
https://api.github.com/repos/ollama/ollama/issues/976 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/976/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/976/comments | https://api.github.com/repos/ollama/ollama/issues/976/events | https://github.com/ollama/ollama/issues/976 | 1,975,139,149 | I_kwDOJ0Z1Ps51ujtN | 976 | Suggestion: Option to "Save / Cache model in RAM" for faster switching | {
"login": "ziontee113",
"id": 102876811,
"node_id": "U_kgDOBiHGiw",
"avatar_url": "https://avatars.githubusercontent.com/u/102876811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ziontee113",
"html_url": "https://github.com/ziontee113",
"followers_url": "https://api.github.com/users/zio... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 4 | 2023-11-02T22:26:57 | 2024-05-14T19:57:49 | 2024-05-14T19:57:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, thank you all so much for the amazing project.
Today I was testing out using multiple models at the same time, and the switching is surprisingly acceptable.
I symlinked my models to my HDD. The initial load for each model is slow, but once it's loaded, I can use & switch back and forth between models with only a few se... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/976/reactions",
"total_count": 19,
"+1": 19,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/976/timeline | null | completed | false |
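A minimal sketch of the `keep_alive` request field, which covers this use case by keeping a loaded model resident between requests; the field is part of Ollama's current API rather than something confirmed in this record, and a local server is assumed.
```bash
# Keep the model loaded for 30 minutes after the request
# (use -1 to keep it loaded indefinitely, 0 to unload at once).
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "warm-up request",
  "keep_alive": "30m"
}'
```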
https://api.github.com/repos/ollama/ollama/issues/8144 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8144/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8144/comments | https://api.github.com/repos/ollama/ollama/issues/8144/events | https://github.com/ollama/ollama/issues/8144 | 2,746,022,117 | I_kwDOJ0Z1Ps6jrPjl | 8,144 | When models don't fit in VRAM, Issue alert/confirmation instead of running and freezing computer for hours | {
"login": "Mugane",
"id": 13005927,
"node_id": "MDQ6VXNlcjEzMDA1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/13005927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mugane",
"html_url": "https://github.com/Mugane",
"followers_url": "https://api.github.com/users/Mugane/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | open | false | null | [] | null | 16 | 2024-12-17T20:46:20 | 2025-01-23T18:53:05 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When a model is selected that does not fit in VRAM, it runs on the CPU. This is a ridiculous fallback that freezes the whole computer; it should just fail, or actually use the GPU with shared memory instead of falling back to the CPU only.
### OS
Windows 11 Pro
### GPU
Nvidia
### C... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8144/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8144/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4609 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4609/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4609/comments | https://api.github.com/repos/ollama/ollama/issues/4609/events | https://github.com/ollama/ollama/pull/4609 | 2,314,689,783 | PR_kwDOJ0Z1Ps5wbTOx | 4,609 | Add truncation guard | {
"login": "ericcurtin",
"id": 1694275,
"node_id": "MDQ6VXNlcjE2OTQyNzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1694275?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ericcurtin",
"html_url": "https://github.com/ericcurtin",
"followers_url": "https://api.github.com/users... | [] | open | false | null | [] | null | 0 | 2024-05-24T07:58:36 | 2024-05-24T09:07:38 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4609",
"html_url": "https://github.com/ollama/ollama/pull/4609",
"diff_url": "https://github.com/ollama/ollama/pull/4609.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4609.patch",
"merged_at": null
} | So a partially downloaded version of this file doesn't execute in a curl|sh installation. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4609/timeline | null | null | true |
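The usual shape of such a truncation guard, shown as a general sketch of the technique rather than this PR's exact diff: wrap the whole script in a function and call it on the last line, so a partially downloaded file fails to parse instead of executing its first half.
```bash
#!/bin/sh
# All work happens inside main(); nothing executes until the final line,
# so a download cut off mid-file never runs a half-written script.
main() {
  echo "install steps go here"
}

main "$@"
```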
https://api.github.com/repos/ollama/ollama/issues/6074 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6074/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6074/comments | https://api.github.com/repos/ollama/ollama/issues/6074/events | https://github.com/ollama/ollama/pull/6074 | 2,437,962,673 | PR_kwDOJ0Z1Ps524kM_ | 6,074 | Add Nix and Flox to package manager listing | {
"login": "bryanhonof",
"id": 5932804,
"node_id": "MDQ6VXNlcjU5MzI4MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5932804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryanhonof",
"html_url": "https://github.com/bryanhonof",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 0 | 2024-07-30T14:18:14 | 2024-08-29T16:45:36 | 2024-08-29T16:45:35 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6074",
"html_url": "https://github.com/ollama/ollama/pull/6074",
"diff_url": "https://github.com/ollama/ollama/pull/6074.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6074.patch",
"merged_at": "2024-08-29T16:45:35"
} | Ollama is available in both the Nix and Flox package managers.
I thought it'd be a good idea to list them here as well. 😁 | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6074/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6074/timeline | null | null | true |
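For reference, installs via the two listed package managers look roughly like this; the exact attribute and package names are assumptions, not taken from the PR diff.
```bash
# Nix (with flakes enabled) and Flox; the "ollama" package name is assumed.
nix run nixpkgs#ollama
flox install ollama
```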
https://api.github.com/repos/ollama/ollama/issues/6343 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6343/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6343/comments | https://api.github.com/repos/ollama/ollama/issues/6343/events | https://github.com/ollama/ollama/pull/6343 | 2,463,989,960 | PR_kwDOJ0Z1Ps54RIb6 | 6,343 | Go back to a pinned Go version | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-08-13T18:45:14 | 2024-08-13T18:53:53 | 2024-08-13T18:53:50 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6343",
"html_url": "https://github.com/ollama/ollama/pull/6343",
"diff_url": "https://github.com/ollama/ollama/pull/6343.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6343.patch",
"merged_at": "2024-08-13T18:53:50"
} | Go version 1.22.6 is triggering AV false positives, so go back to 1.22.5 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6343/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1490 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1490/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1490/comments | https://api.github.com/repos/ollama/ollama/issues/1490/events | https://github.com/ollama/ollama/pull/1490 | 2,038,514,847 | PR_kwDOJ0Z1Ps5h1W2i | 1,490 | add image support to the chat api | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2023-12-12T20:37:22 | 2023-12-12T22:45:33 | 2023-12-12T21:28:58 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1490",
"html_url": "https://github.com/ollama/ollama/pull/1490",
"diff_url": "https://github.com/ollama/ollama/pull/1490.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1490.patch",
"merged_at": "2023-12-12T21:28:58"
} | This change allows multimodal vision models to be used in the chat API.
| {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1490/timeline | null | null | true |
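A minimal sketch of the capability this PR adds: chat messages may carry an `images` array of base64-encoded files. The `llava` model name and `photo.png` path are illustrative; `base64 -w0` is the GNU form (macOS uses `base64 -i`).
```bash
# Send one user message with an attached image to /api/chat.
curl -s http://localhost:11434/api/chat -d '{
  "model": "llava",
  "messages": [{
    "role": "user",
    "content": "What is in this picture?",
    "images": ["'"$(base64 -w0 photo.png)"'"]
  }],
  "stream": false
}'
```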
https://api.github.com/repos/ollama/ollama/issues/912 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/912/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/912/comments | https://api.github.com/repos/ollama/ollama/issues/912/events | https://github.com/ollama/ollama/issues/912 | 1,963,169,552 | I_kwDOJ0Z1Ps51A5cQ | 912 | API json output is tearing apart words. | {
"login": "orkutmuratyilmaz",
"id": 7395916,
"node_id": "MDQ6VXNlcjczOTU5MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7395916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orkutmuratyilmaz",
"html_url": "https://github.com/orkutmuratyilmaz",
"followers_url": "https://ap... | [] | closed | false | null | [] | null | 6 | 2023-10-26T10:09:19 | 2023-10-27T17:31:38 | 2023-10-26T20:02:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello all,
Thanks for this beautiful repo.
I've installed Ollama on my server and I'm making API calls with curl, as seen below:
```
curl -X POST http://172.16.41.91:11434/api/generate -d '{
"mo... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/912/timeline | null | completed | false |
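The "torn apart" words come from streaming: `/api/generate` emits one JSON object per token, so individual chunks split mid-word. Either set `"stream": false` or join the `response` fields; a sketch assuming a local server and `jq` installed.
```bash
# Concatenate the streamed token fragments back into continuous text.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama2", "prompt": "Why is the sky blue?"}' \
  | jq -j '.response'
```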
https://api.github.com/repos/ollama/ollama/issues/5414 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5414/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5414/comments | https://api.github.com/repos/ollama/ollama/issues/5414/events | https://github.com/ollama/ollama/issues/5414 | 2,384,455,067 | I_kwDOJ0Z1Ps6OH-Wb | 5,414 | OLLAMA_NOHISTORY doesn't work | {
"login": "sogawa-sps",
"id": 36852014,
"node_id": "MDQ6VXNlcjM2ODUyMDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/36852014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sogawa-sps",
"html_url": "https://github.com/sogawa-sps",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-07-01T18:14:54 | 2024-07-02T14:00:22 | 2024-07-01T23:05:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
OLLAMA_NOHISTORY is set to "true" but the history is still being written into the "history" file.
> ollama[77458]: 2024/07/01 14:04:09 routes.go:1064: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5414/timeline | null | completed | false |
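One hedged reading of the resolution: `OLLAMA_NOHISTORY` is consulted by the interactive client, not the server, so setting it in the server's systemd environment (as the log above suggests) does not stop the CLI from writing its history file.
```bash
# Set the variable in the environment of the client process itself.
OLLAMA_NOHISTORY=1 ollama run llama3
```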
https://api.github.com/repos/ollama/ollama/issues/6041 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6041/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6041/comments | https://api.github.com/repos/ollama/ollama/issues/6041/events | https://github.com/ollama/ollama/pull/6041 | 2,434,941,367 | PR_kwDOJ0Z1Ps52uHAB | 6,041 | Fix typo in chat with image docs | {
"login": "hellerve",
"id": 7725188,
"node_id": "MDQ6VXNlcjc3MjUxODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7725188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hellerve",
"html_url": "https://github.com/hellerve",
"followers_url": "https://api.github.com/users/helle... | [] | closed | false | null | [] | null | 0 | 2024-07-29T09:39:35 | 2024-07-29T15:50:53 | 2024-07-29T15:50:53 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6041",
"html_url": "https://github.com/ollama/ollama/pull/6041",
"diff_url": "https://github.com/ollama/ollama/pull/6041.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6041.patch",
"merged_at": "2024-07-29T15:50:53"
} | It looks like the docs were just copy-pasted from the conversation history above. This PR fixes that part of the docs with a short explanation.
Cheers | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6041/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7894 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7894/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7894/comments | https://api.github.com/repos/ollama/ollama/issues/7894/events | https://github.com/ollama/ollama/pull/7894 | 2,707,365,132 | PR_kwDOJ0Z1Ps6DoNtY | 7,894 | cmd: make func addToZipFile | {
"login": "mattn",
"id": 10111,
"node_id": "MDQ6VXNlcjEwMTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/10111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mattn",
"html_url": "https://github.com/mattn",
"followers_url": "https://api.github.com/users/mattn/followers",
"f... | [] | open | false | null | [] | null | 0 | 2024-11-30T13:26:04 | 2024-11-30T13:37:39 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7894",
"html_url": "https://github.com/ollama/ollama/pull/7894",
"diff_url": "https://github.com/ollama/ollama/pull/7894.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7894.patch",
"merged_at": null
} | A `defer` inside the loop does not fire at the end of each iteration; deferred calls run only when the surrounding function returns. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7894/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1990 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1990/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1990/comments | https://api.github.com/repos/ollama/ollama/issues/1990/events | https://github.com/ollama/ollama/pull/1990 | 2,080,839,748 | PR_kwDOJ0Z1Ps5kB1Ji | 1,990 | Add macos cross-compile CI coverage | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-01-14T18:16:28 | 2024-01-16T20:32:25 | 2024-01-16T20:31:37 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1990",
"html_url": "https://github.com/ollama/ollama/pull/1990",
"diff_url": "https://github.com/ollama/ollama/pull/1990.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1990.patch",
"merged_at": "2024-01-16T20:31:37"
} | Linux and Windows are not yet set up for cross-compilation like MacOS, so I've excluded those from the CI matrix. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1990/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3885 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3885/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3885/comments | https://api.github.com/repos/ollama/ollama/issues/3885/events | https://github.com/ollama/ollama/issues/3885 | 2,261,753,968 | I_kwDOJ0Z1Ps6Gz6Bw | 3,885 | MESSAGE doesn't work in REST API | {
"login": "Rikhart",
"id": 1900153,
"node_id": "MDQ6VXNlcjE5MDAxNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1900153?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rikhart",
"html_url": "https://github.com/Rikhart",
"followers_url": "https://api.github.com/users/Rikhart/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q... | open | false | null | [] | null | 1 | 2024-04-24T16:54:21 | 2024-11-06T17:39:40 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi, I am using your model file definition to train an AI, but when I use the REST API with the knowledge passed through the MESSAGE structure, my AI doesn't work. What am I missing?
Example of my Modelfile:
```modelfile
FROM llama2
SYSTEM """Eres un asistente que ayuda a los clientes de la em... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3885/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3885/timeline | null | null | false |
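For MESSAGE history baked into a Modelfile to apply over the REST API, the model has to be created from that file and then queried by its new name; a sketch with `mi-asistente` as a hypothetical model name.
```bash
# Build a model from the Modelfile above, then chat with it over HTTP.
ollama create mi-asistente -f ./Modelfile
curl -s http://localhost:11434/api/chat -d '{
  "model": "mi-asistente",
  "messages": [{"role": "user", "content": "Hola"}],
  "stream": false
}'
```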
https://api.github.com/repos/ollama/ollama/issues/2455 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2455/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2455/comments | https://api.github.com/repos/ollama/ollama/issues/2455/events | https://github.com/ollama/ollama/issues/2455 | 2,129,199,864 | I_kwDOJ0Z1Ps5-6QL4 | 2,455 | Update rocm version for docker build to 6.0.2 | {
"login": "mkesper",
"id": 3063558,
"node_id": "MDQ6VXNlcjMwNjM1NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3063558?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mkesper",
"html_url": "https://github.com/mkesper",
"followers_url": "https://api.github.com/users/mkesper/... | [
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-02-11T22:27:40 | 2024-03-28T23:07:55 | 2024-03-28T23:07:55 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Docker builds are still based on rocm-5.7.1. ROCm 6.0.2 seems to work better, so please offer at least a variant of the image with ROCm 6.0.2.
Related PR: https://github.com/ollama/ollama/pull/2454 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2455/timeline | null | completed | false |
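ROCm support later shipped as a dedicated image tag; a sketch of running it, with the device flags the project documents for AMD GPUs.
```bash
# The :rocm tag carries the ROCm runtime; /dev/kfd and /dev/dri expose
# the AMD GPU to the container.
docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:rocm
```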
https://api.github.com/repos/ollama/ollama/issues/8323 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8323/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8323/comments | https://api.github.com/repos/ollama/ollama/issues/8323/events | https://github.com/ollama/ollama/issues/8323 | 2,771,113,169 | I_kwDOJ0Z1Ps6lK9TR | 8,323 | Manual linux install: runners/cuda_v11_avx/ollama_llama_server: undefined symbol: ggml_backend_cuda_reg | {
"login": "penningt",
"id": 8159953,
"node_id": "MDQ6VXNlcjgxNTk5NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8159953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/penningt",
"html_url": "https://github.com/penningt",
"followers_url": "https://api.github.com/users/penni... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 2 | 2025-01-06T17:27:49 | 2025-01-08T14:29:57 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Tried to update with the manual Linux install (making sure I deleted lib/ollama first) and got this error:
runners/cuda_v11_avx/ollama_llama_server: undefined symbol: ggml_backend_cuda_reg
Looks like it may have been fixed for the cuda_v12 runner in #8166
Maybe just an oversight in t... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8323/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8323/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3952 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3952/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3952/comments | https://api.github.com/repos/ollama/ollama/issues/3952/events | https://github.com/ollama/ollama/issues/3952 | 2,266,359,136 | I_kwDOJ0Z1Ps6HFeVg | 3,952 | In multi-line string pressing enter does not cut line | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-04-26T19:10:43 | 2024-07-18T18:25:51 | 2024-07-18T18:25:51 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
```
jmorgan@mbp moondream2 % ollama run jmorgan/moondream
>>> """/Users/jmorgan/Desktop/dependency.pngQuestion: What is in this image
... Use """ to end multi-line input
```
I pressed enter after `.png`, and it did not move `Question:` to a new line.
### OS
_No response_
#... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3952/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3617 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3617/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3617/comments | https://api.github.com/repos/ollama/ollama/issues/3617/events | https://github.com/ollama/ollama/pull/3617 | 2,240,859,940 | PR_kwDOJ0Z1Ps5sh7uS | 3,617 | types/model: make ParseName variants less confusing | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [] | closed | false | null | [] | null | 0 | 2024-04-12T20:29:11 | 2024-04-12T20:58:51 | 2024-04-12T20:57:58 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3617",
"html_url": "https://github.com/ollama/ollama/pull/3617",
"diff_url": "https://github.com/ollama/ollama/pull/3617.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3617.patch",
"merged_at": "2024-04-12T20:57:57"
} | Also fixes an http-stripping bug and improves the docs about fills and masks.
To "test": please load and read it in `pkgsite`.
| {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3617/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5765 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5765/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5765/comments | https://api.github.com/repos/ollama/ollama/issues/5765/events | https://github.com/ollama/ollama/pull/5765 | 2,415,880,772 | PR_kwDOJ0Z1Ps51wN99 | 5,765 | adding "cache_prompt" to options | {
"login": "d-kleine",
"id": 53251018,
"node_id": "MDQ6VXNlcjUzMjUxMDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/53251018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-kleine",
"html_url": "https://github.com/d-kleine",
"followers_url": "https://api.github.com/users/d-k... | [] | closed | false | null | [] | null | 0 | 2024-07-18T09:43:49 | 2024-07-18T09:45:51 | 2024-07-18T09:45:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5765",
"html_url": "https://github.com/ollama/ollama/pull/5765",
"diff_url": "https://github.com/ollama/ollama/pull/5765.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5765.patch",
"merged_at": null
} | * set CachePrompt type for cache_prompt option
* added CachePrompt type | {
"login": "d-kleine",
"id": 53251018,
"node_id": "MDQ6VXNlcjUzMjUxMDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/53251018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-kleine",
"html_url": "https://github.com/d-kleine",
"followers_url": "https://api.github.com/users/d-k... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5765/timeline | null | null | true |
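This PR was closed unmerged, so the request shape below is an assumption about what it would have exposed, not a shipped Ollama feature; `cache_prompt` itself is an option of the underlying llama.cpp server.
```bash
# Hypothetical request had the PR landed; cache_prompt is NOT a
# documented Ollama option.
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "hello",
  "options": {"cache_prompt": true}
}'
```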
https://api.github.com/repos/ollama/ollama/issues/4068 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4068/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4068/comments | https://api.github.com/repos/ollama/ollama/issues/4068/events | https://github.com/ollama/ollama/pull/4068 | 2,272,784,894 | PR_kwDOJ0Z1Ps5uNnpE | 4,068 | metal: add 512MiB to minimum memory, no partial offloading overhead | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-05-01T01:56:07 | 2024-05-01T16:23:05 | 2024-05-01T15:46:03 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4068",
"html_url": "https://github.com/ollama/ollama/pull/4068",
"diff_url": "https://github.com/ollama/ollama/pull/4068.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4068.patch",
"merged_at": "2024-05-01T15:46:03"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4068/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6470 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6470/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6470/comments | https://api.github.com/repos/ollama/ollama/issues/6470/events | https://github.com/ollama/ollama/issues/6470 | 2,482,252,069 | I_kwDOJ0Z1Ps6T9Ckl | 6,470 | registry.ollama.ai: returning text/plain for manifest requests | {
"login": "codefromthecrypt",
"id": 64215,
"node_id": "MDQ6VXNlcjY0MjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/64215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codefromthecrypt",
"html_url": "https://github.com/codefromthecrypt",
"followers_url": "https://api.github... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-08-23T04:25:05 | 2024-08-24T00:39:15 | 2024-08-23T20:43:35 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
If you look carefully, the OCI repository for ollama models is returning the wrong content type for manifests. It returns "text/plain" when it should be returning "application/vnd.docker.distribution.manifest.v2+json"
For example:
```bash
$ curl -v https://registry.ollama.ai/v2/library/qwen... | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6470/timeline | null | completed | false |
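The mismatch is easy to observe with a HEAD request; the model path below is illustrative (the original report's URL is truncated above), and the Accept header is the standard Docker v2 manifest type.
```bash
# Inspect the Content-Type the registry returns for a manifest.
curl -sI \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  https://registry.ollama.ai/v2/library/llama2/manifests/latest \
  | grep -i '^content-type'
```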
https://api.github.com/repos/ollama/ollama/issues/6192 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6192/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6192/comments | https://api.github.com/repos/ollama/ollama/issues/6192/events | https://github.com/ollama/ollama/issues/6192 | 2,449,724,197 | I_kwDOJ0Z1Ps6SA9Ml | 6,192 | website: Discord link is broken | {
"login": "unitythemaker",
"id": 83316880,
"node_id": "MDQ6VXNlcjgzMzE2ODgw",
"avatar_url": "https://avatars.githubusercontent.com/u/83316880?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/unitythemaker",
"html_url": "https://github.com/unitythemaker",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-08-06T01:05:25 | 2024-08-06T09:50:27 | 2024-08-06T09:50:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hey! I just wanted to let you know that the Discord invite link on the website seems to be broken.

Sorry, I couldn't find a better place to post this, so I am posting it as a GitHub issue.
### OS
_No response... | {
"login": "unitythemaker",
"id": 83316880,
"node_id": "MDQ6VXNlcjgzMzE2ODgw",
"avatar_url": "https://avatars.githubusercontent.com/u/83316880?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/unitythemaker",
"html_url": "https://github.com/unitythemaker",
"followers_url": "https://api.githu... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6192/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8199 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8199/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8199/comments | https://api.github.com/repos/ollama/ollama/issues/8199/events | https://github.com/ollama/ollama/issues/8199 | 2,754,054,224 | I_kwDOJ0Z1Ps6kJ4hQ | 8,199 | Ollama can recognize the RTX 4090 on Linux via NVIDIA driver 12.7, but it insists on using CPU & MEM, not GPU VRAM (24GB), when running codellama | {
"login": "kylelee",
"id": 862062,
"node_id": "MDQ6VXNlcjg2MjA2Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/862062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylelee",
"html_url": "https://github.com/kylelee",
"followers_url": "https://api.github.com/users/kylelee/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-12-21T13:53:40 | 2024-12-22T16:21:24 | 2024-12-22T16:21:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Ollama can recognize the RTX 4090 on Linux via NVIDIA driver 12.7, but it insists on using CPU & MEM, not GPU VRAM (24GB), when running codellama.
The old Ollama version (0.1.14) works well with everything else unchanged, but after I upgraded Ollama from 0.1.14 to 0.5.4, everything goes very sl... | {
"login": "kylelee",
"id": 862062,
"node_id": "MDQ6VXNlcjg2MjA2Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/862062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylelee",
"html_url": "https://github.com/kylelee",
"followers_url": "https://api.github.com/users/kylelee/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8199/timeline | null | completed | false |
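A quick way to check where a loaded model actually landed is `ollama ps`, which reports a PROCESSOR split. A minimal sketch, assuming a local server on the default port and codellama already pulled:

```sh
# Load the model, then inspect the PROCESSOR column (e.g. "100% GPU" vs "100% CPU").
ollama run codellama "hello" >/dev/null
ollama ps
```

If it reports CPU, the server log lines printed at model load usually say why the GPU was skipped.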
https://api.github.com/repos/ollama/ollama/issues/8166 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8166/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8166/comments | https://api.github.com/repos/ollama/ollama/issues/8166/events | https://github.com/ollama/ollama/issues/8166 | 2,749,054,909 | I_kwDOJ0Z1Ps6j2z-9 | 8,166 | undefined symbol: ggml_backend_cuda_reg | {
"login": "luisbrandao",
"id": 25795753,
"node_id": "MDQ6VXNlcjI1Nzk1NzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/25795753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luisbrandao",
"html_url": "https://github.com/luisbrandao",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-12-19T01:33:23 | 2025-01-22T06:37:54 | 2024-12-19T01:36:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have 3 machines (two Rocky 9 and one Rocky 8), all of them with NVIDIA GPUs.
All of them are giving me this error:
```
$ ollama run llama3.1:8b
Error: llama runner process has terminated: error:/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server: undefined symbol: ggml_backend_cu... | {
"login": "luisbrandao",
"id": 25795753,
"node_id": "MDQ6VXNlcjI1Nzk1NzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/25795753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luisbrandao",
"html_url": "https://github.com/luisbrandao",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8166/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8166/timeline | null | completed | false |
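An `undefined symbol` from a runner typically points at stale libraries left over from an older install rather than a build problem. A hedged cleanup sketch, assuming the default Linux install prefix shown in the error message; this is an assumption, not an official fix:

```sh
# Assumption: old runners linger under /usr/local/lib/ollama and shadow the new ones.
sudo systemctl stop ollama
sudo rm -rf /usr/local/lib/ollama
curl -fsSL https://ollama.com/install.sh | sh   # reinstall cleanly
```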
https://api.github.com/repos/ollama/ollama/issues/1560 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1560/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1560/comments | https://api.github.com/repos/ollama/ollama/issues/1560/events | https://github.com/ollama/ollama/issues/1560 | 2,044,660,783 | I_kwDOJ0Z1Ps553wwv | 1,560 | `OLLAMA_MODELS` environment variable ignored by Mac app | {
"login": "Crypto69",
"id": 50306702,
"node_id": "MDQ6VXNlcjUwMzA2NzAy",
"avatar_url": "https://avatars.githubusercontent.com/u/50306702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Crypto69",
"html_url": "https://github.com/Crypto69",
"followers_url": "https://api.github.com/users/Cry... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 15 | 2023-12-16T08:19:07 | 2025-01-17T19:53:36 | 2024-01-22T23:48:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Documentation FAQ says the following:
### How can I change where Ollama stores models?
To modify where models are stored, you can use the `OLLAMA_MODELS` environment variable. Note that on Linux this means defining `OLLAMA_MODELS` in a drop-in `/etc/systemd/system/ollama.service.d` service file, reloading systemd... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1560/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1560/timeline | null | completed | false |
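For reference, the Linux drop-in that the quoted FAQ describes looks roughly like this; the storage path `/data/ollama/models` is a hypothetical example:

```sh
# Sketch of the FAQ's drop-in mechanism; adjust the path to your disk layout.
sudo mkdir -p /etc/systemd/system/ollama.service.d
printf '[Service]\nEnvironment="OLLAMA_MODELS=/data/ollama/models"\n' | \
  sudo tee /etc/systemd/system/ollama.service.d/override.conf
sudo systemctl daemon-reload && sudo systemctl restart ollama
```

The crux of this report is that the Mac app does not read shell exports; running `launchctl setenv OLLAMA_MODELS <path>` before starting the app is the commonly suggested macOS equivalent.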
https://api.github.com/repos/ollama/ollama/issues/2873 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2873/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2873/comments | https://api.github.com/repos/ollama/ollama/issues/2873/events | https://github.com/ollama/ollama/issues/2873 | 2,164,602,505 | I_kwDOJ0Z1Ps6BBTaJ | 2,873 | Improvement suggestion: "Recommended" and brief explanation on ollama.com/library | {
"login": "ewebgh33",
"id": 123797054,
"node_id": "U_kgDOB2D-Pg",
"avatar_url": "https://avatars.githubusercontent.com/u/123797054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ewebgh33",
"html_url": "https://github.com/ewebgh33",
"followers_url": "https://api.github.com/users/ewebgh33/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6573197867,
"node_id": ... | open | false | null | [] | null | 5 | 2024-03-02T05:44:42 | 2024-07-16T07:21:43 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi
Would you consider adding some features to the website, like:
- select your GPU (dropdown box). This can help filter suitable models. Personally I have 2x 4090s, so I can run 70B models; for me such a filter might show the entire list (or close to it), but it would be more useful to someone with 8 GB.
- also adding a brief ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2873/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4166 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4166/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4166/comments | https://api.github.com/repos/ollama/ollama/issues/4166/events | https://github.com/ollama/ollama/issues/4166 | 2,279,487,044 | I_kwDOJ0Z1Ps6H3jZE | 4,166 | summarize this file | {
"login": "katmai",
"id": 1245160,
"node_id": "MDQ6VXNlcjEyNDUxNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1245160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/katmai",
"html_url": "https://github.com/katmai",
"followers_url": "https://api.github.com/users/katmai/foll... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 13 | 2024-05-05T11:45:30 | 2024-05-06T04:01:09 | 2024-05-05T15:56:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
```
$ ollama run llama3 "Summarize this file: $(cat README.md)"
Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can b... | {
"login": "katmai",
"id": 1245160,
"node_id": "MDQ6VXNlcjEyNDUxNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1245160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/katmai",
"html_url": "https://github.com/katmai",
"followers_url": "https://api.github.com/users/katmai/foll... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4166/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6397 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6397/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6397/comments | https://api.github.com/repos/ollama/ollama/issues/6397/events | https://github.com/ollama/ollama/issues/6397 | 2,471,457,487 | I_kwDOJ0Z1Ps6TT3LP | 6,397 | Warning "Prompt does not specify that the LLM should response in JSON, but JSON format is expected." | {
"login": "itinance",
"id": 1758597,
"node_id": "MDQ6VXNlcjE3NTg1OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1758597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itinance",
"html_url": "https://github.com/itinance",
"followers_url": "https://api.github.com/users/itina... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-08-17T09:56:27 | 2024-12-02T21:51:12 | 2024-12-02T21:51:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
As stated in the title, where does this message come from when using Llama 3.1?
In my prompt, it clearly states:
> Answer in JSON. The JSON should be a list (length 5) of dictionaries whose keys are "missing_entities" and "denser_summary".
> Properly escape all special characters, especially double... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6397/timeline | null | completed | false |
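For context, the warning relates to Ollama's JSON mode: `format: "json"` is supposed to be paired with a prompt that itself mentions JSON. A minimal sketch of such a request, with a hypothetical model and prompt:

```sh
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "List three colors. Respond in JSON.",
  "format": "json",
  "stream": false
}'
```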
https://api.github.com/repos/ollama/ollama/issues/6739 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6739/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6739/comments | https://api.github.com/repos/ollama/ollama/issues/6739/events | https://github.com/ollama/ollama/pull/6739 | 2,518,273,570 | PR_kwDOJ0Z1Ps57FTDW | 6,739 | add "stop" command | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 2 | 2024-09-11T01:44:17 | 2024-09-29T12:15:10 | 2024-09-11T23:36:22 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6739",
"html_url": "https://github.com/ollama/ollama/pull/6739",
"diff_url": "https://github.com/ollama/ollama/pull/6739.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6739.patch",
"merged_at": "2024-09-11T23:36:22"
} | This change adds the `ollama stop <model>` command which can be used to stop a running model. This is triggered by a call to `/api/generate` with an empty prompt and duration 0, or to `/api/chat` with empty messages and duration 0. The model will *expire*, but won't force being unloaded until any currently running inst... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6739/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 4,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6739/timeline | null | null | true |
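Per the PR description, the new CLI command maps onto an ordinary API call; a hedged sketch, assuming the duration travels in the `keep_alive` field and using a hypothetical model name:

```sh
# Rough equivalent of `ollama stop llama3` as described above.
curl http://localhost:11434/api/generate -d '{"model": "llama3", "keep_alive": 0}'
```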
https://api.github.com/repos/ollama/ollama/issues/2260 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2260/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2260/comments | https://api.github.com/repos/ollama/ollama/issues/2260/events | https://github.com/ollama/ollama/issues/2260 | 2,106,267,510 | I_kwDOJ0Z1Ps59ixd2 | 2,260 | Unable to rebuild Ollama due to llm/payload_linux.go:7:12: pattern llama.cpp/build/linux/*/*/lib/*.so*: no matching files found | {
"login": "Solomin0",
"id": 37559666,
"node_id": "MDQ6VXNlcjM3NTU5NjY2",
"avatar_url": "https://avatars.githubusercontent.com/u/37559666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Solomin0",
"html_url": "https://github.com/Solomin0",
"followers_url": "https://api.github.com/users/Sol... | [] | closed | false | null | [] | null | 8 | 2024-01-29T19:32:52 | 2024-03-11T18:49:13 | 2024-03-11T18:49:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I am following the developer instructions.
I started with "git clone [ollama](https://github.com/ollama/ollama.git)".
I have go, cmake, and gcc.
I was able to successfully run "go generate ./"
"go build ." failed and gave me this error:
```
root@9a44dfc68b7a:/workspace/a2/ollama# go generate ./
root@9a44dfc68b7a:/work... | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2260/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2260/timeline | null | completed | false |
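For comparison, the developer instructions of that era built the native libraries with a recursive generate before building; a sketch, assuming a checkout at the repo root:

```sh
git clone https://github.com/ollama/ollama.git
cd ollama
go generate ./...   # must succeed first: it produces llama.cpp/build/linux/*/*/lib/*.so*
go build .
```

If `go generate` silently produces nothing (e.g. missing cmake/gcc output), `go build` fails with the `no matching files found` embed error seen above.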
https://api.github.com/repos/ollama/ollama/issues/3374 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3374/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3374/comments | https://api.github.com/repos/ollama/ollama/issues/3374/events | https://github.com/ollama/ollama/issues/3374 | 2,211,288,547 | I_kwDOJ0Z1Ps6DzZXj | 3,374 | Models disappear, service becomes unusable after following instructions for making ollama available on network. | {
"login": "applebiter",
"id": 43894661,
"node_id": "MDQ6VXNlcjQzODk0NjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/43894661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/applebiter",
"html_url": "https://github.com/applebiter",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-03-27T16:45:18 | 2024-03-27T16:50:53 | 2024-03-27T16:50:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I installed Ollama easily, downloaded around 50 GB worth of models, and tested software locally; everything was fine. Then I realized I needed to access Ollama from other machines on the network and get results asynchronously, rather than using the supplied Python library on the same host. I changed the host to 0.0.0... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3374/timeline | null | completed | false |
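The documented way to expose the server on the network keeps the systemd service, and therefore its model path, intact; a sketch, assuming the standard Linux service install:

```sh
# Set OLLAMA_HOST on the existing service instead of starting a second, manual server.
sudo systemctl edit ollama.service
# In the editor, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl daemon-reload && sudo systemctl restart ollama
```

A second `ollama serve` started by hand as a different user reads a different `~/.ollama`, which would make previously pulled models appear to vanish; that is one plausible reading of this report.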
https://api.github.com/repos/ollama/ollama/issues/4484 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4484/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4484/comments | https://api.github.com/repos/ollama/ollama/issues/4484/events | https://github.com/ollama/ollama/issues/4484 | 2,301,656,475 | I_kwDOJ0Z1Ps6JMH2b | 4,484 | Gemma:latest aborted (core dumped) | {
"login": "ManuLinares",
"id": 3720020,
"node_id": "MDQ6VXNlcjM3MjAwMjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3720020?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ManuLinares",
"html_url": "https://github.com/ManuLinares",
"followers_url": "https://api.github.com/us... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-05-17T01:52:31 | 2024-07-25T20:08:23 | 2024-07-25T20:08:20 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Ollama server logs: https://pastebin.com/DyqQwwFf
ollama run gemma:latest #output
`Error: llama runner process has terminated: signal: aborted (core dumped) error:failed to create context with model '/home/mb/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4484/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4484/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4181 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4181/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4181/comments | https://api.github.com/repos/ollama/ollama/issues/4181/events | https://github.com/ollama/ollama/issues/4181 | 2,279,708,054 | I_kwDOJ0Z1Ps6H4ZWW | 4,181 | pull qwen:14b-chat-v1.5-fp16 Error: EOF | {
"login": "MarkWard0110",
"id": 90335263,
"node_id": "MDQ6VXNlcjkwMzM1MjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/90335263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarkWard0110",
"html_url": "https://github.com/MarkWard0110",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw... | closed | false | null | [] | null | 1 | 2024-05-05T20:09:20 | 2024-05-06T18:34:06 | 2024-05-06T18:34:06 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
`ollama pull qwen:14b-chat-v1.5-fp16` results in `Error: EOF`
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.33 | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4181/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3759 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3759/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3759/comments | https://api.github.com/repos/ollama/ollama/issues/3759/events | https://github.com/ollama/ollama/issues/3759 | 2,253,500,555 | I_kwDOJ0Z1Ps6GUbCL | 3,759 | llama3-instruct models not stopping at stop token | {
"login": "moyix",
"id": 34380,
"node_id": "MDQ6VXNlcjM0Mzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/34380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moyix",
"html_url": "https://github.com/moyix",
"followers_url": "https://api.github.com/users/moyix/followers",
"f... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 47 | 2024-04-19T17:19:05 | 2024-06-25T04:32:16 | 2024-06-25T04:32:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm using `llama3:70b` through the OpenAI-compatible endpoint. When generating, I am getting outputs like this:
```
Please provide the output of the above command.
... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3759/reactions",
"total_count": 20,
"+1": 19,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/3759/timeline | null | completed | false |
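A hedged workaround sketch for runaway generations of this kind: pass the llama3 end-of-turn token as an explicit stop sequence on the OpenAI-compatible endpoint (the token name is assumed from the llama3 chat template):

```sh
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3:70b",
    "messages": [{"role": "user", "content": "Hello"}],
    "stop": ["<|eot_id|>"]
  }'
```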
https://api.github.com/repos/ollama/ollama/issues/5746 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5746/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5746/comments | https://api.github.com/repos/ollama/ollama/issues/5746/events | https://github.com/ollama/ollama/issues/5746 | 2,413,527,209 | I_kwDOJ0Z1Ps6P24Cp | 5,746 | Endpoint for Listing models isn't working | {
"login": "Spirizeon",
"id": 123345456,
"node_id": "U_kgDOB1oaMA",
"avatar_url": "https://avatars.githubusercontent.com/u/123345456?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Spirizeon",
"html_url": "https://github.com/Spirizeon",
"followers_url": "https://api.github.com/users/Spiriz... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2024-07-17T12:42:44 | 2024-07-17T13:07:28 | 2024-07-17T13:07:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I was going through the [API docs](https://github.com/ollama/ollama/blob/main/docs/api.md#list-running-models). Upon sending the curl request for listing models, I came across this error.
I made sure that:
- [x] Ollama is running on my device on port 11434
- [x] All other endpoints are... | {
"login": "Spirizeon",
"id": 123345456,
"node_id": "U_kgDOB1oaMA",
"avatar_url": "https://avatars.githubusercontent.com/u/123345456?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Spirizeon",
"html_url": "https://github.com/Spirizeon",
"followers_url": "https://api.github.com/users/Spiriz... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5746/timeline | null | completed | false |
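For reference, both list endpoints in the linked docs are plain GETs; a minimal sketch against the default port:

```sh
curl http://localhost:11434/api/tags   # models available locally
curl http://localhost:11434/api/ps     # models currently loaded in memory
```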
https://api.github.com/repos/ollama/ollama/issues/7345 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7345/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7345/comments | https://api.github.com/repos/ollama/ollama/issues/7345/events | https://github.com/ollama/ollama/pull/7345 | 2,612,055,738 | PR_kwDOJ0Z1Ps5_zGRm | 7,345 | Improve dependency gathering logic | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-10-24T16:22:20 | 2024-10-24T16:51:56 | 2024-10-24T16:51:53 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7345",
"html_url": "https://github.com/ollama/ollama/pull/7345",
"diff_url": "https://github.com/ollama/ollama/pull/7345.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7345.patch",
"merged_at": "2024-10-24T16:51:53"
} | This unifies the ROCm/CUDA dependency logic into the Makefile and fixes a missing define which broke Windows ROCm.
This backports the fixes needed to get 0.4.0-rc5 over the finish line to main to reduce the amount of content I have to carry on the branch. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7345/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6889 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6889/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6889/comments | https://api.github.com/repos/ollama/ollama/issues/6889/events | https://github.com/ollama/ollama/issues/6889 | 2,537,750,777 | I_kwDOJ0Z1Ps6XQwD5 | 6,889 | Qwen/Qwen2.5-Math | {
"login": "tuanlda78202",
"id": 77523560,
"node_id": "MDQ6VXNlcjc3NTIzNTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/77523560?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuanlda78202",
"html_url": "https://github.com/tuanlda78202",
"followers_url": "https://api.github.c... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 8 | 2024-09-20T03:07:08 | 2025-01-19T06:39:38 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://huggingface.co/collections/Qwen/qwen25-math-66eaa240a1b7d5ee65f1da3e | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6889/reactions",
"total_count": 18,
"+1": 12,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 4,
"eyes": 2
} | https://api.github.com/repos/ollama/ollama/issues/6889/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5062 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5062/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5062/comments | https://api.github.com/repos/ollama/ollama/issues/5062/events | https://github.com/ollama/ollama/issues/5062 | 2,354,834,520 | I_kwDOJ0Z1Ps6MW-xY | 5,062 | Can't install on Fedora 40 due to nvidia driver 555 dependency graph problems | {
"login": "wiiznokes",
"id": 78230769,
"node_id": "MDQ6VXNlcjc4MjMwNzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/78230769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wiiznokes",
"html_url": "https://github.com/wiiznokes",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-06-15T12:47:15 | 2024-06-20T18:27:25 | 2024-06-20T18:27:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
```
> curl -fsSL https://ollama.com/install.sh | sh
>>> Downloading ollama...
######################################################################## 100.0%#=#=# ##########################################################... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5062/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2413 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2413/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2413/comments | https://api.github.com/repos/ollama/ollama/issues/2413/events | https://github.com/ollama/ollama/pull/2413 | 2,125,886,121 | PR_kwDOJ0Z1Ps5majNf | 2,413 | slog gin logging | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | open | false | null | [] | null | 0 | 2024-02-08T19:07:30 | 2024-08-28T23:18:53 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2413",
"html_url": "https://github.com/ollama/ollama/pull/2413",
"diff_url": "https://github.com/ollama/ollama/pull/2413.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2413.patch",
"merged_at": null
} | Use slog for gin logging. This produces log output consistent with the rest of Ollama's logs, as well as exposing additional details from requests | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2413/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4689 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4689/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4689/comments | https://api.github.com/repos/ollama/ollama/issues/4689/events | https://github.com/ollama/ollama/pull/4689 | 2,321,976,911 | PR_kwDOJ0Z1Ps5w0GS3 | 4,689 | Fix wsl2 status check for nvidia cards | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-05-28T21:49:34 | 2024-05-28T21:49:47 | 2024-05-28T21:49:47 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4689",
"html_url": "https://github.com/ollama/ollama/pull/4689",
"diff_url": "https://github.com/ollama/ollama/pull/4689.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4689.patch",
"merged_at": "2024-05-28T21:49:47"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4689/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5355 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5355/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5355/comments | https://api.github.com/repos/ollama/ollama/issues/5355/events | https://github.com/ollama/ollama/issues/5355 | 2,379,665,376 | I_kwDOJ0Z1Ps6N1s_g | 5,355 | Groq's "name" option within "messages" parameter of the chat endpoint payload | {
"login": "pmgh2345",
"id": 16248399,
"node_id": "MDQ6VXNlcjE2MjQ4Mzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/16248399?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pmgh2345",
"html_url": "https://github.com/pmgh2345",
"followers_url": "https://api.github.com/users/pmg... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": ... | open | false | null | [] | null | 3 | 2024-06-28T06:10:39 | 2024-11-06T01:13:15 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Groq has a really helpful option within the "messages" parameter that one would provide in the payload to their chat completion API that helps the model distinguish between multiple users of the same role. This is helpful for situations in which a multi-agent conversation is taking place and there is more than one age... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5355/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5355/timeline | null | null | false |
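A sketch of the requested shape, following the OpenAI-style `name` field the report describes; the endpoint, model name, and key are illustrative:

```sh
curl https://api.groq.com/openai/v1/chat/completions \
  -H "Authorization: Bearer $GROQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3-70b-8192",
    "messages": [
      {"role": "user", "name": "agent_one", "content": "I propose plan A."},
      {"role": "user", "name": "agent_two", "content": "I propose plan B."}
    ]
  }'
```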
https://api.github.com/repos/ollama/ollama/issues/6811 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6811/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6811/comments | https://api.github.com/repos/ollama/ollama/issues/6811/events | https://github.com/ollama/ollama/issues/6811 | 2,526,766,147 | I_kwDOJ0Z1Ps6Wm2RD | 6,811 | iiiorg/piiranha-v1-detect-personal | {
"login": "myrulezzz",
"id": 43094013,
"node_id": "MDQ6VXNlcjQzMDk0MDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/43094013?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/myrulezzz",
"html_url": "https://github.com/myrulezzz",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 1 | 2024-09-15T06:36:52 | 2025-01-06T07:32:06 | 2025-01-06T07:32:06 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6811/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3150 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3150/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3150/comments | https://api.github.com/repos/ollama/ollama/issues/3150/events | https://github.com/ollama/ollama/issues/3150 | 2,187,164,743 | I_kwDOJ0Z1Ps6CXXxH | 3,150 | ollama 0.1.29 does not work on kubernetes with amd gpu | {
"login": "arno4000",
"id": 50365065,
"node_id": "MDQ6VXNlcjUwMzY1MDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/50365065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arno4000",
"html_url": "https://github.com/arno4000",
"followers_url": "https://api.github.com/users/arn... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-03-14T19:58:06 | 2024-03-15T01:12:43 | 2024-03-14T20:44:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I installed the AMD ROCm k8s device plugin https://github.com/ROCm/k8s-device-plugin on a fresh k3s installation. Then I installed Ollama using the following deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: ollama
spec:
selector:
matchLabels:
app: ollama
template:
... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3150/timeline | null | completed | false |
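A hedged diagnostic sketch for setups like this: confirm the device plugin actually advertises GPUs before scheduling Ollama onto them (the `amd.com/gpu` resource name comes from the plugin's own docs):

```sh
# Does any node advertise AMD GPUs via the ROCm device plugin?
kubectl describe nodes | grep -A 2 'amd.com/gpu'
# The ollama container spec must then request one, e.g. resources.limits."amd.com/gpu": 1
```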
https://api.github.com/repos/ollama/ollama/issues/6358 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6358/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6358/comments | https://api.github.com/repos/ollama/ollama/issues/6358/events | https://github.com/ollama/ollama/issues/6358 | 2,465,364,208 | I_kwDOJ0Z1Ps6S8njw | 6,358 | Segmentation fault | {
"login": "yicheng-2019",
"id": 58028882,
"node_id": "MDQ6VXNlcjU4MDI4ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/58028882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yicheng-2019",
"html_url": "https://github.com/yicheng-2019",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2024-08-14T09:39:12 | 2024-09-04T00:38:41 | 2024-09-04T00:38:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I want to express my appreciation for the fantastic work done on this project. The capabilities and ease of use are truly impressive.
However, after successfully deploying and running a model using the Ollama remote API, I encountered an issue that I haven't been able to resolve. I would like to ask if you have any... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6358/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4122 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4122/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4122/comments | https://api.github.com/repos/ollama/ollama/issues/4122/events | https://github.com/ollama/ollama/issues/4122 | 2,277,158,096 | I_kwDOJ0Z1Ps6HuqzQ | 4,122 | Delete models installed from Ollama in my Mac to free the space | {
"login": "ISK-VAGR",
"id": 122751844,
"node_id": "U_kgDOB1ELZA",
"avatar_url": "https://avatars.githubusercontent.com/u/122751844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ISK-VAGR",
"html_url": "https://github.com/ISK-VAGR",
"followers_url": "https://api.github.com/users/ISK-VAGR/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 9 | 2024-05-03T08:09:11 | 2025-01-29T22:42:43 | 2024-05-13T16:53:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
I installed two Llama models using `ollama run` in the terminal. They occupy significant disk space, and I need to free space to install a different model.
I tried the `ollama rm` command, but it only deletes the file in the manifests folder, which is KBs. I also tried to delete those files manually, but again ... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4122/timeline | null | completed | false |
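For reference, a sketch of the supported removal flow (model name hypothetical). Blobs are shared between models, so a layer's disk space should only be reclaimed once no remaining manifest references it:

```sh
ollama list           # see what is installed
ollama rm llama2:13b  # removes the manifest and any now-unreferenced blobs
```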
https://api.github.com/repos/ollama/ollama/issues/2745 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2745/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2745/comments | https://api.github.com/repos/ollama/ollama/issues/2745/events | https://github.com/ollama/ollama/issues/2745 | 2,152,834,120 | I_kwDOJ0Z1Ps6AUaRI | 2,745 | Ability to pull/push models from/to OCI registries | {
"login": "raffaelespazzoli",
"id": 6179036,
"node_id": "MDQ6VXNlcjYxNzkwMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6179036?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raffaelespazzoli",
"html_url": "https://github.com/raffaelespazzoli",
"followers_url": "https://ap... | [] | closed | false | null | [] | null | 5 | 2024-02-25T16:14:03 | 2025-01-19T18:39:43 | 2024-03-01T02:07:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | If we wish Ollama to be adopted in traditional enterprises, it should be possible to pull/push models from/to private model registries.
I am not sure if there is a standard for model registries, but the OCI container registry spec [1] can be adapted to manage any type of content, and OCI registries are readily available... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2745/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2745/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7985 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7985/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7985/comments | https://api.github.com/repos/ollama/ollama/issues/7985/events | https://github.com/ollama/ollama/issues/7985 | 2,724,454,134 | I_kwDOJ0Z1Ps6iY972 | 7,985 | Change /delete endpoint to use POST request | {
"login": "anxkhn",
"id": 83116240,
"node_id": "MDQ6VXNlcjgzMTE2MjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/83116240?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anxkhn",
"html_url": "https://github.com/anxkhn",
"followers_url": "https://api.github.com/users/anxkhn/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 2 | 2024-12-07T07:58:12 | 2024-12-09T22:37:38 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
**Description:**
The Ollama API's `/delete` endpoint currently uses a DELETE request with a message body to specify the model to be deleted. This directly conflicts with the HTTP specification, which states that the DELETE method should NOT have a request body. [1](https://developer.mozilla.o... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7985/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7985/timeline | null | null | false |
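For context, the call the report objects to looks like this in the docs (model name hypothetical):

```sh
# DELETE carrying a JSON request body, which the report argues conflicts with the HTTP spec.
curl -X DELETE http://localhost:11434/api/delete -d '{"model": "llama3"}'
```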
https://api.github.com/repos/ollama/ollama/issues/8276 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8276/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8276/comments | https://api.github.com/repos/ollama/ollama/issues/8276/events | https://github.com/ollama/ollama/issues/8276 | 2,764,599,075 | I_kwDOJ0Z1Ps6kyG8j | 8,276 | Ollama cannot load model after several hours on some GPUs | {
"login": "QichangZheng",
"id": 82627111,
"node_id": "MDQ6VXNlcjgyNjI3MTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/82627111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QichangZheng",
"html_url": "https://github.com/QichangZheng",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 10 | 2024-12-31T18:28:27 | 2025-01-13T01:49:53 | 2025-01-13T01:49:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
It works well on an L20 GPU.
When I switch to an H20 GPU, Ollama dies after several hours: it can no longer load a model, and `ollama run gemma2:2b` hangs indefinitely (as shown in the screenshot), though `ollama serve -h` still works. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8276/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4332 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4332/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4332/comments | https://api.github.com/repos/ollama/ollama/issues/4332/events | https://github.com/ollama/ollama/issues/4332 | 2,290,541,092 | I_kwDOJ0Z1Ps6IhuIk | 4,332 | Difference in performance between liuhaotian/llava-v1.6-34b and Ollama's llava:34b-v1.6 | {
"login": "EricWiener",
"id": 18071029,
"node_id": "MDQ6VXNlcjE4MDcxMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/18071029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EricWiener",
"html_url": "https://github.com/EricWiener",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-05-11T00:07:54 | 2024-05-13T15:38:45 | 2024-05-13T15:38:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When using the demo [here](https://llava.hliu.cc/), I get much better results with their `llava-v1.6-34b` than I do via `ollama run llava:34b-v1.6` when prompting with the same prompt followed by the image. Example of how I'm prompting:
```
ollama run llava:34b-v1.6 --verbose
>>> Is there a d... | {
"login": "EricWiener",
"id": 18071029,
"node_id": "MDQ6VXNlcjE4MDcxMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/18071029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EricWiener",
"html_url": "https://github.com/EricWiener",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4332/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/876 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/876/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/876/comments | https://api.github.com/repos/ollama/ollama/issues/876/events | https://github.com/ollama/ollama/issues/876 | 1,956,035,879 | I_kwDOJ0Z1Ps50lr0n | 876 | Linux: In console session ollama can't answer /show requests | {
"login": "byteconcepts",
"id": 33394779,
"node_id": "MDQ6VXNlcjMzMzk0Nzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/33394779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/byteconcepts",
"html_url": "https://github.com/byteconcepts",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 1 | 2023-10-22T19:59:56 | 2023-10-23T15:38:19 | 2023-10-23T15:38:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | On Linux, Ollama runs as a system service, and its home directory is defined in its service file.
Am I wrong? I thought that when I start an Ollama console session on a Linux box, the console client is just a client that queries the Ollama system service via the API. Is this wrong?
It seems pretty st... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/876/timeline | null | completed | false |
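The mental model in the question matches the docs: the console is a thin client over the HTTP API, so a console `/show` corresponds to a request along these lines (model name hypothetical, field name per current docs):

```sh
curl http://localhost:11434/api/show -d '{"model": "llama2"}'
```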
https://api.github.com/repos/ollama/ollama/issues/3126 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3126/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3126/comments | https://api.github.com/repos/ollama/ollama/issues/3126/events | https://github.com/ollama/ollama/pull/3126 | 2,184,892,700 | PR_kwDOJ0Z1Ps5pjbkR | 3,126 | server: replace (":") in blob names with ("-") | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [] | closed | false | null | [] | null | 2 | 2024-03-13T21:10:39 | 2024-03-14T18:29:06 | 2024-03-14T18:29:06 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3126",
"html_url": "https://github.com/ollama/ollama/pull/3126",
"diff_url": "https://github.com/ollama/ollama/pull/3126.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3126.patch",
"merged_at": null
} | This commit fixes issues that arise when manually moving blobs from an OS that allows colons in file paths to an OS that does not allow colons.
The fix is to replace colons with dashes in the blob names when `ollama serve` starts, before binding and serving the blobs, and then to consistently use the dash-replaced blob names thr... | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3126/timeline | null | null | true |
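A hedged sketch of the renaming idea the PR body describes, not the actual patch: walk the blob directory and rename any file containing a colon to its dash form (e.g. `sha256:abcd` becomes `sha256-abcd`). The blob directory path below is the default store location and is an assumption here.

```
// Hedged sketch of the colon-to-dash blob renaming idea from the PR body.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// fixBlobNames renames files like "sha256:abcd" to "sha256-abcd" inside dir.
func fixBlobNames(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if strings.Contains(e.Name(), ":") {
			oldPath := filepath.Join(dir, e.Name())
			newPath := filepath.Join(dir, strings.ReplaceAll(e.Name(), ":", "-"))
			if err := os.Rename(oldPath, newPath); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	// Assumed default blob store path; adjust for your install.
	if err := fixBlobNames(os.ExpandEnv("$HOME/.ollama/models/blobs")); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```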
https://api.github.com/repos/ollama/ollama/issues/1406 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1406/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1406/comments | https://api.github.com/repos/ollama/ollama/issues/1406/events | https://github.com/ollama/ollama/pull/1406 | 2,029,340,598 | PR_kwDOJ0Z1Ps5hWNat | 1,406 | docs/tutorials: add bit on how to use Fly GPUs on-demand with Ollama | {
"login": "Xe",
"id": 529003,
"node_id": "MDQ6VXNlcjUyOTAwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/529003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Xe",
"html_url": "https://github.com/Xe",
"followers_url": "https://api.github.com/users/Xe/followers",
"followi... | [] | closed | false | null | [] | null | 0 | 2023-12-06T20:36:01 | 2023-12-06T22:14:02 | 2023-12-06T22:14:02 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1406",
"html_url": "https://github.com/ollama/ollama/pull/1406",
"diff_url": "https://github.com/ollama/ollama/pull/1406.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1406.patch",
"merged_at": "2023-12-06T22:14:02"
} | Docs version of this post: https://fly.io/blog/scaling-llm-ollama/ | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1406/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5543 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5543/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5543/comments | https://api.github.com/repos/ollama/ollama/issues/5543/events | https://github.com/ollama/ollama/issues/5543 | 2,395,769,002 | I_kwDOJ0Z1Ps6OzIiq | 5,543 | Slow inference speed on RTX 3090. | {
"login": "Saniel0",
"id": 118857726,
"node_id": "U_kgDOBxWf_g",
"avatar_url": "https://avatars.githubusercontent.com/u/118857726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saniel0",
"html_url": "https://github.com/Saniel0",
"followers_url": "https://api.github.com/users/Saniel0/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-07-08T14:14:44 | 2024-10-15T23:41:01 | 2024-10-15T23:41:01 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am getting only about 60 t/s, compared to 85 t/s in llama.cpp, when running llama3-8B-q8_0. When I run ollama on an RTX 4080 Super, I get the same performance as llama.cpp. I tried running both natively and in Docker; the results were the same.
That leads me to believe that something is not right... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5543/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/538 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/538/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/538/comments | https://api.github.com/repos/ollama/ollama/issues/538/events | https://github.com/ollama/ollama/issues/538 | 1,899,181,524 | I_kwDOJ0Z1Ps5xMzXU | 538 | Docs request: connecting to front end | {
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 2 | 2023-09-15T23:12:10 | 2023-10-27T07:10:24 | 2023-10-27T07:10:24 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello ollama, excited to start using this!
I have a docs request:
- How to connect a front end to ollama
- Links to examples or suggested front ends
Thank you in advance! | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/538/timeline | null | completed | false |
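In practice, "connecting a front end" mostly reduces to calling the HTTP API from the UI's backend or browser code. A minimal sketch against the chat endpoint (a later addition to the API than this issue; model name and prompt are placeholders):

```
// Hedged sketch: a front end is just a client of the HTTP API.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"model": "llama2",
		"messages": []map[string]string{
			{"role": "user", "content": "Hello!"},
		},
		"stream": false, // single JSON object response
	})
	resp, err := http.Post("http://localhost:11434/api/chat",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```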
https://api.github.com/repos/ollama/ollama/issues/5532 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5532/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5532/comments | https://api.github.com/repos/ollama/ollama/issues/5532/events | https://github.com/ollama/ollama/issues/5532 | 2,394,124,899 | I_kwDOJ0Z1Ps6Os3Jj | 5,532 | Ollama CPU based don't run in a LXC (Host Kernel 6.8.4-3) | {
"login": "T-Herrmann-WI",
"id": 104737662,
"node_id": "U_kgDOBj4rfg",
"avatar_url": "https://avatars.githubusercontent.com/u/104737662?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/T-Herrmann-WI",
"html_url": "https://github.com/T-Herrmann-WI",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 13 | 2024-07-07T16:35:05 | 2024-11-19T15:25:48 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I use Proxmox VE for virtualization. If I install ollama in a Linux VM, it works fine. If I install Ollama in an LXC (host kernel 6.8.4-3), it doesn't work with the CPU.
#####
ollama run tinyllama
Error: timed out waiting for llama runner to start - progress 1.0
#####
For LXC with Ollama and N... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5532/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8596 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8596/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8596/comments | https://api.github.com/repos/ollama/ollama/issues/8596/events | https://github.com/ollama/ollama/issues/8596 | 2,811,682,370 | I_kwDOJ0Z1Ps6nlt5C | 8,596 | Ollama on WSL2 detects GPU but timesout when running inference | {
"login": "rz1027",
"id": 53318196,
"node_id": "MDQ6VXNlcjUzMzE4MTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/53318196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rz1027",
"html_url": "https://github.com/rz1027",
"followers_url": "https://api.github.com/users/rz1027/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 5 | 2025-01-26T17:21:17 | 2025-01-28T04:38:00 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am using ManjaroWSL [https://github.com/sileshn/ManjaroWSL2] on Windows 11. Ollama runs fine on WSL and detects my Nvidia 4070 at startup.
The problem is that when I load a model and run it, I get this error:
`gpu VRAM usage didn't recover within timeout`
and it showed that the process is offlo... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8596/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4439 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4439/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4439/comments | https://api.github.com/repos/ollama/ollama/issues/4439/events | https://github.com/ollama/ollama/pull/4439 | 2,296,522,369 | PR_kwDOJ0Z1Ps5vdV-Y | 4,439 | fix typo in modelfile generation | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2024-05-14T22:32:45 | 2024-05-14T22:34:30 | 2024-05-14T22:34:29 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4439",
"html_url": "https://github.com/ollama/ollama/pull/4439",
"diff_url": "https://github.com/ollama/ollama/pull/4439.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4439.patch",
"merged_at": "2024-05-14T22:34:29"
} | null | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4439/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1417 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1417/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1417/comments | https://api.github.com/repos/ollama/ollama/issues/1417/events | https://github.com/ollama/ollama/issues/1417 | 2,031,118,988 | I_kwDOJ0Z1Ps55EGqM | 1,417 | Cant pull model manifest | {
"login": "bw-Deejee",
"id": 31662809,
"node_id": "MDQ6VXNlcjMxNjYyODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/31662809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bw-Deejee",
"html_url": "https://github.com/bw-Deejee",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 9 | 2023-12-07T16:33:14 | 2024-12-07T15:27:13 | 2023-12-12T14:30:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I just installed ollama on an Azure VM.
Running `ollama run llama2` results in
`pulling manifest ⠴` for a couple of minutes and eventually:
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama2/manifests/latest": dial tcp 34.120.132.20:443: connect: connection timed out
Also visiting the l... | {
"login": "bw-Deejee",
"id": 31662809,
"node_id": "MDQ6VXNlcjMxNjYyODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/31662809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bw-Deejee",
"html_url": "https://github.com/bw-Deejee",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1417/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/665 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/665/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/665/comments | https://api.github.com/repos/ollama/ollama/issues/665/events | https://github.com/ollama/ollama/issues/665 | 1,920,841,853 | I_kwDOJ0Z1Ps5yfbh9 | 665 | Nonsense in output | {
"login": "andersonbcdefg",
"id": 17210823,
"node_id": "MDQ6VXNlcjE3MjEwODIz",
"avatar_url": "https://avatars.githubusercontent.com/u/17210823?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andersonbcdefg",
"html_url": "https://github.com/andersonbcdefg",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 2 | 2023-10-01T16:06:11 | 2023-10-04T20:36:10 | 2023-10-04T20:35:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I've encountered this before when using llama.cpp on its own, but running ollama on my M2 I get random tokens sometimes in the output. (In particular, the model seems to like to produce the token "sierp". I can't be the only person who's experienced this. Not sure where this error comes from, but would be great if some... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/665/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4035 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4035/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4035/comments | https://api.github.com/repos/ollama/ollama/issues/4035/events | https://github.com/ollama/ollama/pull/4035 | 2,270,135,126 | PR_kwDOJ0Z1Ps5uEq_p | 4,035 | Fix relative path lookup | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-04-29T23:01:00 | 2024-04-29T23:08:09 | 2024-04-29T23:08:07 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4035",
"html_url": "https://github.com/ollama/ollama/pull/4035",
"diff_url": "https://github.com/ollama/ollama/pull/4035.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4035.patch",
"merged_at": "2024-04-29T23:08:06"
} | null | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4035/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8265 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8265/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8265/comments | https://api.github.com/repos/ollama/ollama/issues/8265/events | https://github.com/ollama/ollama/issues/8265 | 2,762,204,298 | I_kwDOJ0Z1Ps6ko-SK | 8,265 | The Mac version cannot customize the listening port | {
"login": "Sze-chwan-er",
"id": 45889418,
"node_id": "MDQ6VXNlcjQ1ODg5NDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/45889418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sze-chwan-er",
"html_url": "https://github.com/Sze-chwan-er",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 4 | 2024-12-29T11:06:56 | 2024-12-30T18:56:24 | 2024-12-30T18:56:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
On Mac computers with Apple silicon, it is not possible to change ollama's default listening port, even though I stopped ollama before changing it. Attempting to change the listening port with the following commands, as both a regular user and an administrator, does not work: `OLLAMA_HOST=0.0.0.0:11434... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8265/timeline | null | completed | false |
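For context on why exporting the variable in a shell can have no effect here: the macOS menu-bar app does not inherit shell exports, so the variable must reach the server process itself (the project's FAQ points at `launchctl setenv OLLAMA_HOST ...` followed by restarting the app). Below is a rough sketch, not ollama's actual code, of how a server resolves its bind address from `OLLAMA_HOST` with a default fallback:

```
// Rough sketch of resolving a bind address from OLLAMA_HOST. The point is
// that the variable must be visible in the server's environment, not just
// in the shell you ran a client command from.
package main

import (
	"fmt"
	"net"
	"os"
)

func bindAddr() string {
	host := os.Getenv("OLLAMA_HOST")
	if host == "" {
		return "127.0.0.1:11434" // assumed default
	}
	if _, _, err := net.SplitHostPort(host); err != nil {
		// No port given; fall back to the default port.
		return net.JoinHostPort(host, "11434")
	}
	return host
}

func main() {
	fmt.Println("would listen on", bindAddr())
}
```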
https://api.github.com/repos/ollama/ollama/issues/4978 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4978/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4978/comments | https://api.github.com/repos/ollama/ollama/issues/4978/events | https://github.com/ollama/ollama/issues/4978 | 2,346,179,242 | I_kwDOJ0Z1Ps6L19qq | 4,978 | Systemd service on update | {
"login": "nerkdesign",
"id": 10208672,
"node_id": "MDQ6VXNlcjEwMjA4Njcy",
"avatar_url": "https://avatars.githubusercontent.com/u/10208672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nerkdesign",
"html_url": "https://github.com/nerkdesign",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-06-11T11:31:20 | 2024-06-11T13:58:16 | 2024-06-11T13:58:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello,
I set environment variables in ollama.service in systemd (origins, host...)
Is there a way to update ollama without erasing this service?
Many thanks | {
"login": "nerkdesign",
"id": 10208672,
"node_id": "MDQ6VXNlcjEwMjA4Njcy",
"avatar_url": "https://avatars.githubusercontent.com/u/10208672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nerkdesign",
"html_url": "https://github.com/nerkdesign",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4978/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/563 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/563/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/563/comments | https://api.github.com/repos/ollama/ollama/issues/563/events | https://github.com/ollama/ollama/pull/563 | 1,906,495,748 | PR_kwDOJ0Z1Ps5a3R_p | 563 | Context saving/loading | {
"login": "voodooattack",
"id": 1285826,
"node_id": "MDQ6VXNlcjEyODU4MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1285826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/voodooattack",
"html_url": "https://github.com/voodooattack",
"followers_url": "https://api.github.com... | [] | closed | false | null | [] | null | 2 | 2023-09-21T09:12:25 | 2024-01-21T07:08:04 | 2024-01-21T00:12:29 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/563",
"html_url": "https://github.com/ollama/ollama/pull/563",
"diff_url": "https://github.com/ollama/ollama/pull/563.diff",
"patch_url": "https://github.com/ollama/ollama/pull/563.patch",
"merged_at": null
} | This adds support for saving a conversation/session and resuming it later. The context is (de)serialised as a plain JSON array to/from the specified file.
This adds the following commands:
```
/context save <filename>
/context load <filename>
``` | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/563/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/563/timeline | null | null | true |
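A sketch of the (de)serialisation the PR describes: the conversation context is a slice of token IDs written to and read from a plain JSON file. Function names, the file name, and the sample token IDs are illustrative, not the PR's actual code.

```
// Hedged sketch: saving and loading a conversation context as a JSON array.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func saveContext(path string, ctx []int) error {
	data, err := json.Marshal(ctx)
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o644)
}

func loadContext(path string) ([]int, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var ctx []int
	err = json.Unmarshal(data, &ctx)
	return ctx, err
}

func main() {
	// Illustrative token IDs only.
	if err := saveContext("session.json", []int{1, 15043, 3186}); err != nil {
		panic(err)
	}
	ctx, err := loadContext("session.json")
	if err != nil {
		panic(err)
	}
	fmt.Println(ctx)
}
```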
https://api.github.com/repos/ollama/ollama/issues/3573 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3573/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3573/comments | https://api.github.com/repos/ollama/ollama/issues/3573/events | https://github.com/ollama/ollama/issues/3573 | 2,235,213,795 | I_kwDOJ0Z1Ps6FOqfj | 3,573 | Making separate standalone version for UsbDrives | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"f... | [] | closed | false | null | [] | null | 2 | 2024-04-10T10:05:00 | 2024-12-31T16:44:07 | 2024-04-19T14:56:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
I recently installed Ollama on my Windows PC; by default it installs on the C: drive!
Could you work on a standalone version of Ollama for USB drives, so I can carry the drive anywhere and use it offline?
This would help universalize AI. I have a 512 GB USB drive, but I cann... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3573/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1948 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1948/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1948/comments | https://api.github.com/repos/ollama/ollama/issues/1948/events | https://github.com/ollama/ollama/issues/1948 | 2,078,226,913 | I_kwDOJ0Z1Ps573znh | 1,948 | Understanding Response Data Structure | {
"login": "tmattoneill",
"id": 4083139,
"node_id": "MDQ6VXNlcjQwODMxMzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4083139?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tmattoneill",
"html_url": "https://github.com/tmattoneill",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | 2 | 2024-01-12T07:23:02 | 2024-01-13T08:11:15 | 2024-01-13T01:09:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm really confused by Ollama's response from the API. Most other LLM's I've used return a consistent model / JSON object that can serve as the 'assistant' response. However, Ollama returns a different, seemingly random JSON / object every time. This makes it nearly impossible to extract the reply from any prompt. See ... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1948/timeline | null | not_planned | false |
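The "different JSON every time" is most likely the API's streaming default rather than randomness: `/api/generate` emits one JSON object per chunk, and the assistant reply is the concatenation of their `response` fields (alternatively, pass `"stream": false` to get a single object). A minimal sketch, with model and prompt as placeholders:

```
// Hedged sketch: decoding Ollama's streamed chunks and assembling the reply
// from the "response" field of each one.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

type chunk struct {
	Response string `json:"response"`
	Done     bool   `json:"done"`
}

func main() {
	body, _ := json.Marshal(map[string]any{
		"model":  "llama2",
		"prompt": "Why is the sky blue?",
	})
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var reply strings.Builder
	dec := json.NewDecoder(resp.Body)
	for {
		var c chunk
		if err := dec.Decode(&c); err != nil {
			break // end of stream
		}
		reply.WriteString(c.Response)
		if c.Done {
			break
		}
	}
	fmt.Println(reply.String())
}
```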