Columns (name: type, observed stats):

url: string (lengths 51–54)
repository_url: string (1 distinct value)
labels_url: string (lengths 65–68)
comments_url: string (lengths 60–63)
events_url: string (lengths 58–61)
html_url: string (lengths 39–44)
id: int64 (1.78B–2.82B)
node_id: string (lengths 18–19)
number: int64 (1–8.69k)
title: string (lengths 1–382)
user: dict
labels: list (lengths 0–5)
state: string (2 distinct values)
locked: bool (1 distinct value)
assignee: dict
assignees: list (lengths 0–2)
milestone: null
comments: int64 (0–323)
created_at: timestamp[s]
updated_at: timestamp[s]
closed_at: timestamp[s]
author_association: string (4 distinct values)
sub_issues_summary: dict
active_lock_reason: null
draft: bool (2 distinct values)
pull_request: dict
body: string (lengths 2–118k)
closed_by: dict
reactions: dict
timeline_url: string (lengths 60–63)
performed_via_github_app: null
state_reason: string (4 distinct values)
is_pull_request: bool (2 distinct values)
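The column list above doubles as a per-record contract. As a minimal sketch (a hypothetical helper, not part of any dataset tooling, checking only a few of the fields), a record could be validated against the observed ranges like this:

```python
def validate_record(rec: dict) -> list[str]:
    """Return a list of violations of the observed schema constraints
    for a single issue record (subset of fields only)."""
    errors = []
    # url: string with observed length 51-54
    if not (51 <= len(rec["url"]) <= 54):
        errors.append("url length out of observed range 51-54")
    # state: 2 observed classes (open/closed)
    if rec["state"] not in {"open", "closed"}:
        errors.append("state not one of the 2 observed classes")
    # labels: list with observed length 0-5
    if not (0 <= len(rec["labels"]) <= 5):
        errors.append("labels list length out of observed range 0-5")
    # is_pull_request: bool
    if not isinstance(rec["is_pull_request"], bool):
        errors.append("is_pull_request must be bool")
    return errors


# Example record trimmed to the fields checked above.
example = {
    "url": "https://api.github.com/repos/ollama/ollama/issues/4177",
    "state": "closed",
    "labels": [{"name": "bug"}],
    "is_pull_request": False,
}
print(validate_record(example))  # → []
```

The field names mirror the columns above; the range bounds are the viewer's observed statistics, not hard guarantees of the GitHub API.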
url: https://api.github.com/repos/ollama/ollama/issues/4177
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/4177/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/4177/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/4177/events
html_url: https://github.com/ollama/ollama/issues/4177
id: 2,279,703,592
node_id: I_kwDOJ0Z1Ps6H4YQo
number: 4,177
title: pull orca2:7b-fp16 Error: EOF
user: { "login": "MarkWard0110", "id": 90335263, "node_id": "MDQ6VXNlcjkwMzM1MjYz", "avatar_url": "https://avatars.githubusercontent.com/u/90335263?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MarkWard0110", "html_url": "https://github.com/MarkWard0110", "followers_url": "https://api.github.c...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2024-05-05T19:57:54
updated_at: 2024-05-06T18:53:25
closed_at: 2024-05-06T18:33:30
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? `ollama pull orca2:7b-fp16` errors with `Error: EOF` when it is pulling manifest. ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.1.33
closed_by: { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4177/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/4177/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/1295
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/1295/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/1295/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/1295/events
html_url: https://github.com/ollama/ollama/pull/1295
id: 2,013,425,784
node_id: PR_kwDOJ0Z1Ps5ggBuv
number: 1,295
title: Add verbose request logs to server.
user: { "login": "rootedbox", "id": 3997890, "node_id": "MDQ6VXNlcjM5OTc4OTA=", "avatar_url": "https://avatars.githubusercontent.com/u/3997890?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rootedbox", "html_url": "https://github.com/rootedbox", "followers_url": "https://api.github.com/users/ro...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2023-11-28T01:28:16
updated_at: 2024-05-07T23:46:51
closed_at: 2024-05-07T23:46:51
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/1295", "html_url": "https://github.com/ollama/ollama/pull/1295", "diff_url": "https://github.com/ollama/ollama/pull/1295.diff", "patch_url": "https://github.com/ollama/ollama/pull/1295.patch", "merged_at": null }
body: Add verbose request logs to server. Completes https://github.com/jmorganca/ollama/issues/1118 example output ``` 2023/11/27 12:30:18 routes.go:736: Request POST - /api/generate; QueryParams: map[]; URLParams: []; Body: {"model":"orca-mini","prompt":"word up","system":"","template":"","context":[31822,13,8458,3192...
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/1295/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
url: https://api.github.com/repos/ollama/ollama/issues/5268
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5268/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5268/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5268/events
html_url: https://github.com/ollama/ollama/pull/5268
id: 2,371,666,486
node_id: PR_kwDOJ0Z1Ps5zcZy6
number: 5,268
title: Add Windows on ARM64 build instructions
user: { "login": "hmartinez82", "id": 1100440, "node_id": "MDQ6VXNlcjExMDA0NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/1100440?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hmartinez82", "html_url": "https://github.com/hmartinez82", "followers_url": "https://api.github.com/us...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-06-25T05:10:51
updated_at: 2024-11-21T17:49:40
closed_at: 2024-11-21T17:44:28
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/5268", "html_url": "https://github.com/ollama/ollama/pull/5268", "diff_url": "https://github.com/ollama/ollama/pull/5268.diff", "patch_url": "https://github.com/ollama/ollama/pull/5268.patch", "merged_at": null }
body: null
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5268/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5268/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
url: https://api.github.com/repos/ollama/ollama/issues/4859
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/4859/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/4859/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/4859/events
html_url: https://github.com/ollama/ollama/issues/4859
id: 2,338,466,659
node_id: I_kwDOJ0Z1Ps6LYitj
number: 4,859
title: glm-4-9b-chat
user: { "login": "enryteam", "id": 20081090, "node_id": "MDQ6VXNlcjIwMDgxMDkw", "avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/enryteam", "html_url": "https://github.com/enryteam", "followers_url": "https://api.github.com/users/enr...
labels: [ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-06-06T14:54:47
updated_at: 2024-06-06T17:34:23
closed_at: 2024-06-06T17:34:23
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: https://modelscope.cn/models/ZhipuAI/glm-4-9b-chat thanks !
closed_by: { "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4859/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/4859/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/150
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/150/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/150/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/150/events
html_url: https://github.com/ollama/ollama/issues/150
id: 1,814,849,853
node_id: I_kwDOJ0Z1Ps5sLGk9
number: 150
title: Need word wrap
user: { "login": "nathanleclaire", "id": 1476820, "node_id": "MDQ6VXNlcjE0NzY4MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/1476820?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nathanleclaire", "html_url": "https://github.com/nathanleclaire", "followers_url": "https://api.gith...
labels: [ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 3
created_at: 2023-07-20T21:56:36
updated_at: 2023-09-26T22:57:12
closed_at: 2023-09-26T22:57:12
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: <img width="97" alt="image" src="https://github.com/jmorganca/ollama/assets/1476820/fa92e643-a994-46ab-b83c-df3e28fbf758"> It pains me
closed_by: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/150/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/150/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/6865
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6865/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6865/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6865/events
html_url: https://github.com/ollama/ollama/issues/6865
id: 2,535,149,334
node_id: I_kwDOJ0Z1Ps6XG08W
number: 6,865
title: qwen2.5 context length
user: { "login": "zlwu", "id": 214708, "node_id": "MDQ6VXNlcjIxNDcwOA==", "avatar_url": "https://avatars.githubusercontent.com/u/214708?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zlwu", "html_url": "https://github.com/zlwu", "followers_url": "https://api.github.com/users/zlwu/followers", ...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-09-19T02:41:04
updated_at: 2024-09-19T23:33:54
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? <img width="674" alt="image" src="https://github.com/user-attachments/assets/03949cc7-07fd-45c4-a09a-4a971e0a3586"> According to the model card, the context length should be **128k**? ### OS _No response_ ### GPU _No response_ ### CPU _No response_ ### Ollama version 0.3.10
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6865/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6865/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/6421
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6421/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6421/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6421/events
html_url: https://github.com/ollama/ollama/pull/6421
id: 2,473,105,480
node_id: PR_kwDOJ0Z1Ps54uONY
number: 6,421
title: Add gitlab.com/tozd/go/fun Go package
user: { "login": "mitar", "id": 585279, "node_id": "MDQ6VXNlcjU4NTI3OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/585279?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mitar", "html_url": "https://github.com/mitar", "followers_url": "https://api.github.com/users/mitar/followers"...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 3
created_at: 2024-08-19T11:15:06
updated_at: 2024-09-04T14:57:37
closed_at: 2024-09-04T14:52:46
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/6421", "html_url": "https://github.com/ollama/ollama/pull/6421", "diff_url": "https://github.com/ollama/ollama/pull/6421.diff", "patch_url": "https://github.com/ollama/ollama/pull/6421.patch", "merged_at": "2024-09-04T14:52:46" }
body: `gitlab.com/tozd/go/fun` is a Go package which provides high-level abstraction to define functions with code (the usual way), data (providing examples of inputs and expected outputs which are then used with an AI model), or natural language description. It is the simplest but powerful way to use large language models (...
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6421/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6421/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
url: https://api.github.com/repos/ollama/ollama/issues/3346
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/3346/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/3346/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/3346/events
html_url: https://github.com/ollama/ollama/pull/3346
id: 2,206,513,425
node_id: PR_kwDOJ0Z1Ps5qsuWv
number: 3,346
title: move community integrations to their own doc
user: { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-03-25T19:21:03
updated_at: 2024-04-01T15:14:19
closed_at: 2024-04-01T15:14:19
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/3346", "html_url": "https://github.com/ollama/ollama/pull/3346", "diff_url": "https://github.com/ollama/ollama/pull/3346.diff", "patch_url": "https://github.com/ollama/ollama/pull/3346.patch", "merged_at": null }
body: The community integration section at the end of the README is getting quite long, moving it to its own doc to keep things tidy.
closed_by: { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/3346/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/3346/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
url: https://api.github.com/repos/ollama/ollama/issues/5432
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5432/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5432/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5432/events
html_url: https://github.com/ollama/ollama/issues/5432
id: 2,386,047,676
node_id: I_kwDOJ0Z1Ps6OODK8
number: 5,432
title: level=ERROR source=sched.go:388 msg="error loading llama server" error="llama runner process no longer running: -1 "
user: { "login": "popav4", "id": 7868172, "node_id": "MDQ6VXNlcjc4NjgxNzI=", "avatar_url": "https://avatars.githubusercontent.com/u/7868172?v=4", "gravatar_id": "", "url": "https://api.github.com/users/popav4", "html_url": "https://github.com/popav4", "followers_url": "https://api.github.com/users/popav4/foll...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2024-07-02T11:49:27
updated_at: 2024-07-02T20:18:50
closed_at: 2024-07-02T20:18:24
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? Macbook Air M1 Run with Docker: `docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama` `docker exec -it ollama ollama run codestral:22b` Error: > level=ERROR source=sched.go:388 msg="error loading llama server" error="llama runner process no longer running: -1 " ...
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5432/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5432/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/7747
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/7747/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/7747/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/7747/events
html_url: https://github.com/ollama/ollama/issues/7747
id: 2,673,723,799
node_id: I_kwDOJ0Z1Ps6fXcmX
number: 7,747
title: Support Pixtral Large
user: { "login": "YuntianZhao", "id": 32049544, "node_id": "MDQ6VXNlcjMyMDQ5NTQ0", "avatar_url": "https://avatars.githubusercontent.com/u/32049544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YuntianZhao", "html_url": "https://github.com/YuntianZhao", "followers_url": "https://api.github.com/...
labels: [ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2024-11-19T22:32:19
updated_at: 2024-11-21T06:58:07
closed_at: 2024-11-21T06:58:06
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: Mistral AI just released Pixtral Large, a 124B multimodal model built on top of Mistral Large 2. See https://huggingface.co/mistralai/Pixtral-Large-Instruct-2411
closed_by: { "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7747/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7747/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/7734
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/7734/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/7734/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/7734/events
html_url: https://github.com/ollama/ollama/issues/7734
id: 2,670,766,147
node_id: I_kwDOJ0Z1Ps6fMKhD
number: 7,734
title: add a Feature : clone and duplicate model from scratch when creating new model from Modelfile to laod to gpu memory
user: { "login": "looijijohn", "id": 180949480, "node_id": "U_kgDOCskR6A", "avatar_url": "https://avatars.githubusercontent.com/u/180949480?v=4", "gravatar_id": "", "url": "https://api.github.com/users/looijijohn", "html_url": "https://github.com/looijijohn", "followers_url": "https://api.github.com/users/loo...
labels: [ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2024-11-19T05:02:18
updated_at: 2024-12-02T15:32:44
closed_at: 2024-12-02T15:32:44
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: when we create a new_model from BaseModel ollama does not load model to memory again he used loaded BaseModel as new model and NewModel does not load as seprate model we want use two model as sepearate two load both in gpu memory we enough memory
closed_by: { "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7734/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7734/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/5721
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5721/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5721/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5721/events
html_url: https://github.com/ollama/ollama/pull/5721
id: 2,410,643,198
node_id: PR_kwDOJ0Z1Ps51fNUE
number: 5,721
title: README: Added AI Studio to the list of UIs
user: { "login": "SommerEngineering", "id": 5158645, "node_id": "MDQ6VXNlcjUxNTg2NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5158645?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SommerEngineering", "html_url": "https://github.com/SommerEngineering", "followers_url": "https:/...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-07-16T09:17:40
updated_at: 2024-07-16T21:24:27
closed_at: 2024-07-16T21:24:27
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/5721", "html_url": "https://github.com/ollama/ollama/pull/5721", "diff_url": "https://github.com/ollama/ollama/pull/5721.diff", "patch_url": "https://github.com/ollama/ollama/pull/5721.patch", "merged_at": "2024-07-16T21:24:27" }
body: I added [AI Studio](https://github.com/MindWorkAI/AI-Studio) to the list of UIs.
closed_by: { "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5721/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5721/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
url: https://api.github.com/repos/ollama/ollama/issues/5246
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5246/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5246/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5246/events
html_url: https://github.com/ollama/ollama/pull/5246
id: 2,368,953,745
node_id: PR_kwDOJ0Z1Ps5zTGc4
number: 5,246
title: llm: speed up gguf decoding by a lot
user: { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 9
created_at: 2024-06-24T00:07:14
updated_at: 2024-06-25T04:49:18
closed_at: 2024-06-25T04:47:52
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/5246", "html_url": "https://github.com/ollama/ollama/pull/5246", "diff_url": "https://github.com/ollama/ollama/pull/5246.diff", "patch_url": "https://github.com/ollama/ollama/pull/5246.patch", "merged_at": "2024-06-25T04:47:52" }
body: Previously, some costly things were causing the loading of GGUF files and their metadata and tensor information to be VERY slow: * Too many allocations when decoding strings * Hitting disk for each read of each key and value, resulting in a not-okay amount of syscalls/disk I/O. The show API is now down...
closed_by: { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5246/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5246/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
url: https://api.github.com/repos/ollama/ollama/issues/4528
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/4528/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/4528/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/4528/events
html_url: https://github.com/ollama/ollama/issues/4528
id: 2,304,889,457
node_id: I_kwDOJ0Z1Ps6JYdJx
number: 4,528
title: OLLAMA_MODELS no longer works
user: { "login": "asmrtfm", "id": 154548075, "node_id": "U_kgDOCTY3aw", "avatar_url": "https://avatars.githubusercontent.com/u/154548075?v=4", "gravatar_id": "", "url": "https://api.github.com/users/asmrtfm", "html_url": "https://github.com/asmrtfm", "followers_url": "https://api.github.com/users/asmrtfm/foll...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
assignees: [ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
milestone: null
comments: 1
created_at: 2024-05-20T01:19:23
updated_at: 2024-05-26T12:03:49
closed_at: 2024-05-26T10:42:52
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: > setting the OLLAMA_MODELS environment variable (was) no longer working. [ edit: A reboot resolved this, so closed. * additional details: models were in ~/.ollama (ownership: 1000:1000); environment variable was accidentally commented out in ~/.bashrc before a reboot - so, unsurprisingly, no dice; uncommented...
closed_by: { "login": "asmrtfm", "id": 154548075, "node_id": "U_kgDOCTY3aw", "avatar_url": "https://avatars.githubusercontent.com/u/154548075?v=4", "gravatar_id": "", "url": "https://api.github.com/users/asmrtfm", "html_url": "https://github.com/asmrtfm", "followers_url": "https://api.github.com/users/asmrtfm/foll...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4528/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/4528/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/214
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/214/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/214/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/214/events
html_url: https://github.com/ollama/ollama/pull/214
id: 1,821,158,562
node_id: PR_kwDOJ0Z1Ps5WYJ16
number: 214
title: allow for concurrent pulls of the same files
user: { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2023-07-25T21:10:24
updated_at: 2023-08-09T15:35:25
closed_at: 2023-08-09T15:35:24
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/214", "html_url": "https://github.com/ollama/ollama/pull/214", "diff_url": "https://github.com/ollama/ollama/pull/214.diff", "patch_url": "https://github.com/ollama/ollama/pull/214.patch", "merged_at": "2023-08-09T15:35:24" }
body: resolves #200
closed_by: { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/214/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/214/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
url: https://api.github.com/repos/ollama/ollama/issues/4222
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/4222/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/4222/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/4222/events
html_url: https://github.com/ollama/ollama/issues/4222
id: 2,282,526,031
node_id: I_kwDOJ0Z1Ps6IDJVP
number: 4,222
title: server not responding
user: { "login": "thomassrour", "id": 79809227, "node_id": "MDQ6VXNlcjc5ODA5MjI3", "avatar_url": "https://avatars.githubusercontent.com/u/79809227?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomassrour", "html_url": "https://github.com/thomassrour", "followers_url": "https://api.github.com/...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q...
state: closed
locked: false
assignee: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
assignees: [ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
milestone: null
comments: 6
created_at: 2024-05-07T07:39:08
updated_at: 2024-05-31T21:35:46
closed_at: 2024-05-31T21:35:45
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? Hello, I have trouble reaching my ollama container. I have tried using the images for 0.1.32 and 0.1.33, as some users reported bugs 0.1.33 but it doesn't work on either. Here is the output of docker logs, when trying mixtral (I have also tried llama3, same result) : time=2024-05-07T07:33...
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/4222/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
https://api.github.com/repos/ollama/ollama/issues/4929
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4929/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4929/comments
https://api.github.com/repos/ollama/ollama/issues/4929/events
https://github.com/ollama/ollama/issues/4929
2,341,613,818
I_kwDOJ0Z1Ps6LkjD6
4,929
Never-ending loading whether using the OpenAI API or Ollama Python
{ "login": "Wannabeasmartguy", "id": 107250451, "node_id": "U_kgDOBmSDEw", "avatar_url": "https://avatars.githubusercontent.com/u/107250451?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Wannabeasmartguy", "html_url": "https://github.com/Wannabeasmartguy", "followers_url": "https://api.gi...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-06-08T11:30:37
2024-08-12T07:40:29
2024-08-12T07:40:29
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi, I'm having a problem: Whether I'm using the OpenAI API or Ollama-python, I get bogged down in never-ending loading when doing model inference. ![image](https://github.com/ollama/ollama/assets/107250451/4b97cb64-e8ed-4c71-a1b0-b3d2d04c397c) When I check the logs, I see that there is no r...
{ "login": "Wannabeasmartguy", "id": 107250451, "node_id": "U_kgDOBmSDEw", "avatar_url": "https://avatars.githubusercontent.com/u/107250451?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Wannabeasmartguy", "html_url": "https://github.com/Wannabeasmartguy", "followers_url": "https://api.gi...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4929/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4929/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/744
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/744/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/744/comments
https://api.github.com/repos/ollama/ollama/issues/744/events
https://github.com/ollama/ollama/issues/744
1,933,682,760
I_kwDOJ0Z1Ps5zQahI
744
"Delete word" buggy in TUI
{ "login": "mjvmroz", "id": 4539332, "node_id": "MDQ6VXNlcjQ1MzkzMzI=", "avatar_url": "https://avatars.githubusercontent.com/u/4539332?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mjvmroz", "html_url": "https://github.com/mjvmroz", "followers_url": "https://api.github.com/users/mjvmroz/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[ { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/us...
null
1
2023-10-09T19:42:42
2023-10-25T23:53:59
2023-10-25T23:53:59
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Using a delete word hotkey (e.g. ctrl-w) when the cursor is within the first word of a prompt causes the entire prompt to be deleted. Steps to reproduce: 1. Type several words at an ollama LLM prompt 2. Move the cursor to the first word (immediately following ">>>") 3. Use a "delete word" hotkey (e.g. ctrl-w) 4....
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/744/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/744/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7203
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7203/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7203/comments
https://api.github.com/repos/ollama/ollama/issues/7203/events
https://github.com/ollama/ollama/pull/7203
2,587,209,895
PR_kwDOJ0Z1Ps5-mcHl
7,203
Move macos v11 support flags to build script
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-10-14T22:45:30
2024-10-16T19:49:49
2024-10-16T19:49:46
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7203", "html_url": "https://github.com/ollama/ollama/pull/7203", "diff_url": "https://github.com/ollama/ollama/pull/7203.diff", "patch_url": "https://github.com/ollama/ollama/pull/7203.patch", "merged_at": "2024-10-16T19:49:46" }
Having v11 support hard-coded into the cgo settings causes warnings for newer Xcode versions. This should help keep the build clean for users building from source with the latest tools, while still allowing us to target the older OS via our CI processes.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7203/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7203/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7392
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7392/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7392/comments
https://api.github.com/repos/ollama/ollama/issues/7392/events
https://github.com/ollama/ollama/issues/7392
2,617,321,426
I_kwDOJ0Z1Ps6cASfS
7,392
Fails to build on macOS with "fatal error: {'string','cstdint'} file not found"
{ "login": "efd6", "id": 90160302, "node_id": "MDQ6VXNlcjkwMTYwMzAy", "avatar_url": "https://avatars.githubusercontent.com/u/90160302?v=4", "gravatar_id": "", "url": "https://api.github.com/users/efd6", "html_url": "https://github.com/efd6", "followers_url": "https://api.github.com/users/efd6/followers"...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677279472, "node_id": "LA_kwDOJ0Z1Ps8AAAABjf8y8A...
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
11
2024-10-28T05:11:29
2024-12-08T21:20:34
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I followed the instructions for building on mac [here](https://github.com/ollama/ollama/blob/main/docs/development.md#macos), but this failed at the `go generate` step. Running `go generate ./...` fails with a set of header files not found errors. ``` $ go generate ./... + set -o pipefail ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7392/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7392/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/8613
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8613/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8613/comments
https://api.github.com/repos/ollama/ollama/issues/8613/events
https://github.com/ollama/ollama/issues/8613
2,813,831,969
I_kwDOJ0Z1Ps6nt6sh
8,613
[v0.5.4] Download timeouts cause download cache corruption. Any download that needs to be retried by re-running ollama ends up corrupted at 100% download(file sha256-sha256hash-partial-0 not found).
{ "login": "esperanza-esperanza", "id": 196695882, "node_id": "U_kgDOC7lXSg", "avatar_url": "https://avatars.githubusercontent.com/u/196695882?v=4", "gravatar_id": "", "url": "https://api.github.com/users/esperanza-esperanza", "html_url": "https://github.com/esperanza-esperanza", "followers_url": "https...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
2
2025-01-27T19:04:50
2025-01-27T19:40:50
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Running ollama through alpaca. I'm aware this is a separate project; I will mirror the bug report. ### OS Linux ### GPU AMD ### CPU AMD ### Ollama version 0.5.4
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8613/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8613/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5596
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5596/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5596/comments
https://api.github.com/repos/ollama/ollama/issues/5596/events
https://github.com/ollama/ollama/issues/5596
2,400,470,468
I_kwDOJ0Z1Ps6PFEXE
5,596
version is 0.2.1 can't run glm4
{ "login": "qiulaidongfeng", "id": 96758349, "node_id": "U_kgDOBcRqTQ", "avatar_url": "https://avatars.githubusercontent.com/u/96758349?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qiulaidongfeng", "html_url": "https://github.com/qiulaidongfeng", "followers_url": "https://api.github.com...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
4
2024-07-10T11:12:41
2024-07-11T03:58:29
2024-07-11T03:58:28
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? run `ollama run glm4` Error: this model is not supported by your version of Ollama. You may need to upgrade ### OS Windows ### GPU AMD ### CPU AMD ### Ollama version ollama version is 0.2.1
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5596/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5596/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8665
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8665/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8665/comments
https://api.github.com/repos/ollama/ollama/issues/8665/events
https://github.com/ollama/ollama/pull/8665
2,818,502,333
PR_kwDOJ0Z1Ps6JYJaA
8,665
Fix /api/create status code
{ "login": "canpacis", "id": 37307107, "node_id": "MDQ6VXNlcjM3MzA3MTA3", "avatar_url": "https://avatars.githubusercontent.com/u/37307107?v=4", "gravatar_id": "", "url": "https://api.github.com/users/canpacis", "html_url": "https://github.com/canpacis", "followers_url": "https://api.github.com/users/can...
[]
open
false
null
[]
null
1
2025-01-29T15:12:55
2025-01-29T23:07:19
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8665", "html_url": "https://github.com/ollama/ollama/pull/8665", "diff_url": "https://github.com/ollama/ollama/pull/8665.diff", "patch_url": "https://github.com/ollama/ollama/pull/8665.patch", "merged_at": null }
The server just bails out without proper http error codes inside the goroutine in /api/create route. Added a simple abort function to write the proper status code and send the `gin.H` map. Also changed the error name and message to be grammatically correct but that's a nitpick, I wouldn't wanna be rude 🙃
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8665/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8665/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5709
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5709/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5709/comments
https://api.github.com/repos/ollama/ollama/issues/5709/events
https://github.com/ollama/ollama/pull/5709
2,409,729,362
PR_kwDOJ0Z1Ps51cMCo
5,709
Add Metrics to `api\embed` response
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
[]
closed
false
null
[]
null
0
2024-07-15T22:10:46
2024-07-30T20:12:23
2024-07-30T20:12:21
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5709", "html_url": "https://github.com/ollama/ollama/pull/5709", "diff_url": "https://github.com/ollama/ollama/pull/5709.diff", "patch_url": "https://github.com/ollama/ollama/pull/5709.patch", "merged_at": "2024-07-30T20:12:21" }
"timings" is returned per request_completion in server.cpp, which must be aggregated to return metrics for a batch of completions. supporting: prompt_eval_count (total number of tokens evaluated), load duration, total duration
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5709/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5709/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2158
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2158/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2158/comments
https://api.github.com/repos/ollama/ollama/issues/2158/events
https://github.com/ollama/ollama/issues/2158
2,096,237,679
I_kwDOJ0Z1Ps588gxv
2,158
Seed option is not working on API
{ "login": "Juliano-uCondo", "id": 153868863, "node_id": "U_kgDOCSvaPw", "avatar_url": "https://avatars.githubusercontent.com/u/153868863?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Juliano-uCondo", "html_url": "https://github.com/Juliano-uCondo", "followers_url": "https://api.github.c...
[]
closed
false
null
[]
null
2
2024-01-23T14:40:32
2024-01-23T18:25:52
2024-01-23T18:25:52
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Even when configuring the seed option, the API returns a different result for each request. I'm using version 0.1.20 ``` { "model": "mistral", "stream": false, "options": { "seed": 0 }, "prompt":"Why is the sky blue?" } ```
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2158/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1419
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1419/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1419/comments
https://api.github.com/repos/ollama/ollama/issues/1419/events
https://github.com/ollama/ollama/pull/1419
2,031,418,323
PR_kwDOJ0Z1Ps5hdVaY
1,419
Simple chat example for typescript
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
[]
closed
false
null
[]
null
1
2023-12-07T19:49:25
2023-12-07T22:42:24
2023-12-07T22:42:24
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1419", "html_url": "https://github.com/ollama/ollama/pull/1419", "diff_url": "https://github.com/ollama/ollama/pull/1419.diff", "patch_url": "https://github.com/ollama/ollama/pull/1419.patch", "merged_at": "2023-12-07T22:42:24" }
A simple example of the chat endpoint
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1419/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1419/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6735
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6735/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6735/comments
https://api.github.com/repos/ollama/ollama/issues/6735/events
https://github.com/ollama/ollama/pull/6735
2,517,822,037
PR_kwDOJ0Z1Ps57DuU5
6,735
runner.go: Prompt caching
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
0
2024-09-10T21:06:37
2024-09-11T03:45:02
2024-09-11T03:45:00
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6735", "html_url": "https://github.com/ollama/ollama/pull/6735", "diff_url": "https://github.com/ollama/ollama/pull/6735.diff", "patch_url": "https://github.com/ollama/ollama/pull/6735.patch", "merged_at": "2024-09-11T03:45:00" }
Currently, KV cache entries from a sequence are discarded at the end of each processing run. In a typical chat conversation, this results in each message taking longer and longer to process as the entire history of the conversation needs to be replayed. Prompt caching retains the KV entries as long as possible so th...
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6735/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2263
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2263/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2263/comments
https://api.github.com/repos/ollama/ollama/issues/2263/events
https://github.com/ollama/ollama/pull/2263
2,106,748,706
PR_kwDOJ0Z1Ps5lZLEE
2,263
Bump llama.cpp to b1999
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
1
2024-01-30T00:57:33
2024-01-31T16:39:44
2024-01-31T16:39:41
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2263", "html_url": "https://github.com/ollama/ollama/pull/2263", "diff_url": "https://github.com/ollama/ollama/pull/2263.diff", "patch_url": "https://github.com/ollama/ollama/pull/2263.patch", "merged_at": "2024-01-31T16:39:41" }
This requires an upstream change to support graceful termination, carried as a patch. Tracking branches for the 2 patches: - 01-cache.diff - https://github.com/dhiltgen/llama.cpp/tree/kv_cache - 02-shutdown.diff - https://github.com/dhiltgen/llama.cpp/tree/server_shutdown I'm going to mark it draft until I can ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2263/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2263/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2239
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2239/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2239/comments
https://api.github.com/repos/ollama/ollama/issues/2239/events
https://github.com/ollama/ollama/issues/2239
2,104,043,986
I_kwDOJ0Z1Ps59aSnS
2,239
stablelm2 is missing in the homepage list
{ "login": "AntDX316", "id": 34279421, "node_id": "MDQ6VXNlcjM0Mjc5NDIx", "avatar_url": "https://avatars.githubusercontent.com/u/34279421?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AntDX316", "html_url": "https://github.com/AntDX316", "followers_url": "https://api.github.com/users/Ant...
[ { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
closed
false
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
[ { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/...
null
4
2024-01-28T08:46:34
2024-03-12T18:34:22
2024-03-12T18:34:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
stablelm2 --verbose is missing in the homepage list who knows what else
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2239/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1145
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1145/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1145/comments
https://api.github.com/repos/ollama/ollama/issues/1145/events
https://github.com/ollama/ollama/issues/1145
1,995,757,401
I_kwDOJ0Z1Ps529NdZ
1,145
Food for thought use cases: Github Actions :octocat:
{ "login": "marcellodesales", "id": 131457, "node_id": "MDQ6VXNlcjEzMTQ1Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/131457?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marcellodesales", "html_url": "https://github.com/marcellodesales", "followers_url": "https://api.git...
[]
closed
false
null
[]
null
1
2023-11-15T23:34:21
2024-02-20T01:10:05
2024-02-20T01:10:04
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I've been working on the implementation of DevSecOps Platforms and I think I came up with a Github Action that can execute the models... Obviously: * You must have Github Action Runners powered by GPUs * You can implement pretty much anything with the model given you have the file-system and the containers to imple...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1145/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1145/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4354
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4354/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4354/comments
https://api.github.com/repos/ollama/ollama/issues/4354/events
https://github.com/ollama/ollama/issues/4354
2,290,822,137
I_kwDOJ0Z1Ps6Iiyv5
4,354
Models often don't load on versions after 0.1.132
{ "login": "ProjectMoon", "id": 183856, "node_id": "MDQ6VXNlcjE4Mzg1Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/183856?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ProjectMoon", "html_url": "https://github.com/ProjectMoon", "followers_url": "https://api.github.com/user...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6849881759, "node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
9
2024-05-11T10:17:30
2024-10-16T18:39:33
2024-10-16T18:39:33
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Many models, in particular codegemma 1.1 7b q8_0, don't load for various reasons on versions after 0.1.132. Works fine on 132. I don't have the logs on hand at the moment, but can add them later. The errors relate to out of memory errors and unable to reset the GPU VRAM. This is using ROCm (o...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4354/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/4354/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1244
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1244/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1244/comments
https://api.github.com/repos/ollama/ollama/issues/1244/events
https://github.com/ollama/ollama/pull/1244
2,006,873,674
PR_kwDOJ0Z1Ps5gKOHw
1,244
do not fail on unsupported template variables
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
1
2023-11-22T18:11:04
2023-12-06T21:23:05
2023-12-06T21:23:04
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1244", "html_url": "https://github.com/ollama/ollama/pull/1244", "diff_url": "https://github.com/ollama/ollama/pull/1244.diff", "patch_url": "https://github.com/ollama/ollama/pull/1244.patch", "merged_at": "2023-12-06T21:23:04" }
- do not fail on unsupported parameters in model template resolves #1242
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1244/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1244/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5660
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5660/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5660/comments
https://api.github.com/repos/ollama/ollama/issues/5660/events
https://github.com/ollama/ollama/issues/5660
2,406,550,200
I_kwDOJ0Z1Ps6PcQq4
5,660
Ollama 0.2.2 cannot read the system prompt when invoking the API using Python.
{ "login": "letdo1945", "id": 64049222, "node_id": "MDQ6VXNlcjY0MDQ5MjIy", "avatar_url": "https://avatars.githubusercontent.com/u/64049222?v=4", "gravatar_id": "", "url": "https://api.github.com/users/letdo1945", "html_url": "https://github.com/letdo1945", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
2
2024-07-13T00:54:34
2024-07-13T05:25:11
2024-07-13T05:25:11
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? model: qwen2&glm4 After the Ollama update, when I invoke Ollama through Python, the model is unable to read the system prompt. ``` def LLM_Process(model, sys_prom, usr_prom): messages = [ {'role': 'user', 'content': usr_prom}, {'role': 'system', 'content': sys_prom} ...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5660/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5660/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3348
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3348/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3348/comments
https://api.github.com/repos/ollama/ollama/issues/3348/events
https://github.com/ollama/ollama/pull/3348
2,206,679,955
PR_kwDOJ0Z1Ps5qtThd
3,348
Bump llama.cpp to b2527
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-03-25T20:48:19
2024-03-25T21:15:56
2024-03-25T21:15:53
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3348", "html_url": "https://github.com/ollama/ollama/pull/3348", "diff_url": "https://github.com/ollama/ollama/pull/3348.diff", "patch_url": "https://github.com/ollama/ollama/pull/3348.patch", "merged_at": "2024-03-25T21:15:53" }
null
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3348/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3348/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1675
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1675/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1675/comments
https://api.github.com/repos/ollama/ollama/issues/1675/events
https://github.com/ollama/ollama/pull/1675
2,054,195,066
PR_kwDOJ0Z1Ps5iqpZv
1,675
Quiet down llama.cpp logging by default
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2023-12-22T16:48:08
2023-12-22T16:57:21
2023-12-22T16:57:18
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1675", "html_url": "https://github.com/ollama/ollama/pull/1675", "diff_url": "https://github.com/ollama/ollama/pull/1675.diff", "patch_url": "https://github.com/ollama/ollama/pull/1675.patch", "merged_at": "2023-12-22T16:57:18" }
By default builds will now produce non-debug and non-verbose binaries. To enable verbose logs in llama.cpp and debug symbols in the native code, set `CGO_CFLAGS=-g`
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1675/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1675/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3096
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3096/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3096/comments
https://api.github.com/repos/ollama/ollama/issues/3096/events
https://github.com/ollama/ollama/issues/3096
2,183,322,186
I_kwDOJ0Z1Ps6CItpK
3,096
Is it possible to download the models from browser?
{ "login": "OguzcanOzdemir", "id": 24637523, "node_id": "MDQ6VXNlcjI0NjM3NTIz", "avatar_url": "https://avatars.githubusercontent.com/u/24637523?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OguzcanOzdemir", "html_url": "https://github.com/OguzcanOzdemir", "followers_url": "https://api.gi...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
5
2024-03-13T07:44:28
2024-04-08T16:37:08
2024-04-08T16:37:08
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello, I need to download the models from the browser. Is it possible?
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3096/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3096/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1784
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1784/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1784/comments
https://api.github.com/repos/ollama/ollama/issues/1784/events
https://github.com/ollama/ollama/issues/1784
2,065,916,415
I_kwDOJ0Z1Ps57I2H_
1,784
Simpler UI / CLI for predicting model performance on user's device?
{ "login": "TahaScripts", "id": 98236583, "node_id": "U_kgDOBdr4pw", "avatar_url": "https://avatars.githubusercontent.com/u/98236583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TahaScripts", "html_url": "https://github.com/TahaScripts", "followers_url": "https://api.github.com/users/Ta...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2024-01-04T16:01:31
2024-01-04T17:47:14
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, I was wondering if the current Ollama system grabs any system information regarding CPU and RAM capacities. Especially since a majority of users are on Mac, there's a finite # of hardware specs for Ollama to recognize. Then, Ollama can automatically recommend which models will run best on the user's Mac. I thin...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1784/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1784/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/634
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/634/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/634/comments
https://api.github.com/repos/ollama/ollama/issues/634/events
https://github.com/ollama/ollama/pull/634
1,918,029,920
PR_kwDOJ0Z1Ps5beMgW
634
use int64 consistently
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-09-28T18:07:46
2023-09-28T21:17:49
2023-09-28T21:17:47
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/634", "html_url": "https://github.com/ollama/ollama/pull/634", "diff_url": "https://github.com/ollama/ollama/pull/634.diff", "patch_url": "https://github.com/ollama/ollama/pull/634.patch", "merged_at": "2023-09-28T21:17:47" }
this reduces type conversion
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/634/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/634/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7099
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7099/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7099/comments
https://api.github.com/repos/ollama/ollama/issues/7099/events
https://github.com/ollama/ollama/issues/7099
2,565,531,603
I_kwDOJ0Z1Ps6Y6ufT
7,099
Integrate in Chrome, Chrome Extension
{ "login": "kishanios123", "id": 60137209, "node_id": "MDQ6VXNlcjYwMTM3MjA5", "avatar_url": "https://avatars.githubusercontent.com/u/60137209?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kishanios123", "html_url": "https://github.com/kishanios123", "followers_url": "https://api.github.c...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-10-04T06:09:31
2024-12-02T14:34:54
2024-12-02T14:34:54
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi Ollama team, I’d like to suggest a feature to integrate Ollama with a Chrome extension that enables auto-replies directly within email platforms (Gmail, Outlook) and other text fields (social media, messaging apps, etc.). Main Benefit: Users could generate replies without leaving the current tab or copy-pastin...
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7099/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7099/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/6978
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6978/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6978/comments
https://api.github.com/repos/ollama/ollama/issues/6978/events
https://github.com/ollama/ollama/issues/6978
2,550,242,137
I_kwDOJ0Z1Ps6YAZtZ
6,978
rerank model
{ "login": "HARISHSENTHIL", "id": 99972344, "node_id": "U_kgDOBfV0-A", "avatar_url": "https://avatars.githubusercontent.com/u/99972344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HARISHSENTHIL", "html_url": "https://github.com/HARISHSENTHIL", "followers_url": "https://api.github.com/us...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
3
2024-09-26T10:57:42
2024-12-02T23:02:07
2024-12-02T23:02:07
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
How can I add the HF BAAI/bge-reranker-v2-m3 rerank model to Ollama? While trying this approach I am getting an architecture error. Can anyone help resolve this issue?
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6978/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6978/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3731
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3731/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3731/comments
https://api.github.com/repos/ollama/ollama/issues/3731/events
https://github.com/ollama/ollama/issues/3731
2,250,334,080
I_kwDOJ0Z1Ps6GIV-A
3,731
Startup error after upgrading to the latest version
{ "login": "hyanqing1", "id": 26663452, "node_id": "MDQ6VXNlcjI2NjYzNDUy", "avatar_url": "https://avatars.githubusercontent.com/u/26663452?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hyanqing1", "html_url": "https://github.com/hyanqing1", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
0
2024-04-18T10:31:38
2024-04-18T10:32:54
2024-04-18T10:32:54
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I upgraded to the latest version 0.1.32 and got an error on startup: Error: llama runner process no longer running: 3221225785 After reinstalling version 0.1.31, it starts normally. My system is Windows 10. ### OS Windows ### GPU Intel ### CPU Intel ### Ollama version 0.1.32
{ "login": "hyanqing1", "id": 26663452, "node_id": "MDQ6VXNlcjI2NjYzNDUy", "avatar_url": "https://avatars.githubusercontent.com/u/26663452?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hyanqing1", "html_url": "https://github.com/hyanqing1", "followers_url": "https://api.github.com/users/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3731/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3731/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2142
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2142/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2142/comments
https://api.github.com/repos/ollama/ollama/issues/2142/events
https://github.com/ollama/ollama/pull/2142
2,094,667,056
PR_kwDOJ0Z1Ps5kwv9m
2,142
Debug logging on init failure
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-01-22T20:09:52
2024-01-22T20:29:26
2024-01-22T20:29:23
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2142", "html_url": "https://github.com/ollama/ollama/pull/2142", "diff_url": "https://github.com/ollama/ollama/pull/2142.diff", "patch_url": "https://github.com/ollama/ollama/pull/2142.patch", "merged_at": "2024-01-22T20:29:23" }
One class of error we're seeing on ROCm looks like this in the log... ``` 2024/01/21 22:00:15 dyn_ext_server.go:90: INFO Loading Dynamic llm server: /tmp/ollama1546965028/rocm_v5/libext_server.so 2024/01/21 22:00:15 dyn_ext_server.go:139: INFO Initializing llama server free(): invalid pointer ``` I'm not sure y...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2142/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2142/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3762
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3762/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3762/comments
https://api.github.com/repos/ollama/ollama/issues/3762/events
https://github.com/ollama/ollama/pull/3762
2,253,761,317
PR_kwDOJ0Z1Ps5tNpDc
3,762
chore(deps): Update dependencies
{ "login": "reneleonhardt", "id": 65483435, "node_id": "MDQ6VXNlcjY1NDgzNDM1", "avatar_url": "https://avatars.githubusercontent.com/u/65483435?v=4", "gravatar_id": "", "url": "https://api.github.com/users/reneleonhardt", "html_url": "https://github.com/reneleonhardt", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
1
2024-04-19T19:18:34
2024-11-24T22:42:58
2024-11-24T22:42:58
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3762", "html_url": "https://github.com/ollama/ollama/pull/3762", "diff_url": "https://github.com/ollama/ollama/pull/3762.diff", "patch_url": "https://github.com/ollama/ollama/pull/3762.patch", "merged_at": null }
Please note that most updates are minor except macOS: 12 (2022) to 13 (2023). In any case, even in 12 there would be a much newer (default) Xcode 14.2 available, why has the release been downgraded to 13.4 recently by 2 major versions? I could only see a pull request, but no issue... 🤔 https://github.com/actions/r...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3762/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3762/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8593
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8593/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8593/comments
https://api.github.com/repos/ollama/ollama/issues/8593/events
https://github.com/ollama/ollama/issues/8593
2,811,574,264
I_kwDOJ0Z1Ps6nlTf4
8,593
ollama fails to detect old models after update
{ "login": "nevakrien", "id": 101988414, "node_id": "U_kgDOBhQ4Pg", "avatar_url": "https://avatars.githubusercontent.com/u/101988414?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nevakrien", "html_url": "https://github.com/nevakrien", "followers_url": "https://api.github.com/users/nevakr...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2025-01-26T13:53:46
2025-01-26T14:39:27
2025-01-26T14:39:26
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? so my setup has a symlink for running ollama models and I think I have over a terabyte of model weights, so if there is a way to make it so I don't need to download the entire thing again I would be very happy ### OS Linux ### GPU _No response_ ### CPU _No response_ ### Ollama version 0...
{ "login": "nevakrien", "id": 101988414, "node_id": "U_kgDOBhQ4Pg", "avatar_url": "https://avatars.githubusercontent.com/u/101988414?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nevakrien", "html_url": "https://github.com/nevakrien", "followers_url": "https://api.github.com/users/nevakr...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8593/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8593/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8051
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8051/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8051/comments
https://api.github.com/repos/ollama/ollama/issues/8051/events
https://github.com/ollama/ollama/pull/8051
2,733,602,704
PR_kwDOJ0Z1Ps6E5MBB
8,051
feat: add option to specify runner name and path in env
{ "login": "thewh1teagle", "id": 61390950, "node_id": "MDQ6VXNlcjYxMzkwOTUw", "avatar_url": "https://avatars.githubusercontent.com/u/61390950?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thewh1teagle", "html_url": "https://github.com/thewh1teagle", "followers_url": "https://api.github.c...
[]
open
false
null
[]
null
0
2024-12-11T17:45:06
2024-12-11T18:00:46
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8051", "html_url": "https://github.com/ollama/ollama/pull/8051", "diff_url": "https://github.com/ollama/ollama/pull/8051.diff", "patch_url": "https://github.com/ollama/ollama/pull/8051.patch", "merged_at": null }
Add option to specify custom runner path. This will be useful as a temporary solution for [using vulkan](https://github.com/ollama/ollama/pull/5059) until the related PR is merged. macOS: ```console git clone https://github.com/thewh1teagle/ollama -b feat/custom-runner-path cd ollama echo "Building darwin a...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8051/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8051/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8000
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8000/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8000/comments
https://api.github.com/repos/ollama/ollama/issues/8000/events
https://github.com/ollama/ollama/issues/8000
2,725,579,409
I_kwDOJ0Z1Ps6idQqR
8,000
Structured JSON does not handle arrays at the top level properly
{ "login": "scd31", "id": 57571338, "node_id": "MDQ6VXNlcjU3NTcxMzM4", "avatar_url": "https://avatars.githubusercontent.com/u/57571338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/scd31", "html_url": "https://github.com/scd31", "followers_url": "https://api.github.com/users/scd31/follow...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
[ { "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "htt...
null
1
2024-12-08T22:55:25
2024-12-20T22:15:32
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? It looks like structured JSON is not respected when an array is specified at the top level. Example 1: Request: ```json { "model": "llama3.1", "messages": [ { "role": "system", "content": "Given a phrase, give a list of categories" }, { "role": "user", "con...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8000/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8000/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7768
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7768/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7768/comments
https://api.github.com/repos/ollama/ollama/issues/7768/events
https://github.com/ollama/ollama/issues/7768
2,677,036,814
I_kwDOJ0Z1Ps6fkFcO
7,768
Model not loaded on all GPUs for load balancing
{ "login": "brauliobo", "id": 41740, "node_id": "MDQ6VXNlcjQxNzQw", "avatar_url": "https://avatars.githubusercontent.com/u/41740?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brauliobo", "html_url": "https://github.com/brauliobo", "followers_url": "https://api.github.com/users/brauliobo/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2024-11-20T20:04:40
2024-11-20T20:46:24
2024-11-20T20:32:58
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I expect that on a Multi GPU system it would load the model on all GPUs with the docker container loaded with `--gpus all` to balance the requests load between them. Output of `docker logs ollama`: ``` ggml_cuda_init: found 1 CUDA devices: Device 0: NVIDIA GeForce RTX 3060, compute capab...
{ "login": "brauliobo", "id": 41740, "node_id": "MDQ6VXNlcjQxNzQw", "avatar_url": "https://avatars.githubusercontent.com/u/41740?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brauliobo", "html_url": "https://github.com/brauliobo", "followers_url": "https://api.github.com/users/brauliobo/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7768/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7768/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3896
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3896/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3896/comments
https://api.github.com/repos/ollama/ollama/issues/3896/events
https://github.com/ollama/ollama/issues/3896
2,262,375,736
I_kwDOJ0Z1Ps6G2R04
3,896
Command-R fails when using format=json
{ "login": "derenrich", "id": 79513, "node_id": "MDQ6VXNlcjc5NTEz", "avatar_url": "https://avatars.githubusercontent.com/u/79513?v=4", "gravatar_id": "", "url": "https://api.github.com/users/derenrich", "html_url": "https://github.com/derenrich", "followers_url": "https://api.github.com/users/derenrich/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2024-04-25T00:00:58
2024-04-25T00:04:03
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? For some reason command-r is failing when put into JSON format mode. It seems to work fine otherwise. ``` % ollama run command-r --verbose "output the usa as a json" ` ``json { "country": "United States of America", "capital": "Washington, D.C.", "population": 333,271,411...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3896/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3896/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3120
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3120/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3120/comments
https://api.github.com/repos/ollama/ollama/issues/3120/events
https://github.com/ollama/ollama/issues/3120
2,184,601,768
I_kwDOJ0Z1Ps6CNmCo
3,120
Ollama cannot open models with unicode in the filepath
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
0
2024-03-13T18:04:35
2024-04-16T21:00:14
2024-04-16T21:00:14
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Tracking this issue here, split from #2753 ``` time=2024-02-26T00:11:49.314+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\\Users\\ELJKO~1\\AppData\\Local\\Temp\\ollama816527122\\cpu_avx2\\ext_server.dll" time=2024-02-26T00:11:49.314+01:00 level=INFO source=dyn_ext_server.go:150 ms...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3120/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3120/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8417
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8417/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8417/comments
https://api.github.com/repos/ollama/ollama/issues/8417/events
https://github.com/ollama/ollama/issues/8417
2,786,428,310
I_kwDOJ0Z1Ps6mFYWW
8,417
Model request for need:QVQ-72B-Preview and qwen2-vl!
{ "login": "twythebest", "id": 89891289, "node_id": "MDQ6VXNlcjg5ODkxMjg5", "avatar_url": "https://avatars.githubusercontent.com/u/89891289?v=4", "gravatar_id": "", "url": "https://api.github.com/users/twythebest", "html_url": "https://github.com/twythebest", "followers_url": "https://api.github.com/use...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
4
2025-01-14T07:16:09
2025-01-15T22:15:12
2025-01-15T22:15:12
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Please add model:QVQ-72B-Preview and qwen2-vl to ollama!!!!
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8417/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8417/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/514
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/514/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/514/comments
https://api.github.com/repos/ollama/ollama/issues/514/events
https://github.com/ollama/ollama/pull/514
1,892,272,867
PR_kwDOJ0Z1Ps5aHeZV
514
Allow customization of ollama models etc path
{ "login": "tastycode", "id": 809953, "node_id": "MDQ6VXNlcjgwOTk1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/809953?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tastycode", "html_url": "https://github.com/tastycode", "followers_url": "https://api.github.com/users/tast...
[]
closed
false
null
[]
null
1
2023-09-12T11:04:25
2023-10-25T22:35:12
2023-10-25T22:35:11
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/514", "html_url": "https://github.com/ollama/ollama/pull/514", "diff_url": "https://github.com/ollama/ollama/pull/514.diff", "patch_url": "https://github.com/ollama/ollama/pull/514.patch", "merged_at": null }
Responding to https://github.com/jmorganca/ollama/issues/513 It turns out it wasn't that hard to patch it to be customizable via envvar.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/514/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/514/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7925
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7925/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7925/comments
https://api.github.com/repos/ollama/ollama/issues/7925/events
https://github.com/ollama/ollama/issues/7925
2,716,244,159
I_kwDOJ0Z1Ps6h5pi_
7,925
add code to enable ollama cli cmd logging , or disable the new ' if not tty exit ' code PLZZ
{ "login": "fxmbsw7", "id": 39368685, "node_id": "MDQ6VXNlcjM5MzY4Njg1", "avatar_url": "https://avatars.githubusercontent.com/u/39368685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmbsw7", "html_url": "https://github.com/fxmbsw7", "followers_url": "https://api.github.com/users/fxmbsw...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
4
2024-12-03T23:55:42
2024-12-10T21:08:32
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? i have .bashrc like to log ollama cmds newly , this completly stopped working .. neither tee -a $somelog < <( ollama .. ) nor ollama |& tee -a $log nor ollama > >( cat ) .. stay alive .. they exit after ai answers , .. or exit after loading model if no text as cli arguments...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7925/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7925/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3010
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3010/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3010/comments
https://api.github.com/repos/ollama/ollama/issues/3010/events
https://github.com/ollama/ollama/issues/3010
2,176,617,278
I_kwDOJ0Z1Ps6BvIs-
3,010
"Error: invalid file magic" when creating Code Llama model
{ "login": "AI-Guru", "id": 32195399, "node_id": "MDQ6VXNlcjMyMTk1Mzk5", "avatar_url": "https://avatars.githubusercontent.com/u/32195399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AI-Guru", "html_url": "https://github.com/AI-Guru", "followers_url": "https://api.github.com/users/AI-Gur...
[]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
null
2
2024-03-08T19:06:04
2024-03-09T13:35:19
2024-03-09T13:35:19
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello! First and foremost, THANKS A LOT for Ollama! Your software is most useful! I am trying to import a finetune of Code Llama 7B into Ollama. I get this error: ``` $ollama create musicllm -f Modelfile transferring model data creating model layer Error: invalid file magic ``` Here is the model: https...
{ "login": "AI-Guru", "id": 32195399, "node_id": "MDQ6VXNlcjMyMTk1Mzk5", "avatar_url": "https://avatars.githubusercontent.com/u/32195399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AI-Guru", "html_url": "https://github.com/AI-Guru", "followers_url": "https://api.github.com/users/AI-Gur...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3010/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3010/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7847
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7847/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7847/comments
https://api.github.com/repos/ollama/ollama/issues/7847/events
https://github.com/ollama/ollama/issues/7847
2,695,991,729
I_kwDOJ0Z1Ps6gsZGx
7,847
Support for Nvidia Hymba
{ "login": "WikiLucas00", "id": 63519673, "node_id": "MDQ6VXNlcjYzNTE5Njcz", "avatar_url": "https://avatars.githubusercontent.com/u/63519673?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WikiLucas00", "html_url": "https://github.com/WikiLucas00", "followers_url": "https://api.github.com/...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
0
2024-11-26T20:32:19
2024-11-26T20:32:19
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It would be great to support Hymba in Ollama! https://developer.nvidia.com/blog/hymba-hybrid-head-architecture-boosts-small-language-model-performance/
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7847/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7847/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5113
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5113/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5113/comments
https://api.github.com/repos/ollama/ollama/issues/5113/events
https://github.com/ollama/ollama/issues/5113
2,359,631,059
I_kwDOJ0Z1Ps6MpRzT
5,113
DeepSeek-Coder-V2-Lite-Instruct out of memory
{ "login": "tincore", "id": 20477204, "node_id": "MDQ6VXNlcjIwNDc3MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/20477204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tincore", "html_url": "https://github.com/tincore", "followers_url": "https://api.github.com/users/tincor...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6849881759, "node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw...
closed
false
null
[]
null
0
2024-06-18T11:30:46
2024-06-18T23:30:59
2024-06-18T23:30:59
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi, Thanks for the great project. I get a crash (OOM) when trying to load new deepseek-coder-v2. Other models work fine. I've just upgraded to latest pre-release just in case but same behavior. ``` jun 18 13:23:17 ollama[26949]: INFO [main] HTTP server listening | hostname="127.0.0.1...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5113/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5113/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1891
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1891/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1891/comments
https://api.github.com/repos/ollama/ollama/issues/1891/events
https://github.com/ollama/ollama/issues/1891
2,074,049,753
I_kwDOJ0Z1Ps57n3zZ
1,891
Add ability to hide/disable/enable models
{ "login": "oliverbob", "id": 23272429, "node_id": "MDQ6VXNlcjIzMjcyNDI5", "avatar_url": "https://avatars.githubusercontent.com/u/23272429?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oliverbob", "html_url": "https://github.com/oliverbob", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
1
2024-01-10T10:22:17
2024-03-11T20:42:58
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
If we can have this feature, I'm sure it will help us out of the clutter. Or perhaps, is it possible to provide a way to Categorize models? Practical Application: Downloading large models from ollama site (consumes bandwidth) you don't really want to delete a model but just hide it from your organization or users...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1891/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1891/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1440
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1440/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1440/comments
https://api.github.com/repos/ollama/ollama/issues/1440/events
https://github.com/ollama/ollama/pull/1440
2,033,298,061
PR_kwDOJ0Z1Ps5hjuew
1,440
🛠️ Add service activation prompt
{ "login": "Samk13", "id": 36583694, "node_id": "MDQ6VXNlcjM2NTgzNjk0", "avatar_url": "https://avatars.githubusercontent.com/u/36583694?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Samk13", "html_url": "https://github.com/Samk13", "followers_url": "https://api.github.com/users/Samk13/fo...
[]
closed
false
null
[]
null
2
2023-12-08T20:51:25
2024-06-10T08:45:02
2024-06-09T18:07:32
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1440", "html_url": "https://github.com/ollama/ollama/pull/1440", "diff_url": "https://github.com/ollama/ollama/pull/1440.diff", "patch_url": "https://github.com/ollama/ollama/pull/1440.patch", "merged_at": null }
Closes #1352 ### Key Changes: - Added `ask_to_activate_service` function to prompt users for service activation post-installation. - Integrated the prompt in the script's flow, allowing conditional execution of systemd service configuration. ### Impact: - Improves user experience by providing a choice to ac...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1440/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1440/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2351
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2351/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2351/comments
https://api.github.com/repos/ollama/ollama/issues/2351/events
https://github.com/ollama/ollama/issues/2351
2,117,240,014
I_kwDOJ0Z1Ps5-MoTO
2,351
JSON mode outputs a stream of newline characters
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
[ { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/...
null
2
2024-02-04T18:08:43
2024-03-12T01:31:29
2024-03-12T01:31:28
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2351/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2351/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/136
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/136/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/136/comments
https://api.github.com/repos/ollama/ollama/issues/136/events
https://github.com/ollama/ollama/pull/136
1,814,139,199
PR_kwDOJ0Z1Ps5WAoyG
136
Delete models.json
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
0
2023-07-20T14:33:15
2023-07-24T19:30:55
2023-07-20T14:40:46
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/136", "html_url": "https://github.com/ollama/ollama/pull/136", "diff_url": "https://github.com/ollama/ollama/pull/136.diff", "patch_url": "https://github.com/ollama/ollama/pull/136.patch", "merged_at": "2023-07-20T14:40:46" }
This is no longer used.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/136/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/136/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/110
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/110/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/110/comments
https://api.github.com/repos/ollama/ollama/issues/110/events
https://github.com/ollama/ollama/pull/110
1,810,993,466
PR_kwDOJ0Z1Ps5V12ts
110
fix pull 0 bytes on completed layer
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
1
2023-07-19T01:53:18
2023-07-19T02:39:02
2023-07-19T02:38:59
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/110", "html_url": "https://github.com/ollama/ollama/pull/110", "diff_url": "https://github.com/ollama/ollama/pull/110.diff", "patch_url": "https://github.com/ollama/ollama/pull/110.patch", "merged_at": "2023-07-19T02:38:59" }
This PR fixes the bug where when the progress bar displays 0B for a layer when the layer already exists: ``` $ ollama pull llama2 pulling manifest pulling 8daa9615cce30c25... 0% | ...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/110/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/110/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2444
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2444/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2444/comments
https://api.github.com/repos/ollama/ollama/issues/2444/events
https://github.com/ollama/ollama/issues/2444
2,128,816,951
I_kwDOJ0Z1Ps5-4ys3
2,444
Ollama docker container crash full WSL2 Ubuntu
{ "login": "wizd", "id": 2835415, "node_id": "MDQ6VXNlcjI4MzU0MTU=", "avatar_url": "https://avatars.githubusercontent.com/u/2835415?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wizd", "html_url": "https://github.com/wizd", "followers_url": "https://api.github.com/users/wizd/followers", ...
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2024-02-11T03:41:22
2024-03-27T20:58:36
2024-03-27T20:58:35
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
docker container setup as bellow. ``` version: "3.7" services: ollama: container_name: ollama image: ollama/ollama:latest ports: - "5310:11434" volumes: - ./ollama:/root/.ollama restart: unless-stopped environment: - CUDA_VISIBLE_DEVICES=0,1 - OLLAMA_O...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2444/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2444/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4293
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4293/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4293/comments
https://api.github.com/repos/ollama/ollama/issues/4293/events
https://github.com/ollama/ollama/issues/4293
2,288,113,574
I_kwDOJ0Z1Ps6IYdem
4,293
longtext llama3-gradient bug
{ "login": "bambooqj", "id": 20792621, "node_id": "MDQ6VXNlcjIwNzkyNjIx", "avatar_url": "https://avatars.githubusercontent.com/u/20792621?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bambooqj", "html_url": "https://github.com/bambooqj", "followers_url": "https://api.github.com/users/bam...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2024-05-09T17:12:59
2024-05-09T17:16:02
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? If I use 'ollama' for long text processing, then the 'system' statements will no longer be effective. Instead, it will produce random outputs. The model is 'llama3-gradient'. ``` systemmsg=""" Please analyze the type of website based on the 'body' content I provide, and return to me in JSON...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4293/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/402
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/402/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/402/comments
https://api.github.com/repos/ollama/ollama/issues/402/events
https://github.com/ollama/ollama/issues/402
1,862,851,976
I_kwDOJ0Z1Ps5vCN2I
402
Uncensored models can't be customised
{ "login": "velkir", "id": 52069224, "node_id": "MDQ6VXNlcjUyMDY5MjI0", "avatar_url": "https://avatars.githubusercontent.com/u/52069224?v=4", "gravatar_id": "", "url": "https://api.github.com/users/velkir", "html_url": "https://github.com/velkir", "followers_url": "https://api.github.com/users/velkir/fo...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2023-08-23T08:39:44
2023-09-01T16:04:48
2023-09-01T16:04:47
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi! Thanks for the cool tool:) Tried to customize: -llama2 - customisable -llama2-uncensored - no result -nous-hermes - customisable -wizard-vicuna-uncensored - no result -wizardlm-uncensored - no result The system msg used: FROM wizardlm-uncensored SYSTEM """ You are the Geralt of Rivia from the Witcher...
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/402/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/402/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2694
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2694/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2694/comments
https://api.github.com/repos/ollama/ollama/issues/2694/events
https://github.com/ollama/ollama/issues/2694
2,149,878,121
I_kwDOJ0Z1Ps6AJIlp
2,694
Add another binary that the linux install script could use on ROCm accelerated systems.
{ "login": "TimTheBig", "id": 132001783, "node_id": "U_kgDOB94v9w", "avatar_url": "https://avatars.githubusercontent.com/u/132001783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TimTheBig", "html_url": "https://github.com/TimTheBig", "followers_url": "https://api.github.com/users/TimThe...
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-02-22T20:24:42
2024-03-12T00:08:26
2024-03-12T00:08:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Another binary that the install script could use on `ROCm` accelerated systems would be useful. Releases are not compiled with `HIP`, therefore *non-NVidia* GPU acceleration support is not present. https://github.com/ollama/ollama/issues/2685#issuecomment-1959937668
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2694/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2694/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1954
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1954/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1954/comments
https://api.github.com/repos/ollama/ollama/issues/1954/events
https://github.com/ollama/ollama/issues/1954
2,079,092,214
I_kwDOJ0Z1Ps577G32
1,954
Support GPU A500
{ "login": "aemonge", "id": 1322348, "node_id": "MDQ6VXNlcjEzMjIzNDg=", "avatar_url": "https://avatars.githubusercontent.com/u/1322348?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aemonge", "html_url": "https://github.com/aemonge", "followers_url": "https://api.github.com/users/aemonge/...
[]
closed
false
null
[]
null
4
2024-01-12T15:22:53
2024-01-15T08:05:14
2024-01-15T08:05:14
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Can't get model to run on GPU: ``` Fri Jan 12 16:22:20 2024 +---------------------------------------------------------------------------------------+ | NVIDIA-SMI 545.29.06 Driver Version: 545.29.06 CUDA Version: 12.3 | |-----------------------------------------+----------------------+...
{ "login": "aemonge", "id": 1322348, "node_id": "MDQ6VXNlcjEzMjIzNDg=", "avatar_url": "https://avatars.githubusercontent.com/u/1322348?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aemonge", "html_url": "https://github.com/aemonge", "followers_url": "https://api.github.com/users/aemonge/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1954/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1954/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6083
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6083/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6083/comments
https://api.github.com/repos/ollama/ollama/issues/6083/events
https://github.com/ollama/ollama/pull/6083
2,438,936,086
PR_kwDOJ0Z1Ps5273xv
6,083
Update README to include Firebase Genkit
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
[]
closed
false
null
[]
null
0
2024-07-31T01:38:31
2024-07-31T01:40:11
2024-07-31T01:40:09
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6083", "html_url": "https://github.com/ollama/ollama/pull/6083", "diff_url": "https://github.com/ollama/ollama/pull/6083.diff", "patch_url": "https://github.com/ollama/ollama/pull/6083.patch", "merged_at": "2024-07-31T01:40:09" }
Firebase Genkit
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6083/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6083/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5040
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5040/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5040/comments
https://api.github.com/repos/ollama/ollama/issues/5040/events
https://github.com/ollama/ollama/pull/5040
2,352,462,965
PR_kwDOJ0Z1Ps5ybx0V
5,040
chore: add openapi 3.1 spec for public api
{ "login": "JerrettDavis", "id": 2610199, "node_id": "MDQ6VXNlcjI2MTAxOTk=", "avatar_url": "https://avatars.githubusercontent.com/u/2610199?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JerrettDavis", "html_url": "https://github.com/JerrettDavis", "followers_url": "https://api.github.com...
[]
closed
false
null
[]
null
3
2024-06-14T04:26:50
2025-01-28T07:13:47
2024-11-22T01:36:00
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5040", "html_url": "https://github.com/ollama/ollama/pull/5040", "diff_url": "https://github.com/ollama/ollama/pull/5040.diff", "patch_url": "https://github.com/ollama/ollama/pull/5040.patch", "merged_at": null }
Addresses issue #3383. Targets OpenAPI 3.1.0 as that's the most recent, and it appears to be the only version that supports DELETE with a request body. Also added [spectral](https://github.com/stoplightio/spectral-action) to the GitHub test pipeline to lint the spec and ensure it's valid. Swagger utilizing this sp...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5040/reactions", "total_count": 9, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5040/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8263
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8263/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8263/comments
https://api.github.com/repos/ollama/ollama/issues/8263/events
https://github.com/ollama/ollama/issues/8263
2,761,911,580
I_kwDOJ0Z1Ps6kn20c
8,263
Ollama with AMD GPU Issue
{ "login": "kannszzz", "id": 23491305, "node_id": "MDQ6VXNlcjIzNDkxMzA1", "avatar_url": "https://avatars.githubusercontent.com/u/23491305?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kannszzz", "html_url": "https://github.com/kannszzz", "followers_url": "https://api.github.com/users/kan...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2024-12-28T20:22:58
2024-12-29T03:21:37
2024-12-29T03:21:34
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
[Ollama.log](https://github.com/user-attachments/files/18268220/Ollama.log) ### What is the issue? Environment: Debian 12 virtualized on proxmox with GPU passthrough GPU: 6650 XT (Unsupported) Using Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0" Systemd log: `Dec 28 15:03:19 AI-ML ollama[2378]: time=2024-12-...
{ "login": "kannszzz", "id": 23491305, "node_id": "MDQ6VXNlcjIzNDkxMzA1", "avatar_url": "https://avatars.githubusercontent.com/u/23491305?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kannszzz", "html_url": "https://github.com/kannszzz", "followers_url": "https://api.github.com/users/kan...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8263/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4592
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4592/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4592/comments
https://api.github.com/repos/ollama/ollama/issues/4592/events
https://github.com/ollama/ollama/issues/4592
2,313,244,559
I_kwDOJ0Z1Ps6J4U-P
4,592
Mistral-7B instruct v3 FP16 Please
{ "login": "Donno191", "id": 10705947, "node_id": "MDQ6VXNlcjEwNzA1OTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/10705947?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Donno191", "html_url": "https://github.com/Donno191", "followers_url": "https://api.github.com/users/Don...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
1
2024-05-23T15:41:26
2024-05-23T15:43:17
2024-05-23T15:43:17
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Mistral-7B instruct v3 FP16 Please - https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3/tree/main
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4592/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4592/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4425
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4425/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4425/comments
https://api.github.com/repos/ollama/ollama/issues/4425/events
https://github.com/ollama/ollama/issues/4425
2,294,928,917
I_kwDOJ0Z1Ps6IydYV
4,425
joanfm / jina-embeddings-v2-base-en and -de fail with error code 500
{ "login": "qsdhj", "id": 166700412, "node_id": "U_kgDOCe-lfA", "avatar_url": "https://avatars.githubusercontent.com/u/166700412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qsdhj", "html_url": "https://github.com/qsdhj", "followers_url": "https://api.github.com/users/qsdhj/followers", ...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
6
2024-05-14T09:30:50
2024-08-01T22:39:17
2024-08-01T22:39:17
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I tried to integrate the German embedding model **joanfm/jina-embeddings-v2-base-de** into my LlamaIndex RAG application. During the creation of the embeddings the process ollama fails with **error 500: llama runner process has terminated: exit status 0xc0000409**. When calling: ```pytho...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4425/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4425/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/424
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/424/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/424/comments
https://api.github.com/repos/ollama/ollama/issues/424/events
https://github.com/ollama/ollama/issues/424
1,868,175,503
I_kwDOJ0Z1Ps5vWhiP
424
Error: Head "http://localhost:11434/": dial tcp: lookup localhost: no such host
{ "login": "DreamDevourer", "id": 24636471, "node_id": "MDQ6VXNlcjI0NjM2NDcx", "avatar_url": "https://avatars.githubusercontent.com/u/24636471?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DreamDevourer", "html_url": "https://github.com/DreamDevourer", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
1
2023-08-26T17:04:59
2023-08-26T19:00:34
2023-08-26T18:59:37
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
This is very odd, but after updating to the latest v0.0.16, this error started showing when I use ollama. For example, if I try to run a simple "ollama list" this shows up: Error: Head "http://localhost:11434/": dial tcp: lookup localhost: no such host I've cleaned any DNS traces, hosts file is untouched and there...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/424/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/424/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/958
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/958/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/958/comments
https://api.github.com/repos/ollama/ollama/issues/958/events
https://github.com/ollama/ollama/pull/958
1,971,409,643
PR_kwDOJ0Z1Ps5eSIfZ
958
append LD_LIBRARY_PATH
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-10-31T22:55:25
2023-11-01T15:30:39
2023-11-01T15:30:38
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/958", "html_url": "https://github.com/ollama/ollama/pull/958", "diff_url": "https://github.com/ollama/ollama/pull/958.diff", "patch_url": "https://github.com/ollama/ollama/pull/958.patch", "merged_at": "2023-11-01T15:30:38" }
Only append to LD_LIBRARY_PATH if it's already set. Related #758
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/958/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/958/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3692
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3692/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3692/comments
https://api.github.com/repos/ollama/ollama/issues/3692/events
https://github.com/ollama/ollama/issues/3692
2,247,394,328
I_kwDOJ0Z1Ps6F9IQY
3,692
How do I get sentence-transformers/all-mpnet-base-v2 in Ollama?
{ "login": "Kanishk-Kumar", "id": 45518770, "node_id": "MDQ6VXNlcjQ1NTE4Nzcw", "avatar_url": "https://avatars.githubusercontent.com/u/45518770?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Kanishk-Kumar", "html_url": "https://github.com/Kanishk-Kumar", "followers_url": "https://api.githu...
[]
open
false
null
[]
null
2
2024-04-17T05:19:36
2024-04-29T12:54:51
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What model would you like? I'd like to use sentence-transformers/all-mpnet-base-v2 for embeddings. Thanks.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3692/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/ollama/ollama/issues/3692/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6259
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6259/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6259/comments
https://api.github.com/repos/ollama/ollama/issues/6259/events
https://github.com/ollama/ollama/issues/6259
2,456,339,263
I_kwDOJ0Z1Ps6SaMM_
6,259
Inference fails with "llama_get_logits_ith: invalid logits id 7, reason: no logits"
{ "login": "yurivict", "id": 271906, "node_id": "MDQ6VXNlcjI3MTkwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yurivict", "html_url": "https://github.com/yurivict", "followers_url": "https://api.github.com/users/yurivic...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
9
2024-08-08T17:56:07
2024-08-09T19:14:03
2024-08-09T19:14:03
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Here is the error log: ``` [GIN] 2024/08/07 - 10:01:31 | 200 | 7.589808394s | 127.0.0.1 | POST "/api/chat" time=2024-08-07T10:01:31.521-07:00 level=DEBUG source=sched.go:462 msg="context for request finished" time=2024-08-07T10:01:31.521-07:00 level=DEBUG source=sched.go:334 msg="...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6259/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7816
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7816/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7816/comments
https://api.github.com/repos/ollama/ollama/issues/7816/events
https://github.com/ollama/ollama/issues/7816
2,687,774,719
I_kwDOJ0Z1Ps6gNC__
7,816
I import a IQ_4XS model but get an IQ1_M
{ "login": "CberYellowstone", "id": 37031767, "node_id": "MDQ6VXNlcjM3MDMxNzY3", "avatar_url": "https://avatars.githubusercontent.com/u/37031767?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CberYellowstone", "html_url": "https://github.com/CberYellowstone", "followers_url": "https://api...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6573197867, "node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw...
closed
false
null
[]
null
13
2024-11-24T14:06:50
2024-12-23T23:54:48
2024-11-24T18:33:34
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? As the title says, I imported a custom gguf model, which is of the IQ_4XS quantization type. But after importing it, ollama show displays it as IQ1_M. Is this behavior expected? Because I saw in previous issues that support for IQ_4XS has been added, so this confuses me. ![image](https://github...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7816/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7816/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7726
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7726/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7726/comments
https://api.github.com/repos/ollama/ollama/issues/7726/events
https://github.com/ollama/ollama/issues/7726
2,668,392,205
I_kwDOJ0Z1Ps6fDG8N
7,726
Proxy does not work for ollama, but does work for curl
{ "login": "lk-1984", "id": 105721994, "node_id": "U_kgDOBk0wig", "avatar_url": "https://avatars.githubusercontent.com/u/105721994?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lk-1984", "html_url": "https://github.com/lk-1984", "followers_url": "https://api.github.com/users/lk-1984/foll...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-11-18T12:29:24
2024-11-18T13:08:50
2024-11-18T13:08:49
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Ollama does not work with the HTTPS_PROXY ``` foo@FOOBAR ~ % ollama pull llama3.2 pulling manifest Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama3.2/manifests/latest": dial tcp 104.21.75.227:443: i/o time...
{ "login": "lk-1984", "id": 105721994, "node_id": "U_kgDOBk0wig", "avatar_url": "https://avatars.githubusercontent.com/u/105721994?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lk-1984", "html_url": "https://github.com/lk-1984", "followers_url": "https://api.github.com/users/lk-1984/foll...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7726/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7726/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8261
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8261/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8261/comments
https://api.github.com/repos/ollama/ollama/issues/8261/events
https://github.com/ollama/ollama/issues/8261
2,761,628,815
I_kwDOJ0Z1Ps6kmxyP
8,261
Error: listen tcp 127.0.0.1:11434: bind: Only one usage of each socket address
{ "login": "davincitr", "id": 125030930, "node_id": "U_kgDOB3PSEg", "avatar_url": "https://avatars.githubusercontent.com/u/125030930?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davincitr", "html_url": "https://github.com/davincitr", "followers_url": "https://api.github.com/users/davinc...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q...
closed
false
null
[]
null
4
2024-12-28T08:26:56
2025-01-15T11:47:10
2025-01-08T18:02:56
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hello, I tried everything on the internet; I even formatted my PC. Error: listen tcp 127.0.0.1:11434: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted. ![image](https://github.com/user-attachments/assets/0ab05a6b-ffaa-...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8261/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8476
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8476/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8476/comments
https://api.github.com/repos/ollama/ollama/issues/8476/events
https://github.com/ollama/ollama/issues/8476
2,796,588,748
I_kwDOJ0Z1Ps6msI7M
8,476
Receiving a new error when trying to create modelfile with same code
{ "login": "indigotechtutorials", "id": 63070125, "node_id": "MDQ6VXNlcjYzMDcwMTI1", "avatar_url": "https://avatars.githubusercontent.com/u/63070125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/indigotechtutorials", "html_url": "https://github.com/indigotechtutorials", "followers_url": ...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2025-01-18T02:21:02
2025-01-19T17:04:40
2025-01-19T17:04:40
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I have an existing app which was working on an older version of ollama to create the modelfile. I was able to do this using a string with the "FROM model \n SYSTEM instructions" syntax. I am using the ruby-ollama API wrapper gem. The error I am seeing is neither 'from' or 'files' was specif...
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8476/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8476/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4523
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4523/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4523/comments
https://api.github.com/repos/ollama/ollama/issues/4523/events
https://github.com/ollama/ollama/issues/4523
2,304,730,924
I_kwDOJ0Z1Ps6JX2cs
4,523
GGUF imported model crashes only in v0.1.38
{ "login": "mindspawn", "id": 5296802, "node_id": "MDQ6VXNlcjUyOTY4MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/5296802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mindspawn", "html_url": "https://github.com/mindspawn", "followers_url": "https://api.github.com/users/mi...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6947643302, "node_id": "LA_kwDOJ0Z1Ps8AAAABnhyfpg...
closed
false
null
[]
null
5
2024-05-19T18:36:50
2024-06-30T23:19:01
2024-06-30T23:19:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? heegyu/EEVE-Korean-Instruct-10.8B-v1.0-GGUF worked in all prior versions of ollama. Since v0.1.38 it core dumps. I have temporarily reverted to v0.1.37 to resolve the issue. Any help is appreciated. ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.1.38
{ "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/jos...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4523/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4523/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4483
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4483/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4483/comments
https://api.github.com/repos/ollama/ollama/issues/4483/events
https://github.com/ollama/ollama/pull/4483
2,301,541,051
PR_kwDOJ0Z1Ps5vumh2
4,483
Don't return error on signal exit
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
2
2024-05-16T23:26:28
2024-05-17T19:02:40
2024-05-17T18:41:57
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4483", "html_url": "https://github.com/ollama/ollama/pull/4483", "diff_url": "https://github.com/ollama/ollama/pull/4483.diff", "patch_url": "https://github.com/ollama/ollama/pull/4483.patch", "merged_at": "2024-05-17T18:41:57" }
null
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4483/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3904
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3904/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3904/comments
https://api.github.com/repos/ollama/ollama/issues/3904/events
https://github.com/ollama/ollama/issues/3904
2,262,742,129
I_kwDOJ0Z1Ps6G3rRx
3,904
Error: llama runner process no longer running: -1
{ "login": "parthV2", "id": 163822058, "node_id": "U_kgDOCcO56g", "avatar_url": "https://avatars.githubusercontent.com/u/163822058?v=4", "gravatar_id": "", "url": "https://api.github.com/users/parthV2", "html_url": "https://github.com/parthV2", "followers_url": "https://api.github.com/users/parthV2/foll...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
9
2024-04-25T06:01:58
2024-06-21T18:27:34
2024-05-03T05:41:06
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Was trying to run a finetuned version of llama2 having a gguf of 13.5gb. ![Screenshot from 2024-04-25 11-30-55](https://github.com/ollama/ollama/assets/163822058/99f79650-123b-4c84-8de2-ad697d97002a) ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version v0.1...
{ "login": "parthV2", "id": 163822058, "node_id": "U_kgDOCcO56g", "avatar_url": "https://avatars.githubusercontent.com/u/163822058?v=4", "gravatar_id": "", "url": "https://api.github.com/users/parthV2", "html_url": "https://github.com/parthV2", "followers_url": "https://api.github.com/users/parthV2/foll...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3904/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3942
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3942/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3942/comments
https://api.github.com/repos/ollama/ollama/issues/3942/events
https://github.com/ollama/ollama/pull/3942
2,265,618,138
PR_kwDOJ0Z1Ps5t1Xdq
3,942
Fix curl command in documentation
{ "login": "Isaakkamau", "id": 95031660, "node_id": "U_kgDOBaoRbA", "avatar_url": "https://avatars.githubusercontent.com/u/95031660?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Isaakkamau", "html_url": "https://github.com/Isaakkamau", "followers_url": "https://api.github.com/users/Isaak...
[]
closed
false
null
[]
null
2
2024-04-26T11:50:11
2024-04-29T11:36:41
2024-04-29T11:36:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3942", "html_url": "https://github.com/ollama/ollama/pull/3942", "diff_url": "https://github.com/ollama/ollama/pull/3942.diff", "patch_url": "https://github.com/ollama/ollama/pull/3942.patch", "merged_at": null }
Explicitly set the HTTP method to POST using the -X flag to avoid errors when creating a new model from `modelfile`
{ "login": "Isaakkamau", "id": 95031660, "node_id": "U_kgDOBaoRbA", "avatar_url": "https://avatars.githubusercontent.com/u/95031660?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Isaakkamau", "html_url": "https://github.com/Isaakkamau", "followers_url": "https://api.github.com/users/Isaak...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3942/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7140
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7140/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7140/comments
https://api.github.com/repos/ollama/ollama/issues/7140/events
https://github.com/ollama/ollama/pull/7140
2,573,673,780
PR_kwDOJ0Z1Ps59-U2Q
7,140
llama: cgo ggml
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
1
2024-10-08T16:23:06
2024-10-29T16:52:05
2024-10-29T16:52:05
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7140", "html_url": "https://github.com/ollama/ollama/pull/7140", "diff_url": "https://github.com/ollama/ollama/pull/7140.diff", "patch_url": "https://github.com/ollama/ollama/pull/7140.patch", "merged_at": null }
This builds a ~minimal cgo wrapper on ggml APIs. Replaces #7103 on main
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7140/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7140/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5079
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5079/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5079/comments
https://api.github.com/repos/ollama/ollama/issues/5079/events
https://github.com/ollama/ollama/pull/5079
2,355,789,462
PR_kwDOJ0Z1Ps5ynH7F
5,079
Add Chinese translation of README
{ "login": "sumingcheng", "id": 21992204, "node_id": "MDQ6VXNlcjIxOTkyMjA0", "avatar_url": "https://avatars.githubusercontent.com/u/21992204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sumingcheng", "html_url": "https://github.com/sumingcheng", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
1
2024-06-16T14:03:26
2024-11-21T08:33:00
2024-11-21T08:33:00
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5079", "html_url": "https://github.com/ollama/ollama/pull/5079", "diff_url": "https://github.com/ollama/ollama/pull/5079.diff", "patch_url": "https://github.com/ollama/ollama/pull/5079.patch", "merged_at": null }
This pull request adds a Chinese translation of the README file to help native Chinese speakers better understand the project.
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5079/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5079/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2697
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2697/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2697/comments
https://api.github.com/repos/ollama/ollama/issues/2697/events
https://github.com/ollama/ollama/issues/2697
2,150,060,376
I_kwDOJ0Z1Ps6AJ1FY
2,697
Unable to build Ollama on Cluster
{ "login": "Anirudh257", "id": 16001446, "node_id": "MDQ6VXNlcjE2MDAxNDQ2", "avatar_url": "https://avatars.githubusercontent.com/u/16001446?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Anirudh257", "html_url": "https://github.com/Anirudh257", "followers_url": "https://api.github.com/use...
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
8
2024-02-22T22:37:54
2024-03-12T14:50:16
2024-03-12T14:50:16
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, Thanks for this great work. I am trying to build Ollama on my cluster and I don't have administrative access. My cluster has the following configuration: ``` LSB Version: :core-4.1-amd64:core-4.1-noarch Distributor ID: CentOS Description: CentOS Linux release 7.9.2009 (Core) Release: 7.9.2009 Codename...
{ "login": "Anirudh257", "id": 16001446, "node_id": "MDQ6VXNlcjE2MDAxNDQ2", "avatar_url": "https://avatars.githubusercontent.com/u/16001446?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Anirudh257", "html_url": "https://github.com/Anirudh257", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2697/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2697/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4810
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4810/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4810/comments
https://api.github.com/repos/ollama/ollama/issues/4810/events
https://github.com/ollama/ollama/issues/4810
2,333,112,595
I_kwDOJ0Z1Ps6LEHkT
4,810
"Server disconnected without sending a response" after ~60seconds.
{ "login": "michaelgloeckner", "id": 56082327, "node_id": "MDQ6VXNlcjU2MDgyMzI3", "avatar_url": "https://avatars.githubusercontent.com/u/56082327?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michaelgloeckner", "html_url": "https://github.com/michaelgloeckner", "followers_url": "https://...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
10
2024-06-04T10:06:11
2024-06-19T07:40:51
2024-06-06T13:06:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I run mixtral model and using api/generate. If I run a bigger prompt it returns "Server disconnected without sending a response." I checked ollama logs and see: [GIN] 2024/06/04 - 09:36:43 | 500 | 59.693208463s | 10.0.101.220 | POST "/api/generate" Is there a way to increase thi...
{ "login": "michaelgloeckner", "id": 56082327, "node_id": "MDQ6VXNlcjU2MDgyMzI3", "avatar_url": "https://avatars.githubusercontent.com/u/56082327?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michaelgloeckner", "html_url": "https://github.com/michaelgloeckner", "followers_url": "https://...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4810/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4810/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/496
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/496/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/496/comments
https://api.github.com/repos/ollama/ollama/issues/496/events
https://github.com/ollama/ollama/issues/496
1,887,542,945
I_kwDOJ0Z1Ps5wgZ6h
496
CodeLlama tokenizer `<FILL_ME>` token support
{ "login": "regularfry", "id": 39277, "node_id": "MDQ6VXNlcjM5Mjc3", "avatar_url": "https://avatars.githubusercontent.com/u/39277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/regularfry", "html_url": "https://github.com/regularfry", "followers_url": "https://api.github.com/users/regular...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
3
2023-09-08T12:00:25
2024-07-16T21:37:33
2024-07-16T21:37:33
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It might be that I just can't find the right setting to make this work, but CodeLlama's upstream model docs refer to a [fill_token](https://huggingface.co/docs/transformers/main/model_doc/code_llama#transformers.CodeLlamaTokenizer.fill_token) for splitting the input and constructing the prompt for code infill. I can't...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/496/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/496/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8473
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8473/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8473/comments
https://api.github.com/repos/ollama/ollama/issues/8473/events
https://github.com/ollama/ollama/issues/8473
2,796,566,382
I_kwDOJ0Z1Ps6msDdu
8,473
HSA_OVERRIDE_GFX_VERSION_0 while running on only one GPU
{ "login": "occasional-contributor", "id": 140330290, "node_id": "U_kgDOCF1FMg", "avatar_url": "https://avatars.githubusercontent.com/u/140330290?v=4", "gravatar_id": "", "url": "https://api.github.com/users/occasional-contributor", "html_url": "https://github.com/occasional-contributor", "followers_url...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2025-01-18T01:28:21
2025-01-18T01:28:21
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am running `ollama:rocm` in a docker container on Ubuntu 24.04. My GPU is an RX 6600 (`gfx1032`). Everything works fine when I run `ollama` using ```bash docker run -d \ --device /dev/kfd \ --device /dev/dri \ -v ollama:/root/.ollama \ -p 11434:11434 \ --restart unless-sto...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8473/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8473/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5934
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5934/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5934/comments
https://api.github.com/repos/ollama/ollama/issues/5934/events
https://github.com/ollama/ollama/pull/5934
2,428,684,180
PR_kwDOJ0Z1Ps52ZmBy
5,934
Report better error on cuda unsupported os/arch
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-07-25T00:11:57
2024-07-29T21:24:23
2024-07-29T21:24:21
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5934", "html_url": "https://github.com/ollama/ollama/pull/5934", "diff_url": "https://github.com/ollama/ollama/pull/5934.diff", "patch_url": "https://github.com/ollama/ollama/pull/5934.patch", "merged_at": "2024-07-29T21:24:20" }
If we detect an NVIDIA GPU, but nvidia doesn't support the os/arch, this will report a better error for the user and point them to docs to self-install the drivers if possible. Fixes #3261 #2302 Example output on Ubuntu 22.04 on AWS g5g.xlarge (arm64) ``` % sh ./install.sh >>> Downloading ollama... #########...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5934/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5934/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1438
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1438/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1438/comments
https://api.github.com/repos/ollama/ollama/issues/1438/events
https://github.com/ollama/ollama/issues/1438
2,033,080,984
I_kwDOJ0Z1Ps55LlqY
1,438
Openchat in Ollama
{ "login": "itscvenk", "id": 117738376, "node_id": "U_kgDOBwSLiA", "avatar_url": "https://avatars.githubusercontent.com/u/117738376?v=4", "gravatar_id": "", "url": "https://api.github.com/users/itscvenk", "html_url": "https://github.com/itscvenk", "followers_url": "https://api.github.com/users/itscvenk/...
[]
closed
false
null
[]
null
2
2023-12-08T17:50:35
2023-12-09T07:41:02
2023-12-08T19:30:39
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello Nvidia, CUDA, are all installed and working fine. Phew. How do I verify that Ollama is actually using the GPU while responding. I am using the openchat model Thanks a million for Ollama and especially for including the openchat model. Stay blessed & happy folks! Regards
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1438/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1438/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4876
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4876/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4876/comments
https://api.github.com/repos/ollama/ollama/issues/4876/events
https://github.com/ollama/ollama/pull/4876
2,338,854,827
PR_kwDOJ0Z1Ps5xtsS6
4,876
Intel GPU build support
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
10
2024-06-06T17:56:05
2025-01-24T23:15:19
2024-11-21T18:23:32
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4876", "html_url": "https://github.com/ollama/ollama/pull/4876", "diff_url": "https://github.com/ollama/ollama/pull/4876.diff", "patch_url": "https://github.com/ollama/ollama/pull/4876.patch", "merged_at": null }
This enables linux, but still needs some more work to get it wired up to the Windows official builds. Fixes #1590
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4876/reactions", "total_count": 24, "+1": 3, "-1": 0, "laugh": 0, "hooray": 12, "confused": 0, "heart": 9, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4876/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3949
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3949/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3949/comments
https://api.github.com/repos/ollama/ollama/issues/3949/events
https://github.com/ollama/ollama/issues/3949
2,266,096,639
I_kwDOJ0Z1Ps6HEeP_
3,949
Inconsistent 500 errors when generating
{ "login": "ronangaillard", "id": 5607736, "node_id": "MDQ6VXNlcjU2MDc3MzY=", "avatar_url": "https://avatars.githubusercontent.com/u/5607736?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ronangaillard", "html_url": "https://github.com/ronangaillard", "followers_url": "https://api.github....
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2024-04-26T16:09:48
2024-05-09T21:21:23
2024-05-09T21:21:23
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? EDIT : I tried with `0.1.29` and I don't have the issue Hello, I'm using docker image, tag latest without any GPU. I downloaded mistral model, and I'm trying to generate answers using the promt from the doc (I have the same issue with llama model) : ``` curl http://localhost:11434...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3949/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3949/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2666
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2666/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2666/comments
https://api.github.com/repos/ollama/ollama/issues/2666/events
https://github.com/ollama/ollama/pull/2666
2,148,340,077
PR_kwDOJ0Z1Ps5nm7PN
2,666
Update client.py
{ "login": "Yuan-ManX", "id": 68322456, "node_id": "MDQ6VXNlcjY4MzIyNDU2", "avatar_url": "https://avatars.githubusercontent.com/u/68322456?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Yuan-ManX", "html_url": "https://github.com/Yuan-ManX", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
1
2024-02-22T06:46:24
2024-05-07T23:37:47
2024-05-07T23:37:47
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2666", "html_url": "https://github.com/ollama/ollama/pull/2666", "diff_url": "https://github.com/ollama/ollama/pull/2666.diff", "patch_url": "https://github.com/ollama/ollama/pull/2666.patch", "merged_at": null }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2666/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2666/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2396
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2396/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2396/comments
https://api.github.com/repos/ollama/ollama/issues/2396/events
https://github.com/ollama/ollama/issues/2396
2,123,732,549
I_kwDOJ0Z1Ps5-lZZF
2,396
llama.cpp now supports Vulkan
{ "login": "ddpasa", "id": 112642920, "node_id": "U_kgDOBrbLaA", "avatar_url": "https://avatars.githubusercontent.com/u/112642920?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ddpasa", "html_url": "https://github.com/ddpasa", "followers_url": "https://api.github.com/users/ddpasa/follower...
[ { "id": 6677745918, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g", "url": "https://api.github.com/repos/ollama/ollama/labels/gpu", "name": "gpu", "color": "76C49E", "default": false, "description": "" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-02-07T19:33:24
2024-03-21T14:00:45
2024-03-21T14:00:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
As of 10 days ago: https://github.com/ggerganov/llama.cpp/commit/2307523d322af762ae06648b29ec5a9eb1c73032 This is great news for people who non-CUDA cards. What's necessary to support this with Ollama? I'm happy to help if you show me the pointers.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2396/reactions", "total_count": 27, "+1": 10, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 17, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2396/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7204
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7204/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7204/comments
https://api.github.com/repos/ollama/ollama/issues/7204/events
https://github.com/ollama/ollama/pull/7204
2,587,233,516
PR_kwDOJ0Z1Ps5-mfq6
7,204
Fix openapi base writer header code.
{ "login": "zhanluxianshen", "id": 161462588, "node_id": "U_kgDOCZ-5PA", "avatar_url": "https://avatars.githubusercontent.com/u/161462588?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhanluxianshen", "html_url": "https://github.com/zhanluxianshen", "followers_url": "https://api.github.c...
[]
open
false
null
[]
null
0
2024-10-14T23:00:41
2024-10-17T15:32:18
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7204", "html_url": "https://github.com/ollama/ollama/pull/7204", "diff_url": "https://github.com/ollama/ollama/pull/7204.diff", "patch_url": "https://github.com/ollama/ollama/pull/7204.patch", "merged_at": null }
Fix openapi base writer header code.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7204/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7204/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/987
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/987/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/987/comments
https://api.github.com/repos/ollama/ollama/issues/987/events
https://github.com/ollama/ollama/issues/987
1,976,403,050
I_kwDOJ0Z1Ps51zYRq
987
segmentation fault with prompts longer than 5 / 6 tokens on intel mac
{ "login": "Serpico84", "id": 7769484, "node_id": "MDQ6VXNlcjc3Njk0ODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/7769484?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Serpico84", "html_url": "https://github.com/Serpico84", "followers_url": "https://api.github.com/users/Se...
[]
closed
false
null
[]
null
8
2023-11-03T15:07:05
2023-12-10T23:16:41
2023-11-17T03:18:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm running Ollama on a 2019 intel MacBook Pro with 32gb of RAM and a 4gb AMD GPU. macOS Monterey. For some reson, every prompt longer than a few words on both codellama:7b and llama2:7b end up with `Error: llama runner exited, you may not have enough available memory to run this model` Very short prompts work ok. ...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/987/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/987/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5761
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5761/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5761/comments
https://api.github.com/repos/ollama/ollama/issues/5761/events
https://github.com/ollama/ollama/issues/5761
2,415,145,821
I_kwDOJ0Z1Ps6P9DNd
5,761
Tokenizer issue with tool calling with InternLM2
{ "login": "endyjasmi", "id": 1048745, "node_id": "MDQ6VXNlcjEwNDg3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/1048745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/endyjasmi", "html_url": "https://github.com/endyjasmi", "followers_url": "https://api.github.com/users/en...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
5
2024-07-18T03:39:25
2024-07-22T03:01:22
2024-07-19T12:15:38
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am currently using `internlm2:7b` model from https://ollama.com/library/internlm2:7b . I am trying to use the tool calling capability of the model using https://github.com/InternLM/InternLM/blob/main/chat/chat_format.md#function-call as a prompt reference. Following is the prompt I use...
{ "login": "endyjasmi", "id": 1048745, "node_id": "MDQ6VXNlcjEwNDg3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/1048745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/endyjasmi", "html_url": "https://github.com/endyjasmi", "followers_url": "https://api.github.com/users/en...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5761/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5761/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6624
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6624/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6624/comments
https://api.github.com/repos/ollama/ollama/issues/6624/events
https://github.com/ollama/ollama/pull/6624
2,504,196,670
PR_kwDOJ0Z1Ps56VY1T
6,624
Update README.md with PyOllaMx
{ "login": "kspviswa", "id": 7476271, "node_id": "MDQ6VXNlcjc0NzYyNzE=", "avatar_url": "https://avatars.githubusercontent.com/u/7476271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kspviswa", "html_url": "https://github.com/kspviswa", "followers_url": "https://api.github.com/users/kspvi...
[]
closed
false
null
[]
null
1
2024-09-04T03:04:25
2024-09-04T03:10:53
2024-09-04T03:10:53
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6624", "html_url": "https://github.com/ollama/ollama/pull/6624", "diff_url": "https://github.com/ollama/ollama/pull/6624.diff", "patch_url": "https://github.com/ollama/ollama/pull/6624.patch", "merged_at": "2024-09-04T03:10:53" }
Based on [this comment](https://github.com/ollama/ollama/issues/5937#issuecomment-2327760726), creating this PR to add PyOllaMx to list of ollama based applications
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6624/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6624/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5683
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5683/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5683/comments
https://api.github.com/repos/ollama/ollama/issues/5683/events
https://github.com/ollama/ollama/pull/5683
2,407,188,878
PR_kwDOJ0Z1Ps51TmfX
5,683
fix: solve network disruption during downloads, add OLLAMA_DOWNLOAD_CONN setting
{ "login": "supercurio", "id": 406003, "node_id": "MDQ6VXNlcjQwNjAwMw==", "avatar_url": "https://avatars.githubusercontent.com/u/406003?v=4", "gravatar_id": "", "url": "https://api.github.com/users/supercurio", "html_url": "https://github.com/supercurio", "followers_url": "https://api.github.com/users/s...
[]
closed
false
null
[]
null
7
2024-07-13T22:54:53
2024-12-10T23:20:15
2024-11-21T10:22:14
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5683", "html_url": "https://github.com/ollama/ollama/pull/5683", "diff_url": "https://github.com/ollama/ollama/pull/5683.diff", "patch_url": "https://github.com/ollama/ollama/pull/5683.patch", "merged_at": null }
The process of managing bandwidth for model downloads has been an ongoing journey. - Users reported difficulties when downloading model since January in issue #2006 - The feature #2995 was reverted in March 2024 The situation left Ollama server with unsafe network concurrency defaults since, causing problems for ...
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5683/reactions", "total_count": 11, "+1": 11, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5683/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8614
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8614/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8614/comments
https://api.github.com/repos/ollama/ollama/issues/8614/events
https://github.com/ollama/ollama/issues/8614
2,813,943,892
I_kwDOJ0Z1Ps6nuWBU
8,614
Problems with deepseek-r1:671b, ollama keeps crashing on long answers
{ "login": "fabiounixpi", "id": 48057600, "node_id": "MDQ6VXNlcjQ4MDU3NjAw", "avatar_url": "https://avatars.githubusercontent.com/u/48057600?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fabiounixpi", "html_url": "https://github.com/fabiounixpi", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
11
2025-01-27T20:04:40
2025-01-30T13:07:47
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi all, I'm using an r960 with 2TB of ram, so ram is not a problem here. I'm experiencing constant crashes of ollama 0.5.7 and deepseek-r1:671b, even increasing the context window with the command /set parameter num_ctx 4096. I also tried a second system, an r670 csp with 1TB of ram, but the...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8614/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8614/timeline
null
null
false