Dataset schema — one row per column, giving the value type and the observed range (string/list lengths, numeric min–max) or the number of distinct classes:

| Column | Type | Observed range / classes |
|---|---|---|
| url | string | 51–54 chars |
| repository_url | string | 1 class |
| labels_url | string | 65–68 chars |
| comments_url | string | 60–63 chars |
| events_url | string | 58–61 chars |
| html_url | string | 39–44 chars |
| id | int64 | 1.78B–2.82B |
| node_id | string | 18–19 chars |
| number | int64 | 1–8.69k |
| title | string | 1–382 chars |
| user | dict | — |
| labels | list | 0–5 items |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | — |
| assignees | list | 0–2 items |
| milestone | null | — |
| comments | int64 | 0–323 |
| created_at | timestamp[s] | — |
| updated_at | timestamp[s] | — |
| closed_at | timestamp[s] | — |
| author_association | string | 4 classes |
| sub_issues_summary | dict | — |
| active_lock_reason | null | — |
| draft | bool | 2 classes |
| pull_request | dict | — |
| body | string | 2–118k chars |
| closed_by | dict | — |
| reactions | dict | — |
| timeline_url | string | 60–63 chars |
| performed_via_github_app | null | — |
| state_reason | string | 4 classes |
| is_pull_request | bool | 2 classes |
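This schema is what the Hugging Face `datasets` viewer reports for a dump of GitHub issues and pull requests from ollama/ollama. A minimal sketch of loading such a dump and inspecting one record with the `datasets` library; the dataset id `example/ollama-github-issues` is a placeholder, not the real repository path:

```python
# Minimal sketch: load a GitHub-issues dump with the Hugging Face `datasets`
# library. The dataset id below is a placeholder, not the real repository path.
from datasets import load_dataset

ds = load_dataset("example/ollama-github-issues", split="train")

row = ds[0]                                  # each row is a plain dict
print(row["number"], row["title"])           # e.g. 5714 "README.md: Package managers: add Gentoo"
print(row["state"], row["is_pull_request"])  # "closed", True
print(row["user"]["login"])                  # user, labels, reactions, ... are nested objects
```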
Record 1 — pull request #5714: README.md: Package managers: add Gentoo
- url: https://api.github.com/repos/ollama/ollama/issues/5714
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/5714/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/5714/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/5714/events
- html_url: https://github.com/ollama/ollama/pull/5714
- id: 2410097651
- node_id: PR_kwDOJ0Z1Ps51dX7E
- number: 5714
- title: README.md: Package managers: add Gentoo
- user: { "login": "vitaly-zdanevich", "id": 3514015, "node_id": "MDQ6VXNlcjM1MTQwMTU=", "avatar_url": "https://avatars.githubusercontent.com/u/3514015?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vitaly-zdanevich", "html_url": "https://github.com/vitaly-zdanevich", "followers_url": "https://ap...
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 2
- created_at: 2024-07-16T03:09:17
- updated_at: 2024-09-05T16:58:14
- closed_at: 2024-09-05T16:58:14
- author_association: CONTRIBUTOR
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: false
- pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/5714", "html_url": "https://github.com/ollama/ollama/pull/5714", "diff_url": "https://github.com/ollama/ollama/pull/5714.diff", "patch_url": "https://github.com/ollama/ollama/pull/5714.patch", "merged_at": "2024-09-05T16:58:14" }
- body: null
- closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5714/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/5714/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
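Read vertically, those 33 values assemble into one record. A sketch of record 1 as a Python dict, keeping only the scalar fields shown above (the nested user/closed_by/reactions objects are omitted because the source truncates them):

```python
# Record 1 of the dump, reassembled as a plain dict from the values above.
record_5714 = {
    "url": "https://api.github.com/repos/ollama/ollama/issues/5714",
    "html_url": "https://github.com/ollama/ollama/pull/5714",
    "id": 2410097651,
    "number": 5714,
    "title": "README.md: Package managers: add Gentoo",
    "state": "closed",
    "comments": 2,
    "created_at": "2024-07-16T03:09:17",
    "closed_at": "2024-09-05T16:58:14",
    "author_association": "CONTRIBUTOR",
    "is_pull_request": True,
}
```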
Record 2 — issue #5892: Ollama: 500 error on Larger Models
- url: https://api.github.com/repos/ollama/ollama/issues/5892
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/5892/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/5892/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/5892/events
- html_url: https://github.com/ollama/ollama/issues/5892
- id: 2426119832
- node_id: I_kwDOJ0Z1Ps6Qm6aY
- number: 5892
- title: Ollama: 500 error on Larger Models
- user: { "login": "nicholhai", "id": 96297412, "node_id": "U_kgDOBb1hxA", "avatar_url": "https://avatars.githubusercontent.com/u/96297412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nicholhai", "html_url": "https://github.com/nicholhai", "followers_url": "https://api.github.com/users/nicholha...
- labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 62
- created_at: 2024-07-23T21:00:26
- updated_at: 2024-08-19T14:37:13
- closed_at: 2024-07-24T15:35:58
- author_association: NONE
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: ### What is the issue? Whenever I try to run a model greater than the 7b or 8b, I get the following error. HOWEVER, any of the regular ones that are 7b and 8b run just fine. Ollama: 500, message='Internal Server Error', url=URL('http://localhost:11434/api/chat') - Running Ubuntu Server 24.04 - Running through ...
- closed_by: { "login": "nicholhai", "id": 96297412, "node_id": "U_kgDOBb1hxA", "avatar_url": "https://avatars.githubusercontent.com/u/96297412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nicholhai", "html_url": "https://github.com/nicholhai", "followers_url": "https://api.github.com/users/nicholha...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5892/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/5892/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
Record 3 — pull request #3788: types/model: export IsValidNamePart
- url: https://api.github.com/repos/ollama/ollama/issues/3788
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/3788/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/3788/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/3788/events
- html_url: https://github.com/ollama/ollama/pull/3788
- id: 2254790213
- node_id: PR_kwDOJ0Z1Ps5tQoHh
- number: 3788
- title: types/model: export IsValidNamePart
- user: { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 0
- created_at: 2024-04-21T01:11:55
- updated_at: 2024-04-21T01:26:35
- closed_at: 2024-04-21T01:26:34
- author_association: CONTRIBUTOR
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: false
- pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/3788", "html_url": "https://github.com/ollama/ollama/pull/3788", "diff_url": "https://github.com/ollama/ollama/pull/3788.diff", "patch_url": "https://github.com/ollama/ollama/pull/3788.patch", "merged_at": "2024-04-21T01:26:34" }
- body: null
- closed_by: { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/3788/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/3788/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
Record 4 — pull request #1159: Example: Function Calling in Typescript
- url: https://api.github.com/repos/ollama/ollama/issues/1159
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/1159/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/1159/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/1159/events
- html_url: https://github.com/ollama/ollama/pull/1159
- id: 1998029489
- node_id: PR_kwDOJ0Z1Ps5fsW43
- number: 1159
- title: Example: Function Calling in Typescript
- user: { "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 1
- created_at: 2023-11-17T00:32:33
- updated_at: 2023-11-21T18:06:56
- closed_at: 2023-11-21T18:06:55
- author_association: CONTRIBUTOR
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: false
- pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/1159", "html_url": "https://github.com/ollama/ollama/pull/1159", "diff_url": "https://github.com/ollama/ollama/pull/1159.diff", "patch_url": "https://github.com/ollama/ollama/pull/1159.patch", "merged_at": "2023-11-21T18:06:55" }
- body: Two examples here. One to list the characters in the first few pages of War and Peace. The other parses emails for events and addresses.
- closed_by: { "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1159/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/1159/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
Record 5 — issue #4664: OLLAMA support MiniCPM-Llama3-V 2.5
- url: https://api.github.com/repos/ollama/ollama/issues/4664
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/4664/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/4664/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/4664/events
- html_url: https://github.com/ollama/ollama/issues/4664
- id: 2319099097
- node_id: I_kwDOJ0Z1Ps6KOqTZ
- number: 4664
- title: OLLAMA support MiniCPM-Llama3-V 2.5
- user: { "login": "zhqfdn", "id": 25156863, "node_id": "MDQ6VXNlcjI1MTU2ODYz", "avatar_url": "https://avatars.githubusercontent.com/u/25156863?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhqfdn", "html_url": "https://github.com/zhqfdn", "followers_url": "https://api.github.com/users/zhqfdn/fo...
- labels: [ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 3
- created_at: 2024-05-27T12:56:32
- updated_at: 2024-06-09T17:11:30
- closed_at: 2024-06-09T17:11:30
- author_association: NONE
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: https://github.com/OpenBMB/ollama/tree/minicpm-v2.5/examples/minicpm-v2.5
- closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4664/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 4, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/4664/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
Record 6 — pull request #841: cleanup: command args
- url: https://api.github.com/repos/ollama/ollama/issues/841
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/841/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/841/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/841/events
- html_url: https://github.com/ollama/ollama/pull/841
- id: 1950446984
- node_id: PR_kwDOJ0Z1Ps5dLciz
- number: 841
- title: cleanup: command args
- user: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 0
- created_at: 2023-10-18T19:05:33
- updated_at: 2023-10-19T18:22:41
- closed_at: 2023-10-19T18:22:40
- author_association: CONTRIBUTOR
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: false
- pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/841", "html_url": "https://github.com/ollama/ollama/pull/841", "diff_url": "https://github.com/ollama/ollama/pull/841.diff", "patch_url": "https://github.com/ollama/ollama/pull/841.patch", "merged_at": "2023-10-19T18:22:40" }
- body: A number of subcommands incorrectly set `MinimumNArgs` instead of `ExactArgs` which leads to confusion. Related #803
- closed_by: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/841/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/841/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
Record 7 — pull request #3679: Update install.sh added /etc/default/ollama
- url: https://api.github.com/repos/ollama/ollama/issues/3679
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/3679/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/3679/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/3679/events
- html_url: https://github.com/ollama/ollama/pull/3679
- id: 2246710964
- node_id: PR_kwDOJ0Z1Ps5s1t5h
- number: 3679
- title: Update install.sh added /etc/default/ollama
- user: { "login": "digitalw00t", "id": 593045, "node_id": "MDQ6VXNlcjU5MzA0NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/593045?v=4", "gravatar_id": "", "url": "https://api.github.com/users/digitalw00t", "html_url": "https://github.com/digitalw00t", "followers_url": "https://api.github.com/user...
- labels: []
- state: closed
- locked: false
- assignee: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
- assignees: [ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/...
- milestone: null
- comments: 0
- created_at: 2024-04-16T19:05:15
- updated_at: 2024-05-16T01:06:44
- closed_at: 2024-05-16T01:06:44
- author_association: NONE
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: false
- pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/3679", "html_url": "https://github.com/ollama/ollama/pull/3679", "diff_url": "https://github.com/ollama/ollama/pull/3679.diff", "patch_url": "https://github.com/ollama/ollama/pull/3679.patch", "merged_at": null }
- body: Added persistent env file for the server, so one update to /etc/default/ollama will stay between updates.
- closed_by: { "login": "digitalw00t", "id": 593045, "node_id": "MDQ6VXNlcjU5MzA0NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/593045?v=4", "gravatar_id": "", "url": "https://api.github.com/users/digitalw00t", "html_url": "https://github.com/digitalw00t", "followers_url": "https://api.github.com/user...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/3679/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/3679/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
Record 8 — issue #5071: ollama not utilizing AMD GPU through METAL
- url: https://api.github.com/repos/ollama/ollama/issues/5071
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/5071/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/5071/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/5071/events
- html_url: https://github.com/ollama/ollama/issues/5071
- id: 2355158147
- node_id: I_kwDOJ0Z1Ps6MYNyD
- number: 5071
- title: ollama not utilizing AMD GPU through METAL
- user: { "login": "dbl001", "id": 3105499, "node_id": "MDQ6VXNlcjMxMDU0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/3105499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dbl001", "html_url": "https://github.com/dbl001", "followers_url": "https://api.github.com/users/dbl001/foll...
- labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 1
- created_at: 2024-06-15T19:20:08
- updated_at: 2024-06-18T19:40:41
- closed_at: 2024-06-18T19:40:40
- author_association: NONE
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: ### What is the issue? Here's my build command: ``` % OLLAMA_CUSTOM_CPU_DEFS="-DLLAMA_AVX=on -DLLAMA_AVX2=on -DLLAMA_F16C=on -DLLAMA_FMA=on -DLLAMA_METAL=on -DLLAMA_METAL_EMBED_LIBRARY=on -DGGML_USE_METAL=on -DLLAMA_METAL_COMPILE_SERIALIZED=1" go generate -v ./... ``` The go script subsequently turns -DLLAMA_M...
- closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5071/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/5071/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
Record 9 — issue #5499: Error Pull Model Manifest
- url: https://api.github.com/repos/ollama/ollama/issues/5499
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/5499/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/5499/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/5499/events
- html_url: https://github.com/ollama/ollama/issues/5499
- id: 2392964252
- node_id: I_kwDOJ0Z1Ps6Oobyc
- number: 5499
- title: Error Pull Model Manifest
- user: { "login": "Moonlight1220", "id": 172665223, "node_id": "U_kgDOCkqphw", "avatar_url": "https://avatars.githubusercontent.com/u/172665223?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Moonlight1220", "html_url": "https://github.com/Moonlight1220", "followers_url": "https://api.github.com/...
- labels: [ { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
- state: closed
- locked: false
- assignee: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
- assignees: [ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
- milestone: null
- comments: 7
- created_at: 2024-07-05T17:53:36
- updated_at: 2024-09-26T00:14:43
- closed_at: 2024-09-26T00:14:43
- author_association: NONE
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: ### What is the issue? ### Error Pulling Manifest Repeatedly Hello Ollama Comunity, When I first installed Ollama on my early 2015 13in Macbook Air (1.6GHz Dual Core Intel Core i5, 8 GB 1600 MHz DDR3, Intel HD Graphics 600 1536 MB) it worked perfectly fine once i used it again after instilation i got the error: ...
- closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5499/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/5499/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
Record 10 — issue #295: `stop` parameter values don't always stop generation
- url: https://api.github.com/repos/ollama/ollama/issues/295
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/295/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/295/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/295/events
- html_url: https://github.com/ollama/ollama/issues/295
- id: 1838028667
- node_id: I_kwDOJ0Z1Ps5tjhd7
- number: 295
- title: `stop` parameter values don't always stop generation
- user: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
- labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
- state: closed
- locked: false
- assignee: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
- assignees: [ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
- milestone: null
- comments: 2
- created_at: 2023-08-06T03:35:35
- updated_at: 2023-08-30T04:17:43
- closed_at: 2023-08-08T04:29:28
- author_association: MEMBER
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: Stop words don't always stop generation
- closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/295/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
Record 11 — pull request #1661: Fix `template` api doc description
- url: https://api.github.com/repos/ollama/ollama/issues/1661
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/1661/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/1661/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/1661/events
- html_url: https://github.com/ollama/ollama/pull/1661
- id: 2052871624
- node_id: PR_kwDOJ0Z1Ps5imIOM
- number: 1661
- title: Fix `template` api doc description
- user: { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 0
- created_at: 2023-12-21T18:12:50
- updated_at: 2024-01-03T16:01:00
- closed_at: 2024-01-03T16:00:59
- author_association: CONTRIBUTOR
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: false
- pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/1661", "html_url": "https://github.com/ollama/ollama/pull/1661", "diff_url": "https://github.com/ollama/ollama/pull/1661.diff", "patch_url": "https://github.com/ollama/ollama/pull/1661.patch", "merged_at": "2024-01-03T16:00:59" }
- body: The API docs specify that `template` overrides the prompt which isn't the case (verified back to v0.1.13), this is the functionality that `raw` mode enables. This change fixes the description.
- closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1661/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/1661/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
Record 12 — issue #3975: When used, it is always cpu full instead of gpu full, and gpu usage is almost zero
- url: https://api.github.com/repos/ollama/ollama/issues/3975
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/3975/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/3975/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/3975/events
- html_url: https://github.com/ollama/ollama/issues/3975
- id: 2266883233
- node_id: I_kwDOJ0Z1Ps6HHeSh
- number: 3975
- title: When used, it is always cpu full instead of gpu full, and gpu usage is almost zero
- user: { "login": "KritoAndAsuna", "id": 59231253, "node_id": "MDQ6VXNlcjU5MjMxMjUz", "avatar_url": "https://avatars.githubusercontent.com/u/59231253?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KritoAndAsuna", "html_url": "https://github.com/KritoAndAsuna", "followers_url": "https://api.githu...
- labels: [ { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
- state: closed
- locked: false
- assignee: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
- assignees: [ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
- milestone: null
- comments: 7
- created_at: 2024-04-27T07:14:05
- updated_at: 2024-05-21T18:20:44
- closed_at: 2024-05-21T18:20:43
- author_association: NONE
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: ### What is the issue? When used, it is always cpu full instead of gpu full, and gpu usage is almost zero ### OS Windows ### GPU AMD ### CPU AMD ### Ollama version 0.1.32
- closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/3975/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/3975/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
Record 13 — issue #1326: Installation fails on Fedora 39 (38+)
- url: https://api.github.com/repos/ollama/ollama/issues/1326
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/1326/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/1326/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/1326/events
- html_url: https://github.com/ollama/ollama/issues/1326
- id: 2017827752
- node_id: I_kwDOJ0Z1Ps54RZuo
- number: 1326
- title: Installation fails on Fedora 39 (38+)
- user: { "login": "cephalization", "id": 8948924, "node_id": "MDQ6VXNlcjg5NDg5MjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8948924?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cephalization", "html_url": "https://github.com/cephalization", "followers_url": "https://api.github....
- labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 1
- created_at: 2023-11-30T03:58:35
- updated_at: 2024-01-18T22:23:43
- closed_at: 2024-01-18T22:23:43
- author_association: NONE
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: Nvidia hasn't uploaded specific cuda drivers for later versions of fedora here https://developer.download.nvidia.com/compute/cuda/repos/ So, installation fails when trying to install them for 38 and 39. To fix, you can follow the steps for Fedora 35 and later here https://rpmfusion.org/Howto/CUDA ```sh sudo d...
- closed_by: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/1326/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
Record 14 — issue #3322: I can't make vision models work
- url: https://api.github.com/repos/ollama/ollama/issues/3322
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/3322/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/3322/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/3322/events
- html_url: https://github.com/ollama/ollama/issues/3322
- id: 2204186202
- node_id: I_kwDOJ0Z1Ps6DYTZa
- number: 3322
- title: I can't make vision models work
- user: { "login": "donnadulcinea", "id": 34122487, "node_id": "MDQ6VXNlcjM0MTIyNDg3", "avatar_url": "https://avatars.githubusercontent.com/u/34122487?v=4", "gravatar_id": "", "url": "https://api.github.com/users/donnadulcinea", "html_url": "https://github.com/donnadulcinea", "followers_url": "https://api.githu...
- labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q...
- state: closed
- locked: false
- assignee: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
- assignees: [ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
- milestone: null
- comments: 8
- created_at: 2024-03-24T05:17:58
- updated_at: 2024-11-24T22:17:59
- closed_at: 2024-11-24T22:17:59
- author_association: NONE
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: ### What is the issue? I am running ollama via docker. Everything works smootly but vision models. I tried `llava` and `bakllava` with no success. ### What did you expect to see? The description of the image I provided. ### Steps to reproduce Run an instance of ollama with docker, pull latest model of llava or ba...
- closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/3322/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/3322/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
Record 15 — issue #1389: Request: The ability to load multiple models into the same GPUs and running them concurrently.
- url: https://api.github.com/repos/ollama/ollama/issues/1389
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/1389/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/1389/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/1389/events
- html_url: https://github.com/ollama/ollama/issues/1389
- id: 2026755277
- node_id: I_kwDOJ0Z1Ps54zdTN
- number: 1389
- title: Request: The ability to load multiple models into the same GPUs and running them concurrently.
- user: { "login": "phalexo", "id": 4603365, "node_id": "MDQ6VXNlcjQ2MDMzNjU=", "avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phalexo", "html_url": "https://github.com/phalexo", "followers_url": "https://api.github.com/users/phalexo/...
- labels: []
- state: closed
- locked: false
- assignee: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
- assignees: [ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
- milestone: null
- comments: 4
- created_at: 2023-12-05T17:19:34
- updated_at: 2024-03-12T16:46:44
- closed_at: 2024-03-12T16:46:36
- author_association: NONE
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: Currently what ollama does is UNLOAD the previously loaded model, and loads the last model you try to use. Although the load is reasonably fast (if you intend to manually enter text and such) but if you want to use it with AutoGen or similar, loads and unloads put additional latency into the system, when token generati...
- closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1389/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/1389/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
Record 16 — pull request #8601: README: Add handy-ollama to tutorial
- url: https://api.github.com/repos/ollama/ollama/issues/8601
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/8601/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/8601/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/8601/events
- html_url: https://github.com/ollama/ollama/pull/8601
- id: 2812057230
- node_id: PR_kwDOJ0Z1Ps6JCJq5
- number: 8601
- title: README: Add handy-ollama to tutorial
- user: { "login": "AXYZdong", "id": 45477220, "node_id": "MDQ6VXNlcjQ1NDc3MjIw", "avatar_url": "https://avatars.githubusercontent.com/u/45477220?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AXYZdong", "html_url": "https://github.com/AXYZdong", "followers_url": "https://api.github.com/users/AXY...
- labels: []
- state: open
- locked: false
- assignee: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
- assignees: [ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
- milestone: null
- comments: 0
- created_at: 2025-01-27T04:29:41
- updated_at: 2025-01-27T17:08:48
- closed_at: null
- author_association: NONE
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: false
- pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/8601", "html_url": "https://github.com/ollama/ollama/pull/8601", "diff_url": "https://github.com/ollama/ollama/pull/8601.diff", "patch_url": "https://github.com/ollama/ollama/pull/8601.patch", "merged_at": null }
- body: Chinese Tutorial for Ollama by [Datawhale ](https://github.com/datawhalechina)- China's Largest Open Source AI Learning Community. We'd like to contribute to the Ollama community by announcing the release of our open-source Chinese tutorial. This tutorial aims to be comprehensive and easy to understand, covering:...
- closed_by: null
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8601/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/8601/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
Record 17 — issue #4971: How to disallow the use of both gpu and cpu
- url: https://api.github.com/repos/ollama/ollama/issues/4971
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/4971/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/4971/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/4971/events
- html_url: https://github.com/ollama/ollama/issues/4971
- id: 2345302568
- node_id: I_kwDOJ0Z1Ps6Lynoo
- number: 4971
- title: How to disallow the use of both gpu and cpu
- user: { "login": "xiaohanglei", "id": 32543872, "node_id": "MDQ6VXNlcjMyNTQzODcy", "avatar_url": "https://avatars.githubusercontent.com/u/32543872?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xiaohanglei", "html_url": "https://github.com/xiaohanglei", "followers_url": "https://api.github.com/...
- labels: [ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
- state: open
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 4
- created_at: 2024-06-11T03:28:49
- updated_at: 2024-06-14T02:45:19
- closed_at: null
- author_association: NONE
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: When using both GPU and CPU, the output will be garbled, so I want to prohibit this scenario ![image](https://github.com/ollama/ollama/assets/32543872/8dec48d9-12aa-4ccc-b2a6-6243ec1f6b27)
- closed_by: null
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4971/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/4971/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: false
Record 18 — issue #4633: Problem while pulling some models
- url: https://api.github.com/repos/ollama/ollama/issues/4633
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/4633/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/4633/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/4633/events
- html_url: https://github.com/ollama/ollama/issues/4633
- id: 2316911164
- node_id: I_kwDOJ0Z1Ps6KGUI8
- number: 4633
- title: Problem while pulling some models
- user: { "login": "skrew", "id": 738170, "node_id": "MDQ6VXNlcjczODE3MA==", "avatar_url": "https://avatars.githubusercontent.com/u/738170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skrew", "html_url": "https://github.com/skrew", "followers_url": "https://api.github.com/users/skrew/followers"...
- labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 5
- created_at: 2024-05-25T10:32:28
- updated_at: 2024-05-28T13:21:34
- closed_at: 2024-05-25T16:29:15
- author_association: NONE
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: ### What is the issue? While using the command `ollama pull aya:35b-23-q8_0`, downloading stuck to 98-99% Tested multiple time Then i've tested with 0.1.37 version, i can pull this model without problem ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.1.39
- closed_by: { "login": "skrew", "id": 738170, "node_id": "MDQ6VXNlcjczODE3MA==", "avatar_url": "https://avatars.githubusercontent.com/u/738170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skrew", "html_url": "https://github.com/skrew", "followers_url": "https://api.github.com/users/skrew/followers"...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4633/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/4633/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
Record 19 — pull request #348: cross repo blob mount
- url: https://api.github.com/repos/ollama/ollama/issues/348
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/348/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/348/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/348/events
- html_url: https://github.com/ollama/ollama/pull/348
- id: 1850616235
- node_id: PR_kwDOJ0Z1Ps5X7UvX
- number: 348
- title: cross repo blob mount
- user: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 1
- created_at: 2023-08-14T22:08:27
- updated_at: 2023-08-16T16:20:37
- closed_at: 2023-08-16T16:20:36
- author_association: CONTRIBUTOR
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: false
- pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/348", "html_url": "https://github.com/ollama/ollama/pull/348", "diff_url": "https://github.com/ollama/ollama/pull/348.diff", "patch_url": "https://github.com/ollama/ollama/pull/348.patch", "merged_at": "2023-08-16T16:20:36" }
- body: implement registry's cross repo blob mount
- closed_by: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/348/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/348/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
Record 20 — issue #4926: fail to upload models due to max try
- url: https://api.github.com/repos/ollama/ollama/issues/4926
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/4926/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/4926/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/4926/events
- html_url: https://github.com/ollama/ollama/issues/4926
- id: 2341485627
- node_id: I_kwDOJ0Z1Ps6LkDw7
- number: 4926
- title: fail to upload models due to max try
- user: { "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/tao...
- labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 7
- created_at: 2024-06-08T05:26:26
- updated_at: 2024-06-10T06:00:09
- closed_at: 2024-06-09T23:16:24
- author_association: NONE
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: ### What is the issue? `pushing c165271d7cbb... 73% ▕████████████████ ▏ 44 GB/ 61 GB 4.3 MB/s 1h3m Error: max retries exceeded: Put "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/repositories/taozhiyuai/qwen2-57b-a14b-instruct/_uploads/cb319ba7-5ab8-40c3-a59...
- closed_by: { "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/tao...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4926/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/4926/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
Record 21 — pull request #3564: Revert "build.go: introduce a friendlier way to build Ollama (#3548)"
- url: https://api.github.com/repos/ollama/ollama/issues/3564
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/3564/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/3564/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/3564/events
- html_url: https://github.com/ollama/ollama/pull/3564
- id: 2234426010
- node_id: PR_kwDOJ0Z1Ps5sLzNh
- number: 3564
- title: Revert "build.go: introduce a friendlier way to build Ollama (#3548)"
- user: { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 0
- created_at: 2024-04-09T22:40:31
- updated_at: 2024-04-09T22:57:46
- closed_at: 2024-04-09T22:57:45
- author_association: CONTRIBUTOR
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: false
- pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/3564", "html_url": "https://github.com/ollama/ollama/pull/3564", "diff_url": "https://github.com/ollama/ollama/pull/3564.diff", "patch_url": "https://github.com/ollama/ollama/pull/3564.patch", "merged_at": "2024-04-09T22:57:45" }
- body: This reverts commit fccf3eecaaecc94178a12084aabe6e0bcb24a1d9.
- closed_by: { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/3564/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/3564/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
Record 22 — pull request #7126: Add web management tool to Community Integrations
- url: https://api.github.com/repos/ollama/ollama/issues/7126
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/7126/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/7126/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/7126/events
- html_url: https://github.com/ollama/ollama/pull/7126
- id: 2571830579
- node_id: PR_kwDOJ0Z1Ps594mrb
- number: 7126
- title: Add web management tool to Community Integrations
- user: { "login": "lemonit-eric-mao", "id": 68628461, "node_id": "MDQ6VXNlcjY4NjI4NDYx", "avatar_url": "https://avatars.githubusercontent.com/u/68628461?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lemonit-eric-mao", "html_url": "https://github.com/lemonit-eric-mao", "followers_url": "https://...
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 0
- created_at: 2024-10-08T01:21:12
- updated_at: 2024-11-21T10:51:46
- closed_at: 2024-11-21T10:51:46
- author_association: CONTRIBUTOR
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: false
- pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/7126", "html_url": "https://github.com/ollama/ollama/pull/7126", "diff_url": "https://github.com/ollama/ollama/pull/7126.diff", "patch_url": "https://github.com/ollama/ollama/pull/7126.patch", "merged_at": "2024-11-21T10:51:46" }
- body: "Add web management tool to Community Integrations"
- closed_by: { "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7126/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/7126/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
Record 23 — issue #2431: Ability to preload a model?
- url: https://api.github.com/repos/ollama/ollama/issues/2431
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/2431/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/2431/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/2431/events
- html_url: https://github.com/ollama/ollama/issues/2431
- id: 2127716970
- node_id: I_kwDOJ0Z1Ps5-0mJq
- number: 2431
- title: Ability to preload a model?
- user: { "login": "powellnorma", "id": 101364699, "node_id": "U_kgDOBgqz2w", "avatar_url": "https://avatars.githubusercontent.com/u/101364699?v=4", "gravatar_id": "", "url": "https://api.github.com/users/powellnorma", "html_url": "https://github.com/powellnorma", "followers_url": "https://api.github.com/users/...
- labels: [ { "id": 5667396191, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw", "url": "https://api.github.com/repos/ollama/ollama/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 7
- created_at: 2024-02-09T19:15:38
- updated_at: 2024-05-15T18:58:59
- closed_at: 2024-02-19T23:20:30
- author_association: NONE
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: Is it possible to preload a model without actually using it? For example if the users starts typing his request, it would be useful to be able to "preload" the model, instead of just loading it once the request is submitted.
- closed_by: { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/2431/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/2431/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
Record 24 — issue #6957: `ollama stop` fails if the model has been deleted
- url: https://api.github.com/repos/ollama/ollama/issues/6957
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/6957/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/6957/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/6957/events
- html_url: https://github.com/ollama/ollama/issues/6957
- id: 2548409065
- node_id: I_kwDOJ0Z1Ps6X5aLp
- number: 6957
- title: `ollama stop` fails if the model has been deleted
- user: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
- labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 1
- created_at: 2024-09-25T16:13:07
- updated_at: 2024-10-01T22:45:44
- closed_at: 2024-10-01T22:45:44
- author_association: MEMBER
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: ### What is the issue? ``` PS C:\Users\jmorgan> ollama ps NAME ID SIZE PROCESSOR UNTIL solar-pro:latest 9a8c71c441ca 18 GB 100% GPU 4 minutes from now PS C:\Users\jmorgan> ollama rm solar-pro deleted 'solar-pro' PS C:\Users\jmorgan> ollama stop solar-pro Error: co...
- closed_by: { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6957/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/6957/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
Record 25 — issue #5487: granite code page does not show the 20 and 34 b models
- url: https://api.github.com/repos/ollama/ollama/issues/5487
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/5487/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/5487/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/5487/events
- html_url: https://github.com/ollama/ollama/issues/5487
- id: 2391234042
- node_id: I_kwDOJ0Z1Ps6Oh1X6
- number: 5487
- title: granite code page does not show the 20 and 34 b models
- user: { "login": "olumolu", "id": 162728301, "node_id": "U_kgDOCbMJbQ", "avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4", "gravatar_id": "", "url": "https://api.github.com/users/olumolu", "html_url": "https://github.com/olumolu", "followers_url": "https://api.github.com/users/olumolu/foll...
- labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6573197867, "node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw...
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 5
- created_at: 2024-07-04T17:10:19
- updated_at: 2024-10-24T02:42:38
- closed_at: 2024-10-24T02:42:30
- author_association: CONTRIBUTOR
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: ### What is the issue? ![image](https://github.com/ollama/ollama/assets/162728301/d3143bc8-12a2-46e2-a382-b9f5fbaf441e) granite code page does not show the 20 and 34 b models we can see there are 4 models but does not have the size mentioned on the website end ### OS Linux ### GPU AMD ### CPU AMD ### Ollama ve...
- closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5487/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/5487/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
Record 26 — issue #8007: EXAONE-3.5 2.4B, 7.8B, and 32B
- url: https://api.github.com/repos/ollama/ollama/issues/8007
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/8007/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/8007/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/8007/events
- html_url: https://github.com/ollama/ollama/issues/8007
- id: 2725925227
- node_id: I_kwDOJ0Z1Ps6ielFr
- number: 8007
- title: EXAONE-3.5 2.4B, 7.8B, and 32B
- user: { "login": "vYLQs6", "id": 143073604, "node_id": "U_kgDOCIchRA", "avatar_url": "https://avatars.githubusercontent.com/u/143073604?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vYLQs6", "html_url": "https://github.com/vYLQs6", "followers_url": "https://api.github.com/users/vYLQs6/follower...
- labels: [ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 3
- created_at: 2024-12-09T04:04:57
- updated_at: 2024-12-10T08:04:52
- closed_at: 2024-12-10T08:04:51
- author_association: NONE
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body:
  https://huggingface.co/collections/LGAI-EXAONE/exaone-35-674d0e1bb3dcd2ab6f39dbb4

  Note: I enabled Q8 KV Cache, and when I tried the gguf uploaded by LG, I always get this error: `Error: llama runner process has terminated: GGML_ASSERT(hparams.n_embd_head_k % ggml_blck_size(type_k) == 0) failed`. Idk if this means this model just doesn't support kv cache or ollama needs an update. Everything works fine after I disabled kv cache & flash attention.

  | Models | MT-Bench | LiveBench | Arena-Hard | AlpacaEval | IFEval | KoMT-Bench[1] | LogicKor |
  |---|---|---|---|---|---|---|---|
  | EXAONE 3.5 32B | **8.51** | 43.0 | **78.6** | **60.6** | **81.7** | **8.05** | **9.06** |
  | Qwen 2.5 32B | 8.49 | **50.6** | 67.0 | 41.0 | 78.7 | 7.75 | 8.89 |
  | C4AI Command R 32B | 7.38 | 29.7 | 17.0 | 25.9 | 26.1 | 6.72 | 8.24 |
  | Gemma 2 27B | 8.28 | 40.0 | 57.5 | 52.2 | 59.7 | 7.19 | 8.56 |
  | Yi 1.5 34B | 7.64 | 26.2 | 23.1 | 34.8 | 55.5 | 4.88 | 6.33 |
- closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8007/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/8007/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
https://api.github.com/repos/ollama/ollama/issues/867
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/867/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/867/comments
https://api.github.com/repos/ollama/ollama/issues/867/events
https://github.com/ollama/ollama/issues/867
1,955,179,445
I_kwDOJ0Z1Ps50iau1
867
ollama API not responding
{ "login": "abulka", "id": 11467530, "node_id": "MDQ6VXNlcjExNDY3NTMw", "avatar_url": "https://avatars.githubusercontent.com/u/11467530?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abulka", "html_url": "https://github.com/abulka", "followers_url": "https://api.github.com/users/abulka/fo...
[]
closed
false
null
[]
null
2
2023-10-21T01:01:00
2023-10-21T01:20:05
2023-10-21T01:06:23
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
ollama isn't responding to ``` curl http://localhost:11434/api/show --json '{"name": "codellama:7b-instruct"}' 404 page not found ``` and I didn't configure ollama to start on a particular port, just a default install. I have the models: ``` % ollama list NAME SIZE MODIFIED code...
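For reference, a minimal sketch of the same call from Go: /api/show expects a POST with a JSON body, and one common cause of a `404 page not found` here is an installed server binary that predates the endpoint (the model name is whatever `ollama list` reports):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// POST /api/show with the model name in the JSON body.
	body := []byte(`{"name": "codellama:7b-instruct"}`)
	resp, err := http.Post("http://localhost:11434/api/show",
		"application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(out))
}
```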
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/867/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/867/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7848
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7848/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7848/comments
https://api.github.com/repos/ollama/ollama/issues/7848/events
https://github.com/ollama/ollama/issues/7848
2,696,108,449
I_kwDOJ0Z1Ps6gs1mh
7,848
Teuken-7b
{ "login": "tilllt", "id": 1854364, "node_id": "MDQ6VXNlcjE4NTQzNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1854364?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tilllt", "html_url": "https://github.com/tilllt", "followers_url": "https://api.github.com/users/tilllt/foll...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
2
2024-11-26T21:13:22
2024-12-05T15:15:06
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://huggingface.co/openGPT-X/Teuken-7B-instruct-research-v0.4
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7848/reactions", "total_count": 32, "+1": 32, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7848/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2719
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2719/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2719/comments
https://api.github.com/repos/ollama/ollama/issues/2719/events
https://github.com/ollama/ollama/pull/2719
2,152,000,317
PR_kwDOJ0Z1Ps5nzf0G
2,719
remove format private key
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
1
2024-02-24T00:54:51
2024-03-28T18:20:59
2024-02-24T01:15:14
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2719", "html_url": "https://github.com/ollama/ollama/pull/2719", "diff_url": "https://github.com/ollama/ollama/pull/2719.diff", "patch_url": "https://github.com/ollama/ollama/pull/2719.patch", "merged_at": "2024-02-24T01:15:14" }
The utility format/openssh.go is no longer necessary, since x/crypto/ssh v0.14.0 introduced MarshalPrivateKey.
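A minimal sketch of the replacement path, assuming x/crypto/ssh >= v0.14.0:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"encoding/pem"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	_, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	// MarshalPrivateKey returns a *pem.Block in OpenSSH private key format,
	// which previously required a hand-rolled formatter.
	block, err := ssh.MarshalPrivateKey(priv, "")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(pem.EncodeToMemory(block)))
}
```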
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2719/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2719/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6511
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6511/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6511/comments
https://api.github.com/repos/ollama/ollama/issues/6511/events
https://github.com/ollama/ollama/issues/6511
2,486,332,769
I_kwDOJ0Z1Ps6UMm1h
6,511
Embedding model text2vec-large-chinese
{ "login": "icetech233", "id": 17383321, "node_id": "MDQ6VXNlcjE3MzgzMzIx", "avatar_url": "https://avatars.githubusercontent.com/u/17383321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/icetech233", "html_url": "https://github.com/icetech233", "followers_url": "https://api.github.com/use...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
0
2024-08-26T09:00:11
2024-08-26T09:00:11
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
www.modelscope.cn/Jerry0/text2vec-large-chinese
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6511/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6511/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3650
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3650/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3650/comments
https://api.github.com/repos/ollama/ollama/issues/3650/events
https://github.com/ollama/ollama/issues/3650
2,243,361,026
I_kwDOJ0Z1Ps6FtvkC
3,650
Default command R Modelfile template does not respect specification
{ "login": "GiovanniGatti", "id": 1745450, "node_id": "MDQ6VXNlcjE3NDU0NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/1745450?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GiovanniGatti", "html_url": "https://github.com/GiovanniGatti", "followers_url": "https://api.github....
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2024-04-15T10:55:08
2024-04-15T19:10:12
2024-04-15T19:10:12
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? After reading the documentation of [Command R](https://docs.cohere.com/docs/prompting-command-r#components-of-a-structured-prompt) I found it strange that the (mandatory) `<BOS_TOKEN>` wasn't specified in the default Modelfile.template [here](https://ollama.com/library/command-r:latest/blobs/4...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3650/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3650/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2210
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2210/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2210/comments
https://api.github.com/repos/ollama/ollama/issues/2210/events
https://github.com/ollama/ollama/issues/2210
2,102,604,557
I_kwDOJ0Z1Ps59UzMN
2,210
Keep models in RAM
{ "login": "LeoPiresDeSouza", "id": 40829469, "node_id": "MDQ6VXNlcjQwODI5NDY5", "avatar_url": "https://avatars.githubusercontent.com/u/40829469?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LeoPiresDeSouza", "html_url": "https://github.com/LeoPiresDeSouza", "followers_url": "https://api...
[]
closed
false
null
[]
null
2
2024-01-26T17:37:29
2024-01-28T22:29:53
2024-01-28T22:29:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I am testing llama2:7b models both using ollama and calling directly from a langchain python script. My models are stored on an Ubuntu server with 12 cores and 36 GB of RAM, but no GPU. When I call the model directly from python, setting the memlock parameter to true, my memory usage goes above 6 GB, but when using ollama it st...
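A hedged sketch of the ollama-side equivalent of that memlock setting: the API exposes a `use_mlock` request option that asks llama.cpp to lock the weights in RAM (the model name here follows the report; the memory figures above are the reporter's, not verified):

```go
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	payload, _ := json.Marshal(map[string]any{
		"model":  "llama2:7b",
		"prompt": "hello",
		"options": map[string]any{
			"use_mlock": true, // ask llama.cpp to lock weights in RAM
		},
	})
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}
```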
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2210/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2210/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7947
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7947/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7947/comments
https://api.github.com/repos/ollama/ollama/issues/7947/events
https://github.com/ollama/ollama/issues/7947
2,719,646,444
I_kwDOJ0Z1Ps6iGoLs
7,947
Not using GPU
{ "login": "frenzybiscuit", "id": 190028151, "node_id": "U_kgDOC1OZdw", "avatar_url": "https://avatars.githubusercontent.com/u/190028151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/frenzybiscuit", "html_url": "https://github.com/frenzybiscuit", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
10
2024-12-05T07:49:00
2024-12-23T08:05:46
2024-12-23T08:05:46
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I have the following setup: 7950x3d (AMD iGPU) 3090 + 2080ti When using Ollama with open-webui, the GPU (3090) gets used BRIEFLY. It starts using the GPU, which ramps up to 90% utilization, and then it just stops and falls back to the CPU. I have installed Ollama and built from source on...
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7947/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7947/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/151
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/151/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/151/comments
https://api.github.com/repos/ollama/ollama/issues/151/events
https://github.com/ollama/ollama/pull/151
1,814,871,849
PR_kwDOJ0Z1Ps5WDKbV
151
add rm command for models
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2023-07-20T22:19:23
2023-07-20T23:09:23
2023-07-20T23:09:23
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/151", "html_url": "https://github.com/ollama/ollama/pull/151", "diff_url": "https://github.com/ollama/ollama/pull/151.diff", "patch_url": "https://github.com/ollama/ollama/pull/151.patch", "merged_at": "2023-07-20T23:09:23" }
This change adds an "rm" command so that you can remove models that you don't want anymore. The handler determines if other manifests require a given layer and will save anything still required.
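A minimal sketch of that reference-counting idea, with illustrative types rather than ollama's actual ones: collect the layer digests used by every remaining manifest, then delete only blobs nothing still references:

```go
package main

import "fmt"

type manifest struct {
	name   string
	layers []string // blob digests
}

// unreferenced returns the layers of the removed manifest that no other
// manifest still requires, i.e. the blobs that are safe to delete.
func unreferenced(all []manifest, removed manifest) []string {
	inUse := map[string]bool{}
	for _, m := range all {
		if m.name == removed.name {
			continue
		}
		for _, l := range m.layers {
			inUse[l] = true
		}
	}
	var orphans []string
	for _, l := range removed.layers {
		if !inUse[l] {
			orphans = append(orphans, l)
		}
	}
	return orphans
}

func main() {
	a := manifest{"llama2", []string{"sha256:aa", "sha256:bb"}}
	b := manifest{"codellama", []string{"sha256:bb", "sha256:cc"}}
	fmt.Println(unreferenced([]manifest{a, b}, a)) // [sha256:aa]
}
```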
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/151/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/151/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/968
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/968/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/968/comments
https://api.github.com/repos/ollama/ollama/issues/968/events
https://github.com/ollama/ollama/pull/968
1,973,577,853
PR_kwDOJ0Z1Ps5eZcY3
968
Use default RoPE params for new models
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2023-11-02T06:13:35
2023-11-02T15:41:31
2023-11-02T15:41:30
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/968", "html_url": "https://github.com/ollama/ollama/pull/968", "diff_url": "https://github.com/ollama/ollama/pull/968.diff", "patch_url": "https://github.com/ollama/ollama/pull/968.patch", "merged_at": "2023-11-02T15:41:30" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/968/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/968/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7154
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7154/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7154/comments
https://api.github.com/repos/ollama/ollama/issues/7154/events
https://github.com/ollama/ollama/pull/7154
2,576,932,647
PR_kwDOJ0Z1Ps5-IV1-
7,154
update .gitattributes with proper linguist-vendored entry
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-10-09T20:52:51
2024-10-10T00:25:10
2024-10-10T00:25:10
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7154", "html_url": "https://github.com/ollama/ollama/pull/7154", "diff_url": "https://github.com/ollama/ollama/pull/7154.diff", "patch_url": "https://github.com/ollama/ollama/pull/7154.patch", "merged_at": null }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7154/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7154/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1772
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1772/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1772/comments
https://api.github.com/repos/ollama/ollama/issues/1772/events
https://github.com/ollama/ollama/issues/1772
2,064,537,335
I_kwDOJ0Z1Ps57Dlb3
1,772
Metadata field for multimodal models
{ "login": "shreyaskarnik", "id": 311217, "node_id": "MDQ6VXNlcjMxMTIxNw==", "avatar_url": "https://avatars.githubusercontent.com/u/311217?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shreyaskarnik", "html_url": "https://github.com/shreyaskarnik", "followers_url": "https://api.github.co...
[]
closed
false
null
[]
null
3
2024-01-03T19:26:03
2024-01-04T01:34:14
2024-01-04T00:12:03
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Would it be possible to add some metadata to the model indicating that it is multimodal? This will help to select the right model in applications that are built on top of the API to support multimodal architecture. I believe this will also help to search through models at https://ollama.ai/library and filter based on m...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1772/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1772/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8194
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8194/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8194/comments
https://api.github.com/repos/ollama/ollama/issues/8194/events
https://github.com/ollama/ollama/pull/8194
2,753,677,449
PR_kwDOJ0Z1Ps6F9uvT
8,194
Mxyng/next llama
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-12-21T01:00:23
2025-01-10T19:30:24
2025-01-10T19:30:24
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8194", "html_url": "https://github.com/ollama/ollama/pull/8194", "diff_url": "https://github.com/ollama/ollama/pull/8194.diff", "patch_url": "https://github.com/ollama/ollama/pull/8194.patch", "merged_at": "2025-01-10T19:30:24" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8194/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6673
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6673/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6673/comments
https://api.github.com/repos/ollama/ollama/issues/6673/events
https://github.com/ollama/ollama/issues/6673
2,509,963,694
I_kwDOJ0Z1Ps6VmwGu
6,673
Ollama-rocm on Kubernetes with shared AMD GPU seems to have problems allocating vram
{ "login": "kubax", "id": 1083100, "node_id": "MDQ6VXNlcjEwODMxMDA=", "avatar_url": "https://avatars.githubusercontent.com/u/1083100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kubax", "html_url": "https://github.com/kubax", "followers_url": "https://api.github.com/users/kubax/follower...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2024-09-06T09:19:56
2024-09-06T10:58:54
2024-09-06T10:58:54
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi, I'm pretty new to Ollama, and recently replaced my RX580 with an RX7600 to be able to use Ollama in Kubernetes with ROCm. When I run Ollama on Arch directly with ROCm support everything works great and is really snappy. But when I run Ollama through Kubernetes with the AMD Plugin to s...
{ "login": "kubax", "id": 1083100, "node_id": "MDQ6VXNlcjEwODMxMDA=", "avatar_url": "https://avatars.githubusercontent.com/u/1083100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kubax", "html_url": "https://github.com/kubax", "followers_url": "https://api.github.com/users/kubax/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6673/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6673/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1697
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1697/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1697/comments
https://api.github.com/repos/ollama/ollama/issues/1697/events
https://github.com/ollama/ollama/pull/1697
2,055,165,058
PR_kwDOJ0Z1Ps5ityR_
1,697
Add windows native build instructions
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2023-12-24T17:05:29
2024-01-06T03:34:24
2024-01-06T03:34:21
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1697", "html_url": "https://github.com/ollama/ollama/pull/1697", "diff_url": "https://github.com/ollama/ollama/pull/1697.diff", "patch_url": "https://github.com/ollama/ollama/pull/1697.patch", "merged_at": "2024-01-06T03:34:21" }
Fixes #1694 Note: the resulting native Windows binary isn't particularly user-friendly right now, as it requires setting your PATH deep into the source tree to pick up the dependent DLLs. I'm working on another change that will address this. I'll keep this PR as a draft until that's ready.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1697/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1697/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4570
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4570/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4570/comments
https://api.github.com/repos/ollama/ollama/issues/4570/events
https://github.com/ollama/ollama/pull/4570
2,309,546,545
PR_kwDOJ0Z1Ps5wJoQz
4,570
lint some of the things
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-05-22T04:31:33
2024-06-04T20:27:06
2024-06-04T20:27:05
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4570", "html_url": "https://github.com/ollama/ollama/pull/4570", "diff_url": "https://github.com/ollama/ollama/pull/4570.diff", "patch_url": "https://github.com/ollama/ollama/pull/4570.patch", "merged_at": "2024-06-04T20:27:05" }
Now that ollama uses go1.22, `x/exp/slices` can be replaced with the standard `slices` package. Enable some useful linters: - intrange, a 1.22 feature which simplifies `for i := 0; i < n; i++ { }` to `for i := range n { }` - testifylint, to find bad testify assertions - unconvert, to find unnecessary type conversions - ~use...
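A small sketch of the two idioms those linters encourage, valid from Go 1.22:

```go
package main

import (
	"fmt"
	"slices"
)

func main() {
	nums := []int{3, 1, 2}

	// Pre-1.22 counting loop, which the intrange linter flags:
	for i := 0; i < len(nums); i++ {
		fmt.Println(nums[i])
	}

	// Go 1.22 range-over-int form:
	for i := range len(nums) {
		fmt.Println(nums[i])
	}

	slices.Sort(nums) // stdlib slices replaces golang.org/x/exp/slices
	fmt.Println(nums)
}
```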
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4570/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4570/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7703
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7703/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7703/comments
https://api.github.com/repos/ollama/ollama/issues/7703/events
https://github.com/ollama/ollama/issues/7703
2,664,794,748
I_kwDOJ0Z1Ps6e1Yp8
7,703
Clarify JSONL as the Returned Format for Streaming JSON Objects
{ "login": "gwpl", "id": 221403, "node_id": "MDQ6VXNlcjIyMTQwMw==", "avatar_url": "https://avatars.githubusercontent.com/u/221403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gwpl", "html_url": "https://github.com/gwpl", "followers_url": "https://api.github.com/users/gwpl/followers", ...
[ { "id": 5667396191, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw", "url": "https://api.github.com/repos/ollama/ollama/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
open
false
null
[]
null
3
2024-11-16T19:01:43
2024-11-20T13:41:59
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
**Current Documentation**: [API documentation](https://github.com/ollama/ollama/blob/4759d879f2376ffb9b82f296e442ec8ef137f27b/docs/api.md?plain=1#L79) states: > A stream of JSON objects is returned. **Proposal**: Specify the format explicitly as: > A stream of JSON objects in [JSON Lines (JSONL)] format ...
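A minimal sketch of a client that relies on the JSON Lines framing the proposal wants documented, reading one JSON object per newline-delimited line from /api/generate (field names follow the public API):

```go
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	req := []byte(`{"model":"llama3","prompt":"why is the sky blue?"}`)
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(req))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// JSONL framing: each line is one complete JSON object.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		var chunk struct {
			Response string `json:"response"`
			Done     bool   `json:"done"`
		}
		if err := json.Unmarshal(scanner.Bytes(), &chunk); err != nil {
			log.Fatal(err)
		}
		fmt.Print(chunk.Response)
		if chunk.Done {
			break
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```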
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7703/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7703/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5890
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5890/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5890/comments
https://api.github.com/repos/ollama/ollama/issues/5890/events
https://github.com/ollama/ollama/issues/5890
2,426,109,102
I_kwDOJ0Z1Ps6Qm3yu
5,890
Assistant doesn't continue from its last message on 0.2.8
{ "login": "josegtmonteiro", "id": 169712316, "node_id": "U_kgDOCh2avA", "avatar_url": "https://avatars.githubusercontent.com/u/169712316?v=4", "gravatar_id": "", "url": "https://api.github.com/users/josegtmonteiro", "html_url": "https://github.com/josegtmonteiro", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
8
2024-07-23T20:53:56
2024-07-25T01:14:29
2024-07-25T01:14:29
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? @jmorganca, thanks for the quick fix on https://github.com/ollama/ollama/issues/5775. However, testing here with 0.2.8, I'm still not able to continue the message. With the same example I mentioned before, using OLLAMA_DEBUG I'm able to see the final prompt on the console; it is: pr...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5890/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5890/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7468
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7468/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7468/comments
https://api.github.com/repos/ollama/ollama/issues/7468/events
https://github.com/ollama/ollama/pull/7468
2,630,254,140
PR_kwDOJ0Z1Ps6AsIEs
7,468
Add a command to clear the screen
{ "login": "cootshk", "id": 83678457, "node_id": "MDQ6VXNlcjgzNjc4NDU3", "avatar_url": "https://avatars.githubusercontent.com/u/83678457?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cootshk", "html_url": "https://github.com/cootshk", "followers_url": "https://api.github.com/users/cootsh...
[]
closed
false
null
[]
null
1
2024-11-02T06:18:36
2024-11-12T00:43:40
2024-11-12T00:43:40
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7468", "html_url": "https://github.com/ollama/ollama/pull/7468", "diff_url": "https://github.com/ollama/ollama/pull/7468.diff", "patch_url": "https://github.com/ollama/ollama/pull/7468.patch", "merged_at": null }
`/clearscreen` clears the screen, kind of like Ctrl+L. I use [ANSI escape codes](https://gist.github.com/fnky/458719343aabd01cfb17a3a4f7296797) because I don't want to deal with the buffer. I made this because I have Ctrl+L bound to something else and wanted a quick slash command similar to `/clear`.
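A minimal sketch of the escape-code approach described above: ESC[2J erases the visible screen and ESC[H homes the cursor (a third sequence, ESC[3J, would additionally clear scrollback):

```go
package main

import "fmt"

func main() {
	// Clear the visible screen, then move the cursor to the top-left.
	fmt.Print("\033[2J\033[H")
}
```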
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7468/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7468/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5564
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5564/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5564/comments
https://api.github.com/repos/ollama/ollama/issues/5564/events
https://github.com/ollama/ollama/issues/5564
2,397,372,084
I_kwDOJ0Z1Ps6O5P60
5,564
token/second after ollama finish request
{ "login": "zinwelzl", "id": 113045180, "node_id": "U_kgDOBrzuvA", "avatar_url": "https://avatars.githubusercontent.com/u/113045180?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zinwelzl", "html_url": "https://github.com/zinwelzl", "followers_url": "https://api.github.com/users/zinwelzl/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-07-09T07:25:31
2024-07-09T07:29:10
2024-07-09T07:29:09
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Can you add a tokens/second figure at the end, after ollama finishes a request?
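For what it's worth, the final streamed response already includes `eval_count` and `eval_duration` (in nanoseconds), so tokens/second can be derived client-side; a minimal sketch with example figures in the shape the API returns them:

```go
package main

import "fmt"

// tokensPerSecond derives throughput from the API's final-response fields.
func tokensPerSecond(evalCount int, evalDurationNS int64) float64 {
	return float64(evalCount) / (float64(evalDurationNS) / 1e9)
}

func main() {
	// 468 tokens over 7.701 s of eval time -> ~60.8 tokens/s.
	fmt.Printf("%.1f tokens/s\n", tokensPerSecond(468, 7_701_000_000))
}
```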
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5564/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5564/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6208
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6208/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6208/comments
https://api.github.com/repos/ollama/ollama/issues/6208/events
https://github.com/ollama/ollama/pull/6208
2,451,472,956
PR_kwDOJ0Z1Ps53mm3r
6,208
update llama.cpp submodule to `1e6f6554`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
1
2024-08-06T18:35:14
2024-08-06T19:11:47
2024-08-06T19:11:46
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6208", "html_url": "https://github.com/ollama/ollama/pull/6208", "diff_url": "https://github.com/ollama/ollama/pull/6208.diff", "patch_url": "https://github.com/ollama/ollama/pull/6208.patch", "merged_at": "2024-08-06T19:11:45" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6208/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6625
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6625/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6625/comments
https://api.github.com/repos/ollama/ollama/issues/6625/events
https://github.com/ollama/ollama/issues/6625
2,504,427,710
I_kwDOJ0Z1Ps6VRoi-
6,625
Support for HuatuoGPT-Vision-7B
{ "login": "Chuyun-Shen", "id": 59833738, "node_id": "MDQ6VXNlcjU5ODMzNzM4", "avatar_url": "https://avatars.githubusercontent.com/u/59833738?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Chuyun-Shen", "html_url": "https://github.com/Chuyun-Shen", "followers_url": "https://api.github.com/...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
0
2024-09-04T06:32:21
2024-09-04T06:32:21
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Can you support the HuatuoGPT-Vision-7B model, or do you have any advice on how I can deploy it on GPU? Model: [FreedomIntelligence/HuatuoGPT-Vision-7B](https://huggingface.co/FreedomIntelligence/HuatuoGPT-Vision-7B) This model is built with Llava and Qwen2, and their CLI code is here: https://github.com/FreedomIntel...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6625/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6625/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2607
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2607/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2607/comments
https://api.github.com/repos/ollama/ollama/issues/2607/events
https://github.com/ollama/ollama/issues/2607
2,143,655,528
I_kwDOJ0Z1Ps5_xZZo
2,607
Does not work on Mac? Causing System Crashes building and running
{ "login": "kuro337", "id": 65412787, "node_id": "MDQ6VXNlcjY1NDEyNzg3", "avatar_url": "https://avatars.githubusercontent.com/u/65412787?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kuro337", "html_url": "https://github.com/kuro337", "followers_url": "https://api.github.com/users/kuro33...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
5
2024-02-20T06:47:31
2024-03-12T21:32:06
2024-03-12T21:32:06
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Is Ollama not meant to be run on ARM Macs? I followed these steps ```bash git clone git@github.com:ollama/ollama.git cd ollama go generate ./... go build . ./ollama # First time running [1] 1651 killed ./ollama # After running again ./ollama # hangs indefinitely ``` Then it hangs ind...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2607/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2607/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7901
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7901/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7901/comments
https://api.github.com/repos/ollama/ollama/issues/7901/events
https://github.com/ollama/ollama/issues/7901
2,708,360,256
I_kwDOJ0Z1Ps6hbkxA
7,901
Error: max retries exceeded: unexpected EOF
{ "login": "szzhh", "id": 78521539, "node_id": "MDQ6VXNlcjc4NTIxNTM5", "avatar_url": "https://avatars.githubusercontent.com/u/78521539?v=4", "gravatar_id": "", "url": "https://api.github.com/users/szzhh", "html_url": "https://github.com/szzhh", "followers_url": "https://api.github.com/users/szzhh/follow...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
6
2024-12-01T03:04:43
2024-12-02T11:42:56
2024-12-02T11:42:56
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When I pull the llama3.1:405b model, `Error: max retries exceeded: unexpected EOF` often appears. Usually I can resume the download instead of re-downloading, but I encountered a problem yesterday: I had downloaded more than 200 GB of the model, but after the error, I had to re-download the ...
{ "login": "szzhh", "id": 78521539, "node_id": "MDQ6VXNlcjc4NTIxNTM5", "avatar_url": "https://avatars.githubusercontent.com/u/78521539?v=4", "gravatar_id": "", "url": "https://api.github.com/users/szzhh", "html_url": "https://github.com/szzhh", "followers_url": "https://api.github.com/users/szzhh/follow...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7901/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1934
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1934/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1934/comments
https://api.github.com/repos/ollama/ollama/issues/1934/events
https://github.com/ollama/ollama/pull/1934
2,077,702,149
PR_kwDOJ0Z1Ps5j3UZt
1,934
fix build and lint
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-01-11T22:20:28
2024-01-11T22:36:21
2024-01-11T22:36:21
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1934", "html_url": "https://github.com/ollama/ollama/pull/1934", "diff_url": "https://github.com/ollama/ollama/pull/1934.diff", "patch_url": "https://github.com/ollama/ollama/pull/1934.patch", "merged_at": "2024-01-11T22:36:21" }
`x/exp/slices` is compatible with Go 1.20 while the standard `slices` package is not. Also fix llm/llm.go, where `fmt` is used but not imported.
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1934/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1934/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3797
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3797/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3797/comments
https://api.github.com/repos/ollama/ollama/issues/3797/events
https://github.com/ollama/ollama/issues/3797
2,255,066,913
I_kwDOJ0Z1Ps6GaZch
3,797
hope add more embedding models
{ "login": "zhangzhongpeng02", "id": 130722043, "node_id": "U_kgDOB8qo-w", "avatar_url": "https://avatars.githubusercontent.com/u/130722043?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhangzhongpeng02", "html_url": "https://github.com/zhangzhongpeng02", "followers_url": "https://api.gi...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
3
2024-04-21T13:11:37
2024-06-04T02:28:57
2024-06-04T02:28:57
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I hope more embedding models can be added, such as "bge-large-zh-v1.5", "bce-embedding-base", and "gte-large".
{ "login": "zhangzhongpeng02", "id": 130722043, "node_id": "U_kgDOB8qo-w", "avatar_url": "https://avatars.githubusercontent.com/u/130722043?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhangzhongpeng02", "html_url": "https://github.com/zhangzhongpeng02", "followers_url": "https://api.gi...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3797/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3797/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8259
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8259/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8259/comments
https://api.github.com/repos/ollama/ollama/issues/8259/events
https://github.com/ollama/ollama/pull/8259
2,761,205,050
PR_kwDOJ0Z1Ps6GVI1G
8,259
create a default, non-root user for the container image
{ "login": "chgl", "id": 5307555, "node_id": "MDQ6VXNlcjUzMDc1NTU=", "avatar_url": "https://avatars.githubusercontent.com/u/5307555?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chgl", "html_url": "https://github.com/chgl", "followers_url": "https://api.github.com/users/chgl/followers", ...
[]
open
false
null
[]
null
6
2024-12-27T19:24:33
2025-01-20T15:56:52
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8259", "html_url": "https://github.com/ollama/ollama/pull/8259", "diff_url": "https://github.com/ollama/ollama/pull/8259.diff", "patch_url": "https://github.com/ollama/ollama/pull/8259.patch", "merged_at": null }
Closes #5986
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8259/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8677
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8677/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8677/comments
https://api.github.com/repos/ollama/ollama/issues/8677/events
https://github.com/ollama/ollama/issues/8677
2,819,603,374
I_kwDOJ0Z1Ps6oD7uu
8,677
Wrote scripts to import gguf files/folder
{ "login": "gl2007", "id": 4097227, "node_id": "MDQ6VXNlcjQwOTcyMjc=", "avatar_url": "https://avatars.githubusercontent.com/u/4097227?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gl2007", "html_url": "https://github.com/gl2007", "followers_url": "https://api.github.com/users/gl2007/foll...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2025-01-30T00:09:02
2025-01-30T00:09:02
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Don't see a "discussion" tab like I see for other repos, so just creating an issue. Had a bunch of gguf's in a folder, so wrote 2 scripts (windows and shell) to import a single gguf and all ggufs in a given folder. Don't know how to get a PR in but I can attach them here is any of you think they are useful.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8677/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8677/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5796
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5796/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5796/comments
https://api.github.com/repos/ollama/ollama/issues/5796/events
https://github.com/ollama/ollama/issues/5796
2,419,278,016
I_kwDOJ0Z1Ps6QM0DA
5,796
Streaming for tool calls is unsupported
{ "login": "vertrue", "id": 30557724, "node_id": "MDQ6VXNlcjMwNTU3NzI0", "avatar_url": "https://avatars.githubusercontent.com/u/30557724?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vertrue", "html_url": "https://github.com/vertrue", "followers_url": "https://api.github.com/users/vertru...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 7706482389, "node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q...
closed
false
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
[ { "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "htt...
null
38
2024-07-19T16:09:55
2024-11-30T06:41:17
2024-11-28T02:40:46
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi everyone! I am trying to use tools in requests to `llama3-groq-tool-use:70b`. Here is some simple code in Python using langchain==0.2.9: ``` from langchain_core.tools import tool from langchain_openai import ChatOpenAI from langchain.prompts import ( ChatPromptTemplate, Messages...
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5796/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5796/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7661
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7661/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7661/comments
https://api.github.com/repos/ollama/ollama/issues/7661/events
https://github.com/ollama/ollama/issues/7661
2,657,344,510
I_kwDOJ0Z1Ps6eY9v-
7,661
How does ollam support the input of long text quantity
{ "login": "smileyboy2019", "id": 59221294, "node_id": "MDQ6VXNlcjU5MjIxMjk0", "avatar_url": "https://avatars.githubusercontent.com/u/59221294?v=4", "gravatar_id": "", "url": "https://api.github.com/users/smileyboy2019", "html_url": "https://github.com/smileyboy2019", "followers_url": "https://api.githu...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
2
2024-11-14T02:29:40
2024-11-14T22:54:18
2024-11-14T22:54:18
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
The large model supports 128k input, which is equivalent to hundreds of thousands of words. What is the maximum number of words that the ollama endpoint can accept as input? Can it take hundreds of thousands of words?
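A hedged sketch of the knob that governs this: the effective input length is bounded by the context window, which can be raised per request via the `num_ctx` option (subject to what the model supports and available memory); the model name here is illustrative:

```go
package main

import (
	"bytes"
	"encoding/json"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	payload, _ := json.Marshal(map[string]any{
		"model":  "llama3.1", // illustrative model name
		"prompt": "Summarize the following document: ...",
		"stream": false,
		"options": map[string]any{
			"num_ctx": 131072, // request a 128k-token context window
		},
	})
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	io.Copy(os.Stdout, resp.Body)
}
```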
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7661/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7661/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/768
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/768/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/768/comments
https://api.github.com/repos/ollama/ollama/issues/768/events
https://github.com/ollama/ollama/pull/768
1,940,368,166
PR_kwDOJ0Z1Ps5cp4TZ
768
fix memory check
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-10-12T16:34:57
2023-10-16T19:42:42
2023-10-16T19:42:41
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/768", "html_url": "https://github.com/ollama/ollama/pull/768", "diff_url": "https://github.com/ollama/ollama/pull/768.diff", "patch_url": "https://github.com/ollama/ollama/pull/768.patch", "merged_at": "2023-10-16T19:42:41" }
Only do a system memory check on macOS, which has unified memory. On other platforms, rely on VRAM offloading.
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/768/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/768/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/982
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/982/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/982/comments
https://api.github.com/repos/ollama/ollama/issues/982/events
https://github.com/ollama/ollama/pull/982
1,975,233,953
PR_kwDOJ0Z1Ps5efF58
982
Set `NumKeep` to `4` by default
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2023-11-03T00:14:31
2023-11-03T00:26:12
2023-11-03T00:26:12
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/982", "html_url": "https://github.com/ollama/ollama/pull/982", "diff_url": "https://github.com/ollama/ollama/pull/982.diff", "patch_url": "https://github.com/ollama/ollama/pull/982.patch", "merged_at": "2023-11-03T00:26:11" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/982/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/982/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1738
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1738/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1738/comments
https://api.github.com/repos/ollama/ollama/issues/1738/events
https://github.com/ollama/ollama/issues/1738
2,060,387,918
I_kwDOJ0Z1Ps56zwZO
1,738
Scope of Ollama,
{ "login": "Luxadevi", "id": 116653852, "node_id": "U_kgDOBvP_HA", "avatar_url": "https://avatars.githubusercontent.com/u/116653852?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Luxadevi", "html_url": "https://github.com/Luxadevi", "followers_url": "https://api.github.com/users/Luxadevi/...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
6
2023-12-29T20:37:02
2024-11-19T17:56:47
2024-01-25T22:58:28
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Could you tell me more about the scope of Ollama, you guys build it around the llama.cpp stack and added an API and other tools on top of that. GGUF is pretty stable but there are some other formats on the horizon. I would like to add EXL2 formatting to my app but since this is a companion app for ollama I was actu...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1738/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1738/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3529
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3529/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3529/comments
https://api.github.com/repos/ollama/ollama/issues/3529/events
https://github.com/ollama/ollama/pull/3529
2,230,010,991
PR_kwDOJ0Z1Ps5r8lpI
3,529
Add metrics endpoint and request metrics
{ "login": "amila-ku", "id": 12775690, "node_id": "MDQ6VXNlcjEyNzc1Njkw", "avatar_url": "https://avatars.githubusercontent.com/u/12775690?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amila-ku", "html_url": "https://github.com/amila-ku", "followers_url": "https://api.github.com/users/ami...
[]
closed
false
null
[]
null
7
2024-04-07T23:39:42
2024-09-30T19:52:23
2024-09-30T19:52:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3529", "html_url": "https://github.com/ollama/ollama/pull/3529", "diff_url": "https://github.com/ollama/ollama/pull/3529.diff", "patch_url": "https://github.com/ollama/ollama/pull/3529.patch", "merged_at": null }
Resolves https://github.com/ollama/ollama/issues/3144 This pull request adds a /metrics endpoint and commonly used metrics. It exposes default Prometheus metrics plus custom metrics for the request endpoints. This PR does not try to cover all metrics, to keep it simple. If this looks good, I could add a few more th...
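A minimal sketch of the approach using the standard `client_golang` library; the metric name and the `/api/generate` stub are illustrative, not the PR's actual code:

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var requests = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "ollama_requests_total", // hypothetical metric name
		Help: "Number of requests by endpoint.",
	},
	[]string{"endpoint"},
)

func main() {
	prometheus.MustRegister(requests)
	http.HandleFunc("/api/generate", func(w http.ResponseWriter, r *http.Request) {
		requests.WithLabelValues("/api/generate").Inc() // count each request
		w.Write([]byte("ok"))                           // stand-in for the real handler
	})
	// promhttp.Handler serves the default Go/process metrics plus registered ones.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":11434", nil)
}
```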
{ "login": "amila-ku", "id": 12775690, "node_id": "MDQ6VXNlcjEyNzc1Njkw", "avatar_url": "https://avatars.githubusercontent.com/u/12775690?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amila-ku", "html_url": "https://github.com/amila-ku", "followers_url": "https://api.github.com/users/ami...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3529/reactions", "total_count": 11, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 5, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3529/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/283
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/283/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/283/comments
https://api.github.com/repos/ollama/ollama/issues/283/events
https://github.com/ollama/ollama/issues/283
1,836,913,021
I_kwDOJ0Z1Ps5tfRF9
283
Do not prompt to install CLI if already on `$PATH`
{ "login": "justinmayer", "id": 1503700, "node_id": "MDQ6VXNlcjE1MDM3MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/1503700?v=4", "gravatar_id": "", "url": "https://api.github.com/users/justinmayer", "html_url": "https://github.com/justinmayer", "followers_url": "https://api.github.com/us...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5667396210, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg...
open
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
7
2023-08-04T14:59:07
2024-12-23T00:50:48
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
When launching the Ollama application, a dialog window will appear and prompt you for administrative access in order to “install” the command line executable, which in practice means symlinking `/Applications/Ollama.app/Contents/Resources/ollama` to `/usr/local/bin/ollama`. ## Observed Behavior This dialog window...
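A minimal Go sketch of the requested check, using `exec.LookPath` to decide whether the install dialog is needed at all:

```go
package main

import (
	"fmt"
	"os/exec"
)

// shouldPromptInstall reports whether the app needs to offer the CLI symlink:
// if `ollama` already resolves on $PATH, the dialog can be skipped entirely.
func shouldPromptInstall() bool {
	_, err := exec.LookPath("ollama")
	return err != nil // only prompt when the binary is not found
}

func main() {
	if shouldPromptInstall() {
		fmt.Println("ollama not on $PATH; prompt to install the CLI symlink")
	} else {
		fmt.Println("ollama already on $PATH; skip the admin prompt")
	}
}
```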
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/283/reactions", "total_count": 17, "+1": 17, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/283/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/8064
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8064/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8064/comments
https://api.github.com/repos/ollama/ollama/issues/8064/events
https://github.com/ollama/ollama/issues/8064
2,735,001,249
I_kwDOJ0Z1Ps6jBM6h
8,064
How can I specify the GPU for running the LLM?
{ "login": "NilsHellwig", "id": 44339207, "node_id": "MDQ6VXNlcjQ0MzM5MjA3", "avatar_url": "https://avatars.githubusercontent.com/u/44339207?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NilsHellwig", "html_url": "https://github.com/NilsHellwig", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-12-12T06:38:20
2024-12-23T08:11:41
2024-12-23T08:11:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? The `num_gpu` parameter doesn't seem to work as expected. How can I ensure the model runs on a specific GPU? I have two A5000 GPUs available. I'm not using Docker, just installed ollama by using `curl -fsSL https://ollama.com/install.sh | sh`. ### OS Linux ### GPU Nvidia ### CPU Intel ...
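One common workaround (a CUDA mechanism, not ollama-specific) is to mask devices at the driver level, since `num_gpu` controls how many layers are offloaded rather than which GPU is used. A minimal Go sketch launching the server pinned to the second A5000; whether this fits a given setup depends on how the service is started:

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// CUDA_VISIBLE_DEVICES=1 hides all devices except the second GPU from
	// the server process, so everything it loads runs on that card.
	cmd := exec.Command("ollama", "serve")
	cmd.Env = append(os.Environ(), "CUDA_VISIBLE_DEVICES=1")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```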
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8064/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8064/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4692
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4692/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4692/comments
https://api.github.com/repos/ollama/ollama/issues/4692/events
https://github.com/ollama/ollama/issues/4692
2,322,261,119
I_kwDOJ0Z1Ps6KauR_
4,692
About Deepseek - V2
{ "login": "DirtyKnightForVi", "id": 116725810, "node_id": "U_kgDOBvUYMg", "avatar_url": "https://avatars.githubusercontent.com/u/116725810?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DirtyKnightForVi", "html_url": "https://github.com/DirtyKnightForVi", "followers_url": "https://api.gi...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
2
2024-05-29T03:09:25
2024-06-04T14:24:51
2024-06-04T14:24:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
llama.cpp already supports deepseek v2. Will Ollama follow up with support? https://github.com/ggerganov/llama.cpp/pull/7519
{ "login": "DirtyKnightForVi", "id": 116725810, "node_id": "U_kgDOBvUYMg", "avatar_url": "https://avatars.githubusercontent.com/u/116725810?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DirtyKnightForVi", "html_url": "https://github.com/DirtyKnightForVi", "followers_url": "https://api.gi...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4692/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4692/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1156
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1156/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1156/comments
https://api.github.com/repos/ollama/ollama/issues/1156/events
https://github.com/ollama/ollama/pull/1156
1,997,651,867
PR_kwDOJ0Z1Ps5frDb7
1,156
fix push for model inheriting from other models
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-11-16T19:52:56
2023-11-16T21:33:31
2023-11-16T21:33:30
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1156", "html_url": "https://github.com/ollama/ollama/pull/1156", "diff_url": "https://github.com/ollama/ollama/pull/1156.diff", "patch_url": "https://github.com/ollama/ollama/pull/1156.patch", "merged_at": "2023-11-16T21:33:30" }
- fix auth scope: side effect of #1055 which changed the value of the scope parameter in the auth challenge - fix cross repo mounts resolves #1154
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1156/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1156/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6823
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6823/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6823/comments
https://api.github.com/repos/ollama/ollama/issues/6823/events
https://github.com/ollama/ollama/issues/6823
2,527,527,093
I_kwDOJ0Z1Ps6WpwC1
6,823
After converting an HF model fine-tuned from qwen2 to GGUF, loading it with ollama produces garbled output unless TEMPLATE is specified
{ "login": "czhcc", "id": 4754730, "node_id": "MDQ6VXNlcjQ3NTQ3MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/4754730?v=4", "gravatar_id": "", "url": "https://api.github.com/users/czhcc", "html_url": "https://github.com/czhcc", "followers_url": "https://api.github.com/users/czhcc/follower...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
9
2024-09-16T05:22:53
2024-12-16T01:41:43
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? The same GGUF works normally when loaded directly by other frameworks, and the original qwen2 model converted to GGUF doesn't need a TEMPLATE parameter either. Only the GGUF produced after fine-tuning needs TEMPLATE to avoid garbled output. Why? ### OS Linux, Docker ### GPU Nvidia ### CPU Intel ### Ollama version 0.3.10
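A plausible explanation is that the converted fine-tune does not carry usable chat-template metadata, so ollama falls back to raw completion. A sketch of the fix, assuming the fine-tune kept Qwen2's ChatML chat format; the GGUF file name is a placeholder:

```
FROM ./qwen2-finetuned.gguf

TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
"""

PARAMETER stop "<|im_end|>"
```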
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6823/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6823/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7739
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7739/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7739/comments
https://api.github.com/repos/ollama/ollama/issues/7739/events
https://github.com/ollama/ollama/pull/7739
2,671,429,551
PR_kwDOJ0Z1Ps6CXBYM
7,739
Better error suppression when getting terminal colours
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
0
2024-11-19T09:24:56
2024-11-19T16:33:53
2024-11-19T16:33:53
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7739", "html_url": "https://github.com/ollama/ollama/pull/7739", "diff_url": "https://github.com/ollama/ollama/pull/7739.diff", "patch_url": "https://github.com/ollama/ollama/pull/7739.patch", "merged_at": "2024-11-19T16:33:53" }
Fixes #7737
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7739/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7739/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7426
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7426/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7426/comments
https://api.github.com/repos/ollama/ollama/issues/7426/events
https://github.com/ollama/ollama/issues/7426
2,624,822,219
I_kwDOJ0Z1Ps6cc5vL
7,426
x/llama3.2-vision on cli reports only "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" but works in ollama run
{ "login": "draeician", "id": 177489421, "node_id": "U_kgDOCpRGDQ", "avatar_url": "https://avatars.githubusercontent.com/u/177489421?v=4", "gravatar_id": "", "url": "https://api.github.com/users/draeician", "html_url": "https://github.com/draeician", "followers_url": "https://api.github.com/users/draeic...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-10-30T17:31:10
2024-10-30T18:10:21
2024-10-30T18:10:21
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? draeician@nomnom ~/Downloads $ ollama run x/llama3.2-vision "Describe the image in detail: /home/draeician/Downloads/test.jpg" Added image '/home/draeician/Downloads/test.jpg' !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! But if I run it through the interactive: draeician@nomnom ~/Downloads $ ollama run...
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7426/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7426/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/3105
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3105/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3105/comments
https://api.github.com/repos/ollama/ollama/issues/3105/events
https://github.com/ollama/ollama/pull/3105
2,184,184,211
PR_kwDOJ0Z1Ps5phAJy
3,105
Improve usability with Bash completion for Ollama on Linux
{ "login": "aosan", "id": 8534160, "node_id": "MDQ6VXNlcjg1MzQxNjA=", "avatar_url": "https://avatars.githubusercontent.com/u/8534160?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aosan", "html_url": "https://github.com/aosan", "followers_url": "https://api.github.com/users/aosan/follower...
[]
closed
false
null
[]
null
5
2024-03-13T14:40:44
2024-08-26T11:17:45
2024-05-09T18:58:49
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3105", "html_url": "https://github.com/ollama/ollama/pull/3105", "diff_url": "https://github.com/ollama/ollama/pull/3105.diff", "patch_url": "https://github.com/ollama/ollama/pull/3105.patch", "merged_at": null }
Please review this PR, which adds Bash completion to install.sh for Linux. It currently works for all arguments and options, including autocomplete for long model names (yay!). `ollama ls` works, but it's missing from the -h/--help and ollama listing; I'll open a separate issue for it. It does not include autocomp...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3105/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3105/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8162
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8162/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8162/comments
https://api.github.com/repos/ollama/ollama/issues/8162/events
https://github.com/ollama/ollama/issues/8162
2,748,654,398
I_kwDOJ0Z1Ps6j1SM-
8,162
StructuredOutputs Schema Missing in Prompt [Unlike OpenAI API Default Behavior]
{ "login": "ikot-humanoid", "id": 190361581, "node_id": "U_kgDOC1iv7Q", "avatar_url": "https://avatars.githubusercontent.com/u/190361581?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ikot-humanoid", "html_url": "https://github.com/ikot-humanoid", "followers_url": "https://api.github.com/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2024-12-18T20:02:55
2024-12-18T20:11:02
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
When using StructuredOutputs, I noticed that the model's outputs were nonsensical and didn't align with expectations. After debugging, I discovered that the output schema isn't included in the prompt, leaving the model unaware of its options and what it should generate. While developers could manually add the schema...
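A minimal Go sketch of the workaround the report alludes to: restate the schema in the prompt text so the model can see it, while still passing it in the request's `format` field (which only constrains decoding). The model name is a placeholder and the request follows the documented `/api/chat` shape:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	schema := map[string]any{
		"type": "object",
		"properties": map[string]any{
			"name": map[string]any{"type": "string"},
			"age":  map[string]any{"type": "integer"},
		},
		"required": []string{"name", "age"},
	}
	schemaText, _ := json.Marshal(schema)

	payload := map[string]any{
		"model": "llama3.1", // placeholder model name
		"messages": []map[string]string{
			// Workaround: include the schema in the prompt so the model knows
			// its options; the format field alone only constrains the output.
			{"role": "user", "content": "Extract the person. Reply as JSON matching this schema: " + string(schemaText)},
		},
		"format": schema,
		"stream": false,
	}
	body, _ := json.Marshal(payload)
	resp, err := http.Post("http://localhost:11434/api/chat", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```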
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8162/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8162/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4485
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4485/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4485/comments
https://api.github.com/repos/ollama/ollama/issues/4485/events
https://github.com/ollama/ollama/issues/4485
2,301,692,122
I_kwDOJ0Z1Ps6JMQja
4,485
Import a model:latest aborted (core dumped)
{ "login": "Anorid", "id": 139095718, "node_id": "U_kgDOCEpupg", "avatar_url": "https://avatars.githubusercontent.com/u/139095718?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Anorid", "html_url": "https://github.com/Anorid", "followers_url": "https://api.github.com/users/Anorid/follower...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q...
closed
false
null
[]
null
12
2024-05-17T02:33:20
2024-05-30T05:36:45
2024-05-30T05:36:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I carefully followed the README documentation and tried: root@autodl-container-36e51198ae-c4ed76b0:~/autodl-tmp/model# ollama create example -f Modelfile transferring model data using existing layer sha256:8c7d76a23837d1b07ca3c3aa497d90ffafdfc2fd417b93e4e06caeeabf4f1526 using exis...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4485/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4485/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3236
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3236/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3236/comments
https://api.github.com/repos/ollama/ollama/issues/3236/events
https://github.com/ollama/ollama/issues/3236
2,194,215,654
I_kwDOJ0Z1Ps6CyRLm
3,236
Unable to run Falcon Models
{ "login": "mebinjoy77", "id": 62318229, "node_id": "MDQ6VXNlcjYyMzE4MjI5", "avatar_url": "https://avatars.githubusercontent.com/u/62318229?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mebinjoy77", "html_url": "https://github.com/mebinjoy77", "followers_url": "https://api.github.com/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-03-19T07:18:07
2024-03-19T13:25:23
2024-03-19T13:25:23
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Unable to run Falcon models through ollama; running a Falcon model crashes the ollama service. Here is the log: ![image](https://github.com/ollama/ollama/assets/62318229/2125b14f-64e4-4c40-91fd-ae4c92f4d866) ### What did you expect to see? ollama working fine with Falcon. ### Steps to re...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3236/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3236/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6503
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6503/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6503/comments
https://api.github.com/repos/ollama/ollama/issues/6503/events
https://github.com/ollama/ollama/pull/6503
2,485,426,225
PR_kwDOJ0Z1Ps55XSan
6,503
add integration: py-gpt
{ "login": "szczyglis-dev", "id": 61396542, "node_id": "MDQ6VXNlcjYxMzk2NTQy", "avatar_url": "https://avatars.githubusercontent.com/u/61396542?v=4", "gravatar_id": "", "url": "https://api.github.com/users/szczyglis-dev", "html_url": "https://github.com/szczyglis-dev", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
1
2024-08-25T19:16:55
2024-11-21T09:54:40
2024-11-21T09:54:39
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6503", "html_url": "https://github.com/ollama/ollama/pull/6503", "diff_url": "https://github.com/ollama/ollama/pull/6503.diff", "patch_url": "https://github.com/ollama/ollama/pull/6503.patch", "merged_at": "2024-11-21T09:54:39" }
Add integration: PyGPT - AI desktop assistant for Linux, Windows, and Mac with support for models provided through Ollama.
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6503/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6503/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2465
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2465/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2465/comments
https://api.github.com/repos/ollama/ollama/issues/2465/events
https://github.com/ollama/ollama/pull/2465
2,130,460,835
PR_kwDOJ0Z1Ps5mpzMk
2,465
Detect AMD GPU info via sysfs and block old cards
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-02-12T16:10:40
2024-02-12T20:41:46
2024-02-12T20:41:43
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2465", "html_url": "https://github.com/ollama/ollama/pull/2465", "diff_url": "https://github.com/ollama/ollama/pull/2465.diff", "patch_url": "https://github.com/ollama/ollama/pull/2465.patch", "merged_at": "2024-02-12T20:41:43" }
This wires up some new logic to start using sysfs to discover AMD GPU information and detects old cards we can't yet support so we can fallback to CPU mode. This also serves as an initial foundation where I believe we'll be able to move away from the AMD management library and query the sysfs files to discover the d...
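A minimal Go sketch of sysfs-based discovery, not the PR's actual code: under `/sys/class/drm`, PCI vendor ID `0x1002` identifies AMD devices:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Enumerate GPUs via sysfs instead of a management library.
	cards, _ := filepath.Glob("/sys/class/drm/card*/device/vendor")
	for _, path := range cards {
		data, err := os.ReadFile(path)
		if err != nil {
			continue
		}
		if strings.TrimSpace(string(data)) == "0x1002" { // AMD PCI vendor ID
			fmt.Println("AMD GPU:", filepath.Dir(path))
		}
	}
}
```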
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2465/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2465/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2448
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2448/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2448/comments
https://api.github.com/repos/ollama/ollama/issues/2448/events
https://github.com/ollama/ollama/issues/2448
2,129,086,352
I_kwDOJ0Z1Ps5-50eQ
2,448
Linux(WSL Ubuntu) installation curl command fails
{ "login": "UeberTimei", "id": 45313665, "node_id": "MDQ6VXNlcjQ1MzEzNjY1", "avatar_url": "https://avatars.githubusercontent.com/u/45313665?v=4", "gravatar_id": "", "url": "https://api.github.com/users/UeberTimei", "html_url": "https://github.com/UeberTimei", "followers_url": "https://api.github.com/use...
[ { "id": 5755339642, "node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg", "url": "https://api.github.com/repos/ollama/ollama/labels/linux", "name": "linux", "color": "516E70", "default": false, "description": "" }, { "id": 6677370291, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw", "url": "htt...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
17
2024-02-11T17:28:25
2024-03-28T20:51:57
2024-03-28T20:51:56
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
curl -fsSL https://ollama.com/install.sh | sh This leads to: curl: (35) OpenSSL SSL_connect: Connection reset by peer in connection to ollama.com:443 I tried everything. I reinstalled WSL and set Google DNS.
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2448/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/2448/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6528
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6528/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6528/comments
https://api.github.com/repos/ollama/ollama/issues/6528/events
https://github.com/ollama/ollama/pull/6528
2,490,061,599
PR_kwDOJ0Z1Ps55nETw
6,528
Fix import image width
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2024-08-27T18:26:09
2024-08-27T21:19:49
2024-08-27T21:19:48
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6528", "html_url": "https://github.com/ollama/ollama/pull/6528", "diff_url": "https://github.com/ollama/ollama/pull/6528.diff", "patch_url": "https://github.com/ollama/ollama/pull/6528.patch", "merged_at": "2024-08-27T21:19:48" }
This gives more reasonable output for the images: <img width="1183" alt="Screenshot 2024-08-27 at 14 10 33" src="https://github.com/user-attachments/assets/428cee86-ba62-4270-b308-2bacc07a1460">
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6528/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6528/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5530
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5530/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5530/comments
https://api.github.com/repos/ollama/ollama/issues/5530/events
https://github.com/ollama/ollama/pull/5530
2,394,116,561
PR_kwDOJ0Z1Ps50nbP4
5,530
Update llama.cpp submodule to `a8db2a9c`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-07-07T16:11:44
2024-07-07T17:03:11
2024-07-07T17:03:10
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5530", "html_url": "https://github.com/ollama/ollama/pull/5530", "diff_url": "https://github.com/ollama/ollama/pull/5530.diff", "patch_url": "https://github.com/ollama/ollama/pull/5530.patch", "merged_at": "2024-07-07T17:03:10" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5530/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5530/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7581
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7581/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7581/comments
https://api.github.com/repos/ollama/ollama/issues/7581/events
https://github.com/ollama/ollama/issues/7581
2,645,396,415
I_kwDOJ0Z1Ps6drYu_
7,581
Support importing vision models from Safetensors in `ollama create`
{ "login": "chigkim", "id": 22120994, "node_id": "MDQ6VXNlcjIyMTIwOTk0", "avatar_url": "https://avatars.githubusercontent.com/u/22120994?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chigkim", "html_url": "https://github.com/chigkim", "followers_url": "https://api.github.com/users/chigki...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 6947643302, "node_id": ...
open
false
null
[]
null
5
2024-11-08T23:52:56
2024-12-29T20:09:23
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I tried to import finetuned llama-3.2-11b-vision, but I got "Error: unsupported architecture." In order to make sure my model is not the problem, I downloaded [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) from Huggingface. I copie...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7581/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7581/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1284
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1284/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1284/comments
https://api.github.com/repos/ollama/ollama/issues/1284/events
https://github.com/ollama/ollama/issues/1284
2,011,737,532
I_kwDOJ0Z1Ps536K28
1,284
Argument list too long
{ "login": "shubhammicrosoft1", "id": 50182145, "node_id": "MDQ6VXNlcjUwMTgyMTQ1", "avatar_url": "https://avatars.githubusercontent.com/u/50182145?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shubhammicrosoft1", "html_url": "https://github.com/shubhammicrosoft1", "followers_url": "https...
[]
closed
false
null
[]
null
4
2023-11-27T08:16:49
2024-01-20T00:09:39
2024-01-20T00:09:29
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
When I run a summarization with ollama on Linux that reads a 7 MB file and summarizes the data, it reports (bash: /usr/local/bin/ollama: Argument list too long). Command used: ollama run llama2 "$(cat data.txt)" please summarize this data Is this an OS limitation or some configuration that we can upd...
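The limit being hit is the kernel's ARG_MAX cap on command-line length, not ollama itself: `"$(cat data.txt)"` expands the whole file into the argument list. A minimal Go sketch of a workaround that reads the file in-process and sends it through the HTTP API instead:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Reading the file in-process sidesteps ARG_MAX entirely.
	data, err := os.ReadFile("data.txt")
	if err != nil {
		panic(err)
	}
	payload := map[string]any{
		"model":  "llama2",
		"prompt": "Please summarize this data:\n" + string(data),
		"stream": false,
	}
	body, _ := json.Marshal(payload)
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```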
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1284/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1622
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1622/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1622/comments
https://api.github.com/repos/ollama/ollama/issues/1622/events
https://github.com/ollama/ollama/pull/1622
2,049,896,518
PR_kwDOJ0Z1Ps5ib6GB
1,622
Update Readme Quickstart Ollama with Docker
{ "login": "Hidayathamir", "id": 57469556, "node_id": "MDQ6VXNlcjU3NDY5NTU2", "avatar_url": "https://avatars.githubusercontent.com/u/57469556?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hidayathamir", "html_url": "https://github.com/Hidayathamir", "followers_url": "https://api.github.c...
[]
closed
false
null
[]
null
1
2023-12-20T06:04:12
2024-06-09T18:06:14
2024-06-09T18:06:13
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1622", "html_url": "https://github.com/ollama/ollama/pull/1622", "diff_url": "https://github.com/ollama/ollama/pull/1622.diff", "patch_url": "https://github.com/ollama/ollama/pull/1622.patch", "merged_at": null }
# Update Readme Quickstart Ollama with Docker Upon initial exploration of the repository, leveraging Docker for getting started appears to be the most straightforward approach. Following the [Ollama documentation for initiating with Docker](https://github.com/jmorganca/ollama?tab=readme-ov-file#docker) led me to the...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1622/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1622/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7014
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7014/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7014/comments
https://api.github.com/repos/ollama/ollama/issues/7014/events
https://github.com/ollama/ollama/issues/7014
2,553,949,038
I_kwDOJ0Z1Ps6YOitu
7,014
Better Tool Call parsing
{ "login": "zly2006", "id": 66198935, "node_id": "MDQ6VXNlcjY2MTk4OTM1", "avatar_url": "https://avatars.githubusercontent.com/u/66198935?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zly2006", "html_url": "https://github.com/zly2006", "followers_url": "https://api.github.com/users/zly200...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
[ { "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "htt...
null
6
2024-09-28T01:55:36
2025-01-01T03:52:27
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Currently tool call patterns are defined in Go templates. This is fine for cases [e.g. in this comment](https://github.com/ollama/ollama/issues/6061#issuecomment-2257137350). However, it is not ideal. ## Problems 1. Content loss Say the model responds with this text: ```plaintext Yes, I can help you compute 3...
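A minimal, hypothetical Go sketch of parsing that avoids the content loss described, keeping the free-form text and the tool call separate instead of discarding one:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// splitToolCall separates free-form content from a trailing JSON tool call,
// so neither is lost — a sketch of the parsing the issue asks for.
func splitToolCall(s string) (content string, call map[string]any) {
	start := strings.Index(s, "{")
	if start == -1 {
		return s, nil
	}
	var obj map[string]any
	if err := json.Unmarshal([]byte(s[start:]), &obj); err != nil {
		return s, nil // not a valid tool call; keep everything as content
	}
	return strings.TrimSpace(s[:start]), obj
}

func main() {
	content, call := splitToolCall(`Yes, I can help you compute 3 + 4. {"name":"add","arguments":{"a":3,"b":4}}`)
	fmt.Println("content:", content)
	fmt.Println("call:", call)
}
```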
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7014/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7014/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7202
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7202/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7202/comments
https://api.github.com/repos/ollama/ollama/issues/7202/events
https://github.com/ollama/ollama/pull/7202
2,587,203,254
PR_kwDOJ0Z1Ps5-ma6Z
7,202
Add AI Summary Helper to list of community integrations
{ "login": "philffm", "id": 6079545, "node_id": "MDQ6VXNlcjYwNzk1NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/6079545?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philffm", "html_url": "https://github.com/philffm", "followers_url": "https://api.github.com/users/philffm/...
[]
closed
false
null
[]
null
0
2024-10-14T22:39:43
2024-12-11T00:13:07
2024-12-11T00:13:06
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7202", "html_url": "https://github.com/ollama/ollama/pull/7202", "diff_url": "https://github.com/ollama/ollama/pull/7202.diff", "patch_url": "https://github.com/ollama/ollama/pull/7202.patch", "merged_at": "2024-12-11T00:13:06" }
Adding AI Summary Helper to the community integrations list. The plugin allows users to generate custom summaries for web content using tailored prompts, right in the browser/DOM, which makes it compatible with send-to-Kindle, printing articles, etc. It now supports Ollama/LLaMA models.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7202/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7202/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/874
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/874/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/874/comments
https://api.github.com/repos/ollama/ollama/issues/874/events
https://github.com/ollama/ollama/issues/874
1,955,745,228
I_kwDOJ0Z1Ps50kk3M
874
Add flag `--web-root` for serving UI (w/ code example)
{ "login": "coolaj86", "id": 122831, "node_id": "MDQ6VXNlcjEyMjgzMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/122831?v=4", "gravatar_id": "", "url": "https://api.github.com/users/coolaj86", "html_url": "https://github.com/coolaj86", "followers_url": "https://api.github.com/users/coolaj8...
[]
closed
false
null
[]
null
8
2023-10-22T03:31:23
2023-10-26T05:36:54
2023-10-25T19:13:26
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
**edit**: removed potentially confusing language that was given as an example, not a fixed implementation detail ```sh ollama serve --web-root ./ollama-webui/ ``` 1. Serve `/api/*` to the API 2. For all other requests, return results from the web server 3. If the `--web-root` flag is given, serve that directo...
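A minimal Go sketch of the proposed behavior, using the standard library's mux and `http.FileServer`; the API handler is a stub standing in for the real API:

```go
package main

import (
	"flag"
	"net/http"
)

func main() {
	// /api/* goes to the API handler; everything else is served from the
	// --web-root directory when the flag is given.
	webRoot := flag.String("web-root", "", "directory of static UI files to serve")
	flag.Parse()

	mux := http.NewServeMux()
	mux.HandleFunc("/api/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"status":"ok"}`)) // stand-in for the real API
	})
	if *webRoot != "" {
		mux.Handle("/", http.FileServer(http.Dir(*webRoot)))
	}
	http.ListenAndServe("127.0.0.1:11434", mux)
}
```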
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/874/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/874/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/6577
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6577/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6577/comments
https://api.github.com/repos/ollama/ollama/issues/6577/events
https://github.com/ollama/ollama/pull/6577
2,498,850,998
PR_kwDOJ0Z1Ps56DZ3Y
6,577
Update documentation: Change .bin to .gguf in GGUF file and adapter examples
{ "login": "rayfiyo", "id": 108730891, "node_id": "U_kgDOBnsaCw", "avatar_url": "https://avatars.githubusercontent.com/u/108730891?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rayfiyo", "html_url": "https://github.com/rayfiyo", "followers_url": "https://api.github.com/users/rayfiyo/foll...
[]
closed
false
null
[]
null
0
2024-08-31T13:39:52
2024-09-01T02:34:25
2024-09-01T02:34:25
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6577", "html_url": "https://github.com/ollama/ollama/pull/6577", "diff_url": "https://github.com/ollama/ollama/pull/6577.diff", "patch_url": "https://github.com/ollama/ollama/pull/6577.patch", "merged_at": "2024-09-01T02:34:25" }
This pull request updates the documentation to reflect the change from GGML to GGUF format. Changes made: - In the "Build from a GGUF file" section, updated the example Modelfile to use the .gguf extension instead of .bin - Modified the explanatory text to refer to "GGUF file" instead of "GGUF bin file" - In the ...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6577/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6577/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4265
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4265/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4265/comments
https://api.github.com/repos/ollama/ollama/issues/4265/events
https://github.com/ollama/ollama/pull/4265
2,286,315,770
PR_kwDOJ0Z1Ps5u6zM7
4,265
routes: fix show llava models
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-05-08T19:43:19
2024-05-08T19:51:22
2024-05-08T19:51:21
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4265", "html_url": "https://github.com/ollama/ollama/pull/4265", "diff_url": "https://github.com/ollama/ollama/pull/4265.diff", "patch_url": "https://github.com/ollama/ollama/pull/4265.patch", "merged_at": "2024-05-08T19:51:21" }
show model file isn't showing the projector because it's set to the name `projector` instead of `model`. Also change the order so adapters/projectors appear ahead of template/system, to group them with the language model.
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4265/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7336
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7336/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7336/comments
https://api.github.com/repos/ollama/ollama/issues/7336/events
https://github.com/ollama/ollama/pull/7336
2,609,963,618
PR_kwDOJ0Z1Ps5_r9kJ
7,336
Update install.sh to support multiple init systems
{ "login": "Sachin-Bhat", "id": 25080916, "node_id": "MDQ6VXNlcjI1MDgwOTE2", "avatar_url": "https://avatars.githubusercontent.com/u/25080916?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sachin-Bhat", "html_url": "https://github.com/Sachin-Bhat", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
1
2024-10-23T22:28:33
2024-11-21T18:58:42
2024-11-21T18:58:42
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7336", "html_url": "https://github.com/ollama/ollama/pull/7336", "diff_url": "https://github.com/ollama/ollama/pull/7336.diff", "patch_url": "https://github.com/ollama/ollama/pull/7336.patch", "merged_at": null }
Hey folks, Made a few additions to the script as follows: - Support for Runit (tested), OpenRC and S6 (need help with testing) The runit support works as expected. I need help with testing openrc and s6. Please let me know if you face any challenges. If you aren't able to start the service with the commands i...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7336/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7336/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6155
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6155/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6155/comments
https://api.github.com/repos/ollama/ollama/issues/6155/events
https://github.com/ollama/ollama/issues/6155
2,446,639,425
I_kwDOJ0Z1Ps6R1MFB
6,155
Support Nested Parameters for Tools
{ "login": "kirel", "id": 9124, "node_id": "MDQ6VXNlcjkxMjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9124?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kirel", "html_url": "https://github.com/kirel", "followers_url": "https://api.github.com/users/kirel/followers", "fol...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 7706482389, "node_id": ...
open
false
null
[]
null
7
2024-08-03T22:42:19
2024-11-06T00:54:07
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I have been trying to get ollama tool use to work with https://github.com/jekalmin/extended_openai_conversation but was getting errors. It looked like, in the response from ollama, `tool_calls.0.function.arguments` is not an object but a string containing the (sometimes correct) object with the par...
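A minimal Go sketch of client-side handling for the mismatch described, accepting `arguments` whether it arrives as an object or as a string containing JSON; the example payload is hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// decodeArguments accepts function arguments delivered either as a JSON
// object or as a string containing JSON — the mismatch the issue describes.
func decodeArguments(raw json.RawMessage) (map[string]any, error) {
	var args map[string]any
	if err := json.Unmarshal(raw, &args); err == nil {
		return args, nil // already an object
	}
	var s string
	if err := json.Unmarshal(raw, &s); err != nil {
		return nil, err
	}
	// Second pass: parse the JSON carried inside the string.
	if err := json.Unmarshal([]byte(s), &args); err != nil {
		return nil, err
	}
	return args, nil
}

func main() {
	asString := json.RawMessage(`"{\"city\":{\"name\":\"Berlin\"}}"`)
	args, err := decodeArguments(asString)
	fmt.Println(args, err)
}
```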
{ "login": "kirel", "id": 9124, "node_id": "MDQ6VXNlcjkxMjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9124?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kirel", "html_url": "https://github.com/kirel", "followers_url": "https://api.github.com/users/kirel/followers", "fol...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6155/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6155/timeline
null
reopened
false
https://api.github.com/repos/ollama/ollama/issues/1802
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1802/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1802/comments
https://api.github.com/repos/ollama/ollama/issues/1802/events
https://github.com/ollama/ollama/pull/1802
2,066,741,855
PR_kwDOJ0Z1Ps5jR8mr
1,802
gpu: read memory info from all cuda devices
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-01-05T05:14:35
2024-01-05T16:25:59
2024-01-05T16:25:58
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1802", "html_url": "https://github.com/ollama/ollama/pull/1802", "diff_url": "https://github.com/ollama/ollama/pull/1802.diff", "patch_url": "https://github.com/ollama/ollama/pull/1802.patch", "merged_at": "2024-01-05T16:25:58" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1802/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1802/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4939
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4939/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4939/comments
https://api.github.com/repos/ollama/ollama/issues/4939/events
https://github.com/ollama/ollama/issues/4939
2,341,899,466
I_kwDOJ0Z1Ps6LlozK
4,939
qwen2 fails on MacOS
{ "login": "MikeyBeez", "id": 14264000, "node_id": "MDQ6VXNlcjE0MjY0MDAw", "avatar_url": "https://avatars.githubusercontent.com/u/14264000?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MikeyBeez", "html_url": "https://github.com/MikeyBeez", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-06-08T22:56:19
2024-06-08T22:57:48
2024-06-08T22:57:47
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ollama run qwen2 Error: llama runner process has terminated: signal: abort trap error:error loading model vocabulary: unknown pre-tokenizer type: 'qwen2' ### OS macOS ### GPU Apple ### CPU Apple ### Ollama version ollama --version ollama version is 0.1.38
{ "login": "MikeyBeez", "id": 14264000, "node_id": "MDQ6VXNlcjE0MjY0MDAw", "avatar_url": "https://avatars.githubusercontent.com/u/14264000?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MikeyBeez", "html_url": "https://github.com/MikeyBeez", "followers_url": "https://api.github.com/users/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4939/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4939/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3655
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3655/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3655/comments
https://api.github.com/repos/ollama/ollama/issues/3655/events
https://github.com/ollama/ollama/pull/3655
2,244,141,005
PR_kwDOJ0Z1Ps5ss54O
3,655
Add simple rag-chatbot to community integrations
{ "login": "datvodinh", "id": 90944231, "node_id": "MDQ6VXNlcjkwOTQ0MjMx", "avatar_url": "https://avatars.githubusercontent.com/u/90944231?v=4", "gravatar_id": "", "url": "https://api.github.com/users/datvodinh", "html_url": "https://github.com/datvodinh", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
0
2024-04-15T16:38:47
2024-04-23T00:16:55
2024-04-23T00:16:55
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3655", "html_url": "https://github.com/ollama/ollama/pull/3655", "diff_url": "https://github.com/ollama/ollama/pull/3655.diff", "patch_url": "https://github.com/ollama/ollama/pull/3655.patch", "merged_at": "2024-04-23T00:16:55" }
- Hi there! I've been using Ollama for some time now, and I've been really pleased with it. Thanks so much for developing and keeping up with this project. It's been a great help for students like me to effortlessly run LLMs locally. - I have created a simple chatbot app with Ollama, with a simple interface so ...
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3655/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7298
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7298/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7298/comments
https://api.github.com/repos/ollama/ollama/issues/7298/events
https://github.com/ollama/ollama/issues/7298
2,602,860,027
I_kwDOJ0Z1Ps6bJH37
7,298
llama3.1 llama3.2 Chat Template Typo
{ "login": "DexterLeung", "id": 34372429, "node_id": "MDQ6VXNlcjM0MzcyNDI5", "avatar_url": "https://avatars.githubusercontent.com/u/34372429?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DexterLeung", "html_url": "https://github.com/DexterLeung", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/...
null
0
2024-10-21T15:10:56
2024-10-21T16:40:09
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? It seems there is a typo in the following sentence of the chat template: "When you receive a tool call response, use the output to format an answer to the **orginal** user question." llama3.1: [948af2743fc7](https://ollama.com/library/llama3.1/blobs/948af2743fc7) llama3.2: [966de95ca8a6](...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7298/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/173
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/173/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/173/comments
https://api.github.com/repos/ollama/ollama/issues/173/events
https://github.com/ollama/ollama/pull/173
1,816,635,034
PR_kwDOJ0Z1Ps5WJEvb
173
change error handler behavior and fix error when a model isn't found
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2023-07-22T05:46:16
2023-07-22T06:02:12
2023-07-22T06:02:12
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/173", "html_url": "https://github.com/ollama/ollama/pull/173", "diff_url": "https://github.com/ollama/ollama/pull/173.diff", "patch_url": "https://github.com/ollama/ollama/pull/173.patch", "merged_at": "2023-07-22T06:02:12" }
null
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/173/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/173/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/227
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/227/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/227/comments
https://api.github.com/repos/ollama/ollama/issues/227/events
https://github.com/ollama/ollama/issues/227
1,824,957,156
I_kwDOJ0Z1Ps5sxqLk
227
maximum upload/download speed not reached
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2023-07-27T18:33:52
2023-10-11T00:17:53
2023-10-11T00:17:44
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
When running `ollama pull`, in some cases the download rate is lower than downloading with `wget` or the browser
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/227/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/227/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/881
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/881/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/881/comments
https://api.github.com/repos/ollama/ollama/issues/881/events
https://github.com/ollama/ollama/pull/881
1,957,572,962
PR_kwDOJ0Z1Ps5djVYw
881
ggufv3
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
1
2023-10-23T16:39:33
2023-10-23T17:50:46
2023-10-23T17:50:45
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/881", "html_url": "https://github.com/ollama/ollama/pull/881", "diff_url": "https://github.com/ollama/ollama/pull/881.diff", "patch_url": "https://github.com/ollama/ollama/pull/881.patch", "merged_at": "2023-10-23T17:50:45" }
GGUF v3 adds support for big endianness, mainly for the s390x architecture. While that's not currently supported by Ollama, the change is simple: loosen the version check to be more forward compatible. Unless specified, GGUF versions other than v1 will be decoded as v2.
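A hedged Go sketch of the loosened check described above, assuming the standard GGUF header layout (4-byte `GGUF` magic followed by a uint32 version) and a little-endian file; the big-endian handling that v3 actually adds is out of scope here, and this is not Ollama's real decoder:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"errors"
	"fmt"
	"io"
)

// readGGUFVersion reads the GGUF magic and version, mapping every
// version other than v1 onto the v2 decode path so newer files such
// as v3 still parse instead of being rejected outright.
func readGGUFVersion(r io.Reader) (uint32, error) {
	var magic [4]byte
	if _, err := io.ReadFull(r, magic[:]); err != nil {
		return 0, err
	}
	if string(magic[:]) != "GGUF" {
		return 0, errors.New("not a GGUF file")
	}
	var version uint32
	if err := binary.Read(r, binary.LittleEndian, &version); err != nil {
		return 0, err
	}
	if version == 1 {
		return 1, nil // v1 keeps its own layout
	}
	return 2, nil // decode v2, v3, ... with the v2 layout
}

func main() {
	header := append([]byte("GGUF"), 3, 0, 0, 0) // fake little-endian v3 header
	v, err := readGGUFVersion(bytes.NewReader(header))
	fmt.Println(v, err) // prints: 2 <nil>
}
```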
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/881/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/881/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6781
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6781/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6781/comments
https://api.github.com/repos/ollama/ollama/issues/6781/events
https://github.com/ollama/ollama/issues/6781
2,523,697,790
I_kwDOJ0Z1Ps6WbJJ-
6,781
ollama minicpm-v refused to deal with images
{ "login": "colin4k", "id": 10140389, "node_id": "MDQ6VXNlcjEwMTQwMzg5", "avatar_url": "https://avatars.githubusercontent.com/u/10140389?v=4", "gravatar_id": "", "url": "https://api.github.com/users/colin4k", "html_url": "https://github.com/colin4k", "followers_url": "https://api.github.com/users/colin4...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
0
2024-09-13T01:40:44
2024-09-13T01:42:05
2024-09-13T01:42:05
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? > ollama run minicpm-v:latest "extract all java code from the image:~/Downloads/1.png" > I'm sorry, but I am not able to view or access images. Can you please provide me with a textual description of what is in the image? Then I can try my best to help you with your question. ### OS macOS #...
{ "login": "colin4k", "id": 10140389, "node_id": "MDQ6VXNlcjEwMTQwMzg5", "avatar_url": "https://avatars.githubusercontent.com/u/10140389?v=4", "gravatar_id": "", "url": "https://api.github.com/users/colin4k", "html_url": "https://github.com/colin4k", "followers_url": "https://api.github.com/users/colin4...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6781/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6781/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6443
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6443/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6443/comments
https://api.github.com/repos/ollama/ollama/issues/6443/events
https://github.com/ollama/ollama/issues/6443
2,475,788,298
I_kwDOJ0Z1Ps6TkYgK
6,443
Error: llama runner process no longer running: -1
{ "login": "ZINE-KHER", "id": 56302539, "node_id": "MDQ6VXNlcjU2MzAyNTM5", "avatar_url": "https://avatars.githubusercontent.com/u/56302539?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZINE-KHER", "html_url": "https://github.com/ZINE-KHER", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
6
2024-08-20T14:20:25
2024-08-22T05:55:05
2024-08-21T13:26:23
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi, I am facing the error below when trying to run Ollama models (both llama3.1:8b-instruct-q4_1 and llama3.1:8b-instruct-fp16): **Error: llama runner process no longer running: -1** After checking the **syslog** file, I found the following issue: **ollama.listener llama_model_load: error lo...
{ "login": "ZINE-KHER", "id": 56302539, "node_id": "MDQ6VXNlcjU2MzAyNTM5", "avatar_url": "https://avatars.githubusercontent.com/u/56302539?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZINE-KHER", "html_url": "https://github.com/ZINE-KHER", "followers_url": "https://api.github.com/users/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6443/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6443/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6017
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6017/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6017/comments
https://api.github.com/repos/ollama/ollama/issues/6017/events
https://github.com/ollama/ollama/pull/6017
2,433,441,323
PR_kwDOJ0Z1Ps52pKyE
6,017
Updated Ollama4j link
{ "login": "amithkoujalgi", "id": 1876165, "node_id": "MDQ6VXNlcjE4NzYxNjU=", "avatar_url": "https://avatars.githubusercontent.com/u/1876165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amithkoujalgi", "html_url": "https://github.com/amithkoujalgi", "followers_url": "https://api.github....
[]
closed
false
null
[]
null
2
2024-07-27T11:39:28
2024-09-03T17:13:28
2024-09-03T17:02:48
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6017", "html_url": "https://github.com/ollama/ollama/pull/6017", "diff_url": "https://github.com/ollama/ollama/pull/6017.diff", "patch_url": "https://github.com/ollama/ollama/pull/6017.patch", "merged_at": null }
Updated the Ollama4j link and added a link to the Ollama4j Web UI tool.
{ "login": "amithkoujalgi", "id": 1876165, "node_id": "MDQ6VXNlcjE4NzYxNjU=", "avatar_url": "https://avatars.githubusercontent.com/u/1876165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amithkoujalgi", "html_url": "https://github.com/amithkoujalgi", "followers_url": "https://api.github....
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6017/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6017/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3082
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3082/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3082/comments
https://api.github.com/repos/ollama/ollama/issues/3082/events
https://github.com/ollama/ollama/issues/3082
2,182,410,158
I_kwDOJ0Z1Ps6CFO-u
3,082
OpenRC init support for install.sh
{ "login": "ElevatedEuphoria", "id": 50528556, "node_id": "MDQ6VXNlcjUwNTI4NTU2", "avatar_url": "https://avatars.githubusercontent.com/u/50528556?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ElevatedEuphoria", "html_url": "https://github.com/ElevatedEuphoria", "followers_url": "https://...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 5755339642, "node_id": ...
open
false
null
[]
null
6
2024-03-12T18:30:36
2024-07-05T08:08:10
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, it would be really nice if the install.sh script had an "auto-detect" feature to identify the currently running init system on a Linux machine during installation and then installed Ollama accordingly. OS: Gentoo Linux Kernel: 6.7.6-gentoo-x86_64 Init System: OpenRC
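As a sketch of what such auto-detection could look like: the real change would live in install.sh as shell, but this Go version shows the underlying idea of inspecting PID 1. The init names and fallback checks below are assumptions for illustration, not a tested detection matrix:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// detectInit guesses the running init system by reading /proc/1/comm,
// which holds the executable name of PID 1, with a file-existence
// fallback to separate OpenRC from plain SysV init.
func detectInit() string {
	b, err := os.ReadFile("/proc/1/comm")
	if err != nil {
		return "unknown" // not Linux, or /proc unavailable
	}
	switch comm := strings.TrimSpace(string(b)); comm {
	case "systemd":
		return "systemd"
	case "runit":
		return "runit"
	case "init":
		// PID 1 named "init" could be OpenRC or SysV; OpenRC ships
		// an openrc binary we can probe for.
		if _, err := os.Stat("/sbin/openrc"); err == nil {
			return "openrc"
		}
		return "sysvinit"
	default:
		return comm
	}
}

func main() {
	fmt.Println("detected init system:", detectInit())
}
```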
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3082/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3082/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/146
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/146/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/146/comments
https://api.github.com/repos/ollama/ollama/issues/146/events
https://github.com/ollama/ollama/pull/146
1,814,620,295
PR_kwDOJ0Z1Ps5WCTFr
146
windows: fix model pulling
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-07-20T18:54:29
2023-07-20T20:41:59
2023-07-20T20:41:54
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/146", "html_url": "https://github.com/ollama/ollama/pull/146", "diff_url": "https://github.com/ollama/ollama/pull/146.diff", "patch_url": "https://github.com/ollama/ollama/pull/146.patch", "merged_at": "2023-07-20T20:41:54" }
There are two issues preventing pull from working as expected on Windows. 1. Windows dislikes `os.Rename` when the file is still open. One approach is to close the file before calling rename; the approach taken in this PR is to call `os.Symlink` instead. 2. Windows errors when file paths contain `:` so replace the `...
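A minimal Go sketch of the two fixes as described: link the completed download into place rather than renaming a possibly open file, and sanitize `:` out of the target name. The `-` replacement character is an assumption, since the PR body is truncated before naming one, and note that `os.Symlink` on Windows may require elevated privileges or developer mode:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// finalizeBlob links a finished download into place instead of renaming
// a file that may still be open, and strips ":" from the destination
// name since Windows rejects it in paths.
func finalizeBlob(tmpPath, name string) (string, error) {
	safe := strings.ReplaceAll(name, ":", "-") // "-" is an assumed replacement
	if err := os.Symlink(tmpPath, safe); err != nil {
		return "", err
	}
	return safe, nil
}

func main() {
	tmp, err := os.CreateTemp("", "blob-*")
	if err != nil {
		panic(err)
	}
	defer os.Remove(tmp.Name())
	defer tmp.Close()

	dest, err := finalizeBlob(tmp.Name(), "llama2:7b")
	fmt.Println(dest, err) // e.g. "llama2-7b <nil>"
	if err == nil {
		os.Remove(dest)
	}
}
```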
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/146/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/146/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3552
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3552/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3552/comments
https://api.github.com/repos/ollama/ollama/issues/3552/events
https://github.com/ollama/ollama/issues/3552
2,232,734,324
I_kwDOJ0Z1Ps6FFNJ0
3,552
/api/generate gets hung that can be steadily reproduced
{ "login": "peter-gz", "id": 40975524, "node_id": "MDQ6VXNlcjQwOTc1NTI0", "avatar_url": "https://avatars.githubusercontent.com/u/40975524?v=4", "gravatar_id": "", "url": "https://api.github.com/users/peter-gz", "html_url": "https://github.com/peter-gz", "followers_url": "https://api.github.com/users/pet...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q...
closed
false
null
[]
null
2
2024-04-09T06:56:05
2024-10-30T20:22:55
2024-10-30T20:22:55
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? It's a real headache that Ollama gets hung from time to time when running `codellama:13b` for code completion. I've noticed a few issues reporting that Ollama gets hung, as mentioned in #1863, #1901, #2225, etc., but they haven't been fixed. Now I have a test case that can steadily reproduce the issue, whe...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3552/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3552/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1531
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1531/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1531/comments
https://api.github.com/repos/ollama/ollama/issues/1531/events
https://github.com/ollama/ollama/issues/1531
2,042,633,609
I_kwDOJ0Z1Ps55wB2J
1,531
ollama run llava --verbose empty
{ "login": "ivanfioravanti", "id": 1069210, "node_id": "MDQ6VXNlcjEwNjkyMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/1069210?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ivanfioravanti", "html_url": "https://github.com/ivanfioravanti", "followers_url": "https://api.gith...
[]
closed
false
null
[]
null
2
2023-12-14T23:03:33
2023-12-17T07:12:53
2023-12-17T07:12:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Verbose mode does not always return a response or correct results. See video https://github.com/jmorganca/ollama/assets/1069210/f28d74d3-86cd-4320-88ca-18115c04a099
{ "login": "ivanfioravanti", "id": 1069210, "node_id": "MDQ6VXNlcjEwNjkyMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/1069210?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ivanfioravanti", "html_url": "https://github.com/ivanfioravanti", "followers_url": "https://api.gith...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1531/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1531/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/5989
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5989/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5989/comments
https://api.github.com/repos/ollama/ollama/issues/5989/events
https://github.com/ollama/ollama/issues/5989
2,432,514,897
I_kwDOJ0Z1Ps6Q_TtR
5,989
Tools should support streaming=true
{ "login": "drazdra", "id": 133811709, "node_id": "U_kgDOB_nN_Q", "avatar_url": "https://avatars.githubusercontent.com/u/133811709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/drazdra", "html_url": "https://github.com/drazdra", "followers_url": "https://api.github.com/users/drazdra/foll...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
5
2024-07-26T15:51:44
2024-09-04T04:23:18
2024-09-04T04:23:18
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
When stream=true, Ollama doesn't return the tool request in the final "done" message; instead it returns it part by part as if it were a regular reply. As a result, we have no way to determine that it was a tool request, because Ollama doesn't change the role to "tool" and it's just "assistant". Due to that we can not h...
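One possible client-side workaround, sketched in Go: buffer the streamed content and try to parse the finished text as a tool-call object. The `{"name": ..., "parameters": ...}` shape is an assumption for illustration, not a documented Ollama contract:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// toolCall is an assumed shape for a streamed tool request.
type toolCall struct {
	Name       string         `json:"name"`
	Parameters map[string]any `json:"parameters"`
}

// classify joins the streamed chunks and tries to parse the result as
// a tool call; anything that doesn't parse is a regular reply.
func classify(chunks []string) (toolCall, bool) {
	full := strings.TrimSpace(strings.Join(chunks, ""))
	var tc toolCall
	if err := json.Unmarshal([]byte(full), &tc); err == nil && tc.Name != "" {
		return tc, true
	}
	return toolCall{}, false
}

func main() {
	streamed := []string{`{"name":"get_we`, `ather","parameters":{"city":"Oslo"}}`}
	if tc, ok := classify(streamed); ok {
		fmt.Println("tool call:", tc.Name, tc.Parameters)
	} else {
		fmt.Println("regular assistant reply")
	}
}
```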
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5989/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5989/timeline
null
completed
false